# Coevolutionary algorithms notes

Anthony • Dec 18, 2021

I added some of my notes about coevolutionary algorithms, mainly to experiment with this web site but also to state an idea:

A coevolutionary algorithm application should specify:

- the interactive domain, and
- the solution concept of interest.

I'm glossing over evolutionary dynamics, but maybe someday I'll put up some notes about those; there is a lot to say. After finishing graduate school I focused on questions that are applicable beyond coevolutionary algorithms, and the resulting papers are algorithm agnostic. When working on them we often talked about the class of co-optimization algorithms, a term Travis Service coined in his Master's thesis for a superclass of coevolutionary algorithms, wherein algorithms could have dynamics that follow gradients, move randomly, or do something else without the mutation/recombination/evaluation/selection cycle that evolutionary dynamics have. The pitfalls and traps apparent in coevolutionary algorithms can also afflict other co-optimization algorithms, because these issues arise from misalignment between the dynamics and the solution concept.

In any event, an application of coevolutionary algorithms that does not explicitly state the interactive domain and the solution concept of interest is dubious from the start. It would be like saying you are going to apply optimization without telling us what function you're optimizing, or whether you're minimizing or maximizing it. One does not have to use the formal language I personally prefer, but these two components should be stated clearly and precisely. That's the idea.
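To make the two required pieces concrete, here is a toy sketch in Python. Everything in it is invented for illustration: the numeric "domain," the `interact` function, and the maximin solution concept are assumptions, not anything from a specific coevolutionary system.

```python
# Toy sketch of an application spec that names both required pieces.

# Interactive domain (assumption): candidates and tests are integers,
# and an interaction records whether the candidate "beats" the test.
def interact(candidate, test):
    return candidate > test

# Solution concept (assumption): maximin, i.e. a candidate is judged by
# its worst-case outcome over a set of tests.
def quality(candidate, tests):
    return min(interact(candidate, t) for t in tests)

tests = [1, 5, 9]
best = max(range(11), key=lambda c: quality(c, tests))
print(best)  # only c = 10 beats every test here, so best == 10
```

With both pieces written down, any search dynamics (evolutionary or otherwise) can be judged against the same stated goal.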

For several years I've been interested in the idea of state as an emergent phenomenon, and I may start uploading notes about that next. By "state" I mean some formal version of what you mean when you say something like "this machine is in an idle state". In computer science you tend to see formal constructs meant to capture this idea such as finite state machines defined with language like "Let $S$ be the set of states, …". In other words, the states the machine can be in are already given and collected into a set. While that is a good thing to do when you're designing a machine, there is more to the story.
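The design-first view above can be made concrete with a tiny sketch. The machine, its state names, and its inputs are all hypothetical, chosen only to mirror the "Let $S$ be the set of states" style where the states are given up front:

```python
# A designed machine: the state set is declared before any behavior happens.
states = {"idle", "running"}
transitions = {                       # (state, input) -> next state
    ("idle", "start"): "running",
    ("running", "stop"): "idle",
}

def run(state, inputs):
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state

print(run("idle", ["start", "stop", "start"]))  # -> running
```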

It turns out that in some cases you can deduce the (observable) states of a system given only traces of behavior through it. To put that in plainer language: just by tinkering with the system, you can learn what its internal states are, even if you have no information about its internal design when you start. I think this is pretty neat. It allows you to draw connections between weird artifacts like finite state machines and more familiar notions of space, geometry, and dynamical systems. It shifts your point of view toward learning and discovery, as opposed to design. It shines a different light on computer science results like the Myhill-Nerode theorem or Hopcroft's algorithm. Anyway!
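The deduction-from-traces idea can be sketched in a few lines, in the spirit of the Nerode congruence: call two trace prefixes equivalent when every follow-up experiment produces the same outcome after them. The black-box language here ("count of `a` divisible by 3") is an assumption picked so the hidden machine has exactly three states; we only ever query it as an oracle.

```python
from itertools import product

# Hypothetical black box: accepts strings over {a, b} whose count of 'a'
# is divisible by 3. We only query it; we never look inside.
def accepts(s):
    return s.count("a") % 3 == 0

alphabet = "ab"
# Traces: prefixes we have driven the system through (lengths 0..3).
prefixes = ["".join(p) for n in range(4) for p in product(alphabet, repeat=n)]
# Experiments: suffixes we append to probe future behavior (lengths 0..2).
suffixes = ["".join(p) for n in range(3) for p in product(alphabet, repeat=n)]

# Two prefixes land in the same observable state exactly when every
# experiment gives the same accept/reject outcome after them.
def signature(prefix):
    return tuple(accepts(prefix + s) for s in suffixes)

states = {}
for p in prefixes:
    states.setdefault(signature(p), []).append(p)

print(len(states))  # three observable states emerge, one per a-count mod 3
```

No state set was given in advance; the three classes emerge purely from grouping traces by future behavior, which is exactly the move behind the Myhill-Nerode theorem.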