If anybody happens to be listening I’m sorry if I made your feed reader go nuts. With luck this will be the last major restructuring like that.

A coevolutionary algorithm application should specify:

- An [[interactive domain::interactive-domains]]
- A [[solution concept::solution-concepts]]
- An evolutionary dynamics over the interactive domain

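As a concrete (and purely illustrative) sketch, the three components above might be bundled like this in Python. Every name here, including `CoevSpec` and the toy numbers game, is my own invention and not from any particular library:

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class CoevSpec:
    # interactive domain: how a candidate and a test interact,
    # yielding an outcome
    interact: Callable[[object, object], float]
    # solution concept: which collections of candidates count as solutions
    is_solution: Callable[[Iterable], bool]
    # evolutionary dynamics: one generation step over the two populations
    step: Callable[[list, list], tuple]

# Toy numbers game: candidates and tests are integers,
# and a candidate "beats" a test if it is larger.
spec = CoevSpec(
    interact=lambda cand, test: float(cand > test),
    is_solution=lambda cands: all(c >= 0 for c in cands),
    step=lambda cands, tests: (cands, tests),  # placeholder identity dynamics
)

print(spec.interact(3, 2))  # 1.0
```

The point of the sketch is only that all three slots must be filled before the "algorithm" is a well-posed object; the placeholder dynamics above would of course be replaced by a real mutation/evaluation/selection cycle.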
I'm glossing evolutionary dynamics here, but maybe someday I'll put up some notes about them; there is a lot to say about [[monotonic solution concepts]] and [[coevolutionary free lunches]]. Since finishing graduate school I have focused more on [[interactive domains]] and [[solution concepts]], because these are applicable beyond [[coevolutionary algorithms]]. The papers on [[coevolutionary free lunches]] are algorithm agnostic. When working on them we often talked about the class of *co-optimization algorithms*, a term Travis Service coined in his Master's thesis as a superclass of [[coevolutionary algorithms]], wherein algorithms may follow gradients, move randomly, or do something else entirely, without the mutation/recombination/evaluation/selection cycle that evolutionary dynamics have. The pitfalls and traps apparent in [[coevolutionary algorithms]] can also afflict other co-optimization algorithms, because these issues arise from a misalignment between the dynamics and the solution concept.

In any event, an application of [[coevolutionary algorithms]] that does not explicitly state the [[interactive domain::interactive-domains]] and the [[solution concept::solution-concepts]] of interest is dubious from the start. It would be like announcing that you are going to apply optimization without telling us what function you're optimizing or whether you're minimizing or maximizing it. One does not have to use the formal language I personally prefer, of course, but these two components should be stated clearly and precisely. That's the idea.

For several years I've been interested in the idea of *state* as an emergent phenomenon, and I may start uploading notes about that next. By "state" I mean a formal version of what is meant by a sentence like "this machine is in an idle state". In computer science you tend to see formal constructs meant to capture this idea, such as finite state machines, defined with language like "Let $S$ be the set of states, …". In other words, the states the machine can be in are already given and collected into a set. That is a good thing to do when you're designing a machine, but there is more to the story.

It turns out that in some cases you can deduce the (observable) states of a system given only traces of behavior through it. In plainer language: just by tinkering with a system, you can learn what its internal states are, even if you start with no information about its internal design. I think this is pretty neat. It lets you draw connections between weird artifacts like finite state machines and more familiar notions of space, geometry, and dynamical systems. It shifts the point of view toward learning and discovery, as opposed to design. And it shines a different light on computer science results like the Myhill-Nerode theorem or Hopcroft's algorithm. Anyway!
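To make that claim concrete, here is a minimal sketch, with every name invented for illustration and a toy parity machine standing in for the black box. A Nerode-style experiment groups input prefixes by how the system responds to suffixes appended after them; each distinct group corresponds to one observable state:

```python
from itertools import product

# A black-box system: we can only feed it input strings and observe one
# bit of behavior (accept/reject).  Internally it checks for an even
# number of 'a's, a two-state machine, but nothing below looks inside.
def behaves(trace):
    return trace.count("a") % 2 == 0

ALPHABET = "ab"

def strings_up_to(n):
    """All strings over ALPHABET of length at most n, shortest first."""
    yield ""
    for k in range(1, n + 1):
        for tup in product(ALPHABET, repeat=k):
            yield "".join(tup)

def deduce_states(max_prefix=3, max_suffix=3):
    """Two prefixes reach the same observable state iff no suffix
    distinguishes them; approximate with suffixes up to a fixed length."""
    suffixes = list(strings_up_to(max_suffix))
    rows = {}
    for prefix in strings_up_to(max_prefix):
        row = tuple(behaves(prefix + s) for s in suffixes)
        rows.setdefault(row, prefix)  # keep one representative per state
    return sorted(rows.values())

print(deduce_states())  # -> ['', 'a']: two observable states
```

This is essentially the observation-table idea behind Angluin's L* algorithm, cut down to a fixed exploration depth; with enough distinguishing suffixes, the distinct rows are exactly the Nerode equivalence classes, i.e. the states of the minimal machine.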
