Joel Lehman, Sebastian Risi, and Kenneth O. Stanley (2012)
On the Benefits of Divergent Search for Evolved Representations
In: Proceedings of the EvoNet 2012 Workshop at the Thirteenth International Conference on Artificial Life (ALIFE XIII). (4 pages).


Evolved representations in evolutionary computation are often fragile, which can impede representation-dependent mechanisms such as self-adaptation. In contrast, evolved representations in nature are robust, evolvable, and creatively exploit available representational features. This paper provides evidence that this disparity may partially result from a key difference between natural evolution and most evolutionary algorithms: Natural evolution has no overarching objective. That is, nature tends to continually accumulate novel forms without any final goal, while most evolutionary algorithms eventually converge to a point in the search space that locally maximizes the fitness function. The problem is that individuals that maximize fitness do not need good representations, because a representation's future potential is not reflected by its current fitness. In contrast, search methods without explicit objectives, which are consequently divergent, may implicitly reward lineages that continually diverge, thereby indirectly selecting for evolvable representations that are better able to diverge further. This paper reviews a range of past results supporting this hypothesis, drawn from a method called novelty search, which explicitly rewards novelty, i.e., behaviors that diverge from previously encountered behaviors. In many experiments, novelty search demonstrates significant representational advantages over traditional fitness-based search, such as evolving more compact solutions, uncovering more evolvable representations, and more fully exploiting representational features. The conclusion is that divergent evolutionary algorithms like novelty search may exert selection pressure towards higher-quality representations than traditional convergent approaches to search.
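
The novelty reward described in the abstract is commonly computed as an individual's average distance to its nearest neighbors in behavior space, measured against the current population and an archive of past behaviors. The sketch below illustrates that idea; the function and parameter names (`novelty_score`, `k`) are illustrative assumptions, not taken from the paper, and it is a minimal sketch rather than the authors' implementation.

```python
import math


def novelty_score(behavior, others, k=3):
    """Average Euclidean distance from `behavior` to its k nearest
    neighbors among `others` (population plus archive of past behaviors).
    Higher scores mean the behavior diverges more from what has been seen.
    Illustrative sketch; names and k are assumptions, not from the paper."""
    dists = sorted(
        math.dist(behavior, other) for other in others
    )
    # Average over the k nearest (or all, if fewer than k are available).
    nearest = dists[:k]
    return sum(nearest) / len(nearest)


# Usage: a behavior far from the cluster of previously seen behaviors
# receives a higher novelty score than one inside the cluster.
seen = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]
print(novelty_score((5.0, 5.0), seen))   # large: far from all neighbors
print(novelty_score((0.05, 0.05), seen)) # small: inside the cluster
```

Under this reward, selection favors lineages whose representations keep producing behaviors distant from the archive, which is the indirect pressure toward evolvability that the paper examines.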