The Novelty Search Users Page


"To achieve your highest goals, you must be willing to abandon them."


2013 Keynote Now in High Quality on YouTube: Ken Stanley gives a keynote at the 16th Portuguese Conference on Artificial Intelligence: "When Algorithms Inform Real Life: Novelty Search and the Myth of the Objective"



2012 YouTube Video: Ken Stanley gives Joint ACM and NICTA-sponsored 2012 talk at RMIT on "Discovery Without Objectives"

2010 Video: SPLASH 2010 Keynote on Searching Without Objectives

YouTube Video: Bird Flying Behavior Evolved with Novelty Search by Ander Taylor.

This page provides information on the use and implementation of novelty search, an evolutionary search method that takes the radical step of ignoring the objective of search and instead rewarding only behavioral novelty. This visual demonstration (requires modern browser, IE users may need to install a plugin) contrasts a search for novelty with a search for the objective.

Please direct inquiries to Ken Stanley, kstanley@eecs.ucf.edu (Website) or Joel Lehman, jlehman@eecs.ucf.edu (Website)


Novelty search: Evolution without objectives

Novelty search is a unique approach to search inspired by natural evolution's open-ended propensity to perpetually discover novelty (an introductory journal article is here). Rather than converging on a single optimal solution, nature discovers a vast variety of different ways to meet the challenges of life. As an abstraction of novelty discovery in nature, novelty search directly rewards behaving differently instead of rewarding progress to some ultimate goal, which is the traditional approach to search. That is, in a search for novelty there is no direct pressure to be better.

To be more precise, evolutionary search is usually driven by measuring how close the current candidate solution is to the objective. That measure then determines whether the candidate is rewarded (i.e. whether it will have offspring) or discarded. In contrast, novelty search never measures progress at all. Rather, it simply rewards those individuals that are different.

Instead of aiming for the objective, novelty search looks for novelty; surprisingly, sometimes not looking for the goal in this way leads to finding the goal more quickly and consistently. While it may sound strange, in some problems ignoring the goal outperforms looking for it. The reason for this phenomenon is that sometimes the intermediate steps to the goal do not resemble the goal itself. John Stuart Mill termed this source of confusion the "like-causes-like" fallacy. In such situations, rewarding resemblance to the goal does not respect the intermediate steps that lead to the goal, often causing search to fail.

More than an approach to solving problems

Although it is effective for solving some deceptive problems, novelty search is not just another approach to solving problems. A more general inspiration for novelty search is to create a better abstraction of how natural evolution discovers complexity. An ambitious goal of such research is to find an algorithm that can create an "explosion" of interesting complexity reminiscent of that found in natural evolution.

While we often assume that complexity growth in natural evolution is mostly a consequence of selection pressure from adaptive competition (i.e. the pressure for an organism to be better than its peers), biologists have shown that sometimes selection pressure can in fact inhibit innovation in evolution. Perhaps complexity in nature is not the result of optimizing fitness, but instead a byproduct of evolution's drive to discover novel ways of life.

A recent extension of novelty search called minimal criteria novelty search (MCNS) is a potential further step in this direction: Natural evolution has discovered many ways to meet the constraints of survival and reproduction. MCNS is a general formulation of such constrained novelty discovery that abstracts natural evolution as a search for many ways to do the same thing (and this same thing need not always be survival and reproduction like it is in nature).

Another application of novelty search and MCNS is to characterize a search space by sampling its most interesting points. As opposed to "solving" a problem, this approach gives a broad view of what is possible within a particular domain.

Applying novelty search to problems

If you do want to apply novelty search to a problem, here are some guidelines for deciding whether it is suited for your domain:

Novelty search is best suited for deceptive problems in which there is an intuitive way of characterizing the behavior of an individual. For example, mazes are often deceptive, and the behavior of a maze-navigator can easily be approximated by the location within the maze at which it ends.

A deceptive problem is one in which it is hard to craft an effective fitness function. Sometimes the stepping stones that lead to a solution do not resemble the solution itself. For example, imagine your fingers are stuck within a Chinese finger trap. The precursor to freeing your fingers is to push them together, which intuitively seems to actually make the entrapment worse (your fingers move deeper into the trap when you want them to be free from it).

This trick is why we call it a trap: The Chinese finger trap exploits our assumptions so that the steps that lead to the solution seem unpromising. Our assumptions, like the fitness function in evolutionary computation, have the potential to deceive us. Novelty search is well-suited for deceptive problems because it does not actively search for the goal and thus cannot be deceived in the same manner.

If a problem is not deceptive, an objective-based method that searches for the goal is likely more efficient at solving the problem because it can directly follow the gradient of improvement to the objective. However, the more ambitious the goal, the more likely it is to be deceptive. That is, the greater the separation between the initial starting point and the intended destination, the greater potential there is for deception. The implication is that the most difficult problems have deceptive structures for which objective-based search may be ineffective.

How to add novelty search to an existing algorithm

Novelty search can easily be implemented on top of most evolutionary algorithms. Three changes are important. The first is to add a measure of an individual's behavior to your domain. This measure is generally domain-dependent and should aim to capture a space of interesting behaviors. The second change is to replace the fitness function, which rewards progress towards objective(s), with a novelty metric, which rewards behaving differently from prior individuals in the search. The third change is to add an archive that remembers individuals that were highly novel when they were first discovered.

Instead of returning a fitness value, an evaluation of an individual in the domain should return a characterization of that individual's behavior. Typically this characterization is a vector of real numbers reflecting important aspects of what is interesting in the domain. One can then define a behavioral distance metric between different individuals, which is traditionally the Euclidean distance between two individuals' behavioral characterization vectors.
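For instance, if the behavior characterization of a maze navigator is its final (x, y) position, the behavioral distance between two navigators is simply the Euclidean distance between those positions. A minimal sketch in Python (the function name and example vectors are illustrative, not taken from any particular implementation):

```python
import math

def behavior_distance(a, b):
    """Euclidean distance between two behavior characterization vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# e.g. the final (x, y) positions of two maze navigators
print(behavior_distance((3.0, 4.0), (0.0, 0.0)))  # prints 5.0
```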

Although any measure of individual novelty can potentially work, the average distance to the k-nearest neighbors of an individual in behavior space has proven effective as the novelty metric in several publications. This metric measures how much a particular area of behavior space has been explored, thereby rewarding individuals in relatively unexplored areas.

The first step in calculating the novelty of a new individual is to measure its behavioral distance to all other individuals in the population and to all individuals in the archive, reflecting how different it is from current behaviors (i.e. in the current population) as well as behaviors that were novel in the past (i.e. in the archive). The k-nearest neighbors can be derived from this distance information (i.e. the k individuals that have the smallest distance to the new individual in the behavior space), and novelty is assigned as the average distance to the k-nearest neighbors. If a new individual's novelty is high, it is typically added to the archive.
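The procedure above can be sketched in a few lines of Python. Here k, the archive threshold, and all function names are illustrative assumptions; real implementations typically tune k and the threshold per domain, and some vary the archiving policy:

```python
import math

def behavior_distance(a, b):
    """Euclidean distance between two behavior characterization vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def novelty(new, population, archive, k=15):
    """Average behavioral distance from `new` to its k-nearest neighbors
    among the current population and the archive of past novel behaviors."""
    dists = sorted(behavior_distance(new, other)
                   for other in population + archive
                   if other is not new)
    if not dists:
        return float("inf")  # nothing to compare against yet
    return sum(dists[:k]) / min(k, len(dists))

def evaluate_and_archive(new, population, archive, threshold, k=15):
    """Score a new individual and add it to the archive if sufficiently novel."""
    score = novelty(new, population, archive, k)
    if score > threshold:
        archive.append(new)
    return score
```

With k=2, a new behavior at (10, 0) measured against a population at (0, 0), (1, 0), and (2, 0) has novelty (8 + 9) / 2 = 8.5, the average distance to its two nearest neighbors.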

Available implementations of novelty search

Novelty search has been applied to evolutionary robotics, most often in the context of neuroevolution (i.e. the evolution of artificial neural networks, or ANNs). Neuroevolution is a natural fit because novelty search rewards novel behaviors, which requires behaviors to be evolved, and an evolved ANN acts as a "brain" for a virtual robot that encodes how it will behave. While the currently available implementations are built upon neuroevolution, novelty search is general enough to apply to other areas of EC with similar properties, such as genetic programming.

Novelty Search Publications


Publications from EPlex

Novelty search was invented at the Evolutionary Complexity Research Group (EPlex) at the University of Central Florida. The following journal paper is a comprehensive introduction to novelty search: Abandoning Objectives: Evolution Through the Search for Novelty Alone

All novelty search related publications from EPlex are listed below:

Effective Diversity Maintenance in Deceptive Domains
Joel Lehman, Kenneth O. Stanley and Risto Miikkulainen (2013)
To Appear In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2013).

Beyond Open-endedness: Quantifying Impressiveness
Joel Lehman and Kenneth O. Stanley (2012)
In: Proceedings of the Thirteenth International Conference on Artificial Life (ALIFE XIII).

Novelty Search and the Problem with Objectives
Joel Lehman and Kenneth O. Stanley (2011)
In: Genetic Programming Theory and Practice IX (GPTP 2011).

Evolving a Diversity of Virtual Creatures Through Novelty Search and Local Competition
Joel Lehman and Kenneth O. Stanley (2011)
To appear in: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2011).

Improving Evolvability through Novelty Search and Self-Adaptation
Joel Lehman and Kenneth O. Stanley (2011)
To appear in: Proceedings of the 2011 Congress on Evolutionary Computation (CEC-2011)

On the Deleterious Effects of A Priori Objectives on Evolution and Representation
Brian G. Woolley and Kenneth O. Stanley (2011)
To appear in: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2011).

Abandoning Objectives: Evolution Through the Search for Novelty Alone
Joel Lehman and Kenneth O. Stanley (2011)
In: Evolutionary Computation journal.

Evolving Plastic Neural Networks with Novelty Search
Sebastian Risi and Kenneth O. Stanley (2010)
In: Adaptive Behavior journal.

Revising the Evolutionary Computation Abstraction: Minimal Criteria Novelty Search
Joel Lehman and Kenneth O. Stanley (2010)
In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2010).

Efficiently Evolving Programs through the Search for Novelty
Joel Lehman and Kenneth O. Stanley (2010)
In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2010).

How Novelty Search Escapes the Deceptive Trap of Learning to Learn
Sebastian Risi, Sandy Vanderbleek, Charles Hughes and Kenneth Stanley (2009)
In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2009). New York, NY: ACM, 2009. 8 pages.
Winner of a Best Paper Award at GECCO-2009

Exploiting Open-Endedness to Solve Problems Through the Search for Novelty
Joel Lehman and Kenneth O. Stanley (2008)
In: Proceedings of the Eleventh International Conference on Artificial Life (ALIFE XI).

Publications outside EPlex

Novelty search is also being explored in research groups outside of EPlex. All novelty search related publications from researchers outside of EPlex are listed below:

Improving Grammatical Evolution in Santa Fe Trail using Novelty Search
Paulo Urbano and Loukas Georgiou (2013).
To Appear In: Proceedings of the European Conference on Artificial Life.

Searching for Novel Clustering Programs
Enrique Naredo and Leonardo Trujillo (2013).
To Appear In: Proceedings of the Genetic and Evolutionary Computation Conference.

A Behavior-based Analysis of Modal Problems
Leonardo Trujillo, Lee Spector, and Yuliana Martinez (2013).
To Appear In: Proceedings of the Genetic and Evolutionary Computation Conference.

Preliminary Study of Bloat in Genetic Programming with Behavior-based Search
Leonardo Trujillo, Enrique Naredo, and Yuliana Martinez (2013).
To Appear In: EVOLVE.

Searching for Novel Regression Functions
Yuliana Martinez, Enrique Naredo, Leonardo Trujillo and Edgar Galvan-Lopez (2013).
To Appear In: Proceedings of the Congress on Evolutionary Computation.

Searching for Novel Classifiers
Enrique Naredo, Leonardo Trujillo and Yuliana Martinez (2013).
To Appear In: Proceedings of the European Conference on Genetic Programming.

Evolution of Swarm Robotic Systems with Novelty Search
Jorge Gomes, Paulo Urbano, and Anders Lyhne Christensen (2013).
In: Swarm Intelligence journal.

Generic Behavior Similarity Measures for Evolutionary Swarm Robotics
Jorge Gomes and Anders Christensen (2013).
To Appear In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2013).

Open Ended Evolution of 3D Multicellular Development Controlled by Gene Regulatory Networks
Michal Joachimczak and Borys Wróbel (2012).
In: Proceedings of the Artificial Life Conference.

Encouraging Behavioral Diversity in Evolutionary Robotics: An Empirical Study
Jean-Baptiste Mouret and Stéphane Doncieux (2012).
In: Evolutionary Computation journal.

Solving deceptive tasks in robot body-brain co-evolution by searching for behavioral novelty
Peter Krčah (2011).
In: Advances in Robotics and Virtual Reality.

Why and How to Measure Exploration in Behavioral Space
Charles Ollion and Stéphane Doncieux (2011).
In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2011).

Critical Factors in the Performance of Novelty Search
Steijn Kistemaker and Shimon Whiteson (2011).
In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2011).

Novelty Restarts for Evolution Strategies
Giuseppe Cuccu and Faustino Gomez (2011).
In: Proceedings of the 2011 Congress on Evolutionary Computation (CEC 2011).

When Novelty is Not Enough
Giuseppe Cuccu and Faustino Gomez (2011).
In: Proceedings of Evo* 2011.

Combination of Novelty Search and Fitness-Based Search Applied to Robot Body-Brain Co-Evolution
Peter Krčah and Daniel Toropila (2010).
In: Proceedings of the 13th Czech-Japan Seminar on Data Analysis and Decision Making in Service Science.

Automatically Discovering Properties that Specify the Latent Behavior of UML Models
Heather J. Goldsby and Betty H. C. Cheng (2010).
In: Proceedings of MODELS 2010.

Novelty-based Fitness: An Evaluation under the Santa Fe Trail
John Doucette and Malcolm I. Heywood (2010).
In: Proceedings of the European Conference on Genetic Programming (EuroGP 2010).

Novelty-based Multiobjectivization
Jean-Baptiste Mouret (2009).
In: Exploring New Horizons in Evolutionary Design of Robots (IROS Workshop)

Frequently Asked Questions


Do you really think that novelty search is better than objective-based search?

No, and the question of which is "better" ultimately misses the deeper point: While objective-based search does sometimes work, search is about more than just objectives. Search can lead to serendipity and some people search without a goal, driven by a gut instinct that something interesting is around the corner. For humans, these kinds of searches may even be more common than objective-driven search. Much of life is the pursuit of whatever is interesting, regardless of where it may lead. Yet evolutionary computation has focused for decades almost exclusively on objective-driven search. We are missing a significant opportunity to explore the benefits of search driven by other means. Novelty search is a kind of treasure hunter that collects hot spots in the search space. Such treasure hunting algorithms are a different class than objective-driven algorithms and will ultimately reveal their own unique benefits and niches.

On the other hand, that an algorithm ignorant of the objective often defeats an algorithm that knows the objective is a major red flag for the traditional assumption that the objective is the best impetus for search. As the growing set of publications in this area shows, novelty search has produced superior behaviors to objective-driven search with the same parameters in the medium maze, hard maze, T-Maze and Double T-Maze (adaptive domains), Bee Domain (another adaptive domain), biped walking, and artificial ant GP domains. Several of these domains include a high-dimensional behavior space. These results do not mean that novelty search is better, but they do mean that objective-based search is far more susceptible to deception, and thereby more brittle, than we may have thought. This point alone is an important contribution. Knowing where you want to go is often not enough to get there.

Won't novelty search simply get lost in a vast search space by trying to find everything there?

You might think so, but in practice that is not really what search spaces are like. Our intuitions about search are built on assumptions from the predominant objective-based worldview in which we were trained. Yet search spaces are not actually as full of arbitrary behaviors as we might guess. Rather, finding truly novel behavior requires accumulating information. For example, in a maze with walls, at first novel behaviors will simply crash into nearby walls, yet eventually the search must find a behavior that avoids walls to do something new. Then, once it exploits this new ability within the initial hallways, it will have to learn to go through doors to do something new. In effect, it is forced to accumulate information about the world (e.g. walls and doors) to produce novel behaviors.

Thus there is a meaningful order to the behaviors that are discovered in novelty search, which is a kind of information accumulator. Of course, eventually arbitrary behaviors such as running around in random patterns become possible, but those usually require a great deal of information to encode because they are close to random. That is, the behaviors that are most complex to encode tend to be the most arbitrary, so they will be encountered well after the more interesting behaviors are discovered. Furthermore, good behaviors are stepping stones to even more good behaviors. Finding a behavior is not just the end of a story but the beginning of a new one. By learning how to walk, suddenly running becomes a possibility. As it accumulates information, novelty search finds stepping stones that lead to new behaviors. Thus it is not simply executing a "random" search through the behavior space.

If you find that hard to believe, keep in mind that the results in which novelty search performs well would be impossible if novelty search were simply a naive exhaustive search of behavior space. The gradient of novelty is actually an information-rich lattice of treasures that lead to other treasures, just waiting to be found. The problem is only that we haven't been looking.


Updates: 3/21/14: Changed EPIA-2013 to YouTube version. 10/29/13: Added link to EPIA-2013 keynote video. 7/31/13: Added links to new outside novelty search publications. 6/4/13: Added links to new outside novelty search publications and added a new YouTube video. 4/16/13: Added links to new UCF and outside novelty search publications. 3/12/12: Added link to "Combination of Novelty Search and Fitness-Based Search Applied to Robot Body-Brain Co-Evolution" by Peter Krčah and Daniel Toropila. 8/6/11: Added link to "Solving deceptive tasks in robot body-brain co-evolution by searching for behavioral novelty" by Peter Krčah. 5/31/11: Added link to "Novelty Search and the Problem with Objectives" by Joel Lehman and Kenneth O. Stanley. 4/29/11: Added link to "On the Deleterious Effects of A Priori Objectives on Evolution and Representation" by Brian Woolley and Kenneth O. Stanley. 4/26/11: Added link to "Why and How to Measure Exploration in Behavioral Space" by Charles Ollion and Stéphane Doncieux. 4/12/11: Added links to "Critical Factors in the Performance of Novelty Search" by Steijn Kistemaker and Shimon Whiteson, "Novelty Restarts for Evolution Strategies" by Giuseppe Cuccu and Faustino Gomez, and "When Novelty is Not Enough" also by Giuseppe Cuccu and Faustino Gomez. 4/10/11: Added links to new UCF novelty search publications, "Evolving a Diversity of Virtual Creatures through Novelty Search and Local Competition", and "Improving Evolvability through Novelty Search and Self-Adaptation." 7/20/10: Added link to "Automatically Discovering Properties that Specify the Latent Behavior of UML Models" by Heather J. Goldsby. Modified online demo to change both mazes at once.