The ES-HyperNEAT Users Page


This page provides information on the use and implementation of evolvable-substrate HyperNEAT (ES-HyperNEAT). ES-HyperNEAT is an extension of the original HyperNEAT method for evolving large-scale artificial neural networks. While in the original HyperNEAT the human user had to decide the placement and number of hidden neurons, ES-HyperNEAT can determine the proper density and position of hidden neurons entirely on its own while still preserving the advances introduced by the original HyperNEAT.

Please direct inquiries to Sebastian Risi, sebastian.risi@gmail.com (Website), or Ken Stanley, kstanley@eecs.ucf.edu (Website).


Introduction

While past approaches to neuroevolution generally concentrated on deciding which node is connected to which, the recently introduced Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) method provided a new perspective on evolving ANNs by showing that the pattern of weights across the connectivity of an ANN can be generated as a function of its geometry. If you are not familiar with HyperNEAT, this article provides a good overview: A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks. The HyperNEAT Users Page provides many more references and additional information.

Despite its novel capabilities, a significant limitation of the original HyperNEAT is that the user must literally place hidden nodes at locations within a two-dimensional or three-dimensional space called the substrate. Researchers therefore wondered whether there is a way to make HyperNEAT decide on the placement and density of hidden neurons in the substrate on its own. It turns out that there is indeed a way for HyperNEAT to decide on the placement and density of hidden neurons without any additional representation beyond the traditional HyperNEAT CPPN. This enhanced approach is called evolvable-substrate HyperNEAT (ES-HyperNEAT).

How does it work?

The main idea in ES-HyperNEAT is to search through the pattern in the hypercube painted by the CPPN, starting from the user-defined input neurons, to find areas of high information, from which it chooses connections to express. The nodes that these connections connect are then naturally also placed in the substrate along with the connections. Thus the philosophy is that density should follow information: where there is more information in the CPPN-encoded pattern, there should be higher density within the substrate to capture it. By following this approach, there is no need for the user to decide anything about hidden node placement or density. Furthermore, ES-HyperNEAT can represent clusters of neurons with arbitrarily high density, even varying in density by region. The following journal paper is a comprehensive introduction and also includes the complete ES-HyperNEAT pseudocode: An Enhanced Hypercube-Based Encoding for Evolving the Placement, Density and Connectivity of Neurons.

The information extraction in ES-HyperNEAT is based on the quadtree algorithm, which works by recursively splitting a two-dimensional region into four sub-regions, as illustrated below.

The pseudocode in Algorithm 1 (in the journal paper) shows how the quadtree is generated and how the CPPN values are determined. The initialized quadtree is then used in the second stage of ES-HyperNEAT to decide where to place the neurons in the substrate.
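
As a rough illustration of this division and initialization phase, one might write something like the Python sketch below. It is not taken from any particular ES-HyperNEAT package; the QuadPoint class, the variance helper, the depth defaults, and the cppn callable (assumed to map a four-dimensional point (x1, y1, x2, y2) to a connection weight) are all assumptions of this sketch.

from collections import deque

class QuadPoint:
    """One quadtree node: a square substrate region plus its sampled CPPN value."""
    def __init__(self, x, y, width, level):
        self.x, self.y = x, y      # center of the region
        self.width = width         # half-width of the region
        self.level = level         # depth in the quadtree (root = 1)
        self.w = 0.0               # CPPN output sampled at (x, y)
        self.children = []

def variance(node):
    """Variance of the CPPN values of a node's children (0 for leaves)."""
    if not node.children:
        return 0.0
    mean = sum(c.w for c in node.children) / len(node.children)
    return sum((c.w - mean) ** 2 for c in node.children) / len(node.children)

def division_and_initialization(cppn, a, b, outgoing, initial_depth=3,
                                max_depth=5, division_threshold=0.03):
    """Build a quadtree of CPPN samples for the substrate node at (a, b).

    `outgoing` selects whether (a, b) acts as the source (True) or the target
    (False) of the sampled connections. The depth defaults are placeholders;
    how a depth maps to a grid resolution depends on the implementation's
    counting convention.
    """
    root = QuadPoint(0.0, 0.0, 1.0, 1)
    queue = deque([root])
    while queue:
        p = queue.popleft()
        # Split the region into four sub-regions and sample the CPPN at the
        # center of each one.
        hw = p.width / 2.0
        for dx in (-hw, hw):
            for dy in (-hw, hw):
                c = QuadPoint(p.x + dx, p.y + dy, hw, p.level + 1)
                c.w = cppn(a, b, c.x, c.y) if outgoing else cppn(c.x, c.y, a, b)
                p.children.append(c)
        # Divide further until the initial resolution is reached, and beyond
        # that only where the region still shows enough variance.
        if p.level < initial_depth or (p.level < max_depth and
                                       variance(p) > division_threshold):
            queue.extend(p.children)
    return root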

The quadtree representation created in the initialization phase serves as a heuristic indicator of variance (and hence information) to decide on the connections (and therefore placement and density of associated neurons) to express. In the pruning and extraction phase (illustrated below) the quadtree is traversed depth-first until the current quadtree node's variance is smaller than a variance threshold or until the quadtree node has no children. Subsequently, a connection for each qualifying quadtree node is created.

The pseudocode for the pruning and extraction phase is shown below:
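
The exact pseudocode appears in the journal paper; the sketch below only illustrates the same idea in Python, reusing the QuadPoint tree, the variance helper, and the cppn callable from the previous sketch. The Connection record and the neighbor offsets used for the band test are assumptions made for this example.

from collections import namedtuple

Connection = namedtuple("Connection", ["x1", "y1", "x2", "y2", "weight"])

def pruning_and_extraction(cppn, a, b, node, outgoing, connections,
                           variance_threshold=0.03, band_threshold=0.3):
    """Depth-first traversal of the quadtree rooted at `node`, expressing
    connections only for low-variance regions that lie within a band."""
    # Query the CPPN with (a, b) fixed as either the source or the target.
    query = (lambda x, y: cppn(a, b, x, y)) if outgoing else (lambda x, y: cppn(x, y, a, b))
    for c in node.children:
        if variance(c) >= variance_threshold:
            # The region still contains information: keep descending.
            pruning_and_extraction(cppn, a, b, c, outgoing, connections,
                                   variance_threshold, band_threshold)
        else:
            # Band pruning: compare the CPPN value at the candidate point with
            # neighboring samples to the left/right and above/below.
            d_left = abs(c.w - query(c.x - c.width, c.y))
            d_right = abs(c.w - query(c.x + c.width, c.y))
            d_up = abs(c.w - query(c.x, c.y - c.width))
            d_down = abs(c.w - query(c.x, c.y + c.width))
            band = max(min(d_left, d_right), min(d_up, d_down))
            if band > band_threshold:
                if outgoing:
                    connections.append(Connection(a, b, c.x, c.y, c.w))
                else:
                    connections.append(Connection(c.x, c.y, a, b, c.w))
    return connections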

The ES-HyperNEAT algorithm is depicted in the figure below. It starts by iteratively discovering the placement of the hidden neurons from the inputs (a) and then ties the network into the outputs (c). The two-dimensional motifs in (a) represent outgoing connectivity patterns from a single input node, whereas the motif in (c) represents the incoming connectivity pattern for a single output node. The target nodes discovered (through the quadtree algorithm) are those that reside within bands in the hypercube. In this way, regions of high variance are sought only in the two-dimensional cross-section of the hypercube containing the source or target node. The algorithm can be applied iteratively beyond the inputs to the discovered hidden nodes (b). At the end, only those nodes that have a path to both an input and an output neuron are kept (d). That way, the search through the hypercube is restricted to functional ANN topologies.

The pseudocode in Algorithm 3 implements the idea shown in the previous figure:
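
The sketch below combines the two previous sketches into a rough outline of this network-construction loop: discovery from the inputs, optional iteration from newly found hidden nodes, and finally the incoming connections of the outputs. The es_hyperneat function name and its signature are illustrative, and the path-based pruning of step (d) is only indicated by a comment.

def es_hyperneat(cppn, inputs, outputs, iteration_level=1):
    """Construct a substrate network from lists of input and output
    coordinates, returning the discovered hidden nodes and connections."""
    connections, hidden = [], set()

    def discover(x, y, outgoing):
        root = division_and_initialization(cppn, x, y, outgoing)
        return pruning_and_extraction(cppn, x, y, root, outgoing, [])

    # (a)/(b) Discover hidden nodes from the inputs, then iterate the
    # discovery from the newly found hidden nodes.
    frontier = list(inputs)
    for _ in range(iteration_level + 1):
        new_hidden = set()
        for (x, y) in frontier:
            for conn in discover(x, y, outgoing=True):
                connections.append(conn)
                new_hidden.add((conn.x2, conn.y2))
        frontier = list(new_hidden - hidden)
        hidden |= new_hidden

    # (c) Tie the network into the outputs via incoming connectivity patterns.
    # Only already-discovered hidden nodes become sources in this sketch.
    for (x, y) in outputs:
        for conn in discover(x, y, outgoing=False):
            if (conn.x1, conn.y1) in hidden:
                connections.append(conn)

    # (d) A full implementation would now prune hidden nodes (and their
    # connections) that have no path to both an input and an output neuron.
    return hidden, connections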

Available implementations of ES-HyperNEAT

The NEAT Software Catalog lists available implementations of ES-HyperNEAT.

ES-HyperNEAT Publications

The following journal paper is a comprehensive introduction to ES-HyperNEAT: An Enhanced Hypercube-Based Encoding for Evolving the Placement, Density and Connectivity of Neurons.

All ES-HyperNEAT-related publications are listed below:

Sebastian Risi and Kenneth O. Stanley (2012)
An Enhanced Hypercube-Based Encoding for Evolving the Placement, Density and Connectivity of Neurons
To appear in: Artificial Life journal. Cambridge, MA: MIT Press, 2012.

Sebastian Risi and Kenneth O. Stanley (2012)
A Unified Approach to Evolving Plasticity and Neural Geometry
To appear in: Proceedings of the International Joint Conference on Neural Networks (IJCNN-2012). Piscataway, NJ: IEEE.

Sebastian Risi and Kenneth O. Stanley (2011)
Enhancing ES-HyperNEAT to Evolve More Complex Regular Neural Networks
In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2011, Dublin, Ireland). New York, NY: The Association for Computing Machinery.

Sebastian Risi, Joel Lehman, and Kenneth O. Stanley (2010)
Evolving the Placement and Density of Neurons in the HyperNEAT Substrate
In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2010). 563-570. New York, NY: ACM.

Outside Projects with ES-HyperNEAT

Paul T. Oliver - Learning to race in real time with rtES-HyperNeat



Frequently Asked Questions


When should I use the original HyperNEAT and when should I use ES-HyperNEAT?

Recent experiments have shown that ES-HyperNEAT outperforms the original HyperNEAT in multi-task, maze-navigation, and modular domains. One of ES-HyperNEAT's main advantages is that it can elaborate on existing ANN structure because it can increase the number of connections and nodes in the substrate during evolution. Regular HyperNEAT, in contrast, tends to produce fully-connected ANNs, which often require the entire set of ANN connection weights to represent even a partial task solution. An example of ES-HyperNEAT elaborating on ANN structure can be seen in the figure below, together with the CPPN and the resulting behavior of the agent.

The more complex the task, the more important it becomes to relieve the user of the need to decide where to place the hidden neurons. However, it is important to keep in mind that HyperNEAT and ES-HyperNEAT are both experimental methods and much remains to be learned about their capabilities. HyperNEAT is easier to implement and often works well, so it is still a valid choice for many tasks. Ultimately, however, it seems likely that ES-HyperNEAT will prove the better and more convenient choice in many future applications.

Why not just start the original HyperNEAT with a million hidden neurons?

Recent results, reported in our paper An Enhanced Hypercube-Based Encoding for Evolving the Placement, Density and Connectivity of Neurons, suggest that simply uniformly increasing the number of hidden nodes in the substrate does not always increase HyperNEAT's performance and can in fact significantly reduce it, which is likely due to the increased crosstalk that each neuron experiences. (In domains in which hidden neurons do not connect to each other, this issue may be less prevalent.)

One way to view the problem is that more complex domains require incrementally building on previously discovered stepping stones. While direct encodings like NEAT can complexify ANNs over generations by adding new nodes and connections through mutation, the indirect HyperNEAT encoding tends to start already with fully-connected ANNs, which take the entire set of ANN connection weights to represent a partial task solution.

Therefore, starting the original HyperNEAT with a million interconnected nodes will likely not work, while ES-HyperNEAT can increase the number of hidden neurons during evolution, which allows it to elaborate on existing structure in the substrate. While the separate HyperNEAT-LEO extension may also mitigate the problems of crosstalk and full connectivity, it does not necessarily lead to the kind of incremental growth that ES-HyperNEAT offers. Note that recent experiments also suggest that HyperNEAT-LEO and ES-HyperNEAT are complementary (i.e. they can help each other when implemented together).

What settings should I use for the ES-HyperNEAT parameters?

ES-HyperNEAT has the following parameters:

Initial Resolution: The initial resolution for the division phase of ES-HyperNEAT.
Maximum Resolution: The maximum resolution for the division phase of ES-HyperNEAT.
Band Pruning Threshold: The value that the band level of a connection must exceed to be expressed in ES-HyperNEAT.
Variance Threshold: The variance value that determines how far the depth-first search in the pruning and extraction phase in ES-HyperNEAT should traverse the quadtree.
Iteration Level: The parameter that determines how many times the quadtree extraction algorithm in ES-HyperNEAT is applied iteratively to the hidden neurons discovered so far.
Division Threshold: The variance value that a quadtree node in ES-HyperNEAT must exceed to be further divided.

For most tasks the following default settings can remain unchanged:

Band pruning threshold = 0.3
Initial resolution = 8x8 (corresponds to a quadtree depth of 4)
Variance threshold = 0.03
Division threshold = 0.03
Iteration level = 1

Finally, the maximum resolution allows the user to set an upper bound on the number of hidden neurons and can be modified depending on the complexity of the task.
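
For convenience, the defaults above could be collected in a small settings object like the hypothetical one sketched below; the field names are illustrative and real implementations expose these parameters differently. The maximum resolution is left without a default because it should be chosen according to the task.

from dataclasses import dataclass

@dataclass
class ESHyperNEATParams:
    maximum_resolution: int                # no universal default; set per task complexity
    initial_resolution: int = 8            # 8x8 (corresponds to a quadtree depth of 4)
    band_pruning_threshold: float = 0.3
    variance_threshold: float = 0.03
    division_threshold: float = 0.03
    iteration_level: int = 1

# Example: a configuration for a more demanding task; the value 32 is
# purely illustrative.
params = ESHyperNEATParams(maximum_resolution=32)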

Why do I still have to decide on the placement of Input and Output nodes?

While ES-HyperNEAT frees the user from deciding on the position of the hidden nodes, placing the inputs and outputs allows the user to insert knowledge about the problem geometry into the evolutionary search, which is invisible to traditional encodings. For example, the sensors of an autonomous robot can be placed from left to right on the substrate in the same order that they exist on the robot. Outputs for moving left or right can also be placed in the same order, allowing ES-HyperNEAT to understand from the outset the correlation of sensors to effectors. Therefore, allowing the user to inject such knowledge from the start is a potential advantage.
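
As a small illustration, a hypothetical robot with five rangefinder sensors and left/right turn effectors might lay out its substrate coordinates as follows; the sensor setup and the coordinates themselves are purely illustrative.

# Five rangefinder sensors, placed left-to-right along the bottom of the
# substrate in the same order they appear on the robot.
inputs = [(-1.0, -1.0), (-0.5, -1.0), (0.0, -1.0), (0.5, -1.0), (1.0, -1.0)]

# Turn-left and turn-right effectors, placed in the same left/right order
# along the top of the substrate so their geometric relationship to the
# sensors is visible to the CPPN.
outputs = [(-0.5, 1.0), (0.5, 1.0)]

# These coordinates would then be passed to the network-construction routine,
# e.g. the es_hyperneat(...) sketch above.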

Can I restrict the complexity of the networks that ES-HyperNEAT generates?

One common question about ES-HyperNEAT is whether the networks it creates contain too many neurons. In some cases it does find solutions with several times more connections and/or nodes than the minimal solution requires. While the real answer to this question will emerge from further research, perhaps the singular focus within the field of neuroevolution on absolutely minimal structure is misplaced. When the products of evolution contain potentially billions of neurons, as in nature, an encoding that can reach high levels of intelligence almost certainly needs significant flexibility in the number of neurons it uses to solve a particular problem. Of course, if a particular level of intelligence can be achieved with only a million neurons, then a billion-neuron solution would be undesirable. However, restricting variation in the number of neurons too much is likely equally destructive. In that sense, quibbling over a few dozen neurons more or less may be missing the forest for the trees. It is also important to keep in mind that all these solutions are found relatively quickly, which highlights that with an indirect encoding, exact size and solution time are not directly correlated.

However, the user retains some control over the maximum number of hidden neurons, mainly through ES-HyperNEAT's maximum resolution parameter.


Updates: 4/11/12 Initial Page. 5/15/12 Added link to "An Enhanced Hypercube-Based Encoding for Evolving the Placement, Density and Connectivity of Neurons" in the Publication section. 1/12/13 Fixed missing link to Alife journal paper. 7/3/13 Added new section "Outside Projects with ES-HyperNEAT". 5/6/16 Linked to NEAT Software Catalog for list of implementations.