Kenneth O. Stanley, Jeff Clune, David B. D'Ambrosio, Colin D. Green, Joel Lehman, Gregory Morse, Justin K. Pugh, Sebastian Risi, and Paul Szerlip (2013)
CPPNs Effectively Encode Fracture: A Response to Critical Factors in the Performance of HyperNEAT
University of Central Florida Dept. of EECS Technical Report CS-TR-13-05 (Manuscript, 37 pages).

This paper is accompanied by a companion page of source code, parameters, and videos at http://eplex.cs.ucf.edu/cfresponse13.html.

Abstract  

This paper demonstrates that compositional pattern producing networks (CPPNs) can produce phenotypic patterns that exhibit fracture (i.e., discontinuities in patterns) and that, when neural networks are evolved with CPPNs, as in the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) approach, the algorithm effectively incorporates hidden nodes to improve performance. These findings contradict those of a recent paper by van den Berg and Whiteson [51], to which this paper is in part a response. In particular, while van den Berg and Whiteson [51] report difficulty for HyperNEAT in several experimental domains, this paper provides counter-evidence that HyperNEAT in fact performs well in those domains. It also examines the reasons for the discrepancy, which stems largely from implementation details. Building on this foundation of counter-evidence in the domains from van den Berg and Whiteson [51], the paper then turns to refuting their central hypothesis, namely that patterns with fracture are problematic for CPPNs. This hypothesis is contradicted by a wide variety of detailed examples of fracture in CPPNs, suggesting that fracture in CPPNs is the norm rather than the exception. In this way, the paper goes beyond examining particular experimental domains by highlighting the largely unexplored opportunity to study fracture in CPPN-based patterns, which can potentially initiate exciting future research into representation.
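
While the paper itself presents evolved examples, a minimal hand-constructed sketch can illustrate what fracture in a CPPN-generated pattern looks like. The topology, weights, and activation functions below are assumptions chosen purely for illustration and are not taken from the paper; the sketch only shows how composing a steep sigmoid with sine and Gaussian nodes over (x, y) coordinates yields sharp boundaries between otherwise smooth regions.

```python
# Illustrative sketch (not the authors' implementation): a tiny hand-built CPPN
# queried over a 2D grid. All weights and the topology are assumptions.
import numpy as np

def gaussian(z):
    return np.exp(-z ** 2)

def steep_sigmoid(z, gain=20.0):
    # A high-gain sigmoid approximates a step function, one simple
    # source of fracture (abrupt transitions) in a generated pattern.
    return 1.0 / (1.0 + np.exp(-gain * z))

def cppn(x, y):
    # Hidden node 1: periodic structure from a sine of a weighted input sum.
    h1 = np.sin(3.0 * x + 1.5 * y)
    # Hidden node 2: radially symmetric structure from a Gaussian of distance.
    h2 = gaussian(2.0 * np.sqrt(x ** 2 + y ** 2) - 1.0)
    # Output node: a steep sigmoid of the hidden activations creates sharp
    # boundaries where the combined signal crosses its threshold.
    return steep_sigmoid(1.2 * h1 + 1.8 * h2 - 0.5)

# Query the CPPN across the unit square to produce a phenotypic pattern.
xs, ys = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
pattern = cppn(xs, ys)
print(pattern.shape)  # (200, 200) grid of intensities in [0, 1]
```

Rendering `pattern` as a grayscale image shows smooth regions separated by abrupt transitions, i.e. the kind of discontinuity the paper refers to as fracture.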