Gregory Morse and Kenneth O. Stanley (2016)
Simple Evolutionary Optimization Can Rival Stochastic Gradient Descent in Neural Networks
In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2016). New York, NY: ACM, 2016 (8 pages).

This paper is accompanied by source code.

Abstract  

While evolutionary algorithms (EAs) have long offered an alternative approach to optimization, in recent years backpropagation through stochastic gradient descent (SGD) has come to dominate the fields of neural network optimization and deep learning. One hypothesis for the absence of EAs in deep learning is that modern neural networks have become so high dimensional that evolution with its inexact gradient cannot match the exact gradient calculations of backpropagation. Furthermore, the evaluation of a single individual in evolution on the big data sets now prevalent in deep learning would present a prohibitive obstacle towards efficient optimization. This paper challenges these views, suggesting that EAs can be made to run significantly faster than previously thought by evaluating individuals only on a small number of training examples per generation. Surprisingly, using this approach with only a simple EA (called the limited evaluation EA or LEEA) is competitive with the performance of the state-of-the-art SGD variant RMSProp on several benchmarks with neural networks with over 1,000 weights. More investigation is warranted, but these initial results suggest the possibility that EAs could be the first viable training alternative for deep learning outside of SGD, thereby opening up deep learning to all the tools of evolutionary computation.
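
As a rough illustration of the limited-evaluation idea described in the abstract, the following sketch (Python with NumPy) evolves the flat weight vector of a tiny feedforward network, scoring each individual on a fresh small batch of training examples each generation so that a generation costs roughly as much as one SGD minibatch step. The network size, population size, selection scheme (plain truncation selection with Gaussian mutation), and all hyperparameters are illustrative assumptions and do not reproduce the paper's actual LEEA; see the accompanying source code for the real implementation.

import numpy as np

rng = np.random.default_rng(0)

# Tiny network and illustrative hyperparameters (not the paper's settings).
N_IN, N_HID, N_OUT = 4, 8, 3
N_WEIGHTS = N_IN * N_HID + N_HID * N_OUT
POP_SIZE, BATCH_SIZE, N_GENERATIONS = 50, 16, 1000
MUTATION_STD = 0.02

def forward(weights, x):
    """Evaluate the network encoded by a flat weight vector."""
    w1 = weights[:N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = weights[N_IN * N_HID:].reshape(N_HID, N_OUT)
    h = np.tanh(x @ w1)
    return h @ w2

def fitness(weights, x_batch, y_batch):
    """Negative mean squared error on a small batch (higher is better)."""
    pred = forward(weights, x_batch)
    return -np.mean((pred - y_batch) ** 2)

# Toy data standing in for a real training set.
X = rng.normal(size=(1000, N_IN))
Y = rng.normal(size=(1000, N_OUT))

population = rng.normal(scale=0.1, size=(POP_SIZE, N_WEIGHTS))

for gen in range(N_GENERATIONS):
    # Key limited-evaluation idea: each generation sees only a small
    # sample of the data rather than the full training set.
    idx = rng.choice(len(X), size=BATCH_SIZE, replace=False)
    x_batch, y_batch = X[idx], Y[idx]

    scores = np.array([fitness(ind, x_batch, y_batch) for ind in population])

    # Truncation selection: keep the top half, refill with mutated copies.
    order = np.argsort(scores)[::-1]
    parents = population[order[:POP_SIZE // 2]]
    children = parents + rng.normal(scale=MUTATION_STD, size=parents.shape)
    population = np.vstack([parents, children])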