Greg Dubbin and Kenneth O. Stanley (2010)
Learning to Dance through Interactive Evolution
In: Proceedings of the Eighth European Event on Evolutionary and Biologically Inspired Music, Sound, Art and Design (EvoMUSART 2010). New York, NY: Springer, 2010 (10 pages)

Note: This paper is accompanied by a set of videos at http://eplex.cs.ucf.edu/dance-evolution-videos.html

Abstract

A relatively rare application of artificial intelligence at the nexus of art and music is dance. The impulse shared by all humans to express themselves through dance represents a unique opportunity to artificially capture human creative expression. In particular, the spontaneity and relative ease of moving to the music without any overall plan suggests a natural connection between temporal patterns and motor control. To explore this potential, this paper presents a model called Dance Evolution, which allows the user to train virtual humans to dance to MIDI songs or raw audio; that is, the dancers can dance to any song heard on the radio, including the latest pop music. The dancers are controlled by artificial neural networks (ANNs) that "hear" MIDI sequences or raw audio processed through a discrete Fourier transform-based technique. ANNs learn to dance in new ways through an interactive evolutionary process driven by the user. The main result is that when motion is expressed as a function of sound, the effect is a plausible approximation of the natural human tendency to move to music.
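To make the sound-to-motion pipeline described above concrete, the sketch below shows one way raw audio could be reduced to a few DFT-based band energies that drive a small feedforward ANN whose outputs stand in for joint angles. This is a minimal illustration of the general idea, not the paper's implementation: the band count, pooling scheme, network topology, and random (rather than interactively evolved) weights are all hypothetical choices made here for brevity.

```python
import numpy as np

def dft_band_features(frame, n_bands=8):
    """Summarize one audio frame as a few spectral band energies.

    The frame is Hann-windowed and passed through a real DFT; the
    magnitude spectrum is pooled into n_bands coarse energies. Both the
    band count and the pooling are illustrative, not the paper's exact
    audio representation.
    """
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    bands = np.array_split(spectrum, n_bands)
    return np.array([band.mean() for band in bands])

class DancerANN:
    """A small fixed-topology feedforward network mapping sound features
    to joint-angle outputs. In the actual system the networks are shaped
    by user-driven interactive evolution; here the weights are simply
    random, standing in for a single hypothetical individual."""

    def __init__(self, n_inputs=8, n_hidden=12, n_outputs=6, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.5, size=(n_hidden, n_inputs))
        self.w2 = rng.normal(scale=0.5, size=(n_outputs, n_hidden))

    def activate(self, features):
        hidden = np.tanh(self.w1 @ features)
        return np.tanh(self.w2 @ hidden)  # joint angles scaled to [-1, 1]

# Example: drive a dancer frame by frame from a synthetic audio signal.
sample_rate = 44100
t = np.arange(sample_rate) / sample_rate
audio = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)

dancer = DancerANN()
frame_size = 1024
for start in range(0, len(audio) - frame_size, frame_size):
    features = dft_band_features(audio[start:start + frame_size])
    joint_angles = dancer.activate(features / (features.max() + 1e-9))
    # joint_angles would be applied to the virtual dancer's skeleton here.
```

In this sketch, motion is literally a function of sound: each audio frame is mapped through the network to a pose update, so different evolved weight settings would yield different movement styles for the same song.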