David B. D'Ambrosio and Kenneth O. Stanley (2013)
Scalable Multiagent Learning through Indirect Encoding of Policy Geometry
In: Evolutionary Intelligence. New York, NY: Springer-Verlag, 2013 (manuscript, 30 pages).

This paper is accompanied by a set of video demos at http://eplex.cs.ucf.edu/demos/multiagentcompared.
The final publication is available at http://link.springer.com/article/10.1007%2Fs12065-012-0086-3. 

Abstract  

Multiagent systems present many challenging, real-world problems to artificial intelligence. Because it is difficult to engineer the behaviors of multiple cooperating agents by hand, multiagent learning has become a popular approach to their design. While there are a variety of traditional approaches to multiagent learning, many suffer from increased computational costs for large teams and from the problem of reinvention (that is, the inability to recognize that certain skills are shared by some or all team members). This paper presents an alternative approach to multiagent learning called multiagent HyperNEAT that represents the team as a pattern of policies rather than as a set of individual agents. The main idea is that an agent's location within a canonical team layout (which can be physical, such as positions on a sports team, or conceptual, such as an agent's relative speed) tends to dictate its role within that team. This paper introduces the term policy geometry to describe this relationship between role and position on the team. Interestingly, such a pattern effectively represents up to an infinite number of multiagent policies, which can be sampled from the policy geometry as needed, making it possible to train very large teams or, in some cases, to scale up the size of a team without additional learning. In this paper, multiagent HyperNEAT is compared to a traditional learning method, multiagent Sarsa, in a predator-prey domain, where it demonstrates its ability to train large teams.
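
To make the idea of sampling policies from a policy geometry concrete, the following minimal Python sketch illustrates how a single pattern, indexed by an agent's position in a canonical team layout, can be queried to produce the weights of any number of agent policies. This is not the paper's implementation: in multiagent HyperNEAT the pattern is an evolved CPPN, whereas here a hard-coded function stands in for it, and all names (weight_pattern, sample_policy, sample_team) are hypothetical.

    import math

    # Hypothetical stand-in for an evolved CPPN: a fixed function mapping
    # (team position z, source node coords, target node coords) to a
    # connection weight. In multiagent HyperNEAT this pattern would be
    # evolved; it is hard-coded here purely for illustration.
    def weight_pattern(z, x1, y1, x2, y2):
        return math.sin(3.0 * z + x1 - x2) * math.cos(y1 + y2 - z)

    # A tiny fixed substrate: input and output node coordinates shared by
    # every agent on the team (assumed layout, not from the paper).
    INPUTS  = [(-1.0, -1.0), (0.0, -1.0), (1.0, -1.0)]
    OUTPUTS = [(-1.0,  1.0), (1.0,  1.0)]

    def sample_policy(z):
        """Build one agent's weight matrix by querying the pattern at z."""
        return [[weight_pattern(z, x1, y1, x2, y2) for (x1, y1) in INPUTS]
                for (x2, y2) in OUTPUTS]

    def sample_team(n):
        """Sample an n-agent team at evenly spaced positions in [0, 1]."""
        return [sample_policy(i / max(n - 1, 1)) for i in range(n)]

    # The same pattern can be resampled at a different team size with no
    # additional learning, which is the scaling property the abstract notes.
    small_team = sample_team(3)
    large_team = sample_team(12)

Because every agent's policy is a sample of one underlying pattern rather than a separately learned controller, shared skills are represented once, and resampling the pattern at more positions (as in sample_team(12) above) is what allows team size to grow without retraining.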