Keepaway is a subtask of robot soccer in which three 'keepers' attempt to keep possession of the ball while a 'taker' tries to steal it from them. Because it is less complex than full robot soccer, keepaway lends itself well as a testbed for multi-agent systems. This thesis presents a comprehensive evaluation of learning methods based on neuroevolution with Enforced Sub-Populations (ESP) in the RoboCup soccer simulator. Both single- and multi-component ESP are evaluated with various learning methods on homogeneous and heterogeneous teams of agents. In particular, the effectiveness of modularity and task decomposition for evolving keepaway teams is evaluated. The results show that in the RoboCup soccer simulator, homogeneous agents controlled by monolithic networks perform best. More complex learning approaches such as layered learning, concurrent layered learning, and co-evolution decrease performance, as does making the agents heterogeneous. The results are also compared with previous results in the keepaway domain.
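To make the ESP approach concrete, the following is a minimal sketch of the core idea: one subpopulation of neurons is maintained per hidden unit, networks are assembled by sampling one neuron from each subpopulation, and a network's fitness is credited back to the neurons that participated. The task (XOR), network sizes, and mutation scheme here are illustrative assumptions, not the thesis's actual keepaway setup.

```python
import random
import math

# Toy dimensions (assumptions for illustration, not the thesis's configuration).
N_INPUTS, N_HIDDEN, POP_PER_SUB = 2, 4, 20

def make_neuron():
    # Each neuron owns its input weights plus a single output weight.
    return [random.uniform(-1, 1) for _ in range(N_INPUTS + 1)]

def activate(network, inputs):
    # One-hidden-layer net: tanh hidden units feeding a linear output.
    out = 0.0
    for neuron in network:
        h = math.tanh(sum(w * x for w, x in zip(neuron[:N_INPUTS], inputs)))
        out += neuron[N_INPUTS] * h
    return out

# Stand-in task: XOR. Fitness is negative squared error (higher is better).
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(network):
    return -sum((activate(network, x) - y) ** 2 for x, y in XOR)

def evolve(generations=30, trials=20):
    # One subpopulation per hidden-unit position (the "enforced" part of ESP).
    subpops = [[make_neuron() for _ in range(POP_PER_SUB)]
               for _ in range(N_HIDDEN)]
    best_f, best_net = -1e9, None
    for _ in range(generations):
        scores = [[[] for _ in range(POP_PER_SUB)] for _ in range(N_HIDDEN)]
        for _ in range(trials):
            # Assemble a network by sampling one neuron from each subpopulation.
            picks = [random.randrange(POP_PER_SUB) for _ in range(N_HIDDEN)]
            net = [subpops[i][p] for i, p in enumerate(picks)]
            f = fitness(net)
            if f > best_f:
                best_f, best_net = f, [n[:] for n in net]
            # Credit the network's fitness to every participating neuron.
            for i, p in enumerate(picks):
                scores[i][p].append(f)
        # Within each subpopulation, replace the worse half with mutated
        # copies of the better half (a simple stand-in for crossover).
        for i in range(N_HIDDEN):
            avg = [sum(s) / len(s) if s else -1e9 for s in scores[i]]
            order = sorted(range(POP_PER_SUB), key=lambda j: avg[j], reverse=True)
            half = POP_PER_SUB // 2
            for worst, best in zip(order[half:], order[:half]):
                subpops[i][worst] = [w + random.gauss(0, 0.3)
                                     for w in subpops[i][best]]
    return best_f, best_net
```

Because each subpopulation specializes on one role in the network, cooperation among neurons is rewarded directly; the multi-component variants evaluated in the thesis extend this idea from neurons within one network to networks within a team.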
Identifier | oai:union.ndltd.org:UTEXAS/oai:repositories.lib.utexas.edu:2152/20022 |
Date | 24 April 2013 |
Creators | Subramoney, Anand |
Source Sets | University of Texas |
Language | en_US |
Detected Language | English |
Format | application/pdf |