Multiagent learning (MAL) is the study of agents that learn in the presence of other agents who are also learning. As a field, MAL builds on work done in both artificial intelligence and game theory. Game theory has mostly focused on proving that certain theoretical properties hold for a wide class of learning situations while ignoring computational issues, whereas artificial intelligence has mainly focused on designing practical multiagent learning algorithms for small classes of games.
This thesis is concerned with finding a balance between the game-theoretic and artificial-intelligence approaches. We introduce a new learning algorithm, FRAME, which provably converges to the set of Nash equilibria in self-play while consulting experts that can greatly improve the rate of convergence to that set. Even if the experts are poorly suited to the learning problem, or are outright hostile, FRAME still provably converges. Our second contribution takes this idea further by allowing agents to consult multiple experts and to adapt dynamically so that the expert best suited to the given game is consulted. The result is a flexible algorithm capable of dealing with new and unknown games. Experimental results validate our approach.
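The thesis text itself is not part of this record, so the following is only a rough illustration of the "consult multiple experts and adapt toward the best one" idea, sketched here with a standard multiplicative-weights bandit rule (EXP3) as a stand-in rather than FRAME's actual update; every function name, signature, and parameter below is hypothetical.

```python
import math
import random

def exp3_consult_experts(experts, play_round, num_rounds=1000, gamma=0.1):
    """Sketch of adaptive expert consultation via EXP3 (not FRAME itself).

    experts:    list of callables, each mapping the play history to an action.
    play_round: callable taking an action, returning a reward in [0, 1].
    Returns the final mixing probabilities over the experts.
    """
    k = len(experts)
    weights = [1.0] * k
    history = []
    for _ in range(num_rounds):
        total = sum(weights)
        # Mix the weight-proportional distribution with uniform exploration.
        probs = [(1 - gamma) * w / total + gamma / k for w in weights]
        idx = random.choices(range(k), weights=probs)[0]
        action = experts[idx](history)
        reward = play_round(action)  # only the played action is scored
        history.append((action, reward))
        # Importance-weighted reward estimate keeps the update unbiased
        # even though only one expert was consulted this round.
        estimate = reward / probs[idx]
        weights[idx] *= math.exp(gamma * estimate / k)
    total = sum(weights)
    return [(1 - gamma) * w / total + gamma / k for w in weights]

if __name__ == "__main__":
    # Toy demo: two constant "experts" in a game where action 1 always pays off.
    experts = [lambda h: 0, lambda h: 1]
    play = lambda a: float(a == 1)
    print(exp3_consult_experts(experts, play))
```

The importance-weighted estimate is what lets the agent consult only one expert per round while still driving the weight of poorly suited or hostile experts toward zero, which echoes the robustness claim in the abstract above.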
Identifier | oai:union.ndltd.org:LACETR/oai:collectionscanada.gc.ca:OWTU.10012/2785
Date | January 2007
Creators | Hines, Greg
Source Sets | Library and Archives Canada ETDs Repository / Centre d'archives des thèses électroniques de Bibliothèque et Archives Canada
Language | English
Detected Language | English
Type | Thesis or Dissertation
Format | 677115 bytes, application/pdf