Improving Convergence Rates in Multiagent Learning Through Experts and Adaptive Consultation

Multiagent learning (MAL) is the study of agents learning in the presence of other agents that are also learning. As a field, MAL builds on work in both artificial intelligence and game theory. Game theory has mostly focused on proving that certain theoretical properties hold for a wide class of learning situations while ignoring computational issues, whereas artificial intelligence has mainly focused on designing practical multiagent learning algorithms for small classes of games.


This thesis is concerned with finding a balance between the game-theory and artificial-intelligence approaches. We introduce a new learning algorithm, FRAME, which provably converges to the set of Nash equilibria in self-play while consulting experts that can greatly improve the rate of convergence to that set. Even if the experts are poorly suited to the learning problem, or are outright hostile, FRAME still provably converges. Our second contribution takes this idea further by allowing agents to consult multiple experts and to adapt dynamically so that the best expert for the given game is consulted. The result is a flexible algorithm capable of dealing with new and unknown games. Experimental results validate our approach.
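
The adaptive-consultation idea described in the abstract is, in spirit, an expert-selection problem. As a loose illustration only, the sketch below shows one standard way such selection can be realized, a multiplicative-weights (Hedge-style) rule over a pool of candidate experts. This is not the FRAME algorithm from the thesis; the expert pool, payoffs, and parameter values here are hypothetical.

import random

def select_expert(weights):
    # Sample an expert index with probability proportional to its weight.
    total = sum(weights)
    r = random.uniform(0.0, total)
    running = 0.0
    for i, w in enumerate(weights):
        running += w
        if r <= running:
            return i
    return len(weights) - 1

def update_weights(weights, payoffs, eta=0.1):
    # Multiplicatively boost experts whose advice would have paid off this
    # round; payoffs are assumed to be scaled into [0, 1].
    return [w * ((1.0 + eta) ** p) for w, p in zip(weights, payoffs)]

# Hypothetical usage: three candidate experts advising play in a repeated game.
weights = [1.0, 1.0, 1.0]
for _ in range(100):
    chosen = select_expert(weights)
    # In a real learner these payoffs would come from the game being played;
    # random values stand in here purely to exercise the update rule.
    payoffs = [random.random() for _ in weights]
    weights = update_weights(weights, payoffs)

Over repeated rounds, a rule of this kind concentrates probability on whichever expert has been most useful for the game at hand, which is the behaviour the abstract attributes to the adaptive-consultation contribution.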

Identifier: oai:union.ndltd.org:WATERLOO/oai:uwspace.uwaterloo.ca:10012/2785
Date: January 2007
Creators: Hines, Greg
Source Sets: University of Waterloo Electronic Theses Repository
Language: English
Detected Language: English
Type: Thesis or Dissertation
Format: 677115 bytes, application/pdf
