
Scaling reinforcement learning to the unconstrained multi-agent domain

Reinforcement learning is a machine learning technique designed to mimic the
way animals learn by receiving rewards and punishments. It is intended for training
intelligent agents when very little is known about the agent's environment, and the
agent's designer is consequently unable to hand-craft an appropriate policy. Using
reinforcement learning, the designer need only reward the agent when it does
something right, and the algorithm crafts an appropriate policy automatically.
In many situations it is desirable to use this technique to train systems of agents
(for example, to train robots to play RoboCup soccer in a coordinated fashion). Unfortunately,
several significant computational issues arise when this technique is used
to train systems of agents. This dissertation introduces a suite of techniques that
overcome many of these difficulties in a range of common situations.
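
To make the reward-driven loop above concrete, here is a minimal tabular Q-learning sketch in Python. The environment interface, problem sizes, and hyperparameters are illustrative assumptions, not details taken from the dissertation.

```python
import random

# Minimal tabular Q-learning sketch (illustrative; not the dissertation's code).
# Assumes a small environment exposing reset() -> state and
# step(action) -> (next_state, reward, done), with integer states.

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate
N_STATES, N_ACTIONS = 16, 4              # hypothetical problem size

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def choose_action(state):
    """Epsilon-greedy: usually exploit the current estimates, sometimes explore."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])

def train_episode(env):
    """One episode of Q-learning: the designer supplies only the reward signal."""
    state, done = env.reset(), False
    while not done:
        action = choose_action(state)
        next_state, reward, done = env.step(action)
        # Temporal-difference update nudges Q toward the reward-consistent value.
        best_next = max(Q[next_state])
        Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
        state = next_state
```

The designer never writes the policy itself; repeated episodes of these updates produce one from the reward signal alone.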
First, we show how multi-agent reinforcement learning can be made more tractable
by forming coalitions out of the agents and training each coalition separately. Coalitions
are formed using information-theoretic techniques, and we find that with a
coalition-based approach, the computational complexity of reinforcement learning
can be made linear in the total number of agents in the system.
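
A minimal sketch of the coalition idea follows, assuming (hypothetically) that agents are grouped greedily by the empirical mutual information between their reward histories; the estimator, merge rule, and threshold are illustrative stand-ins for the dissertation's actual information-theoretic procedure.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of mutual information between two equal-length
    discrete sequences (e.g., discretized per-agent reward histories)."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def form_coalitions(reward_histories, threshold=0.5):
    """Greedily merge agents whose reward streams share MI above `threshold`.
    (Illustrative grouping rule and threshold; the dissertation's actual
    criterion may differ.)"""
    agents = list(reward_histories)
    coalition_of = {a: {a} for a in agents}
    for i, a in enumerate(agents):
        for b in agents[i + 1:]:
            if coalition_of[a] is coalition_of[b]:
                continue
            if mutual_information(reward_histories[a], reward_histories[b]) > threshold:
                merged = coalition_of[a] | coalition_of[b]
                for member in merged:
                    coalition_of[member] = merged
    return [set(c) for c in {frozenset(c) for c in coalition_of.values()}]
```

Each coalition can then be trained as its own smaller reinforcement learning problem, which is what keeps the overall cost growing with the number of agents rather than with the size of the joint action space.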
Next, we look at ways to integrate domain knowledge into the reinforcement learning
process and show how this can significantly improve policy quality in multi-agent
situations. Specifically, we find that integrating domain knowledge into a
reinforcement learning process can overcome training data deficiencies, allowing the
learner to converge to acceptable solutions where a lack of training data would
otherwise have prevented convergence.
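
One standard way to inject domain knowledge into the reward signal is potential-based reward shaping; whether this matches the dissertation's mechanism is an assumption, but it illustrates the idea compactly.

```python
def shaped_reward(reward, state, next_state, potential, gamma=0.95):
    """Potential-based reward shaping (Ng, Harada & Russell, 1999).

    `potential` encodes the designer's domain knowledge as a heuristic score
    for each state; shaping of this form provably preserves optimal policies.
    (Illustrative mechanism; the dissertation may integrate knowledge differently.)
    """
    return reward + gamma * potential(next_state) - potential(state)

# Hypothetical usage: reward progress toward a known goal location.
# potential = lambda state: -manhattan_distance(state, GOAL)
```

The shaping term supplies informative feedback on every step, so the learner is no longer starved for signal when genuine rewards are sparse.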
We then show how to train policies over continuous action spaces, which can reduce
problem complexity in domains that naturally require them (such as analog
controllers) by eliminating the need to finely discretize the action space.
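
A minimal sketch of a continuous-action policy, assuming a Gaussian actor with a linear mean and a REINFORCE-style score-function update; this is an illustrative stand-in, not the dissertation's algorithm.

```python
import random

class GaussianPolicy:
    """One-dimensional Gaussian policy over a continuous action space.

    Actions are sampled from N(mean(s), sigma^2) rather than picked from a
    discretized grid, and the mean is adjusted by a score-function update.
    (Illustrative actor; the dissertation's continuous-action method may differ.)
    """
    def __init__(self, n_features, sigma=0.2, lr=0.01):
        self.w = [0.0] * n_features   # linear mean: mean(s) = w . s
        self.sigma, self.lr = sigma, lr

    def mean(self, state):
        return sum(wi * si for wi, si in zip(self.w, state))

    def act(self, state):
        return random.gauss(self.mean(state), self.sigma)

    def update(self, state, action, advantage):
        # grad_w log N(action | w.s, sigma^2) = (action - mean) / sigma^2 * s
        coeff = self.lr * advantage * (action - self.mean(state)) / self.sigma ** 2
        self.w = [wi + coeff * si for wi, si in zip(self.w, state)]
```

Because the policy outputs a distribution over the real line directly, no resolution parameter has to be tuned and the action space never blows up under finer discretization.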
Finally, we look at ways to perform reinforcement learning on modern GPUs and show
how doing so lets us tackle significantly larger problems. We find that by offloading
some of the RL computation to the GPU, we can achieve a speedup factor of almost
4.5 in the total training process.
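
As an illustration of the kind of computation that offloads well, here is a batched Q-learning backup written with PyTorch tensors; the tensor layout, the sizes, and the choice of PyTorch are assumptions made for this sketch, not the dissertation's GPU implementation.

```python
import torch

def batched_q_backup(q_table, transitions, alpha=0.1, gamma=0.95):
    """Apply many Q-learning backups at once as batched GPU tensor ops.

    `q_table` is an (n_states, n_actions) tensor resident on the GPU, and
    `transitions` packs (state, action, reward, next_state) as index/value
    tensors. Assumes the (state, action) pairs in a batch are distinct, since
    plain indexed assignment does not accumulate duplicate updates.
    """
    s, a, r, s2 = transitions
    best_next = q_table[s2].max(dim=1).values       # max_a' Q(s', a') per row
    td_error = r + gamma * best_next - q_table[s, a]
    q_table[s, a] += alpha * td_error
    return q_table

# Hypothetical sizes; falls back to the CPU when no GPU is present.
device = "cuda" if torch.cuda.is_available() else "cpu"
q = torch.zeros(10_000, 8, device=device)
```

Replacing a per-transition update loop with one batched kernel launch like this is the sort of offloading that can plausibly account for the reported speedup.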

Identifier: oai:union.ndltd.org:tamu.edu/oai:repository.tamu.edu:1969.1/ETD-TAMU-1908
Date: 02 June 2009
Creators: Palmer, Victor
Contributors: Ioerger, Thomas
Source Sets: Texas A and M University
Language: en_US
Detected Language: English
Type: Book, Thesis, Electronic Dissertation, text
Format: electronic, application/pdf, born digital
