1. Sample efficient multiagent learning in the presence of Markovian agents. Chakraborty, Doran. 14 February 2013.
The problem of multiagent learning (MAL) concerns how agents can learn and adapt in the presence of other agents that are simultaneously adapting. The problem is often studied in the stylized setting of repeated matrix games. The goal of this thesis is to develop MAL algorithms for this setting that achieve a set of objectives not previously attained. The thesis makes three main contributions.
The first main contribution proposes a novel MAL algorithm, Convergence with Model Learning and Safety (or CMLeS), that is the first to achieve three objectives simultaneously: (1) it converges to a Nash equilibrium joint policy in self-play; (2) it achieves close to the best response when interacting with a set of memory-bounded agents whose memory size is at most a known bound; and (3) it ensures an individual return very close to its security value when interacting with any other set of agents.
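To make the memory-bounded setting concrete, here is a minimal Python sketch (not CMLeS itself; the 2x2 payoffs, the memory bound K, and the sample threshold are all hypothetical). It fits a model of an opponent whose behavior depends only on the last K joint actions, best-responds to that model, and falls back to the maximin (security) action when data is scarce, in the spirit of objectives (2) and (3).

```python
from collections import Counter, defaultdict

# Hypothetical payoff matrix for the learner (row player): PAYOFF[mine][theirs].
PAYOFF = [[3.0, 0.0],
          [5.0, 1.0]]
ACTIONS = [0, 1]
K = 2  # assumed known upper bound on the opponent's memory size

def maximin_action():
    """Security action: maximize the worst-case payoff over opponent actions."""
    return max(ACTIONS, key=lambda a: min(PAYOFF[a][b] for b in ACTIONS))

class MemoryBoundedModeler:
    """Fit an order-K model of the opponent, then best-respond to it."""
    def __init__(self):
        self.counts = defaultdict(Counter)  # history -> opponent action counts
        self.history = ()                   # last K joint actions

    def act(self):
        seen = self.counts[self.history]
        if sum(seen.values()) < 5:          # too few samples: play safe
            return maximin_action()
        predicted = seen.most_common(1)[0][0]
        return max(ACTIONS, key=lambda a: PAYOFF[a][predicted])

    def observe(self, my_action, opp_action):
        self.counts[self.history][opp_action] += 1
        self.history = (self.history + ((my_action, opp_action),))[-K:]
```

Against, say, a memory-1 opponent, such a modeler converges to the best response once each reachable history has been visited enough times; the sample threshold governs how quickly it leaves the safe maximin behavior.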
The second main contribution proposes another novel MAL algorithm, Joint Optimization against Markovian Agents (or Joma), that models a significantly more general class of agent behavior, Markovian agents, which subsumes the class of memory-bounded agents. Joma achieves two objectives: (1) it achieves a joint return very close to the social-welfare-maximizing joint return when interacting with Markovian agents; and (2) it ensures an individual return very close to its security value when interacting with any other set of agents.
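As an illustration of what "Markovian agent" means here, the toy sketch below (hypothetical payoffs; not Joma, which must also learn the model from samples) represents the opponent by an internal state updated by each joint action, and plans against a known such model by value iteration on the induced MDP, using the sum of both players' payoffs, i.e., social welfare, as the reward.

```python
ACTIONS = [0, 1]
# Hypothetical payoffs for joint action (my_action, opp_action):
R_ME  = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}
R_OPP = {(0, 0): 3, (0, 1): 5, (1, 0): 0, (1, 1): 1}

class MarkovianOpponent:
    """An agent whose action depends on an internal state updated by the
    last joint action; a memory-bounded agent is the special case whose
    state is the tuple of the last K joint actions."""
    def __init__(self, policy, transition, state):
        self.policy = policy          # state -> opponent action
        self.transition = transition  # (state, joint_action) -> next state
        self.state = state

def plan_social_welfare(states, policy, transition, gamma=0.9, iters=200):
    """Value iteration on the MDP induced over the opponent's states, with
    the sum of both players' payoffs (social welfare) as the reward."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        for s in states:
            V[s] = max(
                R_ME[(a, policy(s))] + R_OPP[(a, policy(s))]
                + gamma * V[transition(s, (a, policy(s)))]
                for a in ACTIONS)
    def act(s):  # greedy joint-welfare policy for the learner
        return max(ACTIONS, key=lambda a:
                   R_ME[(a, policy(s))] + R_OPP[(a, policy(s))]
                   + gamma * V[transition(s, (a, policy(s)))])
    return act
```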
Finally, the third main contribution shows how a key subroutine of Joma can be extended to solve a broader class of reinforcement learning problems: structure learning in factored-state MDPs.
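The subroutine itself is not reproduced in the abstract, but generic structure learning in a factored-state MDP can be sketched as follows: for each state factor, choose the parent set that best predicts its next value under a penalized-likelihood score. Binary factors and the BIC-style penalty below are illustrative assumptions, not the thesis's specific method.

```python
import itertools
import math
from collections import defaultdict

def score_parent_set(transitions, factor, parents):
    """BIC-style score of 'the next value of `factor` depends only on the
    current values of `parents`', from (state, next_state) samples.
    States are tuples of binary factor values."""
    counts = defaultdict(lambda: [0, 0])
    for s, s2 in transitions:
        key = tuple(s[p] for p in parents)
        counts[key][s2[factor]] += 1
    ll = 0.0
    for c0, c1 in counts.values():
        n = c0 + c1
        for c in (c0, c1):
            if c:
                ll += c * math.log(c / n)   # multinomial log-likelihood
    # One free parameter per observed parent context (binary factor):
    penalty = 0.5 * math.log(len(transitions)) * len(counts)
    return ll - penalty

def learn_parents(transitions, n_factors, max_parents=2):
    """Pick, for each factor, the parent set with the best penalized score."""
    structure = {}
    for f in range(n_factors):
        candidates = [ps for k in range(max_parents + 1)
                      for ps in itertools.combinations(range(n_factors), k)]
        structure[f] = max(candidates,
                           key=lambda ps: score_parent_set(transitions, f, ps))
    return structure
```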
All of the algorithms presented in this thesis are backed by rigorous theoretical analysis, including sample complexity analysis wherever applicable, as well as representative empirical tests.
2. Improving Convergence Rates in Multiagent Learning Through Experts and Adaptive Consultation. Hines, Greg. January 2007.
Multiagent learning (MAL) is the study of agents that learn in the presence of other agents that are also learning. As a field, MAL builds on work in both artificial intelligence and game theory. Game theory has mostly focused on proving that certain theoretical properties hold for wide classes of learning situations while ignoring computational issues, whereas artificial intelligence has mainly focused on designing practical multiagent learning algorithms for small classes of games.
This thesis seeks a balance between the game-theoretic and artificial-intelligence approaches. We introduce a new learning algorithm, FRAME, which provably converges to the set of Nash equilibria in self-play while consulting experts that can greatly improve the rate of convergence to that set. Even if the experts are poorly suited to the learning problem, or outright hostile, FRAME still provably converges. Our second contribution takes this idea further by allowing agents to consult multiple experts and adapt dynamically so that the best expert for the given game is consulted. The result is a flexible algorithm capable of dealing with new and unknown games. Experimental results validate our approach.
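The mechanism that makes expert advice safe can be illustrated with a small wrapper. This is a sketch under assumptions, not FRAME itself: `base` is presumed to carry the convergence guarantee on its own, both objects expose a hypothetical `act(obs)` method, and the slack schedule is invented. The idea shown: follow the expert only while its running average reward stays within a shrinking slack of the base learner's, so a useless or hostile expert is eventually cut off.

```python
class SafeExpertWrapper:
    """Follow the expert while its running average reward stays within a
    shrinking slack of the base learner's; otherwise revert to the base
    learner, which is assumed to carry the convergence guarantee."""
    def __init__(self, base, expert, slack=1.0, decay=0.995):
        self.base, self.expert = base, expert
        self.slack, self.decay = slack, decay
        self.avg = {"base": 0.0, "expert": 0.0}   # running average rewards
        self.n = {"base": 0, "expert": 0}
        self.last = "base"

    def act(self, obs):
        ok = (self.n["expert"] == 0 or
              self.avg["expert"] >= self.avg["base"] - self.slack)
        self.last = "expert" if ok else "base"
        return (self.expert if ok else self.base).act(obs)

    def update(self, reward):
        k = self.last
        self.n[k] += 1
        self.avg[k] += (reward - self.avg[k]) / self.n[k]
        self.slack *= self.decay  # tolerance shrinks: a bad expert is eventually cut
```

In this picture, the thesis's second contribution corresponds to keeping one such budget per expert and consulting whichever expert currently looks best for the game at hand.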
3. Semi-Cooperative Learning in Smart Grid Agents. Reddy, Prashant P. 01 December 2013.
Striving to reduce the environmental impact of our growing energy demand creates tough new challenges in how we generate and use electricity. We need to develop Smart Grid systems in which distributed sustainable energy resources are fully integrated and energy consumption is efficient. Customers, i.e., consumers and distributed producers, require agent technology that automates much of their decision-making so that they can become active participants in the Smart Grid.

This thesis develops models and learning algorithms for such autonomous agents in an environment where customers operate in modern retail power markets and thus have a choice of intermediary brokers with whom they can contract to buy or sell power. In this setting, customers face a learning and multiscale decision-making problem: they must manage contracts with one or more brokers and simultaneously, on a finer timescale, manage their consumption or production levels under existing contracts. On a contextual scale, they can optimize their isolated self-interest or consider their shared goals with other agents.

We advance the idea that a Learning Utility Management Agent (LUMA), or a network of such agents, deployed on behalf of a Smart Grid customer can autonomously address that customer's multiscale decision-making responsibilities. We study several relationships between a given LUMA and other agents in the environment. These relationships are semi-cooperative, and the degree of expected cooperation can change dynamically with the evolving state of the world. We exploit the multiagent structure of the problem to control the degree of partial observability. Since a large portion of relevant hidden information is visible to the other agents in the environment, we develop methods for Negotiated Learning, whereby a LUMA can offer incentives to other agents to obtain information that sufficiently reduces its own uncertainty, while trading off the cost of offering those incentives.

The thesis first introduces pricing algorithms for autonomous broker agents, time series forecasting models for long-range simulation, and capacity optimization algorithms for multi-dwelling customers. We then introduce Negotiable Entity Selection Processes (NESPs) as a formal representation in which partial observability is negotiable amongst certain classes of agents. We then develop our ATTRACTION-BOUNDED-LEARNING algorithm, which leverages the variability of hidden information for efficient multiagent learning, and apply it to the variable-rate tariff selection and capacity aggregate management problems faced by Smart Grid customers. We evaluate the work on real data using Power TAC, an agent-based Smart Grid simulation platform, and substantiate the value of autonomous Learning Utility Management Agents in the Smart Grid.
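To give a flavor of the Negotiated Learning idea, here is a deliberately simplified sketch. It is not the ATTRACTION-BOUNDED-LEARNING algorithm; the tariff value range, the ±1.0 observation noise, and the `peers[t].report(...)` interface are all hypothetical. The point it illustrates: the agent keeps lower and upper "attraction" bounds on each tariff's value and pays a peer for an observation only when the hidden information could actually change its choice.

```python
class NegotiatedTariffSelector:
    """Keep [lower, upper] value bounds per tariff; buy information from a
    peer only when a rival tariff's upper bound beats the current leader's
    lower bound by more than the information costs. Assumes >= 2 tariffs."""
    def __init__(self, tariffs, info_cost=0.5):
        self.bounds = {t: [0.0, 10.0] for t in tariffs}
        self.info_cost = info_cost

    def choose(self, peers):
        best = max(self.bounds, key=lambda t: self.bounds[t][0])
        rival = max((t for t in self.bounds if t != best),
                    key=lambda t: self.bounds[t][1])
        gap = self.bounds[rival][1] - self.bounds[best][0]
        if gap > self.info_cost:                 # uncertainty is worth paying for
            value = peers[rival].report(self.info_cost)  # incentivized disclosure
            lo, hi = self.bounds[rival]
            self.bounds[rival] = [max(lo, value - 1.0), min(hi, value + 1.0)]
        return max(self.bounds, key=lambda t: self.bounds[t][0])
```

The design choice being illustrated is the trade-off named in the abstract: information is bought only when its potential to change the decision exceeds the cost of the incentive offered for it.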