1 |
The use of organizational self-design to coordinate multiagent systems. Kamboj, Sachin. January 2010 (has links)
Thesis (Ph.D.)--University of Delaware, 2009. / Principal faculty advisor: Keith S. Decker, Dept. of Computer & Information Sciences. Includes bibliographical references.
|
2 |
A controller operator graph model for cooperative multi-agent systems. Carter, Steven Andrew. 03 June 2010 (has links)
M.Sc. (Computer Science) / Agent technology has become more common in mainstream applications because it allows systems to perform routine operations without input from human users. The continuing evolution of the internet and the increasingly distributed nature of commercial interests, such as bidding auctions, personal shopping assistants and corporate management systems, require software in a distributed environment to be capable of acting autonomously. Multi-agent systems have emerged to deal with these distributed environments, which can range in size from a few agents to an arbitrarily large number of agents [Vla03, Rus03].
In a cooperative multi-agent system it is important for agents to coordinate effectively. Strategic game theory provides a means of coordination through social conventions and roles [Vla03], both of which simplify the coordination problem between agents when performing coordination actions. A further simplification is to use coordination graphs, which reduce the number of agents in the environment that must be considered for a coordination action [Gue02]. Communication extends the ability of agents to coordinate with one another: rather than having to determine the state of participating agents by inspection, an agent can request the state of another agent through a communication action. Communication does, however, require an additional level of management, since agents are rarely allowed to communicate freely and the communication language is not always guaranteed to be standard between agents [Cha02, Vla03].
The dissertation covers the background of multi-agent systems and focuses on the elements that are unique to these systems, such as coordination, communication and methods for representing knowledge structures. A model is then proposed as a framework in which scalable populations of agents can coordinate when only limited knowledge is available about other agents in the environment. The model, called the Controller Operator Graph (COG) model, introduces two special agent types that help coordinate a large population of agents and that assist with communication and coordination. The COG model is designed to help agents coordinate in a dynamic environment by providing mechanisms to monitor the agent population and goal states. The operator agent is responsible for maintaining communication links between agents and provides the ability to monitor a population of agents for the multi-agent system. The controller agent is responsible for ensuring that coordination actions are performed between agents that have no prior knowledge of one another, and it provides a means of handling dynamic situations in which the coordination actions can be extended beyond the original requirements.
An implementation of the COG model is provided using a supply chain scenario that compares increasing agent populations. The implementation uses Unified Modelling Language diagrams to show how the different concepts in the COG model, such as the execution tree, the controller agent and the operator agent, can be designed and developed. It demonstrates the strengths of the COG model, namely handling dynamic environments and achieving dynamic goal states for the environment, and also indicates some of its weaknesses, such as greedy agent selection by the controller agent and single points of failure.
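As an illustration of the coordination-graph idea cited in this abstract [Gue02] (not of the COG model itself), the following is a minimal Python sketch in which a global payoff decomposes into edge-local payoffs, so each agent only needs to coordinate with its neighbours in the graph. All agent names, actions and payoff values are hypothetical.

```python
# Coordination-graph sketch: the global payoff is a sum of local payoff
# tables defined over edges of a graph, so coordination only couples
# neighbouring agents. Values below are illustrative only.
from itertools import product

# local_payoff[(i, j)][(a_i, a_j)] = contribution of edge (i, j) to the global payoff.
local_payoff = {
    ("A", "B"): {("lift", "lift"): 5, ("lift", "wait"): 0,
                 ("wait", "lift"): 0, ("wait", "wait"): 1},
    ("B", "C"): {("lift", "push"): 4, ("lift", "wait"): 1,
                 ("wait", "push"): 0, ("wait", "wait"): 1},
}
actions = {"A": ["lift", "wait"], "B": ["lift", "wait"], "C": ["push", "wait"]}
agents = list(actions)

def global_payoff(joint):
    """Sum the edge-local payoffs for a joint action (dict: agent -> action)."""
    return sum(table[(joint[i], joint[j])] for (i, j), table in local_payoff.items())

# Exhaustive search over joint actions; in practice variable elimination
# exploits the graph structure to avoid this enumeration.
best = max(
    (dict(zip(agents, combo)) for combo in product(*(actions[a] for a in agents))),
    key=global_payoff,
)
print(best, global_payoff(best))
```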
|
3 |
An upgradeable agent-based model to explore non-linearity and intangibles in peacekeeping operations. Lehmann, Wolfgang. 06 1900 (has links)
Peacekeeping operations (PKO) have become a significant challenge for the German Armed Forces. For combat operations, agent-based models have been developed, used and exploited for many years to support the development of tactics, techniques, procedures and equipment. Modeling and simulation of PKO, however, is still at a very early stage. This thesis develops an agent-based model to analyze PKO. Unlike many other multi-agent systems (MAS), it implements the rules of discrete event simulation. The chosen software architecture makes the model upgradeable and useful for a breadth of future applications. The model's open architecture and the underlying principle of loosely coupled components make it easy to change or enhance the model. The software agents' design incorporates individuality, which is characterized by personality factors. Furthermore, the model is data-farmable. Required data inputs into the simulation tool, i.e., PKO scenarios, are formatted using a state-of-the-art technology called Extensible Markup Language (XML), which facilitates use of the data in nearly all computer software packages. The model executes multiple runs of multiple scenarios automatically, demonstrating a robust nature. Finally, an exemplary analysis demonstrates data-farming concepts on the effect of personality factor settings on the potential escalation of a PKO scenario. / German Army author.
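A minimal sketch of the kind of XML scenario input described here, in which agents with personality factors are read from a scenario file so that many variants can be generated and data-farmed automatically. The element and attribute names below are hypothetical, not the thesis's schema.

```python
# Parse a toy XML PKO scenario into agent records with personality factors.
import xml.etree.ElementTree as ET

scenario_xml = """
<scenario name="checkpoint-demo">
  <agent id="peacekeeper-1" side="blue">
    <personality aggression="0.2" fear="0.4" obedience="0.9"/>
  </agent>
  <agent id="civilian-1" side="neutral">
    <personality aggression="0.6" fear="0.7" obedience="0.3"/>
  </agent>
</scenario>
"""

def load_agents(xml_text):
    """Return a list of agent dicts, each with a personality-factor mapping."""
    root = ET.fromstring(xml_text)
    agents = []
    for node in root.findall("agent"):
        traits = node.find("personality").attrib
        agents.append({
            "id": node.get("id"),
            "side": node.get("side"),
            "personality": {k: float(v) for k, v in traits.items()},
        })
    return agents

for agent in load_agents(scenario_xml):
    print(agent["id"], agent["personality"])
```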
|
4 |
Models of argument for deliberative dialogue in complex domains. Toniolo, Alice. January 2013 (has links)
In dynamic multiagent systems, self-motivated agents pursuing individual goals may interfere with each other's plans. Agents must, therefore, coordinate their plans to resolve dependencies among them. This drives the need for agents to engage in dialogue to decide what to do in collaboration. Agreeing what to do is a complex activity, however, when agents come to an encounter with different objectives and norm expectations (i.e. societal norms that constrain acceptable behaviour). Argumentation-based models of dialogue support agents in deciding what to do by analysing the pros and cons of decisions, and enable conflict resolution by revealing structured background information that facilitates the identification of acceptable solutions. Existing models of deliberative dialogue, however, commonly assume that agents have a shared goal, and to date their effectiveness has been shown only through extended examples. In this research, we propose a novel model of argumentation schemes, to be integrated in a dialogue, for the identification of plan, goal and norm conflicts when agents have individual but interdependent objectives. We empirically evaluate our model within a dynamic system to establish how the information shared through argumentation schemes influences dialogue outcomes. We show that by employing our model of arguments in dialogue, agents achieve more successful agreements. The resolution of conflicts and the identification of more feasible interdependent plans are achieved through the sharing of focussed information driven by argumentation schemes. Agents may also consider more important conflicts, that is, conflicts that cause a higher loss of utility if unresolved. We explore the use of strategies for agents to select arguments that are more likely to resolve important conflicts.
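To make the notion of an argumentation scheme concrete, here is a minimal sketch of one as a data structure: a named pattern of premises leading to a conclusion, together with critical questions that other agents can pose to attack the argument. The scheme shown is a simplified argument from goal to action, illustrative only and not the formalisation used in the thesis.

```python
# A toy representation of an argumentation scheme with critical questions.
from dataclasses import dataclass, field

@dataclass
class ArgumentationScheme:
    name: str
    premises: list[str]
    conclusion: str
    critical_questions: list[str] = field(default_factory=list)

practical_reasoning = ArgumentationScheme(
    name="argument from goal to action",
    premises=[
        "Agent A has goal G",
        "Performing action P in the current circumstances achieves G",
    ],
    conclusion="Agent A should perform action P",
    critical_questions=[
        "Does an alternative action also achieve G?",
        "Does performing P conflict with another agent's plan or goal?",
        "Does performing P violate a norm constraining acceptable behaviour?",
    ],
)
print(practical_reasoning.conclusion)
```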
|
5 |
Scaling multiagent reinforcement learning / Proper, Scott. January 1900 (has links)
Thesis (Ph. D.)--Oregon State University, 2010. / Printout. Includes bibliographical references (leaves 121-123). Also available on the World Wide Web.
|
6 |
Sample efficient multiagent learning in the presence of Markovian agents. Chakraborty, Doran. 14 February 2013 (has links)
The problem of multiagent learning (or MAL) is concerned with the study of how agents can learn and adapt in the presence of other agents that are simultaneously adapting. The problem is often studied in the stylized settings provided by repeated matrix games. The goal of this thesis is to develop MAL algorithms for such a setting that achieve a new set of objectives which have not been previously achieved. The thesis makes three main contributions.
The first main contribution proposes a novel MAL algorithm, called Convergence with Model Learning and Safety (or CMLeS), that is the first to achieve the following three objectives: (1) converges to following a Nash equilibrium joint-policy in self-play; (2) achieves close to the best response when interacting with a set of memory-bounded agents whose memory size is upper bounded by a known value; and (3) ensures an individual return that is very close to its security value when interacting with any other set of agents.
The second main contribution proposes another novel MAL algorithm that models a significantly more complex class of agent behavior called Markovian agents, that subsumes the class of memory-bounded agents. Called Joint Optimization against Markovian Agents (or Joma), it achieves the following two objectives: (1) achieves a joint-return very close to the social welfare maximizing joint-return when interacting with Markovian agents; (2) ensures an individual return that is very close to its security value when interacting with any other set of agents.
Finally, the third main contribution shows how a key subroutine of Joma can be extended to solve a broader class of problems pertaining to Reinforcement Learning, called "Structure Learning in factored state MDPs".
All of the algorithms presented in this thesis are backed by rigorous theoretical analysis, including an analysis of sample complexity wherever applicable, as well as representative empirical tests. / text
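A toy sketch of the memory-bounded-opponent setting described above: in a repeated matrix game, the opponent's action depends only on the last k joint actions, so the learner can estimate a model conditioned on that history and best-respond to the predicted next action. This is illustrative only and is not the CMLeS or Joma algorithm (which, among other things, plan in the Markov decision process induced by the opponent model rather than responding myopically); the payoff matrix and opponent policy are invented.

```python
# Model-based learner against a memory-bounded opponent in a repeated matrix game.
from collections import defaultdict

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}  # row player's payoff
ACTIONS = ["C", "D"]
K = 1  # opponent's memory size, assumed known (as in objective (2) of CMLeS)

# Laplace-smoothed counts of the opponent's action given the last K joint actions.
counts = defaultdict(lambda: {a: 1 for a in ACTIONS})

def opponent(history):
    """A memory-1 opponent: repeat the learner's previous action (tit-for-tat-like)."""
    return history[-1][0] if history else "C"

def best_response(history):
    """Myopically best-respond to the predicted next opponent action."""
    key = tuple(history[-K:])
    total = sum(counts[key].values())
    predicted = {a: counts[key][a] / total for a in ACTIONS}
    return max(ACTIONS, key=lambda mine: sum(p * PAYOFF[(mine, theirs)]
                                             for theirs, p in predicted.items()))

history, total_return = [], 0
for _ in range(200):
    key = tuple(history[-K:])
    mine, theirs = best_response(history), opponent(history)
    counts[key][theirs] += 1          # update the opponent model
    total_return += PAYOFF[(mine, theirs)]
    history.append((mine, theirs))
print("average return per round:", total_return / 200)
```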
|
7 |
LMI conditions for robust consensus of uncertain nonlinear multi-agent systems. Han, Dongkun (韓東昆). January 2014 (has links)
Establishing consensus is a key problem in multi-agent systems (MASs). This thesis proposes a novel methodology based on convex optimization in the form of linear matrix inequalities (LMIs) for establishing consensus in linear and nonlinear MASs in the presence of model uncertainties, i.e., robust consensus.
Firstly, this thesis investigates robust consensus for uncertain MASs with linear dynamics. Specifically, it is supposed that the system is described by a weighted adjacency matrix whose entries are generic polynomial functions of an uncertain vector constrained in a set described by generic polynomial inequalities. For continuous-time dynamics, necessary and sufficient conditions are proposed to ensure robust first-order consensus and robust second-order consensus, in both cases of positive and non-positive weighted adjacency matrices. For discrete-time dynamics, necessary and sufficient conditions are provided for robust consensus based on the existence of a Lyapunov function polynomially dependent on the uncertainty. In particular, an upper bound on the degree required for achieving necessity is provided. Furthermore, a necessary and sufficient condition is provided for robust consensus with single-integrator dynamics and nonnegative weighted adjacency matrices based on the zeros of a polynomial. Lastly, it is shown how these conditions can be checked through convex optimization by exploiting LMIs.
Secondly, local and global consensus are considered in MASs with intrinsically nonlinear dynamics, with respect to bounded solutions such as equilibrium points, periodic orbits and chaotic orbits. For local consensus, a method is proposed based on the transformation of the original system into an uncertain polytopic system and on the use of homogeneous polynomial Lyapunov functions (HPLFs). For global consensus, another method is proposed based on the search for a suitable polynomial Lyapunov function (PLF). In addition, robust local consensus is considered in MASs with time-varying parametric uncertainties constrained in a polytope. Again using HPLFs, a new criterion is proposed in which the original system is suitably approximated by an uncertain polytopic system. Tractable conditions are hence provided in terms of LMIs. The polytopic consensus margin problem is then proposed and investigated via generalized eigenvalue problems (GEVPs).
Lastly, this thesis investigates the robust consensus problem for polynomial nonlinear systems affected by time-varying uncertainties on the topology, i.e., structured uncertain parameters constrained in a bounded-rate polytope. Via partial contraction analysis, novel conditions for both robust exponential consensus and robust asymptotic consensus are proposed using parameter-dependent contraction matrices. In addition, for polynomial nonlinear systems, the thesis introduces a new class of contraction matrix, the homogeneous parameter-dependent polynomial contraction matrix (HPD-PCM), for which tractable LMI conditions are provided via affine space parametrizations. Furthermore, the variant rate margin for robust asymptotic consensus is proposed and investigated by handling generalized eigenvalue problems (GEVPs).
For each section, a set of representative numerical examples is presented to demonstrate the effectiveness of the proposed results. / published_or_final_version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
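As a schematic of the standard ingredients behind LMI conditions of this kind (the textbook form for first-order continuous-time consensus with an uncertain weighted adjacency matrix, not the thesis's exact conditions):

```latex
% Schematic only; notation and exact conditions in the thesis differ.
\[
  \dot{x}_i \;=\; \sum_{j=1}^{n} a_{ij}(\theta)\,\bigl(x_j - x_i\bigr)
  \qquad\Longleftrightarrow\qquad
  \dot{x} \;=\; -L(\theta)\,x,
\]
where $L(\theta)$ is the Laplacian of the uncertain weighted adjacency matrix
$A(\theta)$ and $\theta$ ranges over the uncertainty set $\Theta$. Robust
consensus follows if there exists a parameter-dependent Lyapunov matrix
$P(\theta) \succ 0$ (for instance, polynomial in $\theta$) such that
\[
  L(\theta)^{\top} P(\theta) + P(\theta)\,L(\theta) \;\succ\; 0
  \quad \text{on the disagreement subspace, for all } \theta \in \Theta,
\]
and such parameter-dependent conditions can be reduced to a finite family of
LMIs through standard relaxations (e.g.\ sum of squares).
```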
|
8 |
Agent based modeling for supply chain management: examining the impact of information sharing / Zhu, Xiaozhou. January 2008 (has links)
Thesis (Ph.D.)--Kent State University, 2008. / Title from PDF t.p. (viewed April 16, 2010). Advisor: Marvin Troutt. Keywords: ABM; agent; repast; information sharing. Includes bibliographical references (p. 161-179).
|
9 |
An agent-based co-operative preference model. Jayousi, Rashid. January 2003 (has links)
No description available.
|
10 |
Combining coordination mechanisms to improve performance in multi-robot teams. Nasroullahi, Ehsan. 09 March 2012 (has links)
Coordination is essential to achieving good performance in cooperative multiagent systems. To date, most work has focused on either implicit or explicit coordination mechanisms, while relatively little work has examined the benefits of combining the two approaches. In this work we demonstrate that combining explicit and implicit mechanisms can significantly improve coordination and system performance over either approach individually. First, we use difference evaluations (which aim to compute an agent's contribution to the team) and stigmergy to promote implicit coordination. Second, we introduce an explicit coordination mechanism dubbed Intended Destination Enhanced Artificial State (IDEAS), in which an agent incorporates other agents' intended destinations directly into its state. The IDEAS approach does not require any formal negotiation between agents and is based on passive information sharing. Finally, we combine the two approaches on a variant of a team-based multi-robot exploration domain, and show that agents using both explicit and implicit coordination outperform other learning agents by up to 25%. / Graduation date: 2012
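For reference, difference evaluations in the multiagent learning literature typically take the following form (the exact variant used in this thesis may differ):

```latex
\[
  D_i(z) \;=\; G(z) \;-\; G\!\left(z_{-i} + c_i\right),
\]
where $G$ is the global (team) evaluation, $z$ is the joint state--action,
$z_{-i}$ is the system with agent $i$ removed, and $c_i$ is an optional
counterfactual term replacing agent $i$'s contribution; $D_i$ isolates agent
$i$'s contribution to the team objective while remaining aligned with it.
```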
|