41. Debugging Multi-Agent Systems With Design Documents
Poutakidis, David Alexander (davpout@cs.rmit.edu.au), January 2008
Debugging multi-agent systems, which are concurrent, distributed, and composed of complex components, is difficult yet crucial. The development of these systems is supported by agent-oriented software engineering methodologies that use agents as the central design metaphor. The resulting systems are inherently complex because their components may interact in flexible and sophisticated ways, so traditional debugging techniques are not appropriate. Despite this, little effort has been applied to developing appropriate debugging tools and techniques. Debugging multi-agent systems without good debugging tools is highly impractical, and without suitable debugging support, developing and maintaining multi-agent systems will be more difficult than it needs to be. In this thesis we propose that the debugging process can be supported by following an agent-oriented design methodology and then using the resulting design artifacts in the debugging phase. We propose a domain-independent debugging framework comprising the processes and components necessary to use design artifacts as debugging artifacts. Our approach is to take a non-formal design artifact, such as an AUML protocol design, and encode it in a machine-interpretable manner so that the design can be used as a model of correct system behaviour. These models are used by a run-time debugging system to compare observed behaviour against specified behaviour. We provide details for transforming two design artifact types into equivalent debugging artifacts and show how these can be used to detect bugs. When a bug is identified during a debugging episode, our approach provides detailed information about its possible cause. To determine whether this information helps developers debug programs, we undertook a thorough empirical study and found that use of the debugging tool translated into improved debugging performance. We conclude that the debugging techniques developed in this thesis provide effective debugging support for multi-agent systems; because the framework is extensible, new design artifacts can be explored and, as translations are developed, added to the debugging system.
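The core mechanism can be sketched minimally: a protocol design is encoded as a machine-interpretable model (here a finite state machine, with states and message types invented for illustration rather than taken from the thesis's actual AUML encoding), and a run-time monitor replays observed messages against it to flag deviations.

```python
# Illustrative protocol monitor: an interaction protocol encoded as a finite
# state machine and used at run time to compare observed behaviour against
# specified behaviour. States and message types here are hypothetical.
PROTOCOL = {                       # (state, message type) -> next state
    ("start",     "request"): "requested",
    ("requested", "agree"):   "agreed",
    ("requested", "refuse"):  "done",
    ("agreed",    "inform"):  "done",
}

def monitor(conversation):
    """Replay an observed message sequence against the protocol model."""
    state = "start"
    for msg in conversation:
        if (state, msg) not in PROTOCOL:
            return f"bug: unexpected '{msg}' in state '{state}'"
        state = PROTOCOL[(state, msg)]
    return "ok" if state == "done" else f"bug: conversation stalled in '{state}'"

print(monitor(["request", "agree", "inform"]))  # ok
print(monitor(["request", "inform"]))           # bug: unexpected 'inform' ...
```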
42. All learning is local: Multi-agent learning in global reward games
Chang, Yu-Han; Ho, Tracey; Kaelbling, Leslie P.
In large multiagent games, partial observability, coordination, and credit assignment persistently plague attempts to design good learning algorithms. We provide a simple and efficient algorithm that in part uses a linear system to model the world from a single agent's limited perspective, and takes advantage of Kalman filtering to allow an agent to construct a good training signal and effectively learn a near-optimal policy in a wide variety of settings. A sequence of increasingly complex empirical tests verifies the efficacy of this technique.
Singapore-MIT Alliance (SMA)
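The trick can be pictured with a scalar toy model (an illustration in the spirit of the paper, not its exact formulation): the global reward is the agent's own reward plus a term contributed by all other agents, that term is modelled as a random walk, and a Kalman filter tracks it so the agent can subtract it out to obtain a personal training signal.

```python
import random

# Toy model: the agent observes only a global reward  g = r + b,  where r is
# its own (unobservable) reward and b, the other agents' contribution, drifts
# as a random walk. The filter treats r as observation noise, so b_hat tracks
# b plus the mean of r; subtracting b_hat still centres the training signal.
Q, R = 0.01, 1.0      # assumed process / observation noise variances
b_hat, P = 0.0, 1.0   # estimate of the others' term and its variance

def personal_signal(g):
    """Filter one global-reward observation; return the corrected signal."""
    global b_hat, P
    P += Q                     # predict: the others' term drifts
    K = P / (P + R)            # Kalman gain
    b_hat += K * (g - b_hat)   # update using the global reward as observation
    P *= (1 - K)
    return g - b_hat           # training signal for the local learner

b_true = 0.0
for t in range(200):
    b_true += random.gauss(0, 0.1)   # other agents' contribution drifts
    r = random.choice([0.0, 1.0])    # the agent's own reward this step
    s = personal_signal(r + b_true)
    if t >= 197:
        print(round(r, 1), "->", round(s, 3))  # s tracks r up to a constant offset
```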
43. Building Grounded Abstractions for Artificial Intelligence Programming
Hearn, Robert A., 16 June 2004
Most Artificial Intelligence (AI) work can be characterized as either "high-level" (e.g., logical, symbolic) or "low-level" (e.g., connectionist networks, behavior-based robotics). Each approach suffers from particular drawbacks. High-level AI uses abstractions that often have no relation to the way real, biological brains work. Low-level AI, on the other hand, tends to lack the powerful abstractions that are needed to express complex structures and relationships. I have tried to combine the best features of both approaches by building a set of programming abstractions defined in terms of simple, biologically plausible components. At the "ground level", I define a primitive, perceptron-like computational unit. I then show how more abstract computational units may be implemented in terms of the primitive units, and show the utility of the abstract units in sample networks. The new units make it possible to build networks using concepts such as long-term memories, short-term memories, and frames. As a demonstration of these abstractions, I have implemented a simulator for "creatures" controlled by a network of abstract units. The creatures exist in a simple 2D world, and exhibit behaviors such as catching mobile prey and sorting colored blocks into matching boxes. This program demonstrates that it is possible to build systems that can interact effectively with a dynamic physical environment, yet use symbolic representations to control aspects of their behavior.
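The layering can be given a minimal flavour in code (the unit semantics below are generic threshold units and an invented latch, not Hearn's actual primitives): a "short-term memory" abstraction is built from the perceptron-like primitive by feeding a unit's output back into its own input.

```python
# Generic threshold unit standing in for the primitive (illustrative only).
def unit(inputs, weights, threshold=0.5):
    """Primitive perceptron-like unit: weighted sum compared to a threshold."""
    return 1.0 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0.0

class ShortTermMemory:
    """Abstract unit built from the primitive: a self-loop latches a
    transient 'set' pulse until a 'clear' pulse arrives."""
    def __init__(self):
        self.state = 0.0

    def step(self, set_sig, clear_sig):
        # the unit's own output is fed back as one of its inputs (recurrence)
        self.state = unit([set_sig, self.state, clear_sig], [1.0, 1.0, -2.0])
        return self.state

m = ShortTermMemory()
print(m.step(1, 0), m.step(0, 0), m.step(0, 1))  # 1.0 1.0 0.0
```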
44. Morphologically Responsive Self-Assembling Robots
O'Grady, Rehan, 7 October 2010
We investigate the use of self-assembly in a robotic system as a means of responding to different environmental contingencies. Self-assembly is the mechanism through which agents in a multi-robot system autonomously form connections with one another to create larger composite robotic entities. Initially, we consider a simple response mechanism that uses stochastic self-assembly without any explicit control over the resulting morphology: the robots self-assemble into a larger, randomly shaped composite entity if the task they encounter is beyond the physical capabilities of a single robot. We present distributed behavioural control that enables a group of robots to make this collective decision about when and if to self-assemble in the context of a hill crossing task. In a series of real-world experiments, we analyse the effect of different distributed timing and decision strategies on system performance. Outside of a task execution context, we present fully decentralised behavioural control capable of creating periodically repeating global morphologies. We then show how arbitrary morphologies can be generated by abstracting our behavioural control into a morphology control language and adding symbolic communication between connected agents. Finally, we integrate our earlier distributed response mechanism into the morphology control language. We run simulated and real-world experiments to demonstrate a self-assembling robotic system that can respond to varying environmental contingencies by forming different appropriate morphologies.
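The collective when-to-assemble decision can be caricatured as follows (the thresholds and probabilities are invented for illustration and are not the controller reported in the thesis): a robot that cannot handle the task alone starts signalling, and idle robots connect with a probability that grows with the number of signallers they perceive.

```python
import random

# Toy stochastic self-assembly decision (illustrative parameters only).
HILL_ANGLE, MAX_SOLO_ANGLE = 35.0, 20.0   # degrees; hypothetical values

def step(robots):
    n_signalling = sum(r["signalling"] for r in robots)
    for r in robots:
        if r["connected"] or r["signalling"]:
            continue
        p_join = min(1.0, 0.2 * n_signalling)  # more signallers, likelier to dock
        if random.random() < p_join:
            r["connected"] = True               # forms a physical connection

robots = [{"signalling": False, "connected": False} for _ in range(6)]
if HILL_ANGLE > MAX_SOLO_ANGLE:                 # a solo attempt would fail
    robots[0]["signalling"] = True              # the failing robot recruits others
for _ in range(10):
    step(robots)
print(sum(r["connected"] for r in robots), "robots joined the assembly")
```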
45. Roadmap-Based Techniques for Modeling Group Behaviors in Multi-Agent Systems
Rodriguez, Samuel Oscar, May 2012
Simulating large numbers of agents performing complex behaviors in realistic environments is a difficult problem, with applications in robotics, computer graphics, and animation. A multi-agent system can be a useful tool for studying a range of situations in simulation in order to plan and train for actual events. Systems supporting such simulations can be used to study and train for emergency or disaster scenarios, including search and rescue, civilian crowd control, evacuation of a building, and many other training situations.
This work describes our approach to multi-agent systems, which integrates a roadmap-based approach with agent-based systems for groups of agents performing a wide range of behaviors. The system we have developed is highly customizable and allows us to study a variety of behaviors and scenarios. It is tunable in the kinds of agents that can exist and in the parameters that describe them. The agents can have any number of behaviors, which dictate how they react throughout a simulation. Aspects unique to our approach to multi-agent group behavior are the environmental encoding that the agents use when navigating and the extensive use of the roadmap in our behavioral framework. Our roadmap-based approach can encode both basic and very complex environments, including multi-level buildings, terrains, and stadiums.
In this work, we develop techniques to improve the simulation of multi-agent systems. The movement strategies we have developed can be used to validate agent movement in a simulated environment and evaluate building designs by varying portions of the environment to see the effect on pedestrian flow. The strategies we develop for searching and tracking improve the ability of agents within our roadmap-based framework to clear areas and track agents in realistic environments.
The application focus of this work is on pursuit-evasion and evacuation planning. In pursuit-evasion, one group of agents, the pursuers, attempts to find and capture another set of agents, the evaders; the evaders' goal is to avoid the pursuers. In evacuation planning, the evacuating agents attempt to find valid paths through potentially complex environments to a safe goal location determined by their environmental knowledge, while another group of agents, the directors, may attempt to guide them. These applications require the behaviors created to be tunable to a range of scenarios so they can reflect real-world reactions by agents. They also potentially require interaction and coordination between agents in order to improve the realism of the scenario being studied. These applications illustrate the scalability of our system in terms of the number of agents that can be supported, the kinds of realistic environments that can be handled, and the behaviors that can be simulated.
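The roadmap idea in miniature (the waypoint graph and edge weights below are invented for illustration): the environment is encoded once as a graph of collision-free waypoints, and every agent plans by querying that shared graph rather than the raw geometry.

```python
import heapq

# Tiny roadmap: nodes are collision-free waypoints, weighted edges are
# traversable segments. Layout and lengths are invented for illustration.
ROADMAP = {
    "lobby":  {"hall": 3.0, "stairs": 5.0},
    "hall":   {"lobby": 3.0, "stairs": 1.5, "exit": 4.0},
    "stairs": {"lobby": 5.0, "hall": 1.5, "exit": 2.0},
    "exit":   {},
}

def shortest_path(start, goal):
    """Dijkstra over the shared roadmap; every agent reuses this query."""
    frontier, seen = [(0.0, start, [start])], set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in ROADMAP[node].items():
            heapq.heappush(frontier, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

print(shortest_path("lobby", "exit"))  # (6.5, ['lobby', 'hall', 'stairs', 'exit'])
```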
46. Trust Alignment and Adaptation: Two Approaches for Talking about Trust in Multi-Agent Systems
Koster, Andrew, 5 February 2012
In open multi-agent systems, trust models are an important tool for agents to achieve effective interactions; however, trust is an inherently subjective concept, and thus for the agents to communicate about trust meaningfully, additional information is required. This thesis focuses on Trust Alignment and Trust Adaptation, two approaches for communicating about trust.
The first approach is to model the problem of communicating trust as a problem of alignment. We show that currently proposed solutions, such as common ontologies or ontology alignment methods, lead to additional problems, and propose trust alignment as an alternative. We propose to use the interactions that two agents share as a basis for learning an alignment. We model this using the mathematical framework of Channel Theory, which allows us to formalise how two agents' subjective trust evaluations are related through the interactions that support them. Because the agents do not have access to each other's trust evaluations, they must communicate; we specify relevance and consistency, two necessary properties for this communication. The receiver of the communicated trust evaluations can generalise the messages using θ-subsumption, leading to a predictive model that allows an agent to translate future communications from the same sender.
We demonstrate this alignment process in practice, using TILDE, a first-order regression algorithm, to learn an alignment, and illustrate its functioning in an example scenario. We find empirically that: (1) the difficulty of learning an alignment depends on the relative complexity of the different trust models; (2) our method outperforms other methods for trust alignment; and (3) our alignment method deals well with deception.
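The alignment step can be pictured with a far simpler learner than TILDE (the sketch below fits a plain least-squares line over shared interactions, whereas the thesis learns relational, first-order alignments): from interactions both agents witnessed, learn to translate the sender's reported trust values into the receiver's own subjective scale.

```python
# Toy trust alignment (illustrative; the thesis uses the first-order
# regression learner TILDE over relational descriptions, not this fit).
# Pairs of (sender's reported trust, receiver's own evaluation), one per
# interaction that both agents observed.
shared = [(0.9, 0.60), (0.8, 0.55), (0.4, 0.30), (0.2, 0.15), (0.6, 0.45)]

n = len(shared)
sx = sum(s for s, _ in shared); sy = sum(r for _, r in shared)
sxx = sum(s * s for s, _ in shared); sxy = sum(s * r for s, r in shared)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # least-squares fit
intercept = (sy - slope * sx) / n

def translate(reported):
    """Predict the receiver's own evaluation from a future communication."""
    return slope * reported + intercept

print(round(translate(0.7), 3))  # translated trust value from the same sender
```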
The second approach to communicating about trust is to allow agents to reason about their trust model and personalise communications to better suit the other agent's needs. Contemporary models do not allow for enough introspection into, or adaptation of, the trust model, so we present AdapTrust, a method for incorporating a computational trust model into the cognitive architecture of the agent. In AdapTrust, the agent's beliefs and goals influence the priorities between factors that are important to the trust calculation. These, in turn, define the values of the trust model's parameters, and the agent can effect changes in its computational trust model by reasoning about its beliefs and goals. This way it can proactively change its model to produce trust evaluations that are better suited to its current needs. We give a declarative formalisation of this system by integrating it into a multi-context system representation of a beliefs-desires-intentions (BDI) agent architecture. We show that three contemporary trust models can be incorporated into an agent's reasoning system using our framework.
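In spirit (the factor names, priority rules, and weighted-sum model below are invented for illustration, not AdapTrust's multi-context formalisation): the agent's current goal re-prioritises the factors its trust model weighs, so the same evidence yields different evaluations under different goals.

```python
# Illustrative only: AdapTrust derives such parameters inside a BDI agent's
# reasoning; here the goal-to-priority mapping is hard-coded for clarity.
FACTORS = ["timeliness", "quality", "honesty"]

def priorities(goal):
    """Beliefs and goals determine the priorities between trust factors."""
    if goal == "urgent_delivery":
        return {"timeliness": 0.6, "quality": 0.1, "honesty": 0.3}
    return {"timeliness": 0.1, "quality": 0.6, "honesty": 0.3}

def trust(evidence, goal):
    """Evaluate a partner: same evidence, goal-dependent model parameters."""
    w = priorities(goal)
    return sum(w[f] * evidence[f] for f in FACTORS)

evidence = {"timeliness": 0.9, "quality": 0.4, "honesty": 0.7}
print(trust(evidence, "urgent_delivery"))   # 0.79: fast partner looks trustworthy
print(trust(evidence, "careful_purchase"))  # 0.54: same partner, lower trust
```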
Subsequently, we use AdapTrust in an argumentation framework that allows agents to create a justification for their trust evaluations. Agents justify their evaluations in terms of priorities between factors, which in turn are justified by their beliefs and goals. These justifications can be communicated to other agents in a formal dialogue, and by arguing and reasoning about other agents' priorities, goals, and beliefs, an agent may adapt its trust model to provide a personalised trust recommendation for another agent. We test this system empirically and find that it performs better than the current state-of-the-art system for arguing about trust evaluations.
47. Performance Comparison of Multi-Agent Platforms in Wireless Sensor Networks
Bösch, Bernhard, January 2012
The technology for realizing wireless sensors has been available for a long time, but thanks to progress in electrical engineering such sensors can nowadays be manufactured cost-effectively and in large numbers. This availability, and the possibility of creating cooperating wireless networks consisting of such sensor nodes, has led to the rapidly growing popularity of a technology named Wireless Sensor Networks (WSN). Its disadvantage is the high complexity of programming WSN applications, a result of their distributed and embedded character. To overcome this shortcoming, software agents have been identified as a suitable programming paradigm; the agent-based approach commonly uses a middleware for the execution of the software agents. This thesis compares the performance of such agent middleware in the WSN domain. To this end, two prototype applications based on different agent models are implemented for a given set of middleware. After the implementation, measurements are extracted in various experiments, which give information about the runtime performance of every middleware in the test set. In the following analysis it is examined whether each middleware under test is suited for the implemented WSN applications. Thereupon, the results are discussed and compared with the author's expectations. Finally, a short outlook on further possible developments and improvements is presented.
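The flavour of the measurements can be sketched generically (the harness below is illustrative; the thesis benchmarks real agent middleware on sensor-node hardware, not in-process Python callables): time a repeated agent operation and report aggregate statistics per platform.

```python
import time, statistics

# Generic micro-benchmark harness (illustrative stand-in for the thesis's
# middleware experiments on real sensor nodes).
def benchmark(task, runs=100):
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        task()
        samples.append((time.perf_counter() - t0) * 1e3)  # milliseconds
    return statistics.mean(samples), statistics.stdev(samples)

def agent_task():
    sum(range(10_000))  # stand-in for e.g. an agent migration or message hop

mean_ms, sd_ms = benchmark(agent_task)
print(f"agent_task: {mean_ms:.3f} ms +/- {sd_ms:.3f} ms over 100 runs")
```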
48. A Targeting Approach To Disturbance Rejection In Multi-Agent Systems
Liu, Yining, January 2012
This thesis focuses on deadbeat disturbance rejection for discrete-time linear multi-agent systems. The multi-agent systems on which Spieser and Shams' decentralized deadbeat output regulation problem is based are extended by including disturbance agents. Specifically, we assume that there are one or more disturbance agents interacting with the plant agents in some known manner. The disturbance signals are assumed to be unmeasured and, for simplicity, constant. Control agents are introduced to interact with the plant agents, and each control agent is assigned a target plant agent. The goal is to drive the outputs of all plant agents to zero in finite time, despite the presence of the disturbances. In the decentralized deadbeat output regulation problem, two analysis schemes were introduced: targeting analysis, which is used to determine whether or not control laws can be found to regulate, not all the agents, but only the target agents; and growing analysis, which is used to determine the behaviour of all the non-target agents when the control laws are applied. In this thesis these two analyses are adapted to the deadbeat disturbance rejection problem. A new necessary condition for successful disturbance rejection is derived, namely that a control agent must be connected to the same plant agent to which a disturbance agent is connected. This result puts a bound on the minimum number of control agents and constrains the locations of control agents. Then, given the premise that both targeting and growing analyses succeed in the special case where the disturbances are all ignored, a new control approach is proposed for the linear case based on the idea of integral control and the regulation methods of Spieser and Shams. Preliminary studies show that this approach is also suitable for some nonlinear systems.
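Why integral action suits constant disturbances can be seen in a scalar toy loop (the dynamics and gains below are invented; the thesis treats networked multi-agent dynamics, not this single loop): the integrator accumulates the steady error a constant disturbance would otherwise leave behind, and its output converges to exactly cancel the disturbance.

```python
# Scalar toy (illustrative gains and dynamics, not the thesis's setting):
#   x[k+1] = x[k] + u[k] + d,  with d constant and unmeasured.
d = 0.7                       # unknown constant disturbance
x, integ = 1.0, 0.0           # plant output and integrator state
kp, ki = 0.8, 0.5             # hypothetical proportional / integral gains

for k in range(30):
    integ += x                # accumulate the residual output error
    u = -kp * x - ki * integ  # control law with integral action
    x = x + u + d             # plant update under the disturbance

print(round(x, 6))            # ~0: the integrator's output has converged to -d
```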
49. Cybernetic automata: An approach for the realization of economical cognition for multi-robot systems
Mathai, Nebu John, May 2008
The multi-agent robotics paradigm has attracted much attention due to the variety of pertinent applications that are well-served by the use of a multiplicity of agents (including space robotics, search and rescue, and mobile sensor networks). The use of this paradigm for most applications, however, demands economical, lightweight agent designs for reasons of longer operational life, lower economic cost, faster and easily-verified designs, etc.

An important contributing factor to an agent's cost is its control architecture. Due to the emergence of novel implementation technologies carrying the promise of economical implementation, we consider the development of a technology-independent specification for computational machinery. To that end, the use of cybernetics toolsets (control and dynamical systems theory) is appropriate, enabling a principled specification of robotic control architectures in mathematical terms that could be mapped directly to diverse implementation substrates.

This dissertation, hence, addresses the problem of developing a technology-independent specification for lightweight control architectures to enable robotic agents to serve in a multi-agent scheme. We present the principled design of static and dynamical regulators that elicit useful behaviors, and integrate these within an overall architecture for both single- and multi-agent control. Since the use of control theory can be limited in unstructured environments, a major focus of the work is on the engineering of emergent behavior.

The proposed scheme is highly decentralized, requiring only local sensing and no inter-agent communication. Beyond several simulation-based studies, we provide experimental results for a two-agent system, based on a custom implementation employing field-programmable gate arrays.
50. Multi-Agent Potential Field based Architectures for Real-Time Strategy Game Bots
Hagelbäck, Johan, January 2012
Real-Time Strategy (RTS) is a sub-genre of strategy games that runs in real time, typically in a war setting. The player uses workers to gather resources, which in turn are used for creating new buildings, training combat units, building upgrades, and doing research. The game is won when all buildings of the opponent(s) have been destroyed. The numerous tasks that must be handled in real time can be very demanding for a player. Computer players (bots) for RTS games face the same challenges, and must also navigate units in highly dynamic game worlds and deal with other low-level tasks such as attacking enemy units within fire range. This thesis is a compilation grouped into three parts.

The first part deals with navigation in dynamic game worlds, which can be a complex and resource-demanding task, typically solved using pathfinding algorithms. We investigate an alternative approach based on Artificial Potential Fields (APF) and show how an APF-based navigation system can be used without any need for pathfinding algorithms. In RTS games players usually have limited visibility of the game world, known as Fog of War, whereas bots often have complete visibility to aid the AI in making better decisions. We show that a multi-agent PF-based bot with limited visibility can match and even surpass bots with complete visibility in some RTS scenarios. We also show how the bot can be extended and used in a full RTS scenario with base building and unit construction.

In the second part we propose a flexible and expandable RTS game architecture that can be modified at several levels of abstraction to test different techniques and ideas. The proposed architecture is implemented in the famous RTS game StarCraft, and we show how the high-level architecture goals of flexibility and expandability can be achieved.

In the last part we present two studies related to gameplay experience in RTS games. Players usually have to select a static difficulty level when playing against computer opponents. In the first study we use a bot that can adapt its difficulty level at runtime depending on the skills of the opponent, and study how this affects the perceived enjoyment of, and variation in, playing against the bot. To create bots that are interesting and challenging for human players, a common goal is to make bots play more human-like. In the second study we asked participants to watch replays of recorded RTS games between bots and human players, and to guess, with motivation, whether each player was controlled by a human or a bot. This information was then used to identify human-like and bot-like characteristics of RTS game players.
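A potential-field navigation step in miniature (the grid, field shapes, and constants are invented for illustration, not the thesis's tuned fields): each unit sums an attractive field toward its goal and repulsive fields around obstacles, then simply moves to the neighbouring cell with the highest potential; no path is planned in advance.

```python
import math

# Toy potential-field navigation: attraction toward the goal plus repulsion
# around obstacles; each tick the unit greedily steps to the best neighbour.
GOAL = (9, 9)
OBSTACLES = [(4, 4), (5, 4), (4, 5)]

def potential(p):
    attract = -math.dist(p, GOAL)                 # higher when nearer the goal
    repel = sum(-4.0 / (0.1 + math.dist(p, o))    # strongly negative near obstacles
                for o in OBSTACLES)
    return attract + repel

def step(p):
    x, y = p
    neighbours = [(x + dx, y + dy)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    return max(neighbours, key=potential)         # greedy hill-climb on the field

pos = (0, 0)
for _ in range(30):
    if pos == GOAL:
        break
    pos = step(pos)
print(pos)  # (9, 9): the unit reaches the goal, skirting the obstacle cluster
```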