  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Autonomia de planejamento no modelo organizacional MOISE. / Planning autonomy in the MOISE organizational model.

Maia, Artur Vidal 29 October 2018 (has links)
This dissertation presents a mechanism for incorporating planning autonomy into multiagent organizational models. To this end, we propose a formal model that represents the presence or absence of planning autonomy using two types of goals: procedural and declarative. The model is implemented on the JaCaMo platform, where a case study is carried out of an organization in which agents with and without planning autonomy coexist.
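The procedural/declarative distinction at the heart of this abstract can be sketched in a few lines. This is an illustrative reading, not the thesis's formal model, and all class and field names are hypothetical: a procedural goal carries a prescribed plan, while a declarative goal names only a target state, so only the latter leaves the agent room to plan for itself.

```python
from dataclasses import dataclass


@dataclass
class ProceduralGoal:
    """The organization prescribes the plan: no planning autonomy."""
    name: str
    plan: list  # ordered actions the agent must execute as given


@dataclass
class DeclarativeGoal:
    """Only the target state is given: the agent must plan for itself."""
    name: str
    target_state: dict  # state of affairs the agent must bring about


def has_planning_autonomy(goal) -> bool:
    # An agent is autonomous w.r.t. a goal when it must derive its own plan.
    return isinstance(goal, DeclarativeGoal)


fetch = ProceduralGoal("fetch", plan=["goto(depot)", "pick(box)", "goto(base)"])
deliver = DeclarativeGoal("deliver", target_state={"box_at": "base"})
assert not has_planning_autonomy(fetch)
assert has_planning_autonomy(deliver)
```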
42

Multi-Agent systems and organizations

Kúdela, Lukáš January 2012 (has links)
Multi-agent systems (MAS) are emerging as a promising paradigm for conceptualizing, designing and implementing large-scale heterogeneous software systems. The key advantage of looking at components in such systems as autonomous agents is that as agents they are capable of flexible self-organization, instead of being rigidly organized by the system's architect. However, self-organization is like evolution: it takes a lot of time and the results are not guaranteed. More often than not, the system's architect has an idea about how the agents should organize themselves: what types of organizations they should form. In our work, we tried to solve the problem of modelling organizations and their roles in a MAS, independent of the particular agent platform on which the MAS will eventually run. First and foremost, we have proposed a metamodel for expressing platform-independent organization models. Furthermore, we have implemented the proposed metamodel for the Jade agent platform as a module extending this framework. Finally, we have demonstrated the use of our module by modelling three specific organizations: remote function invocation, arithmetic expression evaluation and sealed-bid auction. Our work shows how to separate the behaviour acquired through a role from the behaviour intrinsic to an agent. This...
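The separation the abstract describes — behaviour acquired through a role versus behaviour intrinsic to an agent — can be sketched in a platform-neutral way. The class names below are illustrative assumptions, not the thesis's metamodel or the Jade API: an agent keeps its intrinsic behaviours, and enacting a role layers the role's behaviours on top.

```python
class Role:
    """A role bundles behaviour an agent acquires on enactment."""
    def __init__(self, name, behaviours):
        self.name = name
        self.behaviours = behaviours  # behaviour name -> callable


class Organization:
    """A named collection of roles agents may enact."""
    def __init__(self, name):
        self.name = name
        self.roles = {}

    def add_role(self, role):
        self.roles[role.name] = role


class Agent:
    """Intrinsic behaviour stays on the agent; role behaviour is layered on."""
    def __init__(self, name, intrinsic=None):
        self.name = name
        self.behaviours = dict(intrinsic or {})
        self.enacted = []

    def enact(self, role):
        self.enacted.append(role.name)
        self.behaviours.update(role.behaviours)  # acquired, not intrinsic

    def perform(self, behaviour, *args):
        return self.behaviours[behaviour](*args)


# One of the three example organizations named in the abstract.
auction = Organization("sealed-bid-auction")
auction.add_role(Role("bidder", {"bid": lambda price: ("bid", price)}))

alice = Agent("alice")
alice.enact(auction.roles["bidder"])
assert alice.perform("bid", 10) == ("bid", 10)
```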
43

A Coverage Metric to Aid in Testing Multi-Agent Systems

Linn, Jane Ostergar 01 December 2017 (has links)
Models are frequently used to represent complex systems in order to test the systems before they are deployed. Some of the most complicated models are those that represent multi-agent systems (MAS), where there are multiple decision makers. Brahms is an agent-oriented language that models MAS. Three major qualities affect the behavior of these MAS models: workframes that change the state of the system, communication activities that coordinate information between agents, and the schedule of workframes. The primary existing method for testing these models is repeated simulation. Simulation is useful insofar as interesting test cases are used that enable the simulation to explore different behaviors of the model, but simulation alone cannot be fully relied upon to adequately cover the test space, especially in the case of non-deterministic concurrent systems. It takes an exponential number of simulation trials to uncover schedules that reveal unexpected behaviors. This thesis defines a coverage metric to make simulation more meaningful before verification of the model. The coverage metric is divided into three different metrics: workframe coverage, communication coverage, and schedule coverage. Each coverage metric is defined through static analysis of the system, resulting in the coverage requirements of that system. These coverage requirements are compared to the logged output of the simulation run to calculate the coverage of the system. The use of the coverage metric is illustrated in several empirical studies and explored in a detailed case study of the SATS concept (Small Aircraft Transportation System). SATS outlines the procedures aircraft follow around runways that do not have communication towers. The coverage metric quantifies the test effort, and can be used as a basis for future automated test generation and active testing.
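The comparison the abstract describes — coverage requirements from static analysis against events recovered from the simulation log — reduces to a set-intersection ratio per metric. The requirement and event names below are hypothetical stand-ins, not artifacts of Brahms or the SATS study:

```python
def coverage(required, observed):
    """Fraction of statically derived requirements exercised by a run."""
    required, observed = set(required), set(observed)
    return len(required & observed) / len(required) if required else 1.0


# Coverage requirements from static analysis of a hypothetical model ...
workframes_required = {"wf_taxi", "wf_takeoff", "wf_land", "wf_hold"}
comms_required = {("tower", "ac1"), ("ac1", "ac2")}

# ... and events recovered from one simulation run's log.
workframes_seen = {"wf_taxi", "wf_takeoff", "wf_land"}
comms_seen = {("tower", "ac1")}

report = {
    "workframe": coverage(workframes_required, workframes_seen),      # 0.75
    "communication": coverage(comms_required, comms_seen),            # 0.5
}
```

A schedule-coverage metric would follow the same shape, with interleavings of workframes as the required set.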
44

Debugging Multi-Agent Systems With Design Documents

Poutakidis, David Alexander, davpout@cs.rmit.edu.au January 2008 (has links)
Debugging multi-agent systems, which are concurrent, distributed, and consist of complex components is difficult, yet crucial. The development of these complex systems is supported by agent-oriented software engineering methodologies which utilise agents as the central design metaphor. The systems that are developed are inherently complex since the components of these systems may interact in flexible and sophisticated ways and traditional debugging techniques are not appropriate. Despite this, very little effort has been applied to developing appropriate debugging tools and techniques. Debugging multi-agent systems without good debugging tools is highly impractical and without suitable debugging support developing and maintaining multi-agent systems will be more difficult than it need be. In this thesis we propose that the debugging process can be supported by following an agent-oriented design methodology, and then using the developed design artifacts in the debugging phase. We propose a domain independent debugging framework which comprises the developed processes and components that are necessary in using design artifacts as debugging artifacts. Our approach is to take a non-formal design artifact, such as an AUML protocol design, and encode it in a machine interpretable manner such that the design can be used as a model of correct system behaviour. These models are used by a run-time debugging system to compare observed behaviour against specified behaviour. We provide details for transforming two design artifact types into equivalent debugging artifacts and show how these can be used to detect bugs. During a debugging episode in which a bug has been identified our debugging approach can provide detailed information about the possible reason for the bug occurring. 
To determine if this information was useful in helping to debug programs we undertook a thorough empirical study and identified that use of the debugging tool translated to an improvement in debugging performance. We conclude that the debugging techniques developed in this thesis provide effective debugging support for multi-agent systems and by having an extensible framework new design artifacts can be explored and as translations are developed they can be added to the debugging system.
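The core idea above — encode a non-formal design artifact such as an AUML protocol design in machine-interpretable form, then compare observed behaviour against it at run time — can be sketched as a finite-state monitor. The protocol and performative names below are a hypothetical request/agree/inform interaction in the spirit of AUML diagrams, not the thesis's actual encoding:

```python
class ProtocolMonitor:
    """Run-time check of observed messages against a protocol design
    encoded as a finite-state machine: (state, performative) -> state."""

    def __init__(self, transitions, start):
        self.transitions = transitions
        self.state = start

    def observe(self, performative):
        """Advance on a valid message; report a possible bug otherwise."""
        key = (self.state, performative)
        if key not in self.transitions:
            return f"bug: unexpected '{performative}' in state '{self.state}'"
        self.state = self.transitions[key]
        return None


# Hypothetical protocol: request, then agree/refuse, then inform.
request_protocol = {
    ("start", "request"): "requested",
    ("requested", "agree"): "agreed",
    ("requested", "refuse"): "done",
    ("agreed", "inform"): "done",
}

m = ProtocolMonitor(request_protocol, "start")
assert m.observe("request") is None          # matches the design
assert m.observe("inform") is not None       # inform before agree: flagged
```

The diagnostic string is where a real debugging system would attach the detailed explanation of why the message violates the design.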
45

All learning is local: Multi-agent learning in global reward games

Chang, Yu-Han, Ho, Tracey, Kaelbling, Leslie P. 01 1900 (has links)
In large multiagent games, partial observability, coordination, and credit assignment persistently plague attempts to design good learning algorithms. We provide a simple and efficient algorithm that in part uses a linear system to model the world from a single agent’s limited perspective, and takes advantage of Kalman filtering to allow an agent to construct a good training signal and effectively learn a near-optimal policy in a wide variety of settings. A sequence of increasingly complex empirical tests verifies the efficacy of this technique. / Singapore-MIT Alliance (SMA)
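A rough sketch of the filtering idea, as we read the abstract: from one agent's limited perspective, the observed global reward is its own contribution plus a slowly drifting term due to the other agents; a scalar Kalman filter tracking that drift as a random walk recovers a cleaner training signal. This is our simplified reading, not the paper's exact algorithm, and the parameter values are arbitrary assumptions.

```python
import random


class RewardFilter:
    """Scalar Kalman filter: estimate a random-walk 'noise' term b in the
    observed global reward, so that (global - b) approximates the part of
    the signal relevant to this agent."""

    def __init__(self, q=0.1, r=1.0):
        self.b, self.p = 0.0, 1.0   # drift estimate and its variance
        self.q, self.r = q, r       # process and observation noise

    def personal_reward(self, global_reward):
        self.p += self.q                 # predict: drift is a random walk
        k = self.p / (self.p + self.r)   # Kalman gain
        self.b += k * (global_reward - self.b)   # correct drift estimate
        self.p *= (1 - k)
        return global_reward - self.b    # filtered training signal


f = RewardFilter()
random.seed(0)
drift = 0.0
for t in range(100):
    drift += random.gauss(0, 0.1)              # other agents' contribution
    signal = f.personal_reward(1.0 + drift)    # noisy global reward
```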
46

Building Grounded Abstractions for Artificial Intelligence Programming

Hearn, Robert A. 16 June 2004 (has links)
Most Artificial Intelligence (AI) work can be characterized as either "high-level" (e.g., logical, symbolic) or "low-level" (e.g., connectionist networks, behavior-based robotics). Each approach suffers from particular drawbacks. High-level AI uses abstractions that often have no relation to the way real, biological brains work. Low-level AI, on the other hand, tends to lack the powerful abstractions that are needed to express complex structures and relationships. I have tried to combine the best features of both approaches, by building a set of programming abstractions defined in terms of simple, biologically plausible components. At the "ground level", I define a primitive, perceptron-like computational unit. I then show how more abstract computational units may be implemented in terms of the primitive units, and show the utility of the abstract units in sample networks. The new units make it possible to build networks using concepts such as long-term memories, short-term memories, and frames. As a demonstration of these abstractions, I have implemented a simulator for "creatures" controlled by a network of abstract units. The creatures exist in a simple 2D world, and exhibit behaviors such as catching mobile prey and sorting colored blocks into matching boxes. This program demonstrates that it is possible to build systems that can interact effectively with a dynamic physical environment, yet use symbolic representations to control aspects of their behavior.
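The layering the abstract describes — abstract components built only from primitive perceptron-like units — can be illustrated with a tiny example. This is a generic sketch of the idea, not the thesis's actual unit definitions: a thresholded weighted sum as the primitive, logic gates built from it, and a one-bit short-term memory built from the gates.

```python
def unit(weights, bias):
    """Primitive perceptron-like unit: thresholded weighted sum."""
    def fire(inputs):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
    return fire


# More abstract components, each made only of primitive units.
AND = unit([1, 1], -1.5)
OR = unit([1, 1], -0.5)
NOT = unit([-1], 0.5)


def latch_step(set_, reset, state):
    """One tick of a set/reset latch: a short-term memory from primitives."""
    return AND([OR([set_, state]), NOT([reset])])


assert AND([1, 1]) == 1 and AND([1, 0]) == 0
s = 0
s = latch_step(1, 0, s)   # set: remembers 1
assert s == 1
s = latch_step(0, 0, s)   # no input: holds its state
assert s == 1
s = latch_step(0, 1, s)   # reset: forgets
assert s == 0
```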
47

Morphologically Responsive Self-Assembling Robots

O'Grady, Rehan 07 October 2010 (has links)
We investigate the use of self-assembly in a robotic system as a means of responding to different environmental contingencies. Self-assembly is the mechanism through which agents in a multi-robot system autonomously form connections with one another to create larger composite robotic entities. Initially, we consider a simple response mechanism that uses stochastic self-assembly without any explicit control over the resulting morphology: the robots self-assemble into a larger, randomly shaped composite entity if the task they encounter is beyond the physical capabilities of a single robot. We present distributed behavioural control that enables a group of robots to make this collective decision about when and if to self-assemble in the context of a hill crossing task. In a series of real-world experiments, we analyse the effect of different distributed timing and decision strategies on system performance. Outside of a task execution context, we present fully decentralised behavioural control capable of creating periodically repeating global morphologies. We then show how arbitrary morphologies can be generated by abstracting our behavioural control into a morphology control language and adding symbolic communication between connected agents. Finally, we integrate our earlier distributed response mechanism into the morphology control language. We run simulated and real-world experiments to demonstrate a self-assembling robotic system that can respond to varying environmental contingencies by forming different appropriate morphologies.
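The first response mechanism above can be caricatured in a few lines: each robot tries the task alone, and a robot that fails seeds stochastic self-assembly for the rest. This is a deliberately crude sketch under assumed names and probabilities, not the thesis's distributed behavioural control.

```python
import random


def hill_response(robots, p_success_alone=0.3, seed=0):
    """Stochastic response sketch: each robot first tries the hill alone;
    the first robot to fail becomes a seed and the others connect to it,
    forming a randomly shaped composite entity."""
    rng = random.Random(seed)
    for robot in robots:
        if rng.random() >= p_success_alone:   # this robot fails alone
            return {"seed": robot,
                    "connected": [r for r in robots if r != robot]}
    return {"seed": None, "connected": []}    # everyone crossed alone


result = hill_response(["r1", "r2", "r3"])
```

In the real system the decision is collective and distributed; here a single failure triggers assembly, which is the simplest possible decision strategy.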
48

Roadmap-Based Techniques for Modeling Group Behaviors in Multi-Agent Systems

Rodriguez, Samuel Oscar 2012 May 1900 (has links)
Simulating large numbers of agents, performing complex behaviors in realistic environments is a difficult problem with applications in robotics, computer graphics and animation. A multi-agent system can be a useful tool for studying a range of situations in simulation in order to plan and train for actual events. Systems supporting such simulations can be used to study and train for emergency or disaster scenarios including search and rescue, civilian crowd control, evacuation of a building, and many other training situations. This work describes our approach to multi-agent systems which integrates a roadmap-based approach with agent-based systems for groups of agents performing a wide range of behaviors. The system that we have developed is highly customizable and allows us to study a variety of behaviors and scenarios. The system is tunable in the kinds of agents that can exist and parameters that describe the agents. The agents can have any number of behaviors which dictate how they react throughout a simulation. Aspects that are unique to our approach to multi-agent group behavior are the environmental encoding that the agents use when navigating and the extensive usage of the roadmap in our behavioral framework. Our roadmap-based approach can be utilized to encode both basic and very complex environments which include multi-level buildings, terrains and stadiums. In this work, we develop techniques to improve the simulation of multi-agent systems. The movement strategies we have developed can be used to validate agent movement in a simulated environment and evaluate building designs by varying portions of the environment to see the effect on pedestrian flow. The strategies we develop for searching and tracking improve the ability of agents within our roadmap-based framework to clear areas and track agents in realistic environments. The application focus of this work is on pursuit-evasion and evacuation planning.
In pursuit-evasion, one group of agents, the pursuers, attempts to find and capture another set of agents, the evaders. The evaders have a goal of avoiding the pursuers. In evacuation planning, the evacuating agents attempt to find valid paths through potentially complex environments to a safe goal location determined by their environmental knowledge. Another group of agents, the directors may attempt to guide the evacuating agents. These applications require the behaviors created to be tunable to a range of scenarios so they can reflect real-world reactions by agents. They also potentially require interaction and coordination between agents in order to improve the realism of the scenario being studied. These applications illustrate the scalability of our system in terms of the number of agents that can be supported, the kinds of realistic environments that can be handled, and behaviors that can be simulated.
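A minimal illustration of pursuit on a roadmap: treating the roadmap as a graph, a breadth-first search from the evader back to the pursuer yields the pursuer's next node on a shortest path. The roadmap and node names below are hypothetical, and real pursuit strategies in such frameworks are considerably richer than one greedy step.

```python
from collections import deque


def next_step(roadmap, pursuer, evader):
    """One pursuit step: BFS from the evader back to the pursuer gives
    the pursuer's neighbour on a shortest roadmap path to the evader."""
    parent = {evader: None}
    frontier = deque([evader])
    while frontier:
        node = frontier.popleft()
        if node == pursuer:
            return parent[node]   # first move toward the evader
        for nbr in roadmap[node]:
            if nbr not in parent:
                parent[nbr] = node
                frontier.append(nbr)
    return pursuer  # evader unreachable: stay put


# Hypothetical roadmap of a small building (nodes are waypoints).
roadmap = {
    "lobby": ["hall", "stairs"],
    "hall": ["lobby", "office"],
    "stairs": ["lobby", "office"],
    "office": ["hall", "stairs"],
}
assert next_step(roadmap, "lobby", "office") in ("hall", "stairs")
```

An evacuating agent can reuse the same search with a safe exit node in place of the evader, which is one reason the roadmap is so central to the framework.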
49

Trust Alignment and Adaptation: Two Approaches for Talking about Trust in Multi-Agent Systems

Koster, Andrew 05 February 2012 (has links)
In open multi-agent systems, trust models are an important tool for agents to achieve effective interactions; however, trust is an inherently subjective concept, and thus for the agents to communicate about trust meaningfully, additional information is required. This thesis focuses on Trust Alignment and Trust Adaptation, two approaches for communicating about trust. The first approach is to model the problem of communicating trust as a problem of alignment. We show that currently proposed solutions, such as common ontologies or ontology alignment methods, lead to additional problems, and propose trust alignment as an alternative. We propose to use the interactions that two agents share as a basis for learning an alignment. We model this using the mathematical framework of Channel Theory, which allows us to formalise how two agents' subjective trust evaluations are related through the interactions that support them. Because the agents do not have access to each other's trust evaluations, they must communicate; we specify relevance and consistency, two necessary properties for this communication. The receiver of the communicated trust evaluations can generalise the messages using θ-subsumption, leading to a predictive model that allows an agent to translate future communications from the same sender. We demonstrate this alignment process in practice, using TILDE, a first-order regression algorithm, to learn an alignment and demonstrate its functioning in an example scenario. We find empirically that: (1) the difficulty of learning an alignment depends on the relative complexity of different trust models; (2) our method outperforms other methods for trust alignment; and (3) our alignment method deals well with deception. 
The second approach to communicating about trust is to allow agents to reason about their trust model and personalise communications to better suit the other agent's needs. Contemporary models do not allow for enough introspection into, or adaptation of, the trust model, so we present AdapTrust, a method for incorporating a computational trust model into the cognitive architecture of the agent. In AdapTrust, the agent's beliefs and goals influence the priorities between factors that are important to the trust calculation. These, in turn, define the values for parameters of the trust model, and the agent can effect changes in its computational trust model by reasoning about its beliefs and goals. This way it can proactively change its model to produce trust evaluations that are better suited to its current needs. We give a declarative formalisation of this system by integrating it into a multi-context system representation of a beliefs-desires-intentions (BDI) agent architecture. We show that three contemporary trust models can be incorporated into an agent's reasoning system using our framework. Subsequently, we use AdapTrust in an argumentation framework that allows agents to create a justification for their trust evaluations. Agents justify their evaluations in terms of priorities between factors, which in turn are justified by their beliefs and goals. These justifications can be communicated to other agents in a formal dialogue, and by arguing and reasoning about other agents' priorities, goals and beliefs, the agent may adapt its trust model to provide a personalised trust recommendation for another agent. We test this system empirically and see that it performs better than the current state-of-the-art system for arguing about trust evaluations.
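The adaptation loop described above — goal-derived priorities over factors setting the parameters of the trust model — can be sketched with a simple weighted model. This is our illustrative reading of the idea, not AdapTrust itself; the factor names and the weighted-sum aggregation are assumptions.

```python
class AdaptiveTrust:
    """Sketch: priorities over factors (derived from the agent's goals)
    set the weights of a simple weighted-sum trust model."""

    def __init__(self, priorities):
        self.set_priorities(priorities)

    def set_priorities(self, priorities):
        """Normalise factor priorities into model parameters (weights)."""
        total = sum(priorities.values())
        self.weights = {f: p / total for f, p in priorities.items()}

    def evaluate(self, ratings):
        """Aggregate per-factor ratings in [0, 1] into a trust value."""
        return sum(self.weights[f] * ratings.get(f, 0.0) for f in self.weights)


# A partner that is fast but sloppy, rated on two hypothetical factors.
fast_sloppy = {"quality": 0.2, "timeliness": 0.9}

t = AdaptiveTrust({"quality": 1, "timeliness": 3})   # goal: speed
before = t.evaluate(fast_sloppy)

# Reasoning over a new goal re-prioritises the factors; the same
# evidence is then re-weighed without gathering new ratings.
t.set_priorities({"quality": 3, "timeliness": 1})    # goal: quality
after = t.evaluate(fast_sloppy)
assert after < before   # the sloppy partner is trusted less
```

The argumentation side of the thesis would correspond to exchanging and justifying these priorities, so another agent can reproduce the evaluation.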
50

Performance Comparison of Multi Agent Platforms in Wireless Sensor Networks.

Bösch, Bernhard January 2012 (has links)
The technology for the realization of wireless sensors has been available for a long time, but thanks to progress in electrical engineering such sensors can nowadays be manufactured cost-effectively and in large numbers. This availability, and the possibility of creating cooperating networks of such sensor nodes, has led to the rapidly growing popularity of a technology named Wireless Sensor Networks (WSN). Its disadvantage is the high complexity of programming WSN-based applications, a result of their distributed and embedded character. To overcome this shortcoming, software agents have been identified as a suitable programming paradigm. The agent-based approach commonly uses a middleware for the execution of the software agents. This thesis compares such agent middleware in terms of their performance in the WSN domain. To this end, two prototype applications based on different agent models are implemented for a given set of middleware. After implementation, measurements are taken in various experiments, giving information about the runtime performance of every middleware in the test set. The subsequent analysis examines whether each middleware under test is suited to the implemented WSN applications. The results are then discussed and compared with the author's expectations. Finally, a short outlook on possible further development and improvements is presented.
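The measurement methodology above amounts to timing the same agent task repeatedly on each middleware and averaging. A generic harness for that kind of experiment might look as follows; the task is a hypothetical stand-in, since the actual middleware APIs are not named here.

```python
import time


def benchmark(run, trials=5):
    """Mean wall-clock runtime of one task over several trials."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        run()
        samples.append(time.perf_counter() - start)
    return sum(samples) / len(samples)


# A stand-in 'agent task' for one middleware under test (hypothetical).
def dummy_task():
    sum(range(10_000))


mean_runtime = benchmark(dummy_task)
```

Comparing middleware then means running the same harness over each middleware's implementation of the two prototype applications.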
