141

Cooperative Localization based Multi-Agent Coordination and Control

Chakraborty, Anusna 05 October 2021 (has links)
No description available.
142

Learning Successful Strategies in Repeated General-sum Games

Crandall, Jacob W. 21 December 2005 (has links) (PDF)
Many environments in which an agent can use reinforcement learning techniques to learn profitable strategies are affected by other learning agents. These situations can be modeled as general-sum games. When playing repeated general-sum games with other learning agents, the goal of a self-interested learning agent is to maximize its own payoffs over time. Traditional reinforcement learning algorithms learn myopic strategies in these games. As a result, they learn strategies that produce undesirable results in many games. In this dissertation, we develop and analyze algorithms that learn non-myopic strategies when playing many important infinitely repeated general-sum games. We show that, in many of these games, these algorithms outperform existing multiagent learning algorithms. We derive performance guarantees for these algorithms (for certain learning parameters) and show that these guarantees become stronger and apply to larger classes of games as more information is observed and used by the agents. We establish these results through empirical studies and mathematical proofs.
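The failure mode the abstract describes is easy to reproduce. Below is a minimal, hypothetical sketch (not taken from the dissertation) of two myopic, stateless Q-learners playing a repeated prisoner's dilemma, a simple general-sum game: because each maximizes only its immediate payoff and ignores the other's adaptation, both drift toward mutual defection even though mutual cooperation pays more over time.

```python
# Hypothetical sketch: myopic learners in a repeated prisoner's dilemma.
# Payoffs, learning rates, and episode count are illustrative placeholders.
import random

PAYOFFS = {  # (my action, opponent action) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}
ACTIONS = ["C", "D"]

def play(episodes=5000, alpha=0.1, epsilon=0.1):
    q = {a: 0.0 for a in ACTIONS}       # stateless, myopic value estimates
    opp_q = {a: 0.0 for a in ACTIONS}   # the opponent learns the same way
    for _ in range(episodes):
        a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
        b = random.choice(ACTIONS) if random.random() < epsilon else max(opp_q, key=opp_q.get)
        q[a] += alpha * (PAYOFFS[(a, b)] - q[a])
        opp_q[b] += alpha * (PAYOFFS[(b, a)] - opp_q[b])
    return q, opp_q

if __name__ == "__main__":
    print(play())  # both value estimates end up favoring "D": mutual defection
```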
143

Automated Negotiation for Complex Multi-Agent Resource Allocation

An, Bo 01 February 2011 (has links)
The problem of constructing and analyzing systems of intelligent, autonomous agents is becoming more and more important. These agents may include people, physical robots, virtual humans, software programs acting on behalf of human beings, or sensors. In a large class of multi-agent scenarios, agents may have different capabilities, preferences, objectives, and constraints. Therefore, efficient allocation of resources among multiple agents is often difficult to achieve. Automated negotiation (bargaining) is the most widely used approach for multi-agent resource allocation, and it has received increasing attention in recent years. However, information uncertainty, the existence of multiple contracting partners and competitors, agents' incentives to maximize individual utilities, and market dynamics make it difficult to calculate agents' rational equilibrium negotiation strategies and to develop negotiation agents that behave well in practice. To this end, this thesis is concerned with analyzing agents' rational behavior and developing negotiation strategies for a range of complex negotiation contexts.

First, we consider the problem of finding agents' rational strategies in bargaining with incomplete information. We focus on the principal alternating-offers finite-horizon bargaining protocol with one-sided uncertainty regarding agents' reserve prices. We provide an algorithm, based on a combination of game-theoretic analysis and search techniques, that finds agents' equilibria in pure strategies when they exist. Our approach is sound and complete and, in principle, can be applied to other uncertainty settings. Simulation results show that there is at least one pure-strategy sequential equilibrium in 99.7% of the scenarios considered. In addition, agents with equilibrium strategies achieved higher utilities than agents with heuristic strategies.

Next, we extend the alternating-offers protocol to handle concurrent negotiations in which each agent has multiple trading opportunities and faces market competition. We provide an algorithm based on backward induction to compute the subgame perfect equilibrium of concurrent negotiation. We observe that agents' bargaining power is affected by the proposing order and by market competition, and that for a large subset of the parameter space, agents' equilibrium strategies depend on the values of a small number of parameters. We also extend our algorithm to find a pure-strategy sequential equilibrium in concurrent negotiations where there is one-sided uncertainty regarding the reserve price of one agent.

Third, we present the design and implementation of agents that concurrently negotiate with other entities for acquiring multiple resources. Negotiation agents are designed to adjust 1) the number of tentative agreements and 2) the amount of concession they are willing to make in response to changing market conditions and negotiation situations. In our approach, agents utilize a time-dependent negotiation strategy in which the reserve price of each resource is dynamically determined by 1) the likelihood that negotiation will not be successfully completed, 2) the expected agreement price of the resource, and 3) the expected number of final agreements. The negotiation deadline of each resource is determined by its relative scarcity. Since agents are permitted to decommit from agreements, a buyer may make more than one tentative agreement for each resource, and the maximum number of tentative agreements is constrained by the market situation. Experimental results show that our negotiation strategy achieved significantly higher utilities than simpler strategies.

Finally, we consider the problem of allocating networked resources in dynamic environments, such as cloud computing platforms, where providers strategically price resources to maximize their utility. While numerous auction-based approaches have been proposed in the literature, our work explores an alternative approach in which providers and consumers negotiate resource leasing contracts. We propose a distributed negotiation mechanism where agents negotiate over both a contract price and a decommitment penalty, which allows agents to decommit from contracts at a cost. We compare our approach experimentally, using representative scenarios and workloads, to both combinatorial auctions and the fixed-price model, and show that the negotiation model achieves higher social welfare.
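As an illustration of the time-dependent concession behavior mentioned above, the following hypothetical Python snippet (names and parameters are illustrative, not taken from the thesis) shows a buyer whose offer moves from an initial price toward a reserve price as its deadline approaches; the exponent beta controls whether the agent concedes early (beta > 1) or holds out until late (beta < 1). In the thesis the reserve price itself is also adjusted from market estimates, which is not reproduced here.

```python
# Hypothetical sketch of a time-dependent concession strategy (polynomial form).
def offer_price(t, deadline, initial_price, reserve_price, beta=2.0):
    """Buyer's offer at time t: starts at initial_price and reaches
    reserve_price at the deadline. beta > 1 concedes early, beta < 1 late."""
    progress = min(t / deadline, 1.0) ** (1.0 / beta)
    return initial_price + progress * (reserve_price - initial_price)

if __name__ == "__main__":
    for t in range(0, 11, 2):
        print(t, round(offer_price(t, deadline=10,
                                   initial_price=20.0, reserve_price=100.0), 2))
```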
144

Evaluating Multi-Agent Modeller Representations

Demke, Jonathan 15 November 2022 (has links)
The way a multi-agent modeller represents an agent not only affects its ability to reason about agents but also the interpretability of its representation space as well as its efficacy on future downstream tasks. We utilize and repurpose metrics from the field of representation learning to specifically analyze and compare multi-agent modellers that build real-valued vector representations of the agents they model. By generating two datasets and analyzing the representations of multiple LSTM- or transformer-based modellers with various embedding sizes, we demonstrate that representation metrics provide a more complete and nuanced picture of a modeller's representation space than an analysis based only on performance. We also provide insights regarding LSTM- and transformer-based representations. Our proposed metrics are general enough to work on a wide variety of modellers and datasets.
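To make the idea of representation metrics concrete, here is a hypothetical sketch of two metrics of the kind such an analysis might use; the specific choices below are illustrative, not the thesis's metrics: (1) the effective dimensionality of the embedding space via PCA explained variance, and (2) a nearest-centroid "probe" accuracy for recovering which agent produced each embedding.

```python
# Hypothetical sketch: simple metrics over agent embedding vectors.
import numpy as np

def effective_dim(embeddings, threshold=0.95):
    """Number of principal components needed to explain `threshold` variance."""
    centered = embeddings - embeddings.mean(axis=0)
    var = np.linalg.svd(centered, compute_uv=False) ** 2
    ratio = np.cumsum(var) / var.sum()
    return int(np.searchsorted(ratio, threshold) + 1)

def centroid_probe_accuracy(embeddings, agent_ids):
    """Fraction of embeddings closest to their own agent's centroid."""
    ids = np.asarray(agent_ids)
    centroids = {a: embeddings[ids == a].mean(axis=0) for a in np.unique(ids)}
    keys = list(centroids)
    dists = np.stack([np.linalg.norm(embeddings - centroids[a], axis=1)
                      for a in keys], axis=1)
    pred = np.array(keys)[dists.argmin(axis=1)]
    return float((pred == ids).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = rng.normal(size=(200, 16)) + np.repeat(rng.normal(size=(4, 16)), 50, axis=0)
    ids = np.repeat(np.arange(4), 50)
    print(effective_dim(emb), centroid_probe_accuracy(emb, ids))
```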
145

Application of Reinforcement Learning to Multi-Agent Production Scheduling

Wang, Yi-chi 13 December 2003 (has links)
Reinforcement learning (RL) has received attention in recent years from agent-based researchers because it can be applied to problems where autonomous agents learn to select proper actions for achieving their goals based on interactions with their environment. Each time an agent performs an action, the environment's response, as indicated by its new state, is used by the agent to reward or penalize its action. The agent's goal is to maximize the total amount of reward it receives over the long run. Although there have been several successful examples demonstrating the usefulness of RL, its application to manufacturing systems has not been fully explored. The objective of this research is to develop a set of guidelines for applying the Q-learning algorithm to enable an individual agent to develop a decision-making policy for use in agent-based production scheduling applications such as dispatching rule selection and job routing. For the dispatching rule selection problem, a single machine agent employs the Q-learning algorithm to develop a decision-making policy on selecting the appropriate dispatching rule from among three given dispatching rules. In the job routing problem, a simulated job shop system is used for examining the implementation of the Q-learning algorithm for use by job agents when making routing decisions in such an environment. Two factorial experiment designs for studying the settings used to apply Q-learning to the single machine dispatching rule selection problem and the job routing problem are carried out. This study not only investigates the main effects of this Q-learning application but also provides recommendations for factor settings and useful guidelines for future applications of Q-learning to agent-based production scheduling.
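A minimal, hypothetical sketch of the kind of setup described above (the state encoding, rule set, and reward are placeholders, not the thesis's design): a machine agent uses tabular Q-learning to choose among three dispatching rules each time it becomes idle.

```python
# Hypothetical sketch: tabular Q-learning over dispatching rules.
import random
from collections import defaultdict

RULES = ["SPT", "EDD", "FIFO"]  # shortest processing time, earliest due date, first-in first-out

class DispatchAgent:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)      # (state, rule) -> value estimate
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        if random.random() < self.epsilon:
            return random.choice(RULES)              # explore
        return max(RULES, key=lambda r: self.q[(state, r)])  # exploit

    def update(self, state, rule, reward, next_state):
        best_next = max(self.q[(next_state, r)] for r in RULES)
        td_target = reward + self.gamma * best_next
        self.q[(state, rule)] += self.alpha * (td_target - self.q[(state, rule)])

# Usage inside a shop-floor simulation loop (environment calls are placeholders):
#   state = discretize(queue_length, mean_slack)                # state features
#   rule = agent.choose(state)
#   reward, next_state = simulate_until_next_decision(rule)     # e.g. negative tardiness
#   agent.update(state, rule, reward, next_state)
```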
146

Multi-Agent Based Control and Reconfiguration for Restoration of Distribution Systems with Distributed Generators

Solanki, Jignesh M 09 December 2006 (has links)
Restoration entails the development of a plan consisting of opening or closing of switches, which is called reconfiguration. This dissertation proposes the design of a fast and efficient service restoration scheme with a load shedding method for land-based and ship systems, considering customer priorities and several other system operating constraints. Existing methods, based on centralized restoration schemes that require a powerful central computer, may lead to a single point of failure. This research uses a decentralized scheme based on agents. A group of agents created to realize a specific goal by their interactions is called a Multi-Agent System (MAS). Agents and their behaviors are developed in the Java Agent DEvelopment Framework (JADE) and the power system is simulated in the Virtual Test Bed (VTB). The large-scale introduction of Distributed Generators (DGs) in distribution systems has made it increasingly necessary to develop restoration schemes that consider DG. Separation from the utility causes the system to decompose into electrically isolated islands with generation and load imbalance that can have severe consequences. Automated load shedding schemes are essential for systems with DGs, since disconnection from the utility can lead to instability much faster than an operator can intervene. Load shedding may be the only option to maintain the island when conditions are so severe as to require correction by restoration schemes. Few algorithms have been reported for the problem of maintaining the island, even though load shedding has been reported for power systems using under-frequency and under-voltage criteria. This research proposes a new operational strategy for sudden generator-load imbalance due to loss of utility that dynamically calculates the quantity of load to be shed for each island and the quantity of load that can be restored. Results presented in this dissertation are among the first to demonstrate a state-of-the-art MAS for load shedding under islanded conditions and restoration of the shed loads. The load shedding and restoration schemes developed here have behaviors that can accommodate most distribution topologies. Achieving service restoration with DG is complicated, but new automated switch technologies and communications make MAS a better approach than existing schemes.
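The core of the proposed operational strategy, as described above, is computing how much load each island must shed once it separates from the utility. The hypothetical sketch below (priorities and data are illustrative, not taken from the dissertation) sheds the lowest-priority loads until the island's remaining demand can be covered by its distributed generation; an agent in the MAS could run this per island and later restore the shed loads as generation recovers.

```python
# Hypothetical sketch: priority-based load shedding for an islanded feeder.
from dataclasses import dataclass

@dataclass
class Load:
    name: str
    demand_kw: float
    priority: int            # higher number = more critical, shed last

def plan_shedding(loads, generation_kw):
    """Return the loads to shed so the remaining demand fits within the
    island's available distributed generation."""
    deficit = sum(l.demand_kw for l in loads) - generation_kw
    shed = []
    for load in sorted(loads, key=lambda l: l.priority):   # least critical first
        if deficit <= 0:
            break
        shed.append(load)
        deficit -= load.demand_kw
    return shed

if __name__ == "__main__":
    island_loads = [Load("hospital", 400, 3), Load("residential A", 300, 1),
                    Load("residential B", 250, 1), Load("water plant", 200, 2)]
    to_shed = plan_shedding(island_loads, generation_kw=700)
    print([l.name for l in to_shed])    # sheds the residential feeders first
```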
147

Predicting and Facilitating the Emergence of Optimal Solutions for a Cooperative “Herding” Task and Testing their Similitude to Contexts Utilizing Full-Body Motion

Nalepka, Patrick 07 June 2018 (has links)
No description available.
148

Collective Path Planning by Robots on a Grid

Joseph, Sharon A. 05 August 2010 (has links)
No description available.
149

JiVE: JAFMAS INTEGRATED VISUAL ENVIRONMENT

GALAN, ALAN KEITH January 2000 (has links)
No description available.
150

A Negotiation Protocol for Optimal Decision Making by Collaborating Agents

Paliwal, Divya 21 October 2013 (has links)
No description available.
