  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Distributed Decision Tree Induction Using Multi-agent Based Negotiation Protocol

Chattopadhyay, Dipayan 10 October 2014 (has links)
No description available.
212

MANILA: A Multi-Agent Framework for Emergent Associative Learning and Creativity in Social Networks

Shekfeh, Marwa January 2017 (has links)
No description available.
213

Robust, Real Time, and Scalable Multi-Agent Task Allocation

Kivelevitch, Elad H. 05 October 2012 (has links)
No description available.
214

Price-Based Distributed Optimization in Large-Scale Networked Systems

HomChaudhuri, Baisravan 12 September 2013 (has links)
No description available.
215

Planning and Control of Cooperative Multi-Agent Manipulator-Endowed Systems

Verginis, Christos January 2018 (has links)
Multi-agent planning and control is an active and increasingly studied research topic with many practical applications, such as rescue missions, security, surveillance, and transportation. Cases that involve complex manipulator-endowed systems deserve extra attention due to potentially complex cooperative manipulation tasks and their interaction with the environment. This thesis addresses the problem of cooperative motion- and task-planning of multi-agent and multi-agent-object systems under complex specifications expressed as temporal logic formulas. We consider manipulator-endowed robotic agents that can coordinate in order to perform, among other tasks, cooperative object manipulation/transportation. Our approach is based on the integration of tools from the following areas: multi-agent systems, cooperative object manipulation, discrete abstraction design of multi-agent-object systems, and formal verification. More specifically, we divide the main problem into three parts.

The first part is devoted to the control design for the formation control of a team of rigid bodies, motivated by its application to cooperative manipulation schemes. We propose decentralized control protocols such that a desired position- and orientation-based formation between neighboring agents is achieved, while inter-agent collisions and connectivity breaks are guaranteed to be avoided. In the second part, we design continuous control laws explicitly for the cooperative manipulation/transportation of an object by a team of robotic agents. First, we propose robust decentralized controllers for the trajectory tracking of the object's center of mass. Second, we design model predictive control-based controllers for the transportation of the object under collision and singularity constraints. In the third part, we design discrete representations of multi-agent continuous systems and synthesize hybrid controllers for the satisfaction of complex tasks expressed as temporal logic formulas. We achieve this by combining the results of the previous parts and by proposing appropriate trajectory tracking- and potential field-based continuous control laws for the transitions of the agents among the discrete states. We consider teams of unmanned aerial vehicles and mobile manipulators, as well as multi-agent-object systems where the specifications of the objects are also taken into account. Numerical simulations and experimental results verify the theoretical findings.
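The abstract does not include the actual control laws; as a rough illustration of the kind of decentralized, neighbor-based formation protocol it refers to, the following minimal sketch runs displacement-based formation control for single-integrator point agents. The offsets, graph, gain, and step size are illustrative assumptions; the thesis itself treats full rigid-body dynamics with orientation, collision, and connectivity constraints.

```python
# A minimal sketch (not the thesis controllers): displacement-based formation
# control for single-integrator agents. Offsets, graph, and gains are assumptions.
import numpy as np

def formation_step(p, offsets, neighbors, gain=1.0, dt=0.05):
    """One Euler step of u_i = -k * sum_j ((p_i - p_j) - (d_i - d_j))."""
    u = np.zeros_like(p)
    for i, nbrs in neighbors.items():
        for j in nbrs:
            u[i] -= gain * ((p[i] - p[j]) - (offsets[i] - offsets[j]))
    return p + dt * u

# Three planar agents asked to hold a triangle formation over a complete graph.
p = np.array([[0.0, 0.0], [2.0, 0.5], [-1.0, 1.5]])
offsets = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])   # desired shape
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for _ in range(400):
    p = formation_step(p, offsets, neighbors)
print(np.round(p - p[0], 3))   # relative positions converge to offsets - offsets[0]
```

Each agent uses only relative information with respect to its neighbors, which is the decentralization property the abstract emphasizes.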
216

Robust and Abstraction-free Control of Dynamical Systems under Signal Temporal Logic Tasks

Lindemann, Lars January 2018 (has links)
Dynamical systems that provably satisfy given specifications have become increasingly important in many engineering areas. For instance, safety-critical systems such as human-robot networks or autonomous driving systems are required to be safe and to also satisfy complex specifications that may include timing constraints, i.e., when or in which order some tasks should be accomplished. Temporal logics have recently proven to be a valuable tool for these control systems by providing a rich specification language. Existing temporal logic-based control approaches discretize the underlying dynamical system in space and/or time, which is commonly referred to as the abstraction process. In other words, the continuous dynamical system is abstracted into a finite system representation, e.g., a finite state automaton. Such approaches may lead to high computational burdens due to the curse of dimensionality, which makes them hard to use in practice. Especially with respect to multi-agent systems, these methods do not scale computationally as the number of agents increases. We address this open research question by deriving abstraction-free control methods for single- and multi-agent systems under signal temporal logic tasks. Another aim of this research is robustness, which is partly provided by the robust semantics admitted by signal temporal logic as well as by the robustness properties of the derived control methods.

In this work, we propose computationally efficient frameworks that address the aforementioned problems for single- and multi-agent systems by using feedback control strategies such as optimization-based techniques, prescribed performance control, and control barrier functions, in combination with hybrid systems theory that allows us to model higher-level decision-making. In each of these approaches, the temporal properties of the employed control methods are used to impose a temporal behavior on the closed-loop system dynamics, which eventually results in the satisfaction of the signal temporal logic task. With respect to the multi-agent case, we consider a bottom-up approach where each agent is subject to a local (individual) task. These tasks may depend on the behavior of other agents; hence, the multi-agent system is subject to couplings induced at the task level as well as at the dynamical level. The main challenge is then to deal with these couplings and derive control methods that still satisfy the given tasks or, alternatively, result in least-violating solutions. The efficacy of the theoretical findings is demonstrated in simulations of single- and multi-agent systems under complex specifications.
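As a concrete reference point for the "robust semantics" of signal temporal logic mentioned above, the sketch below evaluates the quantitative robustness of simple temporal operators on a sampled scalar signal. The signal, predicate, and time bounds are illustrative assumptions, not examples from the thesis.

```python
# A minimal sketch of STL's quantitative (robust) semantics on a sampled signal.
import numpy as np

def always(rho, a, b):
    """Robustness of G_[a,b] phi at time 0, assuming a unit sample time."""
    return np.min(rho[a:b + 1])

def eventually(rho, a, b):
    """Robustness of F_[a,b] phi at time 0, assuming a unit sample time."""
    return np.max(rho[a:b + 1])

# Signal x(t) and predicate mu: x > 1, with robustness h(x) = x - 1.
x = np.array([0.2, 0.8, 1.3, 1.7, 1.5, 0.9, 1.1])
rho_mu = x - 1.0

print(eventually(rho_mu, 0, 4))   # 0.7  -> F_[0,4](x > 1) satisfied with margin 0.7
print(always(rho_mu, 2, 4))       # 0.3  -> G_[2,4](x > 1) satisfied with margin 0.3
print(always(rho_mu, 0, 6))       # -0.8 -> G_[0,6](x > 1) violated
```

A positive robustness value certifies satisfaction with a margin, which is what abstraction-free feedback controllers of this kind aim to keep above zero.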
217

A Bayesian Network Approach to the Self-organization and Learning in Intelligent Agents

Sahin, Ferat 25 September 2000 (has links)
A Bayesian network approach to self-organization and learning is introduced for use with intelligent agents. Bayesian networks, with the help of influence diagrams, are employed to create a decision-theoretic intelligent agent. Influence diagrams combine Bayesian networks with utility theory. In this research, an intelligent agent is modeled by its belief, preference, and capability attributes. Each agent is assumed to have its own belief about its environment; this belief aspect is accomplished by a Bayesian network. The goal of an intelligent agent constitutes the agent's preference and is represented with a utility function in the decision-theoretic intelligent agent. Capabilities are represented by a set of possible actions of the decision-theoretic intelligent agent. Influence diagrams have utility nodes and decision nodes to handle the preference and capabilities of the decision-theoretic intelligent agent, respectively.

Learning is accomplished by Bayesian networks in the decision-theoretic intelligent agent, and Bayesian network learning methods are discussed extensively in this work. Because intelligent agents explore and learn their environment, the learning algorithm should be implemented online; however, none of the existing Bayesian network learning algorithms supports online learning. Thus, an online Bayesian network learning method is proposed to allow the intelligent agent to learn during its exploration. Self-organization of the intelligent agents is accomplished because each agent models the other agents by observing their behavior. Agents hold beliefs not only about the environment but also about other agents; therefore, an agent makes its decisions according to both the model of the environment and the models of the other agents. Even though each agent acts independently, it takes the other agents' behaviors into account when making a decision, which permits the agents to organize themselves for a common task.

To test the proposed intelligent agent's learning and self-organizing abilities, Windows application software was written to simulate multi-agent systems. The software, IntelliAgent, lets the user design decision-theoretic intelligent agents both manually and automatically. The software can also be used for knowledge discovery by applying Bayesian network learning to a database. Additionally, we explore a well-known herding problem to obtain sound results for our intelligent agent design: a dog tries to herd a sheep to a certain location, i.e., a pen, while the sheep tries to avoid the dog by retreating from it. The herding problem is simulated using the IntelliAgent software. Simulations provided good results in terms of the dog's learning ability and its ability to organize its actions according to the sheep's (the other agent's) behavior.

In summary, a decision-theoretic approach is applied to the self-organization and learning problems in intelligent agents. Software was written to simulate the learning and self-organization abilities of the proposed agent design, and a user manual for the software and the simulation results are presented. This research is supported by the Office of Naval Research under grant number N00014-98-1-0779; their financial support is greatly appreciated. / Ph. D.
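The abstract does not spell out the proposed online method; as a generic illustration of online Bayesian network parameter learning (not the dissertation's algorithm), the sketch below maintains Dirichlet counts for one node's conditional probability table and updates them one observation at a time. The variable names (a "dog approaches" parent and a "sheep retreats" child, echoing the herding example) and the prior are assumptions.

```python
# A minimal sketch of online parameter learning for one discrete Bayesian
# network node: Dirichlet counts for P(X | parents) are updated per observation,
# so an agent can keep learning while it explores. Names and priors are assumed.
from collections import defaultdict

class OnlineCPT:
    def __init__(self, num_values, alpha=1.0):
        self.num_values = num_values
        self.alpha = alpha                                   # Dirichlet pseudo-count
        self.counts = defaultdict(lambda: [0.0] * num_values)

    def update(self, parent_config, value):
        """Incorporate one observed (parents, X) sample."""
        self.counts[parent_config][value] += 1.0

    def prob(self, parent_config, value):
        """Posterior predictive P(X = value | parents = parent_config)."""
        row = self.counts[parent_config]
        total = sum(row) + self.alpha * self.num_values
        return (row[value] + self.alpha) / total

# X = "sheep retreats?" with one binary parent "dog approaches?".
cpt = OnlineCPT(num_values=2)
for parents, x in [((1,), 1), ((1,), 1), ((1,), 0), ((0,), 0)]:
    cpt.update(parents, x)
print(round(cpt.prob((1,), 1), 3))   # 0.6 after three dog-approaches samples
```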
218

Multiscale Views of Multi-agent Interactions in the Context Of Collective Behavior

Roy, Subhradeep 01 August 2017 (has links)
In nature, many social species demonstrate collective behavior, ranging from coordinated motion in flocks of birds and schools of fish to collective decision making in humans. Such distinct behavioral patterns at the group level are the consequence of local interactions among the individuals. We can learn from these biological systems, which have successfully evolved to operate in noisy and fault-prone environments, and understand how such complex interactions can be applied to engineered systems, where robustness remains a major challenge. This dissertation takes a two-scale approach to studying these interactions: one at a larger scale, where we are interested in the information exchange within a group and how it enables the group to reach a common decision, and the other at a smaller scale, where we focus on the presence and directionality of information exchange between a pair of individuals. To understand the interactions at the large scale, we use a graph-theoretic approach to study consensus or synchronization protocols over two types of biologically inspired interaction networks. The first network captures both collaborative and antagonistic interactions, and the second considers the impact of dynamic leaders in the presence of purely collaborative interactions. To study the interactions at the small scale, we use an information-theoretic approach to understand the directionality of information transfer between a pair of individuals, using a real-world data set of animal group motion. Finally, we choose the issue of same-sex marriage in the United States to demonstrate that collective opinion formation is not only a result of negotiations among individuals but also reflects inherent spatial and political similarities and temporal delays. / Ph. D.
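For the first type of interaction network mentioned above, with both collaborative and antagonistic interactions, a standard model is consensus over a signed graph (the Altafini model). The sketch below uses an illustrative, structurally balanced graph and arbitrary initial opinions to show the resulting bipartite agreement; it is a generic example, not the dissertation's model or data.

```python
# A minimal sketch of consensus over a signed interaction graph: positive
# weights model collaboration, negative weights antagonism. Graph and initial
# states are illustrative assumptions.
import numpy as np

def signed_consensus_step(x, A, dt=0.01):
    """x_i' = -sum_j |a_ij| * (x_i - sign(a_ij) * x_j)."""
    xdot = np.zeros_like(x)
    n = len(x)
    for i in range(n):
        for j in range(n):
            if A[i, j] != 0.0:
                xdot[i] -= abs(A[i, j]) * (x[i] - np.sign(A[i, j]) * x[j])
    return x + dt * xdot

# Structurally balanced graph: {0,1} and {2,3} each cooperate internally,
# while the two camps are antagonistic -> bipartite consensus (+c and -c).
A = np.array([[0.0,  1.0, -1.0,  0.0],
              [1.0,  0.0,  0.0, -1.0],
              [-1.0, 0.0,  0.0,  1.0],
              [0.0, -1.0,  1.0,  0.0]])
x = np.array([0.4, 1.0, -0.2, 0.6])
for _ in range(2000):
    x = signed_consensus_step(x, A)
print(np.round(x, 3))   # opinions split into two equal-magnitude, opposite-sign groups
```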
219

Non-Reciprocating Sharing Methods in Cooperative Q-Learning Environments

Cunningham, Bryan 28 August 2012 (has links)
Past research on multi-agent simulation with cooperative reinforcement learning (RL) for homogeneous agents focuses on developing sharing strategies that are adopted and used by all agents in the environment. These sharing strategies are considered reciprocating because all participating agents have a predefined agreement regarding what type of information is shared, when it is shared, and how the participating agents' policies are subsequently updated. The sharing strategies are specifically designed around manipulating this shared information to improve learning performance. This thesis targets situations where the assumption of a single sharing strategy employed by all agents is not valid. This work addresses how agents with no predetermined sharing partners can exploit groups of cooperatively learning agents to improve learning performance when compared to Independent learning. Specifically, several intra-agent methods are proposed that do not assume a reciprocating sharing relationship and that leverage the pre-existing agent interface associated with Q-Learning to expedite learning. The other agents' functions and sharing strategies are unknown and inaccessible from the point of view of the agent(s) using the proposed methods. The proposed methods are evaluated in simulation on physically embodied agents in the multi-agent cooperative robotics domain learning a navigation task. The experiments focus on how the following factors affect the performance of the proposed non-reciprocating methods: scaling the number of agents in the environment, limiting the communication range of the agents, and scaling the size of the environment. / Master of Science
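As a rough illustration of the non-reciprocating idea, the sketch below shows a standard Q-Learning agent plus a one-way "absorb" step in which one learner reads another's Q-table through the ordinary interface, without any agreement from, or change to, the peer. The environment, learning rates, and blending rule are illustrative assumptions, not the thesis's methods.

```python
# A minimal sketch: ordinary Q-Learning, plus a non-reciprocating sharing step
# that blends in Q-values read from a peer. All parameters are assumptions.
import random
from collections import defaultdict

class QLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.95, eps=0.1):
        self.Q = defaultdict(float)
        self.actions, self.alpha, self.gamma, self.eps = actions, alpha, gamma, eps

    def act(self, s):
        if random.random() < self.eps:
            return random.choice(self.actions)           # explore
        return max(self.actions, key=lambda a: self.Q[(s, a)])

    def update(self, s, a, r, s2):
        """Standard one-step Q-Learning TD update."""
        target = r + self.gamma * max(self.Q[(s2, b)] for b in self.actions)
        self.Q[(s, a)] += self.alpha * (target - self.Q[(s, a)])

    def absorb_from(self, peer, weight=0.5):
        """Non-reciprocating sharing: blend in whatever the peer has learned.
        The peer is never asked to change its behavior or strategy."""
        for key, q in peer.Q.items():
            self.Q[key] = (1 - weight) * self.Q[key] + weight * q

# Usage sketch: learner B periodically reads A's table; A is unaffected.
A, B = QLearner(actions=[0, 1]), QLearner(actions=[0, 1])
A.update("s0", 1, 1.0, "s1")      # A has already learned something
B.absorb_from(A)                  # B exploits it without any agreement
print(round(B.Q[("s0", 1)], 3))   # 0.05 -> half of A's updated estimate
```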
220

An Agent-based Platform for Demand Response Implementation in Smart Buildings

Khamphanchai, Warodom 28 April 2016 (has links)
Efficiency, security, and resiliency are critical factors in the operation of a power distribution system. Taking into account customer demand and energy resource constraints, electric utilities not only need to provide reliable service but also need to operate the power grid as efficiently as possible. The objective of this dissertation is to design, develop, and deploy multi-agent systems (MAS), together with control algorithms, that enable demand response (DR) implementation at the customer level, focusing on both residential and commercial customers.

For residential applications, the main objective is to propose an approach to smart distribution transformer management. The DR objective at a distribution transformer is to keep its instantaneous power demand below a given demand limit while minimizing the impact of demand restrikes. The DR objectives at residential homes are to secure critical loads, mitigate violations of occupant comfort, and minimize appliance run-time after a DR event. For commercial applications, the goal is to propose a MAS architecture and platform that facilitate the implementation of a Critical Peak Pricing (CPP) program. The main objectives of the proposed DR algorithm are to minimize power demand and energy consumption during the period in which a CPP event is called, to minimize violations of occupant comfort, to minimize the impact of demand restrikes after a CPP event, and to control device operation so as to avoid restrikes.

Overall, this study provides insight into the design and implementation of MAS, together with the associated control algorithms, for DR implementation in smart buildings. The proposed approaches can serve as alternative solutions to the current practices of electric utilities for engaging end-use customers in DR programs, where occupancy level, tenant comfort conditions and preferences, and controllable devices and sensors are taken into account in both simulated and real-world environments. Research findings show that the proposed DR algorithms perform effectively and efficiently during a DR event in residential homes and during a CPP event in commercial buildings. / Ph. D.
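As a rough illustration of the transformer-level demand-limiting objective described above (not the dissertation's agent platform), the sketch below curtails the lowest-priority, non-critical loads until aggregate demand falls below a limit. The load names, priorities, and limit are made-up values.

```python
# A minimal sketch of priority-based load curtailment at a distribution
# transformer: shed the lowest-priority non-critical loads until the demand
# limit is met. All loads and numbers are illustrative assumptions.
def curtailment_plan(loads, demand_limit_kw):
    """loads: list of (name, kw, priority, critical); higher priority = keep longer."""
    total = sum(kw for _, kw, _, _ in loads)
    shed = []
    # Consider non-critical loads from lowest to highest priority.
    for name, kw, priority, critical in sorted(loads, key=lambda l: l[2]):
        if total <= demand_limit_kw:
            break
        if not critical:
            shed.append(name)
            total -= kw
    return shed, total

loads = [("HVAC", 3.5, 2, False),
         ("water_heater", 4.5, 1, False),
         ("medical_device", 0.5, 5, True),
         ("lighting", 0.8, 4, False),
         ("EV_charger", 6.6, 0, False)]

shed, remaining = curtailment_plan(loads, demand_limit_kw=6.0)
print(shed, round(remaining, 1))   # ['EV_charger', 'water_heater'] 4.8
```

Critical loads are never shed, which mirrors the abstract's requirement of securing critical loads during a DR event.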
