431

Self-organizing Coordination of Multi-Agent Microgrid Networks

January 2019
abstract: This work introduces self-organizing techniques to reduce the complexity and burden of coordinating distributed energy resources (DERs) and microgrids, which are rapidly increasing in scale globally. Technical and financial evaluations completed for power customers and for utilities identify how disruptions are occurring in conventional energy business models. Analyses completed for Chicago, Seattle, and Phoenix demonstrate site-specific and generalizable findings. Results indicate that net metering had a significant effect on the optimal amount of solar photovoltaics (PV) for households to install, and on how utilities could recover lost revenue through increased energy rates or monthly fees. System-wide ramp rate requirements also increased as solar PV penetration increased. These issues are addressed with a generalizable, scalable transactive energy framework that enables coordination and automation of DERs and microgrids to ensure cost-effective use of energy for all stakeholders. The technique is demonstrated on 3-node and 9-node networks of microgrid nodes with varying amounts of load, solar, and storage. Enabling trading achieved cost savings of up to 5.4% for individual nodes and for the network as a whole. Trading behaviors are expressed using an exponential valuation curve that quantifies the reputation of trading partners from historical interactions between nodes: their compatibility, familiarity, and acceptance of trades. The same 9-node network configuration is used with varying levels of connectivity, resulting in cost savings of up to 71% for individual nodes and up to 13% for the network as a whole. The effect of a trading fee is also explored to understand how electric utilities might earn revenue from electricity traded directly between customers. If a utility imposes a trading fee to recoup all lost revenue, trading becomes financially infeasible for agents, but it can remain feasible if the fee recoups only distribution charges. These scientific findings conclude with a brief discussion of physical deployment opportunities. / Dissertation/Thesis / Doctoral Dissertation Systems Engineering 2019
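The exponential valuation curve described above blends compatibility, familiarity, and acceptance into a single reputation score per trading partner. Below is a minimal Python sketch of how such a reputation-weighted valuation might look; the function names, weights, and saturation constant are illustrative assumptions, not the dissertation's actual formulation.

```python
import math

def reputation(history, w_compat=0.4, w_famil=0.3, w_accept=0.3):
    """Blend historical interaction scores into one reputation value in
    roughly [0, 1]. Weights and the familiarity saturation point are
    illustrative assumptions, not the dissertation's parameters."""
    compat = sum(h["compatible"] for h in history) / len(history)
    famil = min(1.0, len(history) / 50.0)   # familiarity grows with interactions
    accept = sum(h["accepted"] for h in history) / len(history)
    return w_compat * compat + w_famil * famil + w_accept * accept

def trade_valuation(base_price, rep, k=1.0):
    """Exponential valuation curve: offers from high-reputation partners
    are valued above the base price, low-reputation partners below it."""
    return base_price * math.exp(k * (rep - 0.5))
```

A node would then prefer peer trades whose valuation beats its grid-supply price, which is one way trading fees could tip such a comparison into infeasibility.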
432

Mission and Motion Planning for Multi-robot Systems in Constrained Environments

January 2019
abstract: As robots become mechanically more capable, they will be more and more integrated into our daily lives. Over time, human expectations of robot capabilities keep rising, so it can be conjectured that robots will often not act as their human commanders intended; that is, the users of the robots may have a different point of view from the robots themselves. The first part of this dissertation covers methods that resolve some instances of this mismatch when the mission requirements are expressed in Linear Temporal Logic (LTL) for handling coverage, sequencing, conditions, and avoidance. That is, the following general questions are addressed:

* What causes the given mission to be unrealizable?
* Is there another feasible mission that is close to the given one?

To answer these questions, the LTL Revision Problem is applied and formulated as a graph search problem, which is shown to be NP-complete in general. A heuristic algorithm is then proved to have a 2-approximation bound in some cases. The problem is extended to two further versions: one for weighted transition systems and one for specifications under quantitative preference. Next, a follow-up question is addressed:

* How can an LTL-specified mission be scaled up to multiple robots operating in confined environments?

The Cooperative Multi-agent Planning Problem is addressed by borrowing a technique from cooperative pathfinding problems in discrete grid environments. Since centralized planning for multi-robot systems is computationally challenging and easily results in state-space explosion, a distributed planning approach is provided through agent coupling and decoupling. In addition, for such robot missions to work in the real world, robots must act in the continuous physical world. Hence, the second part of this thesis addresses the resulting motion planning problems for non-holonomic robots; it is devoted to autonomous vehicles' motion planning in challenging environments such as rural, semi-structured roads. This planning problem is solved with an on-the-fly hierarchical approach using a pre-computed lattice planner, and the proposed algorithm is proved to guarantee resolution-completeness in such demanding environments. Finally, possible extensions are discussed. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2019
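Since the abstract formulates LTL revision as a graph search problem, the sketch below illustrates that formulation with a Dijkstra-style search over a graph whose edge weights encode revision costs. The graph encoding and cost model are assumptions made for illustration; the thesis's actual product-automaton construction is not reproduced here.

```python
import heapq

def min_cost_revision(graph, start, goals):
    """Dijkstra-style search for the cheapest revision, assuming the
    mission and environment have already been compiled into a graph
    whose edge weights encode the cost of relaxing (revising) that
    transition. graph: {node: [(neighbor, cost), ...]}."""
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in goals:
            return d                          # minimum total revision cost
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return None                               # no feasible revised mission
```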
433

Operations Analytics and Optimization for Unstructured Systems: Cyber Collaborative Algorithms and Protocols for Agricultural Systems

Puwadol Dusadeerungsikul (8782601) 01 May 2020
Food security is a major concern of human civilization. One way to ensure food security is to grow plants in a greenhouse under controlled conditions. Even under careful greenhouse production, stress can emerge in plants and cause damaging disease. To prevent yield loss, farmers apply resources, e.g., water, fertilizers, pesticides, higher/lower humidity, lighting, and temperature, uniformly across the infected areas. Research shows, however, that this practice leads to suboptimal profit and environmental protection.

Precision agriculture (PA) is an approach to address such challenges. It aims to apply the right amount of resources at the right time and place. PA has been able to maximize crop yield while minimizing operation cost and environmental damage. The problem is how to obtain timely, precise information at each location to treat the plants optimally. There is scant research addressing strategies, algorithms, and protocols for analytics in PA. Monitoring and treating systems are the foci of this dissertation.

The designed systems comprise agent- and system-level protocols and algorithms in four parts: (1) Collaborative Control Protocol for Cyber-Physical Systems (CCP-CPS); (2) Collaborative Control Protocol for Early Detection of Stress in Plants (CCP-ED); (3) Optimal Inspection Profit for Precision Agriculture; and (4) Multi-Agent System Optimization in Greenhouses for Treating Plants. CCP-CPS, the backbone of the system, establishes communication lines among agents. CCP-ED optimizes the local workflow and interactions of agents. Next, the Adaptive Search algorithm, a key algorithm in CCP-ED, is analyzed to obtain the optimal procedure. Lastly, when stressed plants are detected, specific agents are dispatched to treat plants at a particular location with a specific treatment.

Experimental results show that collaboration among agents statistically significantly improves performance in terms of cost, efficiency, and robustness. CCP-CPS stabilizes system operations and significantly improves both robustness and responsiveness. CCP-ED, by enabling collaboration among local agents, significantly improves the number of infected plants found and the system efficiency. The optimal Adaptive Search algorithm, which considers system errors and plant characteristics, significantly reduces operation cost while improving performance. Finally, with collaboration among agents, the system can effectively perform complex tasks that require multiple agents, such as treating stressed plants, at a significantly lower operation cost than current practice.
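As an illustration of the inspection trade-off studied in the third part, here is a hedged greedy sketch in Python that visits a greenhouse cell only when the expected loss avoided exceeds the cost of a visit. The field names, budget model, and greedy rule are assumptions, not the dissertation's actual protocol.

```python
def plan_inspection(cells, budget, cost_per_visit):
    """Greedy sketch: rank greenhouse cells by expected loss avoided
    (probability of stress times loss if untreated) and schedule visits
    while each visit's expected gain exceeds its cost and the budget
    allows. Field names and the rule itself are illustrative assumptions."""
    ranked = sorted(cells, key=lambda c: c["p_stress"] * c["loss"], reverse=True)
    plan, spent = [], 0.0
    for cell in ranked:
        gain = cell["p_stress"] * cell["loss"]
        if gain < cost_per_visit or spent + cost_per_visit > budget:
            break
        plan.append(cell["id"])
        spent += cost_per_visit
    return plan

# e.g. plan_inspection([{"id": "A1", "p_stress": 0.3, "loss": 40.0}], 10.0, 5.0)
```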
434

APPLYING MULTI AGENT SYSTEM TO TRACK UAV MOVEMENT

Shulin Li (8097878) 11 December 2019
The thesis introduces an innovative UAV detection system. The commercial UAV market is booming, and with it the risks and threats from improper UAV usage. Counter-UAS (CUAS) systems exist to protect the public and facilities, but there is a lack of an intelligent platform that can integrate many sensors for UAV detection. The hypothesis is that such a system can track a UAV's movement by applying a multi-agent system (MAS) to UAV route tracking. The experiments show that the multi-agent system benefits UAV tracking.
435

Multi-Agent Based Simulations in the Grid Environment

Mengistu, Dawit January 2007
The computational Grid has become an important infrastructure as an execution environment for scientific applications that require large amounts of computing resources. Applications that would otherwise be unmanageable, or take prohibitively long to execute under previous computing paradigms, can now be executed efficiently on the Grid within a reasonable time. Multi-agent based simulation (MABS) is a methodology used to study and understand the dynamics of real-world phenomena in domains involving interaction and/or cooperative problem solving, where the participants are entities with autonomous and social behaviour. For certain domains the size of the simulation is extremely large and intractable without adequate computing resources such as the Grid. Although the Grid has brought immense opportunities to resource-demanding applications such as MABS, it has also brought a number of performance challenges. Performance problems may originate in the computing infrastructure, in the application itself, or in both. This thesis aims at improving the performance of MABS applications by overcoming problems inherent in their behaviour. It also studies the extent to which MABS technologies have been exploited in the field of simulation and finds ways to adapt existing technologies to the Grid. It investigates performance monitoring and prediction systems in the Grid environment and their implementation for MABS applications, with the purpose of identifying application-related performance problems and their solutions. Our research shows that large-scale MABS applications have not been implemented, despite the fact that many problem domains cannot be studied properly with only partial simulation. We assume that this is due to the lack of appropriate tools, such as MABS platforms for the Grid. Another important finding of this work is the improvement of application performance through the use of MABS-specific middleware.
436

Decentralized Packet Clustering in Router-Based Networks

Merkle, Daniel, Middendorf, Martin, Scheidler, Alexander 26 October 2018
Different types of decentralized clustering problems have been studied so far for networks and multi-agent systems. In this paper we introduce a new type of decentralized clustering problem for networks. The so-called Decentralized Packet Clustering (DPC) problem is to find a clustering for the packets that are sent around in a network. The clustering has to be done by the routers, using only little computational power and a small amount of memory; no direct information transfer between the routers is allowed. We investigate the behavior of a new type of decentralized k-means algorithm, called DPClust, for solving the DPC problem. DPClust has some similarities with ant-based clustering algorithms. We investigate the behavior of DPClust for different clustering problems and for networks that consist of several subnetworks, where the amount of packet exchange between the subnetworks is limited. Networks with different connection topologies for the subnetworks are considered, and a dynamic situation where the packet exchange rates between the subnetworks vary over time is also investigated. The proposed DPC problem leads to interesting research problems for network clustering.
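To make the router-local constraint concrete, the following Python sketch shows one plausible DPClust-style step: a router assigns a passing packet to its nearest local centroid and nudges that centroid toward the packet, so cluster information spreads only via the packets themselves. The online-update rule and learning rate are assumptions; the paper's actual algorithm may differ.

```python
def router_update(centroids, packet_vec, lr=0.05):
    """One decentralized k-means step at a single router: find the
    nearest local centroid to the packet's feature vector and move it a
    small step toward the packet. Routers never exchange centroids
    directly, respecting the DPC constraint."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    i = min(range(len(centroids)), key=lambda j: dist2(centroids[j], packet_vec))
    centroids[i] = [c + lr * (p - c) for c, p in zip(centroids[i], packet_vec)]
    return i  # cluster label the router writes into the packet
```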
437

A framework for training Spiking Neural Networks using Evolutionary Algorithms and Deep Reinforcement Learning

Anirudh Shankar (10276349) 12 March 2021
In this work two novel frameworks for training Spiking Neural Networks (SNNs) are proposed and analyzed: one using evolutionary algorithms and another using Reinforcement Learning. A novel multi-agent evolutionary robotics (ER) framework, inspired by competitive evolutionary environments in nature, is demonstrated for training SNNs. The weights of a population of SNNs, along with morphological parameters of the bots they control in the ER environment, are treated as phenotypes. Rules of the framework select certain bots and their SNNs for reproduction and others for elimination, based on their efficacy in capturing food in a competitive environment. While the bots and their SNNs are given no explicit reward to survive or reproduce via any loss function, these drives emerge implicitly as they evolve to hunt food and survive within these rules. Their efficiency in capturing food as a function of generations exhibits the evolutionary signature of punctuated equilibria. Two evolutionary inheritance algorithms on the phenotypes, Mutation and Crossover with Mutation, along with their variants, are demonstrated. The performance of these algorithms is compared using ensembles of 100 experiments for each algorithm. We find that one of the Crossover with Mutation variants promotes 40% faster learning in the SNN than mere Mutation, with a statistically significant margin. Along with the evolutionary approach to training SNNs, we also describe a novel Reinforcement Learning (RL) framework using Proximal Policy Optimization to train an SNN for an image classification task. The experiments and results of the framework are then discussed, highlighting future directions of the work.
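The two inheritance operators have standard forms, sketched below in Python for SNN weight vectors treated as flat phenotypes. The uniform-crossover choice and the Gaussian mutation scale are assumptions; the thesis's exact variants are not specified here.

```python
import random

def mutate(weights, sigma=0.1):
    """Gaussian perturbation of an SNN weight vector (the phenotype).
    The mutation scale sigma is an illustrative assumption."""
    return [w + random.gauss(0, sigma) for w in weights]

def crossover_with_mutation(parent_a, parent_b, sigma=0.1):
    """Uniform crossover of two parents' weights followed by mutation,
    one plausible reading of the 'Crossover with Mutation' operator."""
    child = [a if random.random() < 0.5 else b
             for a, b in zip(parent_a, parent_b)]
    return mutate(child, sigma)
```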
438

Modeling and Implementing Multi-Agent Based Simulations on Massively Parallel Architectures

Hermellin, Emmanuel 18 November 2016
Multi-Agent Based Simulations (MABS) represent a relevant solution for the engineering and study of complex systems in numerous domains (artificial life, biology, economy, etc.). However, MABS sometimes require a lot of computational resources, which is a major constraint that restricts the possibilities of study for the considered models (scalability, expressiveness of the models, real-time interaction, etc.). Among the available technologies for High Performance Computing (HPC), GPGPU (General-Purpose computing on Graphics Processing Units) uses the massively parallel architectures of graphics cards (GPUs) as computing accelerators. However, while many areas benefit from GPGPU performance (meteorology, aerodynamics, molecular dynamics, finance, etc.), Multi-Agent Systems (MAS), and especially MABS, hardly enjoy the benefits of this technology: GPGPU is very little used and only a few works are interested in it. In fact, GPGPU comes with a very specific development context that requires a deep and non-trivial transformation of multi-agent models. So, despite the existence of pioneering works that demonstrate the interest of GPGPU, this difficulty explains the low popularity of GPGPU in the MAS community.

In this thesis, we show that, among the works which aim to ease the use of GPGPU in an agent context, most do so through a transparent use of this technology. However, this approach requires abstracting some parts of the models, which greatly limits the scope of the proposed solutions. To address this issue, and in contrast to existing solutions, we propose a hybrid approach (the execution of the simulation is shared between the processor and the graphics card) that focuses on accessibility and reusability through a modeling process that allows GPU programming to be used directly while simplifying its use. More specifically, this approach is based on a design principle, called GPU delegation of agent perceptions, which makes a clear separation between the agent behaviors, managed by the processor, and the environmental dynamics, handled by the graphics card. A major idea underlying this principle is to identify agent computations that can be reified into new structures (e.g., in the environment) in order to distribute the complexity of the code and modularize its implementation. The study of this principle, along with the different experiments conducted, shows the advantages of this approach from both a conceptual and a performance point of view. Therefore, we propose to generalize this approach and define a comprehensive modeling and implementation methodology, relying on GPU delegation, specifically adapted to the use of massively parallel architectures for MABS.
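A minimal Python sketch of the GPU-delegation split is given below, with NumPy standing in for the GPU kernel: the environment computes a perception field as one data-parallel operation, while each agent's behavior remains a small CPU-side read of that field. The diffusion dynamic and movement rule are illustrative assumptions, not the thesis's actual models.

```python
import numpy as np

def diffuse_field(field, rate=0.25):
    """Environmental dynamics as one data-parallel array operation; this
    is the part that would be delegated to a GPU kernel in a real
    deployment, with NumPy standing in for the GPU here."""
    return field + rate * (
        np.roll(field, 1, 0) + np.roll(field, -1, 0) +
        np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field)

def agent_step(pos, field):
    """Agent behavior stays on the CPU: the agent simply reads the
    precomputed perception field and moves to the strongest nearby cell."""
    x, y = pos
    rows, cols = field.shape
    neighbors = [((x + dx) % rows, (y + dy) % cols)
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    return max(neighbors, key=lambda p: field[p])
```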
439

Distribution of Control Effort in Multi-Agent Systems : Autonomous systems of the world, unite!

Axelson-Fisk, Magnus January 2020
As more industrial processes, transportation systems, and appliances have been automated or equipped with some level of artificial intelligence, the number and scale of interconnected systems has grown in the recent past. This development can be expected to continue, and research into the performance of interconnected systems and networks is therefore growing. Owing to increased automation and the sheer scale of networks, dynamically scaling networks is an expanding field, and research into scalable performance measures is advancing. Recently, gamma-robustness, a scalable network performance measure, was introduced to quantify an interconnected system's robustness with respect to external disturbances. This thesis investigates how the distribution of control effort and cost within an interconnected system affects network performance, measured by gamma-robustness. Further, we introduce a notion of fairness and a measurement of unfairness in order to quantify the distribution of network properties and performance. With these in place, we also present distributed algorithms with which the distribution of control effort can be adjusted to achieve a desired network performance. We close with some examples showing the strengths and weaknesses of the presented algorithms. / As more and more systems and devices are equipped with varying degrees of intelligence, interconnected systems, also known as multi-agent systems, are growing in both prevalence and scale. Examples include traffic management systems, control of electrical grids, and vehicle platooning, and sensor networks are appearing in ever more places as the Internet of Things and Industry 4.0 are adopted and developed. What distinguishes interconnected systems from more traditional systems with multiple inputs and outputs is that interconnected systems are not run from a central controller. Instead they are controlled in a distributed fashion, with each agent controlling itself, possibly pursuing individual goals. This complicates the analysis of interconnected systems, but earlier research has found rules and design principles for the agents and their interconnection that guarantee requirements such as stability and robustness. Yet even when interconnected systems are both robust and stable, they can have properties we want to control further. One such performance measure is the systems' resilience to external disturbances, and in ordinary unlinked systems there is an inherent trade-off between control cost and resilience to external disturbances. The same trade-off appears in interconnected systems, but there it gains an additional dimension: a given level of network performance does not necessarily mean that every agent in the network shares that level, so the agents may face different trade-offs between control cost and resilience. Consequently, some agents may carry unnecessarily high control costs, in the sense that the network would achieve the same performance at lower cost if several agents reweighted their control efforts. In this thesis we have studied how different choices of control effort affect an interconnected system's performance, in order to examine how autonomous but interconnected agents can change their control effort while maintaining network performance, and thereby reduce their control costs. Among other results, this has produced a distributed algorithm for adjusting the agents' control effort so that the differences in the agents' resilience to external disturbances decrease and the network performance increases. We close the report with a couple of examples of how systems tuned with the proposed algorithm gain performance, followed by a discussion of how certain assumptions about system structure could be relaxed and which areas future research could pursue.
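As a rough illustration of such a distributed algorithm, the Python sketch below shifts control effort across each network link toward the endpoint with the worse local robustness score. The update rule is purely an assumption made for illustration; the thesis's algorithm and its gamma-robustness computation are not reproduced here.

```python
def rebalance_step(gamma, effort, edges, step=0.1):
    """One round of a distributed rebalancing sketch: across every link,
    control effort shifts toward the endpoint whose local robustness
    score (gamma, higher assumed better) is worse, evening out
    disturbance resilience over the network."""
    new_effort = dict(effort)
    for i, j in edges:                         # each undirected link once
        delta = step * (gamma[j] - gamma[i])   # > 0 when i is less robust
        new_effort[i] += delta
        new_effort[j] -= delta
    return new_effort
```

Each agent only needs its own score and its neighbors' scores, so the update can run locally at every node, which is the sense in which such an algorithm is distributed.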
440

A Multi-Agent System with Negotiation Agents for E-Trading of Securities

Bahar Shanjani, Mina January 2014
Financial markets have started to become decentralized and even distributed: consumers can now purchase stocks from their home computers without a traditional broker. The dynamism and unpredictability of this domain, which is continuously growing in complexity, together with the giant volume of information that can affect the market, make it one of the best potential domains in which to take advantage of agents. This thesis considers the main concerns of the securities e-trading area in order to highlight the advantages and disadvantages of multi-agent negotiating systems for online trading of securities compared to single-agent systems. It then presents a multi-agent system design named MASTNA that covers both decision making and negotiating. The design seeks to address the main concerns of securities e-trading, such as speed, accuracy, and handling complexity. MASTNA works over a distributed market and engages different types of agents to perform different tasks. For handling negotiations, MASTNA takes advantage of mobile negotiator agents, with the purpose of conducting parallel negotiations over an unreliable network (the Internet).
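One common way a mobile negotiator agent could conduct a bilateral negotiation is alternating offers with fixed concessions, sketched below in Python. The protocol, parameters, and function name are assumptions; MASTNA's actual negotiation mechanism is not specified in the abstract.

```python
def negotiate(buyer_limit, seller_limit, rounds=20, concession=0.3):
    """Alternating-offers sketch: both sides concede a fixed fraction of
    the gap to their private limit each round until the offers cross.
    Opening offers and concession rate are illustrative assumptions."""
    bid, ask = 0.0, 2.0 * seller_limit         # arbitrary opening offers
    for _ in range(rounds):
        bid += concession * (buyer_limit - bid)
        ask -= concession * (ask - seller_limit)
        if bid >= ask:                         # offers crossed: strike a deal
            return round((bid + ask) / 2, 2)
    return None                                # walk away without agreement

# e.g. negotiate(100.0, 80.0) settles near the midpoint of the two limits
```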
