101

Multi-Agent Potential Field based Architectures for Real-Time Strategy Game Bots

Hagelbäck, Johan January 2012 (has links)
Real-Time Strategy (RTS) is a sub-genre of strategy games that runs in real time, typically in a war setting. The player uses workers to gather resources, which in turn are used for constructing new buildings, training combat units, building upgrades, and doing research. The game is won when all buildings of the opponent(s) have been destroyed. The numerous tasks that must be handled in real time can be very demanding for a player. Computer players (bots) for RTS games face the same challenges, and must also navigate units in highly dynamic game worlds and handle other low-level tasks such as attacking enemy units within fire range. This thesis is a compilation grouped into three parts. The first part deals with navigation in dynamic game worlds, which can be a complex and resource-demanding task typically solved with pathfinding algorithms. We investigate an alternative approach based on Artificial Potential Fields (APF) and show how an APF-based navigation system can be used without any need for pathfinding algorithms. In RTS games players usually have limited visibility of the game world, known as Fog of War, while bots often have complete visibility to help the AI make better decisions. We show that a Multi-Agent PF based bot with limited visibility can match and even surpass bots with complete visibility in some RTS scenarios. We also show how the bot can be extended and used in a full RTS scenario with base building and unit construction. In the second part we propose a flexible and expandable RTS game architecture that can be modified at several levels of abstraction to test different techniques and ideas. The proposed architecture is implemented in the well-known RTS game StarCraft, and we show how the high-level architecture goals of flexibility and expandability can be achieved. In the last part we present two studies related to gameplay experience in RTS games. In games players usually have to select a static difficulty level when playing against computer opponents. In the first study we use a bot that can adapt its difficulty level at runtime to the skill of its opponent, and study how this affects the perceived enjoyment and variation of playing against the bot. To create bots that are interesting and challenging for human players, a common goal is to make bots play in a more human-like manner. In the second study we asked participants to watch replays of recorded RTS games between bots and human players, to guess whether each player was controlled by a human or a bot, and to motivate their answer. This information was then used to identify human-like and bot-like characteristics of RTS game players.
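The potential-field navigation idea the first part builds on can be sketched in a few lines: each unit descends a field that is the sum of an attraction toward its goal and repulsions from obstacles, so no path is ever planned. The force shapes and constants below are illustrative assumptions, not the thesis's tuned fields.

```python
import math

def apf_step(unit, goal, obstacles, step=1.0):
    """One navigation step on a summed potential field.

    The unit is attracted to the goal and repelled by obstacles; it
    simply moves one step in the direction of steepest descent, with
    no pathfinding at all. Illustrative field shapes, not the thesis's.
    """
    ux, uy = unit
    # Attractive force: proportional to the vector toward the goal.
    fx = goal[0] - ux
    fy = goal[1] - uy
    # Repulsive forces: inverse-square falloff from each obstacle
    # (the constant 10.0 is an arbitrary illustrative weight).
    for ox, oy in obstacles:
        dx, dy = ux - ox, uy - oy
        d = math.hypot(dx, dy) or 1e-9
        fx += 10.0 * dx / d**3
        fy += 10.0 * dy / d**3
    # Move one step along the normalized force vector.
    norm = math.hypot(fx, fy) or 1e-9
    return (ux + step * fx / norm, uy + step * fy / norm)
```

Repeating `apf_step` moves a unit around an offset obstacle and toward its goal without any planned route, which is the property the thesis exploits in dynamic game worlds.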
102

The Fern algorithm for intelligent discretization

Hall, John Wendell 06 November 2012 (has links)
This thesis proposes and tests a recursive, adaptive, and computationally inexpensive method for partitioning real-number spaces. When tested for proof of concept on both one- and two-dimensional classification and control problems, the Fern algorithm was found to work well in one dimension, moderately well for two-dimensional classification, and not at all for two-dimensional control. Testing ferns as pure discretizers, which would involve a secondary discrete learner, has been left to future work.
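The general flavour of recursive, adaptive partitioning can be sketched as follows. This is a generic midpoint-splitting discretizer, not the Fern algorithm itself (whose details are in the thesis): an interval is split only where the data is dense enough, so the bins adapt to the sample distribution.

```python
def recursive_partition(points, lo, hi, max_depth=4, min_points=2):
    """Recursively partition [lo, hi) into bins adapted to the data.

    A generic sketch of recursive adaptive discretization: split an
    interval at its midpoint, recursing only while it still contains
    enough points and the depth budget is not exhausted.
    """
    inside = [p for p in points if lo <= p < hi]
    if max_depth == 0 or len(inside) < min_points:
        return [(lo, hi)]
    mid = (lo + hi) / 2
    return (recursive_partition(inside, lo, mid, max_depth - 1, min_points)
            + recursive_partition(inside, mid, hi, max_depth - 1, min_points))
```

Dense regions of the input end up covered by narrow bins while sparse regions keep coarse ones, which is the point of intelligent discretization over a fixed uniform grid.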
103

Dynamic Credibility Threshold Assignment in Trust and Reputation Mechanisms Using PID Controller

2015 July 1900 (has links)
In online shopping, buyers do not have enough information about sellers and cannot inspect products before purchasing them. To help buyers find reliable sellers, online marketplaces deploy Trust and Reputation Management (TRM) systems. These systems aggregate buyers' feedback about the sellers they have interacted with and the products they have purchased, to inform users of the marketplace about sellers and products before they make purchases. Positive customer feedback has thus become a valuable asset for each seller in attracting more business. This naturally creates incentives for cheating by introducing fake positive feedback. An important responsibility of TRM systems is therefore to help buyers find genuine feedback (reviews) about different sellers. Recent TRM systems achieve this goal by selecting and assigning credible advisers to any new customer/buyer. These advisers are selected among buyers who have had experience with a number of sellers and have provided feedback on their services and goods. As people differ in their tastes, the most useful buyer feedback comes from advisers with similar tastes and values. In addition, advisers should be honest, i.e. provide truthful reviews and ratings, and not malicious, i.e. not collude with sellers to favour them or with other buyers to badmouth some sellers. Defining the boundary between dishonest and honest advisers is very important; however, there is currently no systematic approach for setting the honesty threshold that divides benevolent advisers from malicious ones. This thesis addresses this problem and proposes a market-adaptive honesty threshold management mechanism. In this mechanism the TRM system forms a feedback system that monitors the current status of the e-marketplace and improves performance accordingly, using a PID controller from the field of control systems. The responsibility of this controller is to set a suitable value of the honesty threshold. The results of experiments using simulation and a real-world dataset show that the market-adaptive honesty threshold makes it possible to optimize the performance of the marketplace with respect to throughput and buyer satisfaction.
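The feedback loop described above can be sketched as a textbook discrete PID update driving a measured marketplace signal toward a setpoint by nudging the honesty threshold each market cycle. The gains, setpoint, and the choice of buyer satisfaction as the monitored signal are illustrative assumptions, not values from the thesis.

```python
class PIDThreshold:
    """Market-adaptive honesty threshold via a discrete PID controller.

    A minimal sketch: the controller drives measured buyer satisfaction
    toward a setpoint by adjusting the threshold that separates
    benevolent advisers from malicious ones. All numbers are
    illustrative.
    """

    def __init__(self, setpoint=0.9, kp=0.5, ki=0.1, kd=0.05, threshold=0.5):
        self.setpoint = setpoint
        self.kp, self.ki, self.kd = kp, ki, kd
        self.threshold = threshold
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, satisfaction):
        """One market cycle: measure satisfaction, adjust the threshold."""
        error = self.setpoint - satisfaction
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        # Raise the threshold when satisfaction is below target (fewer
        # advisers qualify as honest), lower it otherwise.
        self.threshold += (self.kp * error
                           + self.ki * self.integral
                           + self.kd * derivative)
        self.threshold = min(1.0, max(0.0, self.threshold))
        return self.threshold
```

Low satisfaction tightens the threshold and high satisfaction relaxes it, which is the market-adaptive behaviour the mechanism aims for.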
104

ROMIE, an instance-based ontology alignment approach

Elbyed, Abdeltif 16 October 2009 (has links) (PDF)
Semantic interoperability is an important issue, widely identified in information and organization technologies and in the information systems research community. The broad adoption of the Web for accessing distributed information requires the interoperability of the systems that manage this information. Solutions and initiatives such as the Semantic Web facilitate the localization and integration of data in a more intelligent way through ontologies, offering a more semantic and comprehensible vision of the Web. It nevertheless raises a number of research challenges, one of the main ones being to compare and align the different ontologies that appear in integration tasks. The main objective of this thesis is to propose an alignment approach for identifying correspondence links between ontologies. Our approach combines linguistic, syntactic, structural, and semantic (instance-based) matching techniques and methods. It consists of two main phases: a semantic-enrichment phase for the ontologies to be compared, and an alignment (mapping) phase. The enrichment phase is based on analyzing the information that the ontologies develop (web resources, data, documents, etc.) and that is associated with the ontologies' concepts. Our intuition is that this information, together with the relations that may exist among its elements, contributes to the semantic enrichment between concepts. At the end of the enrichment phase, an ontology contains more semantic relations between concepts, which are exploited in the second phase. The mapping phase takes two enriched ontologies and computes the similarity between pairs of concepts. A filtering process automatically reduces the number of false correspondences. Correspondence validation is an interactive process, either direct (with an expert) or indirect (by measuring the user's degree of satisfaction). Our approach has led to a mapping system called ROMIE (Resource based Ontology Mapping within an Interactive and Extensible environment). It has been experimented with and evaluated in two different applications: a biomedical application and an application in the field of technology-enhanced learning (e-learning).
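The instance-based part of the mapping phase can be sketched as follows: concepts that share many associated instances (documents, resources) are likely correspondences, and a threshold filter prunes weak candidates. Jaccard overlap and the threshold value are illustrative choices; ROMIE combines instance evidence with linguistic, syntactic, and structural measures.

```python
def instance_similarity(instances_a, instances_b):
    """Instance-based similarity between two ontology concepts.

    Jaccard overlap of the instance sets attached to each concept
    (an illustrative measure, not ROMIE's exact formula).
    """
    a, b = set(instances_a), set(instances_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def align(ontology_a, ontology_b, threshold=0.5):
    """Return candidate concept mappings above a similarity threshold.

    Each ontology is modeled as {concept_name: instance_list}; the
    threshold stands in for the filtering process that prunes false
    correspondences.
    """
    return [
        (ca, cb, s)
        for ca, ia in ontology_a.items()
        for cb, ib in ontology_b.items()
        if (s := instance_similarity(ia, ib)) >= threshold
    ]
```

Two concepts annotated with largely the same documents are proposed as a correspondence, while concepts with disjoint instance sets are filtered out.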
105

A multi-agent architecture for the detection, recognition, and identification of targets

Ealet, Fabienne 25 June 2001 (has links) (PDF)
The target detection, recognition, and identification (DRI) function requires the integration of large amounts of heterogeneous knowledge. This leads us toward architectures that distribute knowledge and allow processing to run in parallel and concurrently. We propose a multi-agent approach based on specialized agents that follow the principles of incrementality, distribution, cooperation, focusing, and adaptation. The architecture is multi-threaded, includes an administrator, and specifies the means of communication between agents. Agents are defined by their role, their behaviors, and the information they manipulate and produce. At any given time, different agents coexist in the image. Each has the autonomy to access the data and develop its own strategy according to the available information. Planning is handled locally at the agent level. The necessary knowledge is specified in a knowledge base shared by all agents, and the acquired information is stored in a world model. The system builds itself up and is enriched over time, which requires an incremental strategy for updating hypotheses. This modeling is done with Bayesian networks.
106

Seniority as a Metric in Reputation Systems for E-Commerce

Cormier, Catherine 19 July 2011 (has links)
In order to succeed, it is imperative that e-commerce systems include an effective and reliable trust and reputation modeling system. This is particularly true of decentralized e-commerce systems in which autonomous software agents engage in commercial transactions. Many researchers have sought to overcome the complexities of modeling a subjective, human concept like trust, resulting in several trust and reputation models. While each of these models presents a unique solution to the problem, several issues persist. Most of the models require direct experience in the e-commerce system in order to make effective trust decisions, which leaves new agents, and agents who use the e-commerce system only casually, vulnerable. Additionally, the reputation ratings of agents who are relatively new to the system are often indistinguishable from the scores of poorly performing agents. Finally, more tactics are required to defend against agents who exploit the characteristics of the open, distributed system for their own malicious ends. To address these issues, a new metric is devised and presented: seniority. Based on agent age and activity level within the e-commerce system, seniority provides a means of judging the credibility of other agents with little or no prior experience in the system. As the results of the experimental analysis reveal, employing a reputation model that uses seniority provides considerable value to new agents, casual buyer agents, and all other purchasing agents in the e-commerce system. This new metric therefore offers a significant contribution toward the development of new and enhanced trust and reputation models for deployment in real-world distributed e-commerce environments.
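The idea of the metric can be sketched directly from the abstract: seniority combines an agent's age and activity level, and adviser reports are then weighted by it. The capped linear combination and equal weights below are illustrative assumptions, not the thesis's exact formula.

```python
def seniority(age_days, transactions, max_age=365.0, max_activity=100.0):
    """Seniority of an agent from its age and activity level.

    A hedged sketch of the metric's idea: both how long an agent has
    been in the system and how actively it has participated contribute,
    each capped so that neither component dominates. Caps and the
    50/50 weighting are illustrative.
    """
    age_score = min(age_days / max_age, 1.0)
    activity_score = min(transactions / max_activity, 1.0)
    return 0.5 * age_score + 0.5 * activity_score

def weighted_reputation(ratings_with_seniority):
    """Aggregate adviser ratings weighted by adviser seniority.

    Reports from established, active agents count for more than those
    from newcomers (illustrative aggregation).
    """
    total = sum(s for _, s in ratings_with_seniority)
    if total == 0:
        return 0.0
    return sum(r * s for r, s in ratings_with_seniority) / total
```

A brand-new agent can use `weighted_reputation` without any direct experience of its own, which is the vulnerability the metric is designed to address.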
107

Consensus in multi-agent systems and bilateral teleoperation with communication constraints

Wu, Jian 01 March 2013 (has links)
With the advancement of communication technology, more and more control processes take place in networked environments. This makes it possible to deploy multiple systems in a spatially distributed way so that they can accomplish certain tasks collaboratively. While this brings numerous advantages over conventional control, challenges arise at the same time from the imperfections of communication. This thesis aims to solve some problems in cooperative control involving multiple agents in the presence of communication constraints. It comprises two main parts: distributed consensus in multi-agent systems, and bilateral teleoperation. Chapters 2 to 4 deal with the consensus problem in multi-agent systems. Our goal is to design appropriate control protocols such that the states of a group of agents eventually converge to a common value. The robustness of multi-agent systems against various adverse factors in communication is our central concern. Chapter 5 addresses bilateral teleoperation with time delays: the task is to design control laws such that synchronization is reached between the master plant and the slave plant, while transparency is maintained at an acceptable level. Chapter 2 investigates the consensus problem in a multi-agent system with a directed communication topology. The time delays are modeled as a Markov chain, so that more characteristics of the delays are taken into account. A delay-dependent approach is proposed to design the Laplacian matrix such that the system is robust against stochastic delays. The consensus problem is converted into stabilization of its equivalent error dynamics, and mean square stability is employed to characterize its convergence property. One feature of Chapter 2 is the redesign of the adjacency matrix, which makes it possible to adjust communication weights dynamically. In Chapter 3, average consensus in single-integrator agents with time-varying delays and random data losses is studied. The interaction topology is assumed to be undirected. The communication constraints lie in two aspects: 1) time-varying delays that are non-uniform and bounded; and 2) data losses governed by Bernoulli processes with non-uniform probabilities. By considering the upper bounds of the delays and the probabilities of packet dropouts, sufficient conditions are developed to guarantee that the multi-agent system achieves consensus. Chapter 4 is concerned with the consensus problem with double-integrator dynamics and non-uniform sampling. The communication topology is assumed to be fixed and directed. With the adoption of time-varying control gains and the theory of stochastic matrices, we prove that when the graph has a directed spanning tree and the control gains are properly selected, consensus is reached. Chapter 5 deals with bilateral teleoperation with probabilistic time delays, drawn from a finite set in which each element has a probability of occurrence. After defining the tracking error between the master and the slave, input-to-state stability is used to characterize the system performance. By taking the probabilistic information in the time delays into account and using the pole placement technique, the teleoperation system achieves better position tracking and enhanced transparency.
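The consensus protocols these chapters build on can be sketched in their simplest, delay-free form: each agent repeatedly moves toward the average of its neighbors, and on a connected undirected graph the states converge to the initial average. The thesis's contribution is making this family of protocols robust to Markovian delays, packet losses, and sampling; the plain update below is the baseline.

```python
def consensus_step(x, neighbors, epsilon=0.1):
    """One synchronous step of the standard consensus protocol
    x_i <- x_i + eps * sum over neighbors j of (x_j - x_i).

    A minimal delay-free sketch of the protocol family; `neighbors[i]`
    lists the indices of agent i's neighbors in an undirected graph.
    """
    return [
        xi + epsilon * sum(x[j] - xi for j in nbrs)
        for xi, nbrs in zip(x, neighbors)
    ]

# Undirected line graph on three agents: 0 -- 1 -- 2.
neighbors = [[1], [0, 2], [1]]
x = [0.0, 3.0, 6.0]
for _ in range(200):
    x = consensus_step(x, neighbors)
```

Because the graph is undirected, the sum of the states is preserved at every step, so the common value the agents reach is the average of the initial states (here 3.0) — the "average consensus" studied in Chapter 3.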
108

Consensus analysis of networked multi-agent systems with second-order dynamics and Euler-Lagrange dynamics

Mu, Bingxian 30 May 2013 (has links)
Consensus is a central issue in designing multi-agent systems (MASs), and how to design control protocols under given communication topologies is the key to solving consensus problems. This thesis focuses on investigating consensus protocols under different scenarios: (1) second-order system dynamics with Markov time delays; (2) Euler-Lagrange dynamics with uniform and non-uniform sampling strategies and an event-based control strategy. Chapter 2 focuses on the consensus problem of multi-agent systems with random delays governed by a Markov chain. For second-order dynamics under the sampled-data setting, we first convert the consensus problem to the stability analysis of the equivalent error system dynamics. By designing a suitable Lyapunov function and deriving a set of linear matrix inequalities (LMIs), we analyze the mean square stability of the error system dynamics with a fixed communication topology. Since the transition probabilities of a Markov chain are sometimes only partially known, we propose a method of estimating the delay at the next sampling instant, and explicitly give a lower bound on the probability for the delay estimation that can ensure the stability of the error system dynamics. Finally, by applying an augmentation technique, we convert the error system dynamics into a delay-free stochastic system, and a sufficient condition is established to guarantee the consensus of networked multi-agent systems with switching topologies. Simulation studies for a fleet of unmanned vehicles verify the theoretical results. In Chapter 3, we propose consensus control protocols involving both position and velocity information for MASs with linearized Euler-Lagrange dynamics, under uniform and non-uniform sampling schemes, respectively. We then extend the results to the case of a centralized event-triggered strategy and analyze the corresponding consensus property. Simulation examples and comparisons verify the effectiveness of the proposed methods.
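The event-based strategy mentioned above can be illustrated with a toy single-integrator version (the thesis works with Euler-Lagrange dynamics; the threshold, gain, and trigger rule here are illustrative assumptions): agents run the consensus update on the last *broadcast* states, and an agent rebroadcasts only when its true state has drifted past a threshold, saving communication.

```python
def event_triggered_consensus(x, neighbors, threshold=0.5,
                              epsilon=0.1, steps=300):
    """Event-triggered consensus sketch on single-integrator agents.

    Each agent's controller uses only the last broadcast value of every
    agent; a new broadcast (an "event") is triggered when an agent's
    true state drifts more than `threshold` from its last broadcast.
    Returns the final states and the number of triggered events.
    """
    broadcast = list(x)  # last transmitted states
    events = 0
    for _ in range(steps):
        # Trigger rule: rebroadcast when drift exceeds the threshold.
        for i in range(len(x)):
            if abs(x[i] - broadcast[i]) > threshold:
                broadcast[i] = x[i]
                events += 1
        # Control law uses broadcast values only.
        x = [
            xi + epsilon * sum(broadcast[j] - broadcast[i] for j in nbrs)
            for i, (xi, nbrs) in enumerate(zip(x, neighbors))
        ]
    return x, events
```

Compared with broadcasting at every step, only a handful of events are needed to bring the states close together, which is the appeal of event-based control under communication constraints.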
109

Trust Logics and Their Horn Fragments : Formalizing Socio-Cognitive Aspects of Trust

Nygren, Karl January 2015 (has links)
This thesis investigates logical formalizations of Castelfranchi and Falcone's (C&F) theory of trust [9, 10, 11, 12]. The C&F theory defines trust as an essentially mental notion, which makes it particularly well suited for formalization in multi-modal logics of beliefs, goals, intentions, actions, and time. Three different multi-modal logical formalisms intended for multi-agent systems are compared and evaluated along two lines of inquiry. First, I propose formal definitions of key concepts of the C&F theory of trust and prove some important properties of these definitions; the proven properties are then compared to the informal characterisation of the C&F theory. Second, the logics are used to formalize a case study involving an Internet forum, and their performance in the case study constitutes grounds for a comparison. The comparison indicates that an accurate modelling of time, and of the interaction of time and goals in particular, is integral to formal reasoning about trust. Finally, I propose a Horn fragment of the logic of Herzig, Lorini, Hubner, and Vercouter [25]. The Horn fragment is shown to be too restrictive to accurately express the considered case study.
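The computational appeal of a Horn fragment can be shown with a toy propositional sketch: if every rule is a Horn clause (a set of body atoms implying one head atom), entailment reduces to a simple forward-chaining fixpoint. The atoms below loosely echo the C&F decomposition of trust into competence and willingness beliefs; they are illustrative stand-ins, not the modal language of the thesis.

```python
def forward_chain(facts, rules):
    """Forward chaining over propositional Horn clauses.

    Each rule is a pair (body, head) with `body` a set of atoms and
    `head` a single atom; derivation is a fixpoint computation, which
    is what makes Horn fragments computationally attractive.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

# Illustrative trust rules in the spirit of the C&F theory:
# competence belief + willingness belief yields trust, and trust plus
# a goal yields delegation.
rules = [
    ({"believes_competent(a,b,t)", "believes_willing(a,b,t)"},
     "trusts(a,b,t)"),
    ({"trusts(a,b,t)", "has_goal(a,g)"}, "delegates(a,b,t)"),
]
facts = {"believes_competent(a,b,t)", "believes_willing(a,b,t)",
         "has_goal(a,g)"}
```

What the fragment gains in tractability it loses in expressiveness — which is exactly the trade-off the thesis finds too restrictive for the forum case study.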
110

A Targeting Approach To Disturbance Rejection In Multi-Agent Systems

Liu, Yining January 2012 (has links)
This thesis focuses on deadbeat disturbance rejection for discrete-time linear multi-agent systems. The multi-agent systems on which Spieser and Shams' decentralized deadbeat output regulation problem is based are extended by including disturbance agents. Specifically, we assume that one or more disturbance agents interact with the plant agents in some known manner. The disturbance signals are assumed to be unmeasured and, for simplicity, constant. Control agents are introduced to interact with the plant agents, and each control agent is assigned a target plant agent. The goal is to drive the outputs of all plant agents to zero in finite time, despite the presence of the disturbances. In the decentralized deadbeat output regulation problem, two analysis schemes were introduced: targeting analysis, which determines whether control laws can be found to regulate not all the agents but only the target agents; and growing analysis, which determines the behaviour of all the non-target agents when the control laws are applied. In this thesis these two analyses are adapted to the deadbeat disturbance rejection problem. A new necessary condition for successful disturbance rejection is derived, namely that a control agent must be connected to the same plant agent to which a disturbance agent is connected. This result puts a lower bound on the number of control agents and constrains their locations. Then, given the premise that both the targeting and growing analyses succeed in the special case where the disturbances are all ignored, a new control approach based on the idea of integral control and the regulation methods of Spieser and Shams is proposed for the linear case. Preliminary studies show that this approach is also suitable for some nonlinear systems.
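The role of integral control in rejecting an unmeasured constant disturbance can be sketched on a single scalar plant (the thesis works on networks of agents with targeting constraints; the plant pole, gain, and horizon below are illustrative assumptions): the control agent accumulates the plant output, and the integral action builds up exactly the offset needed to cancel the disturbance, driving the output to zero.

```python
def simulate_integral_rejection(d=2.0, ki=0.5, steps=60):
    """Integral control rejecting a constant unmeasured disturbance.

    Scalar sketch: plant y' = 0.5*y + u + d with integral control
    u = -ki * sum of past outputs. The integral term converges to d/ki,
    cancelling the disturbance, so the output converges to zero.
    Returns the final output.
    """
    y = 0.0         # plant output
    integral = 0.0  # control agent's accumulated output
    for _ in range(steps):
        integral += y
        u = -ki * integral      # integral control law
        y = 0.5 * y + u + d     # stable plant + control + disturbance
    return y
```

Without the integral term the output would settle at the nonzero offset 2*d; with it, the output is driven to zero regardless of the (unknown) value of d, which is the property the proposed approach exploits.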
