About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Combining coordination mechanisms to improve performance in multi-robot teams

Nasroullahi, Ehsan. 09 March 2012
Coordination is essential to achieving good performance in cooperative multiagent systems. To date, most work has focused on either implicit or explicit coordination mechanisms, while relatively little work has examined the benefits of combining the two approaches. In this work we demonstrate that combining explicit and implicit mechanisms can significantly improve coordination and system performance over either approach alone. First, we use difference evaluations (which aim to compute an agent's contribution to the team) and stigmergy to promote implicit coordination. Second, we introduce an explicit coordination mechanism dubbed Intended Destination Enhanced Artificial State (IDEAS), in which an agent incorporates other agents' intended destinations directly into its own state. The IDEAS approach does not require any formal negotiation between agents and is based on passive information sharing. Finally, we combine these two approaches on a variant of a team-based multi-robot exploration domain, and show that agents using both explicit and implicit coordination outperform other learning agents by up to 25%. / Graduation date: 2012
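For context, the difference evaluation mentioned in the abstract has a standard form in the multiagent learning literature: an agent is rewarded with its marginal contribution to the global objective, D_i = G(z) - G(z_-i), the system evaluation with and without agent i. Below is a minimal sketch, assuming a toy exploration objective that counts points of interest observed by at least one robot; the objective and all names are illustrative, not the thesis's exact formulation.

```python
def G(observations):
    """Global evaluation: number of points of interest seen by at least one robot."""
    seen = set()
    for poi_set in observations.values():
        seen |= poi_set
    return len(seen)

def difference_evaluation(observations, agent):
    """D_i = G(z) - G(z_-i): the agent's marginal contribution to the team."""
    without = {a: p for a, p in observations.items() if a != agent}
    return G(observations) - G(without)

# r2 is the only robot observing POI 3, so D(r2) = 1;
# r1 merely duplicates r0's coverage, so D(r1) = 0.
obs = {"r0": {1, 2}, "r1": {1, 2}, "r2": {2, 3}}
for robot in obs:
    print(robot, difference_evaluation(obs, robot))
```

The appeal of this signal is that it is zero for redundant behavior, which is exactly the pressure toward implicit coordination the abstract describes.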

Addressing the Issues of Coalitions and Collusion in Multiagent Systems

Kerr, Reid C. January 2013
In the field of multiagent systems, trust and reputation systems are intended to assist agents in finding trustworthy partners with whom to interact. Our earlier work identified, in theory, a number of security vulnerabilities in trust and reputation systems: weaknesses that malicious agents might exploit to bypass the protections such systems offer. In this work, we begin by developing the TREET testbed, a simulation platform that allows extensive evaluation of and flexible experimentation with trust and reputation technologies. We use this testbed to experimentally validate the practicality and gravity of attacks against these vulnerabilities. Of particular interest are attacks that are collusive in nature: groups of agents (coalitions) working together to improve their expected rewards. The issue of coalitions is not unique to trust and reputation, however; it cuts across a range of fields in multiagent systems and beyond. In some scenarios coalitions may be unwanted or forbidden; in others they may be benign or even desirable. In this document, we propose a method for detecting coalitions and identifying coalition members, a capability likely to be valuable in many of the diverse fields where coalitions are of interest. Our method uses clustering in benefit space (a high-dimensional space reflecting how agents benefit others in the system) to identify groups of agents who benefit similar sets of agents. A statistical technique is then used to identify which clusters contain coalitions. Experimentation using the TREET platform verifies the effectiveness of this approach. We also introduce a series of enhancements to our method that improve the accuracy and robustness of the algorithm. To demonstrate how this broadly applicable tool can address domain-specific problems, we return to trust and reputation systems: we show how incorporating our work into one such system (the existing Beta Reputation System) provides resistance to collusion. We conclude with a detailed discussion of the value of our work for a wide range of environments, including a variety of multiagent systems and real-world settings.
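As a rough illustration of the benefit-space idea (not Kerr's exact algorithm), each agent can be represented as a vector of the benefit it confers on every other agent; those vectors are clustered, and clusters whose members concentrate benefit on one another far more than the population baseline are flagged. The fixed threshold below is a crude stand-in for the statistical technique the thesis describes, and all parameters are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_coalitions(benefit, n_clusters=3, threshold=2.0):
    """benefit[i, j] = benefit agent i has conferred on agent j.
    Cluster agents by whom they benefit; flag clusters whose internal
    benefit far exceeds the overall average."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(benefit)
    suspicious = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        if len(members) < 2:
            continue
        internal = benefit[np.ix_(members, members)].mean()
        if internal > threshold * benefit.mean():  # stand-in for a significance test
            suspicious.append(members.tolist())
    return suspicious

rng = np.random.default_rng(0)
b = rng.random((12, 12))
b[:4, :4] += 3.0  # agents 0-3 boost each other: a planted coalition
print(detect_coalitions(b))  # expected to flag [0, 1, 2, 3]
```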

Promoting Honesty in Electronic Marketplaces: Combining Trust Modeling and Incentive Mechanism Design

Zhang, Jie. 11 May 2009
This thesis work is in the area of modeling trust in multi-agent systems: systems of software agents designed to act on behalf of users (buyers and sellers) in applications such as e-commerce. The focus is on developing an approach for buyers to model the trustworthiness of sellers in order to make effective decisions about which sellers to select for business. One challenge is the problem of unfair ratings, which arises when modeling the trust of sellers relies on ratings provided by other buyers (called advisors). Existing approaches for coping with this problem fail in scenarios where the majority of advisors are dishonest, buyers have little personal experience with sellers, advisors try to flood the trust modeling system with unfair ratings, or sellers vary their behavior widely. We propose a novel personalized approach for effectively modeling the trustworthiness of advisors, allowing a buyer to: 1) model the private reputation of an advisor based on their ratings for commonly rated sellers; 2) model the public reputation of the advisor based on all ratings for the sellers ever rated by that agent; and 3) flexibly weight the private and public reputations into one combined measure of the advisor's trustworthiness. Our approach tracks ratings according to their time windows and limits the ratings accepted, in order to cope with advisors flooding the system and to deal with changes in agents' behavior. Experimental evidence demonstrates that our model outperforms other models in detecting dishonest advisors and is able to help buyers gain the largest profit when doing business with sellers. Equipped with this richer method for modeling the trustworthiness of advisors, we then embed this reasoning into a novel trust-based incentive mechanism to encourage agents to be honest. In this mechanism, buyers select the most trustworthy advisors as their neighbors, from whom they can ask advice about sellers, forming a social network. In contrast with other researchers, we also have sellers model the reputation of buyers. Sellers offer better rewards to satisfy buyers that are well respected in the social network, in order to build their own reputation. We provide the precise formulae sellers use when reasoning about immediate and future profit to determine their bidding behavior and the rewards offered to buyers, and we emphasize the importance for buyers of adopting a strategy that limits the number of sellers considered for each good to be purchased. We theoretically prove that our mechanism promotes honesty from buyers in reporting seller ratings and honesty from sellers in delivering products as promised. We also provide a series of experimental results in a simulated dynamic environment where agents may be arriving and departing; this provides a stronger defense of the mechanism as one that is robust to important conditions in the marketplace. Our experiments clearly show the gains in profit enjoyed by both honest sellers and honest buyers when our mechanism is introduced and our proposed strategies are followed. In general, our research serves to promote honesty amongst buyers and sellers in e-marketplaces. Our proposal of allowing sellers to model buyers opens a new direction in trust modeling research, and the novel direction of designing an incentive mechanism based on trust modeling, then using that mechanism to further help trust modeling by diminishing the problem of unfair ratings, should help bridge the research communities of trust modeling and mechanism design.
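The core of the personalized approach, steps 1-3 above, can be sketched as a weighted blend of private and public reputation estimates, with the weight on private experience growing as the buyer accumulates commonly rated sellers. The reliability weight and the simple beta-style estimator below are illustrative assumptions; the thesis's actual weighting is more elaborate.

```python
def reputation(fair, total):
    """Beta-style estimate of the probability that an advisor's ratings are fair."""
    return (fair + 1) / (total + 2)

def advisor_trust(private_fair, private_total, public_fair, public_total, n_min=10):
    """Blend private reputation (from commonly rated sellers) with public
    reputation (from all of the advisor's ratings). The private weight w
    rises toward 1 as shared experience accumulates (n_min is assumed)."""
    w = min(private_total / n_min, 1.0)
    private = reputation(private_fair, private_total)
    public = reputation(public_fair, public_total)
    return w * private + (1 - w) * public

# A buyer with only 3 shared ratings leans mostly on public reputation:
print(advisor_trust(private_fair=3, private_total=3, public_fair=40, public_total=100))
```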

Trust-based Incentive Mechanisms for Community-based Multiagent Systems

Kastidou, Georgia. 26 May 2010
In this thesis we study peer-based communities: online communities whose services are provided by their participant agents. To improve the services an agent enjoys in these communities, we need to improve the services other agents offer. Towards this goal, we propose a novel solution that allows communities to share the experience of their members with other communities. The experience of a community with an agent is captured in the agent's evaluation rating within the community, which can represent either the trustworthiness or the reputation of the agent. We argue that exchanging this information is the right way to improve the services the agent offers, since it: i) exploits the information each community accumulates, allowing other communities to decide whether to accept the agent, while also putting pressure on the agent to behave well, because it knows that any misbehaviour will spread to communities it might wish to join in the future; and ii) can prevent the agent from overstretching itself among many communities, since doing so may lead the agent to provide very limited services to each community due to its limited resources, compromising its trustworthiness and reputation. We study mechanisms that can facilitate the exchange of trust or reputation information between communities, and make two key contributions. First, we propose a graph-based model that allows a particular community to determine which other communities to ask for information. We leverage the consistency of past information and provide an equilibrium analysis showing that communities are best off when they truthfully report the requested information, and we describe how payments should be made to support the equilibrium. Our second contribution is a promise-based trust model in which agents are judged on the contributions they promise and deliver to the community. We outline a set of desirable properties such a model must exhibit, provide an instantiation, and present an empirical evaluation.
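One way to picture the promise-based model: the community scores each agent by comparing delivered contributions against promised ones. The update rule below is an illustrative instantiation only, including its asymmetry (under-delivery penalized more heavily than over-delivery is rewarded); it is not the instantiation given in the thesis.

```python
def update_promise_trust(trust, promised, delivered, lr=0.2, penalty=2.0):
    """Nudge trust in [0, 1] toward a target set by the delivery ratio.
    Shortfalls are amplified by `penalty` (an assumed design choice)."""
    ratio = min(delivered / promised, 1.0) if promised > 0 else 1.0
    target = 1.0 - penalty * (1.0 - ratio)  # can go negative on a bad miss
    return max(0.0, min(1.0, trust + lr * (target - trust)))

t = 0.5
for promised, delivered in [(10, 10), (10, 4), (10, 12)]:
    t = update_promise_trust(t, promised, delivered)
    print(round(t, 3))  # rises on kept promises, drops sharply on the shortfall
```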

Multi-robot platooning in hostile environments

Shively, Jeremy. 09 April 2012
The purpose of this thesis is to develop a testing environment for mobile robot experiments, to examine methods for multi-robot platooning through hostile environments, and to test these algorithms on mobile robots. Such a system will allow us to rapidly address and test problems that arise concerning robot swarms and their interactions. To create this hardware simulation environment, a test bed will be built using ROS (Robot Operating System), a platform that is highly modular and extensible for future development. Trajectory generation for the robots will use smoothing splines, B-splines, and A* search. Each method has distinct properties, which will be analyzed and rated for effectiveness in robotic platooning. A few issues to be considered include: Is the optimal path taken with respect to distance and threats? Is the formation of the robots maintained or compromised during traversal of the path? And finally, what compromises or additions are needed to make each method effective? This work will be helpful for choosing route-planning methods in future work and will provide a large code base for rapid prototyping.
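Of the three planners named, A* is the easiest to sketch. A grid-based version with a threat-cost term captures the distance-versus-threat trade-off the abstract raises; the grid, weights, and cost model here are assumptions, not the thesis's ROS implementation.

```python
import heapq

def a_star(grid_threat, start, goal, threat_weight=5.0):
    """A* over a 4-connected grid. Step cost = 1 + threat_weight * threat
    at the entered cell, so the planner trades distance against exposure."""
    rows, cols = len(grid_threat), len(grid_threat[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0.0, start, None)]
    came_from, cost_so_far = {}, {start: 0.0}
    while frontier:
        _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:          # already expanded with a cheaper cost
            continue
        came_from[node] = parent
        if node == goal:               # walk parents back to reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + 1 + threat_weight * grid_threat[nr][nc]
                if ng < cost_so_far.get((nr, nc), float("inf")):
                    cost_so_far[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None  # no path exists

threat = [[0, 0, 0, 0],
          [0, 1, 1, 0],
          [0, 1, 1, 0],
          [0, 0, 0, 0]]
print(a_star(threat, (0, 0), (3, 3)))  # detours around the high-threat block
```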

Recommending messages to users in participatory media environments: a Bayesian credibility approach

Sardana, Noel. 07 April 2014
In this thesis, we address the challenge of information overload in online participatory messaging environments using an artificial intelligence approach drawn from research on trust modeling in multiagent systems. In particular, we reason about which messages to show to users by modeling both credibility and similarity, motivated by the need to discriminate between (falsely) popular and truly beneficial messages. Our work focuses on environments wherein users' ratings on messages reveal their preferences, and where the trustworthiness of those ratings must then be modeled in order to make effective recommendations. We first present one solution, CredTrust, and demonstrate its efficacy in comparison with LOAR, an established trust-based recommender system for participatory media networks that does not model credibility. Validation for our framework is provided through simulation of an environment where the ground-truth benefit of a message to a user is known. We show that our approach performs well, successfully recommending messages with high predicted benefit and avoiding messages with low predicted benefit. We continue by developing a new model for making recommendations that is grounded in Bayesian statistics and uses Partially Observable Markov Decision Processes (POMDPs). This model is an important next step: whereas CredTrust and LOAR encode particular functions of user features (viz., similarity and credibility) when making recommendations, our new model, denoted POMDPTrust, learns the appropriate evaluation functions in order to make "correct" belief updates about the usefulness of messages. We validate our new approach in simulation, showing that it outperforms both LOAR and CredTrust in a variety of agent scenarios. Furthermore, we demonstrate that POMDPTrust performs well on real-world data sets from Reddit.com and Epinions.com. In all, we offer a novel trust model which is shown, through simulation and real-world experimentation, to be an effective agent-based solution to the problem of managing the messages posted by users in participatory media networks.
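A toy version of the Bayesian intuition, far simpler than the POMDP machinery in the thesis: treat a message's benefit as a hidden Bernoulli variable and update a belief from each rating, weighting the likelihood by the rater's credibility. The rater model (a rater of credibility c up-votes a good message with probability c and a bad one with probability 1 - c) is an assumption made for illustration.

```python
def update_belief(p_good, rating_up, credibility):
    """Bayes update of P(message is beneficial) from one binary rating."""
    if rating_up:
        like_good, like_bad = credibility, 1 - credibility
    else:
        like_good, like_bad = 1 - credibility, credibility
    num = like_good * p_good
    return num / (num + like_bad * (1 - p_good))

belief = 0.5
for up, cred in [(True, 0.9), (True, 0.55), (False, 0.6)]:
    belief = update_belief(belief, up, cred)
    print(round(belief, 3))  # high-credibility votes move the belief the most
```

Note how a low-credibility up-vote barely shifts the belief: this is the mechanism that lets a credibility-aware recommender resist merely popular messages.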

A Framework for Influencing Massive Virtual Organizations

McLaughlan, Brian Paul. 01 August 2011
This work presents a framework by which a massive multiagent organization can be controlled and modified without resorting to micromanagement and without advance knowledge of potentially complex organizations. In addition to their designated duties, agents in the proposed framework run some method of determining optimal traits, such as configurations, plans, and knowledge bases. Traits follow survival-of-the-fittest rules, in which more successful traits overpower less successful ones. Subproblem partitions develop emergently as successful solutions are disseminated to, and aggregated by, unsuccessful agents. The framework also allows the administrator to guide the search process by injecting solutions known to work for a particular agent. Its performance is evaluated by comparison with individual state-space search.
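The survival-of-the-fittest trait dynamic can be sketched as a loop in which each agent occasionally copies the trait of a fitter peer, with administrator-injected solutions entering the pool as ordinary assignments. Everything below (scalar traits, the fitness function, the copy probability) is an illustrative assumption, not the thesis's framework.

```python
import random

def propagate_traits(traits, fitness, rounds=50, copy_prob=0.5, seed=0):
    """Each round, every agent samples a random peer and, with probability
    copy_prob, adopts the peer's trait if the peer scores higher under
    `fitness` (higher is better)."""
    rng = random.Random(seed)
    agents = list(traits)
    for _ in range(rounds):
        for a in agents:
            peer = rng.choice(agents)
            if fitness(traits[peer]) > fitness(traits[a]) and rng.random() < copy_prob:
                traits[a] = traits[peer]
    return traits

# Ten agents with scalar "configuration" traits; fitness peaks at 7.
# Administrator injection is just an assignment, e.g. traits[0] = 7.
traits = dict(enumerate([1, 3, 9, 7, 2, 5, 8, 4, 6, 0]))
print(propagate_traits(traits, fitness=lambda t: -abs(t - 7)))
```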
