About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1.

Patterns and protocols for agent-oriented software development

Oluyomi, Ayodele O. Unknown Date (PDF)
Agent-oriented software engineering faces challenges that impact the adoption of agent technology by the wider software engineering community, largely due to a lack of adequate comprehension of the concepts of agent technology. This thesis is based on the premise that both the comprehension of those concepts and the adoption of agent technology can be improved. Two approaches are explored: the first is the analysis and structuring of the interactions in multiagent systems; the second is the sharing, through software patterns, of experiences of what works and what does not in agent-oriented software engineering. While analysis of interactions improves the understanding of the behaviour of multiagent systems, sharing development experience improves the understanding of both the concepts of agent technology and the challenges that face the engineering of multiagent systems. Interaction analysis and experience sharing can therefore enhance the comprehension of agent technology and, hence, its adoption by the wider community of software practitioners. This thesis addresses the challenges facing agent-oriented software engineering by presenting a dedicated approach for developing agent interaction protocols to guide the interactions in a multiagent system, and a comprehensive framework for classifying, analyzing and describing agent-oriented patterns for the purpose of sharing multiagent systems development experience.
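To make the idea of an agent interaction protocol concrete, the following is a minimal, hypothetical sketch of a structured message exchange, loosely in the spirit of a contract-net negotiation. All class names, message fields, and performative strings here are illustrative assumptions and are not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    receiver: str
    performative: str  # e.g. "cfp", "propose", "accept"
    content: object

class Initiator:
    """Issues a call-for-proposals and accepts the cheapest bid."""
    def __init__(self, name):
        self.name = name

    def call_for_proposals(self, task, participants):
        return [Message(self.name, p.name, "cfp", task) for p in participants]

    def choose(self, proposals):
        # Protocol rule: the initiator must answer the winning bidder
        # with an "accept" message.
        best = min(proposals, key=lambda m: m.content)
        return Message(self.name, best.sender, "accept", best.content)

class Participant:
    """Replies to a cfp with a bid: its cost for performing the task."""
    def __init__(self, name, cost):
        self.name, self.cost = name, cost

    def handle(self, msg):
        if msg.performative == "cfp":
            return Message(self.name, msg.sender, "propose", self.cost)

def run_protocol(initiator, participants, task):
    cfps = initiator.call_for_proposals(task, participants)
    proposals = [p.handle(m) for p, m in zip(participants, cfps)]
    return initiator.choose(proposals)
```

The point of such a protocol is exactly what the abstract argues for: the allowed message types and their ordering are fixed in advance, so each agent's behaviour in the interaction can be analyzed independently of its internal implementation.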
2.

Non-Reciprocating Sharing Methods in Cooperative Q-Learning Environments

Cunningham, Bryan. 28 August 2012
Past research on multi-agent simulation with cooperative reinforcement learning (RL) for homogeneous agents focuses on developing sharing strategies that are adopted and used by all agents in the environment. These sharing strategies are considered reciprocating because all participating agents have a predefined agreement about what type of information is shared, when it is shared, and how the participating agents' policies are subsequently updated. The sharing strategies are designed specifically around manipulating this shared information to improve learning performance. This thesis targets situations where the assumption of a single sharing strategy employed by all agents does not hold. This work seeks to address how agents with no predetermined sharing partners can exploit groups of cooperatively learning agents to improve learning performance compared to independent learning. Specifically, several intra-agent methods are proposed that do not assume a reciprocating sharing relationship and that leverage the pre-existing agent interface associated with Q-learning to expedite learning. The other agents' functions and sharing strategies are unknown and inaccessible from the point of view of the agent(s) using the proposed methods. The proposed methods are evaluated in simulation on physically embodied agents in cooperative multi-agent robotics learning a navigation task. The experiments focus on how the performance of the proposed non-reciprocating methods is affected by scaling the number of agents in the environment, limiting the agents' communication range, and scaling the size of the environment. / Master of Science
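The setting described above can be sketched in code. Below is a minimal, illustrative tabular Q-learner with one hypothetical non-reciprocating operation: the learner reads a neighbour's Q-values through the ordinary Q-table interface and merges them, without any agreed protocol or reciprocal sharing from its side. The class names, parameters, and the max-merge rule are assumptions for illustration, not the thesis's actual methods.

```python
class QLearner:
    """Independent tabular Q-learner over a discrete state/action space."""

    def __init__(self, actions, alpha=0.5, gamma=0.9):
        self.q = {}            # (state, action) -> estimated value
        self.actions = actions
        self.alpha = alpha     # learning rate
        self.gamma = gamma     # discount factor

    def value(self, state, action):
        return self.q.get((state, action), 0.0)

    def best(self, state):
        return max(self.actions, key=lambda a: self.value(state, a))

    def update(self, s, a, reward, s_next):
        # Standard one-step Q-learning update.
        target = reward + self.gamma * max(self.value(s_next, b)
                                           for b in self.actions)
        self.q[(s, a)] = self.value(s, a) + self.alpha * (target - self.value(s, a))

    def pull_from(self, other, state):
        # Non-reciprocating share: this agent unilaterally reads another
        # agent's Q-values for one state and keeps the larger estimate.
        # The other agent neither knows about nor participates in this.
        for a in self.actions:
            self.q[(state, a)] = max(self.value(state, a),
                                     other.value(state, a))
```

Because `pull_from` only uses the same table interface that Q-learning already exposes, the pulling agent needs no knowledge of the neighbour's internals or sharing strategy, which mirrors the abstract's point that the other agents' functions remain opaque.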
