1. Mixture of Interaction Primitives for Multiple Agents

January 2017
abstract: In a collaborative environment where multiple robots and human beings work together on a task, a robot must be aware of the other agents in its workspace, adapt to them, and shape its interactions around their presence. Interaction Primitives is a theoretical framework that learns such interactions from demonstrations in a two-agent work environment. This document gives an in-depth description of a new Python framework for Interaction Primitives between two agents in both single-task and multi-task work environments, and extends the original framework to work environments in which more than two agents perform a single task. The extension captures the correlation among all agents while they perform that task. The resulting library is an intuitive, generic, easy-to-install, and easy-to-use Python package for applying Interaction Primitives in a work environment. It was tested in simulated environments and in a controlled laboratory environment; its results and benchmarks are reported in the corresponding sections of this document. / Dissertation/Thesis / Masters Thesis Computer Science 2017
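The record describes the framework only at a high level. The following is a minimal, self-contained sketch of the core idea behind multi-agent Interaction Primitives, not the thesis's actual library (whose API is not given here): each agent's demonstrated trajectory is compressed into basis-function weights, the stacked weights of all agents are modeled jointly so correlations between agents are captured, and observing one agent's partial motion lets the other agents' motions be inferred by Gaussian conditioning. A single Gaussian is used here for brevity where the thesis title suggests a mixture; all names and the synthetic demonstrations are illustrative assumptions.

```python
import numpy as np

def rbf_features(z, n_basis=10, width=0.05):
    """Radial basis features over normalized phase z in [0, 1]."""
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-(z[:, None] - centers[None, :]) ** 2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)          # (T, n_basis)

def fit_weights(traj, n_basis=10):
    """Least-squares basis weights reproducing one agent's 1-D trajectory."""
    z = np.linspace(0, 1, len(traj))
    return np.linalg.lstsq(rbf_features(z, n_basis), traj, rcond=None)[0]

# --- learning phase: demonstrations of several agents moving together -------
# Synthetic stand-in data: 3 agents, 100 time steps, 30 demonstrations.
rng = np.random.default_rng(0)
n_agents, T, n_basis = 3, 100, 10
demos = []
for _ in range(30):
    offset = rng.normal(0.0, 0.3)
    t = np.linspace(0, 1, T)
    demos.append(np.stack([np.sin(2 * np.pi * t) + offset,
                           np.cos(2 * np.pi * t) + offset,
                           t * offset]))

# Stacking every agent's weights into one vector lets the Gaussian capture
# correlations *between* agents, not just within a single trajectory.
W = np.array([np.concatenate([fit_weights(d[a], n_basis) for a in range(n_agents)])
              for d in demos])                            # (n_demos, n_agents * n_basis)
mu, Sigma = W.mean(axis=0), np.cov(W, rowvar=False)

# --- inference phase: observe agent 0, predict the remaining agents ---------
obs_traj = demos[0][0, :30]                               # first 30% of agent 0's motion
z_obs = np.linspace(0, 0.3, len(obs_traj))
H = np.zeros((len(obs_traj), n_agents * n_basis))
H[:, :n_basis] = rbf_features(z_obs, n_basis)             # observation touches only agent 0
R = 1e-4 * np.eye(len(obs_traj))                          # observation noise

# Gaussian conditioning (Kalman-style update) on the joint weight distribution.
K = Sigma @ H.T @ np.linalg.inv(H @ Sigma @ H.T + R)
mu_post = mu + K @ (obs_traj - H @ mu)

# Predicted full trajectory of agent 1, inferred from agent 0's partial motion.
phi_full = rbf_features(np.linspace(0, 1, T), n_basis)
pred_agent1 = phi_full @ mu_post[n_basis:2 * n_basis]
print(pred_agent1[:5])
```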
2. Training Multi-Agent Collaboration using Deep Reinforcement Learning in Game Environment / Träning av samarbete mellan flera agenter i spelmiljö med hjälp av djup förstärkningsinlärning

Deng, Jie. January 2018
Deep Reinforcement Learning (DRL) is a new research area that integrates deep neural networks into reinforcement learning algorithms. It is revolutionizing the field of AI with strong performance on traditional challenges such as natural language processing and computer vision. Current deep reinforcement learning algorithms enable end-to-end learning, using deep neural networks to produce effective actions in complex environments from high-dimensional sensory observations such as raw images. The applications are remarkable: for example, a trained agent playing Atari video games performs comparably to, or even better than, a human player. Current studies mostly focus on training a single agent and its interaction with a dynamic environment. To cope with complex real-world scenarios, however, it is necessary to look into multiple interacting agents and their collaboration on shared tasks. This thesis studies state-of-the-art deep reinforcement learning algorithms and techniques. Through experiments conducted in several 2D and 3D game scenarios, we investigate how DRL models can be adapted to train multiple agents that cooperate with one another, through communication and physical navigation, and achieve their individual goals on complex tasks. / Deep reinforcement learning (DRL) is a new research domain that integrates deep neural networks into learning algorithms. It has revolutionized the field of AI and raised high expectations for solving the traditional problems of AI research. This thesis carries out a thorough study of the state of the art in DRL algorithms and techniques. Through experiments with several 2D and 3D game scenarios, it examines how agents can cooperate with one another and reach their goals through communication and physical navigation.
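As a rough illustration of the kind of training loop the abstract refers to, the sketch below trains two independent policy networks with REINFORCE on a toy coordination game in which both agents receive a shared reward only when they jointly act correctly. This is an assumption-laden stand-in (PyTorch, a two-action matrix game, a fixed baseline), not the thesis's actual 2D/3D game environments, algorithms, or code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_ACTIONS, N_OBS = 2, 2

class Policy(nn.Module):
    """Small per-agent policy network mapping an observation to action logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_OBS, 32), nn.ReLU(),
                                 nn.Linear(32, N_ACTIONS))

    def forward(self, obs):
        return torch.distributions.Categorical(logits=self.net(obs))

agents = [Policy() for _ in range(2)]
opts = [torch.optim.Adam(a.parameters(), lr=1e-2) for a in agents]

for step in range(2000):
    # Toy cooperative task: both agents observe the same context bit and must
    # both pick the matching action to earn the shared reward.
    context = torch.randint(0, 2, (1,))
    obs = F.one_hot(context, N_OBS).float()

    log_probs, actions = [], []
    for agent in agents:
        dist = agent(obs)
        act = dist.sample()
        actions.append(act)
        log_probs.append(dist.log_prob(act).sum())

    # Shared reward couples the agents: they succeed or fail together.
    reward = float(all(a.item() == context.item() for a in actions))

    # Independent REINFORCE updates driven by the common reward signal.
    for opt, logp in zip(opts, log_probs):
        loss = -(reward - 0.5) * logp   # 0.5 is a crude baseline to reduce variance
        opt.zero_grad()
        loss.backward()
        opt.step()
```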
