1

Achieving Imitation-Based Learning for a Humanoid Robot by Evolutionary Computation

Chung, Chi-Hsiu 29 July 2009 (has links)
This thesis presents an imitation-based methodology that offers a simple way for a service robot to learn behaviors demonstrated by a user. With the proposed method, a robot can learn human behavior through observation. Inspired by the concept of biological learning, the learning model is initiated when the robot faces a new learning event. A series of experiments is conducted using a humanoid robot as the platform for the proposed algorithm, and we discuss how the robot generates the complete behavior sequence performed by its demonstrator. Because it is time-consuming for a robot to go through the whole learning process, we propose a decomposed learning method to enhance learning performance: based on past learning information, the robot can skip relearning behaviors it already knows. For simple robot behaviors, a hierarchical evolutionary mechanism is developed to evolve complete behavior trajectories. For complex behavior sequences, several techniques are used to tackle the scalability problem, including decomposing the overall task into sub-tasks, exploiting previously recorded behavior information, and constructing a new strategy to maintain population diversity. To verify our approach, a further series of experiments has been conducted. The results show that our imitation-based approach is a natural way to teach the robot new behaviors, and that the evolutionary mechanism successfully enables a humanoid robot to perform the behavior sequences it learns.
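
The abstract's core idea of evolving behavior sequences against a demonstration can be sketched as a toy genetic algorithm. This is only an illustrative stand-in, not the thesis's actual hierarchical mechanism: fitness here is simply agreement with the demonstrated sequence, and the action names and parameters are invented for the example.

```python
import random

def evolve_behavior(demo, actions, pop_size=30, generations=300, seed=1):
    """Toy evolutionary search: evolve an action sequence until it
    matches a demonstrated sequence (imitation as the fitness signal)."""
    rng = random.Random(seed)
    n = len(demo)
    pop = [[rng.choice(actions) for _ in range(n)] for _ in range(pop_size)]

    def fitness(seq):
        return sum(a == b for a, b in zip(seq, demo))

    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:          # perfect imitation reached
            break
        parents = pop[: pop_size // 2]    # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, n)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            child[rng.randrange(n)] = rng.choice(actions)  # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

demo = ["wave", "bow", "step", "bow"]
best = evolve_behavior(demo, ["wave", "bow", "step", "grasp"])
```

With elitism and per-child mutation, the search recovers the short demonstrated sequence quickly; the decomposition and diversity-maintenance strategies the thesis describes would sit on top of a loop like this.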
2

Efficient Algorithms for Causal Linear Identification and Sequential Imitation Learning

Daniel R Kumor (12476310) 28 April 2022 (has links)
Finding cause-and-effect relationships is one of the quintessential questions throughout the empirical sciences, AI, and machine learning. This dissertation develops graphical conditions and efficient algorithms for two problems: linear identification and imitation learning. For the first problem, it is well known that correlation does not imply causation, so linear regression does not necessarily find causal relations, even in the limit of large sample size. Over the past century, a plethora of methods has been developed for identifying interventional distributions given a combination of assumptions about the underlying mechanisms (e.g., linear functional dependence, a causal diagram) and observational data. We characterize the computational complexity of several existing graphical criteria and develop new polynomial-time algorithms that subsume existing, disparate efficient approaches. The proposed methods constitute the current state of the art in polynomial-time identification coverage; in other words, they identify the maximal set of structural coefficients compared to any other efficient algorithm found in the literature.

The second problem studied in the dissertation is causal sequential imitation learning, which concerns an agent that aims to learn a policy by observing an expert acting in the environment and mimicking the expert's observed behavior. Sometimes the agent (imitator) does not have access to the same set of observations or sensors as the expert, which gives rise to challenges in correctly interpreting expert actions. We develop necessary and sufficient conditions for the imitator to obtain performance identical to the expert's in sequential settings, given the domain's causal diagram, and create a polynomial-time algorithm for finding the covariates to include when generating an imitating policy.
3

Teaching Robots using Interactive Imitation Learning

Jonnavittula, Ananth 28 June 2024 (has links)
As robots transition from controlled environments, such as industrial settings, to more dynamic and unpredictable real-world applications, the need for adaptable and robust learning methods becomes paramount. In this dissertation, we develop Interactive Imitation Learning (IIL) based methods that allow robots to learn from imperfect demonstrations. We achieve this by incorporating human factors such as the quality of demonstrations and the level of effort people are willing to invest in teaching the robot. Our research is structured around three key contributions. First, we examine scenarios where robots have access to high-quality human demonstrations and abundant corrective feedback. In this setup, we introduce an algorithm called SARI (Shared Autonomy across Repeated Interactions), which leverages repeated human-robot interactions to learn from humans. Through extensive simulations and real-world experiments, we demonstrate that SARI significantly enhances the robot's ability to perform complex tasks by iteratively improving its understanding and responses based on human feedback. Second, we explore scenarios where human demonstrations are suboptimal and no additional corrective feedback is provided. This approach acknowledges the inherent imperfections in human teaching and aims to develop robots that can learn effectively under such conditions. We accomplish this by allowing the robot to adopt a risk-averse strategy that underestimates the human's abilities. This method is particularly valuable in household environments where users may not have the expertise or patience to provide perfect demonstrations. Finally, we address the challenge of learning from a single video demonstration. This is particularly relevant for enabling robots to learn tasks without extensive human involvement. We present VIEW (Visual Imitation lEarning with Waypoints), a method that focuses on extracting critical waypoints from video demonstrations.
By identifying key positions and movements, VIEW allows robots to efficiently replicate tasks with minimal training data. Our experiments show that VIEW can significantly reduce both the number of trials required and the time needed for the robot to learn new tasks. The findings from this research highlight the importance of incorporating advanced learning algorithms and interactive methods to enhance the robot's ability to operate autonomously in diverse environments. By addressing the variability in human teaching and leveraging innovative learning strategies, this dissertation contributes to the development of more adaptable, efficient, and user-friendly robotic systems. / Doctor of Philosophy / Robots are becoming increasingly common outside manufacturing facilities. In these unstructured environments, people might not always be able to give perfect instructions or might make mistakes. This dissertation explores methods that allow robots to learn tasks by observing human demonstrations, even when those demonstrations are imperfect. First, we look at scenarios where humans can provide high-quality demonstrations and corrections. We introduce an algorithm called SARI (Shared Autonomy across Repeated Interactions). SARI helps robots get better at tasks by learning from repeated interactions with humans. Through various experiments, we found that SARI significantly improves the robot's ability to perform complex tasks, making it more reliable and efficient. Next, we explore scenarios where the human demonstrations are not perfect, and no additional corrections are given. This approach takes everyday scenarios into account, where people might not have the time or expertise to provide perfect instructions. By designing a method that assumes humans might make mistakes, we can create robots that can learn safely and effectively. This makes the robots more adaptable and easier to use for a diverse group of people. 
Finally, we tackle the challenge of teaching robots from a single video demonstration. This method is particularly useful because it requires less involvement from humans. We developed VIEW (Visual Imitation lEarning with Waypoints), a method that helps robots learn tasks by focusing on the most important parts of a video demonstration. By identifying key points and movements, VIEW allows robots to quickly and efficiently replicate tasks with minimal training. This method significantly reduces the time and effort needed for robots to learn new tasks. Overall, this research shows that by using advanced learning techniques and interactive methods, we can create robots that are more adaptable, efficient, and user-friendly. These robots can learn from humans in various environments and become valuable assistants in our daily lives.
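
The waypoint idea behind VIEW can be illustrated with a minimal geometric sketch. The abstract does not specify VIEW's extraction method, so the function below is an assumed toy version: it keeps only the 2-D trajectory points where the motion direction changes.

```python
def extract_waypoints(traj, eps=1e-9):
    """Keep the endpoints plus any interior point where the direction of
    motion changes (cross product of successive segments is nonzero)."""
    if len(traj) <= 2:
        return list(traj)
    waypoints = [traj[0]]
    for prev, cur, nxt in zip(traj, traj[1:], traj[2:]):
        v1 = (cur[0] - prev[0], cur[1] - prev[1])
        v2 = (nxt[0] - cur[0], nxt[1] - cur[1])
        cross = v1[0] * v2[1] - v1[1] * v2[0]
        if abs(cross) > eps:          # direction changed: keep as waypoint
            waypoints.append(cur)
    waypoints.append(traj[-1])
    return waypoints

# an L-shaped path collapses to its start, corner, and end
wps = extract_waypoints([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)])
```

Reducing a dense demonstration to a handful of such key positions is what lets a robot replicate a task from minimal data, as the abstract describes.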
4

Imitation Learning of Whole-Body Grasps

Hsiao, Kaijen, Lozano-Pérez, Tomás 01 1900 (has links)
Humans often learn to manipulate objects by observing other people. In much the same way, robots can use imitation learning to pick up useful skills. A system is detailed here for using imitation learning to teach a robot to grasp objects using both hand and whole-body grasps, which use the arms and torso as well as the hands. Demonstration grasp trajectories are created by teleoperating a simulated robot to pick up simulated objects. When presented with a new object, the system compares it against the objects in a stored database to pick a demonstrated grasp used on a similar object. Both objects are modeled as a combination of primitives—boxes, cylinders, and spheres—and by considering the new object to be a transformed version of the demonstration object, contact points are mapped from one object to the other. The best kinematically feasible grasp candidate is chosen with the aid of a grasp quality metric. To test the success of the chosen grasp, a full, collision-free grasp trajectory is found and an attempt is made to execute it in simulation. The implemented system successfully picks up 92 out of 100 randomly generated test objects in simulation. / Singapore-MIT Alliance (SMA)
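
The contact-point mapping described above can be sketched for the simplest case, a box primitive: treating the new object as an axis-aligned scaling of the demonstration object, each contact coordinate is rescaled by the ratio of the box dimensions. This is a toy reduction of the paper's idea (the real system also handles cylinders, spheres, and general transforms), with all names and values invented for the example.

```python
def map_contact_points(points, demo_dims, new_dims):
    """Map grasp contact points from a demonstrated box primitive onto a
    new box by treating the new object as a scaled copy of the old one."""
    scale = [n / d for n, d in zip(new_dims, demo_dims)]
    return [tuple(c * s for c, s in zip(p, scale)) for p in points]

# a contact at (1, 2, 3) on a 2x4x6 demo box lands at (2, 2, 1.5) on a 4x4x3 box
mapped = map_contact_points([(1.0, 2.0, 3.0)], (2.0, 4.0, 6.0), (4.0, 4.0, 3.0))
```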
5

Efficient supervision for robot learning via imitation, simulation, and adaptation

Wulfmeier, Markus January 2018 (has links)
To enable more widespread application of robots, we must reduce the human effort needed to introduce existing robotic platforms to new environments and tasks. In this thesis, we identify three complementary strategies to address this challenge: imitation learning, domain adaptation, and transfer learning based on simulations. The overall work strives to reduce the effort of generating training data by employing inexpensively obtainable labels and by transferring information between domains with deviating underlying properties. Imitation learning offers a straightforward way for untrained personnel to teach robots to perform tasks by providing demonstrations, which represent a comparably inexpensive source of supervision. We develop a scalable approach to identify the preferences underlying demonstration data via the framework of inverse reinforcement learning. The method enables the integration of the extracted preferences as cost maps into existing motion planning systems. We further incorporate prior domain knowledge and demonstrate that the approach outperforms baselines, including manually crafted cost functions. In addition to employing low-cost labels from demonstrations, we investigate the adaptation of models to domains without available supervisory information. Specifically, the challenge of appearance changes in outdoor robotics, such as illumination and weather shifts, is addressed using an adversarial domain adaptation approach. A principal advantage of the method over prior work is the straightforwardness of adapting arbitrary, state-of-the-art neural network architectures, and we demonstrate its performance benefits for semantic segmentation of drivable terrain. Our last contribution focuses on simulation-to-real-world transfer learning, where the characteristic differences concern not only visual appearance but also the underlying system dynamics. Our work trains in both systems in parallel, with mutual guidance via auxiliary alignment rewards to accelerate training for real-world systems. The approach is shown to outperform various baselines as well as a unilateral alignment variant.
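
The notion of turning demonstrations into cost maps for a motion planner can be shown with a deliberately crude stand-in for inverse reinforcement learning (the thesis's actual method is far more sophisticated): grid cells the demonstrator visits often receive low traversal cost. Everything here, including the grid layout, is an assumption made for illustration.

```python
def cost_map_from_demos(demos, grid_w, grid_h):
    """Crude IRL stand-in: build a grid cost map where cells visited
    often by the demonstrator become cheap for a motion planner."""
    visits = {}
    for path in demos:
        for cell in path:
            visits[cell] = visits.get(cell, 0) + 1
    # unvisited cells cost 1.0; cost shrinks with visit count
    return [[1.0 / (visits.get((x, y), 0) + 1) for x in range(grid_w)]
            for y in range(grid_h)]

# two demo paths on a 2x2 grid, both starting at the origin
costs = cost_map_from_demos([[(0, 0), (1, 0)], [(0, 0), (0, 1)]], 2, 2)
```

A planner consuming `costs` would then prefer routes through demonstrator-frequented cells, which is the role the extracted preference maps play in the thesis's pipeline.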
6

Deep learning based approaches for imitation learning

Hussein, Ahmed January 2018 (has links)
Imitation learning refers to an agent's ability to mimic a desired behaviour by learning from observations. The field is rapidly gaining attention due to recent advances in computational and communication capabilities as well as rising demand for intelligent applications. The goal of imitation learning is to describe the desired behaviour by providing demonstrations rather than instructions. This enables agents to learn complex behaviours with general learning methods that require minimal task-specific information. However, imitation learning faces many challenges. The objective of this thesis is to advance the state of the art in imitation learning by adopting deep learning methods to address two major challenges of learning from demonstrations. The first is representing the demonstrations in a manner that is adequate for learning. We propose novel Convolutional Neural Network (CNN) based methods to automatically extract feature representations from raw visual demonstrations and learn to replicate the demonstrated behaviour. This alleviates the need for task-specific feature extraction and provides a general learning process that is adequate for multiple problems. The second challenge is generalizing a policy over situations unseen in the training demonstrations. This is a common problem because demonstrations typically show the best way to perform a task and do not offer any information about recovering from suboptimal actions. Several methods are investigated to improve the agent's generalization ability based on its initial performance. Our contributions in this area are threefold. First, we propose an active data aggregation method that queries the demonstrator in situations of low confidence. Second, we investigate combining learning from demonstrations with reinforcement learning, proposing a deep reward shaping method that learns a potential reward function from demonstrations. Finally, memory architectures in deep neural networks are investigated to provide context to the agent when taking actions; using recurrent neural networks addresses the dependency between the state-action sequences taken by the agent. The experiments are conducted in simulated environments on 2D and 3D navigation tasks that are learned from raw visual data, as well as a 2D soccer simulator. The proposed methods are compared to state-of-the-art deep reinforcement learning methods. The results show that deep learning architectures can learn suitable representations from raw visual data and effectively map them to atomic actions. The proposed methods for addressing generalization show improvements over using supervised learning or reinforcement learning alone. The results are thoroughly analysed to identify the benefits of each approach and the situations in which it is most suitable.
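
The "query the demonstrator in situations of low confidence" idea resembles confidence-gated data aggregation in the DAgger family. A toy sketch, with a lookup-table policy standing in for the thesis's neural network and an overwrite standing in for retraining (both assumptions for the example):

```python
def active_aggregation(expert, policy, states, threshold=0.8, rounds=3):
    """Query the demonstrator only in states where the learner's
    confidence falls below a threshold, then aggregate the labels.

    `policy` maps state -> (action, confidence); queried expert labels
    are folded back in with full confidence as a stand-in for retraining.
    """
    for _ in range(rounds):
        queried = {s: expert(s) for s in states
                   if policy.get(s, (None, 0.0))[1] < threshold}
        for s, a in queried.items():
            policy[s] = (a, 1.0)          # "retrain" on aggregated labels
    return policy

# expert labels states by parity; the learner is confident only about state 0
learned = active_aggregation(lambda s: s % 2, {0: (0, 0.9)}, [0, 1, 2])
```

Confident states are never queried, so the demonstrator's effort is spent only where the learner needs help, which is the efficiency argument the abstract makes.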
7

Training Robot Policies using External Memory Based Networks Via Imitation Learning

January 2018 (has links)
abstract: Recent advancements in external memory based neural networks have shown promise in solving tasks that require precise storage and retrieval of past information. Researchers have applied these models to a wide range of tasks with algorithmic properties, but not to real-world robotic tasks. In this thesis, we present memory-augmented neural networks that synthesize robot navigation policies which (a) encode long-term temporal dependencies, (b) make decisions in partially observed environments, and (c) quantify the uncertainty inherent in the task. We extract information about the temporal structure of a task via imitation learning from human demonstration and evaluate the performance of the models on control policies for a robot navigation task. Experiments are performed in partially observed environments in both simulation and the real world. / Dissertation/Thesis / Masters Thesis Computer Science 2018
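
The external-memory mechanism such networks rely on can be sketched in miniature: a content-addressable store whose reads return a softmax-weighted blend of stored values by key similarity. This is a scalar-valued toy of the differentiable read used in memory-augmented networks, not the thesis's architecture; the sharpness parameter `beta` is an assumed name.

```python
import math

class ExternalMemory:
    """Toy content-addressable memory: reads return a similarity-weighted
    (softmax over cosine similarity) blend of the stored scalar values."""

    def __init__(self, beta=10.0):
        self.keys, self.values = [], []
        self.beta = beta                  # sharpness of the read attention

    def write(self, key, value):
        self.keys.append(key)
        self.values.append(value)

    def read(self, query):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb)

        sims = [cosine(query, k) for k in self.keys]
        exps = [math.exp(self.beta * s) for s in sims]
        z = sum(exps)
        return sum(w / z * v for w, v in zip(exps, self.values))

mem = ExternalMemory()
mem.write((1.0, 0.0), 10.0)
mem.write((0.0, 1.0), 20.0)
recalled = mem.read((1.0, 0.0))   # dominated by the first stored value
```

Because reads are soft rather than exact lookups, the whole operation stays differentiable, which is what lets such memories be trained end-to-end inside a policy network.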
8

Deep Imitation Learning on Spatio-Temporal Data with Multiple Adversarial Agents Applied on Soccer

Lindström, Per January 2019 (has links)
Recently, the availability of high-quality, high-resolution spatio-temporal data has increased for many sports, enabling deep analysis of player behaviour and game strategy. This thesis investigates the assumption that game strategy is latent information in tracking data from soccer games, and the possibility of modelling player behaviour with deep imitation learning. A possible application would be counterfactual analysis: swapping an observed player in a real sequence with a simulated player to assess alternative scenarios. An imitation learning application is implemented using recurrent neural networks. It is shown that the application is able to learn individual player behaviour and perform rollouts on previously unseen sequences.
9

Imitation Learning based on Generative Adversarial Networks for Robot Path Planning

Yi, Xianyong 24 November 2020 (has links)
Robot path planning with dynamic obstacle avoidance is the problem of planning a feasible path from a given starting point to a destination in a nonlinear dynamic environment, safely bypassing dynamic obstacles while deviating minimally from the trajectory. Path planning is a typical sequential decision-making problem, and a dynamic, locally observable environment requires a real-time, adaptive decision-making system. Learning the policy directly from demonstration trajectories lets the robot adapt to similar state spaces that may appear in the future. We aim to develop a method for directly learning navigation behavior from demonstration trajectories, without defining environment and attention models, using the concepts of Generative Adversarial Imitation Learning (GAIL) and Sequence Generative Adversarial Network (SeqGAN). The proposed SeqGAIL model allows the robot to reproduce the desired behavior in different situations: an adversarial net is established, and the reduction of feature-count errors is used as the forcing objective for the generator, with a refinement measure taken to address the instability problem. In addition, we propose using Rapidly-exploring Random Tree* (RRT*) with pre-trained weights to generate adequate demonstration trajectories in dynamic environments as training data, an idea that effectively overcomes the difficulty of acquiring large amounts of training data.
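
The adversarial ingredient GAIL builds on can be shown at its smallest scale: a logistic discriminator trained to separate expert from policy samples, whose score is then turned into an imitation reward, commonly `-log(1 - D)`. This 1-D sketch is only the discriminator half (SeqGAIL's generator, feature-count objective, and refinement step are not reproduced), and the feature values are invented for the example.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_discriminator(expert_feats, policy_feats, lr=0.5, epochs=200):
    """Tiny logistic discriminator D(x) = sigmoid(w*x + b) trained by SGD
    to label expert samples 1 and policy samples 0."""
    w, b = 0.0, 0.0
    labeled = [(x, 1.0) for x in expert_feats] + [(x, 0.0) for x in policy_feats]
    for _ in range(epochs):
        for x, y in labeled:
            g = sigmoid(w * x + b) - y    # gradient of BCE w.r.t. the logit
            w -= lr * g * x
            b -= lr * g
    return w, b

def gail_reward(w, b, x):
    """GAIL-style surrogate reward: high where the sample looks expert-like."""
    d = sigmoid(w * x + b)
    return -math.log(max(1.0 - d, 1e-8))  # clip to avoid log(0)

# expert states cluster at positive feature values, policy states at negative
w, b = train_discriminator([2.0, 3.0], [-2.0, -3.0])
```

The policy (generator) would then be optimized to maximize this reward, pushing its state distribution toward the expert's until the discriminator can no longer tell them apart.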
10

On the use of expert data to imitate behavior and accelerate Reinforcement Learning

Giammarino, Vittorio 17 September 2024 (has links)
This dissertation examines the integration of expert datasets to enhance the data efficiency of online Deep Reinforcement Learning (DRL) algorithms in large state and action space problems. The focus is on effectively integrating real-world data, including data from biological systems, to accelerate the learning process within the online DRL pipeline. The motivation for this work is twofold. First, the internet provides access to a vast amount of data, such as videos, that demonstrate various tasks of interest but are not necessarily designed for use in the DRL framework. Leveraging these data to enhance DRL algorithms presents an exciting and challenging opportunity. Second, biological systems exhibit numerous inductive biases in their behavior that enable them to be highly efficient and adaptable learners. Incorporating these mechanisms for efficient learning remains an open question in DRL, and this work considers the use of human and animal data as a possible solution to this problem. Throughout this dissertation, important questions are addressed, such as how prior knowledge can be distilled into RL agents, the benefits of leveraging offline datasets for online RL, and the algorithmic challenges involved. Five original works are presented that investigate the use of animal videos to enhance RL learning performance, develop a framework to learn bio-inspired foraging policies using human data, propose an online algorithm for performing hierarchical imitation learning in the options framework, and formulate and theoretically motivate novel algorithms for imitation from videos in the presence of visual mismatch. This research demonstrates the effectiveness of utilizing offline datasets to improve the efficiency and performance of online DRL algorithms, providing valuable insights into accelerating the learning process for complex tasks.
