
Teaching Robots using Interactive Imitation Learning

As robots transition from controlled environments, such as industrial settings, to more dynamic and unpredictable real-world applications, the need for adaptable and robust learning methods becomes paramount. In this dissertation, we develop methods based on Interactive Imitation Learning (IIL) that allow robots to learn from imperfect demonstrations. We achieve this by accounting for human factors such as the quality of the demonstrations people provide and the level of effort they are willing to invest in teaching the robot.

Our research is structured around three key contributions. First, we examine scenarios where robots have access to high-quality human demonstrations and abundant corrective feedback. In this setting, we introduce SARI (Shared Autonomy across Repeated Interactions), an algorithm that leverages repeated human-robot interactions to learn from humans. Through extensive simulations and real-world experiments, we demonstrate that SARI significantly enhances the robot's ability to perform complex tasks by iteratively improving its behavior based on human feedback.
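
To make the interaction pattern concrete, the sketch below shows the generic loop that interactive imitation learning methods in the spirit of SARI follow: the robot acts, the human intervenes when needed, and each correction is folded into the training set before the policy is retrained. The policy, env, and human_correction objects are illustrative placeholders, not the dissertation's actual interfaces.

    # Minimal sketch of an interactive imitation learning loop in the
    # spirit of SARI. All interfaces here (policy, env, human_correction)
    # are hypothetical placeholders, not SARI's API.
    def interactive_imitation_loop(policy, env, human_correction,
                                   n_interactions=50):
        dataset = []  # (state, action) pairs gathered across interactions
        for _ in range(n_interactions):
            state, done = env.reset(), False
            while not done:
                robot_action = policy.predict(state)
                # The human may override the robot; None means no correction.
                correction = human_correction(state, robot_action)
                if correction is not None:
                    dataset.append((state, correction))
                action = correction if correction is not None else robot_action
                state, done = env.step(action)
            # Retrain on everything collected so far, so the policy
            # improves across repeated interactions.
            policy.fit(dataset)
        return policy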

Second, we explore scenarios where human demonstrations are suboptimal and no additional corrective feedback is provided. This approach acknowledges the inherent imperfections in human teaching and aims to develop robots that can learn effectively under such conditions. We accomplish this by allowing the robot to adopt a risk-averse strategy that underestimates the human's abilities. This method is particularly valuable in household environments where users may not have the expertise or patience to provide perfect demonstrations.
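
One generic way to realize such a risk-averse, "underestimate the human" strategy is to train an ensemble on resampled demonstrations and score actions by the worst-case ensemble estimate, as sketched below. This is a standard pessimism construction offered for intuition only, with hypothetical fit_model and model.value interfaces; it is not the dissertation's exact formulation.

    import numpy as np

    # Hedged sketch: stay risk-averse about imperfect demonstrations by
    # scoring actions with the worst-case estimate of a bootstrapped
    # ensemble. fit_model and model.value are hypothetical placeholders.
    def fit_ensemble(demos, fit_model, n_models=5, seed=0):
        rng = np.random.default_rng(seed)
        models = []
        for _ in range(n_models):
            # Bootstrap resampling exposes each model to a different
            # slice of the (possibly suboptimal) demonstrations.
            idx = rng.integers(0, len(demos), size=len(demos))
            models.append(fit_model([demos[i] for i in idx]))
        return models

    def risk_averse_score(models, state, action):
        # Pessimistic lower bound: scores are low wherever the models
        # disagree, which keeps the robot conservative in those regions.
        return min(m.value(state, action) for m in models)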

Finally, we address the challenge of learning from a single video demonstration. This is particularly relevant for enabling robots to learn tasks without extensive human involvement. We present VIEW (Visual Imitation lEarning with Waypoints), a method that focuses on extracting critical waypoints from video demonstrations. By identifying key positions and movements, VIEW allows robots to efficiently replicate tasks with minimal training data. Our experiments show that VIEW can significantly reduce both the number of trials required and the time needed for the robot to learn new tasks.
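
As an illustration of what waypoint extraction can look like, the sketch below reduces a dense trajectory (for example, gripper positions tracked from a video) to a handful of waypoints using Ramer-Douglas-Peucker simplification: points are kept only where the path deviates from a straight-line interpolation by more than a tolerance. This is a generic stand-in for intuition, not VIEW's actual extraction procedure.

    import numpy as np

    # Hedged sketch of waypoint extraction: simplify an (N, 3) trajectory
    # with Ramer-Douglas-Peucker, keeping points that deviate from the
    # straight-line chord by more than tol.
    def extract_waypoints(traj, tol=0.01):
        traj = np.asarray(traj, dtype=float)
        if len(traj) < 3:
            return traj
        start, end = traj[0], traj[-1]
        chord = end - start
        chord_len = np.linalg.norm(chord)
        if chord_len < 1e-12:
            # Degenerate chord: measure distance from the start point.
            dists = np.linalg.norm(traj - start, axis=1)
        else:
            # Perpendicular distance of each point from the chord.
            unit = chord / chord_len
            proj = (traj - start) @ unit
            dists = np.linalg.norm(traj - (start + np.outer(proj, unit)), axis=1)
        k = int(np.argmax(dists))
        if dists[k] <= tol:
            return np.vstack([start, end])  # segment is nearly straight
        # Recurse on both halves and merge, dropping the duplicated split point.
        left = extract_waypoints(traj[:k + 1], tol)
        right = extract_waypoints(traj[k:], tol)
        return np.vstack([left[:-1], right])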

The findings from this research highlight the importance of incorporating advanced learning algorithms and interactive methods to enhance the robot's ability to operate autonomously in diverse environments. By addressing the variability in human teaching and leveraging innovative learning strategies, this dissertation contributes to the development of more adaptable, efficient, and user-friendly robotic systems.

Doctor of Philosophy

Robots are becoming increasingly common outside manufacturing facilities. In these unstructured environments, people might not always be able to give perfect instructions or might make mistakes. This dissertation explores methods that allow robots to learn tasks by observing human demonstrations, even when those demonstrations are imperfect.

First, we look at scenarios where humans can provide high-quality demonstrations and corrections. We introduce an algorithm called SARI (Shared Autonomy across Repeated Interactions). SARI helps robots get better at tasks by learning from repeated interactions with humans. Through various experiments, we found that SARI significantly improves the robot's ability to perform complex tasks, making it more reliable and efficient.

Next, we explore scenarios where the human demonstrations are not perfect and no additional corrections are given. This approach accounts for everyday situations in which people might not have the time or expertise to provide perfect instructions. By designing a method that assumes humans might make mistakes, we can create robots that learn safely and effectively. This makes the robots more adaptable and easier to use for a diverse group of people.

Finally, we tackle the challenge of teaching robots from a single video demonstration. This method is particularly useful because it requires less involvement from humans. We developed VIEW (Visual Imitation lEarning with Waypoints), a method that helps robots learn tasks by focusing on the most important parts of a video demonstration. By identifying key points and movements, VIEW allows robots to quickly and efficiently replicate tasks with minimal training. This method significantly reduces the time and effort needed for robots to learn new tasks.

Overall, this research shows that by using advanced learning techniques and interactive methods, we can create robots that are more adaptable, efficient, and user-friendly. These robots can learn from humans in various environments and become valuable assistants in our daily lives.

Identifier: oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/120552
Date: 28 June 2024
Creators: Jonnavittula, Ananth
Contributors: Mechanical Engineering, Losey, Dylan Patrick, Akbari Hamed, Kaveh, Williams, Ryan K., Leonessa, Alexander
Publisher: Virginia Tech
Source Sets: Virginia Tech Theses and Dissertations
Language: English
Type: Dissertation
Format: ETD, application/pdf
Rights: Creative Commons Attribution 4.0 International, http://creativecommons.org/licenses/by/4.0/
