
Imitation learning with dynamic movement primitives

Scientists have been working for decades on making robots move like human beings, and imitating human motion has become a popular research topic in recent years. However, there are infinitely many trajectories between two points in three-dimensional space, so imitation learning, an approach that teaches robots from demonstrations, is used to learn human motion. Dynamic Movement Primitives (DMPs) provide a framework for learning trajectories from demonstrations; given rotational movement data, DMPs can likewise learn orientations. The simulation is implemented on the Baxter robot, which has seven degrees of freedom (DOF) and a pre-programmed Inverse Kinematics (IK) solver, so the robot system can be controlled as long as both translational and rotational data are provided. By taking advantage of DMPs, complex motor movements can be regenerated for new tasks without manual parameter adjustment and without risking instability.
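For reference, the standard discrete DMP formulation (in the style of Ijspeert et al.) couples a spring-damper transformation system with a phase (canonical) system. The abstract does not spell these equations out, so what follows is the commonly used form rather than the thesis's exact notation:

```latex
\tau \dot{v} = K(g - x) - D v + (g - x_0)\, f(s), \qquad
\tau \dot{x} = v, \qquad
\tau \dot{s} = -\alpha_s s, \qquad
f(s) = \frac{\sum_i w_i \, \psi_i(s)}{\sum_i \psi_i(s)}\, s .
```

Here $x$ is the position along one axis, $x_0$ the start, $g$ the goal, $\tau$ the time scale, and $f$ a forcing term built from Gaussian basis functions $\psi_i$ whose weights $w_i$ are learned from the demonstration.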

In this work, discrete DMPs serve as the framework for the whole system. The sample task is to move objects into a target area using Baxter, a robotic arm-hand system. For more effective learning, a weighted regression algorithm, Locally Weighted Regression (LWR), is implemented. To achieve the goal, the weights of the basis functions are first trained from the demonstration using the DMP framework together with LWR. The learned weights are then treated as the learned parameters and substituted, together with the desired initial state, the desired goal state, and the time-scaling parameters, into the DMP framework. Finally, the translational and rotational data for a new, task-specific trajectory are generated. The results are simulated and visualized in the Virtual Robot Experimentation Platform (VREP). To accomplish the tasks more reliably, an independent DMP is used for each translational and rotational axis. Motions of relatively high complexity can thus be achieved at relatively low computational cost. Moreover, the task-oriented movements remain stable under spatial scaling and transformation as well as time scaling.
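As a rough illustration of this pipeline, here is a minimal single-axis sketch in Python. It assumes the standard discrete DMP formulation given above; the class name, parameter values, and helper structure (DMP1D, n_basis, alpha_s) are illustrative choices, not code from the thesis:

```python
import numpy as np

# Minimal single-axis discrete DMP (hypothetical sketch, not the thesis code).
# Weights of the Gaussian basis functions are fitted from one demonstration
# with locally weighted regression (LWR); rollout then regenerates a
# trajectory toward a possibly new start x0, goal g, and duration tau.
class DMP1D:
    def __init__(self, n_basis=50, K=100.0, alpha_s=4.0):
        self.n = n_basis
        self.K, self.D = K, 2.0 * np.sqrt(K)     # critically damped spring-damper
        self.alpha_s = alpha_s                   # phase decay rate
        self.c = np.exp(-alpha_s * np.linspace(0, 1, n_basis))  # centers in s
        self.h = 1.0 / np.gradient(self.c) ** 2                 # basis widths
        self.w = np.zeros(n_basis)

    def _psi(self, s):
        return np.exp(-self.h * (s - self.c) ** 2)

    def fit(self, x_demo, dt):
        """Fit basis-function weights from a demonstrated trajectory via LWR."""
        tau = (len(x_demo) - 1) * dt
        x0, g = x_demo[0], x_demo[-1]
        v = np.gradient(x_demo, dt) * tau        # scaled velocity v = tau * x_dot
        vd = np.gradient(v, dt) * tau            # tau * v_dot
        s = np.exp(-self.alpha_s * np.arange(len(x_demo)) * dt / tau)
        # Forcing term the demonstration implies at each phase value.
        f_target = (vd - self.K * (g - x_demo) + self.D * v) / (g - x0 + 1e-10)
        # One weighted least-squares fit per basis function (LWR).
        for i in range(self.n):
            psi = np.exp(-self.h[i] * (s - self.c[i]) ** 2)
            self.w[i] = np.sum(psi * s * f_target) / (np.sum(psi * s ** 2) + 1e-10)

    def rollout(self, x0, g, tau, dt=0.01):
        """Regenerate a trajectory toward a (possibly new) start, goal, and duration."""
        x, v, s = x0, 0.0, 1.0
        traj = [x]
        while s > 1e-3:
            psi = self._psi(s)
            f = (psi @ self.w) / (psi.sum() + 1e-10) * s
            v += (self.K * (g - x) - self.D * v + (g - x0) * f) / tau * dt
            x += v / tau * dt
            s += -self.alpha_s * s / tau * dt
            traj.append(x)
        return np.array(traj)
```

As the abstract describes, one such DMP would be fitted per translational or rotational axis; changing x0, g, or tau at rollout time yields the spatial and temporal scaling behavior mentioned above.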

Twelve videos are included in the supplementary materials of this thesis. They show the simulation results for the Baxter robot in the Virtual Robot Experimentation Platform (VREP). Details can be found in the appendix.

Identifier: oai:union.ndltd.org:bu.edu/oai:open.bu.edu:2144/40948
Date: 17 May 2020
Creators: Zhou, Haoying
Contributors: Belta, Calin A.
Source Sets: Boston University
Language: en_US
Detected Language: English
Type: Thesis/Dissertation
Rights: Attribution-NonCommercial-ShareAlike 4.0 International, http://creativecommons.org/licenses/by-nc-sa/4.0/
