
Pose Imitation Constraints For Kinematic Structures

<p>The use of robots has expanded across many areas of society and human work, including medicine, transportation, education, space exploration, and the service industry. This growth has spurred enthusiasm for developing more intelligent robots that can perform tasks as well as humans do. Such jobs still require human involvement as operators or teammates, since robots struggle with full automation in everyday settings. Soon, the role of humans will extend far beyond that of users or stakeholders to include training such robots. A popular teaching approach is to let robots mimic human behavior; this method is intuitive and natural and requires no specialized robotics knowledge. While robots can complete tasks effectively through other means, collaborative tasks demand a mutual understanding and coordination that is best achieved by mimicking human motion. This mimicking problem has been tackled through skill imitation, which reproduces human-like motion during a task demonstrated by a trainer. Skill imitation builds on faithfully replicating the human pose and proceeds in two steps: first, an expert's demonstration is captured and pre-processed, and motion features are extracted; second, a learning algorithm optimizes for the task. These learning algorithms are often paired with traditional control systems to transfer the demonstration to the robot. However, this methodology currently faces a generalization issue, as most solutions are formulated for a specific robot or task. The lack of generalization is a problem, especially because robots in collaborative environments are replaced and upgraded far more frequently than in traditional manufacturing. As with humans, we expect robots to have more than one skill and the same skill to be performed by more than one type of robot. We therefore address this issue by proposing a human motion imitation framework that can be computed efficiently and generalized across different kinematic structures (e.g., different robots).</p>
<p>The framework first trains an algorithm to augment collaborative demonstrations, facilitating generalization to unseen scenarios. Next, we create a pose imitation model that converts human motion into a flexible constraint space. This space can be mapped directly to different kinematic structures by specifying a correspondence between the main human joints (i.e., shoulder, elbow, wrist) and robot joints. The model permits an unlimited number of robotic links between two assigned human joints, allowing different robots to mimic both the demonstrated task and the human pose. Finally, we incorporate the constraint model into a reward that guides a Reinforcement Learning algorithm during optimization. We tested the proposed methodology in different collaborative scenarios and assessed the task success rate, pose imitation accuracy, the occlusion the robot produces in the environment, the number of collisions, and the learning efficiency of the algorithm.</p>
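As a rough illustration of the constraint-based reward idea described above, the sketch below combines a task reward with a pose-imitation penalty built from a human-to-robot joint correspondence. The function names, the specific joint indices, and the weighting scheme are our own assumptions for illustration, not the thesis's actual formulation:

```python
import numpy as np

# Hypothetical correspondence between human joints and robot joint
# indices. Because the constraint space allows any number of robot
# links between two mapped human joints, only a few robot joints
# need to be anchored to the human pose.
CORRESPONDENCE = {"shoulder": 0, "elbow": 3, "wrist": 6}

def pose_constraint_penalty(human_pose, robot_joint_positions):
    """Sum of Euclidean distances between each mapped human joint and
    its corresponding robot joint (both given as 3-D positions)."""
    return sum(
        np.linalg.norm(np.asarray(human_pose[name]) -
                       np.asarray(robot_joint_positions[idx]))
        for name, idx in CORRESPONDENCE.items()
    )

def imitation_reward(task_reward, human_pose, robot_joint_positions, w=0.5):
    """Shaped reward for the RL agent: task progress minus a weighted
    pose-imitation penalty (the weight w is an assumed hyperparameter)."""
    return task_reward - w * pose_constraint_penalty(
        human_pose, robot_joint_positions)
```

When the robot's anchored joints coincide with the human's, the penalty vanishes and the agent receives the task reward unchanged; mismatched poses are penalized in proportion to the joint offsets.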
<p>The results show that the proposed framework enables effective collaboration across different robots and tasks.</p>

DOI: 10.25394/pgs.21936393.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/21936393
Date: 09 February 2023
Creators: Glebys T Gonzalez (14486934)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY 4.0
Relation: https://figshare.com/articles/thesis/Pose_Imitation_Constraints_For_Kinematic_Structures/21936393
