1

Estimating Short-Term Human Intent for Physical Human-Robot Co-Manipulation

Townsend, Eric Christopher 01 April 2017 (has links)
Robots are becoming safer and more capable. Historically, their main applications have been in manufacturing, where they perform repetitive, highly accurate tasks behind physical barriers that separate them from people, and in space exploration, where people are not present. Thanks to improvements in sensors, algorithms, and design, robots are beginning to be used in other applications such as materials handling, healthcare, and agriculture, and may one day be ubiquitous. For this to happen, they will need to function safely in unmodelled, dynamic environments, especially when sharing a workspace with people. We want robots to interact with people in a way that is helpful and intuitive, which requires that they both act predictably and predict short-term human intent. We create a model for predicting short-term human intent in a collaborative furniture-carrying task that a robot can use to be a more responsive and intuitive teammate. For robots to perform collaborative manipulation tasks with people naturally and efficiently, understanding and predicting human intent is necessary. We completed an exploratory study recording motion and force for 21 human dyads moving an object in tandem across a variety of tasks to better understand how they move and how their movement can be predicted. Using the previous 0.75 seconds of motion data, human intent can be predicted for the next 0.25 seconds, which a robot can then use in real applications. We also show that force data is not required to predict human intent: we demonstrate the prediction running in real time, with both human-human dyads and a human-robot dyad, showing that past motion alone suffices to predict short-term human intent. Finally, anticipating that soft robots will become common in human-robot interaction, we present work on controlling soft, pneumatically actuated, inflatable robots. These soft robots have less inertia than traditional robots but a high power density, which allows them to operate in proximity to people; they can, however, be difficult to control. We developed a neural network model for controlling our soft robot. We have shown that we can predict human intent in a human-robot dyad, an important goal in physical human-robot interaction that will allow robots to co-manipulate objects with humans intelligently.
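The windowed prediction scheme described above (the past 0.75 seconds of motion used to predict the next 0.25 seconds) can be sketched in code. The sketch below is illustrative only, not the thesis's actual model: it fits a simple linear least-squares predictor rather than a neural network, and it assumes a hypothetical 100 Hz sample rate, so the history window is 75 samples and the prediction horizon is 25 samples.

```python
import numpy as np

# Assumed 100 Hz sampling: 0.75 s of history -> 75 samples,
# 0.25 s prediction horizon -> 25 samples.
PAST, FUTURE = 75, 25

def fit_predictor(trajectories):
    """Fit a linear map W from a PAST-sample motion history to the
    next FUTURE samples, using every sliding window in the data."""
    X, Y = [], []
    for traj in trajectories:
        for i in range(len(traj) - PAST - FUTURE + 1):
            X.append(traj[i:i + PAST])
            Y.append(traj[i + PAST:i + PAST + FUTURE])
    X, Y = np.asarray(X), np.asarray(Y)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def predict(W, history):
    """Predict the next 0.25 s of motion from the most recent 0.75 s."""
    return np.asarray(history) @ W
```

The thesis itself reports that past motion alone (no force data) is sufficient for this kind of short-term prediction; the linear fit here stands in only to show the window-in, window-out structure of the problem.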
2

Force and Motion Based Methods for Planar Human-Robot Co-manipulation of Extended Objects

Mielke, Erich Allen 01 April 2018 (has links)
As robots increasingly operate in close proximity to people, new opportunities arise for physical human-robot interaction, such as co-manipulation of extended objects. Co-manipulation involves physical interaction between two partners who jointly manipulate an object held by both. There is a dearth of viable high degree-of-freedom co-manipulation controllers, especially for extended objects, as well as a lack of information about how human-human teams perform high degree-of-freedom tasks. One way to create co-manipulation controllers is to pattern them after human data, and this thesis takes that approach by analyzing a previously completed experimental study. The study involved human-human dyads in a leader-follower format performing co-manipulation tasks with an extended object in 6 degrees of freedom. Two important tasks in this experiment were lateral translation and planar rotation; this thesis focuses on them because they represent planar motion, whereas most previous control methods address only 1 or 2 degrees of freedom. The study provided information about how human-human dyads perform planar tasks: most notably, planar motion generally adheres to minimum-jerk trajectories, and dyads do not minimize the interaction forces between partners. The study also helped solve the translation-versus-rotation problem: in the experimental data, torque patterns at the beginning of each trial signaled the intent to translate or rotate. From these patterns, a new method of planar co-manipulation control was developed, called Extended Variable Impedance Control, a novel 3 degree-of-freedom method applicable to a variety of planar co-manipulation scenarios. Additionally, the data was fed through a recurrent neural network that takes in a series of motion data and predicts the next step in the series. The predicted data was used as an intent estimate in another novel 3 degree-of-freedom method called Neural Network Prediction Control. This method can generalize to 6 degrees of freedom but is limited to 3 in this thesis for comparison with the other method. An experiment involving 16 participants was developed to test the capabilities of both controllers on planar tasks, using a dual-manipulator robot with an omnidirectional base. The results show that both the Neural Network Prediction Control and Extended Variable Impedance Control controllers performed comparably to blindfolded human-human dyads, and a survey indicated that participants preferred the Extended Variable Impedance Control. These two unique controllers are the major results of this work.
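The minimum-jerk finding mentioned in the abstract has a well-known closed form (Flash and Hogan's minimum-jerk polynomial): between rest states, position follows x(t) = x0 + (xf − x0)(10τ³ − 15τ⁴ + 6τ⁵) with τ = t/T. The sketch below is illustrative (the function name and sampling choices are not from the thesis) and shows how such a reference trajectory can be generated for comparison with recorded dyad motion.

```python
import numpy as np

def minimum_jerk(x0, xf, T, n=101):
    """Sample the minimum-jerk position profile from x0 to xf over
    duration T. The profile starts and ends with zero velocity and
    zero acceleration, which is what makes it 'minimum jerk'."""
    t = np.linspace(0.0, T, n)
    tau = t / T
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5  # smooth 0 -> 1 blend
    return t, x0 + (xf - x0) * s
```

A controller patterned after human data could use a profile like this as the desired trajectory inside an impedance law, which is the general flavor of the variable-impedance approach the thesis develops.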
