161
Adaptive neural architectures for intuitive robot control / Melidis, Christos / January 2017
This thesis puts forward a novel approach to the control of robotic morphologies. Taking inspiration from behaviour-based robotics and self-organisation principles, we present an interfacing mechanism capable of adapting both to the user and to the robot, while enabling a paradigm of intuitive control for the user. A transparent mechanism is presented, allowing for a seamless integration of control signals and robot behaviours. Instead of the user adapting to the interface and control paradigm, the proposed architecture allows the user to shape the control motifs in their preferred way, moving away from cases where the user has to read and understand operation manuals or learn to operate a specific device. The seminal idea behind this work is the coupling of intuitive human behaviours with the dynamics of a machine in order to control and direct those dynamics. Starting from a tabula rasa basis, the architectures presented are able to identify control patterns (behaviours) for any given robotic morphology and successfully merge them with control signals from the user, regardless of the input device used. We provide deep insight into the advantages of behaviour coupling, investigating the proposed system in detail and providing evidence for, and quantifying, emergent properties of the models proposed. The structural components of the interface are presented and assessed both individually and as a whole, as are inherent properties of the architectures. The proposed system is examined and tested both in vitro and in vivo, and is shown to work even with complicated environments and complicated robotic morphologies. As a whole, this approach is found to highlight the potential for a change in the paradigm of robotic control, and a new level in the taxonomy of human-in-the-loop systems.
162
Evaluation of Multi-sensory Feedback in Virtual and Real Remote Environments in a USAR Robot Teleoperation Scenario / de Barros, Paulo / 26 April 2014
The area of Human-Robot Interaction deals not only with problems related to robots interacting with humans, but also with problems related to humans interacting with and controlling robots. This dissertation focuses on the latter and evaluates multi-sensory (vision, hearing, touch, smell) feedback interfaces as a means to improve robot-operator cognition and performance. A set of four empirical studies using both simulated and real robotic systems evaluated multi-sensory feedback interfaces at various levels of complexity. The task scenario for the robot in these studies involved the search for victims in a debris-filled environment after a fictitious catastrophic event (e.g., an earthquake) took place. The results show that, if well designed, multi-sensory feedback interfaces can indeed improve robot operators' data perception and performance. Improvements in operator performance were detected for navigation and search tasks despite minor increases in workload. In fact, some of the multi-sensory interfaces evaluated even led to a reduction in workload. The results also point out that redundant feedback is not always beneficial to the operator. Introducing the concept of operator omni-directional perception, that is, the operator's capability of perceiving data or events coming from all senses and in all directions, this work explains that feedback redundancy is only beneficial when it enhances the operator's omni-directional perception of data relevant to the task at hand. Lastly, the comprehensive methodology employed and refined over the course of the four studies is suggested as a starting point for the design of future HRI user studies. In summary, this work sheds light on the benefits and challenges that multi-sensory feedback interfaces bring, specifically to teleoperated robotics. It adds to our current understanding of these kinds of interfaces and provides insights to assist the continuation of research in the area.
163
Robots that say 'no' : acquisition of linguistic behaviour in interaction games with humans / Förster, Frank / January 2013
Negation is a part of language that humans engage in virtually from the onset of speech. Negation appears at first glance to be harder to grasp than object or action labels, yet this thesis explores how this family of 'concepts' could be acquired in a meaningful way by a humanoid robot based solely on unconstrained dialogue with a human conversation partner. The earliest forms of negation appear to be linked to the affective or motivational state of the speaker. We therefore developed a behavioural architecture which contains a motivational system. This motivational system feeds its state simultaneously to other subsystems for the purpose of symbol grounding, and also leads to the expression of the robot's motivational state via a facial display of emotions and motivationally congruent body behaviours. In order to achieve the grounding of negative words, we examine two different mechanisms which provide an alternative to the established grounding via ostension with or without joint attention. Two large experiments were conducted to test these two mechanisms. One of these mechanisms is so-called negative intent interpretation; the other is a combination of physical and linguistic prohibition. Both mechanisms have been described in the literature on early child language development but have never been used in human-robot interaction for the purpose of symbol grounding. As we show, both mechanisms may operate simultaneously, and we can exclude neither as a potential ontogenetic origin of negation.
164
Human Activity Recognition and Control of Wearable Robots / January 2018
Wearable robotics has gained huge popularity in recent years due to its wide applications in the rehabilitation, military, and industrial fields. Weakness of the skeletal muscles in the aging population, and neurological injuries such as stroke and spinal cord injuries, seriously limit the ability of these individuals to perform daily activities. There is therefore increasing interest in the development of wearable robots to assist the elderly and patients with disabilities with motion assistance and rehabilitation. In the military and industrial sectors, wearable robots can increase the productivity of workers and soldiers. It is important for a wearable robot to maintain smooth interaction with the user while operating in complex environments with minimum effort from the user. Recognizing the user's activities, such as walking or jogging, in real time therefore becomes essential in order to provide appropriate assistance based on the activity.
This dissertation proposes two real-time human activity recognition algorithms, the intelligent fuzzy inference (IFI) algorithm and the amplitude omega ($A\omega$) algorithm, to identify human activities, i.e., stationary and locomotion activities. The IFI algorithm uses knee angle and ground contact force (GCF) measurements from four inertial measurement units (IMUs) and a pair of smart shoes, whereas the $A\omega$ algorithm is based on thigh angle measurements from a single IMU.
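The abstract does not give either algorithm's internals. As a rough illustration of the $A\omega$ idea only (classifying activity from the amplitude and frequency of a thigh-angle signal), a toy sketch might look like the following; the thresholds and the mean-crossing frequency estimate are invented for illustration, not taken from the dissertation.

```python
import math

def classify_activity(thigh_angle, dt, amp_thresh=10.0, freq_thresh=0.3):
    """Toy activity classifier in the spirit of an amplitude/frequency
    approach: hypothetical thresholds, not the dissertation's values.

    thigh_angle: list of thigh-angle samples in degrees
    dt: sample period in seconds
    """
    amplitude = (max(thigh_angle) - min(thigh_angle)) / 2.0
    # Estimate the dominant frequency from mean crossings of the signal.
    mean = sum(thigh_angle) / len(thigh_angle)
    crossings = sum(
        1 for a, b in zip(thigh_angle, thigh_angle[1:])
        if (a - mean) * (b - mean) < 0
    )
    duration = dt * (len(thigh_angle) - 1)
    freq = crossings / (2.0 * duration)  # two crossings per cycle
    if amplitude < amp_thresh or freq < freq_thresh:
        return "stationary"
    return "locomotion"

# Synthetic signals: flat (standing) vs. a 1 Hz oscillation (walking).
standing = [2.0] * 200
walking = [25.0 * math.sin(2 * math.pi * 1.0 * i * 0.01) for i in range(200)]
print(classify_activity(standing, 0.01))  # stationary
print(classify_activity(walking, 0.01))   # locomotion
```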
This dissertation also addresses the problem of online tuning of virtual impedance for an assistive robot, based on real-time gait and activity measurement data, to personalize the assistance for different users. An automatic impedance tuning (AIT) approach is presented for a knee assistive device (KAD), in which the IFI algorithm provides real-time activity measurements. This dissertation also proposes an adaptive oscillator method, the amplitude omega adaptive oscillator ($A\omega AO$), for HeSA (hip exoskeleton for superior augmentation) to provide bilateral hip assistance during human locomotion activities. The $A\omega$ algorithm is integrated into the adaptive oscillator method to make the approach robust across different locomotion activities. Experiments were performed on healthy subjects to validate the efficacy of the human activity recognition algorithms and control strategies proposed in this dissertation. Both activity recognition algorithms exhibited high classification accuracy with short update times. The results of AIT demonstrated that the KAD assistive torque was smoother and the EMG signal of the vastus medialis was reduced, compared to constant-impedance and finite state machine approaches. The $A\omega AO$ method showed real-time learning of the locomotion activity signals for three healthy subjects while wearing HeSA. To understand the influence of the assistive devices on the inherent dynamic gait stability of the human, a stability analysis was performed using metrics derived from dynamical systems theory to evaluate unilateral knee assistance applied to the healthy participants. / Doctoral Dissertation, Aerospace Engineering, 2018
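The $A\omega AO$ controller itself is not specified in the abstract. The generic building block it names, an adaptive oscillator that locks onto the frequency of a periodic gait signal, can be sketched as a standard Hopf-style adaptive-frequency oscillator; the gains below are arbitrary illustration values, not the dissertation's tuned controller.

```python
import math

def adaptive_hopf(signal, dt, omega0=5.0, gamma=8.0, mu=1.0, eps=0.9):
    """Generic adaptive-frequency Hopf oscillator, Euler-integrated.
    The perturbation eps*f couples the oscillator to the input and the
    omega update drags its frequency toward the input's frequency."""
    x, y, omega = 1.0, 0.0, omega0
    for s in signal:
        f = s - x                        # tracking error drives adaptation
        r = math.hypot(x, y) or 1e-9
        dx = gamma * (mu - r * r) * x - omega * y + eps * f
        dy = gamma * (mu - r * r) * y + omega * x
        domega = -eps * f * y / r
        x += dx * dt
        y += dy * dt
        omega += domega * dt
    return omega

dt = 0.001
target = 2.0 * math.pi * 1.0             # a 1 Hz gait-like teaching signal
teach = [math.sin(target * i * dt) for i in range(60000)]   # 60 s of data
learned = adaptive_hopf(teach, dt)
print(round(learned, 2))
```

After a minute of synthetic data the learned frequency settles near the input's 2π rad/s, which is the property such oscillators are used for in gait assistance.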
165
Safe human-robot interaction based on multi-sensor fusion and dexterous manipulation planning / Corrales Ramón, Juan Antonio / 21 July 2011
This thesis presents several new techniques for developing safe and flexible human-robot interaction tasks in which human operators cooperate with robotic manipulators. The contributions of this thesis fall into two fields: the development of safety strategies which modify the normal behavior of the robotic manipulator when the human operator is near the robot, and the development of dexterous manipulation tasks for in-hand manipulation of objects with a multi-fingered robotic hand installed at the end-effector of a robotic manipulator. / Supported by the Valencian Government through the research project "Infraestructura 05/053", and by the Spanish Ministry of Education and Science through the pre-doctoral grant AP2005-1458 and the research projects DPI2005-06222 and DPI2008-02647, which constitute the research framework of this thesis.
166
Multi-Robot Coordination and Scheduling for Deactivation & Decommissioning / Zanlongo, Sebastian A. / 02 November 2018
Large quantities of high-level radioactive waste were generated during WWII. This waste is being stored in facilities such as double-shell tanks in Washington and the Waste Isolation Pilot Plant in New Mexico. Due to the dangerous nature of radioactive waste, these facilities must undergo periodic inspections to ensure that leaks are detected quickly. In this work, we provide a set of methodologies to aid in the monitoring and inspection of these hazardous facilities, allowing dangerous regions to be inspected without a human operator present, including locations a person would be physically unable to enter.
First, we describe a robot equipped with sensors which uses a modified A* path-planning algorithm to navigate in a complex environment under a tether constraint. This is then augmented with an adaptive informative path-planning approach that assimilates the sensor data within a Gaussian Process distribution model. The model's predictive outputs are used to adaptively plan the robot's path, to quickly map and localize areas of an unknown field of interest. The work was validated in extensive simulation testing and early hardware tests.
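As a rough sketch of the first component: one simple way to bolt a tether constraint onto grid A* is to prune any node whose path cost would exceed the cable length. The grid, unit step costs, and the constraint model below are simplifications invented for illustration (a real tethered planner must also model cable slack and snagging, which this sketch ignores).

```python
import heapq

def tethered_astar(grid, start, goal, tether_len):
    """A* on a 4-connected grid; returns the shortest path length, or
    None if the goal is unreachable without exceeding the cable length.
    grid: 0 = free cell, 1 = obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_set = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_set:
        f, g, cur = heapq.heappop(open_set)
        if cur == goal:
            return g
        if g > best_g.get(cur, float("inf")):
            continue  # stale queue entry
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                # Tether constraint: traveled distance bounded by cable.
                if ng <= tether_len and ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(tethered_astar(maze, (0, 0), (2, 0), tether_len=10))  # 6
print(tethered_astar(maze, (0, 0), (2, 0), tether_len=4))   # None
```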
Next, we focus on how to assign tasks to a heterogeneous set of robots. Task assignment is done in a manner which allows for task-robot dependencies, prioritization of tasks, collision checking, and more realistic travel estimates, among other improvements over the state of the art. Simulation testing of this work shows an increase in the number of tasks completed ahead of their deadlines.
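A minimal sketch of such an assignment loop, with invented robot and task records, might look like this. It is greedy and priority-first, with skill constraints standing in for task-robot dependencies and straight-line travel estimates; the dissertation's scheduler is more sophisticated (e.g., it also performs collision checking).

```python
def assign_tasks(robots, tasks):
    """Greedy, priority-first assignment sketch (illustrative only).

    robots: {name: {"pos": (x, y), "skills": set, "time": 0.0}}
    tasks:  list of {"pos", "skill", "priority", "deadline", "duration"}
    """
    schedule = []
    for task in sorted(tasks, key=lambda t: -t["priority"]):
        best, best_finish = None, float("inf")
        for name, rob in robots.items():
            if task["skill"] not in rob["skills"]:
                continue  # task-robot dependency not satisfied
            dx = rob["pos"][0] - task["pos"][0]
            dy = rob["pos"][1] - task["pos"][1]
            travel = (dx * dx + dy * dy) ** 0.5
            finish = rob["time"] + travel + task["duration"]
            if finish < best_finish:
                best, best_finish = name, finish
        if best is not None and best_finish <= task["deadline"]:
            rob = robots[best]
            rob["time"] = best_finish
            rob["pos"] = task["pos"]
            schedule.append((best, task["skill"], best_finish))
    return schedule

robots = {"r1": {"pos": (0, 0), "skills": {"scan"}, "time": 0.0},
          "r2": {"pos": (5, 0), "skills": {"scan", "sample"}, "time": 0.0}}
tasks = [{"pos": (6, 0), "skill": "sample", "priority": 2,
          "deadline": 10.0, "duration": 2.0},
         {"pos": (1, 0), "skill": "scan", "priority": 1,
          "deadline": 5.0, "duration": 1.0}]
schedule = assign_tasks(robots, tasks)
print(schedule)  # [('r2', 'sample', 3.0), ('r1', 'scan', 2.0)]
```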
Finally, we consider the case where robots are not able to complete planned tasks fully autonomously and require operator assistance during parts of their planned trajectory. We present a sampling-based methodology for allocating operator attention across multiple robots, or across different parts of a more sophisticated robot. This allows a few operators to oversee large numbers of robots, making the robotic infrastructure more scalable. The work was tested in simulation for both multi-robot deployment and high degree-of-freedom robots, and was also tested in multi-robot hardware deployments.
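One simple way to realize a sampling-based allocation of operator attention, sketched with made-up service requests, is to sample candidate service orders and keep the one that minimizes total robot waiting time. This is an illustrative stand-in, not the dissertation's method.

```python
import random

def allocate_attention(requests, n_samples=2000, seed=0):
    """Sample orderings in which a single operator serves robots' help
    requests; return the sampled schedule with the least total waiting.

    requests: {robot_name: service_duration_seconds}
    """
    rng = random.Random(seed)
    names = list(requests)
    best_order, best_wait = None, float("inf")
    for _ in range(n_samples):
        order = names[:]
        rng.shuffle(order)
        t, wait = 0.0, 0.0
        for robot in order:
            wait += t              # robot idles until the operator arrives
            t += requests[robot]
        if wait < best_wait:
            best_order, best_wait = order, wait
    return best_order, best_wait

reqs = {"uav": 2.0, "arm": 5.0, "ugv": 1.0}
order, wait = allocate_attention(reqs)
print(order, wait)
```

With enough samples this recovers the shortest-job-first order (serve the quickest request first), which is the provably optimal single-operator schedule for minimizing total waiting time.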
The work here can allow robots to carry out complex tasks, autonomously or with operator assistance. Altogether, these three components provide a comprehensive approach towards robotic deployment within the deactivation and decommissioning tasks faced by the Department of Energy.
167
Development Of Electrical And Control System Of An Unmanned Ground Vehicle For Force Feedback Teleoperation / Hacinecipoglu, Akif / 01 September 2012
Teleoperation of an unmanned vehicle is a challenging task for human operators, especially when the vehicle is out of line of sight. Improperly designed display interfaces directly degrade operation performance and can even result in catastrophic failures. If teleoperation missions are human-critical, it becomes all the more important to improve operator performance by decreasing workload, managing stress, and improving situational awareness. This research aims to develop the electrical and control systems of an unmanned ground vehicle (UGV) based on an All-Terrain Vehicle (ATV), and to validate the development by investigating the effects of force feedback devices on teleoperation performance. After development, teleoperation tests were performed to verify that force feedback generated from dynamic obstacle information about the environment improves teleoperation performance. The results confirm this, and the developed UGV is verified for future research studies. The development of the UGV, the algorithms, and the real-system tests are documented in this thesis.
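A toy version of such a force-feedback law maps nearby obstacle distances to a repulsive force on the operator's input device; the gains, ranges, and force shape below are invented for illustration, not the thesis's calibrated values.

```python
def feedback_force(obstacles, robot_pos, d_max=3.0, k=4.0):
    """Sum a repulsive force over all obstacles within d_max of the
    robot; the force is zero at d_max and grows as an obstacle nears.
    obstacles: list of (x, y); robot_pos: (x, y)."""
    fx, fy = 0.0, 0.0
    for ox, oy in obstacles:
        dx, dy = robot_pos[0] - ox, robot_pos[1] - oy
        d = (dx * dx + dy * dy) ** 0.5
        if 1e-6 < d < d_max:
            mag = k * (1.0 / d - 1.0 / d_max)   # 0 at d_max, grows near 0
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy

# Obstacle directly ahead (+x): the force pushes the stick back (-x).
fx, fy = feedback_force([(2.0, 0.0)], (0.0, 0.0))
print(round(fx, 3), round(fy, 3))  # -0.667 0.0
```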
168
Adaptation of task-aware, communicative variance for motion control in social humanoid robotic applications / Gielniak, Michael Joseph / 17 January 2012
An algorithm for generating communicative, human-like motion for social humanoid robots was developed. Anticipation, exaggeration, and secondary motion were demonstrated as examples of communication. Spatiotemporal correspondence was presented as a metric for human-like motion, and the metric was used both to synthesize and to evaluate motion. An algorithm for generating an infinite number of variants from a single exemplar was established to avoid repetitive motion. The algorithm was made task-aware by including the ability to satisfy constraints. User studies of the algorithm were performed with human participants. Results showed that communicative, human-like motion can be harnessed to direct partner attention and communicate state information. Furthermore, the communicative, human-like motion for social robots produced by the algorithm allows human partners to feel more engaged in the interaction, recognize motion earlier, label intent sooner, and remember interaction details more accurately.
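As an illustration of the variant-generation idea only (not the dissertation's algorithm), one can perturb a single exemplar trajectory with smooth random deviations that vanish at the endpoints, so that start and goal constraints remain satisfied for every variant.

```python
import math
import random

def motion_variants(exemplar, n, scale=0.2, seed=1):
    """Generate n variants of a 1-D exemplar trajectory by adding random
    combinations of sine modes that are zero at both endpoints. The mode
    basis and noise scale are invented for this sketch; a task-aware
    version would enforce additional constraints."""
    rng = random.Random(seed)
    T = len(exemplar)
    variants = []
    for _ in range(n):
        a1, a2 = rng.gauss(0, scale), rng.gauss(0, scale)
        noise = [a1 * math.sin(math.pi * t / (T - 1))
                 + a2 * math.sin(2 * math.pi * t / (T - 1))
                 for t in range(T)]
        variants.append([q + d for q, d in zip(exemplar, noise)])
    return variants

wave = [math.sin(0.1 * t) for t in range(50)]
vs = motion_variants(wave, n=3)
print(all(abs(v[0] - wave[0]) < 1e-9 and abs(v[-1] - wave[-1]) < 1e-9
          for v in vs))  # True: endpoints preserved in every variant
```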
169
Human Intention Recognition Based Assisted Telerobotic Grasping of Objects in an Unstructured Environment / Khokar, Karan Hariharan / 01 January 2013
In this dissertation, a methodology is proposed to enable a robot to identify the object to be grasped and its intended grasp configuration while a human is teleoperating the robot towards the desired object. Based on the detected object and grasp configuration, the human is assisted in the teleoperation task. The environment is unstructured and consists of a number of objects, each with various possible grasp configurations. The identification of the object and the grasp configuration is carried out in real time, by recognizing the intention behind the human motion. Simultaneously, the human user is assisted to preshape over the desired grasp configuration. This is done by scaling the components of the remote arm end-effector motion that lead to the desired grasp configuration while simultaneously attenuating the components in perpendicular directions. The complete process occurs while the user manipulates the master device, without having to interact with any other interface.
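The scaling step described above can be sketched as a simple vector decomposition: amplify the commanded end-effector velocity along the direction of the intended grasp configuration and attenuate the perpendicular component. The gains below are illustrative, not the dissertation's.

```python
def assist_velocity(v_cmd, goal_dir, gain_along=1.5, gain_perp=0.3):
    """Split the commanded velocity into components along and
    perpendicular to the goal direction, then rescale each.

    v_cmd, goal_dir: 3-vectors as lists; goal_dir need not be unit length.
    """
    norm = sum(g * g for g in goal_dir) ** 0.5
    u = [g / norm for g in goal_dir]                  # unit goal direction
    along = sum(v * g for v, g in zip(v_cmd, u))      # scalar projection
    v_along = [along * g for g in u]
    v_perp = [v - a for v, a in zip(v_cmd, v_along)]
    return [gain_along * a + gain_perp * p for a, p in zip(v_along, v_perp)]

# Command partly toward the goal (+x) with sideways drift (+y):
# the drift is attenuated, the on-goal component amplified.
print(assist_velocity([1.0, 1.0, 0.0], [1.0, 0.0, 0.0]))  # [1.5, 0.3, 0.0]
```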
Intention recognition from motion is carried out using Hidden Markov Model (HMM) theory. First, the objects are classified based on their shapes. Then, grasp configurations are preselected for each object class. The selection of grasp configurations is based on human knowledge of robust grasps for the various shapes. Next, an HMM for each object class is trained by having a skilled teleoperator perform repeated preshape trials over each grasp configuration of the object class in consideration. The grasp configurations are modeled as the states of each HMM, whereas the projections of the translation and orientation vectors onto each reference vector are modeled as the observations. The reference vectors are the ideal translation and rotation trajectories that lead the remote arm end-effector towards a grasp configuration. During an actual grasping task performed by a novice or skilled user, the trained model is used to detect their intention. The output probability of the HMM associated with each object in the environment is computed as the user teleoperates towards the desired object. The object associated with the HMM with the highest output probability is taken as the desired object, and the most likely Viterbi state sequence of that HMM gives the desired grasp configuration. Since an HMM is associated with every object, objects can be shuffled around, added, or removed from the environment without the need to retrain the models. In other words, the HMM for each object class needs to be trained only once by a skilled teleoperator.
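The object-selection step, computing each object model's output probability and taking the maximum, corresponds to the standard HMM forward algorithm. Below is a toy sketch with made-up two-state models and binary discretized observations; the model numbers are invented, not trained from teleoperation data.

```python
def forward_likelihood(obs, start_p, trans_p, emit_p):
    """Standard HMM forward algorithm: P(obs | model) for one object's
    HMM, given start, transition, and emission probability tables."""
    states = range(len(start_p))
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in states]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans_p[p][s] for p in states) * emit_p[s][o]
                 for s in states]
    return sum(alpha)

def intended_object(obs, models):
    """Pick the object whose HMM explains the motion observations best."""
    return max(models, key=lambda name: forward_likelihood(obs, *models[name]))

# Two toy models over binary motion observations: the "mug" model mostly
# emits observation 0, the "plate" model mostly emits observation 1.
models = {
    "mug":   ([0.6, 0.4], [[0.8, 0.2], [0.3, 0.7]], [[0.9, 0.1], [0.4, 0.6]]),
    "plate": ([0.5, 0.5], [[0.7, 0.3], [0.2, 0.8]], [[0.2, 0.8], [0.5, 0.5]]),
}
print(intended_object([0, 0, 1, 0], models))  # mug
```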
The intention recognition algorithm was validated by having novice users, as well as the skilled teleoperator, grasp objects with different grasp configurations from a dishwasher rack. Each object had various possible grasp configurations. The proposed algorithm was able to successfully detect the operator's intention and identify the object and the grasp configuration of interest. This methodology of grasping was also compared with an unassisted mode and a maximum-projection mode. In the unassisted mode, the operator teleoperated the arm without any assistance or intention recognition. In the maximum-projection mode, the maximum projection of the motion vectors was used to determine the intended object and the grasp configuration of interest. Six healthy individuals and one wheelchair-bound individual each executed twelve pick-and-place trials in the intention-based assisted mode and the unassisted mode. In these trials, they picked up utensils from the dishwasher and laid them on a table located next to it. The relative positions and orientations of the utensils were changed at the end of every third trial. The subjects were able to pick and place the objects 51% faster and with fewer movements using the proposed method compared to the unassisted method. They found it much easier to execute the task using the proposed method and experienced lower mental and overall workloads. Two able-bodied subjects also executed three preshape trials over three objects in the intention-based assisted and maximum-projection modes. For one of the subjects, the objects were shuffled at the end of the six trials and she was asked to carry out three more preshape trials in the two modes. This time, however, the subject was asked to change her intention just as she was about to preshape to the grasp configurations. Intention recognition was consistently accurate throughout the trajectory in the intention-based assisted method, except at a few points.
In the maximum-projection method, however, intention recognition was consistently inaccurate and fluctuated. This often caused the subject to be assisted in the wrong directions and led to considerable frustration. The intention-based assisted method was faster and required fewer hand movements, and its accuracy did not change when the objects were shuffled. It was also shown that the model for intention recognition can be trained by a skilled teleoperator and then used by a novice user to efficiently execute a grasping task in teleoperation.
170
RSVP: An investigation of the effects of Remote Shared Visual Presence on team process and team performance in urban search and rescue teams / Burke, Jennifer L / 01 June 2006
This field study presents mobile rescue robots as a way of augmenting communication in distributed teams through a remote shared visual presence (RSVP) consisting of the robot's view. It examines the effects of RSVP on team mental models, team processes, and team performance in collocated and distributed Urban Search & Rescue (US&R) technical search teams, and tests two models of team performance.
Participants (n = 50) were US&R task force personnel drawn from high-fidelity training exercises held in California (2004) and New Jersey (2005). Data were collected from the 25 dyadic teams as they performed a 2 x 2 repeated-measures search task entailing robot-assisted search in a confined-space rubble pile. Team communication was analyzed using the Robot-Assisted Search and Rescue coding scheme (RASAR-CS). Team mental models were measured through a team-constructed map of the search process. Ratings of team processes (communication, support, leadership, and situation awareness) were made by onsite observers, and team performance was measured by the number of victims (mannequins) found. Multilevel regression analyses were used to predict team mental models, team process, and team performance based upon use of RSVP (RSVP or no-RSVP) and location of team members (distributed or collocated). Results indicated that the use of RSVP technology predicted team performance (β = -1.322, p = 0.05), but not team mental models or team process. Location predicted team mental models (β = -0.425, p = 0.05), but not as expected.
Distributed teams had richer team mental models as measured by map ratings. No significant differences emerged between collocated and distributed teams in team process or team performance. Findings suggest RSVP may enhance team performance in US&R search tasks. However, results are complicated by differences detected between sites. Support was found for both models of team performance, but neither model was found sufficient to describe the data. Further research is suggested in the use of RSVP technology, the exploration of team mental models, and refinement of a modified model of team performance in extreme environments.