An exploratory study of health professionals' attitudes about robotic telepresence technology. Kristoffersson, Annica; Coradeschi, Silvia; Loutfi, Amy; Severinson Eklundh, Kerstin. January 2011.
This article presents the results of a video-based evaluation study of a social robotic telepresence solution for the elderly. The evaluated system is a mobile teleoperated robot called Giraff that allows caregivers to virtually enter a home and conduct a natural visit just as if they were physically there. The evaluation focuses on the perspectives of primary healthcare organizations and collects feedback from different categories of health professionals. The evaluation included 150 participants and yielded unexpected results with respect to acceptance of the Giraff system. In particular, greater exposure to technology did not necessarily increase acceptance, and large variances occurred between the categories of health professionals. In addition to outlining the results, this study provides a number of indications for increasing the acceptance of technology for the elderly.

The final version of this article can be read at http://www.tandfonline.com/doi/pdf/10.1080/15228835.2011.639509

Adapting the Laban Effort System to Design Affect-Communicating Locomotion Path for a Flying Robot. Sharma, Megha. 20 September 2013.
People and animals use various kinds of motion in a multitude of ways to communicate their ideas and affective states, such as moods or emotions. Further, people attribute affect and personality to the movements of even abstract entities based solely on the style of their motion; for example, the way a geometric shape moves about can be interpreted as shy, aggressive, and so on. In this thesis, we investigated how flying robots can leverage this locomotion-style communication channel to communicate their states to people.
One problem in leveraging this style of communication in robot design is that there are no guidelines or tools that Human-Robot Interaction (HRI) designers can use to author affect-communicating locomotion paths for flying robots. Therefore, we propose to adapt the Laban Effort System (LES), a standard method for interpreting human motion commonly used in the performing arts, to develop a set of guidelines that HRI designers can leverage to author affective locomotion paths for flying robots. We validate our proposed approach by conducting a small design workshop with a group of interaction designers, who were asked to design robotic behaviors using our design method. We conclude this thesis with an original adaptation of LES to the locomotion path of a flying robot, and a set of design guidelines that interaction designers can leverage to build affective locomotion paths for flying robots.
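As a purely hypothetical illustration of how such guidelines might be operationalized (the mapping and numeric values below are invented for demonstration, not the guidelines derived in the thesis), Laban's four Effort factors (Space, Time, Weight, Flow) could be mapped to parameters of a flight path:

```python
import numpy as np

# Hypothetical mapping from Laban Effort factors to flight-path parameters.
# The pairings and constants are illustrative assumptions only.
def effort_to_path_params(space, time, weight, flow):
    """Each effort factor is in [0, 1], e.g. space=0 -> direct, 1 -> indirect."""
    return {
        "curvature":    0.1 + 0.9 * space,   # indirect paths meander more
        "speed":        0.3 + 0.7 * time,    # sudden motion -> higher peak speed
        "altitude_amp": 0.5 * weight,        # strong motion -> larger vertical swings
        "jitter":       0.2 * (1.0 - flow),  # bound motion -> small, tense jitter
    }

def generate_path(params, n=200):
    """Sample a simple 2D locomotion path shaped by the effort parameters."""
    t = np.linspace(0, 2 * np.pi, n)
    x = t * params["speed"]
    y = params["curvature"] * np.sin(t) + 0.05 * params["jitter"] * np.random.randn(n)
    return np.stack([x, y], axis=1)

# "Happy" might plausibly be indirect, sudden, light, and free:
path = generate_path(effort_to_path_params(space=0.8, time=0.9, weight=0.2, flow=0.9))
```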

Decisional issues during human-robot joint action. Devin, Sandra. 03 November 2017.
In the future, robots will become our companions and co-workers. They will gradually appear in our environments, helping elderly or disabled people or performing repetitive or unsafe tasks. However, we are still far from a truly autonomous robot that can act in a natural, efficient, and safe manner alongside humans. To endow robots with the capacity to act naturally with humans, it is important first to study how humans act together. Consequently, this manuscript starts with a state of the art on joint action in psychology and philosophy before presenting how the principles gained from this study were applied to human-robot joint action. We then describe the supervision module for human-robot interaction developed during the thesis.

Part of the work presented in this manuscript concerns the management of what we call a shared plan: a partially ordered set of actions to be performed by humans and/or the robot for the purpose of achieving a given goal. First, we present how the robot estimates the beliefs of its human partners concerning the shared plan (called mental states) and how it takes these mental states into account during shared plan execution. This allows it to communicate judiciously about potential divergences between the robot's knowledge and the humans' beliefs. Second, we present the abstraction of shared plans and the postponing of some decisions. In previous work, the robot took all decisions at planning time (who should perform which action, which object to use, etc.), which could be perceived as unnatural by the human during execution, as it imposes one solution over all others. This work endows the robot with the capacity to identify which decisions can be postponed to execution time and to take the right decision according to the human's behavior, producing fluent and natural robot behavior. The complete shared-plan management system has been evaluated in simulation and with real robots in the context of a user study.

Thereafter, we present our work on the non-verbal communication needed for human-robot joint action, focused here on managing the robot's head, which can transmit information about the robot's activity and its understanding of the human's actions, as well as coordination signals.

Finally, we present how to mix planning and learning in order to make the robot more efficient in its decision process. The idea, inspired by neuroscience studies, is to limit the use of planning (which is adapted to the human-aware context but costly) by letting the learning module make the choices when the robot is in a "known" situation. The first results obtained demonstrate the potential interest of the proposed solution.
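As a rough illustration of the idea (a hypothetical sketch; the supervision system described in the thesis is far richer), a shared plan can be represented as a partially ordered set of actions in which some agent assignments are deliberately left open until execution time:

```python
from dataclasses import dataclass, field

# Illustrative shared-plan representation: a partially ordered set of actions
# whose agent assignment may be left open (None) and resolved only at
# execution time. A hypothetical sketch, not the thesis's actual system.
@dataclass
class Action:
    name: str
    agent: str | None = None                             # None = decision postponed
    predecessors: set[str] = field(default_factory=set)  # partial order

def ready_actions(plan, done):
    """Actions not yet done whose predecessors have all been completed."""
    return [a for a in plan if a.name not in done and a.predecessors <= done]

def resolve_agent(action, human_is_busy):
    """Resolve a postponed who-does-what decision from observed human behavior."""
    if action.agent is None:
        action.agent = "robot" if human_is_busy else "human"
    return action.agent

plan = [
    Action("pick_tape"),                                  # agent decided later
    Action("place_tape", predecessors={"pick_tape"}),
    Action("press_tape", agent="human", predecessors={"place_tape"}),
]
for nxt in ready_actions(plan, done=set()):
    print(nxt.name, "->", resolve_agent(nxt, human_is_busy=True))
```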

Fast upper body pose estimation for human-robot interaction. Burke, Michael Glen. January 2015.
This work describes an upper body pose tracker that finds a 3D pose estimate using video sequences obtained from a monocular camera, with applications in human-robot interaction in mind. A novel mixture of Ornstein-Uhlenbeck processes model, trained in a reduced-dimensional subspace and designed for analytical tractability, is introduced. This model acts as a collection of mean-reverting random walks that pull towards more commonly observed poses. Pose tracking using this model can be Rao-Blackwellised, allowing for computational efficiency while still incorporating bio-mechanical properties of the upper body. The model is used within a recursive Bayesian framework to provide reliable estimates of upper body pose when only a subset of body joints can be detected. Model training data can be extended through a retargeting process, and better pose coverage obtained through the use of Poisson disk sampling in the model training stage. Results on a number of test datasets show that the proposed approach provides pose estimation accuracy comparable with the state of the art in real time (30 fps) and can be extended to the multiple-user case.

As a motivating example, this work also introduces a pantomimic gesture recognition interface. Traditional approaches to gesture recognition for robot control make use of predefined codebooks of gestures, which are mapped directly to the robot behaviours they are intended to elicit. These gesture codewords are typically recognised using algorithms trained on multiple recordings of people performing the predefined gestures. Obtaining these recordings can be expensive and time consuming, and the codebook of gestures may not be particularly intuitive. This thesis presents arguments that pantomimic gestures, which mimic the intended robot behaviours directly, are potentially more intuitive, and proposes a transfer learning approach to recognition, where human hand gestures are mapped to recordings of robot behaviour by extracting temporal and spatial features that are inherently present in both pantomimed actions and robot behaviours. A Bayesian bias compensation scheme is introduced to compensate for potential classification bias in features. Results from a quadrotor behaviour selection problem show that good classification accuracy can be obtained when human hand gestures are recognised using behaviour recordings, and that classification using these behaviour recordings is more robust than using human hand recordings when users are allowed complete freedom over their choice of input gestures.
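For background, an Ornstein-Uhlenbeck process is a mean-reverting stochastic process, dx = theta * (mu - x) dt + sigma dW. A minimal Euler-Maruyama simulation of one such component pulling a reduced-dimension pose vector toward a commonly observed pose might look like the sketch below (illustrative only; the tracker itself is a Rao-Blackwellised mixture of such processes):

```python
import numpy as np

# Euler-Maruyama simulation of one mean-reverting Ornstein-Uhlenbeck component:
#   dx = theta * (mu - x) dt + sigma dW
# Here x is a pose vector in a reduced-dimensional subspace and mu is a
# commonly observed pose that the walk reverts to. Values are illustrative.
def simulate_ou(mu, theta=2.0, sigma=0.3, dt=1 / 30.0, steps=90, x0=None):
    rng = np.random.default_rng(0)
    x = np.array(x0 if x0 is not None else np.zeros_like(mu), dtype=float)
    trajectory = [x.copy()]
    for _ in range(steps):
        drift = theta * (mu - x) * dt                          # pull toward mean pose
        diffusion = sigma * np.sqrt(dt) * rng.standard_normal(mu.shape)
        x = x + drift + diffusion
        trajectory.append(x.copy())
    return np.array(trajectory)

mean_pose = np.array([0.5, -0.2, 0.1])   # hypothetical subspace coordinates
traj = simulate_ou(mean_pose, x0=[1.0, 1.0, 1.0])  # reverts toward mean_pose
```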

Use of Vocal Prosody to Express Emotions in Robotic Speech. Crumpton, Joe. 14 August 2015.
Vocal prosody (pitch, timing, loudness, etc.) and its use to convey emotions are essential components of speech communication between humans. The objective of this dissertation research was to determine the efficacy of using varying vocal prosody in robotic speech to convey emotion. Two pilot studies and two experiments were performed to address the shortcomings of previous HRI research in this area. The pilot studies were used to determine a set of vocal prosody modification values for a female voice model using the MARY speech synthesizer to convey the emotions anger, fear, happiness, and sadness. Experiment 1 validated that participants perceived these emotions, along with a neutral vocal prosody, at rates significantly higher than chance. Four of the vocal prosodies (anger, fear, neutral, and sadness) were recognized at rates approaching the recognition rate (60%) of emotions in person-to-person speech.

During Experiment 2, the robot led participants through a creativity test while making statements using one of the validated emotional vocal prosodies. The ratings of the robot's positive qualities and the creativity scores of the participant group that heard the non-negative vocal prosodies (happiness, neutral) did not differ significantly from those of the participant group that heard the negative vocal prosodies (anger, fear, sadness). Experiment 2 therefore failed to show that the use of emotional vocal prosody in a robot's speech influenced the participants' appraisal of the robot or their performance on this specific task. At this time, robot designers and programmers should not expect vocal prosody alone to have a significant impact on the acceptability or quality of human-robot interactions. Further research is required to show that multi-modal expressions of emotion by robots (vocal prosody along with facial expressions, body language, or linguistic content) will be effective at improving human-robot interactions.
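For illustration, emotion-specific prosody settings of this kind are often expressed as prosody mark-up wrapped around the synthesized text. The values below are invented placeholders (generic SSML-style mark-up), not the modification values determined in the dissertation's pilot studies:

```python
# Illustrative emotion -> prosody settings (pitch shift, speaking rate, volume).
# These values are assumptions for demonstration; the dissertation derived its
# own modification values in pilot studies with the MARY synthesizer.
EMOTION_PROSODY = {
    "anger":     {"pitch": "-10%", "rate": "+15%", "volume": "loud"},
    "fear":      {"pitch": "+20%", "rate": "+25%", "volume": "medium"},
    "happiness": {"pitch": "+15%", "rate": "+10%", "volume": "loud"},
    "sadness":   {"pitch": "-15%", "rate": "-20%", "volume": "soft"},
    "neutral":   {"pitch": "+0%",  "rate": "+0%",  "volume": "medium"},
}

def to_ssml(text, emotion):
    """Wrap text in generic SSML-style prosody mark-up for the given emotion."""
    p = EMOTION_PROSODY[emotion]
    return (f'<prosody pitch="{p["pitch"]}" rate="{p["rate"]}" '
            f'volume="{p["volume"]}">{text}</prosody>')

print(to_ssml("I am sorry, I cannot open that door.", "sadness"))
```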

The Impact Of Mental Transformation Training Across Levels Of Automation On Spatial Awareness In Human-robot Interaction. Rehfeld, Sherri. 01 January 2006.
One of the problems affecting robot operators' spatial awareness involves their ability to infer a robot's location based on the views from on-board cameras and other electro-optic systems. To understand the vehicle's location, operators typically need to translate images from a vehicle's camera into some other coordinates, such as a location on a map. This translation requires operators to relate the view by mentally rotating it along a number of axes, a task that is both attention-demanding and workload-intensive, and one that is likely affected by individual differences in operator spatial abilities. Because building and maintaining spatial awareness is attention-demanding and workload-intensive, any variable that changes operator workload and attention should be investigated for its effects on operator spatial awareness. One of these variables is the use of automation (i.e., assigning functions to the robot). According to Malleable Attentional Resource Theory (MART), variation in workload across levels of automation affects an operator's attentional capacity to process critical cues, like those that enable an operator to understand the robot's past, current, and future location.

The study reported here focused on performance aspects of human-robot interaction (HRI) involving ground robots (i.e., unmanned ground vehicles, or UGVs) during reconnaissance tasks. In particular, this study examined how differences in operator spatial ability and in operator workload and attention interacted to affect spatial awareness during HRI. Operator spatial abilities were systematically manipulated through the use of mental transformation training. Additionally, operator workload and attention were manipulated via the use of three different levels of automation (manual control, decision support, and full automation). Operator spatial awareness was measured by the size of the errors operators made when tasked to infer the robot's location from on-board camera views at three different points in a sequence of robot movements through a simulated military operations in urban terrain (MOUT) environment.

The results showed that mental transformation training increased two areas of spatial ability, namely mental rotation and spatial visualization. Further, spatial ability in these two areas predicted performance in vehicle localization during the reconnaissance task. Finally, assistive automation showed a benefit with respect to operator workload, situation awareness, and, subsequently, performance. Together, the results of the study have implications for the design of robots, function allocation between robots and operators, and training for spatial ability. Future research should investigate the interactive effects on operator spatial awareness of spatial ability, spatial ability training, and other variables affecting operator workload and attention.
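The translation operators perform mentally corresponds to a rigid-body frame change: a point seen in the robot's camera frame is rotated by the robot's heading and translated by its position to obtain map coordinates. A minimal 2D sketch of that geometry (illustrative only, not part of the study's materials):

```python
import numpy as np

# Camera-view -> map-location transform, 2D version: rotate a point from the
# robot's frame by the robot's heading, then translate by the robot's position.
def camera_to_map(point_cam, robot_xy, robot_heading_rad):
    c, s = np.cos(robot_heading_rad), np.sin(robot_heading_rad)
    rotation = np.array([[c, -s],
                         [s,  c]])
    return rotation @ np.asarray(point_cam) + np.asarray(robot_xy)

# A landmark 3 m ahead and 1 m left of a robot at (10, 5) facing 90 degrees:
landmark_map = camera_to_map([3.0, 1.0], [10.0, 5.0], np.pi / 2)  # -> (9, 8)
```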

The Perception And Measurement Of Human-robot Trust. Schaefer, Kristin. 01 January 2013.
As robots penetrate further into everyday environments, trust in these robots becomes a crucial issue. The purpose of this work was to create and validate a reliable scale that could measure changes in an individual's trust in a robot. Assessment of current trust theory identified measurable antecedents specific to the human, the robot, and the environment. Six experiments encompassed the development of the 40-item trust scale. Scale development began with the creation of a 172-item pool. Two experiments identified the robot features and perceived functional characteristics that were related to the classification of a machine as a robot for this item pool. Item-pool reduction techniques and subject matter expert (SME) content validation were used to reduce the scale to 40 items. Two final experiments were then conducted to validate the scale. The finalized 40-item pre-/post-interaction trust scale was designed to measure trust perceptions specific to HRI. The scale measures trust on a 0-100% rating scale and provides a percentage trust score. A 14-item sub-scale of the final version, recommended by SMEs, may be sufficient for some HRI tasks, and the implications of this proposition are discussed.
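To make the scoring concrete, here is a minimal sketch of how a percentage trust score and its pre-/post-interaction change might be computed (details such as item wording and any reverse scoring are omitted; this is not the dissertation's scoring procedure):

```python
# Hypothetical scoring sketch for a pre-/post-interaction percentage trust
# score: average the 0-100% item ratings, then take post minus pre as the
# change in trust. Ratings shown are invented for illustration.
def trust_score(item_ratings):
    """item_ratings: list of 0-100 ratings, one per scale item."""
    if not item_ratings:
        raise ValueError("no ratings provided")
    return sum(item_ratings) / len(item_ratings)

pre = trust_score([70, 55, 80, 60])    # ratings before interacting with the robot
post = trust_score([85, 75, 90, 80])   # ratings after the interaction
change_in_trust = post - pre           # positive -> trust increased
```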

Enhancing Capabilities of Assistive Robotic Arms: Learning, Control, and Object Manipulation. Mehta, Shaunak A. 11 November 2024.
In this thesis, we explore methods to enable assistive robotic arms mounted on wheelchairs to assist disabled users with their daily activities. To effectively aid users, these robots must recognize a variety of tasks and provide intuitive control mechanisms. We focus on developing techniques that allow these assistive robots to learn diverse tasks, manipulate different types of objects, and simplify user control of these complex, high-dimensional systems.
This thesis is structured around three key contributions. First, we introduce a method for assistive robots to autonomously learn complex, high-dimensional behaviors in a given environment and map them to a low-dimensional joystick interface without human demonstrations. Through controlled experiments and a user study, we show that this approach outperforms systems based on human-demonstrated actions, leading to faster task completion compared to industry-standard baselines.
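One plausible shape for such a joystick-to-robot mapping (a sketch under assumed architecture and dimensions; the thesis's model may differ) is a learned decoder that translates a 2-DoF joystick input, conditioned on the robot's current state, into high-dimensional joint commands:

```python
import torch
import torch.nn as nn

# Sketch of a latent-action decoder: a 2-DoF joystick input z plus the robot's
# current joint state s map to a 7-DoF joint-velocity command. Architecture
# and dimensions are illustrative assumptions, not the thesis's exact model.
class LatentActionDecoder(nn.Module):
    def __init__(self, latent_dim=2, state_dim=7, action_dim=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + state_dim, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, action_dim),
        )

    def forward(self, z, state):
        return self.net(torch.cat([z, state], dim=-1))

decoder = LatentActionDecoder()
joystick = torch.tensor([[0.3, -0.8]])   # 2-axis joystick deflection
joint_state = torch.zeros(1, 7)          # current joint configuration
joint_velocity = decoder(joystick, joint_state)
```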
Second, we improve the efficiency of reinforcement learning for robotic manipulation tasks by introducing a waypoint-based algorithm. This approach frames task learning as a sequence of multi-armed bandit problems, where each bandit problem corresponds to a waypoint in the robot's trajectory. We introduce an approximate posterior sampling solution that builds the robot's motion one waypoint at a time. Our simulations and real-world experiments show that this approach achieves faster learning than state-of-the-art baselines.
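A toy version of the bandit view (Gaussian Thompson sampling per waypoint slot, offered as a simplified stand-in for the approximate posterior-sampling algorithm in the thesis) is sketched below:

```python
import numpy as np

# Gaussian Thompson sampling over a discrete set of candidate waypoints, with
# one bandit per waypoint slot; the motion is built one waypoint at a time.
class WaypointBandit:
    def __init__(self, candidates):
        self.candidates = candidates           # candidate waypoints for this slot
        self.n = np.zeros(len(candidates))     # pull counts
        self.mean = np.zeros(len(candidates))  # posterior reward means

    def select(self, rng):
        std = 1.0 / np.sqrt(self.n + 1.0)      # uncertainty shrinks with pulls
        return int(np.argmax(rng.normal(self.mean, std)))

    def update(self, arm, reward):
        self.n[arm] += 1
        self.mean[arm] += (reward - self.mean[arm]) / self.n[arm]

rng = np.random.default_rng(0)
bandits = [WaypointBandit(candidates=list(range(5))) for _ in range(3)]  # 3 slots

for episode in range(100):
    arms = [b.select(rng) for b in bandits]     # propose a full trajectory
    reward = -sum(abs(a - 2) for a in arms)     # toy reward: prefer waypoint 2
    for b, a in zip(bandits, arms):
        b.update(a, reward)
```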
Finally, to address the challenge of manipulating a variety of objects, we introduce RIgid-SOft (RISO) grippers that combine soft-switchable adhesives with standard rigid grippers and propose a shared control framework that automates part of the grasping process. The RISO grippers allow users to manipulate objects using either rigid or soft grasps, depending on the task. Our user study reveals that, with the shared control framework and RISO grippers, users were able to grasp and manipulate a wide range of household objects effectively.
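Shared control of this kind is commonly implemented as a confidence-weighted blend of the human's command and the robot's assistance command; a generic sketch (not the RISO framework's actual arbitration scheme) is:

```python
import numpy as np

# Generic shared-control arbitration: blend the user's command with the
# robot's grasp-assist command, weighting the robot more as confidence in
# the inferred grasp target grows. Illustrative sketch only.
def blend_commands(u_human, u_robot, confidence):
    """confidence in [0, 1]: 0 -> pure teleoperation, 1 -> pure autonomy."""
    alpha = np.clip(confidence, 0.0, 1.0)
    return (1.0 - alpha) * np.asarray(u_human) + alpha * np.asarray(u_robot)

u = blend_commands(u_human=[0.2, 0.0, -0.1],   # user's end-effector velocity
                   u_robot=[0.0, 0.1, -0.3],   # assist toward inferred target
                   confidence=0.6)
```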
The findings from this research emphasize the importance of integrating advanced learning algorithms and control strategies to improve the capabilities of assistive robots in helping users with their daily activities. By exploring different directions within the domain of assistive robotics, this thesis contributes to the development of methods that enhance the overall functionality of assistive robotic arms.

Master of Science

General audience abstract: In this thesis, we explore ways to make robotic arms attached to wheelchairs more helpful for people with disabilities in their everyday lives. To be truly useful, these robots need to understand a variety of tasks and be easy for users to control. Our focus is on developing techniques that help these robots learn different tasks, handle different types of objects, and make controlling them simpler. The thesis is built around three main contributions. First, we introduce a way for robots to learn how to perform complex tasks on their own and then simplify control of those tasks, so that users can direct the robot using just a joystick. We show through experiments that this approach helps people complete tasks faster than systems that rely on human-taught actions. Second, we improve how robots learn to perform tasks using a more efficient learning method. This method breaks tasks down into smaller steps, and the robot learns how to move toward each step more quickly. Our tests show that this approach speeds up the learning process compared to other methods. Finally, we address the challenge of handling different types of objects by developing a new type of robotic gripper that combines soft and rigid gripping options. This gripper allows users to pick up and manipulate a wide variety of household objects more easily, thanks to a control system that automates part of the process. In our user study, people found it easier to use the new gripper to handle different items. Overall, this research highlights the importance of combining learning algorithms and user-friendly controls to make assistive robots better at helping people with their daily tasks. These contributions advance the development of robotic arms that can more effectively assist users.

Developing robots that impact human-robot trust in emergency evacuations. Robinette, Paul. 07 January 2016.
High-risk, time-critical situations require trust for humans to interact with other agents even if they have never interacted with the agents before. In the near future, robots will perform tasks to help people in such situations, thus robots must understand why a person makes a trust decision in order to effectively aid the person. High casualty rates in several emergency evacuations motivate our use of this scenario as an example of a high-risk, time-critical situation. Emergency guidance robots can be stored inside of buildings then activated to search for victims and guide evacuees to safety. In this dissertation, we determined the conditions under which evacuees would be likely to trust a robot in an emergency evacuation.
We began by examining reports of real-world evacuations and considering how guidance robots can best help. We performed two simulations of evacuations and learned that robots could be helpful as long as at least 30% of evacuees trusted their guidance instructions. We then developed several methods for a robot to communicate directional information to evacuees. After performing three rounds of evaluation using virtually, remotely, and physically present robots, we concluded that robots should communicate directional information by gesturing with two arms. Next, we studied the effect of situational risk and the robot's previous performance on a participant's decision to use the robot during an interaction. We found that higher-risk scenarios caused participants to align their self-reported trust with their decisions in a trust situation. We also discovered that trust in a robot drops after a single error when the interaction occurs in a virtual environment. From an exploratory study in trust repair, we learned that a robot can repair broken trust during the emergency by apologizing for its prior mistake or giving additional information relevant to the situation; apologizing immediately after the error had no effect.
Robots have the potential to save lives in emergency scenarios, but they could have an equally disastrous effect if participants overtrust them. To explore this concept, we created a virtual office environment as well as a real-world simulation of an emergency evacuation. In both, participants interacted with a robot during a non-emergency phase to experience its behavior and then chose whether or not to follow the robot's instructions during an emergency phase. In the virtual environment, the emergency was communicated through text, but in the real-world simulation, artificial smoke and fire alarms were used to increase the urgency of the situation. In our virtual environment, we confirmed our previous result that prior robot behavior affected whether participants would trust the robot. To our surprise, all participants followed the robot in the real-world simulation of an emergency, despite half of them having observed the same robot perform poorly in a navigation guidance task just minutes before. We performed additional exploratory studies investigating different failure modes. Even when the robot pointed to a dark room with no discernible exit, the majority of participants did not choose to exit the way they had entered.
The conclusions of this dissertation are based on the results of fifteen experiments with a total of 2,168 participants (2,071 participants in virtual or remote studies conducted over the internet and 97 participants in physical studies on campus). We have found that most human evacuees will trust an emergency guidance robot that uses understandable information conveyance modalities and exhibits efficient guidance behavior in an evacuation scenario. In interactions with a virtual robot, this trust can be lost because of a single error made by the robot, but a similar effect was not found with real-world robots. This dissertation presents data indicating that victims in emergency situations may overtrust a robot, even when they have recently witnessed the robot malfunction. This work thus demonstrates concerns which are important to both the HRI and rescue robot communities.