11

An exploratory study of health professionals' attitudes about robotic telepresence technology

Kristoffersson, Annica, Coradeschi, Silvia, Loutfi, Amy, Severinson Eklundh, Kerstin January 2011 (has links)
This article presents the results from a video-based evaluation study of a social robotic telepresence solution for the elderly. The evaluated system is a mobile teleoperated robot called Giraff that allows caregivers to virtually enter a home and conduct a natural visit just as if they were physically there. The evaluation focuses on the perspectives of primary healthcare organizations and collects feedback from different categories of health professionals. The evaluation included 150 participants and yielded unexpected results with respect to the acceptance of the Giraff system. In particular, greater exposure to technology did not necessarily increase acceptance, and large variances occurred between the categories of health professionals. In addition to outlining the results, this study provides a number of indications with respect to increasing the acceptance of technology for the elderly. / The final version of this article can be read at http://www.tandfonline.com/doi/pdf/10.1080/15228835.2011.639509
12

Adapting the Laban Effort System to Design Affect-Communicating Locomotion Path for a Flying Robot

Sharma, Megha 20 September 2013 (has links)
People and animals use various kinds of motion in a multitude of ways to communicate their ideas and affective states, such as their moods or emotions. Further, people attribute affect and personality to the movements of even abstract entities based solely on the style of their motion; for example, the movement of a geometric shape (how it moves about) can be interpreted as shy, aggressive, and so on. In this thesis, we investigated how flying robots can leverage this locomotion-style communication channel for communicating their states to people. One problem in leveraging this style of communication in robot design is that there are no guidelines or tools that Human-Robot Interaction (HRI) designers can use to author affect-communicating locomotion paths for flying robots. Therefore, we propose to adapt the Laban Effort System (LES), a standard method for interpreting human motion commonly used in the performing arts, to develop a set of guidelines that HRI designers can use to author affective locomotion paths for flying robots. We further validate our proposed approach by conducting a small design workshop with a group of interaction designers, who were asked to design robotic behaviors using our design method. We conclude this thesis with an original adaptation of LES to the locomotion path of a flying robot, and a set of design guidelines that interaction designers can use to build affective locomotion paths for a flying robot.
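For background, the Laban Effort System characterizes movement along four effort factors: Space (indirect/direct), Time (sustained/sudden), Weight (light/strong), and Flow (free/bound). Below is a minimal sketch of how such factors could parameterize a flight path; the parameter names, formulas, and values are illustrative assumptions, not the thesis's actual guidelines.

```python
# Hypothetical mapping from Laban Effort factors to flight-path parameters.
# The four factors and their poles come from the Laban Effort System; the
# parameter names and formulas are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Effort:
    space: float   # -1 = indirect (meandering) .. +1 = direct (straight)
    time: float    # -1 = sustained (slow)      .. +1 = sudden (quick)
    weight: float  # -1 = light                 .. +1 = strong
    flow: float    # -1 = free (smooth)         .. +1 = bound (controlled)

def path_parameters(e: Effort) -> dict:
    """Translate effort qualities into example locomotion-path parameters."""
    return {
        "waypoint_jitter_m": 0.5 * (1 - e.space),      # indirect paths meander
        "speed_mps": 0.5 + 1.5 * (e.time + 1) / 2,     # sudden = fast
        "accel_mps2": 0.2 + 1.0 * (e.weight + 1) / 2,  # strong = abrupt
        "curve_smoothing": (1 - e.flow) / 2,           # free flow = smooth arcs
    }

# An "aggressive" quality: direct, sudden, strong, and bound.
print(path_parameters(Effort(space=1.0, time=1.0, weight=1.0, flow=1.0)))
```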
14

Decisional issues during human-robot joint action

Devin, Sandra 03 November 2017 (has links) (PDF)
In the future, robots will become our companions and co-workers. They will gradually appear in our environment, to help elderly or disabled people or to perform repetitive or unsafe tasks. However, we are still far from a truly autonomous robot that can act in a natural, efficient and safe manner with humans. To endow robots with the capacity to act naturally with humans, it is important to study, first, how humans act together. Consequently, this manuscript starts with a review of the state of the art on joint action in psychology and philosophy before presenting the application of the principles gained from this study to human-robot joint action. We then describe the supervision module for human-robot interaction developed during the thesis. Part of the work presented in this manuscript concerns the management of what we call a shared plan: a partially ordered set of actions to be performed by humans and/or the robot for the purpose of achieving a given goal. First, we present how the robot estimates the beliefs of its human partners concerning the shared plan (called mental states) and how it takes these mental states into account during shared plan execution. This allows the robot to communicate sensibly about potentially divergent beliefs between its own knowledge and that of the humans. Second, we present the abstraction of shared plans and the postponing of some decisions. Indeed, in previous work, the robot took all decisions at planning time (who should perform which action, which object to use…), which could be perceived as unnatural by the human during execution, as it imposes one solution in preference to any other. This work endows the robot with the capacity to identify which decisions can be postponed to execution time and to take the right decision according to the human's behavior, in order to obtain fluent and natural robot behavior. The complete shared-plan management system has been evaluated in simulation and with real robots in the context of a user study. Thereafter, we present our work on the non-verbal communication needed for human-robot joint action, focused here on how to manage the robot's head, which transmits information about the robot's activity and its understanding of the human's actions, as well as coordination signals. Finally, we present how to combine planning and learning in order to make the robot more efficient in its decision process. The idea, inspired by neuroscience studies, is to limit the use of planning (which is adapted to the human-aware context but costly) by letting a learning module make the choices when the robot is in a "known" situation. The first results obtained demonstrate the potential interest of the proposed solution.
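To make the shared-plan notion concrete, the sketch below represents a plan as a partially ordered set of actions in which one kind of decision, which agent performs an action, is postponed to execution time and resolved from observed human behavior. All names, the plan itself, and the resolution rule are hypothetical illustrations, not the thesis's actual supervision module.

```python
# A shared plan as a partially ordered set of actions; the agent assignment
# for some actions is postponed and resolved at execution time. Hypothetical
# names and rules, for illustration only.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    agent: str | None = None                      # None = decision postponed
    after: set[str] = field(default_factory=set)  # ordering constraints

plan = {
    "pick_cube": Action("pick_cube"),             # robot or human, decided later
    "open_box": Action("open_box", agent="human"),
    "place_cube": Action("place_cube", after={"pick_cube", "open_box"}),
}

def executable(plan: dict[str, Action], done: set[str]) -> list[Action]:
    """Actions whose ordering constraints are all satisfied."""
    return [a for n, a in plan.items() if n not in done and a.after <= done]

def assign_at_runtime(action: Action, human_busy: bool) -> str:
    """Resolve a postponed decision from the observed human behavior."""
    if action.agent is not None:
        return action.agent
    return "robot" if human_busy else "human"

for a in executable(plan, done=set()):
    print(a.name, "->", assign_at_runtime(a, human_busy=True))
```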
15

Fast upper body pose estimation for human-robot interaction

Burke, Michael Glen January 2015 (has links)
This work describes an upper body pose tracker that finds a 3D pose estimate using video sequences obtained from a monocular camera, with applications in human-robot interaction in mind. A novel mixture of Ornstein-Uhlenbeck processes model, trained in a reduced dimensional subspace and designed for analytical tractability, is introduced. This model acts as a collection of mean-reverting random walks that pull towards more commonly observed poses. Pose tracking using this model can be Rao-Blackwellised, allowing for computational efficiency while still incorporating bio-mechanical properties of the upper body. The model is used within a recursive Bayesian framework to provide reliable estimates of upper body pose when only a subset of body joints can be detected. Model training data can be extended through a retargeting process, and better pose coverage obtained through the use of Poisson disk sampling in the model training stage. Results on a number of test datasets show that the proposed approach provides pose estimation accuracy comparable with the state of the art in real time (30 fps) and can be extended to the multiple user case. As a motivating example, this work also introduces a pantomimic gesture recognition interface. Traditional approaches to gesture recognition for robot control make use of predefined codebooks of gestures, which are mapped directly to the robot behaviours they are intended to elicit. These gesture codewords are typically recognised using algorithms trained on multiple recordings of people performing the predefined gestures. Obtaining these recordings can be expensive and time consuming, and the codebook of gestures may not be particularly intuitive. This thesis presents arguments that pantomimic gestures, which mimic the intended robot behaviours directly, are potentially more intuitive, and proposes a transfer learning approach to recognition, where human hand gestures are mapped to recordings of robot behaviour by extracting temporal and spatial features that are inherently present in both pantomimed actions and robot behaviours. A Bayesian bias compensation scheme is introduced to compensate for potential classification bias in features. Results from a quadrotor behaviour selection problem show that good classification accuracy can be obtained when human hand gestures are recognised using behaviour recordings, and that classification using these behaviour recordings is more robust than using human hand recordings when users are allowed complete freedom over their choice of input gestures.
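For background on the tracker's core building block: an Ornstein-Uhlenbeck process is a mean-reverting random walk whose state is continually pulled back toward a mean, here standing in for a commonly observed pose. The sketch below simulates one such process with a standard Euler-Maruyama discretization at the 30 fps rate mentioned above; the parameter values and the single latent dimension are illustrative assumptions.

```python
# Simulate one mean-reverting Ornstein-Uhlenbeck walk, the building block of
# the tracker's mixture model: the state is pulled toward mu, standing in for
# a commonly observed pose. Parameter values are illustrative assumptions.
import numpy as np

def simulate_ou(mu, theta=2.0, sigma=0.3, dt=1.0 / 30, steps=90, x0=0.0):
    """Euler-Maruyama simulation of dx = theta * (mu - x) * dt + sigma * dW."""
    rng = np.random.default_rng(0)
    x = np.empty(steps)
    x[0] = x0
    for t in range(1, steps):
        drift = theta * (mu - x[t - 1]) * dt
        x[t] = x[t - 1] + drift + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

# Three seconds at 30 fps: one latent pose coordinate drifting from 0 toward
# a "rest" value of 1.0, with noise.
trajectory = simulate_ou(mu=1.0)
print(trajectory[-5:])
```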
16

Use of Vocal Prosody to Express Emotions in Robotic Speech

Crumpton, Joe 14 August 2015 (has links)
Vocal prosody (pitch, timing, loudness, etc.) and its use to convey emotions are essential components of speech communication between humans. The objective of this dissertation research was to determine the efficacy of using varying vocal prosody in robotic speech to convey emotion. Two pilot studies and two experiments were performed to address the shortcomings of previous HRI research in this area. The pilot studies were used to determine a set of vocal prosody modification values for a female voice model using the MARY speech synthesizer to convey the emotions anger, fear, happiness, and sadness. Experiment 1 validated that participants perceived these emotions, along with a neutral vocal prosody, at rates significantly higher than chance. Four of the vocal prosodies (anger, fear, neutral, and sadness) were recognized at rates approaching the recognition rate (60%) of emotions in person-to-person speech. During Experiment 2, the robot led participants through a creativity test while making statements using one of the validated emotional vocal prosodies. The ratings of the robot's positive qualities and the creativity scores of the participant group that heard non-negative vocal prosodies (happiness, neutral) did not differ significantly from those of the participant group that heard the negative vocal prosodies (anger, fear, sadness). Therefore, Experiment 2 failed to show that the use of emotional vocal prosody in a robot's speech influenced the participants' appraisal of the robot or their performance on this specific task. At this time, robot designers and programmers should not expect that vocal prosody alone will have a significant impact on the acceptability or the quality of human-robot interactions. Further research is required to show that multi-modal expressions of emotion by robots (vocal prosody along with facial expressions, body language, or linguistic content) will be effective at improving human-robot interactions.
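For illustration only, prosody modification values of the kind the pilot studies were designed to find might be organized as offsets to a synthesizer's baseline pitch, speaking rate, and loudness. The structure and every number below are hypothetical assumptions, not the settings actually determined for the MARY synthesizer.

```python
# Hypothetical prosody modification table: per-emotion offsets to a
# synthesizer's baseline pitch, speaking rate, and loudness. These numbers
# are invented for illustration and are not the dissertation's MARY settings.
PROSODY = {
    "anger":     {"pitch_pct": +10, "rate": 1.2, "volume": 1.3},
    "fear":      {"pitch_pct": +25, "rate": 1.3, "volume": 1.0},
    "happiness": {"pitch_pct": +15, "rate": 1.1, "volume": 1.1},
    "sadness":   {"pitch_pct": -15, "rate": 0.8, "volume": 0.8},
    "neutral":   {"pitch_pct":   0, "rate": 1.0, "volume": 1.0},
}

for emotion, p in PROSODY.items():
    print(f"{emotion}: pitch {p['pitch_pct']:+d}%, "
          f"rate x{p['rate']}, volume x{p['volume']}")
```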
17

The Impact Of Mental Transformation Training Across Levels Of Automation On Spatial Awareness In Human-robot Interaction

Rehfeld, Sherri 01 January 2006 (has links)
One of the problems affecting robot operators' spatial awareness involves their ability to infer a robot's location based on the views from on-board cameras and other electro-optic systems. To understand the vehicle's location, operators typically need to translate images from a vehicle's camera into some other coordinate frame, such as a location on a map. This translation requires operators to relate the view by mentally rotating it about a number of axes, a task that is both attention-demanding and workload-intensive, and one that is likely affected by individual differences in operator spatial abilities. Because building and maintaining spatial awareness is attention-demanding and workload-intensive, any variable that changes operator workload and attention should be investigated for its effects on operator spatial awareness. One such variable is the use of automation (i.e., assigning functions to the robot). According to Malleable Attentional Resource Theory (MART), variation in workload across levels of automation affects an operator's attentional capacity to process critical cues, such as those that enable an operator to understand the robot's past, current, and future location. The study reported here focused on performance aspects of human-robot interaction (HRI) involving ground robots (i.e., unmanned ground vehicles, or UGVs) during reconnaissance tasks. In particular, this study examined how differences in operator spatial ability and in operator workload and attention interacted to affect spatial awareness during HRI. Operator spatial abilities were systematically manipulated through the use of mental transformation training. Additionally, operator workload and attention were manipulated via three different levels of automation (manual control, decision support, and full automation). Operator spatial awareness was measured by the size of the errors operators made when tasked to infer the robot's location from on-board camera views at three different points in a sequence of robot movements through a simulated military operations in urban terrain (MOUT) environment. The results showed that mental transformation training increased two areas of spatial ability, namely mental rotation and spatial visualization. Further, spatial ability in these two areas predicted performance in vehicle localization during the reconnaissance task. Finally, assistive automation showed a benefit with respect to operator workload, situation awareness, and subsequently performance. Together, the results of the study have implications for the design of robots, function allocation between robots and operators, and training for spatial ability. Future research should investigate the interactive effects on operator spatial awareness of spatial ability, spatial ability training, and other variables affecting operator workload and attention.
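The coordinate translation that operators must perform mentally can be made concrete with a small example: rotating a point detected in the robot's camera frame into map coordinates, given the robot's pose. This is purely illustrative of the task's geometry; the study measured human performance rather than automating the transform.

```python
# Rotating a robot-relative observation into map coordinates, the transform
# operators otherwise perform mentally. Illustrative only.
import math

def camera_to_map(forward, left, robot_x, robot_y, heading_rad):
    """Map-frame position of a point seen `forward` m ahead and `left` m to
    the left of a robot at (robot_x, robot_y) with the given heading."""
    mx = robot_x + forward * math.cos(heading_rad) - left * math.sin(heading_rad)
    my = robot_y + forward * math.sin(heading_rad) + left * math.cos(heading_rad)
    return mx, my

# A landmark 3 m ahead and 1 m left of a robot at (10, 5) heading north:
print(camera_to_map(3.0, 1.0, 10.0, 5.0, math.pi / 2))  # approx. (9.0, 8.0)
```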
18

The Perception And Measurement Of Human-robot Trust

Schaefer, Kristin 01 January 2013 (has links)
As robots penetrate further into everyday environments, trust in these robots becomes a crucial issue. The purpose of this work was to create and validate a reliable scale that could measure changes in an individual's trust in a robot. Assessment of current trust theory identified measurable antecedents specific to the human, the robot, and the environment. Six experiments supported the development of the 40-item trust scale. Scale development began with the creation of a 172-item pool. Two experiments identified the robot features and perceived functional characteristics that were related to the classification of a machine as a robot for this item pool. Item-pool reduction techniques and subject matter expert (SME) content validation were used to reduce the scale to 40 items. The two final experiments were then conducted to validate the scale. The finalized 40-item pre-post interaction trust scale was designed to measure trust perceptions specific to HRI. The scale measures trust on a 0-100% rating scale and provides a percentage trust score. A 14-item subscale of the final version, recommended by SMEs, may be sufficient for some HRI tasks; the implications of this proposition are discussed.
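As a minimal sketch of how a percentage trust score could be computed from such a scale: assume each of the 40 items is rated 0-100% and the overall score is their mean. The averaging rule and the example ratings are assumptions; the abstract does not specify the exact scoring procedure.

```python
# Hypothetical scoring for a 40-item pre-post trust scale: each item rated
# 0-100%, overall trust score = mean of the items, change = post - pre.
def trust_score(ratings: list[float]) -> float:
    assert len(ratings) == 40 and all(0 <= r <= 100 for r in ratings)
    return sum(ratings) / len(ratings)

pre = trust_score([70.0] * 40)    # before interacting with the robot
post = trust_score([55.0] * 40)   # after the interaction
print(f"trust change: {post - pre:+.1f} percentage points")
```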
19

Developing robots that impact human-robot trust in emergency evacuations

Robinette, Paul 07 January 2016 (has links)
High-risk, time-critical situations require trust for humans to interact with other agents even if they have never interacted with those agents before. In the near future, robots will perform tasks to help people in such situations, so robots must understand why a person makes a trust decision in order to aid that person effectively. High casualty rates in several emergency evacuations motivate our use of this scenario as an example of a high-risk, time-critical situation. Emergency guidance robots can be stored inside buildings and then activated to search for victims and guide evacuees to safety. In this dissertation, we determined the conditions under which evacuees would be likely to trust a robot in an emergency evacuation. We began by examining reports of real-world evacuations and considering how guidance robots can best help. We performed two simulations of evacuations and learned that robots could be helpful as long as at least 30% of evacuees trusted their guidance instructions. We then developed several methods for a robot to communicate directional information to evacuees. After performing three rounds of evaluation using virtually, remotely, and physically present robots, we concluded that robots should communicate directional information by gesturing with two arms. Next, we studied the effect of situational risk and the robot's previous performance on a participant's decision to use the robot during an interaction. We found that higher-risk scenarios caused participants to align their self-reported trust with their decisions in a trust situation. We also discovered that trust in a robot drops after a single error when the interaction occurs in a virtual environment. In an exploratory study of trust repair, we learned that a robot can repair broken trust during the emergency by apologizing for its prior mistake or giving additional information relevant to the situation; apologizing immediately after the error had no effect. Robots have the potential to save lives in emergency scenarios, but could have an equally disastrous effect if participants overtrust them. To explore this concept, we created a virtual environment of an office as well as a real-world simulation of an emergency evacuation. In both, participants interacted with a robot during a non-emergency phase to experience its behavior and then chose whether or not to follow the robot's instructions during an emergency phase. In the virtual environment, the emergency was communicated through text, but in the real-world simulation, artificial smoke and fire alarms were used to increase the urgency of the situation. In our virtual environment, we confirmed our previous result that prior robot behavior affected whether participants would trust the robot. To our surprise, all participants followed the robot in the real-world simulation of an emergency, despite half of them having observed the same robot perform poorly in a navigation guidance task just minutes before. We performed additional exploratory studies investigating different failure modes. Even when the robot pointed to a dark room with no discernible exit, the majority of people did not choose to exit the way they entered. The conclusions of this dissertation are based on the results of fifteen experiments with a total of 2,168 participants (2,071 participants in virtual or remote studies conducted over the internet and 97 participants in physical studies on campus).
We have found that most human evacuees will trust an emergency guidance robot that uses understandable information-conveyance modalities and exhibits efficient guidance behavior in an evacuation scenario. In interactions with a virtual robot, this trust can be lost because of a single error made by the robot, but a similar effect was not found with real-world robots. This dissertation presents data indicating that victims in emergency situations may overtrust a robot, even when they have recently witnessed the robot malfunction. This work thus demonstrates concerns that are important to both the HRI and rescue robot communities.
20

Decision shaping and strategy learning in multi-robot interactions

Valtazanos, Aris January 2013 (has links)
Recent developments in robot technology have contributed to the advancement of autonomous behaviours in human-robot systems; for example, in following instructions received from an interacting human partner. Nevertheless, many systems are increasingly moving towards more seamless forms of interaction, where factors such as implicit trust and persuasion between humans and robots are brought to the fore. In this context, the problem of attaining, through suitable computational models and algorithms, more complex strategic behaviours that can influence human decisions and actions during an interaction remains largely open. To address this issue, this thesis introduces the problem of decision shaping in strategic interactions between humans and robots, where a robot seeks to lead, though not force, an interacting human partner to a particular state. Our approach to this problem is based on a combination of statistical modelling and synthesis of demonstrated behaviours, which enables robots to adapt efficiently to novel interacting agents. We primarily focus on interactions between autonomous and teleoperated (i.e. human-controlled) NAO humanoid robots, using the adversarial soccer penalty shooting game as an illustrative example. We begin by describing the various challenges that a robot operating in such complex interactive environments is likely to face. Then, we introduce a procedure through which composable strategy templates can be learned from provided human demonstrations of interactive behaviours. We subsequently present our primary contribution to the shaping problem: a Bayesian learning framework that empirically models and predicts the responses of an interacting agent, and computes action strategies that are likely to influence that agent towards a desired goal. We then address the related issue of factors affecting human decisions in these interactive strategic environments, such as the availability of perceptual information for the human operator. Finally, we describe an information processing algorithm, based on the Orient motion capture platform, which serves to facilitate direct (as opposed to teleoperation-mediated) strategic interactions between humans and robots. Our experiments introduce and evaluate a wide range of novel autonomous behaviours, where robots are shown to (learn to) influence a variety of interacting agents, ranging from simple autonomous agents to robots controlled by experienced human subjects. These results demonstrate the benefits of strategic reasoning in human-robot interaction, and constitute an important step towards realistic, practical applications, where robots are expected to be not just passive agents but active, influencing participants.
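To make the response-modelling idea concrete, the sketch below maintains a Dirichlet-multinomial belief over a keeper's dive direction in the penalty shooting game and chooses the shot against the predicted response. The thesis's framework is considerably richer; the model choice, names, and uniform prior here are illustrative assumptions.

```python
# Dirichlet-multinomial belief over an interacting keeper's dive direction,
# updated from observed responses; the shot targets the direction the keeper
# is predicted least likely to cover. Illustrative assumption, not the
# thesis's actual framework.
import numpy as np

DIRECTIONS = ["left", "centre", "right"]

class KeeperModel:
    def __init__(self, prior: float = 1.0):
        self.counts = np.full(len(DIRECTIONS), prior)  # symmetric prior

    def observe(self, dive: str) -> None:
        self.counts[DIRECTIONS.index(dive)] += 1

    def predict(self) -> np.ndarray:
        """Posterior predictive probability of each dive direction."""
        return self.counts / self.counts.sum()

def best_shot(model: KeeperModel) -> str:
    return DIRECTIONS[int(np.argmin(model.predict()))]

model = KeeperModel()
for dive in ["left", "left", "right", "left"]:
    model.observe(dive)
print(model.predict(), "->", best_shot(model))  # keeper favours left; shoot centre
```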
