1. Designing and Evaluating Human-Robot Communication: Informing Design through Analysis of User Interaction. Green, Anders (January 2009)
This thesis explores the design and evaluation of human-robot communication for service robots that use natural language to interact with people. The research centres on three themes: the design of human-robot communication; the evaluation of miscommunication in human-robot communication; and the analysis of spatial influence both as an empirical phenomenon and as a design element. The method has been to put users in situations of future use by means of hi-fi simulation. Several scenarios were enacted using the Wizard-of-Oz technique: a robot intended for fetch-and-carry services in an office environment, and a robot acting in what can be characterised as a home tour, where the user teaches objects and locations to the robot. From these scenarios a corpus of human-robot communication was developed and analysed. The analysis of the communicative behaviours led to the following observations. The users communicate with the robot in order to solve a main task goal; to fulfil this goal they take over service actions that the robot is incapable of performing. Once users have understood that the robot is capable of performing actions, they explore its capabilities. During the interactions the users continuously monitor the behaviour of the robot, attempting to elicit feedback or to draw its perceptual attention to their communicative behaviour. Information about the communicative status of the robot appears to have a fundamental impact on the quality of interaction: large portions of the miscommunication that occurs in the analysed scenarios can be attributed to ill-timed, lacking or irrelevant feedback from the robot. The corpus data also showed that the users' spatial behaviour seemed to be influenced by the robot's communicative behaviour, embodiment and positioning, which means that robot design can consider strategies for spatial prompting to influence the users' spatial behaviour. The importance of continuously providing information about the communicative status of the robot to its users leaves us with an intriguing design challenge for the future: when designing communication for a service robot, we need to design communication for the robot's work tasks and, simultaneously, provide information about the system's communicative status so that users remain continuously aware of the robot's communicative capability.
2. Evaluating Human-Robot Implicit Communication Through Human-Human Implicit Communication. Richardson, Andrew Xenos (January 2012)
Human-Robot Interaction (HRI) research is examining ways to make human-robot (HR) communication more natural. Incorporating natural communication techniques is expected to make HR communication seamless and more natural for humans. Humans naturally incorporate implicit levels of communication, so including implicit communication in HR communication should provide tremendous benefit. The aim of this work was to evaluate a model for human-robot implicit communication. Specifically, the primary goal of this research was to determine whether humans can assign meanings to implicit cues received from autonomous robots as they do for identical implicit cues received from humans. An experiment was designed to allow participants to assign meanings to identical implicit cues (pursuing, retreating, investigating, hiding, patrolling) received from humans and robots. Participants were tasked to view random video clips of both entity types, label the implicit cue, and assign a level of confidence in their chosen answer. Physiological data were tracked during the experiment using an electroencephalogram and an eye-tracker, and participants answered workload and stress questionnaires following each scenario. Results revealed that participants were significantly more accurate with human cues (84%) than with robot cues (82%); however, participants were highly accurate, above 80%, for both entity types. Despite the high accuracy for both types, participants remained significantly more confident in answers for humans (6.1) than for robots (5.9) on a confidence scale of 1 to 7. Subjective measures showed no significant differences in stress or mental workload across entities. Physiological measures showed no significant difference in the engagement index across entities, but robots produced significantly higher levels of cognitive workload for participants as measured by the index of cognitive activity. The results of this study revealed that participants are more confident interpreting human implicit cues than identical cues received from a robot, although accuracy remained high for both entities. Participants also showed no significant difference in interpreting the different cues across entities. Therefore, much of the ability to interpret an implicit cue resides in the cue itself rather than in the entity. Proper training should boost confidence as humans begin to work alongside autonomous robots as teammates, and it is possible to train humans to recognize cues based on the movement, regardless of the entity demonstrating the movement.
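As a rough illustration of the kind of within-subjects comparison reported above, the sketch below runs a paired test on per-participant accuracies. The participant count, the score distributions, and the choice of a paired t-test are all assumptions made for the example; only the 84% and 82% group means come from the abstract.

```python
# Illustrative within-subjects comparison (synthetic data, not the study's).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30  # hypothetical participant count

# Per-participant accuracy for each entity type, centred on the reported means.
acc_human = np.clip(rng.normal(0.84, 0.05, n), 0.0, 1.0)
acc_robot = np.clip(rng.normal(0.82, 0.05, n), 0.0, 1.0)

# Paired test: each participant viewed clips of both entity types.
t, p = stats.ttest_rel(acc_human, acc_robot)
print(f"human {acc_human.mean():.2f} vs robot {acc_robot.mean():.2f}, "
      f"t={t:.2f}, p={p:.3f}")
```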
3. A Behavioral Approach to Human-Robot Communication. Ou, Shichao (February 2010)
Robots are increasingly capable of co-existing with human beings in the places where we live and work. I believe, however, that for robots to collaborate with and assist human beings in their daily lives, new methods are required for enhancing human-robot communication. In this dissertation, I focus on how a robot can acquire and refine expressive and receptive communication skills with human beings. I hypothesize that communication has its roots in motor behavior and present an approach that is unique in the following aspects: (1) representations of humans and the skills for interacting with them are learned in the same way as the robot learns to interact with other “objects”; (2) expressive behavior naturally emerges as the result of the robot discovering new utility in existing manual behavior in a social context; and (3) symmetry in communicative behavior can be exploited to bootstrap the learning of receptive behavior. Experiments were designed to evaluate the approach: (1) as a computational framework for learning increasingly comprehensive models and behavior for communicating with human beings and (2) from a human-robot interaction perspective that can adapt to a variety of human behavior. Results from these studies illustrate that the robot successfully acquired a variety of expressive pointing gestures using multiple limbs and eye gaze, and the perceptual skills with which to recognize and respond to similar gestures from humans. Due to variations in human reactions across the training subjects, the robot developed a preference for certain gestures over others. These results support the experimental hypotheses and offer insights for extensions of the computational framework and experimental designs for future studies.
4. Visual Recognition of a Dynamic Arm Gesture Language for Human-Robot and Inter-Robot Communication. Abid, Muhammad Rizwan (2015)
This thesis presents a novel Dynamic Gesture Language Recognition (DGLR) system for human-robot and inter-robot communication.
We developed and implemented an experimental setup consisting of a humanoid robot (android) able to recognize and execute in real time all the arm gestures of the Dynamic Gesture Language (DGL), much as humans do.
Our DGLR system comprises two main subsystems: an image processing (IP) module and a linguistic recognition system (LRS) module. The IP module recognizes individual DGL gestures. In this module, we use a bag-of-features (BoF) representation together with a local part model approach for dynamic gesture recognition from images. Dynamic gesture classification is conducted using the BoF histograms and a nonlinear support vector machine (SVM), while the multiscale local part model preserves the temporal context.
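As a rough illustration of this pipeline (not the thesis implementation), the sketch below quantizes local descriptors into a learned codebook, encodes each clip as a codeword histogram, and trains a nonlinear SVM. The descriptor extraction is stubbed with random vectors, and the vocabulary size and SVM parameters are arbitrary placeholder choices.

```python
# Minimal bag-of-features + nonlinear-SVM sketch (illustrative only).
# Real spatio-temporal descriptors from the local part model are replaced
# here by synthetic vectors; only the classification pipeline is shown.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N_CODEWORDS = 64  # visual vocabulary size (illustrative choice)

def extract_descriptors(video):
    # Placeholder: a real system would compute dense spatio-temporal
    # descriptors over the video volume; here each "video" is already
    # an (n_descriptors, dim) array.
    return video

def bof_histogram(descriptors, codebook):
    # Quantize each descriptor to its nearest codeword, then build a
    # normalized histogram of codeword frequencies.
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Synthetic training set: 5 gesture classes, 20 clips each, 50 descriptors/clip.
videos = [rng.normal(loc=c, size=(50, 32)) for c in range(5) for _ in range(20)]
labels = np.repeat(np.arange(5), 20)

# 1) Learn the codebook from all training descriptors.
codebook = KMeans(n_clusters=N_CODEWORDS, n_init=4, random_state=0)
codebook.fit(np.vstack([extract_descriptors(v) for v in videos]))

# 2) Encode each clip as a BoF histogram.
X = np.array([bof_histogram(extract_descriptors(v), codebook) for v in videos])

# 3) Train a nonlinear SVM on the histograms.
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```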
The IP module was tested using two databases: one consisting of images of a human performing a series of dynamic arm gestures under different environmental conditions, and the other consisting of images of an android performing the same series of arm gestures.
The LRS module uses a novel formal grammar approach to accept DGL-wise valid sequences of dynamic gestures and reject invalid ones. The LRS consists of two subsystems: one using a Linear Formal Grammar (LFG) to derive the valid sequences of dynamic gestures, and another using a Stochastic Linear Formal Grammar (SLFG) to occasionally recover gestures that were unrecognized by the IP module. Experimental results have shown that the DGLR system performed slightly better overall when recognizing gestures made by a human subject (98.92% recognition rate) than those made by the android (97.42% recognition rate).
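Since the abstract does not give the DGL productions, the sketch below only illustrates the general LFG idea: a linear (regular) grammar realized as a finite-state acceptor over gesture tokens, accepting grammatically valid sequences and rejecting invalid ones. The gesture vocabulary and transition rules are invented for the example, and the stochastic (SLFG) recovery of unrecognized gestures is omitted.

```python
# Hypothetical finite-state acceptor for a toy gesture grammar
# (the actual DGL productions are not given in the abstract).
VALID_NEXT = {
    "START": {"attention"},          # a command must open with an attention gesture
    "attention": {"move", "stop"},
    "move": {"left", "right", "stop"},
    "left": {"stop", "move"},
    "right": {"stop", "move"},
    "stop": set(),                   # "stop" terminates the sequence
}
ACCEPTING = {"stop"}

def accepts(sequence):
    """Return True iff the gesture sequence is valid under the toy grammar."""
    state = "START"
    for token in sequence:
        if token not in VALID_NEXT.get(state, set()):
            return False
        state = token
    return state in ACCEPTING

print(accepts(["attention", "move", "left", "stop"]))  # True
print(accepts(["move", "stop"]))                       # False: no opening gesture
```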