81 |
Towards the human-centered design of everyday robots. Sung, Ja-Young, 01 April 2011.
The recent advancement of robotic technology brings robots closer to assisting us in our everyday spaces, providing support for healthcare, cleaning, entertainment, and other tasks. In this dissertation, I refer to these robots as everyday robots. Scholars argue that the key to successful human acceptance lies in the design of robots that can blend into everyday activities. A challenge remains: robots are an autonomous technology that triggers multi-faceted interactions (physical, intellectual, social, and emotional), making their presence visible and even obtrusive. These challenges need more than technological advances to be resolved; more human-centered approaches are required in the design. However, to date, little is known about how to support the human-centered design of everyday robots.
In this thesis, I address this gap by introducing an initial set of design guidelines for everyday robots. These guidelines are based on four empirical studies undertaken to identify how people live with robots in the home; the studies mine insights about which interaction attributes of everyday robots elicit positive or negative user responses. The guidelines were deployed in the development of one type of everyday robot: a senior-care robot called HomeMate. This deployment showed that the guidelines are useful early in the development process, helping designers and robot engineers focus on how the social and emotional values of end-users shape the design of the required technical functions.
Overall, this thesis addresses the question of how to support the design of everyday robots so that they become more accepted by users. I respond by proposing a set of design guidelines that account for lived experiences with robots in the home, which ultimately can improve the adoption and use of everyday robots.
|
82 |
An integrative framework of time-varying affective robotic behavior. Moshkina, Lilia V., 04 April 2011.
As robots become more and more prevalent in our everyday lives, making sure that our interactions with them are natural and satisfactory is of paramount importance. Given the propensity of humans to treat machines as social actors, and the integral role affect plays in human life, providing robots with affective responses is a step towards making our interaction with them more intuitive. To promote more natural, satisfying, and effective human-robot interaction, and to enhance robotic behavior in general, an integrative framework of time-varying affective robotic behavior was designed and implemented on a humanoid robot. This psychologically inspired framework (TAME) encompasses four distinct yet interrelated affective phenomena: personality Traits, affective Attitudes, Moods, and Emotions. Traits determine consistent patterns of behavior across situations and environments and are generally time-invariant; attitudes are long-lasting and reflect likes or dislikes towards particular objects, persons, or situations; moods are subtle and relatively short in duration, biasing behavior according to favorable or unfavorable conditions; and emotions provide a fast yet short-lived response to environmental contingencies. The software architecture incorporating the TAME framework was designed as a stand-alone process to promote platform independence and applicability to other domains.
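A minimal sketch of how the four TAME phenomena might be combined in code, assuming a simple scalar representation of each; the names, constants, and the behavior-biasing rule are illustrative, not the dissertation's implementation:

```python
import math

class TameAffect:
    """Hypothetical TAME-style affect state: a fixed trait, object-bound
    attitudes, a slow-drifting mood, and fast-decaying emotions."""

    def __init__(self, extraversion=0.5):
        self.extraversion = extraversion   # Trait: time-invariant
        self.attitudes = {}                # Attitudes: object id -> [-1, 1]
        self.mood = 0.0                    # Mood: slow-moving valence bias
        self.emotions = {"fear": 0.0, "joy": 0.0}

    def update(self, dt, conditions=0.0, stimuli=None):
        # Mood drifts slowly toward favorable/unfavorable conditions.
        self.mood += 0.05 * dt * (conditions - self.mood)
        # Emotions decay exponentially, then spike on new stimuli.
        for name in self.emotions:
            self.emotions[name] *= math.exp(-1.5 * dt)
        for name, intensity in (stimuli or {}).items():
            self.emotions[name] = min(1.0, self.emotions[name] + intensity)

    def expressiveness(self):
        # One way affect could bias behavior: gesture amplitude grows with
        # extraversion and positive mood, and shrinks under fear.
        return max(0.0, self.extraversion
                   + 0.3 * self.mood
                   - 0.5 * self.emotions["fear"])
```

Under this sketch, an extraverted robot (`TameAffect(extraversion=0.9)`) would keep large, engaging gestures until fear spikes or mood turns negative, echoing the trait/emotion interplay the framework describes.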
In this dissertation, the effectiveness of affective robotic behavior was explored and evaluated in a number of human-robot interaction studies with over 100 participants. In one of these studies, the impact of negative mood and the emotion of fear was assessed in a mock-up search-and-rescue scenario, where participants found the robot expressing affect more compelling, sincere, convincing, and "conscious" than its non-affective counterpart. Another study showed that different robotic personalities are better suited for different tasks: an extraverted robot was found to be more welcoming and fun as a museum guide, where an engaging and gregarious demeanor was expected, whereas an introverted robot was rated as more appropriate for a problem-solving task requiring concentration. To conclude, multi-faceted robotic affect can have far-reaching practical benefits for human-robot interaction, from making people feel more welcome where gregariousness is expected, to serving as an unobtrusive partner for problem-solving tasks, to saving people's lives in dangerous situations.
|
83 |
Joint attention in human-robot interaction. Huang, Chien-Ming, 07 July 2010.
Joint attention, a crucial component of interaction and an important milestone in human development, has recently drawn considerable attention from the robotics community. Robotics researchers have studied and implemented joint attention for robots to achieve natural human-robot interaction and to facilitate social learning. Most previous work on realizing joint attention in robotics has focused only on responding to joint attention and/or initiating joint attention. Responding to joint attention is the ability to follow another's direction of gaze and gestures in order to share common experience. Initiating joint attention is the ability to direct another's attention to a focus of interest in order to share experience. A third important component arises when the initiator verifies that the responders have actually shifted their attention. However, to the best of our knowledge, no prior work explicitly addresses the ability of a robot to ensure that joint attention is reached by the interacting agents. We refer to this ability as ensuring joint attention and recognize its importance in human-robot interaction.
We propose a computational model of joint attention consisting of three parts: responding to joint attention, initiating joint attention, and ensuring joint attention. This modular decomposition is supported by psychological findings and matches the human developmental timeline: infants start with the skill of following a caregiver's gaze, and then exhibit imperative and declarative pointing gestures to get a caregiver's attention. Importantly, as they age and their social skills mature, initiating actions increasingly come with an ensuring behavior, looking back and forth between the caregiver and the referent object to check whether the caregiver is attending to it.
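As an illustration of how ensuring layers on top of initiating, consider the sketch below; the `Agent` stub is a hypothetical stand-in for the perception and behavior interfaces, not the system built in the thesis:

```python
class Agent:
    """Minimal stub for an attention interface (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.focus = None
    def gaze_at(self, thing):
        self.focus = thing
    def point_at(self, thing):
        pass  # physical deixis stubbed out
    def attending_to(self, thing):
        return self.focus is thing

def initiate_and_ensure(robot, partner, target, max_checks=3):
    # Initiate: direct the partner's attention to the target.
    robot.gaze_at(target)
    robot.point_at(target)
    # Ensure: alternate gaze between partner and referent, as infants do,
    # until the partner is verifiably attending or we give up.
    for _ in range(max_checks):
        robot.gaze_at(partner)
        if partner.attending_to(target):
            return True          # joint attention reached
        robot.gaze_at(target)    # re-cue the referent and check again
    return False                 # caller may escalate, e.g., a verbal cue
```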
We conducted two experiments to investigate joint attention in human-robot interaction. The first experiment explored the effects of responding to joint attention. We hypothesized that humans would find robots that respond to joint attention more transparent, more competent, and more socially interactive; transparency helps people understand a robot's intention, facilitating better human-robot interaction, and positive perception of a robot improves the human-robot relationship. Our hypotheses were supported by quantitative data, questionnaire results, and behavioral observations. The second experiment studied the importance of ensuring joint attention. The results confirmed our hypotheses that robots that ensure joint attention yield better performance in interactive human-robot tasks and that ensuring behaviors are perceived as natural by humans. The findings suggest that social robots should use ensuring joint attention behaviors.
|
84 |
Generation and use of a discrete robotic controls alphabet for high-level tasks. Gargas, Eugene Frank, III, 06 April 2012.
The objective of this thesis is to generate a discrete alphabet of low-level robotic controllers rich enough to mimic the actions of users operating the robot on a specific high-level task. This alphabet is built by analyzing various user data sets expressed in a modified version of the motion description language MDLe. It can then be used to mimic the actions of a future user attempting the task by invoking scaled versions of the controls in the alphabet, potentially reducing the amount of data that must be transmitted to the robot, with minimal error.
In this thesis, theory is developed that allows the construction of such an alphabet, as well as its use in mimicking new actions. A MATLAB algorithm is then built to implement the theory. This is followed by an experiment in which various users drive a Khepera robot through different courses with a joystick. The thesis concludes by presenting results which suggest that a relatively small group of users can generate an alphabet capable of mimicking the actions of other users, while drastically reducing the required bandwidth.
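One way such an alphabet could be built and used, sketched under the assumption that the joystick logs are cut into fixed-length control segments; plain k-means stands in here for the MDLe-based analysis, which this sketch does not reproduce:

```python
import numpy as np

def build_alphabet(segments, k=8, iters=50):
    """Cluster (n, d) float control segments into k atoms via k-means."""
    rng = np.random.default_rng(0)
    atoms = segments[rng.choice(len(segments), size=k, replace=False)]
    for _ in range(iters):
        # Assign each segment to its nearest atom, then recenter atoms.
        labels = np.argmin(
            ((segments[:, None, :] - atoms[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                atoms[j] = segments[labels == j].mean(axis=0)
    return atoms

def encode(trajectory_segments, atoms):
    """Replace each segment with (atom index, least-squares scale), so only
    two numbers per segment need transmitting to the robot."""
    codes = []
    for seg in trajectory_segments:
        scales = (atoms @ seg) / np.maximum((atoms * atoms).sum(axis=1), 1e-9)
        errors = ((scales[:, None] * atoms - seg) ** 2).sum(axis=1)
        best = int(np.argmin(errors))
        codes.append((best, float(scales[best])))
    return codes
```

The bandwidth saving in this toy version comes from transmitting an index and a gain per segment instead of the raw control samples.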
|
85 |
Guided teaching interactions with robots: embodied queries and teaching heuristics. Cakmak, Maya, 17 May 2012.
The vision of personal robot assistants continues to become more realistic with technological advances in robotics. The increase in the capabilities of robots presents boundless opportunities for them to perform useful tasks for humans.
However, it is not feasible for engineers to program robots for all possible uses. Instead, we envision general-purpose robots that can be programmed by their end-users.
Learning from Demonstration (LfD) is an approach that allows users to program new capabilities on a robot by demonstrating what is required of it. Although LfD has become an established area of robotics, many challenges remain in making it effective and intuitive for naive users. This thesis contributes to addressing these challenges in several ways. First, the problems that occur in teaching-learning interactions between humans and robots are characterized through human-subject experiments in three different domains. To address these problems, two mechanisms for guiding human teachers in their interactions are developed: embodied queries and teaching heuristics.
Embodied queries, inspired by queries in Active Learning, are questions asked by the robot to steer the teacher towards providing more informative demonstrations. They leverage the robot's embodiment both to physically manipulate the environment and to communicate the question. Two technical contributions are made in developing embodied queries. The first is Active Keyframe-based LfD, a framework for learning human-segmented skills in continuous action spaces and producing four different types of embodied queries to improve learned skills. The second is Intermittently-Active Learning, in which a learner makes queries selectively, so as to create balanced interactions that retain the benefits of fully-active learning. Empirical findings from five experiments with human subjects are presented. These identify interaction-related issues in generating embodied queries, characterize human question asking, and evaluate implementations of Intermittently-Active Learning and Active Keyframe-based LfD on the humanoid robot Simon.
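The selective-query idea behind Intermittently-Active Learning can be sketched in a few lines; the learner interface (`uncertainty`, `query`, `predict_and_learn`) and the query-budget framing are hypothetical illustrations, not the thesis's API:

```python
def intermittently_active_step(learner, instance, budget, threshold=0.6):
    """Query the teacher only when the answer is likely informative, so
    the interaction stays balanced rather than fully active."""
    if budget > 0 and learner.uncertainty(instance) > threshold:
        learner.query(instance)          # ask: high expected information gain
        return budget - 1                # spend one unit of teacher patience
    learner.predict_and_learn(instance)  # otherwise proceed on current model
    return budget
```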
The second mechanism, teaching heuristics, is a set of instructions given to human teachers in order to elicit more informative demonstrations from them. Such instructions are devised based on an understanding of what constitutes an optimal teacher for a given learner, with techniques grounded in Algorithmic Teaching. The utility of teaching heuristics is empirically demonstrated through six human-subject experiments that involve teaching different concepts or tasks to a virtual agent, or teaching skills to Simon.
Across this diverse set of human-subject experiments, the thesis demonstrates the necessity of guiding humans in teaching interactions with robots, and verifies the utility of the two proposed mechanisms in improving sample efficiency and final performance while enhancing the user interaction.
|
86 |
Determining the Benefit of Human Input in Human-in-the-Loop Robotic Systems. Bringes, Christine Elizabeth, 01 January 2013.
This work analyzes human-in-the-loop robotic systems to determine where human input can be most beneficial to a collaborative task. This is accomplished by implementing a pick-and-place task using a human-in-the-loop robotic system and determining which segments of the task, when replaced by human guidance, provide the most improvement to overall task performance and require the least cognitive effort.
The first experiment entails implementing a pick-and-place task on a commercial robotic arm. Initially, we look at a pick-and-place task segmented into two main phases: a coarse approach towards a goal object and a fine pick motion. For the fine picking phase, we examine the importance of user guidance in terms of the position and orientation of the end effector. Results from this initial experiment show that the most successful strategy for our human-in-the-loop system is one in which the human specifies a general region for grasping and the robotic system completes the remaining elements of the task. We extend this study with a second experiment, utilizing a more complex robotic system and pick-and-place task to further analyze human impact in a human-in-the-loop system in a more realistic setting. In this experiment, we use a robotic system with an Xbox Kinect as its vision sensor, a more cluttered environment, and a pick-and-place task segmented in a way similar to the first experiment.
Results from the second experiment indicate that allowing the user to make fine-tuned adjustments to the position and orientation of the robotic hand can improve task success in high-noise situations in which the autonomous robotic system might otherwise fail. The experimental setups and procedures used in this thesis can be generalized to guide similar analyses of human impact in other human-in-the-loop systems performing other tasks.
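A sketch of the segmentation these experiments evaluate, with hypothetical interfaces throughout: the coarse approach stays autonomous, while the fine-grasp phase is optionally delegated to the human, who supplies only a rough grasp region:

```python
def run_pick_and_place(robot, human, human_guides_fine_phase=True):
    """Illustrative pipeline: segment the task and swap in human input
    for the fine-grasp phase only."""
    goal = robot.detect_object()          # e.g., from a Kinect point cloud
    robot.move_near(goal)                 # coarse approach: autonomous
    if human_guides_fine_phase:
        region = human.indicate_region()  # human marks a general grasp region
        grasp = robot.plan_grasp(region)  # robot refines within that region
    else:
        grasp = robot.plan_grasp(goal)    # fully autonomous baseline
    return robot.execute_grasp(grasp)
```

Comparing success rates and reported effort between the two branches is the kind of analysis the thesis performs over its task segments.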
|
87 |
Learning from human-generated reward. Knox, William Bradley, 15 February 2013.
Robots and other computational agents are increasingly becoming part of our daily lives. They will need to learn to perform new tasks, adapt to novel situations, and understand what their human users want, most of whom will not have programming skills. To achieve these ends, agents must learn from humans using methods of communication that are naturally accessible to everyone. This thesis presents and formalizes interactive shaping, one such teaching method, in which agents learn from real-valued reward signals generated by a human trainer. In interactive shaping, a human trainer observes an agent behaving in a task environment and delivers feedback signals. These signals are mapped to numeric values, through which the trainer specifies correct behavior to the agent. A solution to the problem of interactive shaping maps human reward to some objective such that maximizing that objective generally leads to the behavior the trainer desires.
Interactive shaping addresses the aforementioned needs of real-world agents. This teaching method allows human users to quickly teach agents the specific behaviors that they desire. Further, humans can shape agents without needing programming skills or even detailed knowledge of how to perform the task themselves. In contrast, algorithms that learn autonomously from only a pre-programmed evaluative signal often learn slowly, which is unacceptable for some real-world tasks with real-world costs. These autonomous algorithms additionally have an inflexibly defined set of optimal behaviors, changeable only through additional programming. Through interactive shaping, human users can (1) specify and teach desired behavior and (2) share task knowledge when correct behavior is already indirectly specified by an objective function. Additionally, computational agents that can be taught interactively by humans provide a unique opportunity to study how humans teach in a highly controlled setting, in which the computer agent’s behavior is parametrized.
This thesis answers the following question: how, and to what extent, can agents harness the information contained in human-generated reward signals to learn sequential decision-making tasks? The contributions begin with an operational definition of the problem of interactive shaping. Next, I introduce the TAMER framework, one solution to the problem of interactive shaping, and describe and analyze algorithmic implementations of the framework in multiple domains. The thesis also proposes and empirically examines algorithms for learning from both human reward and a pre-programmed reward function within an MDP, demonstrating two techniques that consistently outperform learning from either feedback signal alone. Subsequently, the focus shifts from the agent to the trainer, describing two psychological studies in which the trainer is manipulated, either by changing their perceived role or by having the agent intentionally misbehave at specific times; we examine the effect of these manipulations on trainer behavior and the agent's learned task performance. Lastly, I return to the problem of interactive shaping, examining a space of mappings from human reward to objective functions, where mappings differ by how much the agent discounts reward it expects to receive in the future. Through this investigation, a deep relationship is identified between discounting, the level of positivity in human reward, and training success. Specific constraints of human reward are identified (i.e., the "positive circuits" problem), as are strategies for overcoming them, pointing towards interactive shaping methods that are more effective than the already successful TAMER framework.
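A minimal TAMER-style learner might look like the sketch below (a reading of the approach, not Knox's implementation): regress a model of the trainer's reward over state-action features and act myopically on its predictions, which corresponds to heavily discounting future reward:

```python
import numpy as np

class HumanRewardLearner:
    """Learn a linear model of human reward and act greedily on it."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.lr = lr

    def update(self, phi, human_reward):
        # Online least squares on the trainer's numeric feedback; phi is
        # the feature vector of the credited state-action pair.
        self.w += self.lr * (human_reward - self.w @ phi) * phi

    def act(self, candidate_features):
        # Myopic choice: maximize predicted *human* reward for this step,
        # trusting the trainer's signal to already encode long-term value.
        return int(np.argmax([self.w @ phi for phi in candidate_features]))
```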
|
88 |
Requirements for effective collision detection on industrial serial manipulators. Schroeder, Kyle Anthony, 16 October 2013.
Human-robot interaction (HRI) is the future of robotics. It is essential in expanding markets such as surgical, medical, and therapy robots, but existing industrial systems can also benefit from safe and effective HRI. Many robots are now being fitted with joint torque sensors to enable effective human-robot collision detection, yet many existing and off-the-shelf industrial robotic systems are not equipped with these sensors. This work presents and demonstrates a method for effective collision detection on a system with motor current feedback instead of joint torque sensors, and evaluates its effectiveness by simulating collisions with human hands and arms. Joint torques are estimated from the input motor currents, with the friction and hysteresis losses estimated for each joint of an SIA5D 7 degree-of-freedom (DOF) manipulator. The estimated joint torques are validated by comparing them to the joint torques predicted by recursive application of the Newton-Euler equations. During a pick-and-place motion, the estimation error in joint 2 is less than 10 Newton meters; acceleration increases the estimation uncertainty, producing estimation errors of 20 Newton meters over the entire workspace. When the manipulator makes contact with the environment or a human, the same technique can be used to estimate contact torques from motor current. The current-estimated contact torque is validated against the torque calculated from a measured force, with an error in contact force of less than 10 Newtons. Collision detection is demonstrated on the SIA5D using the estimated joint torques, and its effectiveness is explored through simulated collisions with human hands and arms, both for a typical pick-and-place motion and for trajectories that traverse the entire workspace. The simulated forces and pressures are compared to acceptable maximums for human hands and arms. During pick-and-place motions with vertical and lateral end effector motions at 10 mm/s and 25 mm/s, the maximum forces and pressures remained below acceptable levels. At and near singular configurations some collisions can be difficult to detect; fortunately, these configurations are generally avoided for kinematic reasons.
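The residual test at the heart of this approach can be sketched as follows; the torque constants, friction model, and thresholds are illustrative stand-ins, and per the abstract the thresholds must sit above the 10 to 20 Newton meter estimation error:

```python
import numpy as np

def detect_collision(motor_currents, joint_velocities, torque_constants,
                     friction_model, expected_torques, thresholds):
    """Flag a collision when the current-estimated external torque at any
    joint exceeds its threshold (all arrays are per-joint)."""
    applied = motor_currents * torque_constants   # current -> motor torque
    applied -= friction_model(joint_velocities)   # remove friction/hysteresis
    # The residual vs. the Newton-Euler prediction for the commanded motion
    # approximates the external (contact) torque at each joint.
    residual = np.abs(applied - expected_torques)
    return bool(np.any(residual > thresholds)), residual
```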
|
89 |
Design and Evaluation of Affective Serious Games for Emotion Regulation Training. Jerčić, Petar, January 2015.
Emotions are thought to be one of the key factors that critically influence human decision-making. Emotion regulation can help mitigate emotion-related decision biases and eventually lead to better decision performance. Serious games have emerged as a technological approach to learning emotion regulation, in which meaningful biofeedback communicates the player's emotional state. Games are a series of interesting choices, and the design of those choices can support an educational platform for learning emotion regulation; digital serious games stand to benefit in particular, as those choices can be informed in real time by the player's physiological indicators of emotional state. This thesis explores design and evaluation methods for creating serious games in which emotion regulation can be learned and practiced. The design of a digital serious game using physiological measures of emotion was investigated and evaluated, along with emotions themselves and the effect of emotion regulation on decision performance in digital serious games. The scope of the thesis is limited to digital serious games for emotion regulation training that use psychophysiological methods to communicate the player's affective information. Using these psychophysiological methods in design and evaluation, emotions and their underlying neural mechanisms were explored, the effects of emotion regulation on measured decision performance were analyzed, and the proposed metrics for designing and evaluating such affective serious games were extensively evaluated. The research methods combine quantitative and qualitative aspects, through true experiments and evaluation research, respectively.
The results suggest that two different emotion regulation strategies, suppression and cognitive reappraisal, are optimal for different decision-task contexts, and that with careful design methods, valid serious games for training these strategies can be produced. Moreover, psychophysiological methods can map the underlying neural mechanism of emotion, which could inform a digital serious game of the optimal arousal level for a given task; the evidence suggests that arousal is equally or more important than valence for decision-making. The results further suggest that it is possible to design and develop digital serious games that provide a helpful learning environment in which decision makers can practice emotion regulation and subsequently improve their decision-making. If we assume that physiological arousal is more important than physiological valence for learning purposes, the results show that the digital serious games designed in this thesis elicit high physiological arousal, making them suitable as an educational platform.
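One illustrative biofeedback loop of the kind such a design could use (not the thesis's system): nudge the game's difficulty so that the player's measured arousal tracks a target level, reflecting the finding that arousal matters at least as much as valence:

```python
def adapt_difficulty(difficulty, measured_arousal, target=0.6, gain=0.1):
    """Proportional adaptation on normalized [0, 1] values: raise the
    challenge when the player is under-aroused, ease off when over-aroused.
    The target and gain are hypothetical tuning values."""
    difficulty += gain * (target - measured_arousal)
    return min(1.0, max(0.0, difficulty))
```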
|
90 |
Breaking the typecast: Revising roles for coordinating mixed teams. Long, Matthew T., 01 June 2007.
Heterogeneous multi-agent systems are currently used in a wide variety of situations, including search and rescue, military applications, and off-world exploration; however, it is difficult to understand the actions of these systems or to naturalistically assign such mixed teams to tasks. These agents, which may be human, robot, or software, have different capabilities but need to coordinate effectively with humans in order to operate. The first and largest factor contributing to this challenge is processing, understanding, and representing elements of the natural world in a manner that artificial agents can use. A second contributing factor is that current abstractions and robot architectures are ill-suited to the problem. This dissertation addresses the lack of a high-level abstraction for the naturalistic coordination of teams of heterogeneous robots, humans, and other agents through the development of roles.
Roles are a fundamental concept in social science that may provide this necessary abstraction. Roles are not a new concept and have been used in a number of related areas; this work draws from those fields and constructs a coherent, usable model of roles for robotics. The research focuses on answering the following question: can the use of social roles enable the naturalistic, coordinated operation of robots in a mixed setting? In addition to this primary question, the related research includes defining the key concepts important to artificial systems, providing a mapping and implementation from these concepts to a usable robot framework, and identifying a set of robot-specific roles for human-robot interaction. This research will benefit both the artificial intelligence agent and robotics communities, and it makes a fundamental contribution to the multi-agent community by extending and refining the role concept.
The application of roles in a principled and complete implementation is a novel contribution to both software and robotic agents. The creation of an open-source operational architecture that supports taskable robots is also a major contribution.
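A sketch of what a role abstraction might look like in code (illustrative, not the dissertation's architecture): a role names expected behavior plus the capabilities any agent, whether human, robot, or software, needs in order to assume it:

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    """A social role: a name plus the capabilities required to fill it."""
    name: str
    required_capabilities: set = field(default_factory=set)

    def suits(self, agent_capabilities):
        # An agent can assume the role if it covers all requirements.
        return self.required_capabilities <= set(agent_capabilities)

# Hypothetical mixed-team example: the same role can be filled by whichever
# agent has the needed capabilities, regardless of whether it is a person,
# a robot, or a software process.
scout = Role("scout", {"navigate", "sense_victims"})
print(scout.suits({"navigate", "sense_victims", "manipulate"}))  # True
print(scout.suits({"navigate"}))                                 # False
```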
|