81

An integrative framework of time-varying affective robotic behavior

Moshkina, Lilia V. 04 April 2011
As robots become more and more prevalent in our everyday life, making sure that our interactions with them are natural and satisfactory is of paramount importance. Given the propensity of humans to treat machines as social actors, and the integral role affect plays in human life, providing robots with affective responses is a step towards making our interaction with them more intuitive. To promote more natural, satisfying, and effective human-robot interaction, and to enhance robotic behavior in general, an integrative framework of time-varying affective robotic behavior was designed and implemented on a humanoid robot. This psychologically inspired framework (TAME) encompasses four different yet interrelated affective phenomena: personality Traits, affective Attitudes, Moods, and Emotions. Traits determine consistent patterns of behavior across situations and environments and are generally time-invariant; attitudes are long-lasting and reflect likes or dislikes towards particular objects, persons, or situations; moods are subtle and relatively short in duration, biasing behavior according to favorable or unfavorable conditions; and emotions provide a fast yet short-lived response to environmental contingencies. The software architecture incorporating the TAME framework was designed as a stand-alone process to promote platform independence and applicability to other domains. In this dissertation, the effectiveness of affective robotic behavior was explored and evaluated in a number of human-robot interaction studies with over 100 participants. In one of these studies, the impact of Negative Mood and the emotion of Fear was assessed in a mock-up search-and-rescue scenario, where participants found the robot expressing affect more compelling, sincere, convincing, and "conscious" than its non-affective counterpart. Another study showed that different robotic personalities are better suited for different tasks: an extraverted robot was found to be more welcoming and fun as a museum guide, a task where an engaging and gregarious demeanor was expected, whereas an introverted robot was rated as more appropriate for a problem-solving task requiring concentration. To conclude, multi-faceted robotic affect can have far-reaching practical benefits for human-robot interaction, from making people feel more welcome where gregariousness is expected, to serving as unobtrusive partners for problem-solving tasks, to saving people's lives in dangerous situations.
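The four TAME channels differ mainly in their time courses, which suggests a simple computational reading: traits as constants, attitudes as object-indexed values, moods as slowly drifting state, and emotions as fast-decaying transients. The sketch below illustrates that reading in Python; the class name, API, and update constants are illustrative assumptions, not the TAME implementation.

```python
import time

class TameAffectModule:
    """Minimal sketch of a TAME-style affect module (hypothetical API).

    Traits are fixed; attitudes are object-specific and long-lasting;
    moods drift slowly with conditions; emotions spike and decay fast.
    """

    def __init__(self, extraversion: float):
        self.extraversion = extraversion  # trait: time-invariant
        self.attitudes = {}               # object id -> like/dislike in [-1, 1]
        self.mood = 0.0                   # slow bias in [-1, 1]
        self.emotion = 0.0                # fast response, decays quickly
        self._last = time.time()

    def on_stimulus(self, obj: str, intensity: float) -> None:
        # Emotion: fast, short-lived response to an environmental event,
        # filtered through any stored attitude toward the object.
        self.emotion += intensity * (1.0 + self.attitudes.get(obj, 0.0))

    def step(self, conditions: float) -> float:
        """Advance the internal dynamics; return a bias for behavior selection.

        `conditions` summarizes how favorable the environment is, in [-1, 1].
        """
        now = time.time()
        dt = now - self._last
        self._last = now
        self.mood += 0.01 * dt * (conditions - self.mood)  # slow drift
        self.emotion *= max(0.0, 1.0 - 0.5 * dt)           # fast decay
        # The trait scales the expressiveness of the transient state.
        return self.extraversion * (self.mood + self.emotion)
```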
82

Joint attention in human-robot interaction

Huang, Chien-Ming 07 July 2010
Joint attention, a crucial component in interaction and an important milestone in human development, has recently drawn much attention from the robotics community. Robotics researchers have studied and implemented joint attention for robots for the purposes of achieving natural human-robot interaction and facilitating social learning. Most previous work on the realization of joint attention in the robotics community has focused only on responding to joint attention and/or initiating joint attention. Responding to joint attention is the ability to follow another's direction of gaze and gestures in order to share common experience. Initiating joint attention is the ability to manipulate another's attention to a focus of interest in order to share experience. A third important component of joint attention is ensuring, whereby the initiator verifies that the responders have shifted their attention. However, to the best of our knowledge, there is no work explicitly addressing the ability of a robot to ensure that joint attention is reached by interacting agents. We refer to this ability as ensuring joint attention and recognize its importance in human-robot interaction. We propose a computational model of joint attention consisting of three parts: responding to joint attention, initiating joint attention, and ensuring joint attention. This modular decomposition is supported by psychological findings and matches the developmental timeline of humans. Infants start with the skill of following a caregiver's gaze, and then they exhibit imperative and declarative pointing gestures to get a caregiver's attention. Importantly, as they age and their social skills mature, initiating actions often come with an ensuring behavior: looking back and forth between the caregiver and the referred object to see whether the caregiver is paying attention to it. We conducted two experiments to investigate joint attention in human-robot interaction. The first experiment explored the effects of responding to joint attention. We hypothesized that humans would find robots that respond to joint attention more transparent, more competent, and more socially interactive. Transparency helps people understand a robot's intention, facilitating better human-robot interaction, and positive perception of a robot improves the human-robot relationship. Our hypotheses were supported by quantitative data, questionnaire results, and behavioral observations. The second experiment studied the importance of ensuring joint attention. The results confirmed our hypotheses that robots that ensure joint attention yield better performance in interactive human-robot tasks and that ensuring joint attention behaviors are perceived as natural by humans. The findings suggest that social robots should use ensuring joint attention behaviors.
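The ensuring component lends itself to a simple gaze-alternation loop, mirroring the look-back-and-forth behavior described above. The following is a minimal sketch assuming hypothetical robot and responder interfaces (point_at, gaze_at, gaze_target); it is an illustration, not the thesis's computational model.

```python
def ensure_joint_attention(robot, responder, target, max_checks: int = 3) -> bool:
    """Sketch of an ensuring-joint-attention behavior (hypothetical interfaces).

    The robot initiates by pointing, then alternates gaze between the
    responder and the referent, checking whether attention has shifted.
    """
    robot.point_at(target)            # initiate: direct the responder's attention
    for _ in range(max_checks):
        robot.gaze_at(responder)      # look back at the responder...
        if responder.gaze_target() == target:
            return True               # joint attention reached
        robot.gaze_at(target)         # ...and back to the referent, cueing again
    return False                      # not ensured; caller might escalate (e.g., speech)
```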
83

Generation and use of a discrete robotic controls alphabet for high-level tasks

Gargas, Eugene Frank, III 06 April 2012
The objective of this thesis is to generate a discrete alphabet of low-level robotic controllers rich enough to mimic the actions of high-level users operating the robot for a specific task. This alphabet is built through the analysis of various user data sets in a modified version of the motion description language, MDLe. It can then be used to mimic the actions of a future user attempting to perform the task by calling scaled versions of the controls in the alphabet, potentially reducing the amount of data that must be transmitted to the robot while introducing minimal error. In this thesis, theory is developed that allows the construction of such an alphabet, as well as its use to mimic new actions. A MATLAB algorithm is then built to implement the theory. This is followed by an experiment in which various users drive a Khepera robot through different courses with a joystick. The thesis concludes by presenting results which suggest that a relatively small group of users can generate an alphabet capable of mimicking the actions of other users, while drastically reducing bandwidth.
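One way to picture the mimicry step: given a recorded control segment, choose the alphabet atom and scale factor that best reproduce it in a least-squares sense, so that only an index and a scalar need be transmitted. The sketch below (in Python rather than the thesis's MATLAB) illustrates that matching under the assumption that atoms and segments are control samples over a common window; it is not the thesis's algorithm.

```python
import numpy as np

def best_scaled_atom(segment: np.ndarray, alphabet: list) -> tuple:
    """Sketch: pick the control atom that, once scaled, best mimics a segment.

    Each atom and the segment are 1-D arrays of control samples (e.g., wheel
    speeds over a fixed window). The least-squares fit gives the optimal
    scale, so only (atom index, scale) need be sent instead of raw data.
    """
    best_index, best_scale, best_err = None, 0.0, np.inf
    for i, atom in enumerate(alphabet):
        scale = float(segment @ atom) / float(atom @ atom)  # least-squares scale
        err = float(np.linalg.norm(segment - scale * atom)) # residual error
        if err < best_err:
            best_index, best_scale, best_err = i, scale, err
    return best_index, best_scale, best_err
```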
84

Guided teaching interactions with robots: embodied queries and teaching heuristics

Cakmak, Maya 17 May 2012
The vision of personal robot assistants continues to become more realistic with technological advances in robotics. The increase in the capabilities of robots presents boundless opportunities for them to perform useful tasks for humans. However, it is not feasible for engineers to program robots for all possible uses. Instead, we envision general-purpose robots that can be programmed by their end-users. Learning from Demonstration (LfD) is an approach that allows users to program new capabilities on a robot by demonstrating what is required of it. Although LfD has become an established area of robotics, many challenges remain in making it effective and intuitive for naive users. This thesis contributes to addressing these challenges in several ways. First, the problems that occur in teaching-learning interactions between humans and robots are characterized through human-subject experiments in three different domains. To address these problems, two mechanisms for guiding human teachers in their interactions are developed: embodied queries and teaching heuristics. Embodied queries, inspired by Active Learning queries, are questions asked by the robot so as to steer the teacher towards providing more informative demonstrations. They leverage the robot's embodiment to physically manipulate the environment and to communicate the question. Two technical contributions are made in developing embodied queries. The first is Active Keyframe-based LfD -- a framework for learning human-segmented skills in continuous action spaces and producing four different types of embodied queries to improve learned skills. The second is Intermittently-Active Learning, in which a learner makes queries selectively, so as to create balanced interactions with the benefits of fully-active learning. Empirical findings from five experiments with human subjects are presented. These identify interaction-related issues in generating embodied queries, characterize human question asking, and evaluate implementations of Intermittently-Active Learning and Active Keyframe-based LfD on the humanoid robot Simon. The second mechanism, teaching heuristics, is a set of instructions given to human teachers in order to elicit more informative demonstrations from them. Such instructions are devised based on an understanding of what constitutes an optimal teacher for a given learner, with techniques grounded in Algorithmic Teaching. The utility of teaching heuristics is empirically demonstrated through six human-subject experiments that involve teaching different concepts or tasks to a virtual agent, or teaching skills to Simon. With a diverse set of human-subject experiments, this thesis demonstrates the necessity of guiding humans in teaching interactions with robots, and verifies the utility of the two proposed mechanisms in improving sample efficiency and final performance while enhancing the user interaction.
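Intermittently-Active Learning hinges on a selective query rule: ask only when a question is worth the interruption. A minimal sketch of one plausible rule follows; the uncertainty threshold and query-rate cap are assumptions for illustration, not Cakmak's formulation.

```python
def should_query(uncertainty: float, queries_made: int, steps: int,
                 threshold: float = 0.6, max_rate: float = 0.3) -> bool:
    """Sketch of an intermittently-active query rule (illustrative constants).

    Query only when the learner is uncertain AND queries have not dominated
    the interaction so far, keeping the exchange balanced for the teacher.
    """
    rate = queries_made / max(1, steps)   # fraction of steps spent querying
    return uncertainty > threshold and rate < max_rate
```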
85

Determining the Benefit of Human Input in Human-in-the-Loop Robotic Systems

Bringes, Christine Elizabeth 01 January 2013
This work analyzes human-in-the-loop robotic systems to determine where human input can be most beneficial to a collaborative task. This is accomplished by implementing a pick-and-place task using a human-in-the-loop robotic system and determining which segments of the task, when replaced by human guidance, provide the most improvement to overall task performance and require the least cognitive effort. The first experiment entails implementing a pick-and-place task on a commercial robotic arm. Initially, we look at a pick-and-place task that is segmented into two main areas: coarse approach towards a goal object and fine pick motion. For the fine picking phase, we look at the importance of user guidance in terms of position and orientation of the end effector. Results from this initial experiment show that the most successful strategy for our human-in-the-loop system is the one in which the human specifies a general region for grasping, and the robotic system completes the remaining elements of the task. We extend this study with a second experiment, utilizing a more complex robotic system and pick-and-place task to further analyze human impact in a human-in-the-loop system in a more realistic setting. In this experiment, we use a robotic system that employs an Xbox Kinect as a vision sensor, a more cluttered environment, and a pick-and-place task that we segment in a way similar to the first experiment. Results from the second experiment indicate that allowing the user to make fine-tuned adjustments to the position and orientation of the robotic hand can improve task success in high-noise situations in which the autonomous robotic system might otherwise fail. The experimental setups and procedures used in this thesis can be generalized and used to guide similar analyses of human impact in other human-in-the-loop systems performing other tasks.
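The best-performing division of labor found in the first experiment can be stated in a few lines: the human supplies only a coarse grasp region, and autonomy handles everything else. The sketch below assumes hypothetical robot and human interfaces and is meant only to make that division concrete, not to reproduce the thesis's system.

```python
def pick_and_place(robot, human) -> bool:
    """Sketch of the most successful human-in-the-loop strategy reported
    above (hypothetical interfaces): human gives a coarse region, the
    robot completes approach, fine grasp planning, and placement."""
    region = human.select_grasp_region()   # human input: coarse region only
    robot.approach(region.center)          # autonomy: coarse approach
    grasp = robot.plan_grasp(region)       # autonomy: fine pose within region
    robot.execute_grasp(grasp)
    robot.place_at(robot.goal_location)
    return robot.grasp_succeeded()
```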
86

Learning from human-generated reward

Knox, William Bradley 15 February 2013
Robots and other computational agents are increasingly becoming part of our daily lives. They will need to be able to learn to perform new tasks, adapt to novel situations, and understand what is wanted by their human users, most of whom will not have programming skills. To achieve these ends, agents must learn from humans using methods of communication that are naturally accessible to everyone. This thesis presents and formalizes interactive shaping, one such teaching method, where agents learn from real-valued reward signals that are generated by a human trainer. In interactive shaping, a human trainer observes an agent behaving in a task environment and delivers feedback signals. These signals are mapped to numeric values, which are used by the agent to specify correct behavior. A solution to the problem of interactive shaping maps human reward to some objective such that maximizing that objective generally leads to the behavior that the trainer desires. Interactive shaping addresses the aforementioned needs of real-world agents. This teaching method allows human users to quickly teach agents the specific behaviors that they desire. Further, humans can shape agents without needing programming skills or even detailed knowledge of how to perform the task themselves. In contrast, algorithms that learn autonomously from only a pre-programmed evaluative signal often learn slowly, which is unacceptable for some real-world tasks with real-world costs. These autonomous algorithms additionally have an inflexibly defined set of optimal behaviors, changeable only through additional programming. Through interactive shaping, human users can (1) specify and teach desired behavior and (2) share task knowledge when correct behavior is already indirectly specified by an objective function. Additionally, computational agents that can be taught interactively by humans provide a unique opportunity to study how humans teach in a highly controlled setting, in which the computer agent’s behavior is parametrized. This thesis answers the following question. How and to what extent can agents harness the information contained in human-generated signals of reward to learn sequential decision-making tasks? The contributions of this thesis begin with an operational definition of the problem of interactive shaping. Next, I introduce the tamer framework, one solution to the problem of interactive shaping, and describe and analyze algorithmic implementations of the framework within multiple domains. This thesis also proposes and empirically examines algorithms for learning from both human reward and a pre-programmed reward function within an MDP, demonstrating two techniques that consistently outperform learning from either feedback signal alone. Subsequently, the thesis shifts its focus from the agent to the trainer, describing two psychological studies in which the trainer is manipulated by either changing their perceived role or by having the agent intentionally misbehave at specific times; we examine the effect of these manipulations on trainer behavior and the agent’s learned task performance. Lastly, I return to the problem of interactive shaping, for which we examine a space of mappings from human reward to objective functions, where mappings differ by how much the agent discounts reward it expects to receive in the future. Through this investigation, a deep relationship is identified between discounting, the level of positivity in human reward, and training success. 
Specific constraints of human reward are identified (i.e., the “positive circuits” problem), as are strategies for overcoming these constraints, pointing towards interactive shaping methods that are more effective than the already successful tamer framework.
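At its core, a TAMER-style learner treats human reward as a label for supervised learning rather than as a return to be maximized over a discounted future: it fits a model of the trainer's reward and acts greedily on it. The sketch below is a much-simplified illustration with a linear model; it omits TAMER's credit assignment over delayed feedback, and all names and constants are assumptions.

```python
import numpy as np

class TamerStyleAgent:
    """Simplified sketch of learning from human-generated reward.

    A linear model per action predicts the trainer's reward for a state;
    the agent acts greedily on the prediction, with no discounting.
    """

    def __init__(self, n_features: int, n_actions: int, lr: float = 0.1):
        self.w = np.zeros((n_actions, n_features))  # one weight vector per action
        self.lr = lr

    def predict(self, features: np.ndarray) -> np.ndarray:
        return self.w @ features                    # predicted human reward per action

    def act(self, features: np.ndarray) -> int:
        return int(np.argmax(self.predict(features)))  # greedy on predicted reward

    def update(self, features: np.ndarray, action: int, human_reward: float) -> None:
        # Supervised step toward the trainer's signal for the taken action.
        error = human_reward - float(self.w[action] @ features)
        self.w[action] += self.lr * error * features
```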
87

Requirements for effective collision detection on industrial serial manipulators

Schroeder, Kyle Anthony 16 October 2013
Human-robot interaction (HRI) is the future of robotics. It is essential in expanding markets such as surgical, medical, and therapy robots. However, existing industrial systems can also benefit from safe and effective HRI. Many robots are now being fitted with joint torque sensors to enable effective human-robot collision detection, but many existing and off-the-shelf industrial robotic systems are not equipped with these sensors. This work presents and demonstrates a method for effective collision detection on a system with motor current feedback instead of joint torque sensors. The effectiveness of this method is also evaluated by simulating collisions with human hands and arms. Joint torques are estimated from the input motor currents. The joint friction and hysteresis losses are estimated for each joint of an SIA5D 7-degree-of-freedom (DOF) manipulator. The estimated joint torques are validated by comparing them to joint torques predicted by the recursive application of the Newton-Euler equations. During a pick-and-place motion, the estimation error in joint 2 is less than 10 Newton meters. Acceleration increased the estimation uncertainty, resulting in estimation errors of 20 Newton meters over the entire workspace. When the manipulator makes contact with the environment or a human, the same technique can be used to estimate contact torques from motor current. The current-estimated contact torque is validated against the torque calculated from a measured force. The error in contact force is less than 10 Newtons. Collision detection is demonstrated on the SIA5D using estimated joint torques. The effectiveness of the collision detection is explored through simulated collisions with human hands and arms. Simulated collisions are performed both for a typical pick-and-place motion and for trajectories that traverse the entire workspace. The simulated forces and pressures are compared to acceptable maximums for human hands and arms. During pick-and-place motions with vertical and lateral end effector motions at 10 mm/s and 25 mm/s, the maximum forces and pressures remained below acceptable levels. At and near singular configurations some collisions can be difficult to detect. Fortunately, these configurations are generally avoided for kinematic reasons.
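The detection rule itself is compact: estimate joint torque from motor current, subtract modeled friction, and compare against the Newton-Euler prediction. A hedged one-function sketch follows; the parameter names are illustrative, and the 20 Nm threshold simply echoes the workspace-wide estimation uncertainty quoted above.

```python
def detect_collision(motor_current: float, torque_constant: float,
                     gear_ratio: float, friction_torque: float,
                     model_torque: float, threshold_nm: float = 20.0) -> bool:
    """Sketch of current-based collision detection for one joint.

    Estimates joint torque from motor current (motor torque constant times
    gear ratio), removes the modeled friction/hysteresis loss, and flags a
    collision when the residual against the Newton-Euler model torque
    exceeds a threshold chosen above the estimation uncertainty (all in Nm).
    """
    estimated_torque = torque_constant * gear_ratio * motor_current - friction_torque
    residual = abs(estimated_torque - model_torque)
    return residual > threshold_nm
```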
88

Design and Evaluation of Affective Serious Games for Emotion Regulation Training

Jerčić, Petar January 2015
Emotions are thought to be one of the key factors that critically influence human decision-making. Emotion regulation can help mitigate emotion-related decision biases and eventually lead to better decision performance. Serious games have emerged as a new approach that brings technological methods to learning emotion regulation, in which meaningful biofeedback communicates the player's emotional state. Games are a series of interesting choices, and the design of those choices can support an educational platform for learning emotion regulation. Digital serious games can benefit from such design because those choices can be informed, through the player's physiology, about emotional states in real time. This thesis explores design and evaluation methods for creating serious games in which emotion regulation can be learned and practiced. The design of a digital serious game using physiological measures of emotion was investigated and evaluated. Furthermore, the thesis investigates emotions and the effect of emotion regulation on decision performance in digital serious games. Its scope was limited to digital serious games for emotion regulation training that use psychophysiological methods to communicate the player's affective information. Using these psychophysiological methods in the design and evaluation of digital serious games, emotions and their underlying neural mechanisms were explored. The effects of emotion regulation were investigated by measuring and analyzing decision performance. The proposed metrics for designing and evaluating such affective serious games were extensively evaluated. The research methods used in this thesis were both quantitative and qualitative, combining true experiments with evaluation research. In the digital serious games approach to emotion regulation investigated here, the player's physiology of emotions informs the design of interactions in which regulation of those emotions can be practiced. The results suggest that two different emotion regulation strategies, suppression and cognitive reappraisal, are optimal for different decision-task contexts. With careful design methods, valid serious games for training these different strategies can be produced. Moreover, using psychophysiological methods, the underlying neural mechanism of emotion can be mapped. This can inform a digital serious game about the optimal level of arousal for a given task, as evidence suggests that arousal is equally or more important than valence for decision-making. The results suggest that it is possible to design and develop digital serious game applications that provide a helpful learning environment in which decision makers can practice emotion regulation and subsequently improve their decision-making. If we assume that physiological arousal is more important than physiological valence for learning purposes, the results show that the digital serious games designed in this thesis elicit high physiological arousal, making them suitable for use as an educational platform.
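If arousal is the dominant signal for learning, one natural design pattern is to close the loop from measured arousal to game difficulty. The sketch below shows that pattern in Python; the target band and step size are illustrative assumptions, not the designs evaluated in the thesis.

```python
def adapt_difficulty(arousal: float, difficulty: float = 0.5,
                     band: tuple = (0.4, 0.7), step: float = 0.05) -> float:
    """Sketch of biofeedback-driven adaptation (assumed mechanics).

    Keeps normalized physiological arousal (0..1) inside a task-appropriate
    band by nudging game difficulty (0..1) up when the player is
    under-aroused and down when over-aroused.
    """
    low, high = band
    if arousal < low:
        difficulty = min(1.0, difficulty + step)   # under-aroused: challenge more
    elif arousal > high:
        difficulty = max(0.0, difficulty - step)   # over-aroused: ease off
    return difficulty
```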
89

Breaking the typecast: Revising roles for coordinating mixed teams

Long, Matthew T 01 June 2007
Heterogeneous multi-agent systems are currently used in a wide variety of situations, including search and rescue, military applications, and off-world exploration. However, it is difficult to understand the actions of these systems or to naturalistically assign these mixed teams to tasks. These agents, which may be human, robot, or software, have different capabilities but need to coordinate effectively with humans in order to operate. The first and largest contributing factor to this challenge is processing, understanding, and representing elements of the natural world in a manner that artificial agents can use. A second contributing factor is that current abstractions and robot architectures are ill-suited to address this problem. This dissertation addresses the lack of a high-level abstraction for the naturalistic coordination of teams of heterogeneous robots, humans, and other agents through the development of roles. Roles are a fundamental concept of social science that may provide this necessary abstraction. Roles are not a new concept and have been used in a number of related areas. This work draws from these fields and constructs a coherent and usable model of roles for robotics. The research is focused on answering the following question: can the use of social roles enable the naturalistic coordinated operation of robots in a mixed setting? In addition to this primary question, related research includes defining the key concepts important to artificial systems, providing a mapping and implementation from these concepts to a usable robot framework, and identifying a set of robot-specific roles used for human-robot interaction. This research will benefit both the artificial intelligence agent and robotics communities. It makes a fundamental contribution to the multi-agent community because it extends and refines the role concept. The application of roles in a principled and complete implementation is a novel contribution to both software and robotic agents. The creation of an open-source operational architecture that supports taskable robots is also a major contribution.
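One way to make the role abstraction concrete in software: define a role as a bundle of required capabilities plus an enactment, and assign agents to roles by capability matching. The sketch below is a hypothetical interface under those assumptions, not the dissertation's architecture.

```python
from abc import ABC, abstractmethod

class Role(ABC):
    """Sketch of a social-role abstraction for mixed teams: a role bundles
    the expectations any agent (human, robot, or software) must satisfy."""

    @abstractmethod
    def required_capabilities(self) -> set:
        """Capability names an agent must have to take on this role."""

    @abstractmethod
    def enact(self, agent, task) -> None:
        """Carry out the role's responsibilities for a task."""

def assign_roles(agents: list, roles: list) -> dict:
    """Naive capability-matching assignment: first unassigned agent whose
    capability set covers the role's requirements takes the role."""
    assignment = {}
    for role in roles:
        for agent in agents:
            if agent not in assignment.values() and \
               role.required_capabilities() <= agent.capabilities:
                assignment[role] = agent
                break
    return assignment
```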
90

Lexical vagueness handling using fuzzy logic in human robot interaction

Guo, Xiao January 2011
Lexical vagueness is a ubiquitous phenomenon in natural language. Most previous work in natural language processing (NLP) treats lexical ambiguity, rather than lexical vagueness, as the main problem in natural language understanding. Indeed, lexical vagueness is usually considered a solution rather than a problem in natural language understanding, since precise information often cannot be provided in conversation. However, lexical vagueness is clearly an obstacle in human-robot interaction (HRI), since robots are expected to understand their users' utterances precisely in order to provide reliable services. This research aims to develop novel lexical vagueness handling techniques that enable service robots to understand their users' utterances precisely so that they can provide reliable services. A novel integrated system to handle lexical vagueness is proposed, based on an in-depth understanding of lexical ambiguity and lexical vagueness: why they exist, how they are presented, how they differ, and the mainstream techniques for handling each. The integrated system consists of two blocks: one for lexical ambiguity handling and one for lexical vagueness handling. The lexical ambiguity block first removes syntactic and lexical ambiguity. The lexical vagueness block is then used to model and remove lexical vagueness. Experimental results show that robots endowed with the developed integrated system are able to understand their users' utterances and can therefore provide reliable services to their users.
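Fuzzy logic handles vagueness by replacing crisp predicates with graded membership. As a toy illustration (not the thesis's system), a vague spatial term like "near" can be modeled as a membership function over distance, which a robot can then maximize over candidate interpretations; all parameters below are assumptions.

```python
def fuzzy_near(distance_m: float, core: float = 0.5, support: float = 2.0) -> float:
    """Sketch: fuzzy membership for the vague term "near" (illustrative
    parameters). Full membership within `core` meters, falling linearly
    to zero membership at `support` meters."""
    if distance_m <= core:
        return 1.0
    if distance_m >= support:
        return 0.0
    return (support - distance_m) / (support - core)

# A robot told "go near the table" could pick the candidate position
# with the highest membership value:
candidates = {"A": 0.3, "B": 1.0, "C": 2.5}            # name -> distance (m)
best = max(candidates, key=lambda p: fuzzy_near(candidates[p]))
print(best)  # "A": highest membership in "near"
```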
