91 |
Lexical vagueness handling using fuzzy logic in human robot interaction / Guo, Xiao. January 2011 (has links)
Lexical vagueness is a ubiquitous phenomenon in natural language. Most previous work in natural language processing (NLP) treats lexical ambiguity, rather than lexical vagueness, as the main problem in natural language understanding. Lexical vagueness is usually regarded as a solution rather than a problem, since conversations rarely supply fully precise information. In human robot interaction (HRI), however, lexical vagueness is clearly an obstacle: robots are expected to understand their users' utterances precisely in order to provide reliable services. This research aims to develop novel lexical vagueness handling techniques that enable service robots to understand their users' utterances precisely and thus provide reliable services. A novel integrated system for handling lexical vagueness is proposed, based on an in-depth analysis of lexical ambiguity and lexical vagueness: why they exist, how they are expressed, how they differ, and the mainstream techniques for handling each. The integrated system consists of two blocks: a lexical ambiguity handling block and a lexical vagueness handling block. The first block removes syntactic ambiguity and lexical ambiguity; the second then models and removes lexical vagueness. Experimental results show that robots equipped with the integrated system are able to understand their users' utterances and can therefore provide reliable services.
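As an illustrative aside (not code from the thesis), the fuzzy-logic treatment of vagueness described above can be sketched in a few lines: a vague spatial word maps to a graded membership function rather than a crisp predicate, and the robot resolves a vague instruction by picking the referent with the highest membership. The word "near" and its thresholds here are assumptions chosen for the example.

```python
def membership_near(distance_m, full=0.5, zero=3.0):
    """Fuzzy membership of 'near': 1.0 within `full` metres,
    falling linearly to 0.0 at `zero` metres (illustrative thresholds)."""
    if distance_m <= full:
        return 1.0
    if distance_m >= zero:
        return 0.0
    return (zero - distance_m) / (zero - full)

def resolve_vague_referent(candidates, mu):
    """Pick the (name, distance) candidate that best fits the vague term."""
    return max(candidates, key=lambda c: mu(c[1]))

# "Bring me the near cup": grade each candidate object by 'near'-ness.
objects = [("cup", 0.4), ("chair", 1.2), ("door", 2.8)]
best = resolve_vague_referent(objects, membership_near)
```

A real system would combine several such membership functions (for size, colour intensity, distance, and so on) before defuzzifying to a single referent.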
|
92 |
Framing Human-Robot Task Communication as a Partially Observable Markov Decision Process / Woodward, Mark P. 10 August 2012 (has links)
As general-purpose robots become more capable, pre-programming of all tasks at the factory will become less practical. We would like for non-technical human owners to be able to communicate, through interaction with their robot, the details of a new task; I call this interaction "task communication". During task communication the robot must infer the details of the task from unstructured human signals, and it must choose actions that facilitate this inference. In this dissertation I propose the use of a partially observable Markov decision process (POMDP) for representing the task communication problem, with the unobservable task details and unobservable intentions of the human teacher captured in the state, with all signals from the human represented as observations, and with the cost function chosen to penalize uncertainty. This dissertation presents the framework, works through an example of framing task communication as a POMDP, and presents results from a user experiment where subjects communicated a task to a POMDP-controlled virtual robot and to a human-controlled virtual robot. The task communicated in the experiment consisted of a single object movement, and the communication was limited to binary approval signals from the teacher. This dissertation makes three contributions: 1) It frames human-robot task communication as a POMDP, a widely used framework. This enables the leveraging of techniques developed for other problems framed as POMDPs. 2) It provides an example of framing a task communication problem as a POMDP. 3) It validates the framework through results from a user experiment. The results suggest that the proposed POMDP framework produces robots that are robust to teacher error, that can accurately infer task details, and that are perceived to be intelligent. / Engineering and Applied Sciences
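The core of the POMDP framing above, reduced to its simplest form, is a Bayes filter over hypotheses about the hidden task, updated from the teacher's binary approval signals with an explicit teacher-error model. The sketch below is purely illustrative (the hypothesis representation and action names are assumptions, not the dissertation's implementation), but it shows why the approach is robust to teacher error: a wrong approval merely reweights the belief rather than corrupting it.

```python
def update_belief(belief, action, approve, p_correct=0.9):
    """Bayes filter over task hypotheses (the belief side of the POMDP).
    Each hypothesis maps a robot action to True when that action is
    consistent with the task; the teacher's binary approval signal is
    assumed correct with probability p_correct (teacher-error model)."""
    posterior = {}
    for task, prior in belief.items():
        consistent = task(action)
        likelihood = p_correct if approve == consistent else 1.0 - p_correct
        posterior[task] = prior * likelihood
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

# Two candidate single-object-movement tasks (illustrative names).
move_to_a = lambda act: act == "toward_A"
move_to_b = lambda act: act == "toward_B"
belief = {move_to_a: 0.5, move_to_b: 0.5}

# Robot moves toward A; teacher approves; belief shifts to task A.
belief = update_belief(belief, "toward_A", approve=True)
```

In the full POMDP the robot would additionally choose the action that most reduces expected uncertainty (the cost function penalizing uncertainty); this sketch shows only the inference half.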
|
93 |
Compliance Control of Robot Manipulator for Safe Physical Human Robot Interaction / Ahmed, Muhammad Rehan. January 2011 (has links)
Inspiration from biological systems suggests that robots should demonstrate the same level of capability that biological systems exhibit in performing safe and successful interaction with humans. The major challenge in physical human robot interaction tasks in anthropic environments is the safe sharing of the robot's workspace, such that the robot will not cause harm or injury to the human under any operating condition. Embedding human-like adaptable compliance characteristics into robot manipulators can provide safe physical human robot interaction in constrained motion tasks. In robotics, this property can be achieved using active, passive, and semi-active compliant actuation devices. Traditional methods of active and passive compliance lead to complex control systems and complex mechanical designs. In this thesis we present a compliant robot manipulator system with a semi-active compliant device based on a magnetorheological (MR) fluid actuation mechanism. Human-like adaptable compliance is achieved by controlling the properties of the MR fluid inside the joint actuator. This method offers high operational accuracy, intrinsic safety, and high absorption of impacts. Safety is assured by the mechanism design rather than by the conventional approach based on advanced control. Control schemes for implementing adaptable compliance run in parallel with the robot motion control, yielding a much simpler interaction control strategy than other methods. Here we address two main issues: human robot collision safety and robot motion performance. We present existing human robot collision safety standards and evaluate the proposed actuation mechanism on the basis of static and dynamic collision tests. Static collision safety analysis is based on Yamada's safety criterion, and the adaptable compliance control scheme keeps the robot in the safe region of operation.
For the dynamic collision safety analysis, Yamada's impact force criterion and the head injury criterion are employed. Experimental results validate the effectiveness of our solution. In addition, the results with the head injury criterion showed the need to investigate human biomechanics in more detail in order to acquire adequate knowledge for estimating the injury severity index for robots interacting with humans. We analyzed the robot motion performance in several physical human robot interaction tasks. Three interaction scenarios are studied to simulate human robot physical contact in direct and inadvertent contact situations. The respective control disciplines for the joint actuators are designed and implemented with a much simplified adaptable compliance control scheme. A series of experimental tests in direct and inadvertent contact situations validates our solution of implementing human-like adaptable compliance during robot motion and demonstrates safe interaction with humans in anthropic domains.
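The idea of semi-active adaptable compliance can be illustrated with a minimal sketch (not the thesis's controller; thresholds and coefficient ranges are invented for the example): an MR-fluid joint commands a high damping coefficient in free motion for accuracy, and drops toward a low coefficient as contact force rises, softening the joint on impact.

```python
def mr_damping(contact_force, b_min=0.5, b_max=20.0, f_safe=15.0):
    """Semi-active compliance sketch: interpolate the commanded damping
    coefficient from stiff/accurate (b_max, free motion) down to soft
    (b_min) as the measured contact force approaches f_safe.
    All parameter values are illustrative assumptions."""
    ratio = min(abs(contact_force) / f_safe, 1.0)
    return b_max - ratio * (b_max - b_min)

# Free motion -> stiff joint; hard contact -> fully compliant joint.
b_free = mr_damping(0.0)      # 20.0
b_contact = mr_damping(30.0)  # 0.5
```

This runs alongside, not inside, the motion controller, which is what keeps the interaction control strategy simple compared with fully active compliance.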
|
94 |
An Augmented Reality Human-Robot Collaboration System / Green, Scott Armstrong. January 2008 (has links)
Although robotics is well established as a research field, there has been relatively little work on human-robot collaboration. This type of collaboration is going to become an increasingly important issue as robots work ever more closely with humans. Clearly, there is a growing need for research on human-robot collaboration and communication between humans and robotic systems.
Research into human-human communication can be used as a starting point in developing a robust human-robot collaboration system. Previous research into collaborative efforts with humans has shown that grounding, situational awareness, a common frame of reference and spatial referencing are vital in effective communication. Therefore, these items comprise a list of required attributes of an effective human-robot collaborative system.
Augmented Reality (AR) is a technology for overlaying three-dimensional virtual graphics onto the user's view of the real world. It also allows for real time interaction with these virtual graphics, enabling a user to reach into the augmented world and manipulate it directly. The internal state of a robot and its intended actions can be displayed through the virtual imagery in the AR environment. Therefore, AR can bridge the divide between human and robotic systems and enable effective human-robot collaboration.
This thesis describes the work involved in developing the Augmented Reality Human-Robot Collaboration (AR-HRC) System. It first garners design criteria for the system from a review of communication and collaboration in human-human interaction, the current state of Human-Robot Interaction (HRI) and related work in AR. A review of research in multimodal interfaces is then provided, highlighting the benefits of using such an interface design. Accordingly, an AR multimodal interface was developed to determine if this type of design improved performance over a single-modality design. Indeed, the multimodal interface was found to improve performance, thereby providing the impetus to use a multimodal design approach for the AR-HRC system.
The architectural design of the system is then presented. A user study conducted to determine what kind of interaction people would use when collaborating with a mobile robot is discussed and then the integration of a mobile robot is described. Finally, an evaluation of the AR-HRC system is presented.
|
95 |
Optimal behavior composition for robotics / Bartholomew, Paul D. 22 May 2014 (has links)
The development of a humanoid robot that mimics human motion requires extensive programming as well as understanding the motion limitations of the robot. Programming the countless possibilities for a robot's response to observed human motion can be time consuming. To simplify this process, this thesis presents a new approach for mimicking captured human motion data through the development of a composition routine. This routine is built upon a behavior-based framework and is coupled with optimization by calculus to determine the appropriate weightings of predetermined motion behaviors. The completion of this thesis helps to fill a void in human-robot interaction involving mimicry and behavior-based design. Technological advancements in the way computers and robots identify human motion and determine for themselves how to approximate that motion have helped make possible the mimicry of observed human subjects. In fact, many researchers have developed humanoid systems that are capable of mimicking human motion data; however, these systems do not use behavior-based design. This thesis will explain the framework and theory behind our optimal behavior composition algorithm and the selection of sinusoidal motion primitives that make up a behavior library. This algorithm breaks captured motion data into various time intervals, then optimally weights the defined behaviors to best approximate the captured data. Since this routine does not reference previous or following motion sequences, discontinuities may exist between time intervals. To address this issue, the addition of a PI controller to regulate and smooth out the transitions between time intervals will be shown. The effectiveness of using the optimal behavior composition algorithm to create an approximated motion that mimics captured motion data will be demonstrated through an example configuration of hardware and a humanoid robot platform.
An example of arm motion mimicry will be presented and includes various image sequences from the mimicry as well as trajectories containing the joint positions for both the human and the robot.
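The weighting step described above, finding the behavior weights that best approximate the captured data over one time interval, is in essence a least-squares problem over the primitive library. A minimal sketch (illustrative, not the thesis's algorithm: a two-primitive library and hand-solved 2x2 normal equations stand in for the general case):

```python
import math

def compose_weights(t, y, primitives):
    """Least-squares weights for two behavior primitives over one
    time interval: minimize sum_k (y_k - w0*p0(t_k) - w1*p1(t_k))^2
    via the 2x2 normal equations (Cramer's rule)."""
    p0 = [primitives[0](tk) for tk in t]
    p1 = [primitives[1](tk) for tk in t]
    a11 = sum(x * x for x in p0)
    a22 = sum(x * x for x in p1)
    a12 = sum(x0 * x1 for x0, x1 in zip(p0, p1))
    b1 = sum(x * yk for x, yk in zip(p0, y))
    b2 = sum(x * yk for x, yk in zip(p1, y))
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - b2 * a12) / det, (b2 * a11 - b1 * a12) / det)

# Sinusoidal behavior library; target is a known mix, so the
# optimal weights should be recovered exactly.
prims = [math.sin, math.cos]
ts = [k * 0.1 for k in range(50)]
target = [2.0 * math.sin(tk) - 0.5 * math.cos(tk) for tk in ts]
w = compose_weights(ts, target, prims)
```

Solving each interval independently is what produces the boundary discontinuities the abstract mentions; the PI controller then smooths the transition between consecutive intervals.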
|
96 |
Towards quantifying upper-arm rehabilitation metrics for children through interaction with a humanoid robot / Brooks, Douglas A. 24 April 2012 (has links)
The objective of this research effort is to further rehabilitation techniques for children by developing and validating the core technologies needed to integrate therapy instruction with child-robot play interaction in order to improve upper-arm rehabilitation. Using computer vision techniques such as Motion History Imaging (MHI), Multimodal Mean, edge detection, and Random Sample Consensus (RANSAC), movements can be quantified through robot observation. By also incorporating three-dimensional data, obtained via an infrared projector and processed with Principal Component Analysis (PCA), depth information can be used to create a robust algorithm. Finally, utilizing prior knowledge of exercise data, physical therapeutic metrics, and novel approaches, a mapping to therapist instructions can be created, allowing robotic feedback and intelligent interaction.
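Of the vision techniques listed, the Motion History Image is the simplest to sketch: each pixel where motion is currently detected is stamped with a timestamp value, and pixels with no motion decay toward zero, so recent movement appears bright and older movement fades. The toy implementation below over plain 2D lists is illustrative only (real systems operate on camera frames, e.g. via OpenCV's motion templates).

```python
def update_mhi(mhi, motion_mask, tau=10, delta=1):
    """Motion History Image update: pixels where motion_mask is truthy
    are set to tau (most recent); elsewhere the stored history value
    decays by delta, floored at zero."""
    rows, cols = len(mhi), len(mhi[0])
    return [[tau if motion_mask[r][c] else max(mhi[r][c] - delta, 0)
             for c in range(cols)]
            for r in range(rows)]

# One update on a 2x2 image: motion at the top-left pixel only.
mhi = [[0, 5],
       [10, 0]]
mask = [[1, 0],
        [0, 0]]
mhi = update_mhi(mhi, mask)  # [[10, 4], [9, 0]]
```

The gradient of the resulting image encodes the direction and recency of the arm movement, which is what makes MHI useful for quantifying exercise motions.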
|
97 |
Human coordination of robot teams: an empirical study of multimodal interface design / Cross, E. Vincent; Gilbert, Juan E. January 2009 (has links)
Thesis (Ph. D.)--Auburn University. / Abstract. Includes bibliographical references (p. 86-89).
|
98 |
Metrics to evaluate human teaching engagement from a robot's point of view / Novanda, Ori. January 2017 (has links)
This thesis was motivated by a study of how robots can be taught by humans, with an emphasis on allowing persons without programming skills to teach robots. The focus of this thesis was to investigate what criteria could or should be used by a robot to evaluate whether a human teacher is (or potentially could be) a good teacher in robot learning by demonstration; in effect, choosing the teacher that maximizes the benefit to the robot of learning by imitation/demonstration. The study approached this topic by taking a technology snapshot in time, asking whether a representative example of research-laboratory robot technology is capable of assessing teaching quality. With this snapshot, the study evaluated how humans observe teaching quality, in an attempt to establish measurement metrics that can be transferred as rules or algorithms beneficial from a robot's point of view. To evaluate teaching quality, the study looked at the teacher-student relationship from a human-human interaction perspective. Two factors were considered important in defining a good teacher: engagement and immediacy. Further literature was reviewed on the detailed elements of engagement and immediacy, and physical effort was examined as a possible metric for measuring the teachers' level of engagement. An investigatory experiment evaluated which modality participants prefer when teaching a robot that can be taught by voice, gesture demonstration, or physical manipulation. The findings suggested that the participants had no preference in terms of human effort for completing the task. However, there was a significant difference in enjoyment across input modalities and a marginal difference in the robot's perceived ability to imitate.
A main experiment studied the detailed elements that might be used by a robot to identify a 'good' teacher. It was conducted in two sub-experiments: the first recorded the teachers' activities, and the second analysed how humans evaluate the perception of engagement when assessing another human teaching a robot. The results suggested that when evaluating engagement in human teaching of a robot, human evaluators look for some of the same immediacy cues that occur in human-human interaction.
|
99 |
On Enhancing Myoelectric Interfaces by Exploiting Motor Learning and Flexible Muscle Synergies. January 2015 (has links)
abstract: Myoelectric control is filled with potential to significantly change human-robot interaction. Humans desire compliant robots to safely interact in dynamic environments associated with daily activities. As surface electromyography non-invasively measures limb motion intent and correlates with joint stiffness during co-contractions, it has been identified as a candidate for naturally controlling such robots. However, state-of-the-art myoelectric interfaces have struggled to achieve both enhanced functionality and long-term reliability. As demands in myoelectric interfaces trend toward simultaneous and proportional control of compliant robots, robust processing of multi-muscle coordinations, or synergies, plays a larger role in the success of the control scheme. This dissertation presents a framework enhancing the utility of myoelectric interfaces by exploiting motor skill learning and flexible muscle synergies for reliable long-term simultaneous and proportional control of multifunctional compliant robots. The interface is learned as a new motor skill specific to the controller, providing long-term performance enhancements without requiring any retraining or recalibration of the system. Moreover, the framework offers control of both motion and stiffness simultaneously for intuitive and compliant human-robot interaction. The framework is validated through a series of experiments characterizing motor learning properties and demonstrating control capabilities not seen previously in the literature. The results validate the approach as a viable option to remove the trade-off between functionality and reliability that has hindered state-of-the-art myoelectric interfaces. Thus, this research contributes to the expansion and enhancement of myoelectric controlled applications beyond commonly perceived anthropomorphic and "intuitive control" constraints and into more advanced robotic systems designed for everyday tasks. / Dissertation/Thesis / Doctoral Dissertation Mechanical Engineering 2015
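The proportional-control front end that abstracts like this one presuppose can be sketched very simply (this is generic EMG preprocessing, not the dissertation's framework): the raw surface EMG is full-wave rectified and smoothed into an activation envelope, which then drives a joint velocity or stiffness command proportionally.

```python
def emg_envelope(samples, window=5):
    """Proportional myoelectric control sketch: full-wave rectify the
    raw EMG samples and smooth with a causal moving average to obtain
    an activation envelope (window length is an illustrative choice)."""
    rect = [abs(s) for s in samples]
    out = []
    for i in range(len(rect)):
        lo = max(0, i - window + 1)
        out.append(sum(rect[lo:i + 1]) / (i + 1 - lo))
    return out

# A burst of alternating-sign raw EMG becomes a steady envelope.
env = emg_envelope([1, -1, 1, -1], window=2)  # [1.0, 1.0, 1.0, 1.0]
```

Synergy-based schemes like the one described above go further, mapping envelopes from many muscles through a (learned) synergy matrix to simultaneous motion and stiffness commands, rather than driving one function per muscle.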
|
100 |
A High Level Language for Human Robot Interaction. January 2012 (has links)
abstract: While developing autonomous intelligent robots has been the goal of many research programs, a more practical application involving intelligent robots is the formation of teams consisting of both humans and robots. An example of such an application is search and rescue operations where robots commanded by humans are sent to environments too dangerous for humans. For such human-robot interaction, natural language is considered a good communication medium as it allows humans with less training about the robot's internal language to be able to command and interact with the robot. However, any natural language communication from the human needs to be translated to a formal language that the robot can understand. Similarly, before the robot can communicate (in natural language) with the human, it needs to formulate its communique in some formal language which then gets translated into natural language. In this paper, I develop a high level language for communication between humans and robots and demonstrate various aspects through a robotics simulation. These language constructs borrow some ideas from action execution languages and are grounded with respect to simulated human-robot interaction transcripts. / Dissertation/Thesis / M.S. Computer Science 2012
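The translation step described above, from a natural language command to a formal term the robot can execute, can be sketched with a toy keyword grammar. Everything here is an illustrative assumption (the action names, the grammar, and the output syntax are invented, not the thesis's language constructs):

```python
# Hypothetical mapping from English verbs to formal action names.
ACTIONS = {"go": "goto", "pick": "pickup", "drop": "putdown"}

def translate(utterance):
    """Translate e.g. 'go to the kitchen' into the formal term
    'goto(kitchen)'; return None if no known action verb is found."""
    words = utterance.lower().replace(",", "").split()
    verb = next((ACTIONS[w] for w in words if w in ACTIONS), None)
    if verb is None:
        return None
    target = words[-1]  # naive: last word is the argument
    return f"{verb}({target})"

cmd = translate("Go to the kitchen")  # "goto(kitchen)"
```

A practical high-level language would of course need grounded argument resolution and support for the robot's replies in the reverse direction, as the abstract notes; this sketch only shows the forward human-to-robot direction.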
|