111

Generation and use of a discrete robotic controls alphabet for high-level tasks

Gargas, Eugene Frank, III 06 April 2012 (has links)
The objective of this thesis is to generate a discrete alphabet of low-level robotic controllers rich enough to mimic the actions of high-level users using the robot for a specific task. This alphabet will be built through the analysis of various user data sets in a modified version of the motion description language, MDLe. It can then be used to mimic the actions of a future user attempting to perform the task by calling scaled versions of the controls in the alphabet, potentially reducing the amount of data required to be transmitted to the robot, with minimal error. In this thesis, theory is developed that will allow the construction of such an alphabet, as well as its use to mimic new actions. A MATLAB algorithm is then built to implement the theory. This is followed by an experiment in which various users drive a Khepera robot through different courses with a joystick. The thesis concludes by presenting results which suggest that a relatively small group of users can generate an alphabet capable of mimicking the actions of other users, while drastically reducing bandwidth.
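As an illustrative sketch of the alphabet idea (not the thesis's actual MDLe machinery), each letter can be modelled as a timed control atom, and a new user's motion segment mimicked by the closest atom stretched in time; all names and values here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Atom:
    """One letter of the control alphabet: a constant (v, w) command and a duration."""
    v: float  # forward velocity command
    w: float  # turning rate command
    T: float  # nominal duration in seconds

def mimic(segment_v, segment_w, segment_T, alphabet):
    """Pick the atom (plus a duration scale) that best matches an observed segment."""
    best, best_err = None, float("inf")
    for a in alphabet:
        err = (a.v - segment_v) ** 2 + (a.w - segment_w) ** 2
        if err < best_err:
            best, best_err = a, err
    scale = segment_T / best.T  # stretch or shrink the atom in time
    return best, scale

# A toy three-letter alphabet: drive straight, turn in place, arc.
alphabet = [Atom(0.5, 0.0, 1.0), Atom(0.0, 1.0, 1.0), Atom(0.5, 0.5, 1.0)]
atom, scale = mimic(0.45, 0.1, 2.0, alphabet)
```

Transmitting only an atom index and a scale per segment, instead of the raw joystick stream, is what would reduce the bandwidth required.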
112

Guided teaching interactions with robots: embodied queries and teaching heuristics

Cakmak, Maya 17 May 2012 (has links)
The vision of personal robot assistants continues to become more realistic with technological advances in robotics. The increase in the capabilities of robots presents boundless opportunities for them to perform useful tasks for humans. However, it is not feasible for engineers to program robots for all possible uses. Instead, we envision general-purpose robots that can be programmed by their end-users. Learning from Demonstration (LfD) is an approach that allows users to program new capabilities on a robot by demonstrating what is required of it. Although LfD has become an established area of robotics, many challenges remain in making it effective and intuitive for naive users. This thesis contributes to addressing these challenges in several ways. First, the problems that occur in teaching-learning interactions between humans and robots are characterized through human-subject experiments in three different domains. To address these problems, two mechanisms for guiding human teachers in their interactions are developed: embodied queries and teaching heuristics. Embodied queries, inspired by Active Learning queries, are questions asked by the robot to steer the teacher towards providing more informative demonstrations. They leverage the robot's embodiment to physically manipulate the environment and to communicate the question. Two technical contributions are made in developing embodied queries. The first is Active Keyframe-based LfD -- a framework for learning human-segmented skills in continuous action spaces and producing four different types of embodied queries to improve learned skills. The second is Intermittently-Active Learning, in which a learner makes queries selectively so as to create balanced interactions with the benefits of fully-active learning. Empirical findings from five experiments with human subjects are presented.
These identify interaction-related issues in generating embodied queries, characterize human question asking, and evaluate implementations of Intermittently-Active Learning and Active Keyframe-based LfD on the humanoid robot Simon. The second mechanism, teaching heuristics, is a set of instructions given to human teachers in order to elicit more informative demonstrations from them. Such instructions are devised based on an understanding of what constitutes an optimal teacher for a given learner, with techniques grounded in Algorithmic Teaching. The utility of teaching heuristics is empirically demonstrated through six human-subject experiments that involve teaching different concepts or tasks to a virtual agent, or teaching skills to Simon. With a diverse set of human-subject experiments, this thesis demonstrates the necessity of guiding humans in teaching interactions with robots, and verifies the utility of the two proposed mechanisms in improving sample efficiency and final performance while enhancing the user interaction.
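The contrast between fully-active learning and the Intermittently-Active Learning described above can be sketched minimally: the learner queries only when its uncertainty is high, rather than after every demonstration. The uncertainty numbers and threshold below are hypothetical:

```python
def intermittent_queries(uncertainties, threshold):
    """Return the indices of demonstrations after which the learner asks a query.
    A fully-active learner would query at every index; a passive learner at none.
    Querying only above an uncertainty threshold balances the interaction."""
    return [i for i, u in enumerate(uncertainties) if u > threshold]

# Uncertainty typically drops as demonstrations accumulate, with occasional spikes.
queries = intermittent_queries([0.9, 0.7, 0.4, 0.8, 0.2], threshold=0.5)
```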
113

Determining the Benefit of Human Input in Human-in-the-Loop Robotic Systems

Bringes, Christine Elizabeth 01 January 2013 (has links)
This work analyzes human-in-the-loop robotic systems to determine where human input can be most beneficial to a collaborative task. This is accomplished by implementing a pick-and-place task using a human-in-the-loop robotic system and determining which segments of the task, when replaced by human guidance, provide the most improvement to overall task performance and require the least cognitive effort. The first experiment entails implementing a pick-and-place task on a commercial robotic arm. Initially, we look at a pick-and-place task segmented into two main areas: a coarse approach towards a goal object and a fine pick motion. For the fine picking phase, we look at the importance of user guidance in terms of the position and orientation of the end effector. Results from this initial experiment show that the most successful strategy for our human-in-the-loop system is the one in which the human specifies a general region for grasping and the robotic system completes the remaining elements of the task. We extend this study with a second experiment, utilizing a more complex robotic system and pick-and-place task to further analyze human impact in a human-in-the-loop system in a more realistic setting. In this experiment, we use a robotic system that utilizes an Xbox Kinect as a vision sensor, a more cluttered environment, and a pick-and-place task that we segment in a way similar to the first experiment. Results from the second experiment indicate that allowing the user to make fine-tuned adjustments to the position and orientation of the robotic hand can improve task success in high-noise situations in which the autonomous robotic system might otherwise fail. The experimental setups and procedures used in this thesis can be generalized and used to guide similar analyses of human impact in other human-in-the-loop systems performing other tasks.
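The segmentation strategy above, deciding which task segments the human handles and which the autonomy handles, can be sketched as a simple allocation table. The segment names are illustrative, not the thesis's exact labels:

```python
# Hypothetical segmentation of a pick-and-place task, mirroring the
# coarse-approach / fine-motion split described in the abstract.
SEGMENTS = ["coarse_approach", "fine_position", "fine_orientation"]

def allocate(human_segments):
    """Build a task plan assigning each segment to the human or to autonomy."""
    return {s: ("human" if s in human_segments else "robot") for s in SEGMENTS}

# The most successful strategy reported: human gives a general grasp region,
# the robot completes the rest.
plan = allocate({"fine_position"})
```

Sweeping over every subset of `SEGMENTS` and measuring task success per plan is one way to frame the comparison the experiments perform.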
114

Learning from human-generated reward

Knox, William Bradley 15 February 2013 (has links)
Robots and other computational agents are increasingly becoming part of our daily lives. They will need to be able to learn to perform new tasks, adapt to novel situations, and understand what is wanted by their human users, most of whom will not have programming skills. To achieve these ends, agents must learn from humans using methods of communication that are naturally accessible to everyone. This thesis presents and formalizes interactive shaping, one such teaching method, where agents learn from real-valued reward signals that are generated by a human trainer. In interactive shaping, a human trainer observes an agent behaving in a task environment and delivers feedback signals. These signals are mapped to numeric values, which are used by the agent to specify correct behavior. A solution to the problem of interactive shaping maps human reward to some objective such that maximizing that objective generally leads to the behavior that the trainer desires. Interactive shaping addresses the aforementioned needs of real-world agents. This teaching method allows human users to quickly teach agents the specific behaviors that they desire. Further, humans can shape agents without needing programming skills or even detailed knowledge of how to perform the task themselves. In contrast, algorithms that learn autonomously from only a pre-programmed evaluative signal often learn slowly, which is unacceptable for some real-world tasks with real-world costs. These autonomous algorithms additionally have an inflexibly defined set of optimal behaviors, changeable only through additional programming. Through interactive shaping, human users can (1) specify and teach desired behavior and (2) share task knowledge when correct behavior is already indirectly specified by an objective function. 
Additionally, computational agents that can be taught interactively by humans provide a unique opportunity to study how humans teach in a highly controlled setting, in which the computer agent's behavior is parametrized. This thesis answers the following question: how and to what extent can agents harness the information contained in human-generated signals of reward to learn sequential decision-making tasks? The contributions of this thesis begin with an operational definition of the problem of interactive shaping. Next, I introduce the TAMER framework, one solution to the problem of interactive shaping, and describe and analyze algorithmic implementations of the framework within multiple domains. This thesis also proposes and empirically examines algorithms for learning from both human reward and a pre-programmed reward function within an MDP, demonstrating two techniques that consistently outperform learning from either feedback signal alone. Subsequently, the thesis shifts its focus from the agent to the trainer, describing two psychological studies in which the trainer is manipulated by either changing their perceived role or by having the agent intentionally misbehave at specific times; we examine the effect of these manipulations on trainer behavior and the agent's learned task performance. Lastly, I return to the problem of interactive shaping, for which we examine a space of mappings from human reward to objective functions, where mappings differ by how much the agent discounts reward it expects to receive in the future. Through this investigation, a deep relationship is identified between discounting, the level of positivity in human reward, and training success. Specific constraints of human reward are identified (i.e., the "positive circuits" problem), as are strategies for overcoming these constraints, pointing towards interactive shaping methods that are more effective than the already successful TAMER framework.
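The interactive-shaping loop described above can be sketched with a tabular stand-in for the learned human-reward model. The real TAMER framework uses function approximation and credit assignment over delayed signals; this toy version only averages immediate reward per state-action pair and acts greedily on the learned model:

```python
from collections import defaultdict

class HumanRewardModel:
    """Toy sketch of the core idea: learn a model H(s, a) of the trainer's
    reward and act greedily with respect to it (no discounted return)."""
    def __init__(self):
        self.total = defaultdict(float)
        self.count = defaultdict(int)

    def update(self, state, action, human_reward):
        """Record one human feedback signal for a state-action pair."""
        self.total[(state, action)] += human_reward
        self.count[(state, action)] += 1

    def predict(self, state, action):
        """Average observed human reward; 0.0 for unseen pairs."""
        n = self.count[(state, action)]
        return self.total[(state, action)] / n if n else 0.0

    def greedy(self, state, actions):
        """Choose the action the trainer is predicted to reward most."""
        return max(actions, key=lambda a: self.predict(state, a))

m = HumanRewardModel()
m.update("s0", "left", -1.0)   # trainer disapproved
m.update("s0", "right", 1.0)   # trainer approved
choice = m.greedy("s0", ["left", "right"])
```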
115

Design and Evaluation of Affective Serious Games for Emotion Regulation Training

Jerčić, Petar January 2015 (has links)
Emotions are thought to be one of the key factors that critically influence human decision-making. Emotion regulation can help to mitigate emotion-related decision biases and eventually lead to better decision performance. Serious games have emerged as a new angle on learning emotion regulation, in which meaningful biofeedback communicates the player's emotional state. Games are a series of interesting choices, and the design of those choices can support an educational platform for learning emotion regulation; digital serious games benefit in particular because those choices can be informed in real time by the player's physiological signals of emotion. This thesis explores design and evaluation methods for creating serious games in which emotion regulation can be learned and practiced. The design of a digital serious game using physiological measures of emotions was investigated and evaluated. Furthermore, the thesis investigates emotions and the effect of emotion regulation on decision performance in digital serious games. Its scope was limited to digital serious games for emotion regulation training that use psychophysiological methods to communicate the player's affective information. Using these psychophysiological methods in the design and evaluation of digital serious games, emotions and their underlying neural mechanisms have been explored. The effects of emotion regulation have been investigated by measuring and analyzing decision performance. The proposed metrics for designing and evaluating such affective serious games have been extensively evaluated. The research methods used in this thesis combine quantitative and qualitative aspects, with true experiments and evaluation research, respectively. A digital serious games approach to emotion regulation was investigated, in which the player's physiology of emotions informs the design of interactions where regulation of those emotions can be practiced.
The results suggested that two different emotion regulation strategies, suppression and cognitive reappraisal, are optimal for different decision-task contexts. With careful design methods, valid serious games for training these different strategies could be produced. Moreover, using psychophysiological methods, the underlying neural mechanisms of emotion could be mapped. This could inform a digital serious game about the optimal level of arousal for a certain task, as evidence suggests that arousal is equally or more important than valence for decision-making. The results suggest that it is possible to design and develop digital serious game applications that provide a helpful learning environment in which decision makers can practice emotion regulation and subsequently improve their decision-making. If we assume that physiological arousal is more important than physiological valence for learning purposes, the results show that the digital serious games designed in this thesis elicit high physiological arousal, making them suitable for use as an educational platform.
116

Breaking the typecast: Revising roles for coordinating mixed teams

Long, Matthew T 01 June 2007 (has links)
Heterogeneous multi-agent systems are currently used in a wide variety of situations, including search and rescue, military applications, and off-world exploration; however, it is difficult to understand the actions of these systems or to naturalistically assign these mixed teams to tasks. These agents, which may be human, robot, or software, have different capabilities but need to coordinate effectively with humans in order to operate. The first and largest contributing factor to this challenge is processing, understanding, and representing elements of the natural world in a manner that can be utilized by artificial agents. A second contributing factor is that current abstractions and robot architectures are ill-suited to address this problem. This dissertation addresses the lack of a high-level abstraction for the naturalistic coordination of teams of heterogeneous robots, humans, and other agents through the development of roles. Roles are a fundamental concept of social science that may provide this necessary abstraction. Roles are not a new concept and have been used in a number of related areas. This work draws from these fields and constructs a coherent and usable model of roles for robotics. The research is focused on answering the following question: can the use of social roles enable the naturalistic coordinated operation of robots in a mixed setting? In addition to this primary question, related research includes defining the key concepts important to artificial systems, providing a mapping and implementation from these concepts to a usable robot framework, and identifying a set of robot-specific roles used for human-robot interaction. This research will benefit both the artificial intelligence agent and robotics communities. It makes a fundamental contribution to the multi-agent community because it extends and refines the role concept.
The application of roles in a principled and complete implementation is a novel contribution for both software and robotic agents. The creation of an open-source operational architecture that supports taskable robots is also a major contribution.
117

Lexical vagueness handling using fuzzy logic in human robot interaction

Guo, Xiao January 2011 (has links)
Lexical vagueness is a ubiquitous phenomenon in natural language. Most previous work in natural language processing (NLP) considers lexical ambiguity, rather than lexical vagueness, to be the main problem in natural language understanding. Lexical vagueness is usually regarded as a solution rather than a problem, since speakers often cannot provide precise information in conversation. However, lexical vagueness is clearly an obstacle in human-robot interaction (HRI), since robots are expected to understand their users' utterances precisely in order to provide reliable services. This research aims to develop novel lexical vagueness handling techniques that enable service robots to precisely understand their users' utterances and thus provide reliable services. A novel integrated system to handle lexical vagueness is proposed, based on an in-depth understanding of lexical ambiguity and lexical vagueness: why they exist, how they are presented, how they differ, and the mainstream techniques for handling each. The integrated system consists of two blocks: one for lexical ambiguity handling and one for lexical vagueness handling. The lexical ambiguity handling block first removes syntactic ambiguity and lexical ambiguity. The lexical vagueness handling block is then used to model and remove lexical vagueness. Experimental results show that robots endowed with the developed integrated system are able to understand their users' utterances and can therefore provide reliable services.
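One standard way to model a vague word in fuzzy logic is a trapezoidal membership function, which assigns degrees of truth between 0 and 1 instead of a crisp boundary. The fuzzy set for "near" below is a hypothetical example of the general technique, not taken from the thesis:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside [a, d], 1 on the core [b, c],
    linear ramps in between. A standard primitive in fuzzy logic."""
    if b <= x <= c:
        return 1.0
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical fuzzy set for the vague word "near" (distance in metres):
# fully "near" up to 0.5 m, definitely not "near" beyond 2 m.
def near(distance_m):
    return trapezoid(distance_m, 0.0, 0.0, 0.5, 2.0)

degree = near(1.0)  # a partial degree of "near-ness"
```

A robot could act on such a set by, for example, searching for objects in decreasing order of membership rather than demanding an exact distance from the user.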
118

Framing Human-Robot Task Communication as a Partially Observable Markov Decision Process

Woodward, Mark P. 10 August 2012 (has links)
As general-purpose robots become more capable, pre-programming of all tasks at the factory will become less practical. We would like non-technical human owners to be able to communicate, through interaction with their robot, the details of a new task; I call this interaction "task communication". During task communication the robot must infer the details of the task from unstructured human signals, and it must choose actions that facilitate this inference. In this dissertation I propose the use of a partially observable Markov decision process (POMDP) for representing the task communication problem: the unobservable task details and unobservable intentions of the human teacher are captured in the state, all signals from the human are represented as observations, and the cost function is chosen to penalize uncertainty. This dissertation presents the framework, works through an example of framing task communication as a POMDP, and presents results from a user experiment in which subjects communicated a task to a POMDP-controlled virtual robot and to a human-controlled virtual robot. The task communicated in the experiment consisted of a single object movement, and the communication was limited to binary approval signals from the teacher. This dissertation makes three contributions: 1) It frames human-robot task communication as a POMDP, a widely used framework. This enables the leveraging of techniques developed for other problems framed as POMDPs. 2) It provides an example of framing a task communication problem as a POMDP. 3) It validates the framework through results from a user experiment. The results suggest that the proposed POMDP framework produces robots that are robust to teacher error, that can accurately infer task details, and that are perceived to be intelligent.
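As a minimal sketch of the ingredients named above, a hypothesis state, binary approval observations, and an explicit teacher-error model, here is one Bayesian belief-update step over candidate task hypotheses. The labels and probabilities are illustrative, not the dissertation's actual model:

```python
def update_belief(belief, predicts_approval, approved, p_correct=0.9):
    """One Bayesian filter step over task hypotheses given a binary approval.
    predicts_approval(h) says whether hypothesis h expects the teacher to
    approve the robot's last action; the teacher gives the 'correct' signal
    only with probability p_correct, so teacher error is modelled explicitly,
    which is what makes the POMDP formulation robust to it."""
    unnorm = {}
    for h, p in belief.items():
        like = p_correct if predicts_approval(h) == approved else 1.0 - p_correct
        unnorm[h] = p * like
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

# Hypothetical example: three candidate target locations for one object move.
belief = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
# The robot moved the object toward A; only hypothesis A predicts approval.
belief = update_belief(belief, lambda h: h == "A", approved=True)
```

Choosing the next action to shrink this belief as fast as possible is the "cost function penalizing uncertainty" part of the framing.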
119

Virtual lead-through robot programming : Programming virtual robot by demonstration

Boberg, Arvid January 2015 (has links)
This report describes the development of an application that allows a user to program a robot in a virtual environment by means of hand motions and gestures. The application is inspired by robot lead-through programming, an easy, hands-on approach for programming robots; but instead of performing it online, which creates a loss in productivity, it also draws on the strength of offline programming, where the user operates in a virtual environment. The method thus saves cost and avoids disrupting the production environment. To convey hand gesture information into the application, which is implemented for RobotStudio, a Kinect sensor is used to enter the data into the virtual environment. Similar work has been performed before in which hand movements manipulate a physical robot's motion, but much less so for virtual robots. The results could simplify the process of programming robots and support the move towards Human-Robot Collaboration, a major focus of this work, as they allow people to interact and communicate with robots. The application was developed in the programming language C# and has two functions that interact with each other: one for the Kinect and its tracking, and one for installing the application in RobotStudio and feeding the computed data to the robot. The Kinect's functionality is utilized through three simple hand gestures to jog and create targets for the robot: open, closed, and "lasso". A prototype of this application was completed which, through motions, allowed the user to teach a virtual robot desired tasks by moving it to different positions and saving them with hand gestures. The prototype could be applied both to one-armed robots and to a two-armed robot such as ABB's YuMi.
The robot's orientation while running proved too complicated to develop and implement in time and became the application's main bottleneck; it remains one of several suggestions for further work on this project.
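The three hand gestures described above can be sketched as a small dispatch function. This is a hypothetical simplification in Python for illustration; the actual application is written in C# against the RobotStudio and Kinect APIs:

```python
def interpret(gesture, hand_pos, targets):
    """Map a recognized hand state to a robot-programming action.
    The three states mirror those named in the abstract: 'open' jogs the
    virtual robot toward the hand, 'closed' holds position, and 'lasso'
    saves the current position as a robot target."""
    if gesture == "open":
        return ("jog", hand_pos)
    if gesture == "lasso":
        targets.append(hand_pos)
        return ("save", hand_pos)
    return ("hold", None)

targets = []
interpret("open", (0.2, 0.1, 0.5), targets)          # jog toward the hand
action, pos = interpret("lasso", (0.2, 0.1, 0.5), targets)  # save a target
```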
120

Compliance Control of Robot Manipulator for Safe Physical Human Robot Interaction

Ahmed, Muhammad Rehan January 2011 (has links)
Inspiration from biological systems suggests that robots should demonstrate the same level of capabilities embedded in biological systems when performing safe and successful interaction with humans. The major challenge in physical human-robot interaction tasks in anthropic environments is the safe sharing of the robot's workspace, such that the robot will not cause harm or injury to the human under any operating condition. Embedding human-like adaptable compliance characteristics into robot manipulators can provide safe physical human-robot interaction in constrained motion tasks. In robotics, this property can be achieved by using active, passive, and semi-active compliant actuation devices. Traditional methods of active and passive compliance lead to complex control systems and complex mechanical designs. In this thesis we present a compliant robot manipulator system with a semi-active compliant device whose actuation mechanism is based on magnetorheological (MR) fluid. Human-like adaptable compliance is achieved by controlling the properties of the MR fluid inside the joint actuator. This method offers high operational accuracy, intrinsic safety, and high absorption of impacts. Safety is assured by the mechanism design rather than by the conventional approach based on advanced control. Control schemes for implementing adaptable compliance run in parallel with the robot motion control, which yields a much simpler interaction control strategy compared to other methods. Here we address two main issues: human-robot collision safety and robot motion performance. We present existing human-robot collision safety standards and evaluate the proposed actuation mechanism on the basis of static and dynamic collision tests. Static collision safety analysis is based on Yamada's safety criterion, and the adaptable compliance control scheme keeps the robot in the safe region of operation.
For the dynamic collision safety analysis, Yamada's impact force criterion and the head injury criterion are employed. Experimental results validate the effectiveness of our solution. In addition, the results with the head injury criterion showed the need to investigate human biomechanics in more detail in order to acquire adequate knowledge for estimating the injury severity index for robots interacting with humans. We analyzed robot motion performance in several physical human-robot interaction tasks. Three interaction scenarios are studied to simulate human-robot physical contact in direct and inadvertent contact situations. The respective control schemes for the joint actuators are designed and implemented with a much simplified adaptable compliance control scheme. A series of experimental tests in direct and inadvertent contact situations validates our solution of implementing human-like adaptable compliance during robot motion and proves safe interaction with humans in anthropic domains.
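The idea of adaptable compliance via a semi-active MR-fluid joint can be sketched as a damping law that softens as the measured interaction torque approaches a safety limit. All symbols and constants here are hypothetical illustrations of the principle, not the thesis's actual controller:

```python
def mr_damping_command(interaction_torque, tau_safe, d_stiff=5.0, d_soft=0.2):
    """Sketch of adaptable compliance with a semi-active MR-fluid joint:
    command high damping (d_stiff) for accurate free motion, but interpolate
    toward the soft setting (d_soft) as the measured interaction torque
    approaches the safety limit tau_safe, so an unexpected contact is
    absorbed by the fluid instead of resisted by the actuator."""
    ratio = min(abs(interaction_torque) / tau_safe, 1.0)
    return d_stiff - (d_stiff - d_soft) * ratio

free_motion = mr_damping_command(0.1, tau_safe=2.0)  # near-stiff: accurate tracking
collision = mr_damping_command(3.0, tau_safe=2.0)    # fully soft: compliant contact
```

Because the MR fluid only dissipates energy (it cannot inject it), the compliant behavior is intrinsically safe by mechanism design, which is the point the abstract makes.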
