
Teaching robots social autonomy from in situ human supervision

Senft, Emmanuel. January 2018.
Traditionally, the behaviour of social robots has been hand-programmed. Increasingly, however, robots are allowed to learn their behaviour, to some extent, from examples or through trial and error. This not only removes the need for explicit programming, but also allows the robot to adapt to circumstances that were not foreseen at programming time. One such case is when the user wants to tailor or fully specify the robot's behaviour: the engineer often has limited knowledge of what the user wants or what the deployment circumstances specifically require, whereas the user does know what is expected from the robot. Consequently, the social robot should be equipped with a mechanism to learn from its user. This work explores how a social robot can learn to interact meaningfully with people, efficiently and safely, by learning from the supervision of a human teacher in control of the robot's behaviour. To this end we propose a new machine learning framework called Supervised Progressively Autonomous Robot Competencies (SPARC). SPARC enables non-technical users to control and teach a robot, and we evaluate its effectiveness in Human-Robot Interaction (HRI). The core idea is that the user initially teleoperates the robot while an algorithm associates actions to states and gradually learns. Over time, the robot takes over control from the user, while still giving the user oversight of its behaviour by ensuring that every action executed by the robot has been actively or passively approved by the user. This is particularly important in HRI: interacting with people, and especially vulnerable users, is a complex and multidimensional problem, and any error by the robot may have negative consequences for the people involved in the interaction.
Through the development and evaluation of SPARC, this work contributes to both HRI and Interactive Machine Learning, especially to the questions of how autonomous agents, such as social robots, can learn from people and how this specific teacher-robot interaction impacts the learning process. We showed that a supervised robot learning from its user can reduce that person's workload, and that giving the user the opportunity to control the robot's behaviour substantially improves the teaching process. Finally, this work demonstrated that a robot supervised by a user can learn rich social behaviours in the real world, in a large, multidimensional, multimodal and sensitive environment: the robot quickly learned (25 interactions over 4 sessions, averaging 1.9 minutes each) to tutor children in an educational game, achieving behaviours and educational outcomes similar to those of a robot fully controlled by the user, with both providing a 10 to 30% improvement in game metrics compared to a passive robot.
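The supervision loop described above can be sketched minimally. The class and method names below are illustrative assumptions, not the thesis's implementation; the key property preserved is that every executed action has been actively or passively approved by the user.

```python
# A minimal sketch of a SPARC-style supervision cycle, assuming a discrete
# state/action space. All names here are illustrative, not from the thesis.
class SparcController:
    def __init__(self):
        self.policy = {}  # learned state -> action mapping

    def propose(self, state):
        """Suggest an action for the current state, if one has been learned."""
        return self.policy.get(state)

    def step(self, state, user_action=None, cancelled=False):
        """One supervision cycle.

        The robot proposes an action; the user may override it (active
        teaching), cancel it, or let it pass unchallenged (passive
        approval). Every executed action is therefore user-approved.
        """
        suggestion = self.propose(state)
        if user_action is not None:           # user teleoperates / corrects
            self.policy[state] = user_action  # learn from the demonstration
            return user_action
        if suggestion is not None and not cancelled:
            return suggestion                 # passive approval: execute
        return None                           # nothing approved, no action
```

Early in a session the user supplies every action; as the policy fills in, the robot's suggestions increasingly pass through unchallenged, which is the "progressively autonomous" part of the framework.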

Affective Motivational Collaboration Theory

Shayganfar, Mohammad. 25 January 2017.
Existing computational theories of collaboration explain some of the important concepts underlying collaboration, e.g., the collaborators' commitments and communication. However, the underlying processes required to dynamically maintain the elements of the collaboration structure are largely unexplained. Our main insight is that in many collaborative situations, acknowledging or ignoring a collaborator's affective state can facilitate or impede the progress of the collaboration. This implies that collaborative agents need to employ affect-related processes that (1) use the collaboration structure to evaluate the status of the collaboration, and (2) influence the collaboration structure when required. This thesis develops a new affect-driven computational framework to achieve these objectives and thus empower agents to be better collaborators. The contributions of this thesis are: (1) Affective Motivational Collaboration (AMC) theory, which incorporates appraisal processes into SharedPlans theory; (2) new computational appraisal algorithms based on the collaboration structure; (3) algorithms, such as goal management, that use the output of appraisal to maintain collaboration structures; (4) the implementation of a computational system based on AMC theory; and (5) an evaluation of AMC theory via two user studies, to (a) validate our appraisal algorithms, and (b) investigate the overall functionality of our framework within an end-to-end system with a human and a robot.
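To make the idea of "appraisal based on collaboration structure" concrete, here is a deliberately tiny sketch: an event is appraised against a set of shared and private goals standing in for the collaboration structure. The predicate names and the reduction of appraisal to two booleans are assumptions for illustration; the thesis's appraisal algorithms operate on a full SharedPlans structure.

```python
# An illustrative appraisal step in the spirit of AMC theory: an event is
# appraised against the collaboration structure, represented here only by
# sets of goals. The two appraisal variables are a simplification.
def appraise(event, shared_goals, own_goals):
    """Return minimal appraisal variables for an observed event."""
    relevance = event in shared_goals or event in own_goals
    desirability = event in shared_goals  # advances the shared plan
    return {"relevant": relevance, "desirable": desirability}
```

A goal-management process could then consume these variables, e.g. promoting a repair goal when a relevant but undesirable event occurs.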

Recognizing Engagement Behaviors in Human-Robot Interaction

Ponsler, Brett. 17 January 2011.
Based on an analysis of human-human interactions, we developed an initial model of engagement for human-robot interaction built around the concept of connection events: directed gaze, mutual facial gaze, conversational adjacency pairs, and backchannels. We implemented the model in the open-source Robot Operating System and conducted a human-robot interaction experiment to evaluate it.
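A toy classifier for the gaze-based connection events in such a model might look like the following. The event names follow the abstract; the frame representation (a string naming each party's gaze target) is an assumption, and the real system works from perception streams in ROS.

```python
# A toy frame-level classifier for two of the connection events in the
# engagement model: mutual facial gaze and directed (shared) gaze.
# The gaze-target representation is an assumption for illustration.
def connection_event(human_gaze, robot_gaze):
    """Classify one frame of gaze targets into a connection event."""
    if human_gaze == "robot_face" and robot_gaze == "human_face":
        return "mutual facial gaze"
    if human_gaze == robot_gaze:       # both attend to the same object
        return "directed gaze"
    return None                        # no gaze-based connection event
```

Adjacency pairs and backchannels would need dialogue timing rather than gaze targets, so they are omitted from this sketch.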

On the Sociability of a Game-Playing Agent: A Software Framework and Empirical Study

Behrooz, Morteza. 10 April 2014.
The social element of playing games is what makes us play together, enjoying more than just what the game itself has to offer. There are millions of games with different rules and goals, played by people of many cultures and various ages; yet this social element remains crucial across them all. Nowadays, the role of social robots and virtual agents is rapidly expanding in daily activities and entertainment, and games are one such area. It therefore seems desirable for an agent to be able to play games socially, as opposed to simply having the computer make the moves in a game application. To achieve this goal, verbal and non-verbal communication should be driven by the game events and human input, to create a human-like social experience. Moreover, a better social interaction can be created if the agent can change its game strategies in accordance with social criteria. To bring sociability to the gaming experience across many different robots, virtual agents and games, we developed a generic software framework which generates social comments based on the gameplay semantics. We also conducted a user study, with this framework as a core component, involving the rummy card game and the checkers board game. In our analysis, we examined both subjective and objective measures of the effects of social gaze and comments in the gaming interactions. Participants' gaming experience proved to be significantly more social, human-like, enjoyable and adoptable when social behaviors were employed. Moreover, since facial expressions can be a strong indication of internal state, we counted participants' smiles during gameplay and observed that they smiled significantly more when social behaviors were involved than when they were not.
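The separation the framework relies on, game-specific code reducing raw state to abstract semantic events, and a generic layer mapping events to comments, can be sketched as follows. The event names, thresholds, and comment strings are invented for illustration; only the layering reflects the description above.

```python
# A sketch of semantics-driven comment generation: a game-specific function
# maps raw scores to abstract events, and a generic table maps events to
# social comments. All event names and phrasings are invented.
COMMENTS = {
    "big_lead": "You're way ahead of me!",
    "close_game": "This is getting tense.",
}

def social_comment(event):
    """Generic layer: turn an abstract game event into a comment."""
    return COMMENTS.get(event)

def game_event(my_score, opponent_score):
    """Game-specific layer: reduce raw state to an abstract event."""
    margin = opponent_score - my_score
    if margin > 20:
        return "big_lead"
    if abs(margin) <= 3:
        return "close_game"
    return None
```

Porting the framework to a new game then only requires a new `game_event`-style reducer, while the comment layer (and any robot or virtual-agent frontend) is reused.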

The impact of social expectation towards robots on human-robot interactions

Syrdal, Dag Sverre. January 2018.
This work is presented in defence of the thesis that it is possible to measure, in an explicit and succinct manner, the social expectations and perceptions that humans have of robots, and that these measures are related to how humans interact with, and evaluate, these robots. There are many ways of understanding how humans may respond to, or reason about, robots as social actors; the approach adopted within this body of work focuses on interaction-specific expectations rather than expectations regarding the true nature of the robot. These expectations were investigated using a questionnaire-based tool, the University of Hertfordshire Social Roles Questionnaire, which was developed as part of the work presented in this thesis and tested on a sample of 400 visitors to an exhibition in the Science Gallery in Dublin. This study suggested that responses to the questionnaire loaded on two main dimensions: one related to the degree of social equality the participants expected the interactions with the robots to have, and the other related to the degree of control they expected to exert upon the robots within the interaction. A single item, related to pet-like interactions, loaded on both and was considered a separate, third dimension. The questionnaire was deployed as part of a proxemics study, which found that the degree to which participants accepted particular proxemic behaviours was correlated with their initial social expectations of the robot: participants who expected the robot to be more of a social equal preferred the robot to approach from the front, while participants who viewed the robot more as a tool preferred it to approach from a less obtrusive angle. The questionnaire was also deployed in two long-term studies.
In the first study, which involved one interaction a week over a period of two months, participants' social expectations of the robots prior to the study impacted not only how they evaluated open-ended interactions with the robots throughout the two-month period, but also how they collaborated with the robots in task-oriented interactions. In the second study, participants interacted with the robots twice a week over a period of six weeks. This study replicated the findings of the first, in that initial expectations impacted evaluations of interactions throughout the long-term study. In addition, this study used the questionnaire to measure post-interaction perceptions of the robots in terms of social expectations. The results suggest that while initial social expectations of robots impact how participants evaluate the robots in terms of interactional outcomes, social perceptions of robots are more closely related to the social and affective experience of the interaction.
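As an illustration of how a two-dimension instrument of this kind is typically scored, here is a hypothetical sketch: items are assigned to a subscale and each subscale is averaged. The item names and subscale assignments are invented and are not the actual items of the UH Social Roles Questionnaire.

```python
# A hypothetical scoring sketch for a two-dimension questionnaire: each
# item loads on one subscale, and subscale scores are item means.
# Item names and assignments are invented for illustration.
def score(responses, equality_items, control_items):
    """responses: dict of item id -> Likert rating."""
    eq = sum(responses[i] for i in equality_items) / len(equality_items)
    ctrl = sum(responses[i] for i in control_items) / len(control_items)
    return {"social_equality": eq, "control": ctrl}
```

The pet-like item described above would be reported separately rather than folded into either mean, since it loaded on both dimensions.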

Environment based on emotion recognition for human-robot interaction

Ranieri, Caetano Mazzoni. 9 August 2016.
In computer science, the study of emotions has been driven by the construction of interactive environments, especially in the context of mobile devices. Research in human-robot interaction has explored emotions as a means of creating natural interaction experiences with social robots. One aspect to be investigated is that of practical approaches in which changes in the user's inferred emotional state produce changes in the personality of an artificial system. This work proposes an environment for emotion-based human-robot interaction on the Android platform, in which emotions are recognized through the analysis of facial expressions. The system consists of a virtual agent embedded in an application, which uses information from an emotion recognizer to adapt its interaction strategy, alternating between two pre-defined discrete paradigms. In the experiments performed, the proposed approach tended to produce more empathy than a control condition; however, this result was observed only in sufficiently long interactions.
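The core adaptation step, picking one of two discrete interaction paradigms from the recognized emotion, can be sketched as below. The emotion labels and paradigm names are assumptions; the thesis's recognizer and paradigms are not specified here.

```python
# A minimal sketch of the strategy switch described above: the agent
# selects one of two pre-defined interaction paradigms based on the
# recognized emotion. Labels and paradigm names are assumptions.
NEGATIVE = {"sad", "angry", "fearful"}

def choose_paradigm(recognized_emotion):
    """Alternate between two discrete interaction strategies."""
    if recognized_emotion in NEGATIVE:
        return "supportive"   # e.g. slower pace, encouraging feedback
    return "playful"          # default for neutral/positive states
```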

Error-related potentials for adaptive decoding and volitional control

Salazar Gómez, Andrés Felipe. 10 July 2017.
Locked-in syndrome (LIS) is a condition characterized by total or near-total paralysis with preserved cognitive and somatosensory function. For the locked-in, brain-machine interfaces (BMI) provide a level of restored communication and interaction with the world, though this technology has not reached its fullest potential. Several streams of research explore improving BMI performance, but very little attention has been given to the paradigms implemented and the resulting constraints imposed on the users. Learning new mental tasks, constant use of external stimuli, and high attentional and cognitive processing loads are common demands imposed by BMI, and these paradigm constraints negatively affect BMI performance for locked-in patients. In an effort to develop simpler and more reliable BMI for those suffering from LIS, this dissertation explores using error-related potentials, the neural correlates of error awareness, as an access pathway for adaptive decoding and direct volitional control. In the first part of this thesis we characterize error-related local field potentials (eLFP) and implement a real-time decoder error detection (DED) system using eLFP while non-human primates controlled a saccade BMI. Our results show specific traits in the eLFP that bridge current knowledge of non-BMI evoked error-related potentials with error potentials evoked during BMI control. Moreover, we successfully perform real-time DED via, to our knowledge, the first real-time LFP-based DED system integrated into an invasive BMI, demonstrating that error-based adaptive decoding can become a standard feature in BMI design. In the second part of this thesis, we focus on employing electroencephalography error-related potentials (ErrP) for direct volitional control. These signals were employed as an indicator of the user's intentions in a closed-loop binary-choice robot reaching task.
Although this approach is technically challenging, our results demonstrate that ErrP can be used for direct control via binary selection and, given the appropriate levels of task engagement and agency, single-trial closed-loop ErrP decoding is possible. Taken together, this work contributes to a deeper understanding of error-related potentials evoked during BMI control and opens new avenues of research for employing ErrP as a direct control signal for BMI. For the locked-in community, these advancements could foster the development of real-time intuitive brain-machine control.
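The veto logic at the heart of ErrP-based binary selection can be sketched as follows. The ErrP detector is mocked as a boolean here, since real single-trial EEG classification is the hard part and is outside the scope of this sketch; the function names are assumptions.

```python
# A sketch of ErrP-driven binary control: the system commits to one of two
# targets, and a detected error-related potential vetoes the choice and
# selects the alternative. The detector is mocked; a real system would
# classify a single-trial EEG epoch time-locked to the robot's movement.
def select_target(proposed, targets, errp_detected):
    """Return the final target under ErrP-based binary selection."""
    if errp_detected:                       # user's brain flagged an error
        other = [t for t in targets if t != proposed]
        return other[0]                     # switch to the alternative
    return proposed                         # no ErrP: proposal confirmed
```

The appeal of this paradigm for LIS users is visible even in the sketch: the user issues no explicit command and learns no new mental task; the control signal is the brain's spontaneous reaction to watching the robot act.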

Evaluation of VR-maneuvering of a firefighting robot

Segerström, Niklas. January 2019.
This thesis evaluates the suitability of a VR interface for maneuvering a teleoperated robot designed to assist firefighters. Head-mounted displays combined with teleoperated robots are becoming more popular, but the question is whether the technology can be adapted to fit the needs and requirements of firefighters: it will be used in high-stress situations without a large margin for error. Focus-group interviews and user tests, together with an analysis of current research, were performed to assess how suitable VR interfaces are. To implement a VR interface effectively, the developers of the robot need a clear user profile, and the designers should draw on the expert knowledge of the firefighters for best results. Overall, the implementation will have a positive effect on the users' situational awareness, and the robot will become easier to maneuver, provided that the developers use hardware that meets the demands of VR glasses.

Understanding Humans to Better Understand Robots in a Joint-Task Environment: The Study of Surprise and Trust in Human-Machine Physical Coordination

January 2019.
Human-robot interaction has expanded immensely within dynamic environments. The goals of human-robot interaction are to increase productivity, efficiency and safety. For the integration of human-robot interaction to be seamless and effective, humans must be willing to trust the capabilities of assistive robots. A major priority for human-robot interaction research should therefore be to understand how human dyads have historically been effective within joint-task settings, so that the same goals can be met in human-robot settings. The aim of the present study was to examine human dyads and the effects of an unexpected interruption. Participants' interpersonal and individual levels of trust were studied in order to draw appropriate conclusions. Seventeen undergraduate- and graduate-level dyads were recruited from Arizona State University and assigned to either a surprise condition or a baseline condition. Participants individually completed two surveys measuring dispositional and individual levels of trust. The findings showed that participants' levels of interpersonal trust were average. Surprisingly, participants in the surprise condition afterwards showed moderate to high levels of dyad trust, indicating that they became more reliant on their partners when interrupted by a surprising event. Future studies will apply this knowledge to human-robot interaction, in order to mimic the seamless team interaction shown in historically effective human dyads. Masters Thesis, Engineering, 2019.

Estimating Short-Term Human Intent for Physical Human-Robot Co-Manipulation

Townsend, Eric Christopher. 1 April 2017.
Robots are increasingly becoming safer and more capable. In the past, the main applications for robots have been in manufacturing, where they perform repetitive, highly accurate tasks behind physical barriers that separate them from people, and in space exploration, where people are not around. Thanks to improvements in sensors, algorithms, and design, robots are beginning to be used in other applications such as materials handling, healthcare, and agriculture, and will one day be ubiquitous. For this to be possible, they will need to function safely in unmodelled and dynamic environments, especially when working in a space shared with people. We want robots to interact with people in a way that is helpful and intuitive, which requires that robots both act predictably and be able to predict short-term human intent. We create a model for predicting short-term human intent in a collaborative furniture-carrying task, which a robot could use to be a more responsive and intuitive teammate. For robots to perform collaborative manipulation tasks with people naturally and efficiently, understanding and predicting human intent is necessary. We completed an exploratory study recording motion and force for 21 human dyads moving an object in tandem across a variety of tasks, to better understand how they move and how their movement can be predicted. Using the previous 0.75 seconds of data, human intent can be predicted for the next 0.25 seconds, and this prediction can then be used with a robot in real applications. We also show that force data is not required to predict human intent: past motion alone, processed in real time, suffices to predict short-term human intent. We demonstrate this with human-human dyads and a human-robot dyad.
Finally, we anticipate that soft robots will be common in human-robot interaction, and we present work on controlling soft, pneumatically-actuated, inflatable robots. These soft robots have less inertia than traditional robots but a high power density, which allows them to operate in close proximity to people; they can, however, be difficult to control. We developed a neural-network model for controlling our soft robot. In sum, we have shown that we can predict human intent in a human-robot dyad, an important goal in physical human-robot interaction that will allow robots to co-manipulate objects with humans in an intelligent way.
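The windowed predict-ahead idea (0.75 s of past motion predicting 0.25 s of future motion) can be illustrated with a deliberately simple constant-velocity stand-in. The thesis's predictor is richer; only the windowing and horizon are taken from the description above, and the function name is an assumption.

```python
# A simple stand-in for the intent predictor described above: given
# positions sampled over the past 0.75 s, extrapolate 0.25 s ahead with a
# constant-velocity model. Illustrates the windowed predict-ahead idea
# only; the actual model in the thesis is more sophisticated.
def predict_position(samples, dt, horizon=0.25):
    """samples: recent 1-D positions spaced dt seconds apart."""
    window = dt * (len(samples) - 1)           # length of the past window
    velocity = (samples[-1] - samples[0]) / window
    return samples[-1] + velocity * horizon    # extrapolated position
```

Note that the sketch, like the finding reported above, uses motion alone: no force measurements enter the prediction.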
