71

Error-related potentials for adaptive decoding and volitional control

Salazar Gómez, Andrés Felipe 10 July 2017 (has links)
Locked-in syndrome (LIS) is a condition characterized by total or near-total paralysis with preserved cognitive and somatosensory function. For the locked-in, brain-machine interfaces (BMI) provide a level of restored communication and interaction with the world, though this technology has not reached its fullest potential. Several streams of research explore improving BMI performance but very little attention has been given to the paradigms implemented and the resulting constraints imposed on the users. Learning new mental tasks, constant use of external stimuli, and high attentional and cognitive processing loads are common demands imposed by BMI. These paradigm constraints negatively affect BMI performance by locked-in patients. In an effort to develop simpler and more reliable BMI for those suffering from LIS, this dissertation explores using error-related potentials, the neural correlates of error awareness, as an access pathway for adaptive decoding and direct volitional control. In the first part of this thesis we characterize error-related local field potentials (eLFP) and implement a real-time decoder error detection (DED) system using eLFP while non-human primates controlled a saccade BMI. Our results show specific traits in the eLFP that bridge current knowledge of non-BMI evoked error-related potentials with error-potentials evoked during BMI control. Moreover, we successfully perform real-time DED via, to our knowledge, the first real-time LFP-based DED system integrated into an invasive BMI, demonstrating that error-based adaptive decoding can become a standard feature in BMI design. In the second part of this thesis, we focus on employing electroencephalography error-related potentials (ErrP) for direct volitional control. These signals were employed as an indicator of the user’s intentions under a closed-loop binary-choice robot reaching task. Although this approach is technically challenging, our results demonstrate that ErrP can be used for direct control via binary selection and, given the appropriate levels of task engagement and agency, single-trial closed-loop ErrP decoding is possible. Taken together, this work contributes to a deeper understanding of error-related potentials evoked during BMI control and opens new avenues of research for employing ErrP as a direct control signal for BMI. For the locked-in community, these advancements could foster the development of real-time intuitive brain-machine control.
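How such an error-potential decoder might be set up can be illustrated with a minimal sketch (not the dissertation's actual pipeline): epoch the neural signal after feedback onset, reduce each single-trial epoch to a short feature vector, and train a linear classifier to separate error from correct trials. The sampling rate, window length, synthetic data and classifier choice below are all assumptions made for illustration.

```python
# Minimal sketch of single-trial error-potential detection (illustrative only,
# not the dissertation's pipeline). Error trials in the synthetic data carry an
# extra slow deflection standing in for an evoked error-related potential.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 250                                  # assumed sampling rate (Hz)
win = int(0.8 * fs)                       # 0-800 ms epoch after feedback onset
rng = np.random.default_rng(0)

n_trials = 200
labels = rng.integers(0, 2, n_trials)     # 1 = error trial, 0 = correct trial
t = np.arange(win) / fs
epochs = rng.normal(0.0, 1.0, (n_trials, win))
epochs[labels == 1] += 2.0 * np.exp(-((t - 0.35) ** 2) / 0.01)   # "ErrP" bump

# Downsample each epoch to 20 temporal bins and train a linear discriminant.
features = epochs.reshape(n_trials, 20, -1).mean(axis=2)
clf = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(clf, features, labels, cv=5).mean())

# Scoring a new epoch, as a closed-loop system would do after every trial.
clf.fit(features, labels)
new_epoch = epochs[:1].reshape(1, 20, -1).mean(axis=2)
print("error detected" if clf.predict(new_epoch)[0] == 1 else "trial accepted")
```

In an adaptive-decoding setting, a flagged error could trigger vetoing the last command or updating the movement decoder; the sketch only illustrates the single-trial detection step.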
72

Utvärdering av VR-manövrering av en brandrobot / Evaluation of VR-maneuvering of a firefighting robot

Segerström, Niklas January 2019 (has links)
This thesis evaluates the suitability of a VR interface for maneuvering a teleoperated robot designed to assist firefighters. Head-mounted displays combined with teleoperated robots are becoming more popular, but the question is whether the technology can be adapted to fit the needs and requirements of firefighters. The technology will be used in high-stress situations with little margin for error. Focus group interviews and user tests, together with an analysis of current research, were carried out to assess how suitable VR interfaces are. To implement a VR interface effectively, the developers of the robot need a clear user profile and should draw on the expert knowledge of the firefighters for best results. Overall, the implementation will have a positive effect on the users' situational awareness, and the robot will become easier to maneuver, provided that the developers use hardware that meets the demands of the VR glasses.
73

Understanding Humans to Better Understand Robots in a Joint-Task Environment: The Study of Surprise and Trust in Human-Machine Physical Coordination

January 2019 (has links)
abstract: Human-robot interaction has expanded immensely within dynamic environments. The goals of human-robot interaction are to increase productivity, efficiency and safety. For the integration of human-robot interaction to be seamless and effective, humans must be willing to trust the capabilities of assistive robots. A major priority for human-robot interaction should therefore be to understand how human dyads have historically been effective in joint-task settings, so that the same goals can be met in human-robot settings. The aim of the present study was to examine human dyads and the effects of an unexpected interruption. Interpersonal and individual levels of trust were studied in order to draw appropriate conclusions. Seventeen undergraduate and graduate dyads were recruited from Arizona State University. Participants were assigned to either a surprise condition or a baseline condition, and each participant individually took two surveys to capture dispositional and individual levels of trust. The findings showed that participants' levels of interpersonal trust were average. Surprisingly, participants in the surprise condition afterwards showed moderate to high levels of dyad trust, suggesting that participants became more reliant on their partners when interrupted by a surprising event. Future studies will apply this knowledge to human-robot interaction, in order to mimic the seamless team interaction shown in historically effective dyads, specifically human teams. / Dissertation/Thesis / Masters Thesis Engineering 2019
74

Estimating Short-Term Human Intent for Physical Human-Robot Co-Manipulation

Townsend, Eric Christopher 01 April 2017 (has links)
Robots are becoming increasingly safe and capable. In the past, the main applications for robots have been in manufacturing, where they perform repetitive, highly accurate tasks behind physical barriers that separate them from people. They have also been used in space exploration, where people are not around. Due to improvements in sensors, algorithms, and design, robots are beginning to be used in other applications like materials handling, healthcare, and agriculture, and will one day be ubiquitous. For this to be possible, they will need to function safely in unmodelled and dynamic environments, especially when working in a space shared with people. We want robots to interact with people in a way that is helpful and intuitive, which requires that the robots both act predictably and be able to predict short-term human intent. We create a model for predicting short-term human intent in a collaborative furniture-carrying task that a robot could use to be a more responsive and intuitive teammate. For robots to perform collaborative manipulation tasks with people naturally and efficiently, understanding and predicting human intent is necessary. We completed an exploratory study recording motion and force for 21 human dyads moving an object in tandem in a variety of tasks, to better understand how they move and how their movement can be predicted. Using the previous 0.75 seconds of data, the human intent can be predicted for the next 0.25 seconds, and this can then be used with a robot in real applications. We also show that force data is not required to predict human intent: the prediction works in real time, demonstrating that past motion alone can be used to predict short-term human intent. We show this with human-human dyads and a human-robot dyad. Finally, we anticipate that soft robots will be common in human-robot interaction, and we present work on controlling soft, pneumatically actuated, inflatable robots. These soft robots have less inertia than traditional robots but a high power density, which allows them to operate in proximity to people. They can, however, be difficult to control, so we developed a neural net model for controlling our soft robot. We have shown that we can predict human intent in a human-robot dyad, which is an important goal in physical human-robot interaction and will allow robots to co-manipulate objects with humans in an intelligent way.
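The windowed prediction described here (previous 0.75 s of motion mapped to the next 0.25 s) can be illustrated with a toy sketch; a plain least-squares mapping on a synthetic trace stands in for the thesis's actual model, and the 100 Hz sampling rate is an assumption.

```python
# Toy sketch of windowed motion prediction (previous 0.75 s -> next 0.25 s).
# A plain least-squares mapping stands in for the thesis's model; the 100 Hz
# sampling rate and the synthetic trajectory are assumptions.
import numpy as np

fs = 100                                  # assumed sampling rate (Hz)
past, future = int(0.75 * fs), int(0.25 * fs)

# Synthetic 1-D "object position" trace standing in for recorded motion data.
t = np.arange(0, 60, 1 / fs)
pos = np.sin(0.5 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)

# Build (past window -> future window) training pairs with a sliding window.
X, Y = [], []
for i in range(pos.size - past - future):
    X.append(pos[i:i + past])
    Y.append(pos[i + past:i + past + future])
X, Y = np.asarray(X), np.asarray(Y)

# Fit a linear predictor W mapping each past window to the next 0.25 s.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Predict the continuation of the most recent window.
recent = pos[-past:]
predicted_next = recent @ W               # shape (future,): next 0.25 s of motion
print(predicted_next[:5])
```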
75

Force and Motion Based Methods for Planar Human-Robot Co-manipulation of Extended Objects

Mielke, Erich Allen 01 April 2018 (has links)
As robots become more common operating in close proximity to people, new opportunities arise for physical human-robot interaction, such as co-manipulation of extended objects. Co-manipulation involves physical interaction between two partners where an object held by both is manipulated in tandem. There is a dearth of viable high-degree-of-freedom co-manipulation controllers, especially for extended objects, as well as a lack of information about how human-human teams perform in high-degree-of-freedom tasks. One method for creating co-manipulation controllers is to pattern them on human data. This thesis uses that technique by exploring a previously completed experimental study. The study involved human-human dyads in a leader-follower format performing co-manipulation tasks with an extended object in 6 degrees of freedom. Two important tasks performed in this experiment were lateral translation and planar rotation; this thesis focuses on these two tasks because they represent planar motion, whereas most previous control methods handle only 1 or 2 degrees of freedom. The study provided information about how human-human dyads perform planar tasks. Most notably, planar tasks generally adhere to minimum-jerk trajectories and do not minimize interaction forces between users. The study also helped solve the translation-versus-rotation problem: in the experimental data, torque patterns were discovered at the beginning of the trial that defined the intent to translate or rotate. From these patterns, a new method of planar co-manipulation control was developed, called Extended Variable Impedance Control. This is a novel 3-degree-of-freedom method that is applicable to a variety of planar co-manipulation scenarios. Additionally, the data was fed through a Recursive Neural Network that takes in a series of motion data and predicts the next step in the series. The predicted data was used as an intent estimate in another novel 3-degree-of-freedom method called Neural Network Prediction Control. This method is capable of generalizing to 6 degrees of freedom but is limited here for comparison with the other method. An experiment involving 16 participants was developed to test the capabilities of both controllers for planar tasks, using a dual-manipulator robot with an omnidirectional base. The results show that both the Neural Network Prediction Control and Extended Variable Impedance Control controllers performed comparably to blindfolded human-human dyads. A survey given to participants showed they preferred the Extended Variable Impedance Control. These two unique controllers are the major results of this work.
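The minimum-jerk trajectories mentioned above have a standard closed-form profile, x(t) = x0 + (xf - x0)(10τ^3 - 15τ^4 + 6τ^5) with τ = t/T. The short sketch below generates such a reference motion; it is a textbook result used for illustration, not code from the thesis, and the start, goal and duration are arbitrary example values.

```python
# Standard closed-form minimum-jerk profile between two points (a well-known
# result the abstract refers to, not code from the thesis).
import numpy as np

def minimum_jerk(x0, xf, T, n=100):
    """Positions along a minimum-jerk trajectory from x0 to xf over T seconds."""
    t = np.linspace(0.0, T, n)
    tau = t / T                                        # normalized time
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5         # smooth 0 -> 1 blend
    return t, x0 + (xf - x0) * s[:, None]              # broadcast over dimensions

# Example: a 1 m lateral translation of the shared object over 2 seconds.
t, traj = minimum_jerk(np.array([0.0, 0.0]), np.array([1.0, 0.0]), T=2.0)
print(traj[0], traj[-1])    # starts at [0, 0], ends at [1, 0]
```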
76

Desenvolvimento de técnicas de acompanhamento para interação entre humano e uma equipe de robôs / Development of following techniques for interaction of human and multi-robot teams

Batista, Murillo Rehder 17 December 2018 (has links)
Robotics has advanced significantly in recent decades, reaching the point of commercial products such as vacuum-cleaning robots and quadcopters. With the ever greater integration of robots into our society, it becomes necessary to develop methods of interaction between people and robots to manage their coexistence and joint work. Some works in the literature consider the socially acceptable positioning of a robot accompanying an individual, but they do not consider the case of a robot team navigating with a person while taking proxemics into account. In this thesis, several strategies are proposed for a social robot team to accompany a human; they are bioinspired in that they are based on techniques from collective intelligence and social behavior. Simulated experiments are presented to compare the proposed techniques in several scenarios, highlighting the advantages and disadvantages of each. Real-world experiments allowed an analysis of people's perception of interacting with one or more robots, showing that no difference in the individuals' impressions was found. / The field of Robotics has been advancing significantly over the last few decades, presenting commercial products like vacuum-cleaning robots and autonomous quadcopter drones. With the increasing presence of robots in our routine, it is necessary to develop human-robot interaction schemes to manage their relationship. Works that deal with a single robot performing a socially acceptable human-following behavior are available, but they do not consider cases where a robot team walks with a human. In this thesis, a solution for social navigation between a human and a robot team is presented, combining socially aware human-following techniques with a multirobot escorting method and generating four bioinspired navigation strategies based on collective intelligence and social behavior. Experiments comparing these four strategies in a simulated environment across various scenarios highlighted advantages and disadvantages of each strategy. Moreover, an experiment with real robots was conducted to investigate the difference in people's perception when interacting with one or three robots, and no difference was found.
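As an illustration of one generic ingredient of socially aware escorting (not the specific strategies proposed in the thesis), the sketch below places a small robot team on an arc behind a walking person at a proxemic "social" distance; the arc layout and all numbers are assumptions.

```python
# Toy sketch of one ingredient of socially aware escorting: goal positions for a
# robot team on an arc behind a walking person, at roughly Hall's social distance
# (~1.2-3.6 m). Layout and numbers are illustrative, not the thesis's strategies.
import numpy as np

def escort_goals(person_xy, heading, n_robots=3, dist=1.5, spread=np.pi / 2):
    """Goal positions on an arc centered directly behind the person's heading."""
    base = heading + np.pi                              # direction behind the person
    angles = base + np.linspace(-spread / 2, spread / 2, n_robots)
    return person_xy + dist * np.stack([np.cos(angles), np.sin(angles)], axis=1)

# Example: person at the origin walking along +x; three robots trail behind.
goals = escort_goals(np.array([0.0, 0.0]), heading=0.0)
print(np.round(goals, 2))
```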
77

Designing and Evaluating Human-Robot Communication : Informing Design through Analysis of User Interaction

Green, Anders January 2009 (has links)
This thesis explores the design and evaluation of human-robot communication for service robots that use natural language to interact with people. The research is centred around three themes: the design of human-robot communication; the evaluation of miscommunication in human-robot communication; and the analysis of spatial influence as an empirical phenomenon and design element. The method has been to put users in situations of future use by means of hi-fi simulation. Several scenarios were enacted using the Wizard-of-Oz technique: a robot intended for fetch-and-carry services in an office environment, and a robot acting in what can be characterised as a home tour, where the user teaches objects and locations to the robot. Using these scenarios, a corpus of human-robot communication was developed and analysed. The analysis of the communicative behaviours led to the following observations: the users communicate with the robot in order to solve a main task goal, and in order to fulfil this goal they take over service actions that the robot is incapable of performing. Once users have understood that the robot is capable of performing actions, they explore its capabilities. During the interactions the users continuously monitor the behaviour of the robot, attempting to elicit feedback or to draw its perceptual attention to their communicative behaviour. Information related to the communicative status of the robot seems to have a fundamental impact on the quality of interaction: large portions of the miscommunication that occurs in the analysed scenarios can be attributed to ill-timed, lacking or irrelevant feedback from the robot. The analysis of the corpus data also showed that the users' spatial behaviour seemed to be influenced by the robot's communicative behaviour, embodiment and positioning. This means that in robot design we can consider using strategies for spatial prompting to influence the users' spatial behaviour. The understanding of the importance of continuously providing information about the communicative status of the robot to its users leaves us with an intriguing design challenge for the future: when designing communication for a service robot we need to design communication for the robot's work tasks and, simultaneously, provide information based on the system's communicative status to continuously make users aware of the robot's communicative capability. / QC 20100714
78

Human-Robot Interaction and Mapping with a Service Robot : Human Augmented Mapping

Topp, Elin Anna January 2008 (has links)
An issue widely discussed in robotics research is the ageing society, with its consequences for care-giving institutions and opportunities for developments in the area of service robots and robot companions. The general idea of using robotic systems in a personal or private context to support an independent way of living, not only for the elderly but also for the physically impaired, is pursued in different ways, ranging from socially oriented robotic pets to mobile assistants. Thus, the idea of the personalised general service robot is not too far-fetched. Crucial for such a service robot is the ability to navigate in its working environment, which has to be assumed to be an arbitrary domestic or office-like environment shared with human users and bystanders. With methods developed and investigated in the field of simultaneous localisation and mapping, it has become possible for mobile robots to explore and map an unknown environment while staying localised with respect to their starting point and the surroundings. These approaches, though, do not consider the representation of the environment that is used by humans to refer to particular places. Robotic maps are often metric representations of features that can be obtained from sensory data, whereas humans have a more topological, in fact partially hierarchical, way of representing environments. Especially for the communication between a user and her personal robot, it is thus necessary to provide a link between the robotic map and the human understanding of the robot's workspace. The term Human Augmented Mapping is used for a framework that allows a robotic map to be integrated with human concepts, so that communication about the environment can be facilitated. By assuming an interactive setting for the map acquisition process, it is possible for the user to influence the process significantly: personal preferences can be made part of the environment representation acquired by the robot. Advantages also become obvious for the mapping process itself, since in an interactive setting the robot can ask for information and resolve ambiguities with the help of the user. Thus, a scenario of a "guided tour", in which a user can ask a robot to follow and presents the surroundings to it, is assumed as the starting point for a system that integrates robotic mapping, interaction and human environment representations. A central point is the development of a generic, partially hierarchical environment model that is applied in a topological graph structure as part of an overall experimental Human Augmented Mapping system implementation. Different aspects regarding the representation of entities of the spatial concepts used in this hierarchical model, particularly regions, are investigated. The proposed representation is evaluated both as a description of delimited regions and for the detection of transitions between them. In three user studies, different aspects of the human-robot interaction issues of Human Augmented Mapping are investigated and discussed. Results from the studies support the proposed model and representation approaches and can serve as a basis for further studies in this area. / QC 20100914
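A minimal sketch of the kind of structure described here, a topological graph whose nodes carry metric poses and can be grouped into human-named regions, is shown below; the class and method names are illustrative assumptions, not the thesis's implementation.

```python
# Minimal sketch of a topological map augmented with human region labels, in the
# spirit of Human Augmented Mapping: graph nodes carry metric poses, edges record
# traversability, and nodes can be grouped into user-named regions. Class and
# method names are illustrative assumptions, not the thesis's code.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    node_id: int
    pose: tuple                       # (x, y) from the robot's metric map
    region: Optional[str] = None      # human concept attached to this node

@dataclass
class HumanAugmentedMap:
    nodes: dict = field(default_factory=dict)
    edges: set = field(default_factory=set)

    def add_node(self, node_id, pose):
        self.nodes[node_id] = Node(node_id, pose)

    def connect(self, a, b):
        self.edges.add(frozenset((a, b)))

    def label_region(self, name, node_ids):
        """Attach a human concept (e.g. 'kitchen') to a set of graph nodes."""
        for nid in node_ids:
            self.nodes[nid].region = name

    def region_of(self, node_id):
        return self.nodes[node_id].region

# Guided-tour style usage: the robot adds nodes while following the user,
# and the user names the region they are currently in.
m = HumanAugmentedMap()
m.add_node(0, (0.0, 0.0)); m.add_node(1, (2.0, 0.5)); m.connect(0, 1)
m.label_region("kitchen", [0, 1])
print(m.region_of(1))                 # -> "kitchen"
```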
79

Fysisk, känslomässig och social interaktion : En analys av upplevelserna av robotsälen Paro hos kognitivt funktionsnedsatta och på äldreboende / Tangible, affective and social interaction : Analysing experiences of Paro the robot seal in elderly care and among cognitively disabled

Nobelius, Jörgen January 2011 (has links)
This field study examined how elderly and cognitively disabled people used and experienced a social companion robot. The following pages explore the question: what are the physical, social and affective qualities of the interaction? The aim was to see, through observation, how qualities of the interaction could activate different forms of behavior. The results show that motion, sound and the eyes together created communicative and emotional changes in users, who felt joy and were willing to share the activity with others. To some extent the robot stimulated users to create their own imaginative experiences, but it often failed to engage a user or the group for any length of time and was also considered too large and heavy to handle. / This field study examined how elderly and cognitively disabled people used and experienced a social robot. The following pages explore the question: what physical, social and affective qualities are present in the interaction? The aim was to observe how the qualities of the interaction could activate different types of behavior. The results show that motion, sound and the eyes together created communicative and emotional changes in the users, who showed joy and gladly shared the experience with others. To some extent the robot stimulated the users to create their own imaginative experiences, but it seldom managed to involve a user or the group for any length of time and was also considered too large and heavy to handle.
80

Determining User Requirements Of First-of-a-kind Interactive Systems: An Implementation Of Cognitive Analysis On Human Robot Interaction

Acikgoz Kopanoglu, Teksin 01 March 2011 (has links) (PDF)
Although user requirements are critical for the conformance of a system (or product) design with its users, they may be appraised late in the development process. Hence, resources and schedules may be planned within the limitations of system-oriented requirements, and critical feedback discovered late from users may not be reflected in the requirements or the design. The focus of this thesis is how to determine the user requirements of first-of-a-kind interactive systems early in the development process. First-of-a-kind interactive systems differ from others in that they have no experienced users or subject-matter experts. Cognitive analysis techniques are investigated with the aim of discovering and integrating user requirements early in the development processes of first-of-a-kind systems. Hybrid Cognitive Task Analysis, one of these cognitive analysis techniques, is carried out to determine the user requirements of a system in the Human Robot Interaction area. In doing so, the methodology is exemplified, and its competency and correspondence with the domain are observed.
