51 |
Wie kommt die Robotik zum Sozialen? Epistemische Praktiken der Sozialrobotik. [How does robotics get to the social? Epistemic practices of social robotics.] Bischof, Andreas, 01 March 2017 (has links) (PDF)
In numerous research projects, considerable financial and human resources are being devoted to getting robots out of the factory halls and into everyday settings such as hospitals, kindergartens and private homes. The engineers who build them face a non-trivial challenge: they must translate the ambivalences and contingencies of everyday interaction into the discrete language of machines. How they meet this challenge, which patterns and solutions they draw upon, and which implications for the use of social robots are thereby laid down, is the subject of this book. In search of an answer to what makes robots social, Andreas Bischof visited and ethnographically studied research laboratories and conferences in Europe and North America. The key results of this study include a typology of research goals in social robotics, an epistemic genealogy of the idea of the robot in everyday worlds, a reconstruction of how social-robotics development relates to 'real' everyday worlds, and an analysis of three genres of epistemic practices that engineers employ in order to make robots social.
|
52 |
Human-in-the-loop control for cooperative human-robot tasks. Chipalkatty, Rahul, 29 March 2012 (has links)
Even with the advance of autonomous robotics and automation, many automated tasks still require human intervention or guidance to mediate uncertainties in the environment or to execute the complexities of a task that autonomous robots are not yet equipped to handle. As such, robot controllers are needed that utilize the strengths of both autonomous agents, adept at handling lower-level control tasks, and humans, superior at handling higher-level cognitive tasks.
To address this need, we develop a control theoretic framework that seeks to incorporate user commands such that user intention is preserved while an automated task is carried out by the controller. The approach is novel in that system-theoretic tools allow for analytic guarantees of feasibility and convergence to goal states, which naturally lead to varying levels of autonomy. We develop a model predictive controller that takes human input, infers human intent, then applies a control that minimizes deviations from the intended human control while ensuring that the lower-level automated task is being completed.
This control framework is then evaluated in a human operator study involving a shared control task with human guidance of a mobile robot for navigation. These theoretical and experimental results lay the foundation for applying this method of human-robot cooperative control to actual human-robot tasks. Specifically, the control is applied to an Urban Search and Rescue robot task, where shared control of a quadruped rescue robot is needed to ensure static stability during human-guided leg placements on uneven terrain. The framework is also extended to a multiple-user, multiple-agent system in which human operators manipulate the shape of a formation that the agents maintain. User studies are also conducted to evaluate the control in multiple-operator scenarios.
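As a rough sketch of the mediation idea only (not the thesis's actual model predictive formulation, which optimises over a receding horizon), the example below collapses the problem to a single step: choose the control closest to the human's commanded input that still satisfies a linear task constraint. The dynamics, constraint and numbers are invented for illustration.

```python
import numpy as np

def mediate_control(u_human, a, b):
    """Return the feasible control closest to the human's command.

    The automated task is abstracted as one linear constraint
    a @ u >= b (e.g. 'keep making forward progress'). If the human
    command already satisfies it, pass it through unchanged; otherwise
    project onto the constraint boundary, preserving as much of the
    human's intent as the task allows."""
    if a @ u_human >= b:
        return u_human
    return u_human + (b - a @ u_human) / (a @ a) * a

# Toy example: 2-D velocity command; the task requires vx >= 0.2
u_h = np.array([-0.1, 0.5])            # human pushes backward and left
a, b = np.array([1.0, 0.0]), 0.2
print(mediate_control(u_h, a, b))      # -> [0.2 0.5]: lateral intent kept
```

A full MPC version would solve this projection jointly over a prediction horizon subject to the system dynamics, which is what provides the feasibility and convergence guarantees mentioned above.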
|
53 |
Haptic interaction between naive participants and mobile manipulators in the context of healthcare. Chen, Tiffany L., 22 May 2014 (has links)
Human-scale mobile robots that manipulate objects (mobile manipulators) have the potential to perform a variety of useful roles in healthcare. Many promising roles for robots require physical contact with patients and caregivers, which is fraught with both psychological and physical implications. In this thesis, we used a human factors approach to evaluate system performance and participant responses when potential end users performed a healthcare task involving physical contact with a robot. We performed four human-robot interaction studies with 100 people who were not experts in robotics (naive participants). We show that physical contact between naive participants and human-scale mobile manipulators can be acceptable and effective in a variety of healthcare contexts. In this thesis, we investigated two forms of touch-based (haptic) interaction relevant to healthcare. First, we studied how participants responded to physical contact initiated by an autonomous robotic nurse. On average, people responded favorably to
robot-initiated touch when the robot indicated that it was a necessary part of a healthcare task. However, their responses strongly depended on what they thought the robot's intentions were, which suggests that this will be an important consideration for future healthcare robots. Second, we investigated the coordination of whole-body motion between human-scale robots and people by the application of forces to the robot's hands and arms. Nurses found this haptic interaction to be intuitive and preferred it over a standard gamepad interface. Using haptic interaction, they also navigated the robot through a cluttered healthcare environment in less time, with fewer collisions, and with less cognitive load. Through a study with expert dancers, we demonstrated the feasibility of robots as dance-based exercise partners. The experts rated a robot that used only haptic interaction to be a good follower according to subjective measures of dance quality. We also determined that healthy older adults were accepting of using a robot for partner dance-based exercise. On average, they found the robot easy and enjoyable to use and felt that it performed a partnered stepping task well. The findings in this work have several implications for the design of robots in healthcare. We found that the perceived intent of robot-initiated touch significantly influenced people's responses. Thus, we determined that autonomous robots that initiate touch with patients can be acceptable in some contexts. This result highlights the importance of
considering the psychological responses of users when designing physical human-robot interactions in addition to considering the mechanics of performing tasks. We found that naive users across three user groups could quickly learn how to effectively use physical interaction to lead a robot during navigation, positioning, and partnered stepping tasks. These consistent results underscore the value of using physical interaction to enable users of varying backgrounds to lead a robot during whole-body motion coordination across different healthcare contexts.
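Whole-body lead-through of the kind described above is commonly realised with an admittance law that maps forces applied to the robot's arms into base motion. The planar sketch below illustrates that general idea; the virtual inertia and damping values are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

M = np.diag([8.0, 8.0, 4.0])     # virtual inertia for (x, y, yaw)
D = np.diag([25.0, 25.0, 15.0])  # virtual damping

def admittance_step(wrench, vel, dt=0.01):
    """One Euler step of the admittance law M*dv/dt + D*v = F:
    forces/torque sensed at the robot's arms become base velocity."""
    acc = np.linalg.solve(M, wrench - D @ vel)
    return vel + acc * dt

vel = np.zeros(3)
push = np.array([20.0, 0.0, 2.0])   # steady forward push with a slight twist
for _ in range(300):                # 3 s of simulated guidance
    vel = admittance_step(push, vel)
print(vel)  # approaches D^-1 @ F = [0.8, 0.0, 0.133]: the base yields to the push
```

Stiffer damping makes the robot feel heavier and more deliberate; lighter damping makes it more responsive, a trade-off that user studies like those above can help tune.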
|
54 |
The development of a human-robot interface for industrial collaborative system. Tang, Gilbert, 04 1900 (has links)
Industrial robots have been identified as one of the most effective solutions for optimising output and quality within many industries. However, there are a number of manufacturing applications involving complex tasks and inconstant components which prohibit the use of fully automated solutions in the foreseeable future.
A breakthrough in robotic technologies and changes in safety legislation have supported the creation of robots that coexist with and assist humans in industrial applications. It has been broadly recognised that human-robot collaborative systems would be a realistic solution for an advanced production system with a wide range of applications and high economic impact. This type of system can utilise the best of both worlds: the robot can perform simple tasks that require high repeatability, while the human performs tasks that require judgement and the dexterity of human hands. Robots in such a system will operate as “intelligent assistants”.
In a collaborative working environment, the robot and the human share the same working area and interact with each other. This level of interface requires effective ways of communicating and collaborating to avoid unwanted conflicts. This project aims to create a user interface for an industrial collaborative robot system through the integration of current robotic technologies. The robotic system is designed for seamless collaboration with a human in close proximity. The system is capable of communicating with the human via the exchange of gestures, as well as visual signals which operators can observe and comprehend at a glance.
The main objective of this PhD is to develop a Human-Robot Interface (HRI) for communication with an industrial collaborative robot during collaboration in close proximity. The system is developed in conjunction with a small-scale collaborative robot system which has been integrated using off-the-shelf components. The system should be capable of receiving input from the human user via an intuitive method as well as indicating its status to the user
effectively. The HRI was developed using a combination of hardware integration and software development. The software and the control framework were developed in a way that is applicable to other industrial robots in the future. The developed gesture command system is demonstrated on a heavy-duty industrial robot.
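As a loose illustration of how a gesture command interface of this kind might be structured (the gesture names, commands and confirmation protocol below are invented for the sketch, not taken from the thesis), a small state machine can map recognised gestures to robot commands and drive the visual status signal shown to the operator.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    AWAIT_CONFIRM = auto()

# Hypothetical mapping from recognised hand gestures to robot commands
GESTURE_COMMANDS = {"point_left": "move_left", "point_right": "move_right",
                    "palm_up": "hand_over_part"}

class GestureInterface:
    def __init__(self):
        self.state, self.pending = State.IDLE, None

    def on_gesture(self, gesture):
        """Return a command to execute, or a status string to display."""
        if gesture == "fist":                    # safety gesture always wins
            self.state, self.pending = State.IDLE, None
            return "stop"
        if self.state is State.IDLE and gesture in GESTURE_COMMANDS:
            self.pending = GESTURE_COMMANDS[gesture]
            self.state = State.AWAIT_CONFIRM
            return f"display: confirm '{self.pending}'?"  # visual signal
        if self.state is State.AWAIT_CONFIRM and gesture == "thumbs_up":
            self.state, cmd = State.IDLE, self.pending
            self.pending = None
            return cmd
        return "display: idle"

hri = GestureInterface()
print(hri.on_gesture("palm_up"))    # display: confirm 'hand_over_part'?
print(hri.on_gesture("thumbs_up"))  # hand_over_part
```

The explicit confirmation step stands in for the at-a-glance visual feedback described above, so the operator always sees what the robot believes it has been told before it moves.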
|
55 |
Blinking in human communicative behaviour and its reproduction in artificial agents. Ford, Christopher Colin, January 2014 (has links)
A significant year-on-year rise in the creation and sales of personal and domestic robotic systems, and the development of online embodied communicative agents (ECAs), has in parallel seen an increase in end-users from the public domain interacting with these systems. A number of these robotic/ECA systems are defined as social, whereby they are physically designed to resemble the bodily structure of a human and behaviourally designed to exist within human social surroundings. Their behavioural design is especially important with respect to communication, as it is commonly stated that for any social robotic/ECA system to be truly useful within its role, it will need to communicate effectively with its human users. Currently, however, the act of a human user instructing a social robotic/ECA system to perform a task highlights many areas of contention in human communication understanding. Commonly, social robotic/ECA systems are embedded with either non-human-like communication interfaces or deficient imitative human communication interfaces, neither of which reaches the level of communicative interaction expected by human users. The resulting communication difficulties create negative associations with the social robotic/ECA system in its users, and lead to a strong requirement for the development of more effective imitative human communication behaviours within these systems. This thesis presents findings from our research into human non-verbal facial behaviour in communication. The objective of the work was to improve communication grounding between social robotic/ECA systems and their human users through the conceptual design of a computational system of human non-verbal facial behaviour (which in human-human communication is estimated to carry around 55% of the intended semantic meaning of a transferred message), together with the development of a highly accurate computational model of human blink behaviour and a computational model of physiological saccadic eye movement in human-human communication, enriching the human-like properties of the facial non-verbal communicative feedback expressed by the social robotic/ECA system. An enhanced level of interaction would likely be achieved, leading to increased empathic response from the user and an improved chance of a satisfactory communicative conclusion to a user's task-requirement instructions. The initial focus of the work was the capture, transcription and analysis of common human non-verbal facial behavioural traits within human-human communication, linked to the expression of the mental communicative states of understanding, uncertainty, misunderstanding and thought. Facial non-verbal behaviour data were collected and transcribed from twelve participants (six female) through a dialogue-based communicative interaction. A further focus was the analysis of blink co-occurrence with other traits of human-human communicative non-verbal facial behaviour, and the capture of saccadic eye movement at common proxemic distances. From these data analysis tasks, computational models of human blink behaviour and saccadic eye movement behaviour while listening and speaking within human-human communication were designed and implemented within the LightHead social robotic system. Human-based studies on the perception by naïve users of the imitative probabilistic computational blink model's performance on the LightHead robotic system are presented and the results discussed.
The thesis concludes with the impact of the work, along with suggestions for further studies towards the important task of achieving seamless interactive communication between social robotic/ECA systems and their human users.
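To make the notion of a probabilistic computational blink model concrete, here is a hedged sketch: inter-blink intervals are drawn from a log-normal distribution, and extra blinks are inserted near speech-turn boundaries to reflect the co-occurrence of blinks with communicative events. The distribution choice and every parameter value are assumptions for illustration, not the fitted values from this thesis.

```python
import math
import random

def sample_blink_times(duration_s, turn_boundaries=(),
                       median_ibi=4.0, sigma=0.6, boundary_boost=0.5):
    """Sample blink onset times (seconds) over an interaction.

    Baseline inter-blink intervals are log-normal with the given median;
    an extra blink is added just after each speech-turn boundary with
    probability `boundary_boost`."""
    t, blinks = 0.0, []
    while t < duration_s:
        t += random.lognormvariate(math.log(median_ibi), sigma)
        blinks.append(t)
    for b in turn_boundaries:
        if random.random() < boundary_boost:
            blinks.append(b + random.uniform(0.0, 0.3))
    return sorted(x for x in blinks if x <= duration_s)

print(sample_blink_times(30.0, turn_boundaries=[5.2, 11.8, 19.5]))
```

Driving a robot's eyelids from such a generator, rather than from a fixed timer, gives the behaviour the natural irregularity that the perception studies above evaluate.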
|
56 |
A study of non-linguistic utterances for social human-robot interaction. Read, Robin, January 2014 (has links)
The world of animation has painted an inspiring image of what the robots of the future could be. Taking the robots R2D2 and C3PO from the Star Wars films as representative examples, these robots are portrayed as being more than just machines; rather, they are presented as intelligent and capable social peers, exhibiting many of the traits that people have. These robots have the ability to interact with people, understand us, and even relate to us in very personal ways through a wide repertoire of social cues. As robotic technologies continue to make their way into society at large, there is a growing trend toward making social robots. The field of Human-Robot Interaction concerns itself with studying, developing and realising these socially capable machines, equipping them with a very rich variety of capabilities that allow them to interact with people in natural and intuitive ways, ranging from the use of natural language, body language and facial gestures, to more unique ways such as expression through colours and abstract sounds. This thesis studies the use of abstract, expressive sounds, like those used iconically by the robot R2D2. These are termed Non-Linguistic Utterances (NLUs) and are a means of communication with a rich history in film and animation. However, very little is understood about how such expressive sounds may be utilised by social robots, and how people respond to them. This work presents a series of experiments aimed at understanding how NLUs can be utilised by a social robot in order to convey affective meaning to people both young and old, and what factors impact the production and perception of NLUs. Firstly, it is shown that not all robots should use NLUs: the morphology of the robot matters. People perceive NLUs differently across different robots, and not always in a desired manner. Next it is shown that people readily project affective meaning onto NLUs, though not in a coherent manner. Furthermore, people's affective inferences are not subtle; rather, they are drawn to well-established, basic affect prototypes. Moreover, it is shown that the valence of the situation in which an NLU is made overrides the initial valence of the NLU itself: situational context biases how people perceive utterances made by a robot, and through this, coherence between people in their affective inferences is found to increase. Finally, it is uncovered that NLUs are best not used as a replacement for natural language (as they are by R2D2); rather, people show a preference for them being used alongside natural language, where they can play a supportive role by providing essential social cues.
|
57 |
Robots learning actions and goals from everyday people. Akgun, Baris, 07 January 2016 (has links)
Robots are destined to move beyond caged factory floors towards domains where they will interact closely with humans. They will encounter highly varied environments, scenarios and user demands. As a result, programming robots after deployment will be an important requirement. To address this challenge, the field of Learning from Demonstration (LfD) emerged with the vision of programming robots through demonstrations of the desired behavior instead of explicit programming. The field of LfD within robotics has been around for more than 30 years and is still actively researched. However, very little research has been done on the implications of having a non-robotics expert as a teacher. This thesis aims to bridge this gap by developing learning from demonstration algorithms and interaction paradigms that allow non-expert people to teach robots new skills.
The first step of the thesis was to evaluate how non-expert teachers provide demonstrations to robots. Keyframe demonstrations are introduced to the field of LfD to help people teach skills to robots and are compared with traditional trajectory demonstrations. The utility of keyframes is validated by a series of experiments with more than 80 participants. Based on the experiments, a hybrid of trajectory and keyframe demonstrations is proposed to take advantage of both, and a method was developed to learn from trajectories, keyframes and hybrid demonstrations in a unified way.
A key insight from these user experiments was that teachers are goal oriented. They concentrated on achieving the goal of the demonstrated skills rather than on providing good-quality demonstrations. Based on this observation, this thesis introduces a method that can learn actions and goals from the same set of demonstrations. The action models are used to execute the skill and the goal models to monitor this execution. A user study with eight participants and two skills showed that successful goal models can be learned from non-expert teacher data even if the resulting action models are not as successful. Following these results, this thesis further develops a self-improvement algorithm that uses the goal monitoring output to improve the action models, without further user input. This approach is validated with an expert user and two skills. Finally, this thesis builds an interactive LfD system that incorporates both goal learning and self-improvement and evaluates it with 12 naive users and three skills. The results suggest that teacher feedback during experiments increases skill execution and monitoring success. Moreover, non-expert data can be used as a seed for self-improvement to fix unsuccessful action models.
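A minimal sketch of the keyframe idea under strong simplifying assumptions (keyframes already aligned across demonstrations, one Gaussian per keyframe, a fixed z-score test for goal monitoring); none of the specifics below are the thesis's actual algorithms.

```python
import numpy as np

def learn_keyframe_model(demos):
    """demos: list of (n_keyframes, dof) arrays, keyframes aligned by index.
    Fit a per-keyframe Gaussian (mean and std) as the action model."""
    stacked = np.stack(demos)                  # (n_demos, n_keyframes, dof)
    return stacked.mean(axis=0), stacked.std(axis=0) + 1e-6

def goal_reached(final_state, means, stds, n_sigma=2.0):
    """Goal model: is the observed final state consistent with the
    distribution of the last keyframe across demonstrations?"""
    z = np.abs(final_state - means[-1]) / stds[-1]
    return bool(np.all(z < n_sigma))

# Three noisy demonstrations of a 2-DOF skill with four keyframes each
rng = np.random.default_rng(0)
demos = [np.array([[0, 0], [1, .5], [2, 1], [3, 1]])
         + 0.05 * rng.standard_normal((4, 2))
         for _ in range(3)]
means, stds = learn_keyframe_model(demos)
print(goal_reached(np.array([3.0, 1.0]), means, stds))  # likely True here
```

Execution would spline between the keyframe means while the goal model runs alongside to flag failures; the self-improvement loop described above uses exactly that monitoring signal to decide when to adapt the action model.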
|
58 |
Facilitating play between children with autism and an autonomous robot. Francois, Dorothee C. M., January 2009 (has links)
This thesis is part of the Aurora project, an ongoing long-term project investigating the potential use of robots to help children with autism overcome some of their impairments in social interaction, communication and imagination. Autism is a spectrum disorder and children with autism have different abilities and needs. Related research has shown that robots can play the role of a mediator for social interaction in the context of autism. Robots can enable simple interactions by initially providing a relatively predictable environment for play; progressively, the complexity of the interaction can be increased. The purpose of this thesis is to facilitate play between children with autism and an autonomous robot. Children with autism have a potential for play but often encounter obstacles to actualising this potential. Through play, children can develop multidisciplinary skills involving social interaction, communication and imagination. Moreover, play is a medium for self-expression. The purpose here is to enable children with autism to experience a large range of play situations, ranging from dyadic play with progressively better-balanced interaction styles, to situations of triadic play with both the robot and the experimenter. These triadic play situations could also involve symbolic or pretend play. This PhD work produced the following results:
• A new methodological approach to designing, conducting and analysing robot-assisted play was developed and evaluated. This approach draws inspiration from non-directive play therapy, where the child is the main leader for play and the experimenter participates in the play sessions. I introduced a regulation process which enables the experimenter to intervene under precise conditions in order to: i) prevent the child from entering or staying in repetitive behaviours, ii) provide bootstrapping that helps the child reach a situation of play she is about to enter, and iii) ask the child questions dealing with affect or reasoning about the robot. This method has been tested in a long-term study with six children with autism. Video recordings of the play sessions were analysed in detail according to three dimensions, namely Play, Reasoning and Affect. Results have shown the ability of this approach to meet each child's specific needs and abilities. Future work may develop this into a novel approach in autism therapy.
• A novel and generic computational method for the automatic recognition of human-robot interaction styles (specifically gentleness and frequency of touch interaction) in real time was developed and tested experimentally (a simplified sketch of this recognition task follows the list below). This method, the Cascaded Information Bottleneck Method, is based on an information-theoretic approach. It relies on the principle that the relevant information can be progressively extracted from a time series with a cascade of successive bottlenecks sharing the same cardinality of bottleneck states but trained successively. The method has been tested with data generated with a physical robot a) during human-robot interactions in laboratory conditions and b) during child-robot interactions in school. It shows sound recognition of events at both short-term and mid-term time scales, and the recognition process involves only a very short delay. The Cascaded Information Bottleneck is a generic method that can potentially be applied in various applications of socially interactive robots.
• A proof-of-concept adaptive robot was demonstrated that is responsive to different styles of interaction, with its impact evaluated in a short-term study with seven children with autism. The recognition process relies on the Cascaded Information Bottleneck Method, and the robot rewards well-balanced interaction styles. The study shows the potential of the adaptive robot i) to encourage children to engage more in the interaction and ii) to positively influence the children's play styles towards better-balanced interaction styles. It is hoped that this work is a step forward towards socially adaptive robots as well as robot-assisted play for children with autism.
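The sketch below is not the Cascaded Information Bottleneck Method; it is a deliberately simple windowed-feature baseline for the same recognition target (gentleness and frequency of touch), included only to make the inputs and outputs of the recognition task concrete. The sampling rate, thresholds and feature choices are assumptions.

```python
import numpy as np

def touch_style_features(pressure, fs=50, window_s=2.0):
    """pressure: 1-D numpy array from a touch/force sensor. For each
    window, extract the two cues targeted above: gentleness (mean
    pressure) and frequency (touch onsets per second)."""
    w = int(fs * window_s)
    feats = []
    for start in range(0, len(pressure) - w + 1, w):
        seg = pressure[start:start + w]
        onsets = np.sum((seg[1:] > 0.1) & (seg[:-1] <= 0.1))  # rising edges
        feats.append((seg.mean(), onsets / window_s))
    return np.array(feats)

def classify_style(feats, gentle_thresh=0.3, freq_thresh=1.5):
    """Label each window along both style dimensions."""
    return [("gentle" if mean_p < gentle_thresh else "forceful",
             "sparse" if freq < freq_thresh else "frequent")
            for mean_p, freq in feats]

# Toy signal: 4 s at 50 Hz containing one soft, brief touch
sig = np.zeros(200)
sig[30:40] = 0.2
print(classify_style(touch_style_features(sig)))
# -> [('gentle', 'sparse'), ('gentle', 'sparse')]
```

The actual method replaces these hand-picked features with a trained cascade of bottleneck variables, which is what lets it recognise styles in real time with only a short delay.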
|
59 |
The design space for robot appearance and behaviour for social robot companions. Walters, Michael L., January 2008 (has links)
To facilitate necessary task-based interactions, and to avoid annoying or upsetting people, a domestic robot will have to exhibit appropriate non-verbal social behaviour. Most current robots have the ability to sense and control the distance of people and objects in their vicinity. An understanding of human-robot proxemic and associated non-verbal social behaviour is crucial for humans to accept robots as domestic companions or servants. Therefore, this thesis addressed the following hypothesis: attributes of robot appearance, behaviour, task context and situation will affect the distances that people will find comfortable between themselves and a robot. Initial exploratory Human-Robot Interaction (HRI) experiments replicated human-human studies into comfortable approach distances, with a mechanoid robot in place of one of the human interactors. It was found that most human participants respected the robot's interpersonal space, and there were systematic differences in participants' comfortable approach distances to robots with different voice styles. It was proposed that greater initial comfortable approach distances to the robot were due to perceived inconsistencies between the robot's overall appearance and voice style. To investigate these issues further it was necessary to develop HRI experimental set-ups, a novel Video-based HRI (VHRI) trial methodology, trial data collection methods and analytical methodologies. An exploratory VHRI trial then investigated human perceptions and preferences for robot appearance and non-verbal social behaviour. The methodological approach highlighted the holistic and embodied nature of robot appearance and behaviour. Findings indicated that people tend to rate a particular behaviour less favourably when the behaviour is not consistent with the robot's appearance. A live HRI experiment finally confirmed and extended these previous findings: multiple factors significantly affected participants' preferences for robot-to-human approach distances. There was a significant general tendency for participants to prefer either a tall humanoid robot or a short mechanoid robot, and it was suggested that this may be due to participants' internal or demographic factors. Participants' preferences for robot height and appearance were both found to have significant effects on their preferences for live robot-to-human comfortable approach distances, irrespective of the robot type they actually encountered. The thesis confirms, for mechanoid and humanoid robots, results that have previously been found in the domain of human-computer interaction (cf. Reeves & Nass (1996)): people seem to automatically treat interactive artefacts socially. An original empirical human-robot proxemic framework is proposed in which the experimental findings from the study can be unified in the wider context of human-robot proxemics. This is seen as a necessary first step towards the desired end goal of creating and implementing a working robot proxemic system which can allow the robot to: a) exhibit socially acceptable spatial behaviour when interacting with humans, and b) interpret and gain additional valuable insight into a range of HRI situations from the relative proxemic behaviour of humans in the immediate area. A discussion of future work concludes the thesis.
|
60 |
Robot-mediated interviews: a robotic intermediary for facilitating communication with children. Wood, Luke Jai, January 2015 (has links)
Robots have been used in a variety of education, therapy and entertainment contexts. This thesis introduces the novel application of using humanoid robots for Robot-Mediated Interviews (RMIs). In the initial stages of this research it was necessary to first establish as a baseline whether children would respond to a robot in an interview setting; the first study therefore compared how children responded to a robot and a human in an interview setting. Following this successful initial investigation, the second study expanded on this research by examining how children would respond to different types and difficulties of questions from a robot compared to a human interviewer. Building on these studies, the third study investigated how an RMI approach would work for children with special needs. Following the positive results from the three studies indicating that an RMI approach may have some potential, three separate user panel sessions were organised with user groups that have expertise in working with children and for whom the system would be potentially useful in their daily work. The panel sessions were designed to gather feedback on the previous studies and outline a set of requirements to make an RMI system feasible for real-world users. The feedback and requirements from the user groups were considered and implemented in the system before conducting a final field trial with a potential real-world user. The results of the studies in this research reveal that the children generally interacted with KASPAR in a way very similar to how they interacted with a human interviewer, regardless of question type or difficulty. The feedback gathered from experts working with children suggested that the three most important and desirable features of an RMI system were reliability, flexibility and ease of use. The feedback from the experts also indicated that an RMI system would most likely be used with children with special needs. The final field trial with 10 children and a potential real-world user illustrated that an RMI system could be used effectively outside of a research context, with all of the children in the trial responding to the robot. Feedback from the educational psychologist testing the system suggests that an RMI approach could have real-world impact if the system were developed further.
|