81 |
Robots learning actions and goals from everyday people
Akgun, Baris, 07 January 2016
Robots are destined to move beyond the caged factory floors towards domains where they will be interacting closely with humans. They will encounter highly varied environments, scenarios and user demands. As a result, programming robots after deployment will be an important requirement. To address this challenge, the field of Learning from Demonstration (LfD) emerged with the vision of programming robots through demonstrations of the desired behavior instead of explicit programming. The field of LfD within robotics has been around for more than 30 years and remains an active research area. However, very little research has been done on the implications of having a non-robotics expert as a teacher. This thesis aims to bridge this gap by developing learning from demonstration algorithms and interaction paradigms that allow non-expert people to teach robots new skills.
The first step of the thesis was to evaluate how non-expert teachers provide demonstrations to robots. Keyframe demonstrations are introduced to the field of LfD to help people teach skills to robots and are compared with traditional trajectory demonstrations. The utility of keyframes is validated by a series of experiments with more than 80 participants. Based on the experiments, a hybrid of trajectory and keyframe demonstrations is proposed to take advantage of both, and a method is developed to learn from trajectory, keyframe and hybrid demonstrations in a unified way.
A key insight from these user experiments was that teachers are goal oriented. They concentrated on achieving the goal of the demonstrated skills rather than on providing good-quality demonstrations. Based on this observation, this thesis introduces a method that can learn actions and goals from the same set of demonstrations. The action models are used to execute the skill and the goal models to monitor this execution. A user study with eight participants and two skills showed that successful goal models can be learned from non-expert teacher data even if the resulting action models are not as successful. Following these results, this thesis further develops a self-improvement algorithm that uses the goal-monitoring output to improve the action models without further user input. This approach is validated with an expert user and two skills. Finally, this thesis builds an interactive LfD system that incorporates both goal learning and self-improvement and evaluates it with 12 naive users and three skills. The results suggest that teacher feedback during experiments increases skill execution and monitoring success. Moreover, non-expert data can be used as a seed for self-improvement to fix unsuccessful action models.
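As an illustration of the goal-monitoring idea described above (not the thesis's actual models), the sketch below assumes a goal model that is simply a Gaussian over end-state features of successful demonstrations; an execution is flagged as successful when its observed end state lies close to that distribution. All names, data and thresholds are hypothetical.

```python
import numpy as np

class GoalModel:
    """Toy goal model: a Gaussian over end-state features of successful demos."""

    def __init__(self, reg: float = 1e-6):
        self.mean = None
        self.cov_inv = None
        self.reg = reg

    def fit(self, end_states: np.ndarray) -> None:
        # end_states: (n_demos, n_features) feature vectors at the end of each demo
        self.mean = end_states.mean(axis=0)
        cov = np.cov(end_states, rowvar=False) + self.reg * np.eye(end_states.shape[1])
        self.cov_inv = np.linalg.inv(cov)

    def goal_reached(self, state: np.ndarray, threshold: float = 3.0) -> bool:
        # Flag success if the observed end state is close (in Mahalanobis
        # distance) to the end states seen in the demonstrations.
        d = state - self.mean
        return float(np.sqrt(d @ self.cov_inv @ d)) < threshold


# Hypothetical usage: monitor a skill execution with the learned goal model.
demos = np.random.default_rng(0).normal(size=(12, 4))   # stand-in for demo end states
monitor = GoalModel()
monitor.fit(demos)
print(monitor.goal_reached(demos[0]))                    # check a demo end state against the model
```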
82 |
Facilitating play between children with autism and an autonomous robot
Francois, Dorothee C. M., January 2009
This thesis is part of the Aurora project, an ongoing long-term project investigating the potential use of robots to help children with autism overcome some of their impairments in social interaction, communication and imagination. Autism is a spectrum disorder and children with autism have different abilities and needs. Related research has shown that robots can play the role of a mediator for social interaction in the context of autism. Robots can enable simple interactions, by initially providing a relatively predictable environment for play. Progressively, the complexity of the interaction can be increased. The purpose of this thesis is to facilitate play between children with autism and an autonomous robot. Children with autism have a potential for play but often encounter obstacles to actualising this potential. Through play, children can develop multidisciplinary skills, involving social interaction, communication and imagination. Besides, play is a medium for self-expression. The purpose here is to enable children with autism to experience a large range of play situations, ranging from dyadic play with progressively better balanced interaction styles, to situations of triadic play with both the robot and the experimenter. These triadic play situations could also involve symbolic or pretend play. This PhD work produced the following results:
• A new methodological approach of how to design, conduct and analyse robot-assisted play was developed and evaluated. This approach draws inspiration from non-directive play therapy, where the child is the main leader for play and the experimenter participates in the play sessions. I introduced a regulation process which enables the experimenter to intervene under precise conditions in order to: i) prevent the child from entering or staying in repetitive behaviours, ii) provide bootstrapping that helps the child reach a situation of play she is about to enter, and iii) ask the child questions dealing with affect or reasoning about the robot. This method has been tested in a long-term study with six children with autism. Video recordings of the play sessions were analysed in detail according to three dimensions, namely Play, Reasoning and Affect. Results have shown the ability of this approach to meet each child's specific needs and abilities. Future work may develop this work towards a novel approach in autism therapy.
• A novel and generic computational method for the automatic recognition of human-robot interaction styles (specifically gentleness and frequency of touch interaction) in real time was developed and tested experimentally. This method, the Cascaded Information Bottleneck Method, is based on an information-theoretic approach. It relies on the principle that the relevant information can be progressively extracted from a time series with a cascade of successive bottlenecks sharing the same cardinality of bottleneck states but trained successively. This method has been tested with data that had been generated with a physical robot a) during human-robot interactions in laboratory conditions and b) during child-robot interactions in school. The method shows a sound recognition of both short-term and mid-term time-scale events. The recognition process only involves a very short delay. The Cascaded Information Bottleneck is a generic method that can potentially be applied to various applications of socially interactive robots (a minimal inference sketch follows this list).
• A proof-of-concept system of an adaptive robot that is responsive to different interaction styles was demonstrated. Its impact was evaluated in a short-term study with seven children with autism. The recognition process relies on the Cascaded Information Bottleneck Method. The robot rewards well-balanced interaction styles. The study shows the potential of the adaptive robot i) to encourage children to engage more in the interaction and ii) to positively influence the children's play styles towards better balanced interaction styles. It is hoped that this work is a step forward towards socially adaptive robots as well as robot-assisted play for children with autism.
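The training procedure of the Cascaded Information Bottleneck Method is not reproduced here; the sketch below only illustrates the inference pass of such a cascade, assuming the stage-wise conditional tables have already been learned. The table values, symbol alphabet and style labels are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
N_BOTTLENECK = 5       # shared cardinality of bottleneck states across the cascade
N_OBS = 4              # discretised touch-sensor symbols (e.g. binned pressure levels)
STYLES = ["gentle", "forceful"]

# Hypothetical learned tables (here random, normalised): in the thesis these would
# come from the stage-by-stage information-bottleneck training.
p_t_given_prev_obs = rng.dirichlet(np.ones(N_BOTTLENECK), size=(N_BOTTLENECK, N_OBS))
p_style_given_t = rng.dirichlet(np.ones(len(STYLES)), size=N_BOTTLENECK)

def classify_stream(obs_symbols):
    """Propagate a belief over bottleneck states through the cascade and
    read out an interaction-style estimate at every time step."""
    belief = np.full(N_BOTTLENECK, 1.0 / N_BOTTLENECK)
    estimates = []
    for x in obs_symbols:
        # the next bottleneck state depends on the previous state and the new observation
        belief = belief @ p_t_given_prev_obs[:, x, :]
        belief /= belief.sum()
        style_probs = belief @ p_style_given_t
        estimates.append(STYLES[int(np.argmax(style_probs))])
    return estimates

print(classify_stream([0, 1, 3, 3, 2]))   # per-step style estimates for a toy stream
```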
83 |
Robot-mediated interviews: a robotic intermediary for facilitating communication with children
Wood, Luke Jai, January 2015
Robots have been used in a variety of education, therapy and entertainment contexts. This thesis introduces the novel application of using humanoid robots for Robot-Mediated Interviews (RMIs). In the initial stages of this research it was necessary to first establish, as a baseline, whether children would respond to a robot in an interview setting; therefore the first study compared how children responded to a robot and to a human in an interview setting. Following this successful initial investigation, the second study expanded on this research by examining how children would respond to questions of different types and difficulty from a robot compared to a human interviewer. Building on these studies, the third study investigated how an RMI approach would work for children with special needs. Following the positive results from the three studies, indicating that an RMI approach may have some potential, three separate user panel sessions were organised with user groups that have expertise in working with children and for whom the system would be potentially useful in their daily work. The panel sessions were designed to gather feedback on the previous studies and to outline a set of requirements that would make an RMI system feasible for real-world users. The feedback and requirements from the user groups were considered and implemented in the system before conducting a final field trial with a potential real-world user. The results of the studies in this research reveal that the children generally interacted with KASPAR in a very similar way to how they interacted with a human interviewer, regardless of question type or difficulty. The feedback gathered from experts working with children suggested that the three most important and desirable features of an RMI system were reliability, flexibility and ease of use. The feedback from the experts also indicated that an RMI system would most likely be used with children with special needs. The final field trial with 10 children and a potential real-world user illustrated that an RMI system could potentially be used effectively outside of a research context, with all of the children in the trial responding to the robot. Feedback from the educational psychologist testing the system suggested that an RMI approach could have real-world implications if the system were developed further.
84 |
Autonomous Vehicle Social Behavior for Highway Driving
Wei, Junqing, 01 May 2017
In recent years, autonomous driving has become an increasingly practical technology. With state-of-the-art computer and sensor engineering, autonomous vehicles may be produced and widely used for travel and logistics in the near future. They have great potential to reduce traffic accidents, improve transportation efficiency, and release people from driving tasks while commuting. Researchers have built autonomous vehicles that can drive on public roads and handle normal surrounding traffic and obstacles. However, in situations like lane changing and merging, the autonomous vehicle faces the challenge of interacting smoothly with human-driven vehicles. To do this, autonomous vehicle intelligence still needs to be improved so that it can better understand and react to other human drivers on the road. In this thesis, we argue for the importance of implementing "socially cooperative driving", which is an integral part of everyday human driving, in autonomous vehicles. An intention-integrated Prediction- and Cost-function-Based (iPCB) algorithm framework is proposed to enable an autonomous vehicle to perform cooperative social behaviors. We also propose a behavioral planning framework that enables the socially cooperative behaviors with the iPCB algorithm. The new architecture is implemented in an autonomous vehicle and can coordinate the existing Adaptive Cruise Control (ACC) and Lane Centering interfaces to perform socially cooperative behaviors. The algorithm has been tested in over 500 entrance-ramp and lane-change scenarios on public roads in multiple cities in the US, and in over 10,000 scenarios in simulation and statistical testing. Results show that the proposed algorithm and framework improve the performance of autonomous lane changes and entrance-ramp handling. Compared with rule-based algorithms previously developed on an autonomous vehicle for these scenarios, over 95% of potentially unsafe situations are avoided.
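The iPCB algorithm itself is not reproduced here; the sketch below only illustrates the general prediction- and cost-function-based idea, assuming a predicted yield probability per candidate merge gap and a hand-made cost combining safety, efficiency and courtesy terms. The weights, field names and numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Gap:
    """A candidate merge gap between two human-driven vehicles (simplified)."""
    rear_time_headway: float    # s, headway to the vehicle behind the gap
    front_time_headway: float   # s, headway to the vehicle ahead of the gap
    yield_probability: float    # predicted probability that the rear driver yields

# Hypothetical weights; a real planner would tune or learn these.
W_SAFETY, W_EFFICIENCY, W_COURTESY = 10.0, 1.0, 2.0

def gap_cost(gap: Gap) -> float:
    # Safety: penalise small rear headways, scaled by how unlikely the rear driver is to yield.
    safety = (1.0 - gap.yield_probability) / max(gap.rear_time_headway, 0.1)
    # Efficiency: prefer gaps that do not force the ego vehicle to brake hard behind the leader.
    efficiency = 1.0 / max(gap.front_time_headway, 0.1)
    # Courtesy: avoid forcing the other driver to yield when they are unlikely to.
    courtesy = 1.0 - gap.yield_probability
    return W_SAFETY * safety + W_EFFICIENCY * efficiency + W_COURTESY * courtesy

candidates = [Gap(1.0, 2.5, 0.3), Gap(2.0, 1.8, 0.8), Gap(0.6, 3.0, 0.9)]
best = min(candidates, key=gap_cost)   # the gap the behavioural planner would target
print(best)
```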
85 |
Proposition d'une architecture de contrôle adaptative pour la tolérance aux fautes / Proposition of an adaptive control architecture for fault tolerance
Durand, Bastien, 15 June 2011
Software control architectures are the nerve centre of robots. Unfortunately, robots and their architectures suffer from numerous flaws that disrupt and/or compromise the achievement of the missions they are assigned. We therefore propose a methodology for designing an adaptive control architecture for the implementation of fault tolerance. The first part of this thesis proposes a state of the art of dependability, first in a generic way and then specified to the context of control architectures. The second part details the proposed methodology, which identifies the potential faults of a robot and responds to them using fault-tolerance mechanisms. The third part presents the experimental and application context in which the proposed methodology is implemented, which constitutes the fourth part of this manuscript. An experiment highlighting specific aspects of the methodology is detailed in the last part.
86 |
Gender differences in navigation dialogues with computer systems
Koulouri, Theodora, January 2013
Gender is among the most influential of the factors underlying differences in spatial abilities, human communication, and interactions with and through computers. Past research has offered important insights into gender differences in navigation and language use. Yet, given the multidimensionality of these domains, many issues remain contentious while others remain unexplored. Moreover, because it has been derived from non-interactive, and often artificial, studies, this research may not generalise well to interactive contexts of use, particularly in the practical domain of Human-Computer Interaction (HCI). At the same time, little is known about how gender strategies, behaviours and preferences interact with the features of technology in various domains of HCI, including collaborative systems and systems with natural language interfaces. Targeting these knowledge gaps, the thesis aims to address the central question of how gender differences emerge and operate in spatial navigation dialogues with computer systems. To this end, an empirical study was undertaken in which mixed-gender and same-gender pairs communicated to complete an urban navigation task, with one of the participants being under the impression that he/she was interacting with a robot. Performance and dialogue data were collected using a custom system that supported synchronous navigation and communication between the user and the robot. Based on this empirical data, the thesis describes the key role of the gender composition of the pair in navigation performance and communication processes, which outweighed the effect of individual gender, moderating gender differences and reversing predicted patterns of performance and language use. This thesis has produced several contributions: theoretical, methodological and practical. From a theoretical perspective, it offers novel findings on gender differences in navigation and communication. The methodological contribution concerns the successful application of dialogue as a naturalistic, and yet experimentally sound, research paradigm to study gender and spatial language. The practical contributions include concrete design guidelines for natural language systems and implications for the development of gender-neutral interfaces in specific domains of HCI.
87 |
Modeling humans as peers and supervisors in computing systems through runtime models
Zhong, Christopher, January 1900
Doctor of Philosophy / Department of Computing and Information Sciences / Scott A. DeLoach / There is a growing demand for more effective integration of humans and computing systems, specifically in multiagent and multirobot systems. There are two aspects to consider in human integration: (1) enabling a human to control an arbitrary number of robots (particularly heterogeneous robots) and (2) integrating humans as peers in computing systems rather than just as users or supervisors.
With traditional supervisory control of multirobot systems, the number of robots that a human can manage effectively is between four and six [17]. A limitation of traditional supervisory control is that the human must interact individually with each robot, which limits the upper bound on the number of robots that a human can control effectively. In this work, I define the concept of "organizational control" together with an autonomous mechanism that can perform task allocation and other low-level housekeeping duties, which significantly reduces the need for the human to interact with individual robots.
Humans are very versatile and robust in the types of tasks they can accomplish. However, failures in computing systems are common and thus redundancies are included to mitigate the chance of failure. When all redundancies have failed, system failure will occur and the computing system will be unable to accomplish its tasks. One way to further reduce the chance of a system failure is to integrate humans as peer "agents" in the computing system. As part of the system, humans can be assigned tasks that would otherwise have been impossible to complete due to failures.
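The organizational control mechanism of the thesis is not reproduced here; the sketch below only illustrates the general idea of an allocator that assigns tasks to whichever agents (robots or human peers) advertise the required capability, so the human no longer hands out tasks robot by robot. All names and capabilities are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    capabilities: set            # capability labels this agent (robot or human) provides
    assigned: list = field(default_factory=list)

def allocate(tasks, agents):
    """Greedy allocation: give each task to the least-loaded capable agent.
    Returns the list of tasks nobody could take (e.g. after failures)."""
    unassigned = []
    for task, needed in tasks:
        capable = [a for a in agents if needed in a.capabilities]
        if not capable:
            unassigned.append(task)
            continue
        choice = min(capable, key=lambda a: len(a.assigned))
        choice.assigned.append(task)
    return unassigned

team = [
    Agent("robot1", {"search", "map"}),
    Agent("robot2", {"search"}),
    Agent("human1", {"identify_victim"}),   # a human integrated as a peer agent
]
leftover = allocate([("sweep_area_A", "search"), ("confirm_victim", "identify_victim")], team)
print([(a.name, a.assigned) for a in team], leftover)
```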
88 |
Modélisation du profil émotionnel de l'utilisateur dans les interactions parlées Humain-Machine / User's emotional profile modelling in spoken Human-Machine interactions
Delaborde, Agnès, 19 December 2013
Analysing and formalising the emotional aspect of Human-Machine Interaction is the key to a successful relation. Beyond isolated detection of paralinguistic events (emotions, disfluencies, etc.), our aim is to provide the system with a dynamic emotional and interactional profile of the user, which evolves throughout the interaction. This profile allows the machine's response strategy to be adapted and can support long-term relationships. The profile is built by multi-level processing of the emotional and interactional cues extracted from speech with the LIMSI emotion detection tools. Low-level cues (F0, energy, etc.) are interpreted in terms of the expressed emotion, its strength, or the talkativeness of the speaker. These mid-level cues are processed in the system so as to determine, over the interaction sessions, the emotional and interactional profile of the user. The profile is made up of six dimensions: optimism, extroversion, emotional stability, self-confidence, affinity and dominance (based on the OCEAN personality model and interpersonal circumplex theories). The information derived from this profile could allow for a measurement of the engagement of the speaker.
The social behaviour of the system is adapted according to the profile, the current task state and the robot's current behaviour. Fuzzy logic rules drive the constitution of the profile and the automatic selection of the robot's behaviour. These deterministic rules are implemented on a decision engine designed by a partner in the ROMEO project. We implemented the system on the humanoid robot NAO. The overriding issue dealt with in this thesis is the viable interpretation of the paralinguistic cues extracted from speech into a relevant emotional representation of the user. We deem it noteworthy to point out that multimodal cues could reinforce the profile's robustness. So as to analyse the different parts of the emotional interaction loop between the user and the system, we collaborated in the design of several systems with different degrees of autonomy: a pre-scripted Wizard-of-Oz system, a semi-automated system, and a fully autonomous system. Using these systems allowed us to collect emotional data in robotic interaction contexts by controlling several emotion-elicitation parameters. This thesis presents the results of these data collections and offers an evaluation protocol for Human-Robot Interaction through systems with various degrees of autonomy.
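The thesis's fuzzy rule base is not reproduced here; the sketch below assumes a single made-up rule and membership function, and shows how one profile dimension (extroversion) could be updated from mid-level cues such as talkativeness and emotion strength. All values are hypothetical.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def update_extroversion(profile_value, talkativeness, emotion_strength, rate=0.2):
    """Toy rule: 'IF talkativeness is high AND emotion strength is high
    THEN extroversion is high', blended into the running profile value."""
    high_talk = tri(talkativeness, 0.4, 1.0, 1.6)
    high_strength = tri(emotion_strength, 0.4, 1.0, 1.6)
    firing = min(high_talk, high_strength)          # rule activation (fuzzy AND)
    target = firing * 1.0 + (1 - firing) * 0.5      # defuzzified target in [0.5, 1]
    return (1 - rate) * profile_value + rate * target

extroversion = 0.5                                   # neutral prior
for talk, strength in [(0.9, 0.8), (1.2, 1.1), (0.3, 0.2)]:   # mid-level cues per turn
    extroversion = update_extroversion(extroversion, talk, strength)
    print(round(extroversion, 3))
```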
89 |
Using Motion Capture and Virtual Reality to test the advantages of Human Robot Collaboration
Rivera, Francisco, January 2019
Nowadays, Virtual Reality (VR) and Human-Robot Collaboration (HRC) are becoming more and more important in industry as well as in science. This investigation studies the applications of these two technologies in the ergonomics field by developing a system able to visualise and present ergonomics evaluation results in real time for assembly tasks in a VR environment, and by evaluating the advantages of Human-Robot Collaboration through a Virtual Reality study of a specific operation carried out at the Volvo Global Trucks Operation's factory in Skövde. For the first part of this investigation, an innovative system was developed that shows ergonomic feedback in real time and produces ergonomic evaluations of the whole workload inside a VR environment. This system can be useful for future research in the virtual ergonomics field on matters such as the ergonomic learning rate of workers performing assembly tasks, the design of ergonomic workstations, the effect of different types of assembly instructions in VR, and a wide variety of other applications. The assembly operation, with and without the robot, was created in IPS to use its VR functionality in order to test the assembly task with real users making natural body movements. The posture data of the users performing the tasks in Virtual Reality was collected: the users performed the task first without the collaborative robot and then with it. Their posture data was collected using motion capture equipment called Smart Textiles (developed at the University of Skövde), and the two ergonomic evaluations (using Smart Textiles' criteria) of the two tasks were compared. The results show that when the robot is introduced in this specific assembly task, the posture of the workers (especially the posture of the arms) improves considerably compared to the same task without the robot.
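The Smart Textiles evaluation criteria are not reproduced here; the sketch below assumes a single, made-up criterion (the fraction of time the upper arm is elevated above 45 degrees) and compares it between the two conditions on stand-in posture streams.

```python
import numpy as np

def arm_elevation_risk(arm_angles_deg, threshold=45.0):
    """Toy ergonomic score: fraction of samples with upper-arm elevation above
    the threshold (a crude stand-in for the thesis's Smart Textiles criteria)."""
    arm_angles_deg = np.asarray(arm_angles_deg)
    return float((arm_angles_deg > threshold).mean())

rng = np.random.default_rng(2)
# Stand-in posture streams (degrees of upper-arm elevation, one sample per frame)
without_robot = rng.normal(55, 15, size=2000)   # worker holds the part overhead
with_robot = rng.normal(30, 12, size=2000)      # robot presents the part lower down

print("risk without robot:", arm_elevation_risk(without_robot))
print("risk with robot:   ", arm_elevation_risk(with_robot))
```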
90 |
Formulation et études des problèmes de commande en co-manipulation robotique / Formulation and study of different control problems for co-manipulation tasks
Jlassi, Sarra, 28 November 2013
In this thesis, we address the control problems posed by robotic co-manipulation for handling tasks, from a viewpoint that we believe is not sufficiently explored even though it employs classical robotics tools. The problem of robotic co-manipulation is often addressed using impedance-control-based methods, where the aim is to establish a mathematical relation between the velocity of the human-robot interaction point and the force applied by the human operator at this point. This thesis instead treats co-manipulation for handling tasks as a constrained optimal control problem. The proposed approach relies on a specific online trajectory generator (OTG) combined with a kinematic feedback loop. The OTG is designed to translate the human operator's intentions into ideal trajectories that the robot must follow. It works as an automaton with two states of motion whose transitions are event-driven: the magnitude of the interaction force is compared to an adjustable threshold, so that the operator keeps authority over the robot's states of motion. To ensure a smooth interaction, we generate a velocity profile collinear with the force applied at the interaction point. The feedback control loop is then used to satisfy the requirements of stability and trajectory tracking while guaranteeing assistance and operator safety. Several synthesis methods are applied to design efficient controllers that ensure good tracking of the generated trajectories.
The overall strategy is illustrated through two mechanical systems. The first is the penducobot, an underactuated planar robot with two degrees of freedom. The second is a fully actuated two-arm robot.
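The thesis's OTG and controller syntheses are not reproduced here; the sketch below only illustrates the two-state, force-thresholded generator idea: above an adjustable force threshold the generator advances the desired position along a velocity collinear with the applied force, otherwise it holds position. The gain, threshold and time step are hypothetical.

```python
import numpy as np

class OnlineTrajectoryGenerator:
    """Toy two-state generator: REST <-> MOVE, switched by the force magnitude."""

    def __init__(self, force_threshold=5.0, gain=0.02, dt=0.01):
        self.force_threshold = force_threshold   # N, adjustable authority threshold
        self.gain = gain                         # (m/s) per N, maps force to speed
        self.dt = dt                             # s, control period
        self.state = "REST"

    def step(self, force, position):
        """Return the next desired position for the kinematic tracking loop."""
        magnitude = np.linalg.norm(force)
        self.state = "MOVE" if magnitude > self.force_threshold else "REST"
        if self.state == "REST":
            return position                       # hold still: operator has released
        direction = force / magnitude             # desired velocity collinear with the force
        return position + self.gain * magnitude * direction * self.dt

otg = OnlineTrajectoryGenerator()
pos = np.zeros(3)
for f in [np.array([0.0, 2.0, 0.0]), np.array([0.0, 12.0, 0.0]), np.array([0.0, 20.0, 0.0])]:
    pos = otg.step(f, pos)
    print(otg.state, pos.round(4))
```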