51.
Application of an autonomous humanoid robot by image and voice recognition in interactive pedagogical sessions. Daniel Carnieto Tozadore, 03 March 2016.
Educational Robotics uses robots to put the theoretical concepts discussed in class into practice. However, the most commonly used robots offer limited interaction with users, which can be improved by introducing humanoid robots. This dissertation combines computer vision techniques, social robotics, and speech recognition and synthesis to build an interactive system that leads educational sessions through a humanoid robot. The system can be trained with different content, which the robot then presents to users autonomously. Its first application is as a tool to assist in teaching mathematics to children: the system was trained to interact with children and to recognize 3D geometric shapes. The proposed architecture is modular, with each module responsible for a specific function and containing a group of related features. There are four modules in total: a Central Module, a Dialog Module, a Vision Module and a Motor Module. The chosen robot is the humanoid NAO. For the Vision Module, the LEGION network and the VOCUS2 system were compared for object detection, and SVM and MLP classifiers were compared for image classification. Google Speech Recognition and the NAOqi API voice synthesizer are used for spoken interaction.

An interaction study was also conducted using the Wizard-of-Oz technique to analyze the children's behavior and adapt the methods for better application results. Tests of the complete system showed that small calibrations are sufficient for an interaction session with few errors. Children who experienced a higher degree of interactivity with the robot felt more engaged and comfortable in the interactions, both in the experiments and when studying at home for subsequent sessions, compared to children who experienced a lower level of interactivity. Alternating challenging and encouraging robot behaviors produced better results in the interaction with the children than a constant behavior.
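As an illustration of the Vision Module's classifier comparison, the sketch below trains an SVM and an MLP on stand-in shape features using scikit-learn. The data, feature dimensions and parameters are assumed for illustration and are not from the dissertation.

```python
# Hedged sketch: SVM vs MLP for shape classification on synthetic features
# (e.g. moment-like descriptors); not the dissertation's actual pipeline.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Three assumed classes (cube, sphere, pyramid), 7 descriptor features each.
X = rng.normal(size=(300, 7)) + np.repeat(np.arange(3), 100)[:, None] * 1.5
y = np.repeat(["cube", "sphere", "pyramid"], 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                  ("MLP", MLPClassifier(hidden_layer_sizes=(32,),
                                        max_iter=1000, random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```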
52.
Wie kommt die Robotik zum Sozialen? Epistemische Praktiken der Sozialrobotik (How does robotics become social? Epistemic practices of social robotics). Bischof, Andreas, 01 March 2017.
Numerous research projects, deploying substantial financial and human resources, are working to get robots out of the factory halls and into everyday worlds such as hospitals, kindergartens and private homes. The designers face a non-trivial challenge: they must translate the ambivalences and contingencies of everyday interaction into the discrete language of machines. How they meet this challenge, which patterns and solutions they draw on, and which implications for the use of social robots are thereby built in, is the subject of this book. In search of an answer to what makes robots social, Andreas Bischof visited and ethnographically studied research laboratories and conferences in Europe and North America. The study's key results include a typology of research goals in social robotics, an epistemic genealogy of the idea of the robot in everyday worlds, a reconstruction of how social-robotics development refers to 'real' everyday worlds, and an analysis of three genres of epistemic practices that engineers use to make robots social.
53.
Human-in-the-loop control for cooperative human-robot tasks. Chipalkatty, Rahul, 29 March 2012.
Even with the advance of autonomous robotics and automation, many automated tasks still require human intervention or guidance to mediate uncertainties in the environment or to execute the complexities of a task that autonomous robots are not yet equipped to handle. As such, robot controllers are needed that utilize the strengths of both autonomous agents, adept at handling lower-level control tasks, and humans, superior at handling higher-level cognitive tasks.

To address this need, we develop a control-theoretic framework that incorporates user commands such that user intention is preserved while the automated task is carried out by the controller. The approach is novel in that system-theoretic tools allow for analytic guarantees of feasibility and convergence to goal states, which naturally leads to varying levels of autonomy. We develop a model predictive controller that takes human input, infers human intent, and then applies a control that minimizes deviations from the intended human control while ensuring that the lower-level automated task is completed.
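A minimal sketch of this core idea, under assumed dynamics and weights (not the thesis's actual controller): a model predictive controller tracks the human's commanded input as closely as possible while a terminal constraint guarantees completion of the automated task. The cvxpy formulation below uses a 2D single-integrator robot.

```python
# Hedged MPC sketch: follow the human's command while a terminal constraint
# enforces task completion (reaching a goal). All parameters are assumed.
import numpy as np
import cvxpy as cp

dt, N = 0.1, 20                 # time step, prediction horizon
x0 = np.array([0.0, 0.0])       # current robot position
x_goal = np.array([1.0, 0.5])   # task goal state
u_human = np.array([0.8, 0.0])  # inferred human intent (velocity command)

x = cp.Variable((N + 1, 2))     # predicted positions
u = cp.Variable((N, 2))         # control inputs (velocities)

cost = cp.sum_squares(u[0] - u_human)        # preserve human intent now
cost += 1e-2 * cp.sum_squares(u)             # mild effort regularization
constraints = [x[0] == x0, x[N] == x_goal]   # start + task completion
for k in range(N):
    constraints += [x[k + 1] == x[k] + dt * u[k],    # dynamics
                    cp.norm(u[k], "inf") <= 1.0]     # actuator limits

cp.Problem(cp.Minimize(cost), constraints).solve()
print("applied control:", u.value[0])  # execute first input, then replan
```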
This control framework is then evaluated in a human operator study involving a shared control task with human guidance of a mobile robot for navigation. These theoretical and experimental results lay the foundation for applying this control method to actual human-robot cooperative tasks. Specifically, the control is applied to an Urban Search and Rescue task where shared control of a quadruped rescue robot is needed to ensure static stability during human-guided leg placements on uneven terrain. The framework is also extended to a multiple-user, multiple-agent system in which human operators control multiple agents such that the agents maintain a formation while the operators manipulate the formation's shape. User studies are also conducted to evaluate the control in multiple-operator scenarios.
54.
Haptic interaction between naive participants and mobile manipulators in the context of healthcare. Chen, Tiffany L., 22 May 2014.
Human-scale mobile robots that manipulate objects (mobile manipulators) have the potential to perform a variety of useful roles in healthcare. Many promising roles for robots require physical contact with patients and caregivers, which carries both psychological and physical implications. In this thesis, we used a human factors approach to evaluate system performance and participant responses when potential end users performed a healthcare task involving physical contact with a robot. We performed four human-robot interaction studies with 100 people who were not experts in robotics (naive participants). We show that physical contact between naive participants and human-scale mobile manipulators can be acceptable and effective in a variety of healthcare contexts. We investigated two forms of touch-based (haptic) interaction relevant to healthcare.

First, we studied how participants responded to physical contact initiated by an autonomous robotic nurse. On average, people responded favorably to robot-initiated touch when the robot indicated that it was a necessary part of a healthcare task. However, their responses strongly depended on what they thought the robot's intentions were, which suggests that this will be an important consideration for future healthcare robots.

Second, we investigated the coordination of whole-body motion between human-scale robots and people through forces applied to the robot's hands and arms. Nurses found this haptic interaction intuitive and preferred it over a standard gamepad interface. They also navigated the robot through a cluttered healthcare environment in less time, with fewer collisions, and with less cognitive load via haptic interaction. Through a study with expert dancers, we demonstrated the feasibility of robots as dance-based exercise partners. The experts rated a robot that used only haptic interaction to be a good follower according to subjective measures of dance quality. We also determined that healthy older adults were accepting of using a robot for partner dance-based exercise. On average, they found the robot easy and enjoyable to use and felt that it performed a partnered stepping task well.

The findings in this work have several implications for the design of robots in healthcare. We found that the perceived intent of robot-initiated touch significantly influenced people's responses, and that autonomous robots that initiate touch with patients can be acceptable in some contexts. This result highlights the importance of considering users' psychological responses when designing physical human-robot interactions, in addition to the mechanics of performing tasks. We found that naive users across three user groups could quickly learn to use physical interaction effectively to lead a robot during navigation, positioning, and partnered stepping tasks. These consistent results underscore the value of physical interaction for enabling users of varying backgrounds to lead a robot during whole-body motion coordination across different healthcare contexts.
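The haptic leading described above can be illustrated with a standard admittance-control loop. The sketch below is an assumed, generic implementation (virtual mass and damping values are illustrative, not from the thesis) in which forces measured at the robot's hands are mapped to a commanded base velocity.

```python
# Hedged admittance-control sketch of "leading a robot by the hand":
# one step of M*dv/dt + D*v = f_ext, integrated with forward Euler.
import numpy as np

M = 8.0    # virtual mass [kg]      - higher feels heavier to lead
D = 25.0   # virtual damping [Ns/m] - higher resists sustained pushes
dt = 0.01  # control period [s]
v = np.zeros(2)  # commanded planar velocity of the robot

def admittance_step(f_ext: np.ndarray) -> np.ndarray:
    """Update the commanded velocity from the external force at the hands."""
    global v
    a = (f_ext - D * v) / M
    v = v + a * dt
    return np.clip(v, -0.5, 0.5)  # safety speed limit [m/s]

# Example: a nurse pushes the robot's hands forward with ~10 N for 1 s.
for _ in range(100):
    cmd = admittance_step(np.array([10.0, 0.0]))
print("commanded velocity:", cmd)  # approaches f/D = 0.4 m/s
```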
55.
The development of a human-robot interface for industrial collaborative system. Tang, Gilbert, 04 1900.
Industrial robots have been identified as one of the most effective solutions for optimising output and quality within many industries. However, a number of manufacturing applications involve complex tasks and variable components, which prohibits fully automated solutions for the foreseeable future.

A breakthrough in robotic technologies and changes in safety legislation have supported the creation of robots that coexist with and assist humans in industrial applications. It has been broadly recognised that human-robot collaborative systems would be a realistic solution as an advanced production system with a wide range of applications and high economic impact. This type of system can utilise the best of both worlds: the robot performs simple tasks that require high repeatability, while the human performs tasks that require judgement and manual dexterity. Robots in such systems operate as "intelligent assistants".

In a collaborative working environment, robot and human share the same working area and interact with each other. This level of interface requires effective means of communication and collaboration to avoid unwanted conflicts. This project aims to create a user interface for an industrial collaborative robot system through the integration of current robotic technologies. The robotic system is designed for seamless collaboration with a human in close proximity. The system can communicate with the human via the exchange of gestures, as well as visual signals which operators can observe and comprehend at a glance.
The main objective of this PhD is to develop a Human-Robot Interface (HRI) for communication with an industrial collaborative robot during collaboration in proximity. The system is developed in conjunction with a small-scale collaborative robot system integrated from off-the-shelf components. The system should be capable of receiving input from the human user via an intuitive method as well as indicating its status to the user effectively. The HRI is developed through a combination of hardware integration and software development. The software and the control framework were developed so as to be applicable to other industrial robots in the future. The developed gesture command system is demonstrated on a heavy-duty industrial robot.
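The interface pattern described above can be sketched as a simple gesture-to-command dispatcher with an at-a-glance status light. All gesture names and commands below are hypothetical illustrations, not Tang's actual framework.

```python
# Hedged sketch: recognized hand gestures dispatch to robot commands,
# and a status light gives the operator at-a-glance feedback.
from enum import Enum

class Status(Enum):
    IDLE = "green"
    EXECUTING = "amber"
    ERROR = "red"

COMMANDS = {                       # hypothetical gesture vocabulary
    "open_palm":  "pause_motion",
    "thumbs_up":  "resume_task",
    "point_left": "move_to_station_A",
}

def dispatch(gesture: str) -> tuple[str, Status]:
    """Map a recognized gesture to a robot command and a light colour."""
    command = COMMANDS.get(gesture)
    if command is None:
        return "ignore", Status.ERROR   # unknown gesture: flag, do nothing
    return command, Status.EXECUTING

print(dispatch("open_palm"))   # ('pause_motion', Status.EXECUTING)
print(dispatch("wave"))        # ('ignore', Status.ERROR)
```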
56.
Blinking in human communicative behaviour and its reproduction in artificial agents. Ford, Christopher Colin, January 2014.
A significant year-on-year rise in the creation and sales of personal and domestic robotic systems, and the development of online embodied communicative agents (ECAs), has been matched by an increase in end users from the public domain interacting with these systems. A number of these robotic/ECA systems are defined as social: they are physically designed to resemble the bodily structure of a human and behaviourally designed to exist within human social surroundings. Their behavioural design is especially important with respect to communication, as it is commonly stated that for any social robotic/ECA system to be truly useful within its role, it will need to communicate effectively with its human users. Currently, however, the act of a human user instructing a social robotic/ECA system to perform a task highlights many areas of contention in human communication understanding. Social robotic/ECA systems are commonly embedded with either non-human-like communication interfaces or deficient imitative human communication interfaces, neither of which reaches the level of communicative interaction expected by human users. The resulting communication difficulties create negative associations with the social robotic/ECA system in its users, and lead to a strong requirement for more effective imitative human communication behaviours within these systems.

This thesis presents findings from our research into human non-verbal facial behaviour in communication. The objective of the work was to improve communication grounding between social robotic/ECA systems and their human users through the conceptual design of a computational system of human non-verbal facial behaviour (which in human-human communicative behaviour is shown to carry in the range of 55% of the intended semantic meaning of a transferred message), and the development of a highly accurate computational model of human blink behaviour and a computational model of physiological saccadic eye movement in human-human communication, enriching the human-like properties of the facial non-verbal communicative feedback expressed by the social robotic/ECA system. An enhanced level of interaction would likely be achieved, leading to increased empathic response from the user and an improved chance of a satisfactory communicative conclusion to a user's task requirement instructions.

The initial focus of the work was the capture, transcription and analysis of common human non-verbal facial behavioural traits within human-human communication, linked to the expression of the mental communicative states of understanding, uncertainty, misunderstanding and thought. Facial non-verbal behaviour data were collected and transcribed from twelve participants (six female) through a dialogue-based communicative interaction. A further focus was the analysis of blink co-occurrence with other traits of human-human communicative non-verbal facial behaviour, and the capture of saccadic eye movement at common proxemic distances. From these data, the computational models of human blink behaviour and saccadic eye movement whilst listening and speaking were designed and then implemented within the LightHead social robotic system. Human-based studies on the perception by naïve users of the imitative probabilistic computational blink model running on the LightHead robotic system are presented and the results discussed.
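The blink model itself is probabilistic. As a rough illustration (with assumed, not fitted, parameters), a generator of this kind can draw inter-blink intervals from a log-normal distribution and raise blink probability at communicatively relevant events such as utterance boundaries:

```python
# Hedged sketch of a probabilistic blink generator for a robot face;
# parameters are assumed, not the thesis's fitted model.
import random
import math

MEAN_LOG, SD_LOG = math.log(4.0), 0.6   # ~4 s median interval (assumed)
EVENT_BOOST = 0.5                        # extra blink chance at speech events

def next_blink_interval() -> float:
    """Sample seconds until the next spontaneous blink."""
    return random.lognormvariate(MEAN_LOG, SD_LOG)

def maybe_blink_on_event() -> bool:
    """Event-triggered blink, e.g. at an utterance boundary."""
    return random.random() < EVENT_BOOST

# Simulate one minute of idle listening plus three utterance boundaries.
t, blinks = 0.0, []
while t < 60.0:
    t += next_blink_interval()
    blinks.append(round(t, 1))
event_blinks = [maybe_blink_on_event() for _ in range(3)]
print(f"{len(blinks)} spontaneous blinks, event blinks: {event_blinks}")
```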
The thesis concludes with the impact of the work, along with suggestions for further studies towards the goal of achieving seamless interactive communication between social robotic/ECA systems and their human users.
57.
A study of non-linguistic utterances for social human-robot interaction. Read, Robin, January 2014.
The world of animation has painted an inspiring image of what the robots of the future could be. Taking the robots R2D2 and C3PO from the Star Wars films as representative examples, these robots are portrayed as being more than just machines; rather, they are presented as intelligent and capable social peers, exhibiting many of the traits that people have. These robots have the ability to interact with people, understand us, and even relate to us in very personal ways through a wide repertoire of social cues. As robotic technologies continue to make their way into society at large, there is a growing trend toward making robots social. The field of Human-Robot Interaction concerns itself with studying, developing and realising these socially capable machines, equipping them with a rich variety of capabilities that allow them to interact with people in natural and intuitive ways, ranging from the use of natural language, body language and facial gestures to more unusual channels such as expression through colours and abstract sounds.

This thesis studies the use of abstract, expressive sounds, like those used iconically by the robot R2D2. These are termed Non-Linguistic Utterances (NLUs) and are a means of communication with a rich history in film and animation. However, very little is understood about how such expressive sounds may be utilised by social robots, and how people respond to them. This work presents a series of experiments aimed at understanding how NLUs can be utilised by a social robot to convey affective meaning to people both young and old, and what factors impact the production and perception of NLUs.

Firstly, it is shown that not all robots should use NLUs: the morphology of the robot matters. People perceive NLUs differently across different robots, and not always in a desired manner. Next, it is shown that people readily project affective meaning onto NLUs, though not in a coherent manner. Furthermore, people's affective inferences are not subtle; rather, they are drawn to well-established, basic affect prototypes. Moreover, it is shown that the valence of the situation in which an NLU is made overrides the initial valence of the NLU itself: situational context biases how people perceive utterances made by a robot, and through this, coherence between people in their affective inferences increases. Finally, it is found that NLUs are best not used as a replacement for natural language (as they are by R2D2); rather, people prefer them used alongside natural language, where they can play a supportive role by providing essential social cues.
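As a speculative illustration of how such R2D2-like utterances can be synthesized (not the stimulus generator used in the thesis), the sketch below produces a short sine chirp whose pitch contour is driven by intended valence and arousal, with rising contours assumed to read as positive:

```python
# Hedged NLU synthesis sketch: valence sets the pitch contour direction,
# arousal sets the base pitch. Mapping and parameters are assumed.
import numpy as np
import wave

def synth_nlu(valence: float, arousal: float, path: str,
              sr: int = 22050, dur: float = 0.4) -> None:
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    f0 = 400.0 + 400.0 * arousal             # higher arousal -> higher pitch
    sweep = 300.0 * valence * (t / dur)      # valence sets contour direction
    phase = 2 * np.pi * np.cumsum(f0 + sweep) / sr
    sig = (0.3 * np.sin(phase) * np.hanning(t.size) * 32767).astype(np.int16)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(sr)
        w.writeframes(sig.tobytes())

synth_nlu(valence=+1.0, arousal=0.8, path="happy_beep.wav")  # rising chirp
synth_nlu(valence=-1.0, arousal=0.3, path="sad_beep.wav")    # falling chirp
```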
58.
Robots learning actions and goals from everyday people. Akgun, Baris, 07 January 2016.
Robots are destined to move beyond caged factory floors towards domains where they will interact closely with humans. They will encounter highly varied environments, scenarios and user demands. As a result, programming robots after deployment will be an important requirement. To address this challenge, the field of Learning from Demonstration (LfD) emerged with the vision of programming robots through demonstrations of the desired behavior instead of explicit programming. LfD within robotics has been around for more than 30 years and remains an active research area. However, very little research has been done on the implications of having a non-robotics expert as a teacher. This thesis aims to bridge this gap by developing learning from demonstration algorithms and interaction paradigms that allow non-expert people to teach robots new skills.

The first step of the thesis was to evaluate how non-expert teachers provide demonstrations to robots. Keyframe demonstrations are introduced to the field of LfD to help people teach skills to robots and are compared with traditional trajectory demonstrations. The utility of keyframes is validated by a series of experiments with more than 80 participants. Based on these experiments, a hybrid of trajectory and keyframe demonstrations is proposed to take advantage of both, and a method was developed to learn from trajectories, keyframes and hybrid demonstrations in a unified way.
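A simplified sketch of the keyframe idea (an assumed representation, not the thesis's learner): the teacher marks a few important poses rather than a continuous trajectory, and the robot interpolates a smooth motion through them for execution.

```python
# Hedged keyframe-demonstration sketch for a single joint; times and
# positions are illustrative, e.g. captured when the teacher says "record".
import numpy as np
from scipy.interpolate import CubicSpline

times     = np.array([0.0, 1.0, 2.5, 4.0])   # keyframe timestamps [s]
keyframes = np.array([0.0, 0.8, 0.5, 1.2])   # joint positions [rad]

spline = CubicSpline(times, keyframes)       # smooth path through keyframes
t_exec = np.arange(0.0, 4.0, 0.02)           # 50 Hz execution timeline
trajectory = spline(t_exec)                  # positions streamed to the arm

print("position at t=2.0 s:", round(float(spline(2.0)), 3))
```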
A key insight from these user experiments was that teachers are goal oriented: they concentrate on achieving the goal of the demonstrated skill rather than on providing good-quality demonstrations. Based on this observation, this thesis introduces a method that can learn actions and goals from the same set of demonstrations. The action models are used to execute the skill and the goal models to monitor this execution. A user study with eight participants and two skills showed that successful goal models can be learned from non-expert teacher data even if the resulting action models are not as successful. Following these results, this thesis further develops a self-improvement algorithm that uses the goal monitoring output to improve the action models without further user input. This approach is validated with an expert user and two skills. Finally, this thesis builds an interactive LfD system that incorporates both goal learning and self-improvement and evaluates it with 12 naive users and three skills. The results suggest that teacher feedback during experiments increases skill execution and monitoring success, and that non-expert data can be used as a seed for self-improvement to fix unsuccessful action models.
59.
Facilitating play between children with autism and an autonomous robot. Francois, Dorothee C. M., January 2009.
This thesis is part of the Aurora project, an ongoing long-term project investigating the potential use of robots to help children with autism overcome some of their impairments in social interaction, communication and imagination. Autism is a spectrum disorder, and children with autism have different abilities and needs. Related research has shown that robots can play the role of a mediator for social interaction in the context of autism. Robots can enable simple interactions by initially providing a relatively predictable environment for play; progressively, the complexity of the interaction can be increased. The purpose of this thesis is to facilitate play between children with autism and an autonomous robot. Children with autism have a potential for play but often encounter obstacles to actualising this potential. Through play, children can develop multidisciplinary skills involving social interaction, communication and imagination. Besides, play is a medium for self-expression. The purpose here is to enable children with autism to experience a large range of play situations, ranging from dyadic play with progressively better-balanced interaction styles to situations of triadic play with both the robot and the experimenter. These triadic play situations could also involve symbolic or pretend play.

This PhD work produced the following results:

• A new methodological approach to designing, conducting and analysing robot-assisted play was developed and evaluated. This approach draws inspiration from non-directive play therapy, where the child is the main leader for play and the experimenter participates in the play sessions. I introduced a regulation process which enables the experimenter to intervene under precise conditions in order to: i) prevent the child from entering or staying in repetitive behaviours, ii) provide bootstrapping that helps the child reach a situation of play she is about to enter, and iii) ask the child questions dealing with affect or reasoning about the robot. This method was tested in a long-term study with six children with autism. Video recordings of the play sessions were analysed in detail according to three dimensions, namely Play, Reasoning and Affect. Results showed the ability of this approach to meet each child's specific needs and abilities. Future work may develop this work towards a novel approach in autism therapy.

• A novel and generic computational method for the automatic recognition of human-robot interaction styles (specifically gentleness and frequency of touch interaction) in real time was developed and tested experimentally. This method, the Cascaded Information Bottleneck Method, is based on an information-theoretic approach. It relies on the principle that the relevant information in a time series can be progressively extracted with a cascade of successive bottlenecks that share the same cardinality of bottleneck states but are trained successively. The method was tested with data generated with a physical robot a) during human-robot interactions in laboratory conditions and b) during child-robot interactions in school. It shows sound recognition of both short-term and mid-term time-scale events, with only a very short delay in the recognition process. The Cascaded Information Bottleneck is a generic method that can potentially be applied to various applications of socially interactive robots.

• A proof-of-concept system of an adaptive robot was demonstrated that is responsive to different styles of interaction in human-robot interaction. Its impact was evaluated in a short-term study with seven children with autism. The recognition process relies on the Cascaded Information Bottleneck Method, and the robot rewards well-balanced interaction styles. The study shows the potential of the adaptive robot i) to encourage children to engage more in the interaction and ii) to positively influence the children's play styles towards better-balanced interaction styles. It is hoped that this work is a step towards socially adaptive robots as well as robot-assisted play for children with autism.
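As a loose structural analogue of the cascade idea (not the actual Cascaded Information Bottleneck; k-means stands in for its information-theoretic training), the sketch below compresses each stage's window features together with the previous bottleneck state into a small discrete state space, training the stages successively along the time axis:

```python
# Hedged structural analogue: stage t compresses (previous state, current
# window features) into one of K discrete states; stages trained in order.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
N_SEQ, T, WIN, K = 40, 6, 25, 4   # sequences, stages, window length, states

# Fake touch data: half the children interact gently, half roughly (assumed).
labels = np.repeat([0, 1], N_SEQ // 2)
data = np.array([rng.normal(1 + 4 * y, 1.0, (T, WIN)) for y in labels])

states = np.zeros(N_SEQ)                   # bottleneck state entering stage 0
stages = []
for t in range(T):                         # train the cascade successively
    feats = np.column_stack([data[:, t].mean(1), data[:, t].std(1), states])
    km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(feats)
    stages.append(km)
    states = km.labels_.astype(float)      # passed on to stage t + 1

# After training, the final states separate interaction styles reasonably:
print("final states, gentle group:", states[labels == 0][:5].astype(int))
print("final states, rough group: ", states[labels == 1][:5].astype(int))
```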
60.
The design space for robot appearance and behaviour for social robot companions. Walters, Michael L., January 2008.
To facilitate necessary task-based interactions and to avoid annoying or upsetting people, a domestic robot will have to exhibit appropriate non-verbal social behaviour. Most current robots have the ability to sense and control the distance of people and objects in their vicinity. An understanding of human-robot proxemics and associated non-verbal social behaviour is crucial if humans are to accept robots as domestic companions or servants. Therefore, this thesis addressed the following hypothesis: attributes of robot appearance, behaviour, task context and situation will affect the distances that people find comfortable between themselves and a robot.

Initial exploratory Human-Robot Interaction (HRI) experiments replicated human-human studies of comfortable approach distances, with a mechanoid robot in place of one of the human interactors. It was found that most human participants respected the robot's interpersonal space, and there were systematic differences in participants' comfortable approach distances to robots with different voice styles. It was proposed that greater initial comfortable approach distances to the robot were due to perceived inconsistencies between the robot's overall appearance and voice style. To investigate these issues further, it was necessary to develop HRI experimental set-ups, a novel Video-based HRI (VHRI) trial methodology, trial data collection methods and analytical methodologies. An exploratory VHRI trial then investigated human perceptions and preferences for robot appearance and non-verbal social behaviour. The methodological approach highlighted the holistic and embodied nature of robot appearance and behaviour. Findings indicated that people tend to rate a particular behaviour less favourably when the behaviour is not consistent with the robot's appearance.

A live HRI experiment finally confirmed and extended these previous findings: multiple factors significantly affected participants' preferences for robot-to-human approach distances. There was a significant general tendency for participants to prefer either a tall humanoid robot or a short mechanoid robot, and it was suggested that this may be due to participants' internal or demographic factors. Participants' preferences for robot height and appearance both had significant effects on their preferred live robot-to-human comfortable approach distances, irrespective of the robot type they actually encountered. The thesis confirms, for mechanoid and humanoid robots, results previously found in the domain of human-computer interaction (cf. Reeves & Nass (1996)): people seem to automatically treat interactive artefacts socially.

An original empirical human-robot proxemic framework is proposed, in which the experimental findings from the study can be unified in the wider context of human-robot proxemics. This is seen as a necessary first step towards the desired end goal of creating and implementing a working robot proxemic system which allows the robot to: a) exhibit socially acceptable spatial behaviour when interacting with humans, and b) interpret and gain additional valuable insight into a range of HRI situations from the relative proxemic behaviour of humans in the immediate area. Future work concludes the thesis.
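The envisaged proxemic system can be caricatured in a few lines: a comfortable approach distance is computed from a baseline modulated by robot and user attributes, and the robot stops approaching once inside it. All factors and coefficients below are assumed for illustration, not fitted values from the thesis.

```python
# Hedged proxemic-controller sketch; baseline and coefficients are assumed.
BASELINE = 1.2  # m, roughly the social-zone boundary (assumed)

def comfortable_distance(robot_height_m: float, humanoid: bool,
                         user_likes_robots: bool) -> float:
    d = BASELINE
    d += 0.2 * (robot_height_m - 1.2)       # taller robots kept further away
    d -= 0.1 if humanoid else 0.0           # consistent appearance reassures
    d -= 0.2 if user_likes_robots else 0.0  # user disposition matters
    return max(d, 0.5)                      # never closer than intimate zone

def approach_speed(current_dist: float, comfort: float) -> float:
    """Slow down near the comfort boundary; stop inside it."""
    return max(0.0, min(0.3, 0.5 * (current_dist - comfort)))

print(comfortable_distance(1.4, humanoid=True, user_likes_robots=False))
print(approach_speed(2.0, 1.14))   # still approaching
print(approach_speed(1.1, 1.14))   # stopped
```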