271 |
Human-Inspired Robot Task Teaching and Learning. Wu, Xianghai, 28 October 2009.
Current methods of robot task teaching and learning have several limitations: highly trained personnel are usually required to teach robots specific tasks; service-robot systems are limited in learning different types of tasks with the same system; and the teacher’s expertise in the task is not well exploited. A human-inspired robot-task teaching and learning method is developed in this research with the aim of allowing general users to teach different object-manipulation tasks to a service robot, which will be able to adapt its learned tasks to new task setups.
The proposed method was developed to be interactive and intuitive to the user. In a closed loop with the robot, the user can intuitively teach the tasks, track the learning states of the robot, direct the robot’s attention to perceive task-related key state changes, and give timely feedback while the robot is practicing the task; in turn, the robot can reveal its learning progress and refine its knowledge based on the user’s feedback.
The human-inspired method consists of six teaching and learning stages: 1) checking and teaching the background knowledge the robot needs; 2) introducing the overall task to be taught: the hierarchical task structure, the involved objects, and the robot hand actions; 3) teaching the task step by step and directing the robot to perceive important state changes; 4) demonstrating the task as a whole and offering vocal subtask-segmentation cues at subtask transitions; 5) robot learning of the taught task, using a flexible vote-based algorithm to segment the demonstrated task trajectories, a probabilistic optimization process to assign the obtained trajectory episodes (segments) to the introduced subtasks, and generalization of the taught trajectories in different reference frames; and 6) robot practicing of the learned task and refinement of its task knowledge according to the teacher’s timely feedback, where adaptation of the learned task to new task setups is achieved by blending the task trajectories generated from the pertinent frames.
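The frame-based generalization and trajectory blending mentioned in stages 5 and 6 can be pictured with a minimal sketch. Everything below (the choice of an object frame and a target frame, the linear blending weights, and all function names) is an assumption made for illustration; the thesis's actual algorithm is not reproduced here.

```python
import numpy as np

def to_world(traj_local, pose):
    """Express a frame-relative trajectory in world coordinates."""
    R, t = pose                      # R: (3, 3) rotation, t: (3,) translation
    return traj_local @ R.T + t

def blend_trajectories(traj_in_object, traj_in_target, object_pose, target_pose):
    """Blend two frame-relative copies of a taught trajectory for a new setup.

    Both trajectories have shape (T, 3) and come from the same demonstration,
    one generalized relative to the manipulated object, one relative to the
    placement target. The blend weight shifts linearly from the object frame
    (start of the motion, grasping) to the target frame (end, releasing).
    """
    world_obj = to_world(traj_in_object, object_pose)
    world_tgt = to_world(traj_in_target, target_pose)
    T = world_obj.shape[0]
    w = np.linspace(1.0, 0.0, T)[:, None]   # weight given to the object frame
    return w * world_obj + (1.0 - w) * world_tgt

# Example with dummy poses standing in for a new task setup
if __name__ == "__main__":
    T = 50
    taught = np.linspace([0.0, 0.0, 0.2], [0.3, 0.0, 0.0], T)  # toy taught path
    identity = (np.eye(3), np.zeros(3))
    shifted = (np.eye(3), np.array([0.5, 0.1, 0.0]))
    blended = blend_trajectories(taught, taught, identity, shifted)
    print(blended.shape)   # (50, 3)
```

The idea is simply that the copy of the trajectory expressed relative to the object dominates near the grasp, while the copy expressed relative to the target dominates near the release.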
An agent-based architecture was designed and developed to implement this robot-task teaching and learning method. This system has an interactive human-robot teaching interface subsystem, which is composed of: a) a three-camera stereo vision system to track user hand motion; b) a stereo-camera vision system mounted on the robot end-effector to allow the robot to explore its workspace and identify objects of interest; and c) a speech recognition and text-to-speech system, utilized for the main human-robot interaction.
A user study involving ten human subjects was performed with two tasks to evaluate the system, based on the time the subjects spent on each teaching stage, efficiency measures of the robot’s understanding of users’ vocal requests, responses, and feedback, and the subjects’ own evaluations. Another set of experiments analyzed the ability of the robot to adapt its previously learned tasks to new task setups, using measures such as object, target, and robot starting-point poses; alignments of objects on targets; and actual robot grasp and release poses relative to the related objects and targets. The results indicate that the system enabled the subjects to naturally and effectively teach the tasks to the robot and to give timely feedback on the robot’s practice performance. The robot learned the tasks as expected, properly refined its task knowledge based on the teacher’s feedback, and successfully applied the refined knowledge in subsequent practice runs. It was also able to adapt its learned tasks to new task setups that differed considerably from those in the demonstration: the alignments of objects on the target were close to those taught, and the executed grasping and releasing poses relative to objects and targets were almost identical to the taught poses. The robot’s task-learning ability was limited by the vision-based human-robot teleoperation interface used in hand-to-hand teaching and by the robot’s capacity to sense its workspace. Future work will investigate robot learning of a wider variety of tasks and the use of more in-built robot primitive skills.
|
273 |
Human-Robot Interaction for Semi-Autonomous Assistive Robots: Empirical Studies and an Interaction Concept for Supporting Elderly People at Home. Mast, Marcus, January 2014.
The research addresses current shortcomings of autonomous service robots operating in domestic environments by considering the concept of a semi-autonomous robot that would be supported by human remote operators whenever the robot cannot handle a task autonomously. The main research objective was to investigate how to design the human-robot interaction for a robotic system to assist elderly people with physical tasks at home according to this conceptual idea. The research procedure followed the principles of human-centered design and is structured into four phases. In the first phase, the context of use of the system to be designed was determined. A focus group study yielded characteristics and attitudes of several potential user groups. A survey determined the demands of elderly people and informal caregivers for services a semi-autonomous assistive robot may provide. An ethnographic study investigated the living conditions of elderly people and determined technical challenges for robots operating in this type of environment. Another ethnographic study investigated the work environment in teleassistive service centers and determined the feasibility of extending their range of services to incorporate robotic teleassistance. In the second phase, two studies were carried out to understand the interaction requirements. The first study determined common types of failure of current autonomous robots and the human interventions required to resolve such failure states. The second study investigated how the human assistance could be provided, considering a range of potential interaction devices. In the third phase, a human-robot interaction concept with three user groups and dedicated user interfaces was designed. The concept and user interfaces were refined in an iterative process based on the results of evaluations with prospective users and received encouraging results for user satisfaction and user experience. In the fourth and final phase, the utility of two specific user interface features was investigated experimentally. The first experiment investigated the utility of providing remote operators with global 3D environment maps during robot navigation and identified beneficial usage scenarios. The second experiment investigated the utility of stereoscopic display for remote manipulation and robot navigation. Results suggested temporal advantages under stereoscopic display for one of three investigated task types and potential advantages for the other two.
|
274 |
Towards mixed-initiative human-robot interaction: a cooperative human-drone team framework. Ubaldino de Souza, Paulo Eduardo, 19 October 2017.
Human-robot interaction is a field that is still in its infancy. Developments have focused on autonomy and artificial intelligence, providing robots with advanced capabilities to perform complex tasks. In the near future, robots will likely develop the ability to adapt and learn from their surroundings. Robots are reliable, do not get bored, and can operate in hostile and dynamic environments, all attributes well suited to space exploration and to emergency or military situations. They also reduce mission costs, increase design flexibility, and maximize data production. However, when faced with new scenarios and unexpected events, robots pale in comparison with intuitive and creative (but also fallible and biased) human beings. The future will require mission designers to intelligently balance the flexibility and ingenuity of humans with robust and sophisticated robotic systems.
This research work proposes a game-theoretic framework for a drone team that must coordinate its actions and provide the human operator with sufficient data to make the "hard" decisions that maximize mission efficiency, according to certain operational guidelines. Our first contribution was to present a decentralized framework and utility function for a drone-team patrolling mission. We then considered the framing effect in the context of our study, in order to better understand and model certain human decision-making processes under uncertainty. Two experiments were conducted, with 20 and 12 participants respectively. Our findings revealed that the way the problem was presented (positive or negative framing), the emotional commitment, and the text colors statistically affected the choices made by the human operators. The experimental data allowed us to develop a utility model for the human operator, which we sought to integrate into the decision-making loop of the human-robot system. Finally, we formalized and evaluated the whole proposed framework, closing the loop, in a final online experiment with 101 participants. Our results suggest that our approach makes it possible to optimize the human-robot system in a context where decisions must be made in an uncertain environment.
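As a rough picture of what a patrolling utility might look like, the sketch below scores waypoints by accumulated idleness minus travel cost and lets each drone choose greedily. The weights, the idleness term, and every name are invented for illustration and do not reproduce the thesis's decentralized, game-theoretic formulation.

```python
import math

def patrol_utility(idleness, distance, w_idle=1.0, w_dist=0.5):
    """Toy utility of visiting a waypoint: reward accumulated idleness,
    penalize travel distance. Weights are illustrative only."""
    return w_idle * idleness - w_dist * distance

def choose_waypoint(drone_pos, waypoints, idleness):
    """Greedy decentralized choice: each drone independently picks the
    waypoint with the highest utility from its own position."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    scores = {w: patrol_utility(idleness[w], dist(drone_pos, pos))
              for w, pos in waypoints.items()}
    return max(scores, key=scores.get)

# Example: two waypoints, one long-unvisited but farther away
waypoints = {"A": (0.0, 5.0), "B": (10.0, 0.0)}
idleness = {"A": 3.0, "B": 12.0}          # time since last visit
print(choose_waypoint((0.0, 0.0), waypoints, idleness))   # prints "B"
```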
|
275 |
Design of Graphical Robot User Interfaces: A Study of Usability and Human-Machine Interaction. Julia Ramos Campana, 09 July 2018.
Nowadays, constant technological advances, and consequently graphical user interfaces, have become more and more present in human-machine interaction. However, in a context where intelligent systems, such as robotic systems, are already a reality, there are still gaps to be filled when it comes to integrating robots into custom and complex activities. This research focuses on the usability analysis of user interfaces designed specifically for interaction with remote robots, also known as Robot User Interfaces (RUIs). When well executed, such interfaces allow operators to remotely perform tasks in complex environments. To that end, our hypothesis is that, if RUIs are conceived considering the specificities of these interaction models, operational failures will be reduced. The main goal of this research was to evaluate specific guidelines for robotic systems and understand their relevance to usability. As a theoretical basis, the existing models of interaction with robots and autonomous systems were surveyed, as well as the design principles that apply to these models. After the literature review, we conducted contextual interviews with users of robotic systems, together with usability tests that reproduced, in interfaces with and without RUI guidelines, the interaction processes involved in completing tasks. The final results of the applied techniques supported the hypothesis, as the systems developed with interfaces specific to the context of interaction with robots provided better usability and mitigated the occurrence of an array of human errors.
|
276 |
How are robots created that are accepted by the older generation? A study on robots in elderly care. Sidiropoulou Coster, Sofia; Donnerberg, Isabelle, January 2017.
The development of robotics in elderly care is moving forward quickly, and there is a wide range of robots for different settings. By 2050, the world's older population is expected to double. Robots that can assist in elderly care have therefore attracted political interest, since demographic trends point to a growing proportion of elderly people. This essay investigates which factors make elderly people accept the use of robots and how experts in robotics work to develop such robots. The study is based on data collected from both elderly people aged 65 and over and experts in robotics, and a number of scientific articles form the basis of the essay. The results show that the factors important for older people's acceptance of robots fall under design, knowledge, safety, privacy, perceived ease of use, and perceived usefulness. The study also shows that experts currently take good account of the elderly in their work.
|
277 |
Learning Continuous Human-Robot Interactions from Human-Human Demonstrations. Vogt, David, 02 March 2018.
This dissertation develops a data-driven method for machine learning of human-robot interactions from human-human demonstrations. During a training phase, the movements of two interaction partners are recorded with motion capture and learned in a two-person interaction model. At runtime, the model is used both to recognize the movements of the human interaction partner and to generate adapted robot movements. The performance of the approach is evaluated in three complex applications, each requiring continuous motion coordination between human and robot. The result of the dissertation is a learning method that enables intuitive, goal-directed, and safe collaboration with robots.
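One deliberately simplified way to picture how a learned two-person model could drive the robot at runtime is a nearest-neighbour lookup over paired motion-capture frames: find the demonstration frame closest to the observed human posture and return the second demonstrator's posture for the robot to imitate. The class, the data layout, and the lookup below are assumptions for illustration, not the dissertation's actual model.

```python
import numpy as np

class InteractionModel:
    """Toy two-person interaction model built from paired posture frames of a
    human-human demonstration. Given a new observation of the human partner,
    return the matching posture of the second demonstrator (to be mapped
    onto the robot)."""

    def __init__(self, human_frames, partner_frames):
        # human_frames, partner_frames: (N, D) arrays of joint positions,
        # recorded at the same time steps during the demonstration
        self.human_frames = np.asarray(human_frames)
        self.partner_frames = np.asarray(partner_frames)

    def respond(self, observed_human_frame):
        """Nearest-neighbour lookup in the demonstrated human motion."""
        diffs = self.human_frames - np.asarray(observed_human_frame)
        idx = np.argmin(np.einsum("nd,nd->n", diffs, diffs))
        return self.partner_frames[idx]

# Example with random stand-in motion-capture data
rng = np.random.default_rng(0)
model = InteractionModel(rng.normal(size=(200, 15)), rng.normal(size=(200, 15)))
robot_pose = model.respond(rng.normal(size=15))
print(robot_pose.shape)   # (15,)
```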
|
278 |
Materiality and the Construction of Intersubjectivity: Human-Robot Interaction in Typical Development and the Use of Objects in ASD Children. Manzi, Federico, 04 December 2018.
The aims of the present dissertation are to analyse, in children with typical and atypical development, the construction of intersubjectivity: through the use of objects in atypical development (ASD) and through interaction with a humanoid robot in typical development. With regard to humanoid robots, the issue is to observe the possible attribution of human-like features to a robot, which could make human-robot interaction similar to human-human interaction, a classic intersubjective interaction because of the presence of two human subjects. This question is investigated within two theoretical frameworks: 1) Theory of Mind, which explains the construction of intersubjectivity through the attribution of mental states to others; and 2) the socio-material perspective, which postulates that the construction of intersubjectivity is in many cases mediated by objects. Thus, the question this dissertation raises about the humanoid robot is: what happens when the mediating object (the robot) is also the interactive partner?
To achieve these aims, the present thesis studies two main topics: 1) the interactional patterns with, and the attribution of a mind to, a robot by typically developing children; 2) the interactional patterns in atypical development (autistic children) in child-adult interaction mediated by objects. The first topic is analysed from an innovative theoretical perspective and through a novel methodological approach, leading to a new understanding of child-robot interaction. The theoretical perspective connects the role of Theory of Mind to HRI; the methodological approach observes children’s decision-making strategies in HRI, comparing these behavioural patterns when children interact with a human and with a robot. The second topic concerns the role of the object as mediator of the relationship between autistic children and adults. It is studied through the socio-material perspective, which hypothesises that the material features of objects and their intrinsic social qualities are closely connected.
|
279 |
Implementation of Social Practices on the Pepper Robot in the Elderly Care Domain: AI Planning with Social Practices. Mokkapati, Siva, January 2021.
Social practices are accepted ways of doing things: contextual, materially mediated, shared between actors, and routinized over time. These practices are interactions and can be used to define the rule protocols that drive human-robot interactions. We follow such practices in our day-to-day life in almost all interactions with other people, and they also allow computer scientists to model human-computer interactions. Human interactions are highly dynamic, and so are social practices: although people may deviate from pre-specified patterns, they usually expect the general flow of a social practice to be followed and adapted to the specific circumstances. In this work we apply social practices in a human-robot interaction setup by choosing a scenario well suited to their definition, in our case a simple scenario in elderly care. More specifically, we show how social practices influence AI planning on a Pepper robot in an elderly care setting. Given a specific representation of social practices, we show how the elements of a social practice are used in the planning process, and then briefly how the resulting plan is executed on the Pepper robot in the scenario. We also briefly touch on the hardware aspects of the Pepper robot’s API and the inclusion of external APIs to improve the results. We design and implement the social practices on the Pepper robot, conduct experiments, and present the results.
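A minimal, hypothetical sketch of how a social practice might be represented and turned into an action sequence for the robot is given below; the field names, the example practice, and the crude landmark-style filter are all invented for illustration and do not reproduce the thesis's actual representation or planner.

```python
# Hypothetical social-practice representation for an elderly-care scenario.
morning_medication = {
    "name": "morning medication reminder",
    "roles": {"robot": "caregiver assistant", "resident": "elderly person"},
    "context": {"time_of_day": "morning", "location": "living room"},
    "expected_sequence": [
        ("greet", "resident"),
        ("ask_about_wellbeing", "resident"),
        ("remind", "medication"),
        ("confirm", "medication_taken"),
        ("say_goodbye", "resident"),
    ],
    "norms": ["speak slowly", "wait for an answer before continuing"],
}

def plan_from_practice(practice, current_state):
    """Keep only the steps of the expected sequence that have not yet been
    carried out (a crude landmark-style filter over the practice)."""
    done = current_state.get("completed_actions", set())
    return [step for step in practice["expected_sequence"] if step[0] not in done]

# The resident was already greeted, so the plan starts at the wellbeing check.
state = {"completed_actions": {"greet"}}
for action, target in plan_from_practice(morning_medication, state):
    print(f"{action} -> {target}")
```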
|
280 |
Robot Gaze Behaviour for Handling Confrontational Scenarios. Gorgis, Paul, January 2021.
In everyday communication, humans rely on eye gaze as an important communication tool. As technology evolves, social robots are expected to become more widely adopted in society and, since they interact with humans, they should similarly use eye gaze to elevate the level of the interaction and improve humans’ perception of them. Previous studies have shown that robots with human-like gaze behaviour increase interactants’ task performance and their perception of the robot. However, social robots must also be able to behave and respond appropriately when humans act inappropriately; failing to do so may normalize bad behaviour, even towards other humans. Additionally, with recent progress in wearable eye-tracking technology, there is interest in how this technology can be used to transfer human gaze behaviour to a robot. This thesis investigates how the eye gaze behaviour of a human being can be modelled in the Furhat robot so that it behaves in a more human-like way in a confrontational scenario. It further investigates how a robot using the developed human-like gaze model compares to a robot using a believable heuristic gaze model. We created a pipeline that involved selecting scenarios, conducting role-plays of these scenarios between actors to collect gaze data, extracting and processing that data, and extracting the probability distributions the human-like model would use. The model used frequencies to determine gaze targets and head rotation, while gamma distributions were used to sample gaze durations. We then ran an online video study with the two robot conditions in which participants rated one of the robots by filling out a questionnaire. The results show that, while no statistically significant difference could be found, the human-like condition scored higher on anthropomorphism/human-likeness and composure, whereas the heuristic condition scored higher on expertise and extroversion. As such, the human-like model did not yield a benefit in robot perception large enough to favour it. Still, we suggest that the pipeline used in this thesis may help HRI researchers perform gaze studies and possibly build a foundation for further development.
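The sampling scheme described above (gaze targets drawn from observed frequencies, gaze durations drawn from fitted gamma distributions) can be pictured with a small sketch; the target categories and all parameter values below are made up for illustration and are not the values extracted in the thesis.

```python
import random

# Hypothetical numbers standing in for the frequencies and gamma parameters
# extracted from the role-play gaze data.
gaze_target_freq = {"interlocutor_face": 0.55, "interlocutor_body": 0.15,
                    "object_of_dispute": 0.20, "away": 0.10}
gamma_params = {"interlocutor_face": (2.0, 0.8),   # (shape, scale) in seconds
                "interlocutor_body": (1.5, 0.5),
                "object_of_dispute": (1.8, 0.6),
                "away": (1.2, 0.4)}

def sample_gaze_event(rng=random):
    """Pick where to look from the empirical frequencies, then sample how
    long to hold that gaze from the target's gamma distribution."""
    targets, weights = zip(*gaze_target_freq.items())
    target = rng.choices(targets, weights=weights, k=1)[0]
    shape, scale = gamma_params[target]
    duration = rng.gammavariate(shape, scale)
    return target, duration

for _ in range(3):
    target, duration = sample_gaze_event()
    print(f"look at {target} for {duration:.2f} s")
```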
|