1 |
Developing an engagement and social interaction model for a robotic educational agent
Brown, LaVonda N., 07 January 2016
Effective educational agents should accomplish four essential goals during a student's learning process: 1) monitor engagement, 2) re-engage when appropriate, 3) teach novel tasks, and 4) improve retention. In this dissertation, we focus on all of these objectives through the use of a teaching device (computer, tablet, or virtual reality game) and a robotic educational agent. We begin by developing and validating an engagement model based on the interactions between the student and the teaching device. This model uses time, performance, and/or eye gaze to determine the student's level of engagement. We then create a framework for implementing verbal and nonverbal, or gestural, behaviors on a humanoid robot and evaluate its perception and effectiveness for social interaction. These verbal and nonverbal behaviors are applied throughout the learning scenario to re-engage the students when the engagement model deems it necessary. Finally, we describe and validate the entire educational system that uses the engagement model to activate the behavioral strategies embedded in the robot when learning a new task. We then follow up this study to evaluate student retention when using this system. The outcome of this research is the development of an educational system that effectively monitors student engagement, applies behavioral strategies, teaches novel tasks, and improves student retention to achieve individualized learning.
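The abstract above describes an engagement model built from time, performance, and eye gaze that triggers re-engagement behaviors. The following minimal Python sketch illustrates how such cues could be combined into a score and a trigger; the weights, thresholds, and field names are illustrative assumptions, not values from the dissertation.

```python
# Hypothetical rule-based engagement monitor combining the three cues named in
# the abstract (time, performance, eye gaze). Weights and thresholds are
# illustrative assumptions, not values from the dissertation.
from dataclasses import dataclass

@dataclass
class InteractionSample:
    response_time_s: float   # time taken to answer the current exercise
    correct: bool            # whether the answer was correct
    gaze_on_task: float      # fraction of the window spent looking at the task

def engagement_score(sample: InteractionSample,
                     expected_time_s: float = 10.0) -> float:
    """Return an engagement estimate in [0, 1]; higher means more engaged."""
    time_factor = min(1.0, expected_time_s / max(sample.response_time_s, 1e-3))
    performance_factor = 1.0 if sample.correct else 0.5
    return 0.4 * time_factor + 0.2 * performance_factor + 0.4 * sample.gaze_on_task

def needs_reengagement(score: float, threshold: float = 0.55) -> bool:
    """Trigger the robot's verbal/gestural behaviors when engagement drops."""
    return score < threshold

if __name__ == "__main__":
    sample = InteractionSample(response_time_s=25.0, correct=False, gaze_on_task=0.3)
    s = engagement_score(sample)
    print(f"engagement={s:.2f}, re-engage={needs_reengagement(s)}")
```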
|
2 |
Aperfeiçoamento de uma arquitetura para robótica social / Improvement of an architecture for social robotics
Silva, Renato Ramos da, 07 February 2013
One important aspect of human interaction is shared attention. It is a communication process in which one person redirects his or her attention to an object or event and another person, or other people, follow that gaze to the same place. The process ends with the person following the attention pointing at the object and commenting on the situation. We learn this important skill during childhood, and some roboticists are now trying to develop robotic architectures that let robots learn it as well. To that end, the Robot Learning Laboratory has been working on a robotic architecture composed of three modules: stimulus perception, consequence control, and response emission. This architecture was evaluated controlling a robotic head and proved able to learn to follow gaze and to identify some objects. However, all of these modules have limitations. To improve the interaction between robot and human and reduce the effects of these limitations, several improvements were developed: a new head-pose classification algorithm based on histograms of oriented gradients, new capabilities (defined as reflexes) added to the consequence-control module, and new learning algorithms for selecting the best action. All of these modifications reduced the limitations and can improve interactions between a robot and a human being.
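One of the improvements above is a head-pose classifier based on histograms of oriented gradients (HOG). The sketch below illustrates that general technique only, not the thesis implementation: HOG descriptors of face crops feed a linear SVM, and the discrete pose labels and training data are synthetic placeholders.

```python
# HOG-based head-pose classification sketch using scikit-image and scikit-learn.
# Pose labels and training data are synthetic placeholders.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

POSES = ["left", "front", "right"]  # assumed discrete head orientations

def hog_descriptor(gray_face: np.ndarray) -> np.ndarray:
    """Histogram of oriented gradients for a 64x64 grayscale face crop."""
    return hog(gray_face, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Synthetic stand-in for a labelled set of face crops.
rng = np.random.default_rng(0)
faces = rng.random((30, 64, 64))
labels = rng.integers(0, len(POSES), size=30)

X = np.stack([hog_descriptor(f) for f in faces])
clf = LinearSVC().fit(X, labels)

new_face = rng.random((64, 64))
print("predicted pose:", POSES[int(clf.predict([hog_descriptor(new_face)])[0])])
```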
|
4 |
Contributions to 3D data processing and social robotics
Escalona, Félix, 30 September 2021
In this thesis, a study of artificial intelligence applied to 3D data and social robotics is carried out. The first part of the document is dedicated to 3D object recognition. Object recognition consists of the automatic detection and categorisation of the objects that appear in a scene. This capability is an important need for social robots, as it allows them to understand and interact with their environment. Image-based methods have been widely studied with great results, but they rely only on visual features and can confuse different objects with similar appearances (for example, a real object and a picture depicting it), so 3D data can help to improve these systems with topological features. For this part, we present several novel techniques that use pure 3D data. The second part of the thesis is about mapping the environment, that is, constructing a map that a robot can use to locate itself. This capability enables robots to carry out more elaborate navigation strategies, which a social robot can exploit to interact with the different rooms of a house and their objects. In this part, we explore 2D and 3D maps and their refinement with object recognition. Finally, the third part of this work is about social robotics, which focuses on serving people through caring interaction rather than performing a mechanical task. The previous parts relate to two main capabilities of a social robot, and this final part contains a survey of this kind of robot and of other projects that explore further aspects of them.
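One point in this abstract is that purely image-based recognition can mistake a picture of an object for the object itself, while 3D data resolves the ambiguity. The snippet below is an illustrative sketch of that idea only, not a method from the thesis: a crude flatness test on a point cloud separates a nearly planar "depicted" object from a physically extended one; the descriptor and threshold are assumptions.

```python
# Why 3D helps: a flat picture of an object and the real object have similar
# image features but very different depth extent. Illustrative sketch only.
import numpy as np

def shape_descriptor(points: np.ndarray) -> np.ndarray:
    """Simple global descriptor: extents of the axis-aligned bounding box."""
    return points.max(axis=0) - points.min(axis=0)

def is_flat(points: np.ndarray, thickness_threshold: float = 0.02) -> bool:
    """A 'picture of an object' is nearly planar; a real object is not."""
    return shape_descriptor(points).min() < thickness_threshold

rng = np.random.default_rng(1)
real_mug = rng.random((500, 3)) * np.array([0.08, 0.08, 0.10])   # ~8x8x10 cm
poster   = rng.random((500, 3)) * np.array([0.30, 0.40, 0.005])  # almost flat

print("real mug flat?", is_flat(real_mug))   # False -> treat as physical object
print("poster flat?  ", is_flat(poster))     # True  -> likely a depicted object
```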
|
5 |
Teaching robots social autonomy from in situ human supervision
Senft, Emmanuel, January 2018
Traditionally, the behaviour of social robots has been programmed. Increasingly, however, the focus has shifted to letting robots learn their behaviour to some extent from example or through trial and error. On the one hand this removes the need for programming, and on the other it allows the robot to adapt to circumstances not foreseen at the time of programming. One such occasion is when the user wants to tailor or fully specify the robot's behaviour. The engineer often has limited knowledge of what the user wants or what the deployment circumstances specifically require. The user, by contrast, does know what is expected from the robot, and consequently the social robot should be equipped with a mechanism to learn from its user. This work explores how a social robot can learn to interact meaningfully with people in an efficient and safe way by learning from the supervision of a human teacher in control of the robot's behaviour. To this end we propose a new machine learning framework called Supervised Progressively Autonomous Robot Competencies (SPARC). SPARC enables non-technical users to control and teach a robot, and we evaluate its effectiveness in Human-Robot Interaction (HRI). The core idea is that the user initially operates the robot remotely, while an algorithm associates actions with states and gradually learns. Over time, the robot takes over control from the user while still giving the user oversight of its behaviour, by ensuring that every action executed by the robot has been actively or passively approved by the user. This is particularly important in HRI, as interacting with people, and especially with vulnerable users, is a complex and multidimensional problem, and any errors by the robot may have negative consequences for the people involved in the interaction. Through the development and evaluation of SPARC, this work contributes to both HRI and Interactive Machine Learning, especially regarding how autonomous agents such as social robots can learn from people and how this specific teacher-robot interaction impacts the learning process. We showed that a supervised robot learning from its user can reduce that person's workload, and that providing the user with the opportunity to control the robot's behaviour substantially improves the teaching process. Finally, this work also demonstrated that a robot supervised by a user could learn rich social behaviours in the real world, in a large, multidimensional, multimodal and sensitive environment: the robot quickly learned (over 25 interactions across 4 sessions lasting on average 1.9 minutes) to tutor children in an educational game, achieving behaviours and educational outcomes similar to those of a robot fully controlled by the user, with both providing a 10 to 30% improvement in game metrics compared to a passive robot.
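The abstract describes the core SPARC loop: the robot proposes an action for the current state, the supervising user actively or passively approves (or corrects) it before execution, and the state-action mapping is learned from that feedback. The sketch below is a hedged illustration of that loop under simplifying assumptions, namely a nearest-neighbour policy and a toy feedback interface; it is not the thesis implementation.

```python
# Hedged sketch of a SPARC-style suggest/approve/learn loop.
import numpy as np

class SparcPolicy:
    def __init__(self):
        self.states: list[np.ndarray] = []
        self.actions: list[str] = []

    def suggest(self, state: np.ndarray, default: str = "wait") -> str:
        """Propose the action taken in the most similar previously seen state."""
        if not self.states:
            return default
        dists = [np.linalg.norm(state - s) for s in self.states]
        return self.actions[int(np.argmin(dists))]

    def update(self, state: np.ndarray, executed_action: str) -> None:
        """Every executed action was user-approved, so it is safe to learn from."""
        self.states.append(state)
        self.actions.append(executed_action)

def sparc_step(policy: SparcPolicy, state: np.ndarray, user_feedback) -> str:
    proposal = policy.suggest(state)
    # user_feedback(proposal) returns the approved action (possibly a
    # correction) or None to cancel; only approved actions are executed.
    approved = user_feedback(proposal)
    if approved is not None:
        policy.update(state, approved)
    return approved

if __name__ == "__main__":
    policy = SparcPolicy()
    # Simulated supervisor: corrects the first proposal, then approves passively.
    feedbacks = iter([lambda p: "encourage", lambda p: p])
    for state in [np.array([0.2, 0.8]), np.array([0.25, 0.75])]:
        print(sparc_step(policy, state, next(feedbacks)))
```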
|
6 |
The impact of social expectation towards robots on human-robot interactions
Syrdal, Dag Sverre, January 2018
This work is presented in defence of the thesis that it is possible to measure the social expectations and perceptions that humans have of robots in an explicit and succinct manner, and that these measures are related to how humans interact with, and evaluate, these robots. There are many ways of understanding how humans may respond to, or reason about, robots as social actors, but the approach adopted within this body of work focused on interaction-specific expectations rather than expectations regarding the true nature of the robot. These expectations were investigated using a questionnaire-based tool, the University of Hertfordshire Social Roles Questionnaire, which was developed as part of the work presented in this thesis and tested on a sample of 400 visitors to an exhibition in the Science Gallery in Dublin. This study suggested that responses to the questionnaire loaded on two main dimensions: one related to the degree of social equality the participants expected their interactions with the robots to have, and the other related to the degree of control they expected to exert upon the robots within the interaction. A single item, related to pet-like interactions, loaded on both and was considered a separate, third dimension. The questionnaire was deployed as part of a proxemics study, which found that the degree to which participants accepted particular proxemic behaviours was correlated with their initial social expectations of the robot. If participants expected the robot to be more of a social equal, they preferred the robot to approach from the front, while participants who viewed the robot more as a tool preferred it to approach from a less obtrusive angle. The questionnaire was also deployed in two long-term studies. In the first study, which involved one interaction a week over a period of two months, participants' social expectations of the robots prior to the beginning of the study impacted not only how they evaluated open-ended interactions with the robots throughout the two-month period, but also how they collaborated with the robots in task-oriented interactions. In the second study, participants interacted with the robots twice a week over a period of six weeks. This study replicated the findings of the first, in that initial expectations impacted evaluations of interactions throughout the long-term study. In addition, this study used the questionnaire to measure post-interaction perceptions of the robots in terms of social expectations. The results suggest that while initial social expectations of robots impact how participants evaluate the robots in terms of interactional outcomes, social perceptions of robots are more closely related to the social/affective experience of the interaction.
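For readers unfamiliar with the kind of dimensional analysis this abstract refers to, where questionnaire items "load" on a social-equality dimension and a control dimension, here is a hedged, synthetic illustration using scikit-learn's factor analysis; the items, data, and two-factor choice are invented for demonstration and are not the actual questionnaire data.

```python
# Synthetic illustration of questionnaire items loading on two latent factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_respondents = 400

# Two latent dimensions generate six observed Likert-style items (three each).
equality = rng.normal(size=(n_respondents, 1))
control = rng.normal(size=(n_respondents, 1))
items = np.hstack([
    equality + 0.3 * rng.normal(size=(n_respondents, 3)),  # equality-related items
    control + 0.3 * rng.normal(size=(n_respondents, 3)),   # control-related items
])

fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
print("item loadings per factor:")
print(np.round(fa.components_, 2))  # rows: factors, columns: questionnaire items
```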
|
7 |
Ambiente para interação baseada em reconhecimento de emoções por análise de expressões faciais / Environment based on emotion recognition for human-robot interaction
Ranieri, Caetano Mazzoni, 09 August 2016
In computer science, the study of emotions has been driven by the development of interactive environments, especially in the context of mobile devices. Research on human-robot interaction has explored emotions to provide natural interaction experiences with social robots. One aspect to be investigated is that of practical approaches exploiting changes in the personality of an artificial system caused by changes in the user's inferred emotional state. This work proposes an environment for emotion-based human-robot interaction on the Android platform, with emotions recognized by a dedicated facial expression analysis module. The system consists of a virtual agent embedded in an application, which uses information from the emotion recognizer to adapt its interaction strategy, alternating between two pre-defined discrete paradigms. In the experiments performed, the proposed approach tended to produce more empathy than a control condition; however, this result was observed only in sufficiently long interactions.
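The system above alternates between two pre-defined discrete interaction paradigms based on the recognized emotion. The following sketch shows what such a switching rule could look like; the emotion labels, paradigm names, and switching logic are hypothetical and not taken from the thesis.

```python
# Hypothetical rule for switching interaction paradigms on recognized emotions.
NEGATIVE = {"sadness", "anger", "fear", "disgust"}

def choose_paradigm(emotion: str, current: str) -> str:
    """Switch to a supportive paradigm on negative emotions, else adapt or keep."""
    if emotion.lower() in NEGATIVE:
        return "supportive"   # empathetic phrasing, slower pace, encouragement
    if emotion.lower() == "happiness":
        return "challenging"  # faster pace, harder content
    return current            # neutral/ambiguous emotions keep the current strategy

if __name__ == "__main__":
    paradigm = "challenging"
    for detected in ["neutral", "sadness", "happiness"]:
        paradigm = choose_paradigm(detected, paradigm)
        print(detected, "->", paradigm)
```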
|
8 |
Interactions entre processus émotionnels et cognitifs : une étude en robotique mobile et sociale neuromimétique / Interactions between emotional and cognitive processes: a study in neuromimetic mobile and social robotics
Belkaid, Marwen, 06 December 2016
The purpose of my thesis is to study interactions between cognitive and emotional processes through the lens of neuromimetic robotics. The proposed models are implemented as artificial neural networks and embodied in robotic platforms, that is, embodied and situated systems. In general, the interest is twofold: 1) taking inspiration from biological solutions to design systems that interact better with their physical and social environments, and 2) providing computational models as a means to better understand biological cognition and emotion. The first part of the dissertation addresses spatial navigation as a framework for studying biological and artificial cognition. In Chapter 1, I present a brief overview of the literature on biologically inspired navigation. Two issues are then tackled more specifically. In Chapter 2, visual place recognition is addressed in the case of outdoor navigation; I propose a model based on the notions of visual context and global precedence that combines local and holistic visual information. In Chapter 3, I consider the interactive learning of navigation tasks through non-verbal human-robot communication based on low-level visuomotor signals. The second part of the dissertation addresses the central question of emotion-cognition interactions. In Chapter 4, I give an overview of research on emotion as a cross-disciplinary endeavour, including psychological theories, neuroscientific findings and computational models. In Chapter 5, I propose a conceptual model of emotion-cognition interactions, and various instantiations of this model are then presented. In Chapter 6, I model the perception of the peripersonal space when modulated by emotionally valenced sensory and physiological signals. Last, in Chapter 7, I introduce the concept of Emotional Metacontrol as an example of emotion-cognition interaction: emotional signals elicited by self-assessment are used to modulate computational processes, such as attention and action selection, for the purpose of behaviour regulation. A key idea in this thesis is that, in autonomous systems, emotion and cognition cannot be separated. Indeed, it is now well accepted that emotion is closely related to cognition, in particular through the modulation of various computational processes. This raises the question of whether this modulation occurs at the level of sensory processing or at the level of action selection. In this thesis, I advocate the idea that artificial emotion must be integrated in robotic architectures through bidirectional influences with sensory, attentional, decisional and motor processes. This work attempts to highlight how this approach to internal emotional processes could foster efficient interaction with the physical and social environment.
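As a hedged illustration of the Emotional Metacontrol idea described above, in which a self-assessment signal modulates a computational process such as action selection, the sketch below lets a crude "frustration" estimate raise the exploration temperature of a softmax action-selection rule; the signal, mapping, and constants are illustrative assumptions, not the thesis model.

```python
# Self-assessment signal modulating softmax action selection (illustrative).
import numpy as np

def frustration(progress_history: list[float]) -> float:
    """Crude self-assessment: little recent progress -> high frustration in [0, 1]."""
    recent = np.mean(progress_history[-5:]) if progress_history else 0.0
    return float(np.clip(1.0 - recent, 0.0, 1.0))

def select_action(action_values: np.ndarray, frustration_level: float,
                  rng: np.random.Generator) -> int:
    """Higher frustration raises the softmax temperature, favouring exploration."""
    temperature = 0.1 + 2.0 * frustration_level
    logits = action_values / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(action_values), p=probs))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    values = np.array([1.0, 0.8, 0.2])
    print("calm choice:      ", select_action(values, frustration([1.0, 0.9]), rng))
    print("frustrated choice:", select_action(values, frustration([0.0, 0.0]), rng))
```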
|
9 |
An integrative framework of time-varying affective robotic behavior
Moshkina, Lilia V., 04 April 2011
As robots become more and more prevalent in our everyday life, making sure that our interactions with them are natural and satisfactory is of paramount importance. Given the propensity of humans to treat machines as social actors, and the integral role affect plays in human life, providing robots with affective responses is a step towards making our interaction with them more intuitive. To the end of promoting more natural, satisfying and effective human-robot interaction and enhancing robotic behavior in general, an integrative framework of time-varying affective robotic behavior was designed and implemented on a humanoid robot. This psychologically inspired framework (TAME) encompasses 4 different yet interrelated affective phenomena: personality Traits, affective Attitudes, Moods and Emotions. Traits determine consistent patterns of behavior across situations and environments and are generally time-invariant; attitudes are long-lasting and reflect likes or dislikes towards particular objects, persons, or situations; moods are subtle and relatively short in duration, biasing behavior according to favorable or unfavorable conditions; and emotions provide a fast yet short-lived response to environmental contingencies. The software architecture incorporating the TAME framework was designed as a stand-alone process to promote platform-independence and applicability to other domains.
In this dissertation, the effectiveness of affective robotic behavior was explored and evaluated in a number of human-robot interaction studies with over 100 participants. In one of these studies, the impact of negative mood and the emotion of fear was assessed in a mock-up search-and-rescue scenario, where participants found the robot expressing affect more compelling, sincere, convincing and "conscious" than its non-affective counterpart. Another study showed that different robotic personalities are better suited to different tasks: an extraverted robot was found to be more welcoming and fun as a museum guide, where an engaging and gregarious demeanor was expected, whereas an introverted robot was rated as more appropriate for a problem-solving task requiring concentration. To conclude, multi-faceted robotic affect can have far-reaching practical benefits for human-robot interaction, from making people feel more welcome where gregariousness is expected, to providing unobtrusive partners for problem-solving tasks, to saving people's lives in dangerous situations.
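As a hedged sketch of the different time scales TAME distinguishes (time-invariant traits, long-lasting attitudes, slowly decaying moods, fast-decaying emotions), here is a toy data structure with per-layer update and decay rates; the constants and update rules are illustrative assumptions, not the dissertation's actual model.

```python
# Toy affective state with TAME-style time scales (assumed constants).
from dataclasses import dataclass, field

@dataclass
class AffectiveState:
    traits: dict = field(default_factory=lambda: {"extraversion": 0.7})  # time-invariant
    attitude: float = 0.0   # long-lasting like/dislike of the current user
    mood: float = 0.0       # slow bias, favorable vs. unfavorable conditions
    emotion: float = 0.0    # fast, short-lived response to events

    def on_event(self, valence: float) -> None:
        """An environmental event updates each layer in proportion to its time scale."""
        self.emotion += valence
        self.mood += 0.1 * valence
        self.attitude += 0.01 * valence

    def decay(self, dt: float) -> None:
        self.emotion *= 0.5 ** (dt / 2.0)     # half-life of a few seconds
        self.mood *= 0.5 ** (dt / 300.0)      # half-life of minutes
        # traits and attitudes are left untouched between events

if __name__ == "__main__":
    state = AffectiveState()
    state.on_event(valence=-1.0)   # e.g. an alarming stimulus
    for t in range(3):
        state.decay(dt=2.0)
        print(f"t={2 * (t + 1)}s emotion={state.emotion:.2f} mood={state.mood:.2f}")
```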
|
10 |
Navigation de robot avec conscience sociale : entre l'évaluation des risques et celle des conventions sociales / Socially-Aware Robot Navigation: combining Risk Assessment and Social Conventions
Rios Martinez, Jorge, 08 January 2013
This thesis proposes a risk-based navigation method including both the traditional notion of risk of collision and the notion of risk of disturbance. With the growing demand for personal mobility assistance and mobile service robotics, robots and people must share the same physical spaces and follow the same social conventions. Robots must respect proximity constraints and also respect people who are interacting. For example, they should not break up an interaction between people talking, unless the robot's task is to take part in the conversation; in that case, it must be able to join the group using socially adapted behaviour. The socially-aware navigation system proposed in this thesis integrates both an assessment of the risk of collision, using predictive models of moving obstacles, and an assessment of accordance with social conventions. Human management of space (personal space, o-space, activity space, etc.), as described in the sociology and social robotics literature, is integrated, along with behaviour models that enable the robot to make medium-term predictions of human positions. Simulation and experimental results on a robotic wheelchair validate the method by showing that our robot is able to navigate in a dynamic environment, avoiding collisions with obstacles and people while minimizing the discomfort of nearby people by respecting the spaces mentioned above.
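The navigation method above combines a collision-risk term with a social-disturbance term based on models of human space management such as the personal space. The sketch below illustrates one way to combine such terms, with a Gaussian personal-space cost; the parameters and weights are assumptions, not the thesis's exact models.

```python
# Candidate positions scored by collision risk plus a Gaussian personal-space
# discomfort term (illustrative parameters and weights).
import numpy as np

def personal_space_cost(pos, person_pos, sigma=0.45):
    """Discomfort grows as the robot enters a person's personal space (~45 cm)."""
    d2 = np.sum((np.asarray(pos) - np.asarray(person_pos)) ** 2)
    return float(np.exp(-d2 / (2.0 * sigma ** 2)))

def collision_cost(pos, obstacle_pos, robot_radius=0.3, obstacle_radius=0.3):
    d = np.linalg.norm(np.asarray(pos) - np.asarray(obstacle_pos))
    return 1.0 if d < robot_radius + obstacle_radius else 0.0

def navigation_cost(pos, people, obstacles, w_social=1.0, w_collision=10.0):
    social = sum(personal_space_cost(pos, p) for p in people)
    collision = sum(collision_cost(pos, o) for o in obstacles)
    return w_collision * collision + w_social * social

if __name__ == "__main__":
    people = [(2.0, 0.0)]
    obstacles = [(1.0, 1.0)]
    for candidate in [(1.8, 0.1), (2.0, 1.5), (1.0, 1.2)]:
        print(candidate, "->", round(navigation_cost(candidate, people, obstacles), 3))
```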
|