1.
Cognitive-Developmental Learning for a Humanoid Robot: A Caregiver's Gift
Arsenio, Artur Miguel, 26 September 2004
The goal of this work is to build a cognitive system for the humanoid robot Cog that exploits human caregivers as catalysts to perceive and learn about actions, objects, scenes, people, and the robot itself. This thesis addresses a broad spectrum of machine learning problems across several categorization levels. Actions by embodied agents are used to automatically generate training data for the learning mechanisms, so that the robot develops categorization autonomously. Taking inspiration from the human brain, a framework of algorithms and methodologies was implemented to emulate different cognitive capabilities on the humanoid robot Cog. This framework is applied to a collection of AI, computer vision, and signal processing problems. Cognitive capabilities of the humanoid robot are created developmentally, starting from infant-like abilities for detecting, segmenting, and recognizing percepts over multiple sensing modalities. Human caregivers provide a helping hand in communicating such information to the robot: their actions create meaningful events (by changing the world in which the robot is situated), inducing the "compliant perception" of objects from these human-robot interactions. Self-exploration of the world extends the robot's knowledge of object properties.

This thesis argues for enculturating humanoid robots, using infant development as a metaphor for building a humanoid robot's cognitive abilities. A human caregiver redesigns a humanoid's brain by teaching the humanoid robot as she would teach a child, using children's learning aids such as books, drawing boards, or other cognitive artifacts. Multi-modal object properties are learned using these tools and inserted into several recognition schemes, which are then applied to developmentally acquire new object representations. The humanoid robot therefore sees the world through the caregiver's eyes.

Building an artificial humanoid robot's brain, even at an infant's cognitive level, has been a long quest that still lies largely in the realm of imagination. Our efforts toward this task are developed according to two alternative and complementary views: cognitive and developmental.
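As a hedged illustration of the kind of low-level machinery such "meaningful events" could rest on (our own numpy-only sketch, not the thesis's pipeline), a caregiver's action can be detected as accumulated motion between consecutive frames:

```python
import numpy as np

def motion_mask(prev_frame, frame, threshold=25.0):
    """Binary mask of pixels whose grayscale intensity changed noticeably
    between two consecutive frames."""
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    return diff > threshold

def caregiver_event(frames, min_energy=5000):
    """Crude 'meaningful event' detector: fire when the total motion energy
    (count of changed pixels) over a short window exceeds a threshold,
    e.g., when a caregiver waves an object in front of the camera."""
    energy = sum(int(motion_mask(a, b).sum()) for a, b in zip(frames, frames[1:]))
    return energy >= min_energy

# Toy usage: a bright patch sweeping across an otherwise static scene.
frames = [np.zeros((120, 160), dtype=np.uint8) for _ in range(4)]
for i, f in enumerate(frames):
    f[40:80, 30 * i:30 * i + 60] = 255
print(caregiver_event(frames))  # True: the sweep generates sustained motion
```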
2.
Modeling humans as peers and supervisors in computing systems through runtime models
Zhong, Christopher, January 1900
Doctor of Philosophy / Department of Computing and Information Sciences / Scott A. DeLoach

There is a growing demand for more effective integration of humans and computing systems, specifically in multiagent and multirobot systems. There are two aspects to consider in human integration: (1) the ability to control an arbitrary number of robots (particularly heterogeneous robots), and (2) integrating humans as peers in computing systems rather than as mere users or supervisors.
With traditional supervisory control of multirobot systems, the number of robots that a human can manage effectively is between four and six [17]. A key limitation of traditional supervisory control is that the human must interact individually with each robot, which caps the number of robots a human can control effectively. In this work, I define the concept of "organizational control," together with an autonomous mechanism that performs task allocation and other low-level housekeeping duties, significantly reducing the need for the human to interact with individual robots.
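The abstract does not specify how organizational control allocates tasks; as a hypothetical sketch of the kind of low-level housekeeping such a mechanism could automate, here is a minimal greedy allocator that matches tasks to idle robots by capability:

```python
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    capabilities: set[str]
    busy: bool = False

def allocate(tasks: list[tuple[str, str]], robots: list[Robot]) -> dict[str, str]:
    """Greedily assign each (task_id, required_capability) pair to the first
    idle robot that has the capability. Returns task_id -> robot name.
    Unassignable tasks are left out (a real system would queue or escalate)."""
    assignment: dict[str, str] = {}
    for task_id, capability in tasks:
        for robot in robots:
            if not robot.busy and capability in robot.capabilities:
                robot.busy = True
                assignment[task_id] = robot.name
                break
    return assignment

# The human states goals; the mechanism handles per-robot assignment.
robots = [Robot("r1", {"lift"}), Robot("r2", {"scout", "lift"})]
print(allocate([("t1", "scout"), ("t2", "lift")], robots))  # {'t1': 'r2', 't2': 'r1'}
```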
Humans are versatile and robust in the types of tasks they can accomplish. Failures in computing systems, by contrast, are common, so redundancies are included to reduce the chance of failure; when all redundancies have failed, the system fails and can no longer accomplish its tasks. One way to further reduce the chance of system failure is to integrate humans as peer "agents" in the computing system. As part of the system, humans can be assigned tasks that would otherwise have been impossible to complete due to failures.
3.
Robotic System Design for Reshaping Estimated Human Intention in Human-Robot Interactions
Durdu, Akif, 01 October 2012
This thesis outlines the methodology and experiments associated with reshaping human intention through robot movements in Human-Robot Interaction (HRI). Although estimating human intentions is a well-studied research area, reshaping intentions through interaction is a new and significant branch of the field. In this thesis, we analyze how previously estimated human intentions change as a person acts and cooperates with mobile robots in a real human-robot environment. Our approach uses Observable Operator Models (OOMs) and Hidden Markov Models (HMMs) in an intelligent mobile robotic system that consists of two levels: the low level tracks the human, while the high level guides the mobile robots into moves that aim to change the intentions of individuals in the environment. At the low level, the postures and locations of the human are monitored using image processing methods. The high level uses learned OOM or HMM models to estimate human intention, together with a decision-making system that reshapes the previously estimated intention. To our knowledge, this thesis is the first to apply OOMs to human-robot interaction. The two-level system is tested on video frames taken from a real human-robot environment, and the proposed approaches are compared by how effectively they reshape the detected intentions.
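Since the abstract leaves the estimation step at a high level, here is a minimal, numpy-only sketch of the HMM side of such a pipeline: one pre-trained HMM per candidate intention, with the most likely intention chosen by sequence likelihood. All labels and parameters are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs) under a discrete HMM with
    initial probabilities pi, transition matrix A, and emission matrix
    B[state, symbol]."""
    alpha = pi * B[:, obs[0]]
    log_prob = 0.0
    for symbol in obs[1:]:
        scale = alpha.sum()
        log_prob += np.log(scale)
        alpha = (alpha / scale) @ A * B[:, symbol]
    return log_prob + np.log(alpha.sum())

def estimate_intention(obs, models):
    """Return the intention label whose HMM best explains the observed
    sequence of discretized posture/location symbols."""
    return max(models, key=lambda label: log_likelihood(obs, *models[label]))

# Toy usage with two hypothetical intentions and 2-state, 2-symbol models.
models = {
    "approach": (np.array([0.8, 0.2]),
                 np.array([[0.9, 0.1], [0.2, 0.8]]),
                 np.array([[0.7, 0.3], [0.1, 0.9]])),
    "avoid":    (np.array([0.2, 0.8]),
                 np.array([[0.6, 0.4], [0.4, 0.6]]),
                 np.array([[0.2, 0.8], [0.9, 0.1]])),
}
print(estimate_intention([0, 0, 1, 0], models))
```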
4.
From Images and Sounds to Face Localization and Tracking: A Switching Dynamical Bayesian Framework
Drouard, Vincent, 18 December 2017
In this thesis, we address the well-known problem of head-pose estimation in the context of human-robot interaction (HRI). We accomplish this task with a two-step approach. First, we focus on estimating the head pose from visual features. We design features that represent the face under different orientations and various image resolutions, yielding a high-dimensional representation of a face from an RGB image. Inspired by [Deleforge 15], we propose to solve the head-pose estimation problem by building a link between the head-pose parameters and the high-dimensional features perceived by a camera. This link is learned as a high-to-low-dimensional probabilistic regression built from a probabilistic mixture of affine transformations. With respect to classic head-pose estimation methods, we extend the head-pose parameters with additional variables that account for variety in the observations (e.g., misaligned face bounding boxes), obtaining a method that is robust under realistic conditions. Evaluations show that our approach achieves better results than classic regression methods, and results similar to state-of-the-art head-pose methods that use additional cues (e.g., depth information). Second, we propose a temporal model that uses a tracker's ability to combine information from both the present and the past. Our aim is to produce a smoother estimate over time and to correct oscillations between consecutive independent observations. The proposed approach embeds the regression above into a temporal filtering framework. This extension belongs to the family of switching dynamic models and keeps all the advantages of the mixture of affine regressions. Overall, the proposed tracker gives a more accurate and smoother estimate of the head pose over a video sequence, and the switching dynamic model gives better results than standard tracking models such as the Kalman filter. While applied here to head-pose estimation, the methodology presented in this thesis is quite general and can be used to solve various regression and tracking problems; for example, we applied it to tracking a sound source in an image.
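Since the regression is only described at a high level here, the following is a hedged, numpy-only sketch of a mixture-of-affine-regressions prediction: E[y | x] as a responsibility-weighted sum of affine maps with Gaussian gating. The diagonal covariances and all parameters are illustrative assumptions, not the thesis's trained model.

```python
import numpy as np

def predict_pose(x, priors, means, covs_diag, A, b):
    """Mixture-of-affine-regressions forward prediction:
    E[y | x] = sum_k p(k | x) * (A_k @ x + b_k),
    with Gaussian gating p(k | x) proportional to prior_k * N(x; mean_k, diag(cov_k))."""
    K = len(priors)
    log_w = np.empty(K)
    for k in range(K):
        diff = x - means[k]
        log_w[k] = (np.log(priors[k])
                    - 0.5 * np.sum(np.log(2 * np.pi * covs_diag[k]))
                    - 0.5 * np.sum(diff ** 2 / covs_diag[k]))
    w = np.exp(log_w - log_w.max())  # softmax in log space for stability
    w /= w.sum()
    return sum(w[k] * (A[k] @ x + b[k]) for k in range(K))

# Toy usage: two components in a 2-D feature space mapping to a 1-D "pose".
priors = [0.5, 0.5]
means = [np.zeros(2), np.ones(2) * 3]
covs_diag = [np.ones(2), np.ones(2)]
A = [np.array([[1.0, 0.0]]), np.array([[0.0, -1.0]])]
b = [np.zeros(1), np.array([2.0])]
print(predict_pose(np.array([0.1, 0.2]), priors, means, covs_diag, A, b))
```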
5.
Is Perceived Intentionality of a Virtual Robot Influenced by the Kinematics?
Sasser, Jordan, 01 January 2019
Research on human-human interaction has shown that kinematic information makes competitive and cooperative intentions perceivable, and suggests the existence of a cooperation bias. The present study asks the same question for human-robot interaction by investigating the relationship between the acceleration of a virtual robot in a virtual reality environment and the participant's perception of the situation as cooperative or competitive, attempting to identify the social cues underlying those perceptions. Each participant experienced five trial types (mirrored acceleration, faster acceleration, slower acceleration, varied acceleration with a loss, and varied acceleration with a win), randomized within two groups of five for a total of ten events. Results suggest that when the virtual robot's acceleration pattern was faster than the participant's, the situation was perceived as more competitive. While slower acceleration was perceived as more cooperative, that condition was not significantly different from mirrored acceleration. These results may indicate that faster accelerations carry kinematic information that invokes stronger competitive perceptions, whereas slower and mirrored accelerations may blend together in perception; furthermore, neither the slower-acceleration nor the mirrored conditions yielded a single identifiable contributor to perceived cooperativeness, possibly because of a shared cooperation bias. These findings serve as a baseline for understanding movements that can inform the design of better social robot motion, improving interactions between humans and robots and ultimately the robot's ability to help.
6.
Synchrony and Interpersonal Coordination in Human-Robot Interaction
Hasnain, Syed Khursheed, 10 July 2014
As robots move closer to our social and daily lives, issues of agency and social behavior become more important. Despite noticeable advances in Human-Robot Interaction (HRI), the developed technologies have two major drawbacks: (i) HRI is highly demanding, and (ii) humans have to adapt their way of thinking to the potential and limitations of the robot. HRI therefore induces a significant cognitive load, which calls into question the acceptability of future robots. Consequently, we address the question of understanding and mastering the development of pleasant yet efficient human-robot interactions that increase the self-esteem, engagement (or pleasure), and efficacy of the human when interacting with the machine.

In this race for more user-friendly HRI systems (robotic companions, intelligent objects, etc.), working on technical features (the design of appearance and superficial traits of behavior) can contribute partial solutions for punctual or short-term interactions. For instance, a major focus of interest has been the expressiveness and appearance of robots and avatars. Yet these approaches have neglected the importance of understanding the dynamics of interaction. In our opinion, intuitive communication refers to the robot's ability to detect the crucial signals of the interaction and use them to adapt its own dynamics to the other's behavior. This central issue depends heavily on the robot's capability to sense the human world and interact with it in a way that emulates human-human interaction.

In early communication among humans, synchrony was found to be a fundamental mechanism relying on very low-level sensory-motor networks, inducing the synchronization of inter-individual neural populations from sensory flows (vision, audition, or touch). Synchrony is caused by the interaction but also sustains the interaction itself in a circular way, as promoted by the enaction approach. Consequently, to become a partner in a working-together scenario, the machine can attain a minimal level of autonomy and adaptation by predicting the rhythmic structure of the interaction and building reinforcement signals from it to adapt the robot's behavior, so that it can maintain the human's interest in longer-term interactions.

More precisely, aiming for more "intuitive" and "natural" HRI, we took advantage of recent discoveries in low-level human interactions and studied unintentional synchronization during rhythmic human-robot interactions. We argue that exploiting the natural stability and adaptability of unintentional synchronization and rhythmic activities in human-human interactions can solve several of the acceptability problems of HRI and invites a rethinking of current approaches to designing them.
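The abstract does not name a specific synchrony measure; as an illustrative sketch under that caveat, one common way to quantify unintentional synchronization between two rhythmic signals is the phase-locking value over analytic phases (using scipy.signal.hilbert):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x: np.ndarray, y: np.ndarray) -> float:
    """Phase-locking value between two rhythmic signals (e.g., human arm
    velocity and robot joint velocity). 1.0 means perfectly locked phases;
    values near 0.0 mean no consistent phase relation."""
    phase_x = np.angle(hilbert(x - x.mean()))
    phase_y = np.angle(hilbert(y - y.mean()))
    return float(np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))))

# Toy usage: two noisy oscillators with a fixed phase lag are highly locked.
t = np.linspace(0, 10, 1000)
human = np.sin(2 * np.pi * 1.0 * t) + 0.1 * np.random.randn(t.size)
robot = np.sin(2 * np.pi * 1.0 * t - 0.5) + 0.1 * np.random.randn(t.size)
print(phase_locking_value(human, robot))  # close to 1.0
```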
7.
How Language Imposes Structure on Meaning: Construal and Narrative
Mealier, Anne-Laure, 12 December 2016
This thesis takes place in the context of the European project WYSIWYD (What You Say Is What You Did), whose goal is to provide transparency in human-robot interaction, including through language. The deployment of companion and service robots requires that humans and robots understand each other and communicate. Humans have developed an advanced coding of their behavior that provides the basis for the transparency of most of their actions and of their communication. Until now, robots have not shared this code of behavior and so cannot explain their own actions to humans. We know that in spoken language there is a direct mapping between language and meaning that allows a listener to focus attention on a specific aspect of an event. This is particularly true in language production. Moreover, visual perception allows the extraction of the "who did what to whom" aspects in the understanding of social events. However, in human interaction, other important aspects cannot be determined from the visual image alone. The exchange of an object can be interpreted from the perspective of the giver or of the taker. This introduces the notion of construal, that is, how a person interprets the world and perceives a particular situation. Events are related in time, but there are also causal and intentional connections that cannot be seen from a purely visual standpoint: an agent performs an action because he knows that this action satisfies another person's need, which may not be directly visible in the visual scene. Language allows this to be made explicit: "He gave you the book because you like it."

The first question we address in this work is how language can be used to represent these construals, that is, how a speaker chooses one grammatical construction over another depending on their focus of attention. In response, we developed a system in which a mental model represents an action event. This model is determined by the correspondence between two abstract vectors: the force vector exerted by the action and the result vector corresponding to the effect of the applied force. An attentional process selects one of the two vectors, thus generating the construal of the event.

The second question we consider is how constructions of narrative discourse can be learned with a narrative discourse model. This model builds on existing neural networks for sentence production and comprehension, which we enrich with additional structures to represent a discourse context. We also present how this model can be integrated into an overall cognitive system to understand and generate new narrative discourse constructions with a similar structure but different arguments. For each of the works mentioned above, we show how these theoretical models are integrated into the development platform of the iCub humanoid robot. This thesis thus explores two main mechanisms for enriching the meaning of events through language. The work sits between computational neuroscience, with the development of neural network models of comprehension and production of narrative discourse, and cognitive linguistics, where understanding and explaining meaning as a function of attention is crucial.
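The force-vector/result-vector account lends itself to a deliberately toy sketch (our own hypothetical encoding, not code from the thesis): an attentional process selects either the force or the result vector, yielding different construals of the same physical event.

```python
import numpy as np

def construe(force: np.ndarray, result: np.ndarray, attention: str) -> str:
    """Toy construal selection: attention picks either the force vector
    (action-oriented construal, e.g. 'the giver gave') or the result vector
    (effect-oriented construal, e.g. 'the taker received')."""
    if attention == "force":
        vec, frame = force, "agent-oriented construal (focus on the action)"
    elif attention == "result":
        vec, frame = result, "patient-oriented construal (focus on the effect)"
    else:
        raise ValueError("attention must be 'force' or 'result'")
    return f"{frame}, magnitude {np.linalg.norm(vec):.2f}"

# One physical event, two construals depending on where attention is placed.
force = np.array([1.0, 0.2])    # push exerted by the giver
result = np.array([0.9, 0.1])   # displacement of the object toward the taker
print(construe(force, result, "force"))
print(construe(force, result, "result"))
```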