
Gender differences in navigation dialogues with computer systems

Koulouri, Theodora January 2013 (has links)
Gender is among the most influential factors underlying differences in spatial abilities, human communication, and interaction with and through computers. Past research has offered important insights into gender differences in navigation and language use. Yet, given the multidimensionality of these domains, many issues remain contentious while others remain unexplored. Moreover, because this research derives from non-interactive, and often artificial, studies, its generalisability to interactive contexts of use, particularly in the practical domain of Human-Computer Interaction (HCI), may be problematic. At the same time, little is known about how gender strategies, behaviours and preferences interact with the features of technology in various domains of HCI, including collaborative systems and systems with natural language interfaces. Targeting these knowledge gaps, the thesis addresses the central question of how gender differences emerge and operate in spatial navigation dialogues with computer systems. To this end, an empirical study is undertaken in which mixed-gender and same-gender pairs communicate to complete an urban navigation task, with one of the participants under the impression that he or she is interacting with a robot. Performance and dialogue data were collected using a custom system that supported synchronous navigation and communication between the user and the robot. Based on these empirical data, the thesis describes the key role of the interaction of gender in navigation performance and communication processes, which outweighed the effect of individual gender, moderating gender differences and reversing predicted patterns of performance and language use. The thesis makes several contributions: theoretical, methodological and practical. From a theoretical perspective, it offers novel findings on gender differences in navigation and communication. The methodological contribution concerns the successful application of dialogue as a naturalistic, yet experimentally sound, research paradigm for studying gender and spatial language. The practical contributions include concrete design guidelines for natural language systems and implications for the development of gender-neutral interfaces in specific domains of HCI.

Modélisation du profil émotionnel de l’utilisateur dans les interactions parlées Humain-Machine / User’s emotional profile modelling in spoken Human-Machine interactions

Delaborde, Agnès 19 December 2013 (has links)
Analysing and formalising the emotional aspect of human-machine interaction is key to a successful relationship. Beyond the isolated detection of paralinguistic events (emotions, disfluencies, etc.), our aim is to provide the system with a dynamic emotional and interactional profile of the user, enriched throughout the interaction. This profile allows the machine to adapt its response strategies to the speaker, and can also support the management of long-term relationships. The profile is built on a multi-level representation of the emotional and interactional cues extracted from the audio with the LIMSI emotion-detection tools. Low-level cues (variations in F0, energy, etc.) are interpreted in terms of the type of emotion expressed, its strength, and the speaker's talkativeness. These mid-level cues are then processed by the system to determine, over the course of the interaction, the user's emotional and interactional profile. The profile comprises six dimensions: optimism, extroversion, emotional stability, self-confidence, affinity and dominance (based on the OCEAN personality model and interpersonal circumplex theories). The information derived from this profile could also provide a measure of the speaker's engagement.
The social behaviour of the system is adapted according to this profile, the state of the current task, and the robot's current behaviour. The rules for creating and updating the emotional and interactional profile, and for automatically selecting the robot's behaviour, were implemented in fuzzy logic using the decision engine developed by a partner in the ROMEO project. The system was implemented on the humanoid robot NAO. A central issue addressed in this thesis is the reliable interpretation of the paralinguistic cues extracted from speech into a relevant emotional representation of the user; multimodal cues could further reinforce the robustness of the profile. To study the different elements of the emotional interaction loop between the user and the system, we took part in the design of several systems with different degrees of autonomy: a pre-scripted Wizard-of-Oz system, a semi-automated system, and an autonomous emotional interaction system. These systems allowed us to collect emotional data in robotic interaction contexts while controlling several emotion-elicitation parameters. The thesis presents the results of these data collections, and offers evaluation protocols for Human-Robot Interaction through systems with various degrees of autonomy.
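
As a rough illustration of the multi-level idea described above, the sketch below folds mid-level cues (emotion label, strength, talkativeness) into a running six-dimension profile. The dimension names follow the abstract; the update rules, weights and cue encoding are invented stand-ins, not the fuzzy-logic rules of the actual ROMEO decision engine.

```python
# Hypothetical sketch of a multi-level emotional-profile update.
# Dimension names come from the abstract; the rules and weights
# below are invented for illustration only.

DIMENSIONS = ("optimism", "extroversion", "emotional_stability",
              "self_confidence", "affinity", "dominance")

class EmotionalProfile:
    def __init__(self, learning_rate=0.1):
        self.scores = {d: 0.5 for d in DIMENSIONS}  # neutral start
        self.lr = learning_rate

    def update(self, emotion, strength, talkativeness):
        """Fold one utterance's mid-level cues into the profile.

        emotion: label from an emotion detector, e.g. "joy", "anger"
        strength, talkativeness: intensities in [0, 1]
        """
        targets = {}
        if emotion == "joy":
            targets["optimism"] = 1.0
        elif emotion in ("fear", "sadness"):
            targets["emotional_stability"] = 1.0 - strength
            targets["self_confidence"] = 1.0 - strength
        elif emotion == "anger":
            targets["dominance"] = strength
        targets["extroversion"] = talkativeness

        # Exponential moving average: the profile evolves gradually
        # over the interaction rather than jumping per utterance.
        for dim, target in targets.items():
            old = self.scores[dim]
            self.scores[dim] = old + self.lr * (target - old)

profile = EmotionalProfile()
profile.update(emotion="joy", strength=0.8, talkativeness=0.7)
print(profile.scores)
```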

Formulation et études des problèmes de commande en co-manipulation robotique / Formulation and study of different control problems for co-manipulation tasks

Jlassi, Sarra 28 November 2013 (has links)
In this thesis, we address the control problems raised by robotic co-manipulation for handling tasks, from a point of view that we believe is insufficiently explored, even though it relies on classical tools of robotics. The co-manipulation control problem is often addressed through impedance-control methods, where the objective is to establish a mathematical relation between the linear velocity of the human-robot interaction point and the interaction force applied by the human operator at that point. This thesis instead treats robotic co-manipulation for handling tasks as a constrained optimal control problem. The proposed approach relies on a specific online trajectory generator (OTG) combined with a kinematic feedback loop. The OTG is designed to translate the human operator's intentions into ideal trajectories that the robot must follow. It works as an automaton with two states of motion whose transitions are event-controlled, by comparing the magnitude of the interaction force to an adjustable force threshold, so that the human operator keeps authority over the robot's states of motion. To ensure a smooth interaction, we propose to generate a velocity profile collinear to the force applied at the interaction point. The feedback loop is then used to meet the requirements of stability and trajectory-tracking quality while guaranteeing assistance and a safe human-robot interaction. Several synthesis methods are applied to design efficient controllers that ensure good tracking of the generated trajectories.
The overall strategy is illustrated on two robot models. The first is the penducobot, an underactuated planar robot with two degrees of freedom. The second is a fully actuated two-degree-of-freedom planar robot.
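
A minimal sketch of the two-state generator described above, assuming a force-threshold transition rule and a commanded velocity collinear with the measured force. The threshold, gain and saturation values are invented for illustration; the actual OTG and the controller synthesis in the thesis are more involved.

```python
import numpy as np

# Hypothetical sketch of the event-controlled two-state trajectory
# generator: transitions fire when the interaction-force magnitude
# crosses an adjustable threshold, and the desired velocity is
# collinear with the applied force. All numeric values are invented.

class TrajectoryGenerator:
    REST, MOTION = 0, 1

    def __init__(self, force_threshold=5.0, gain=0.02, v_max=0.5):
        self.state = self.REST
        self.f_thresh = force_threshold   # N
        self.gain = gain                  # (m/s) per N
        self.v_max = v_max                # m/s

    def step(self, force):
        """Map the measured interaction force (3-vector, N) to a
        desired velocity for the kinematic tracking loop."""
        f_mag = np.linalg.norm(force)
        # Event-controlled transitions keep the operator in authority
        # over the robot's states of motion.
        if self.state == self.REST and f_mag > self.f_thresh:
            self.state = self.MOTION
        elif self.state == self.MOTION and f_mag <= self.f_thresh:
            self.state = self.REST
        if self.state == self.REST:
            return np.zeros(3)
        # Velocity collinear with the applied force, saturated.
        speed = min(self.gain * f_mag, self.v_max)
        return speed * force / f_mag

gen = TrajectoryGenerator()
print(gen.step(np.array([8.0, 0.0, 2.0])))  # MOTION: nonzero velocity
print(gen.step(np.array([1.0, 0.0, 0.0])))  # back to REST: zeros
```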

Approche cognitive pour la représentation de l’interaction proximale haptique entre un homme et un humanoïde / Cognitive approach for representing the haptic physical human-humanoid interaction

Bussy, Antoine 10 October 2013 (has links)
Robots are on the verge of entering our homes. Before they do, they must master physical interaction with humans in a safe and efficient way. Such capabilities are essential if they are to live among us and assist us in various everyday tasks, such as carrying a piece of furniture. In this thesis, we aim to endow the biped humanoid robot HRP-2 with the capacity to perform haptic joint actions with humans. First, we study how human dyads collaborate to transport a cumbersome object. From this study, we derive a global model of motion primitives, which we use to implement a proactive behaviour on the HRP-2 robot so that it can perform the same task with a human partner. We then assess the performance of this proactive control scheme through user studies. Finally, we outline several possible extensions of this work: stabilising a humanoid through physical interaction, generalising the motion-primitive model to other collaborative tasks, and adding vision to haptic joint actions.

Teaching robots social autonomy from in situ human supervision

Senft, Emmanuel January 2018 (has links)
Traditionally, the behaviour of social robots has been programmed. However, there is an increasing focus on letting robots learn their behaviour to some extent from example or through trial and error. On the one hand this removes the need for programming, but it also allows the robot to adapt to circumstances not foreseen at the time of programming. One such occasion is when the user wants to tailor or fully specify the robot's behaviour. The engineer often has limited knowledge of what the user wants or what the deployment circumstances specifically require. Instead, the user does know what is expected from the robot, and consequently the social robot should be equipped with a mechanism to learn from its user. This work explores how a social robot can learn to interact meaningfully with people in an efficient and safe way by learning from the supervision of a human teacher in control of the robot's behaviour. To this end we propose a new machine learning framework called Supervised Progressively Autonomous Robot Competencies (SPARC). SPARC enables non-technical users to control and teach a robot, and we evaluate its effectiveness in Human-Robot Interaction (HRI). The core idea is that the user initially operates the robot remotely, while an algorithm associates actions to states and gradually learns. Over time, the robot takes over control from the user while still giving the user oversight of its behaviour, by ensuring that every action executed by the robot has been actively or passively approved by the user. This is particularly important in HRI, as interacting with people, and especially vulnerable users, is a complex and multidimensional problem, and any error by the robot may have negative consequences for the people involved in the interaction. Through the development and evaluation of SPARC, this work contributes to both HRI and Interactive Machine Learning, especially on how autonomous agents, such as social robots, can learn from people and how this specific teacher-robot interaction impacts the learning process. We showed that a supervised robot learning from its user can reduce that person's workload, and that giving the user the opportunity to control the robot's behaviour substantially improves the teaching process. Finally, this work demonstrated that a robot supervised by a user can learn rich social behaviours in the real world, in a large, multidimensional and multimodal sensitive environment: a robot quickly learned (25 interactions across 4 sessions, averaging 1.9 minutes each) to tutor children in an educational game, achieving behaviours and educational outcomes similar to a robot fully controlled by the user, with both providing a 10 to 30% improvement in game metrics compared to a passive robot.
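
The core SPARC loop lends itself to a compact sketch: the robot proposes an action for the current state, the teacher either corrects it (active approval) or lets it pass (passive approval), and whatever is executed becomes a training example. The nearest-neighbour learner and the scalar state encoding below are hypothetical stand-ins, not the actual SPARC implementation.

```python
import random

# Hedged sketch of a SPARC-style supervision loop. The teacher keeps
# oversight: every executed action was actively corrected or passively
# accepted. The learner and state encoding are invented placeholders.

class SparcAgent:
    def __init__(self, actions):
        self.actions = actions
        self.memory = []  # (state, action) pairs approved by the teacher

    def propose(self, state):
        """Suggest an action for the current state."""
        if not self.memory:
            return random.choice(self.actions)
        # Toy 1-nearest-neighbour lookup over past approved examples.
        nearest = min(self.memory, key=lambda m: abs(m[0] - state))
        return nearest[1]

    def step(self, state, teacher_correction=None):
        suggestion = self.propose(state)
        # Active correction overrides; silence means passive approval.
        action = teacher_correction or suggestion
        self.memory.append((state, action))  # learn from what actually ran
        return action

agent = SparcAgent(actions=["hint", "encourage", "wait"])
print(agent.step(state=0.2, teacher_correction="hint"))  # teacher corrects
print(agent.step(state=0.25))                            # now autonomous
```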

On the Sociability of a Game-Playing Agent: A Software Framework and Empirical Study

Behrooz, Morteza 10 April 2014 (has links)
The social element of playing games is what makes us play together, enjoying more than just what the game itself has to offer. There are millions of games with different rules and goals, played by people of many cultures and various ages; across all of them, this social element remains crucial. Nowadays, the role of social robots and virtual agents is rapidly expanding in daily activities and entertainment, and games are one such area. It therefore seems desirable for an agent to be able to play games socially, as opposed to simply having the computer make the moves in a game application. To achieve this goal, verbal and non-verbal communication should be driven by the game events and human input, to create a human-like social experience. Moreover, a better social interaction can be created if the agent can change its game strategies in accordance with social criteria. To bring sociability to the gaming experience across many different robots, virtual agents and games, we have developed a generic software framework which generates social comments based on gameplay semantics. We also conducted a user study, with this framework as a core component, involving the rummy card game and the checkers board game. In our analysis, we observed both subjective and objective measures of the effects of social gaze and comments in the gaming interactions. Participants' gaming experience proved to be significantly more social, human-like, enjoyable and adoptable when social behaviors were employed. Moreover, since facial expressions can be a strong indication of internal state, we measured the number of participants' smiles during gameplay and observed that they smiled significantly more when social behaviors were involved than when they were not.
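
One way to picture the comment-generation core of such a framework is a mapping from semantic game events to comment templates, as in the sketch below. The event names and templates are invented for illustration and are not taken from the actual framework.

```python
import random

# Hypothetical sketch: semantic game events trigger social comments.
# Event names and comment templates are invented placeholders.

COMMENT_RULES = {
    "player_meld": ["Nice meld!", "You're on a roll."],
    "agent_win":   ["Good game! I got lucky there."],
    "player_win":  ["Well played, you beat me!"],
    "close_game":  ["This is getting tense..."],
}

def social_comment(event):
    """Return a comment for a semantic game event, or None to stay silent."""
    templates = COMMENT_RULES.get(event)
    return random.choice(templates) if templates else None

print(social_comment("player_meld"))
```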

The impact of social expectation towards robots on human-robot interactions

Syrdal, Dag Sverre January 2018 (has links)
This work is presented in defence of the thesis that it is possible to measure, in an explicit and succinct manner, the social expectations and perceptions that humans have of robots, and that these measures are related to how humans interact with, and evaluate, these robots. There are many ways of understanding how humans may respond to, or reason about, robots as social actors, but the approach adopted in this body of work focuses on interaction-specific expectations rather than expectations regarding the true nature of the robot. These expectations were investigated using a questionnaire-based tool, the University of Hertfordshire Social Roles Questionnaire, which was developed as part of the work presented in this thesis and tested on a sample of 400 visitors to an exhibition in the Science Gallery in Dublin. This study suggested that responses to the questionnaire loaded on two main dimensions: one related to the degree of social equality participants expected their interactions with the robots to have, the other to the degree of control they expected to exert upon the robots within the interaction. A single item, related to pet-like interactions, loaded on both and was considered a separate, third dimension. The questionnaire was deployed as part of a proxemics study, which found that the degree to which participants accepted particular proxemic behaviours was correlated with their initial social expectations of the robot. If participants expected the robot to be more of a social equal, they preferred the robot to approach from the front, while participants who viewed the robot more as a tool preferred it to approach from a less obtrusive angle. The questionnaire was also deployed in two long-term studies. In the first, which involved one interaction a week over a period of two months, participants' social expectations of the robots prior to the study impacted not only how they evaluated open-ended interactions with the robots throughout the two-month period, but also how they collaborated with the robots in task-oriented interactions. In the second study, participants interacted with the robots twice a week over a period of six weeks. This study replicated the findings of the first, in that initial expectations impacted evaluations of interactions throughout the long-term study. In addition, it used the questionnaire to measure post-interaction perceptions of the robots in terms of social expectations. The results suggest that while initial social expectations of robots impact how participants evaluate the robots in terms of interactional outcomes, social perceptions of robots are more closely related to the social/affective experience of the interaction.

Ambiente para interação baseada em reconhecimento de emoções por análise de expressões faciais / Environment based on emotion recognition for human-robot interaction

Ranieri, Caetano Mazzoni 09 August 2016 (has links)
In computer science, the study of emotions has been driven by the construction of interactive environments, especially in the context of mobile devices. Research in human-robot interaction has explored emotions to provide natural interaction experiences with social robots. One fertile aspect concerns practical approaches in which changes in the personality of an artificial system are driven by changes in the user's inferred emotional state. This work proposes an environment for emotion-based human-robot interaction on the Android platform, with emotions recognised through the analysis of facial expressions. The system consists of a virtual agent embedded in an application, which uses the output of an emotion recogniser to adapt its interaction strategy, alternating between two pre-defined discrete paradigms. In the experiments performed, the proposed approach tended to produce more empathy than a control condition; however, this effect was observed only in sufficiently long interactions.
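
The adaptation step described above can be pictured as a simple switching rule, sketched below under invented assumptions: the emotion labels and paradigm names are hypothetical, and the real system's strategy selection may differ.

```python
# Hedged sketch of switching between two pre-defined discrete
# interaction paradigms based on the emotion recognised from facial
# expressions. Labels and paradigm names are invented placeholders.

NEGATIVE = {"anger", "sadness", "fear", "disgust"}

def choose_paradigm(recognized_emotion):
    """Return the interaction paradigm for the agent's next turn."""
    if recognized_emotion in NEGATIVE:
        return "empathetic_paradigm"   # acknowledge the user's state
    return "neutral_paradigm"          # default task-oriented style

print(choose_paradigm("sadness"))  # -> empathetic_paradigm
```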

Error-related potentials for adaptive decoding and volitional control

Salazar Gómez, Andrés Felipe 10 July 2017 (has links)
Locked-in syndrome (LIS) is a condition characterized by total or near-total paralysis with preserved cognitive and somatosensory function. For the locked-in, brain-machine interfaces (BMI) provide a level of restored communication and interaction with the world, though this technology has not reached its fullest potential. Several streams of research explore improving BMI performance, but very little attention has been given to the paradigms implemented and the resulting constraints imposed on users. Learning new mental tasks, constant use of external stimuli, and high attentional and cognitive processing loads are common demands imposed by BMI. These paradigm constraints negatively affect BMI performance for locked-in patients. In an effort to develop simpler and more reliable BMI for those suffering from LIS, this dissertation explores using error-related potentials, the neural correlates of error awareness, as an access pathway for adaptive decoding and direct volitional control. In the first part of this thesis we characterize error-related local field potentials (eLFP) and implement a real-time decoder error detection (DED) system using eLFP while non-human primates control a saccade BMI. Our results show specific traits in the eLFP that bridge current knowledge of non-BMI evoked error-related potentials with error potentials evoked during BMI control. Moreover, we successfully perform real-time DED via, to our knowledge, the first real-time LFP-based DED system integrated into an invasive BMI, demonstrating that error-based adaptive decoding can become a standard feature in BMI design. In the second part of this thesis, we focus on employing electroencephalography error-related potentials (ErrP) for direct volitional control. These signals were employed as an indicator of the user's intentions in a closed-loop binary-choice robot reaching task. Although this approach is technically challenging, our results demonstrate that ErrP can be used for direct control via binary selection and that, given appropriate levels of task engagement and agency, single-trial closed-loop ErrP decoding is possible. Taken together, this work contributes to a deeper understanding of error-related potentials evoked during BMI control and opens new avenues of research for employing ErrP as a direct control signal for BMI. For the locked-in community, these advancements could foster the development of real-time intuitive brain-machine control.
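
The closed-loop use of ErrP for binary control can be sketched as follows; the stub classifier and its threshold are invented placeholders for the single-trial EEG decoding actually used in the dissertation.

```python
# Hedged sketch of closed-loop ErrP-based correction in a binary-choice
# reaching task: the robot commits to one of two targets; if the decoded
# brain response is classified as an error-related potential, the choice
# is flipped. The classifier below is a hypothetical stand-in.

def detect_errp(eeg_epoch):
    """Stand-in for a single-trial ErrP classifier; returns True if the
    post-action EEG epoch is classified as an error-related potential."""
    return eeg_epoch["error_score"] > 0.5

def closed_loop_choice(initial_choice, eeg_epoch, targets=("left", "right")):
    if detect_errp(eeg_epoch):
        # The user's brain signalled the choice was wrong: switch targets.
        return targets[1 - targets.index(initial_choice)]
    return initial_choice

print(closed_loop_choice("left", {"error_score": 0.8}))  # -> "right"
```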

Utvärdering av VR-manövrering av en brandrobot / Evaluation of VR-maneuvering of a firefighting robot

Segerström, Niklas January 2019 (has links)
This thesis evaluates the suitability of a VR interface for maneuvering a teleoperated robot designed to assist firefighters. Head-mounted displays combined with teleoperated robots are becoming more popular, but the question is whether the technology can be adapted to fit the needs and requirements of firefighters. The technology will be used in high-stress situations with little margin for error. Focus-group interviews and user tests, together with an analysis of current research, were performed to assess how suitable VR interfaces are. To implement a VR interface effectively, the developers of the robot need a clear user profile, and the designers should draw on the expert knowledge of the firefighters for best results. Overall, the implementation will have a positive effect on the user's situational awareness, and the robot will become easier to maneuver, provided that the developers use hardware that meets the demands of VR glasses.
