1

Learning socio-communicative behaviors of a humanoid robot by demonstration

Nguyen, Duc-Canh 22 October 2018 (has links)
A socially assistive robot (SAR) is meant to engage people in situated interaction such as monitoring physical exercise, neuropsychological rehabilitation, or cognitive training. While the interactive behavioral policies of such systems are mainly hand-scripted, we discuss here key features of the training of multimodal interactive behaviors in the framework of the SOMBRERO project.

In our work, we used learning by demonstration in order to provide the robot with adequate skills for performing collaborative tasks in human-centered environments. There are three main steps in learning interaction by demonstration: (1) collect representative interactive behaviors from human coaches; (2) build comprehensive models of these overt behaviors while taking into account a priori knowledge (task and user model, etc.); and then (3) provide the target robot with appropriate gesture controllers to execute the desired behaviors.

Multimodal HRI (Human-Robot Interaction) models are mostly inspired by Human-Human Interaction (HHI) behaviors. Transferring HHI behaviors to HRI models faces several issues: (1) adapting human behaviors to the robot's interactive capabilities, given its physical limitations and impoverished perception, action, and reasoning capabilities; (2) the drastic changes in human partner behaviors in front of robots or virtual agents; (3) the modeling of joint interactive behaviors; and (4) the validation of the robotic behaviors by human partners until they are perceived as adequate and meaningful.

In this thesis, we study and make progress on these four challenges. In particular, we address the first two issues (transfer from HHI to HRI) by adapting the scenario and using immersive teleoperation. In addition, we use recurrent neural networks to model multimodal interactive behaviors (such as speech, gaze, arm movements, head motion, and backchannels); these surpass traditional methods (Hidden Markov Models, Dynamic Bayesian Networks, etc.) in both accuracy and coordination between modalities. We also build and evaluate a proof-of-concept autonomous robot to perform the tasks.
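The abstract's central modeling claim, that a recurrent network can coordinate several output modalities from one shared history, can be illustrated with a minimal sketch. The snippet below uses PyTorch; the feature set, dimensions, and architecture are illustrative assumptions, not the SOMBRERO implementation:

```python
# Minimal sketch of sequence-to-sequence behavior modeling with an RNN.
# Feature names, dimensions, and the single-LSTM architecture are
# illustrative assumptions, not the thesis code.
import torch
import torch.nn as nn

class MultimodalBehaviorModel(nn.Module):
    """Maps a sequence of perceived partner features (e.g. gaze direction,
    voice activity, head pose) to a frame of robot actions per time step
    (e.g. gaze target, gesture class, backchannel trigger)."""

    def __init__(self, input_dim=16, hidden_dim=64, output_dim=12):
        super().__init__()
        self.rnn = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, output_dim)

    def forward(self, partner_features):
        # partner_features: (batch, time, input_dim)
        hidden_states, _ = self.rnn(partner_features)
        # One action frame per step, conditioned on the full history.
        return self.head(hidden_states)

# Example: predict 12-dimensional action frames for a 100-step interaction.
model = MultimodalBehaviorModel()
actions = model(torch.randn(1, 100, 16))   # -> shape (1, 100, 12)
```

Because every output frame is decoded from the same hidden state, the modalities stay temporally coordinated, which is the property the abstract contrasts with per-modality Hidden Markov Models.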
2

Robot-based haptic perception and telepresence for the visually impaired

Park, Chung Hyuk 28 June 2012 (has links)
With advances in medicine and welfare systems, the average human life span is increasing, creating a new market for elderly care and assistive technology. Along with the development of assistive devices based on traditional aids such as voice readers, electronic wheelchairs, and prosthetic limbs, a robotic platform is one of the most suitable platforms for providing multi-purpose assistance in daily life. This research focuses on conveying environmental perception to a human user through interactive multi-modal feedback and an assistive robotic platform. A novel framework for haptic telepresence is presented, drawing on state-of-the-art methodologies from computer vision, haptics, and robotics. The objective of this research is to design a framework that: 1) integrates visual perception from heterogeneous vision sensors; 2) enables real-time interactive haptic representation of the real world through a mobile manipulation robotic platform and a haptic interface; and 3) achieves haptic fusion of multiple sensory modalities from the robotic platform and provides interactive feedback to the human user. Specifically, a set of multi-disciplinary algorithms such as stereo-vision processing, three-dimensional (3D) map building, and virtual-proxy based haptic volume representation are integrated into a unified framework to accomplish this goal. The application area of this work is focused on, but not limited to, assisting people with visual impairment by providing multi-modal feedback of the environment through a robotic platform.
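The virtual-proxy haptic representation named in the abstract can be sketched in a few lines. This is a minimal illustration of the standard proxy (god-object) idea applied to a voxel map, not the thesis implementation; the occupancy query, stiffness value, and function names are assumptions:

```python
# Hedged sketch of proxy-based haptic rendering against an occupancy map.
# The proxy stays on the surface of occupied space while the device point
# may penetrate it; the displayed force is a spring pulling the device
# back toward the proxy.
import numpy as np

STIFFNESS = 800.0  # N/m; illustrative value for an impedance-type device

def proxy_step(device_pos, proxy_pos, occupied):
    """One servo-cycle update of proxy position and rendered force.

    device_pos : (3,) array, current haptic interface point
    proxy_pos  : (3,) array, last collision-free proxy position
    occupied   : callable(pos) -> bool, True if that voxel is occupied
    """
    if not occupied(device_pos):
        # Free space: the proxy snaps to the device, no force is rendered.
        return device_pos.copy(), np.zeros(3)
    # The device has entered occupied space: the proxy is held at the
    # surface, so the user "feels" the 3D map built from the robot's
    # vision sensors as a stiff virtual wall.
    force = STIFFNESS * (proxy_pos - device_pos)
    return proxy_pos, force
```

In a full renderer the proxy would also be slid along the surface toward the device each cycle so the user can stroke the rendered geometry; that refinement is omitted here for brevity.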
3

A study of human-robot interaction with an assistive robot to help people with severe motor impairments

Choi, Young Sang 06 July 2009 (has links)
This thesis research aims to further the study of human-robot interaction (HRI) issues, especially regarding the development of an assistive robot designed to help individuals with motor impairments. In particular, individuals with amyotrophic lateral sclerosis (ALS) represent a potential user population with an array of motor impairments due to the progressive nature of the disease. Through a review of the literature, an initial target for robotic assistance was determined to be object retrieval and delivery tasks, to aid with dropped or otherwise unreachable objects, which represent a common and significant difficulty for individuals with limited motor capabilities. This thesis research was conducted as part of a larger, collaborative project between the Georgia Institute of Technology and Emory University. To this end, we developed and evaluated a semi-autonomous mobile healthcare service robot named EL-E. I conducted four human studies involving patients with ALS, with the following objectives: 1) to investigate and better understand the practical, everyday needs and limitations of people with severe motor impairments; 2) to translate these needs into pragmatic tasks or goals to be achieved through an assistive robot and to reflect these needs and limitations in the robot's design; 3) to develop practical, usable, and effective interaction mechanisms by which impaired users can control the robot; and 4) to evaluate the performance of the robot and improve its usability. I anticipate that the findings from this research will contribute to ongoing research in the development and evaluation of effective and affordable assistive manipulation robots, which can help mitigate the difficulties, frustration, and lost independence experienced by individuals with significant motor impairments and improve their quality of life.
