
Interactive concept acquisition for embodied artificial agents

de Greeff, Joachim, January 2013
An important capacity still lacking in intelligent systems such as robots is the ability to use concepts in a human-like manner. Indeed, the use of concepts has been recognised as fundamental to a wide range of cognitive skills, including classification, reasoning and memory. Intricately intertwined with language, concepts are at the core of human cognition; but despite a large body of research, their functioning is not yet well understood. Nevertheless, it remains clear that if intelligent systems are to achieve a level of cognition comparable to humans, they will have to possess the ability to deal with the fundamental role that concepts play in cognition. A promising way for an intelligent system to acquire conceptual knowledge is through ongoing, incremental development: the system is situated in the world and gradually acquires skills and knowledge through interaction with its social and physical environment. Important in this regard is the notion that cognition is embodied: both the physical body and the environment shape the manner in which cognition, including the learning and use of concepts, operates. By actively partaking in the interaction, an intelligent system can influence its learning experience so as to make it more effective. This work presents experiments which illustrate how these notions of interaction and embodiment can influence the learning process of artificial systems. It shows how an artificial agent can benefit from interactive learning: rather than passively absorbing knowledge, the system actively partakes in its learning experience, yielding improved learning. Next, the influence of embodiment on perception is explored further in a case study on colour perception, which yields an alternative explanation for why human colour experience is very similar among individuals despite physiological differences.
Finally, experiments in which an artificial agent is embodied in a novel robot tailored for human-robot interaction illustrate how active strategies are also beneficial in an HRI setting where the robot learns from a human teacher.
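The "active partaking" described above is, in machine-learning terms, close to active learning: the agent chooses what to be taught next instead of receiving labels passively. Below is a minimal sketch of one common heuristic, uncertainty sampling; the representation and names are illustrative, not the thesis's actual mechanism:

```python
class ActiveConceptLearner:
    """Toy interactive learner: instead of receiving random examples,
    it asks the teacher about the instance it is least certain of
    (uncertainty sampling). Representation is purely illustrative."""

    def __init__(self):
        self.counts = {}  # feature -> {label: observation count}

    def certainty(self, feature):
        votes = self.counts.get(feature, {})
        total = sum(votes.values())
        return max(votes.values()) / total if total else 0.0

    def choose_query(self, pool):
        # Active step: pick the candidate with the lowest certainty,
        # rather than a random one.
        return min(pool, key=self.certainty)

    def learn(self, feature, label):
        self.counts.setdefault(feature, {}).setdefault(label, 0)
        self.counts[feature][label] += 1

    def predict(self, feature):
        votes = self.counts.get(feature, {})
        return max(votes, key=votes.get) if votes else None
```

Querying the least-certain instance tends to extract more information per teaching interaction than passive sampling, which is the kind of improvement the abstract reports.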

Contribution to the development of an intelligent safety device for cobotics

Ayoubi, Younsse, 10 July 2018
In recent years, we have witnessed a paradigm shift from building stiff robots toward compliant ones. This shift has several motivations, such as improving robots' efficiency in explosive or cyclic motion; one of the earliest is safety, meaning the safety of both the human and the robot while they engage in collaborative tasks, hence the designation "cobots". Cobots can assist experienced human operators in domains where precision is essential, such as industrial applications or medical tasks. Even with regulatory recommendations such as ISO/TS 15066 and ISO 10218-1 and -2, cobots still raise safety concerns that limit their economic benefits. Several research projects have therefore been launched worldwide to improve the trade-off between cobot dynamics and safety; ANR-SISCob (Safety Intelligent Sensor for Cobots) is one of them. The work conducted during this thesis aims to design safety devices that make robots safe by introducing compliance. We developed two devices in which safety is achieved through two different approaches. The first, the Prismatic Compliant Joint (PCJ), targets prismatic joints, for which few actuation systems have been proposed; here, safety is attained biomimetically while meeting other safety criteria related to the mechanical properties of the human body. The second, the Variable Stiffness Safety Oriented Mechanism (V2SOM), is designed for rotary actuation; unlike the biomimetically inspired PCJ, its stiffness profile is oriented toward two safety criteria, impact force and the Head Injury Criterion (HIC). This safety-oriented aspect stems from what we call the inertia-decoupling capacity of its stiffness profile. V2SOM is currently in its final patenting stage. Both devices will be integrated into a serial robot built in our laboratory.
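The HIC named as one of V2SOM's two safety criteria is the standard Head Injury Criterion, computed from the resultant head acceleration a(t) (in g) as the maximum over windows [t1, t2] of (t2 - t1) * ((1/(t2 - t1)) * integral of a dt)^2.5. A direct, unoptimised sketch of that definition (the window limit and sampling scheme are illustrative, not the thesis's evaluation setup):

```python
def hic(accel_g, dt, max_window=0.015):
    """Head Injury Criterion for a uniformly sampled acceleration trace.

    accel_g    -- resultant head acceleration samples, in g
    dt         -- sampling period in seconds
    max_window -- longest (t2 - t1) window considered, e.g. 15 ms for HIC15
    """
    n = len(accel_g)
    best = 0.0
    for i in range(n):
        integral = 0.0
        for j in range(i + 1, n):
            # trapezoidal accumulation of the integral of a(t) over [t_i, t_j]
            integral += 0.5 * (accel_g[j] + accel_g[j - 1]) * dt
            window = (j - i) * dt
            if window > max_window:
                break
            best = max(best, window * (integral / window) ** 2.5)
    return best
```

A stiffness profile that decouples link inertia on impact lowers the acceleration peak seen by the head, and hence the HIC value this formula produces.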

Industrial cobotic system design: a robotics approach taking human factors into account, with practical application to manufacturing within Safran and ArianeGroup

Bitonneau, David, 25 May 2018
Human-robot collaboration provides new perspectives for improving companies' performance and operators' working conditions by bringing together workers' expertise and cognitive capacities with robots' power and precision. In this research we introduce the concept of a "cobotic system", in which humans and robots, possibly with different roles, interact to accomplish a common task. This robotic-engineering PhD thesis was completed as a team with the cognitive engineer Théo Moulières-Seban. Both PhD theses were conducted with Safran and ArianeGroup, which have recognised human-robot collaboration as strategic for their competitiveness. Together we proposed "cobotic system engineering", a cross-disciplinary approach to cobotic system design, validated by our academic supervisors. The approach gives a central place to the integration of future users into the design, through the analysis of their work activity and participatory simulations, and was applied to several concrete industrial needs within ArianeGroup. In this thesis we detail the design of a cobotic system to improve operators' health and safety at the propellant-tank cleaning workstation, where operations are physically demanding and carry a pyrotechnic risk. Together with the ArianeGroup project team, we proposed a teleoperation cobotic system that retains operators' expertise while keeping them in a safe place during pyrotechnic operations. This solution is now in an industrialisation phase for the production of propellant for Ariane launch vehicles. Applying our cobotic-system engineering approach to a variety of workstations and industrial needs allowed us to enrich it with operational tools to guide design. We argue that, thanks to their flexibility, their connectivity to modern workshops' technological ecosystems and their ability to take humans into account, cobotic systems will be one of the keys to placing humans back at the heart of production in the Factory of the Future. Conversely, integrating operators into design projects will be decisive for the performance and acceptance of future cobotic systems.

Integration of cognitive and robotic systems through an ontology to model the perception of the environment

Azevedo, Helio, 01 August 2018
The use of robots in modern society is a reality. Only a few decades separate their beginnings, restricted to manufacturing operations such as painting and welding, from their first use in homes. Social robotics is a research area that aims to develop models so that direct interaction between robots and humans occurs naturally. One factor holding back the rapid evolution of social robotics is the difficulty of integrating cognitive and robotic systems, mainly due to the volume and complexity of the information produced by a chaotic world full of sensory data. In addition, the existence of multiple robot configurations, with distinct architectures and interfaces, makes it difficult to verify and repeat the experiments performed by different research groups. This thesis contributes to the evolution of social robotics by defining an architecture, the Cognitive Model Development Environment (CMDE), that simplifies the connection between cognitive and robotic systems. This connection is formalised with an ontology, OntPercept, which models the perception of the environment from the sensory information captured by the robotic agent's sensors. In recent years several ontologies have been proposed for robotic applications, but they are not generic enough to fully address the needs of robotics and automation. The formalisation offered by OntPercept facilitates the development, reproduction and comparison of experiments in social robotics. The proposed system is validated with the support of the Robot House Simulator (RHS), which provides an environment where a robotic agent and a human character can interact socially with increasing levels of cognitive processing. The CMDE makes it possible to use any cognitive system; in particular, the experiment designed to validate this research uses Soar as the cognitive architecture. Together, the CMDE architecture, the OntPercept ontology and the RHS simulator, all freely available on GitHub, establish a complete environment for developing experiments involving cognitive systems aimed at social robotics.
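The role OntPercept plays, turning raw sensor readings into queryable facts a cognitive system can reason over, can be illustrated with a minimal triple-like percept store. The class and field names below are purely illustrative and are not the actual OntPercept vocabulary:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Percept:
    # Illustrative fields only; the real OntPercept ontology differs.
    sensor: str      # which sensor produced the reading
    subject: str     # entity perceived
    predicate: str   # relation observed
    value: str       # observed value
    stamp: datetime  # acquisition time

class PerceptStore:
    """Minimal store of perception facts that a cognitive
    architecture (e.g. Soar, via some bridge) could query."""

    def __init__(self):
        self.facts = []

    def add(self, percept):
        self.facts.append(percept)

    def query(self, **criteria):
        # Return every percept whose fields match all given criteria.
        return [p for p in self.facts
                if all(getattr(p, k) == v for k, v in criteria.items())]
```

Formalising percepts this way is what makes experiments reproducible across robots: any agent that emits the same vocabulary can be compared on the same footing.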

Human-humanoid collaborative object transportation

Agravante, Don Joven, 16 December 2015
Humanoid robots provide many advantages when working together with humans on various tasks. Since humans have a great deal of experience in physically collaborating with each other, a humanoid with a similar range of motion and sensing has the potential to do the same. This thesis focuses on enabling humanoids to perform such tasks together with humans: collaborative humanoids. In particular, we use the example of a humanoid and a human collaboratively carrying and transporting objects. We first focus on using vision and haptic information together to enable better collaboration: vision-based control combined with admittance control is tested as a framework that gives the humanoid its own notion of the task, and is validated in a human-humanoid collaborative transport scenario.
Next, we detail how walking pattern generators can be designed to take physical collaboration into account, creating leader-type and follower-type generators depending on whether the robot guides or is guided during the task. Finally, the task of collaboratively carrying an object with a human is broken down and implemented within an optimisation-based whole-body control framework.
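The admittance-control half of the vision/haptics combination can be sketched with the standard law M*a + D*v = f_ext, which turns a measured interaction force into a velocity reference for the robot to track. The gains and the Euler integration below are illustrative, not the thesis's values:

```python
def admittance_step(f_ext, v, dt, mass=10.0, damping=25.0):
    """One Euler step of the admittance law  M*a + D*v = f_ext.

    f_ext -- measured interaction force along one axis (N)
    v     -- current velocity reference (m/s)
    dt    -- control period (s)

    Returns the updated velocity reference the robot should track.
    Mass and damping are illustrative virtual parameters.
    """
    a = (f_ext - damping * v) / mass
    return v + a * dt
```

With no applied force the reference decays to zero; under a steady push of 25 N with D = 25, it converges to f_ext / D = 1 m/s, so the robot yields smoothly in the direction the human pushes.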

Adapting robot behaviour in smart homes: a different approach using personas

Duque Garcia, Ismael, January 2017
A challenge in Human-Robot Interaction is tailoring the social skills of robot companions to match those expected by individual humans during their first encounter. Currently, large amounts of user data are needed to configure robot companions with these skills, which creates the need for long-term Human-Robot Interaction studies in domestic environments. A new approach using personas is explored to alleviate this arduous data-collection task without compromising the level of interaction currently shown by robot companions. The personas technique was created by Alan Cooper in 1999 as a tool to define user archetypes of a system, reducing the involvement of real users during the development of a target system. The technique has proven beneficial in Human-Computer Interaction for years, so similar benefits could be expected when applying personas to Human-Robot Interaction. Our novel approach defines personas as the key component of a computational behaviour model used to adapt robot companions to individual users' needs. It reduces the amount of user data that must be collected before a Human-Robot Interaction study by associating new users with pre-defined personas that adapt the robot's behaviours through their integration with the computational behaviour model, while preserving the level of social interaction humans expect from the robot during a first encounter. The University of Hertfordshire Robot House provided the naturalistic domestic environment for the investigation. After a new module, an Activity Recognition System, was incorporated to increase the overall context-awareness of the system, a computational behaviour model was defined through an iterative research process, its initial definition evolving after each experiment based on the findings. Two successive studies investigated personas and determined the steps to follow for their integration into the targeted model.
The final model presented was defined from users' preferences and needs when interacting with a robot companion during activities of daily living at home. The main challenge was identifying the variables that match users to personas in our model. This approach opens a new discussion in the Human-Robot Interaction field about tools that help reduce the amount of user data that must be collected before a first interaction with a robot companion in a domestic environment. We conclude that modelling people's preferences when interacting with robot companions is a challenging approach: integrating the Human-Computer Interaction technique into a computational behaviour model for Human-Robot Interaction studies was more difficult than anticipated. This investigation shows the advantages and disadvantages of introducing the technique into Human-Robot Interaction and explores the challenges in defining a personas-based computational behaviour model. The continuous learning process helps clarify the steps that other researchers in the field should follow when investigating a similar approach. Some interesting outcomes and trends were also found in the users' data, encouraging the belief that the personas technique can be further developed to tackle some of the difficulties highlighted in the Human-Robot Interaction literature.
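The central matching step, associating a new user with a pre-defined persona so the robot can adapt before any long data collection, can be sketched as a nearest-persona lookup over preference answers. The trait names below are hypothetical, not those of the thesis's model:

```python
def match_persona(user, personas):
    """Assign a new user to the pre-defined persona sharing the most
    preference answers. Trait names are hypothetical examples, not
    the variables identified in the thesis."""
    def overlap(persona):
        # Count how many of the persona's traits the user's answers match.
        return sum(1 for k, v in persona["traits"].items()
                   if user.get(k) == v)
    return max(personas, key=overlap)["name"]
```

A short questionnaire answered once can then stand in for weeks of observation: the matched persona supplies the behaviour-model parameters the robot starts from.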

Adaptive neural architectures for intuitive robot control

Melidis, Christos, January 2017
This thesis puts forward a novel approach to the control of robotic morphologies. Taking inspiration from behaviour-based robotics and self-organisation principles, we present an interfacing mechanism capable of adapting both to the user and to the robot, enabling a paradigm of intuitive control for the user. The mechanism is transparent, allowing a seamless integration of control signals and robot behaviours. Instead of the user adapting to the interface and control paradigm, the proposed architecture allows users to shape the control motifs to their preference, moving away from cases where the user has to read and understand operation manuals or learn to operate a specific device. The central idea behind this work is the coupling of intuitive human behaviours with the dynamics of a machine in order to control and direct those dynamics. Starting from a tabula rasa basis, the architectures presented are able to identify control patterns (behaviours) for any given robotic morphology and successfully merge them with control signals from the user, regardless of the input device used. We provide deep insight into the advantages of behaviour coupling, investigating the proposed system in detail and providing evidence for, and quantifying, emergent properties of the models proposed. The structural components of the interface are presented and assessed both individually and as a whole, as are inherent properties of the architectures. The proposed system is examined and tested both in vitro and in vivo, and is shown to work even with complicated environments and complicated robotic morphologies. As a whole, this approach highlights the potential for a change in the paradigm of robotic control, and a new level in the taxonomy of human-in-the-loop systems.

Evaluation of Multi-sensory Feedback in Virtual and Real Remote Environments in a USAR Robot Teleoperation Scenario

de Barros, Paulo, 26 April 2014
The area of Human-Robot Interaction deals not only with problems related to robots interacting with humans, but also with humans interacting with and controlling robots. This dissertation focuses on the latter and evaluates multi-sensory (vision, hearing, touch, smell) feedback interfaces as a means to improve robot operators' cognition and performance. A set of four empirical studies using both simulated and real robotic systems evaluated multi-sensory feedback interfaces of varying complexity. The task scenario in these studies involved searching for victims in a debris-filled environment after a fictitious catastrophic event (e.g., an earthquake). The results show that, if well designed, multi-sensory feedback interfaces can indeed improve robot operators' data perception and performance. Improvements in operator performance were detected for navigation and search tasks despite minor increases in workload; in fact, some of the multi-sensory interfaces evaluated even reduced workload. The results also point out that redundant feedback is not always beneficial to the operator. Introducing the concept of operator omni-directional perception, that is, the operator's capability of perceiving data or events coming from all senses and in all directions, this work explains that feedback redundancy is only beneficial when it enhances the operator's omni-directional perception of data relevant to the task at hand. Lastly, the comprehensive methodology employed and refined over the course of the four studies is suggested as a starting point for the design of future HRI user studies. In summary, this work sheds light on the benefits and challenges of multi-sensory feedback interfaces, specifically in teleoperated robotics, adding to our current understanding of such interfaces and providing insights to assist continued research in the area.

Generating Engagement Behaviors in Human-Robot Interaction

Holroyd, Aaron, 26 April 2011
Based on a study of the engagement process between humans, I have developed models for four types of connection events involving gesture and speech: directed gaze, mutual facial gaze, adjacency pairs and backchannels. I have developed and validated a reusable Robot Operating System (ROS) module that supports engagement between a human and a humanoid robot by generating appropriate connection events. The module implements policies for adding gaze and pointing gestures to referring phrases (including deictic and anaphoric references), performing end-of-turn gazes, responding to human-initiated connection events and maintaining engagement. The module also provides an abstract interface for receiving information from a collaboration manager using the Behavior Markup Language (BML) and exchanges information with a previously developed engagement-recognition module. This thesis also describes a BML realizer developed for use in robotic applications. Instead of the fixed-timing algorithms used with virtual agents, this realizer uses an event-driven architecture, based on Petri nets, to ensure each behavior is synchronized in the presence of unpredictable variability in robot motor systems. The implementation is robot independent, open source and built on ROS.
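The Petri-net idea behind the realizer, firing a behaviour only once all the events it waits on have occurred rather than at a precomputed time, can be sketched with a minimal 1-safe net. The transition below (an end-of-turn behaviour waiting on gaze and speech completion events) is illustrative, not the realizer's actual net:

```python
class PetriNet:
    """Minimal 1-safe Petri net: a transition fires as soon as every
    one of its input places holds a token. This sketches event-driven
    synchronization; it is not the thesis's actual realizer."""

    def __init__(self, transitions):
        # transitions: {name: (input_places, output_places)}
        self.transitions = transitions
        self.marking = set()  # places currently holding a token

    def add_token(self, place):
        # An event arriving from a motor/speech subsystem drops a token.
        self.marking.add(place)
        self._fire_enabled()

    def _fire_enabled(self):
        changed = True
        while changed:
            changed = False
            for name, (ins, outs) in self.transitions.items():
                if set(ins) <= self.marking:
                    # Consume input tokens, produce output tokens.
                    self.marking -= set(ins)
                    self.marking |= set(outs)
                    changed = True
```

Because firing depends only on token arrival, a slow arm or delayed speech simply postpones the downstream behaviour instead of breaking a fixed schedule.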

Robots that say 'no': acquisition of linguistic behaviour in interaction games with humans

Förster, Frank, January 2013
Negation is a part of language that humans engage in virtually from the onset of speech. Negation appears at first glance to be harder to grasp than object or action labels, yet this thesis explores how this family of 'concepts' could be acquired in a meaningful way by a humanoid robot, based solely on unconstrained dialogue with a human conversation partner. The earliest forms of negation appear to be linked to the affective or motivational state of the speaker, so we developed a behavioural architecture containing a motivational system. This motivational system feeds its state simultaneously to other subsystems for the purpose of symbol grounding, and also expresses the robot's motivational state via a facial display of emotions and motivationally congruent body behaviours. To achieve the grounding of negative words, we examine two mechanisms that provide an alternative to the established grounding via ostension, with or without joint attention. Two large experiments were conducted to test these mechanisms: one is so-called negative intent interpretation; the other is a combination of physical and linguistic prohibition. Both mechanisms have been described in the literature on early child language development but had never been used in human-robot interaction for the purpose of symbol grounding. As we show, both mechanisms may operate simultaneously, and we can exclude neither as a potential ontogenetic origin of negation.
