  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Joint Torque Feedback for Motion Training with an Elbow Exoskeleton

Kim, Hubert 28 October 2021 (has links)
Joint torque feedback (JTF) is a new and promising means of kinesthetic feedback for conveying information to a person or guiding them during a motion task. However, little work has been done on applying torque feedback to a person. This project evaluates the properties of JTF as haptic feedback, starting from the fabrication of a lightweight elbow haptic exoskeleton. A low-cost hobby motor and easily accessible hardware are used for manufacturing, together with an open-source embedded architecture for data logging; the total cost is $500 and the weight is 509 g. As a prerequisite step to assessing JTF for guidance, human perceptual ability to detect JTF was quantified at the elbow across all combinations of static and dynamic joint states. JTF detection slopes for the various joint conditions were derived using the Interweaving Staircase Method. For either direction of torque feedback, flexional motion requires a 1.89-2.27 times larger speed slope, in mNm/(°/s), than extensional motion. In addition, we find that JTF during isometric contraction of the same-direction muscles yields a larger slope, in mNm/mNm, than for the opposing direction (7.36 times and 1.02 times for extension torque and flexion torque, respectively). Finally, the guidance performance of JTF was evaluated in terms of time delay and position error between the commanded input and the wearer's arm. When studying how far the human arm travels with JTF, the absolute magnitude of the input is more significant than its duration (p-values of <0.0001 and 0.001). In tracking a pulse input, the highest torque stiffness, 95 mNm/°, produced the smallest position error, 6.102 ± 5.117°, even though the applied torque acted as a compulsory stimulus. / Doctor of Philosophy / Joint torque feedback (JTF) is a new and promising means of haptic feedback for conveying information to a person or guiding them during a motion task.
However, little work has been done on applying torque feedback to a person, such as determining how well humans can detect external torques, or how stiff the torque input should be to augment a human motion without interfering with voluntary movement. This project evaluates the properties of JTF as haptic feedback, starting from the fabrication of a lightweight elbow haptic exoskeleton. The novelty of the hardware is that it masks most of the skin receptors, so that the joint receptors are primarily what the body uses to detect external sensations. A low-cost hobby motor and easily accessible hardware are used for manufacturing, together with an open-source software architecture for data logging; the total cost is $500 and the weight is 509 g. As a prerequisite step to assessing JTF for guidance, human perceptual ability to detect JTF was quantified at the elbow across all combinations of static and dynamic joint states. A psychophysics tool called the Interweaving Staircase Method was implemented to derive torque slopes for the various joint conditions. For either direction of torque feedback, flexional motion requires a 1.89-2.27 times larger speed slope, in mNm/(°/s), than extensional motion. In addition, isometric contraction of the muscles in the aiding direction required a larger slope, in mNm/mNm, than in the opposing direction (7.36 times and 1.02 times for extension torque and flexion torque, respectively). Finally, the guidance performance of JTF was evaluated in terms of time delay and position error between the commanded input and the wearer's arm. When studying how far the human arm travels with JTF, the absolute magnitude of the input is more significant than its duration (p-values of <0.0001 and 0.001). In tracking a pulse input, the highest torque stiffness, 95 mNm/°, produced the smallest position error, 6.102 ± 5.117°, even though the applied torque acted as a compulsory stimulus.
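The interleaved (interweaving) staircase procedure named above is a standard psychophysics tool for estimating detection thresholds. The sketch below illustrates the general technique with a simulated observer; the starting levels, step sizes, 1-up/1-down rule, and reversal count are illustrative assumptions, not the thesis's actual protocol.

```python
import random

def run_interleaved_staircases(subject_detects, n_reversals=8):
    """Estimate a detection threshold with two interleaved staircases.

    One staircase starts well above the expected threshold and one well
    below; trials alternate between them, which keeps the subject from
    anticipating the stimulus progression.  Each staircase follows a
    1-up/1-down rule and finishes after `n_reversals` direction reversals;
    the threshold estimate is the mean of all reversal intensities.
    """
    staircases = [
        {"level": 80.0, "step": 8.0, "last": None, "reversals": []},  # descending start
        {"level": 5.0,  "step": 8.0, "last": None, "reversals": []},  # ascending start
    ]
    while any(len(s["reversals"]) < n_reversals for s in staircases):
        for s in staircases:
            if len(s["reversals"]) >= n_reversals:
                continue
            detected = subject_detects(s["level"])
            direction = -1 if detected else +1      # down after "yes", up after "no"
            if s["last"] is not None and direction != s["last"]:
                s["reversals"].append(s["level"])   # record the turnaround level
                s["step"] = max(s["step"] / 2, 1.0) # shrink step at each reversal
            s["last"] = direction
            s["level"] = max(s["level"] + direction * s["step"], 0.0)
    reversal_levels = [r for s in staircases for r in s["reversals"]]
    return sum(reversal_levels) / len(reversal_levels)

if __name__ == "__main__":
    random.seed(0)
    true_threshold = 30.0  # hypothetical detection threshold, e.g. in mNm
    # Noisy simulated observer: detects when the stimulus exceeds threshold.
    observer = lambda level: level + random.gauss(0, 2.0) > true_threshold
    print(round(run_interleaved_staircases(observer), 1))
```

The estimate converges on the simulated observer's threshold regardless of which side each staircase starts from, which is the method's main appeal.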
152

Autonomous Cricothyroid Membrane Detection and Manipulation using Neural Networks and Robot Arm for First-Aid Airway Management

Han, Xiaoxue 02 June 2020 (has links)
The thesis focuses on applying deep learning and reinforcement learning techniques to human keypoint detection and robot arm manipulation. Inspired by the Semi-Autonomous Victim Extraction Robot (SAVER), an autonomous first-aid airway-management robotic system designed to perform cricothyrotomy on patients is proposed. Perception, decision-making, and control are embedded in the system. In this system, first, the location of the cricothyroid membrane (CTM), the incision site of cricothyrotomy, is detected; then, the robot arm is controlled to reach the detected position on a medical manikin. A hybrid neural network (HNNet) that balances speed and accuracy is proposed. HNNet is an ensemble-based network architecture consisting of two ensembles: a region proposal ensemble and a keypoint detection ensemble. This architecture maintains the original high resolution of the input image without heavy computation and meets high-precision and real-time requirements at the same time. A dataset containing more than 16,000 images from 13 people, with a clear view of the neck area and with the CTM position labeled by a medical expert, was built to train and validate the proposed model. It achieved a success rate of 99.6% in detecting the position of the CTM with an error of less than 5 mm. The robot arm manipulator was trained with a reinforcement learning model to reach the detected location. Finally, the detection neural network and the manipulation process are combined into an integrated system. The system was validated in real-life experiments on a human-sized medical manikin using a Kinect V2 camera and a MICO robot arm manipulator. / Master of Science / The thesis focuses on applying deep learning and reinforcement learning techniques to human keypoint detection and robot arm manipulation.
Inspired by the Semi-Autonomous Victim Extraction Robot (SAVER), an autonomous first-aid airway-management robotic system designed to perform cricothyrotomy on patients is proposed. Perception, decision-making, and control are embedded in the system. In this system, first, the location of the cricothyroid membrane (CTM), the incision site of cricothyrotomy, is detected; then, the robot arm is controlled to reach the detected position on a medical manikin. A hybrid neural network (HNNet) that balances speed and accuracy is proposed. HNNet is an ensemble-based network architecture consisting of two ensembles: a region proposal ensemble and a keypoint detection ensemble. This architecture maintains the original high resolution of the input image without heavy computation and meets high-precision and real-time requirements at the same time. Finally, the detection neural network and the manipulation process are combined into an integrated system. The robot arm manipulator was trained with a reinforcement learning model to reach the detected location. The system was validated in real-life experiments on a human-sized medical manikin using an RGB-D camera and a robot arm manipulator.
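The coarse-to-fine idea behind the region-proposal/keypoint split — search a cheap downsampled view, then refine only a full-resolution crop — can be illustrated on a plain 2-D array. This is a sketch of the general technique, not HNNet itself; the function name, downsampling factor, and crop size are invented for illustration.

```python
import numpy as np

def coarse_to_fine_peak(image, downsample=8, crop=32):
    """Locate the brightest point of a 2-D map in two stages.

    Stage 1 (region proposal): scan a heavily downsampled copy, which is
    cheap.  Stage 2 (keypoint refinement): scan only a small full-resolution
    crop around the coarse hit.  The final answer keeps full-pixel precision
    without ever scanning the whole high-resolution image.
    """
    small = image[::downsample, ::downsample]            # cheap low-res view
    cy, cx = np.unravel_index(np.argmax(small), small.shape)
    cy, cx = cy * downsample, cx * downsample            # back to full-res coords
    y0, x0 = max(cy - crop, 0), max(cx - crop, 0)
    patch = image[y0:y0 + 2 * crop, x0:x0 + 2 * crop]    # full-res crop only
    py, px = np.unravel_index(np.argmax(patch), patch.shape)
    return int(y0 + py), int(x0 + px)

if __name__ == "__main__":
    # Synthetic "heatmap" with a smooth peak planted at (300, 200).
    yy, xx = np.mgrid[0:512, 0:512]
    img = np.exp(-((yy - 300) ** 2 + (xx - 200) ** 2) / (2 * 20.0 ** 2))
    print(coarse_to_fine_peak(img))  # -> (300, 200)
```

The peak must be smooth enough to survive downsampling — the same reason heatmap-based keypoint detectors regress blobs rather than single hot pixels.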
153

Autonomous Robotic Escort Incorporating Motion Prediction with Human Intention

Conte, Dean Edward 02 March 2021 (has links)
This thesis presents a framework for a mobile robot to escort a human to their destination successfully and efficiently. The proposed technique uses accurate path prediction incorporating human intention to keep the robot in front of the human while walking. Human intention is inferred from the head pose, an effective, previously proven implicit indicator of intention, and fused with conventional physics-based motion prediction. The human trajectory is estimated and predicted using a particle filter, because human motion is nonlinear and non-Gaussian, and the robot control action is determined from the predicted human pose, allowing for anticipative autonomous escorting. Experimental analysis shows that incorporating the proposed human intention model reduces human position prediction error by approximately 35% when turning. Furthermore, experimental validation with an omnidirectional mobile robotic platform shows escorting that is up to 50% more accurate than conventional techniques, while achieving a 97% success rate. / Master of Science / This thesis presents a method for a mobile robot to escort a human to their destination successfully and efficiently. The proposed technique uses human intention to predict the walking path, allowing the robot to stay in front of the human while walking. Human intention is inferred from the head direction, an effective, previously proven indicator of intention, and is combined with conventional motion prediction. The robot motion is then determined from the predicted human position, allowing for anticipative autonomous escorting. Experimental analysis shows that incorporating the proposed human intention model reduces human position prediction error by approximately 35% when turning. Furthermore, experimental validation with a mobile robotic platform shows escorting that is up to 50% more accurate than conventional techniques, while achieving a 97% success rate.
The unique escorting interaction method proposed has applications such as touch-less shopping cart robots, exercise companions, collaborative rescue robots, and sanitary transportation for hospitals.
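The fusion of physics-based motion prediction with head-pose intention inside a particle filter can be sketched in one prediction step. The state layout, the blending constant `alpha`, and the noise levels below are assumptions for illustration, not the thesis's model.

```python
import numpy as np

def predict_particles(particles, head_yaw, dt=0.1, alpha=0.3, rng=None):
    """One prediction step of a particle filter over a walker's pose.

    Each particle is (x, y, heading, speed).  A pure physics model would
    propagate the current heading unchanged; here the heading is also
    pulled toward the head-pose yaw, treating gaze direction as an early
    hint of an upcoming turn.  `alpha` sets how strongly intention
    overrides momentum and is an assumed tuning constant.
    """
    rng = rng or np.random.default_rng()
    x, y, heading, speed = particles.T
    # Wrapped angular error between intended (head) and current heading.
    err = np.arctan2(np.sin(head_yaw - heading), np.cos(head_yaw - heading))
    heading = heading + alpha * err + rng.normal(0, 0.05, len(particles))
    speed = np.clip(speed + rng.normal(0, 0.02, len(particles)), 0.0, None)
    x = x + speed * np.cos(heading) * dt
    y = y + speed * np.sin(heading) * dt
    return np.stack([x, y, heading, speed], axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 500 particles walking along +x; the head is turned 90 degrees left.
    p = np.zeros((500, 4)); p[:, 3] = 1.2       # heading 0, speed 1.2 m/s
    for _ in range(20):                          # 2 s prediction horizon
        p = predict_particles(p, head_yaw=np.pi / 2, rng=rng)
    print(p[:, 2].mean())                        # mean heading drifts toward pi/2
```

With `alpha = 0`, the sketch degenerates to the conventional physics-only predictor, which is the baseline the thesis improves upon.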
154

Communication-Driven Robot Learning for Human-Robot Collaboration

Habibian, Soheil 25 July 2024 (has links)
The growing presence of modern learning robots necessitates a fundamental shift in design, as these robots must learn skills from human inputs. Two main components close the loop in a human-robot interaction: learning and communication. Learning derives robot behaviors from human inputs, and communication conveys information about the robot's learning to the human. This dissertation focuses on methods that enable robots to communicate their internal state clearly while learning precisely from human inputs. We first consider the information implicitly communicated by robot behavior during human interactions and whether it can be utilized to form human-robot teams. We investigate behavioral economics to identify biases and expectations in human team dynamics and incorporate them into human-robot teams. We develop and demonstrate an optimization approach that relates high-level subtask allocations to low-level robot actions, which implicitly communicates learning to encourage human participation in robot teams. We then study how communication helps humans teach tasks to robots using active learning and interactive imitation learning algorithms. Within the active learning approach, we develop a model that forms a belief over the human's mental model about the robot's learning. We validate that our algorithm enables the robot to balance between learning human preferences and implicitly communicating its learning through questions. Within the imitation learning approach, we integrate a wrapped haptic display that explicitly communicates representations from the robot's learned behavior to the user. We show that our framework helps the human teacher improve different aspects of the robot's learning during kinesthetic teaching. We then extend this system to a more comprehensive interactive learning architecture that provides multi-modal feedback through augmented reality and haptic interfaces. 
We present a case study with this closed-loop system and illustrate improved teaching, trust, and co-adaptation as the measured benefits of communicating robot learning. Overall, this dissertation demonstrates that bi-directional communication helps robots learn faster and adapt better, while humans experience a more intuitive and trust-based interaction. / Doctor of Philosophy / The growing presence of modern learning robots necessitates a fundamental shift in design, as these robots must learn skills from human inputs. This dissertation focuses on methods that enable robots to communicate their internal state clearly while learning precisely from human inputs. We first consider how robot behaviors during human interactions can be used to form human-robot teams. We investigate human-human teams in behavioral economics to better understand human expectations in human-robot teams. We develop a model that enables robots to distribute subtasks in a way that encourages their human partner to keep collaborating with them. We then study how communication helps human-in-the-loop robot teaching. Within active learning, we develop a model that infers what the human thinks about the robot's learning. We validate that, with our algorithm, the robot efficiently learns human preferences and keeps the human updated about what it has learned. Within imitation learning, we integrate a haptic device that explicitly communicates features from the robot's learned behavior to the user. We show that our framework helps users effectively improve their kinesthetic teaching. We then extend this system to a more comprehensive interactive robot learning architecture that provides feedback through augmented reality and haptic interfaces. We conduct a case study and illustrate that our framework improves robot teaching, human trust, and human-robot co-adaptation. 
Overall, this dissertation demonstrates that bi-directional communication helps robots learn faster and adapt better, while humans experience a more intuitive and trust-based interaction.
155

Wie kommt die Robotik zum Sozialen? Epistemische Praktiken der Sozialrobotik. / How does robotics arrive at the social? Epistemic practices of social robotics

Bischof, Andreas 15 July 2016 (has links)
Numerous research projects, backed by large financial and personnel resources, are working to move robots out of factory halls and into everyday settings such as hospitals, kindergartens, and private homes. Their designers face a non-trivial challenge: they must translate the ambivalences and contingencies of everyday interaction into the discrete language of machines. How they meet this challenge, which patterns and solutions they draw on, and which implications for the use of social robots are thereby laid down, is the subject of this book. In search of an answer to what makes robots social, Andreas Bischof visited and ethnographically studied research laboratories and conferences in Europe and North America. The study's key results include a typology of research goals in social robotics, an epistemic genealogy of the idea of the robot in everyday worlds, a reconstruction of how social-robotics development refers to 'real' everyday worlds, and an analysis of three genres of epistemic practices that engineers employ to make robots social.
Contents: Introduction. 1. What is social robotics? 1.1 Making robots and robotics work. 1.2 Three problem dimensions of social robotics. 1.3 State of research in social robotics. 1.4 Problem statement: social robotics as a "wicked problem". 2. Researching, technologizing, and designing. 2.1 Science as (social) practice. 2.2 Technologization and complexity reduction in technology. 2.3 Design, technology, use: technology between contexts of production and effect. 2.4 Social robotics as problem-solving action. 3. Methodology and methods of the study. 3.1 Grounded theory as research style. 3.2 Ethnography and narrative expert interviews. 3.3 Analysis methods and generalization. 3.4 Summary. 4. The robot as universal tool. 4.1 Robots as fictional apparatuses. 4.2 Robotics as a promise of solutions. 4.3 Computer science between science and design. 4.4 Conclusion: the legacy of the universal tool. 5. Research and development goals of social robotics. 5.1 Conditions of project-based research. 5.2 Dimensions and types of goals in social robotics. 5.3 Description of the types based on the distribution of cases. 5.4 Co-construction of the application in case studies. 5.5 Conclusion: types of sociality in development goals. 6. Epistemic practices and instruments of social robotics. 6.1 Practices of laboratizing the social. 6.2 Everyday and implicit heuristics. 6.3 Staging practices. 6.4 Conclusion: interplays of producing and observing. 7. Conclusion. 7.1 Phenomenal structure of social robotics. 7.2 Development as a complexity pendulum. 7.3 A methodological proposal for the development process.
156

Interactive concept acquisition for embodied artificial agents

de Greeff, Joachim January 2013 (has links)
An important capacity that is still lacking in intelligent systems such as robots is the ability to use concepts in a human-like manner. Indeed, the use of concepts has been recognised as fundamental to a wide range of cognitive skills, including classification, reasoning, and memory. Intricately intertwined with language, concepts are at the core of human cognition; but despite a large body of research, their functioning is as yet not well understood. Nevertheless, it remains clear that if intelligent systems are to achieve a level of cognition comparable to humans, they will have to possess the ability to deal with the fundamental role that concepts play in cognition. A promising manner in which conceptual knowledge can be acquired by an intelligent system is through ongoing, incremental development. In this view, a system is situated in the world and gradually acquires skills and knowledge through interaction with its social and physical environment. Important in this regard is the notion that cognition is embodied: both the physical body and the environment shape the manner in which cognition, including the learning and use of concepts, operates. Through active participation in the interaction, an intelligent system may influence its learning experience so as to be more effective. This work presents experiments which illustrate how these notions of interaction and embodiment can influence the learning process of artificial systems. It shows how an artificial agent can benefit from interactive learning: rather than passively absorbing knowledge, the system actively partakes in its learning experience, yielding improved learning. Next, the influence of embodiment on perception is further explored in a case study concerning colour perception, which results in an alternative explanation for why human colour experience is very similar amongst individuals despite physiological differences.
Finally, experiments in which an artificial agent is embodied in a novel robot tailored for human-robot interaction illustrate how active strategies are also beneficial in an HRI setting in which the robot learns from a human teacher.
157

Contribution au développement d'un dispositif de sécurité intelligente pour la cobotique / Contribution to the development of an intelligent safety device for cobotics

Ayoubi, Younsse 10 July 2018 (has links)
Au cours des dernières années, nous avons assisté à un changement de paradigme, passant de la fabrication de robots rigides à des robots compliants. Ceci est dû à plusieurs raisons telles que l'amélioration de l'efficacité des robots dans la réalisation des mouvements explosifs ou cycliques. En fait, l'une des premières motivations à l'origine de ce changement est la sécurité. Parlant de la sécurité à la fois du sujet humain et du robot, tout en s'engageant dans des tâches collaboratives. Ainsi la désignation des cobots. Les cobots peuvent aider un opérateur humain expérimenté dans plusieurs domaines où la précision est essentielle, comme les applications industrielles ou les tâches médicales. Jusqu'à présent, les cobots présentent toujours des problèmes de sécurité, même avec des recommandations réglementaires telles que ISO / TS 15066 et ISO 10218-1 et 2 qui limitent leurs avantages économiques. Dans cette vue, plusieurs projets de recherche ont été lancés dans le monde entier pour améliorer la dynamique des cobots par rapport à la sécurité, ANR-SISCob (Safety Intelligent Sensor for cobots) étant l'un de ces projets. Les travaux menés au cours de cette thèse ont pour but de concevoir des dispositifs de sécurité qui sécuriseront les robots en y introduisant l’aspect de compliance. En effet, nous avons développé deux dispositifs dans lesquels l'aspect sécurité est atteint avec deux approches différentes :- Prismatic Compliant Joint (PCJ) : qui vise à la mise en œuvre dans les articulations linéaires, car peu de travaux ont traité de tels systèmes d'actionnement. 
Ici, la sécurité est atteinte biomimétiquement tout en faisant face à d'autres critères de sécurité liés aux propriétés mécaniques du corps humain. - Variable Stiffness Safety Oriented Mechanism (V2SOM) : Contrairement au premier dispositif d'inspiration biomimétique qui sert aux systèmes d'actionnement linéaires, le profil de sécurité du V2SOM est axé sur la sécurité selon deux critères de sécurité : force d'impact et HIC. L'aspect 'orienté sécurité' est dû à ce que nous appelons la capacité de découplage d'inertie de son profil de rigidité. V2SOM est actuellement dans ses dernières étapes de brevetage. Ces deux appareils seront intégrés dans un robot sériel réalisé dans notre laboratoire. / In recent years, we have witnessed a paradigm shift from making stiff robots toward compliant ones. This is due to several reasons, such as enhancing the efficiency of robots in making explosive or cyclic motions. In fact, one of the earliest motivations from which this change stems is safety: the safety of both the human subject and the robot alike while they engage in a collaborative task, hence the designation 'cobots'. Cobots may assist a well-experienced human operator in several domains where precision is a must, such as industrial applications or medical tasks. Until now, cobots still raise safety concerns, even with regulatory recommendations such as ISO/TS 15066 and ISO 10218-1 and -2, which limits their economic benefits. In this view, several research projects have been launched worldwide to improve the trade-off between cobot dynamics and safety; ANR-SISCob (Safety Intelligent Sensor for cobots) is one of these projects. The work conducted during this thesis aims at making safety devices that render robots safe by introducing a compliance aspect into them. Indeed, we developed two devices in which the safety aspect is achieved with two different approaches: - Prismatic Compliant Joint (PCJ): aimed at implementation in prismatic joints, as few works have dealt with such actuation systems. Here, safety is attained biomimetically while coping with other safety criteria related to the mechanical properties of the human body. - Variable Stiffness Safety Oriented Mechanism (V2SOM): unlike the first device, which is biomimetically inspired and serves linear actuation systems, V2SOM's safety profile is oriented toward two safety criteria, impact force and the Head Injury Criterion (HIC), and it is designed for rotary actuation. The safety-oriented aspect comes from what we call the inertia-decoupling capacity of its stiffness profile. V2SOM is currently in its final patenting process. Both devices will be integrated into a serial robot built in our lab.
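Of the two injury criteria mentioned, the Head Injury Criterion has a standard closed form — the maximum over time windows of the window length times the mean acceleration (in g) raised to the power 2.5 — that is easy to compute directly. Below is a minimal brute-force sketch; the 1 kHz sample rate and the synthetic test pulse are assumptions for illustration, not data from the thesis.

```python
import numpy as np

def head_injury_criterion(accel_g, dt, max_window=0.015):
    """Head Injury Criterion (HIC) of a sampled acceleration trace.

    HIC = max over windows [t1, t2] of  (t2 - t1) * (mean accel)^2.5,
    with acceleration in g and the window length capped (HIC15 caps it
    at 15 ms).  A brute-force scan over all window placements is fine
    for short impact traces.
    """
    n = len(accel_g)
    cum = np.concatenate([[0.0], np.cumsum(accel_g) * dt])  # integral of a(t)
    max_len = int(max_window / dt)
    hic = 0.0
    for i in range(n):
        for j in range(i + 1, min(i + max_len, n) + 1):
            T = (j - i) * dt
            mean_a = (cum[j] - cum[i]) / T
            if mean_a <= 0:
                continue                      # the power 2.5 needs mean_a > 0
            hic = max(hic, T * mean_a ** 2.5)
    return hic

if __name__ == "__main__":
    dt = 0.001                                   # assumed 1 kHz sampling
    t = np.arange(0, 0.02, dt)
    pulse = 60 * np.sin(np.pi * t / 0.02) ** 2   # synthetic 20 ms, 60 g peak pulse
    print(round(head_injury_criterion(pulse, dt), 1))
```

Lowering the peak or spreading the same impulse over a longer time sharply reduces the score — exactly the effect a variable-stiffness joint aims to produce by decoupling the link inertia during a collision.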
158

Integração de sistemas cognitivo e robótico por meio de uma ontologia para modelar a percepção do ambiente / Integration of cognitive and robotic systems through an ontology to model the perception of the environment

Azevedo, Helio 01 August 2018 (has links)
A disseminação do uso de robôs na sociedade moderna é uma realidade. Do começo restrito às operações fabris como pintura e soldagem, até o início de seu uso nas residências, apenas algumas décadas se passaram. A robótica social é uma área de pesquisa que visa desenvolver modelos para que a interação direta de robôs com seres humanos ocorra de forma natural. Um dos fatores que compromete a rápida evolução da robótica social é a dificuldade em integrar sistemas cognitivos e robóticos, principalmente devido ao volume e complexidade da informação produzida por um mundo caótico repleto de dados sensoriais. Além disso, a existência de múltiplas configurações de robôs, com arquiteturas e interfaces distintas, dificulta a verificação e repetibilidade dos experimentos realizados pelos diversos grupos de pesquisa. Esta tese contribui para a evolução da robótica social ao definir uma arquitetura, denominada Cognitive Model Development Environment (CMDE) que simplifica a conexão entre sistemas cognitivos e robóticos. Essa conexão é formalizada com uma ontologia, denominada OntPercept, que modela a percepção do ambiente a partir de informações sensoriais captadas pelos sensores presentes no agente robótico. Nos últimos anos, diversas ontologias foram propostas para aplicações robóticas, mas elas não são genéricas o suficiente para atender completamente às necessidades das áreas de robótica e automação. A formalização oferecida pela OntPercept facilita o desenvolvimento, a reprodução e a comparação de experimentos associados a robótica social. A validação do sistema proposto ocorre com suporte do simulador Robot House Simulator (RHS), que fornece um ambiente onde, o agente robótico e o personagem humano podem interagir socialmente com níveis crescentes de processamento cognitivo. A proposta da CMDE viabiliza a utilização de qualquer sistema cognitivo, em particular, o experimento elaborado para validação desta pesquisa utiliza Soar como arquitetura cognitiva. 
Em conjunto, os elementos: arquitetura CMDE, ontologia OntPercept e simulador RHS, todos disponibilizados livremente no GitHub, estabelecem um ambiente completo que propiciam o desenvolvimento de experimentos envolvendo sistemas cognitivos dirigidos para a área de robótica social. / The use of robots in modern society is a reality. Only a few decades separate their beginnings, restricted to manufacturing operations such as painting and welding, from their first use in homes. Social robotics is a research area that aims to develop models so that direct interaction between robots and humans occurs naturally. One of the factors holding back the rapid evolution of social robotics is the difficulty of integrating cognitive and robotic systems, mainly due to the volume and complexity of the information produced by a chaotic world full of sensory data. In addition, the existence of multiple robot configurations, with different architectures and interfaces, makes it difficult to verify and reproduce the experiments performed by different research groups. This research contributes to the evolution of social robotics by defining an architecture, called the Cognitive Model Development Environment (CMDE), which simplifies the connection between cognitive and robotic systems. This connection is formalized with an ontology, called OntPercept, which models the perception of the environment from the sensory information captured by the sensors present in the robotic agent. In recent years, several ontologies have been proposed for robotic applications, but they are not generic enough to fully address the needs of robotics and automation. The formalization offered by OntPercept facilitates the development, reproduction, and comparison of experiments associated with social robotics.
The validation of the proposed system is supported by the Robot House Simulator (RHS), which provides an environment where the robotic agent and a human character can interact socially with increasing levels of cognitive processing. Together, the CMDE architecture, the OntPercept ontology, and the RHS simulator, all freely available on GitHub, establish a complete environment for developing experiments involving cognitive systems aimed at social robotics.
159

Human-humanoid collaborative object transportation / Transport collaboratif homme/humanoïde

Agravante, Don Joven 16 December 2015 (has links)
Les robots humanoïdes sont les plus appropriés pour travailler en coopération avec l'homme. En effet, puisque les humains sont naturellement habitués à collaborer entre eux, un robot avec des capacités sensorielles et de locomotion semblables aux leurs, sera le plus adapté. Cette thèse vise à rendre les robots humanoïdes capables d'aider l'homme, afin de concevoir des 'humanoïdes collaboratifs'. On considère ici la tâche de transport collaboratif d'objets. D'abord, on montre comment l'utilisation simultanée de vision et de données haptiques peut améliorer la collaboration. Une stratégie combinant asservissement visuel et commande en admittance est proposée, puis validée dans un scénario de transport collaboratif homme/humanoïde. Ensuite, on présente un algorithme de génération de marche, prenant intrinsèquement en compte la collaboration physique. Cet algorithme peut être spécifié suivant que le robot guide (leader) ou soit guidé (follower) lors de la tâche. Enfin, on montre comment le transport collaboratif d'objets peut être réalisé dans le cadre d'un schéma de commande optimale pour le corps complet. / Humanoid robots provide many advantages when working together with humans to perform various tasks. Since humans in general have a lot of experience in physically collaborating with each other, a humanoid with a similar range of motion and sensing has the potential to do the same. This thesis is focused on enabling humanoids that can do such tasks together with humans: collaborative humanoids. In particular, we use the example where a humanoid and a human collaboratively carry and transport objects together. However, there is much to be done in order to achieve this. Here, we first focus on utilizing vision and haptic information together to enable better collaboration. More specifically, the use of vision-based control together with admittance control is tested as a framework for enabling the humanoid to better collaborate by having its own notion of the task.
Next, we detail how walking pattern generators can be designed taking physical collaboration into account. For this, we create leader- and follower-type walking pattern generators. Finally, the task of collaboratively carrying an object together with a human is broken down and implemented within an optimization-based whole-body control framework.
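The admittance-control half of the vision/haptic framework can be sketched in one dimension: measured interaction force is mapped through virtual mass-damper dynamics into a compliant reference motion. The virtual mass and damping values below are illustrative assumptions, not the thesis's gains.

```python
def admittance_step(f_ext, x, v, dt, M=2.0, D=25.0, K=0.0):
    """One Euler integration step of a 1-DoF admittance controller.

    The controller renders the virtual dynamics  M*a + D*v + K*x = f_ext:
    an external force measured at the wrist becomes a compliant reference
    motion that the position-controlled robot then tracks.  With K = 0 the
    robot behaves like a damped free mass, a common choice for following
    a human partner's lead during co-transport.
    """
    a = (f_ext - D * v - K * x) / M   # virtual acceleration
    v = v + a * dt                    # reference velocity
    x = x + v * dt                    # reference position
    return x, v

if __name__ == "__main__":
    x, v = 0.0, 0.0
    for _ in range(2000):             # human pushes with a steady 5 N for 2 s
        x, v = admittance_step(5.0, x, v, dt=0.001)
    print(round(v, 3))                # settles near f/D = 0.2 m/s
```

The steady-state velocity f/D makes the damping gain an intuitive knob: lower D lets the human drag the robot around more freely, higher D makes it feel heavier.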
160

Adapting robot behaviour in smart homes : a different approach using personas

Duque Garcia, Ismael January 2017 (has links)
A challenge in Human-Robot Interaction is tailoring the social skills of robot companions to match those expected by individual humans during their first encounter. Currently, large amounts of user data are needed to configure robot companions with these skills. This creates the need to run long-term Human-Robot Interaction studies in domestic environments. A new approach using personas is explored to alleviate this arduous data-collection task without compromising the level of interaction currently shown by robot companions. The personas technique was created by Alan Cooper in 1999 as a tool to define user archetypes of a system, in order to reduce the involvement of real users during the development process of a target system. This technique has proven beneficial in Human-Computer Interaction for years; similar benefits could therefore be expected when applying personas to Human-Robot Interaction. Our novel approach defines personas as the key component of a computational behaviour model used to adapt robot companions to individual users' needs. This approach reduces the amount of user data that must be collected before a Human-Robot Interaction study by associating new users with pre-defined personas that adapt the robot's behaviours through their integration with the computational behaviour model, while preserving the level of social interaction humans expect from the robot during the first encounter. The University of Hertfordshire Robot House provided the naturalistic domestic environment for the investigation. After incorporating a new module, an Activity Recognition System, to increase the overall context-awareness of the system, a computational behaviour model was defined through an iterative research process. The initial definition of the model evolved after each experiment based on the findings. Two successive studies investigated personas and determined the steps to follow for their integration into the targeted model.
The final model presented was defined from users' preferences and needs when interacting with a robot companion during activities of daily living at home. The main challenge was identifying the variables that match users to personas in our model. This approach opens a new discussion in the Human-Robot Interaction field on defining tools that help reduce the amount of user data requiring collection prior to the first interaction with a robot companion in a domestic environment. We conclude that modelling people's preferences when interacting with robot companions is a challenging approach. Integrating the Human-Computer Interaction technique into a computational behaviour model for Human-Robot Interaction studies was more difficult than anticipated. This investigation shows the advantages and disadvantages of introducing this technique into Human-Robot Interaction, and explores the challenges in defining a personas-based computational behaviour model. The continuous learning process experienced helps clarify the steps that other researchers in the field should follow when investigating a similar approach. Some interesting outcomes and trends were also found among users' data, which encourage the belief that the personas technique can be further developed to tackle some of the current difficulties highlighted in the Human-Robot Interaction literature.
