191

AR-Supported Supervision of Conditional Autonomous Robots: Considerations for Pedicle Screw Placement in the Future

Schreiter, Josefine, Schott, Danny, Schwenderling, Lovis, Hansen, Christian, Heinrich, Florian, Joeres, Fabian 16 May 2024
Robotic assistance is applied in orthopedic interventions for pedicle screw placement (PSP). While current robots do not act autonomously, they are expected to gain higher autonomy under surgeon supervision in the mid-term. Augmented reality (AR) is a promising means of supporting this supervision and enabling human–robot interaction (HRI). To outline a futuristic scenario for robotic PSP, the current workflow was analyzed through a literature review and expert discussion. Based on this, a hypothetical workflow of the intervention was developed, including an analysis of the necessary information exchange between human and robot. A video see-through AR prototype was designed and implemented. A robotic arm with an orthopedic drill mock-up simulated the robotic assistance. The AR prototype included a user interface to enable HRI. The interface provides data to facilitate understanding of the robot’s “intentions”, e.g., patient-specific CT images, the current workflow phase, or the next planned robot motion. Two-dimensional and three-dimensional visualizations illustrated patient-specific medical data and the drilling process. The findings of this work contribute a valuable approach to addressing future clinical needs and highlight the importance of AR support for HRI.
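As an illustration of the phase-dependent information exchange such an interface supports, the supervisor display could be sketched as a simple mapping from workflow phase to the data shown. The phase names and mapping below are invented for illustration; the thesis derives its own workflow.

```python
# Hypothetical mapping from workflow phase to the information the AR
# interface surfaces to the supervising surgeon (phase names invented
# for illustration; the thesis derives its own workflow phases).
PHASE_DISPLAY = {
    "planning":  ["patient-specific CT images", "planned screw trajectories"],
    "alignment": ["current workflow phase", "next planned robot motion"],
    "drilling":  ["3-D drill-progress visualization", "remaining depth"],
}

def supervisor_view(phase):
    """Return what the AR user interface should show in a given phase."""
    return PHASE_DISPLAY.get(phase, ["workflow phase unknown -- request robot status"])
```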
192

GENTLE/A : adaptive robotic assistance for upper-limb rehabilitation

Gudipati, Radhika January 2014
Advanced devices that can assist therapists in delivering rehabilitation are in high demand as rehabilitation needs grow. The primary requirement for such rehabilitative devices is to reduce therapist monitoring time. If the training device can autonomously adapt to the performance of the user, it can make the rehabilitation partly self-manageable. The main goal of our research is therefore to investigate how to make a rehabilitation system more adaptable. The strategy we followed to augment the adaptability of the GENTLE/A robotic system was to (i) identify the parameters that indicate the contribution of the user/robot during a human-robot interaction session and (ii) use these parameters as performance indicators to adapt the system. Three main studies were conducted with healthy participants during the course of this PhD. The first study identified that the difference between the position coordinates recorded by the robot and those of the reference trajectory indicates whether the user is leading or lagging with respect to the robot. Using this lead-lag model we proposed two strategies to enhance the adaptability of the system. The first adaptability strategy tuned the performance time to suit the user’s requirements (second study). The second adaptability strategy tuned the task difficulty level based on the user’s leading or lagging status (third study). In summary, the research undertaken during this PhD successfully enhanced the adaptability of the GENTLE/A system. The adaptability strategies evaluated were designed to suit various stages of recovery. Apart from potential use for remote assessment of patients, the work presented in this thesis is applicable in many areas of human-robot interaction research where a robot and a human are involved in physical interaction.
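The lead/lag indicator from the first study can be sketched as a signed projection of the position error onto the reference direction of travel: a positive projection means the user is ahead of the robot, a negative one that the user is behind. This is a hypothetical reconstruction; the GENTLE/A system's actual computation may differ.

```python
def lead_lag(recorded, reference):
    """Classify the user as leading (+1), lagging (-1), or on-pace (0)
    at each time step by projecting the position error onto the
    reference direction of travel.

    recorded, reference: lists of (x, y, z) samples at matching time steps.
    Illustrative sketch only; not the thesis's exact indicator.
    """
    labels = []
    for i in range(1, len(reference)):
        # Direction of travel along the reference trajectory at step i
        d = [reference[i][k] - reference[i - 1][k] for k in range(3)]
        # Position error between the recorded point and the reference point
        e = [recorded[i][k] - reference[i][k] for k in range(3)]
        proj = sum(dk * ek for dk, ek in zip(d, e))
        labels.append(1 if proj > 1e-9 else (-1 if proj < -1e-9 else 0))
    return labels
```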
193

A differential-based parallel force/velocity actuation concept : theory and experiments

Rabindran, Dinesh, 1978- 05 February 2010
Robots are now moving from their conventional confined habitats, such as factory floors, to human environments where they assist and physically interact with people. The requirement for inherent mechanical safety is overarching in such human-robot interaction systems. We propose a dual actuator called the Parallel Force/Velocity Actuator (PFVA) that combines a Force Actuator (FA) (low-velocity input) and a Velocity Actuator (VA) (high-velocity input) using a differential gear train. In this arrangement, mechanical safety can be achieved by limiting the torque on the FA, thus making it a backdriveable input. In addition, the kinematic redundancy in the drive can be used to control output velocity while satisfying secondary operational objectives. Our research focus was on three areas: (i) scalable parametric design of the PFVA, (ii) analytical modeling of the PFVA and experimental testing on a single-joint prototype, and (iii) generalized model formulation for PFVA-driven serial robot manipulators. In our analysis, the ratio of velocity ratios between the FA and the VA, called the relative scale factor, emerged as a purely geometric and dominant design parameter. Based on a dimensionless parametric design of PFVAs using power-flow and load distributions between the inputs, a prototype was designed and built using commercial off-the-shelf components. Using controlled experiments, two performance-limiting phenomena in our prototype, friction and dynamic coupling between the two inputs, were identified. Two other experiments were conducted to characterize the operational performance of the actuator in velocity mode and in what we call ‘torque-limited’ mode (i.e., when the FA input can be backdriven). Our theoretical and experimental results showed that the PFVA can be mechanically safe in both slow collisions and impacts due to the backdriveability of the FA.
Also, we show that its kinematic redundancy can be effectively utilized to mitigate low-velocity friction and backlash in geared mechanisms. The system-level implications of our actuator-level analytical and experimental work were studied using a generalized dynamic modeling framework based on kinematic influence coefficients. Based on this dynamic model, three design case studies for a PFVA-driven serial planar 3R manipulator were presented. The major contributions of this research include (i) mathematical models and physical understanding for over six fundamental design and operational parameters of the PFVA, based on which approximately ten design and five operational guidelines were laid out, (ii) analytical and experimental proof-of-concept for the mechanical safety feature of the PFVA and the effective utilization of its kinematic redundancy, (iii) an experimental methodology to characterize the dynamic coupling between the inputs in a differential-summing mechanism, and (iv) a generalized dynamic model formulation for PFVA-driven serial robot manipulators with emphasis on the distribution of output loads between the FA and VA input-sets.
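A minimal numerical sketch of the two ideas the abstract centers on: differential summing of the two inputs (whose ratio of velocity ratios is the relative scale factor), and torque limiting on the FA so it stays backdriveable. The function names and the specific velocity ratios are illustrative assumptions; the real PFVA parameters come from the dimensionless design study.

```python
def pfva_output_speed(omega_fa, omega_va, g_fa, g_va):
    """Differential summing: the output speed is a fixed linear combination
    of the Force Actuator (FA) and Velocity Actuator (VA) input speeds.
    g_fa and g_va are the per-input velocity ratios; g_va / g_fa is the
    'relative scale factor' discussed in the abstract.  (Illustrative
    model only -- signs and ratios depend on the actual gear train.)"""
    return g_fa * omega_fa + g_va * omega_va

def fa_torque_saturated(tau_cmd, tau_limit):
    """Mechanical-safety sketch: clamp the FA torque so that a collision
    can backdrive the FA input instead of being rigidly resisted."""
    return max(-tau_limit, min(tau_limit, tau_cmd))
```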
194

Apprendre à un robot à reconnaître des objets visuels nouveaux et à les associer à des mots nouveaux : le rôle de l’interface

Rouanet, Pierre 04 April 2012
Cette thèse s’intéresse au rôle de l’interface dans l’interaction humain-robot pour l’apprentissage. Elle étudie comment une interface bien conçue peut aider les utilisateurs non-experts à guider l’apprentissage social d’un robot, notamment en facilitant les situations d’attention partagée. Nous étudierons comment l’interface peut rendre l’interaction plus robuste, plus intuitive, mais aussi peut pousser les humains à fournir les bons exemples d’apprentissage qui amélioreront les performances de l’ensemble du système. Nous examinerons cette question dans le cadre de la robotique personnelle où l’apprentissage social peut jouer un rôle clé dans la découverte et l’adaptation d’un robot à son environnement immédiat. Nous avons choisi d’étudier le rôle de l’interface sur une instance particulière d’apprentissage social : l’apprentissage conjoint d’objets visuels et de mots nouveaux par un robot en interaction avec un humain non-expert. Ce défi représente en effet un levier important du développement de la robotique personnelle, l’acquisition du langage chez les robots et la communication entre un humain et un robot. Nous avons particulièrement étudié les défis d’interaction tels que le pointage et l’attention partagée.Nous présenterons au chapitre 1 une description de notre contexte applicatif : la robotique personnelle. Nous décrirons ensuite au chapitre 2 les problématiques liées au développement de robots sociaux et aux interactions avec l’homme. Enfin, au chapitre 3 nous présenterons la question de l’interface dans l’acquisition des premiers mots du langage chez les robots. La démarche centrée utilisateur suivie tout au long du travail de cette thèse sera décrite au chapitre 4. Dans les chapitres suivants, nous présenterons les différentes contributions de cette thèse. Au chapitre 5, nous montrerons comment des interfaces basées sur des objets médiateurs peuvent permettre de guider un robot dans un environnement du quotidien encombré. 
Au chapitre 6, nous présenterons un système complet basé sur des interfaces humain-robot, des algorithmes de perception visuelle et des mécanismes d’apprentissage, afin d’étudier l’impact des interfaces sur la qualité des exemples d’apprentissage d’objets visuels collectés. Une évaluation à grande échelle de ces interfaces, conçue sous forme de jeu robotique afin de reproduire des conditions réalistes d’utilisation hors-laboratoire, sera décrite au chapitre 7. Au chapitre 8, nous présenterons une extension de ce système permettant la collecte semi-automatique d’exemples d’apprentissage d’objets visuels. Nous étudierons ensuite la question de l’acquisition conjointe de mots vocaux nouveaux associés aux objets visuels dans le chapitre 9. Nous montrerons comment l’interface peut permettre d’améliorer les performances du système de reconnaissance vocale, et de faire directement catégoriser les exemples d’apprentissage à l’utilisateur à travers des interactions simples et transparentes. Enfin, les limites et extensions possibles de ces contributions seront présentées au chapitre 10. / This thesis is interested in the role of interfaces in human-robot interactions for learning. In particular it studies how a well conceived interface can aid users, and more specifically non-expert users, to guide social learning of a robotic student, notably by facilitating situations of joint attention. We study how the interface can make the interaction more robust, more intuitive, but can also push the humans to provide good learning examples which permits the improvement of performance of the system as a whole. We examine this question in the realm of personal robotics where social learning can play a key role in the discovery and adaptation of a robot in its immediate environment. 
We have chosen to study this question of the role of the interface in social learning within a particular instance of learning: the combined learning of visual objects and new words by a robot interacting with a non-expert human. Indeed, this challenge represents an important lever in the development of personal robotics, the acquisition of language by robots, and natural communication between a human and a robot. We have studied in particular the interaction challenges of pointing and joint attention. We first present in Chapter 1 a description of our application context: personal robotics. We then describe in Chapter 2 the problems specifically linked to the development of social robots and their interactions with people. In Chapter 3, we present the question of the interface in the acquisition of the first words of language by a robot. The user-centered approach followed throughout this thesis is described in Chapter 4. In the following chapters, we present the different contributions of this thesis. In Chapter 5, we show how interfaces based on mediator objects can guide a personal robot in a cluttered home environment. In Chapter 6, we present a complete system based on human-robot interfaces, visual perception algorithms, and machine learning, in order to study the impact of interfaces, and more specifically of different kinds of feedback about what the robot perceives, on the quality of the collected learning examples of visual objects. A large-scale user study of these interfaces, designed in the form of a robotic game to reproduce realistic conditions of use outside the laboratory, is described in Chapter 7. In Chapter 8, we present an extension of the system that allows the semi-automatic collection of learning examples of visual objects. We then study the question of the combined acquisition of new spoken words associated with visual objects in Chapter 9.
We show that the interface can both improve the performance of the speech recognition system and let the user directly categorize the learning examples through simple and transparent interactions. Finally, a discussion of the limits and possible extensions of these contributions is presented in Chapter 10.
195

Analyse acoustique de la voix émotionnelle de locuteurs lors d’une interaction humain-robot / Acoustic analysis of speakers emotional voices during a human-robot interaction

Tahon, Marie 15 November 2012
Mes travaux de thèse s'intéressent à la voix émotionnelle dans un contexte d'interaction humain-robot. Dans une interaction réaliste, nous définissons au moins quatre grands types de variabilités : l'environnement (salle, microphone); le locuteur, ses caractéristiques physiques (genre, âge, type de voix) et sa personnalité; ses états émotionnels; et enfin le type d'interaction (jeu, situation d'urgence ou de vie quotidienne). A partir de signaux audio collectés dans différentes conditions, nous avons cherché, grâce à des descripteurs acoustiques, à imbriquer la caractérisation d'un locuteur et de son état émotionnel en prenant en compte ces variabilités.Déterminer quels descripteurs sont essentiels et quels sont ceux à éviter est un défi complexe puisqu'il nécessite de travailler sur un grand nombre de variabilités et donc d'avoir à sa disposition des corpus riches et variés. Les principaux résultats portent à la fois sur la collecte et l'annotation de corpus émotionnels réalistes avec des locuteurs variés (enfants, adultes, personnes âgées), dans plusieurs environnements, et sur la robustesse de descripteurs acoustiques suivant ces quatre variabilités. Deux résultats intéressants découlent de cette analyse acoustique: la caractérisation sonore d'un corpus et l'établissement d'une liste "noire" de descripteurs très variables. Les émotions ne sont qu'une partie des indices paralinguistiques supportés par le signal audio, la personnalité et le stress dans la voix ont également été étudiés. Nous avons également mis en oeuvre un module de reconnaissance automatique des émotions et de caractérisation du locuteur qui a été testé au cours d'interactions humain-robot réalistes. Une réflexion éthique a été menée sur ces travaux. / This thesis deals with emotional voices during a human-robot interaction. 
In a natural interaction, we define at least four kinds of variability: the environment (room, microphone); the speaker, with his or her physical characteristics (gender, age, voice type) and personality; the emotional states; and finally the kind of interaction (game scenario, emergency, everyday life). From audio signals collected in different conditions, we sought, by means of acoustic features, to intertwine the characterisation of a speaker and of his or her emotional state while taking these variabilities into account. Determining which features are essential and which should be avoided is a hard challenge, since it requires working across a large number of variabilities and therefore having rich and varied corpora at one's disposal. The main results concern both the collection and annotation of realistic emotional corpora with varied speakers (children, adults, elderly people) in several environments, and the robustness of acoustic features across these four variabilities. Two interesting results follow from this acoustic analysis: the audio characterisation of a corpus and the establishment of a "black list" of highly variable features. Emotions are only one of the paralinguistic cues carried by the audio signal; personality and stress in the voice were also studied. We also implemented an automatic emotion recognition and speaker characterisation module that was tested during realistic human-robot interactions. An ethical reflection on this work was also carried out.
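The "black list" idea (discarding acoustic features that vary too much across recording conditions to be reliable) can be sketched as a coefficient-of-variation screen. The threshold and the criterion below are illustrative assumptions, not the thesis's actual procedure.

```python
from statistics import mean, pstdev

def blacklist_features(feature_table, threshold=0.5):
    """Flag acoustic features whose values vary too much across recording
    conditions to be reliable (a 'black list', as in the abstract).

    feature_table: {feature_name: [value_in_condition_1, value_in_condition_2, ...]}
    A feature is blacklisted when its coefficient of variation
    (population std / |mean|) exceeds `threshold`.
    """
    black = []
    for name, values in feature_table.items():
        m = mean(values)
        if m == 0:
            continue  # coefficient of variation undefined; skip
        if pstdev(values) / abs(m) > threshold:
            black.append(name)
    return sorted(black)
```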
196

Um simulador para robótica social aplicado a ambientes internos / A simulator for social robotics applied to indoor environments

Belo, José Pedro Ribeiro 26 March 2018
A robótica social representa um ramo da interação humano-robô que visa desenvolver robôs para atuar em ambientes não estruturados em parceria direta com seres humanos. O relatório A Roadmap for U.S. Robotics From Internet to Robotics, de 2013, preconiza a obtenção de resultados promissores em 12 anos desde que condições apropriadas sejam disponibilizadas para a área. Uma das condições envolve a utilização de ambiente de referência para desenvolver, avaliar e comparar o desempenho de sistemas cognitivos. Este ambiente é denominado Robot City com atores, cenários (casas, ruas, cidade) e auditores. Até o momento esse complexo ambiente não se concretizou, possivelmente devido ao elevado custo de implantação e manutenção de uma instalação desse porte. Nesta dissertação é proposto um caminho alternativo através da definição e implementação do simulador de sistemas cognitivos denominado Robot House Simulator (RHS). O simulador RHS tem como objetivo disponibilizar um ambiente residencial composto por sala e cozinha, no qual convivem dois agentes, um robô humanoide e um avatar humano. O agente humano é controlado pelo usuário do sistema e o robô é controlado por uma arquitetura cognitiva que determina o comportamento do robô. A arquitetura cognitiva estabelece sua percepção do ambiente através de informações sensoriais supridas pelo RHS e modeladas por uma ontologia denominada OntSense. A utilização de uma ontologia garante rigidez formal aos dados sensoriais além de viabilizar um alto nivel de abstração. O RHS tem como base a ferramenta de desenvolvimento de jogos Unity sendo aderente ao conceito de código aberto com disponibilização pelo repositório online GitHub. A validação do sistema foi realizada através de experimentos que demonstraram a capacidade do simulador em prover um ambiente de validação para arquiteturas cognitivas voltadas à robótica social. 
O RHS é pioneiro na integração de um simulador e uma arquitetura cognitiva, além disto, é um dos poucos direcionados para robótica social provendo rica informação sensorial, destacando-se o modelamento inédito disponibilizado para os sentidos de olfato e paladar. / Social robotics represents a branch of human-robot interaction that aims to develop robots to work in unstructured environments in direct partnership with humans. The 2013 report A Roadmap for U.S. Robotics: From Internet to Robotics predicts promising results within 12 years, provided that appropriate conditions are made available to the area. One of these conditions involves the use of a reference environment to develop, evaluate, and compare the performance of cognitive systems. This environment is called Robot City, with actors, scenarios (houses, streets, city), and auditors. To date, this complex environment has not materialized, possibly due to the high cost of installing and maintaining a facility of this size. In this dissertation an alternative path is proposed through the definition and implementation of a cognitive-systems simulator called the Robot House Simulator (RHS). The RHS simulator aims to provide a residential environment composed of a living room and kitchen, in which two agents live together: a humanoid robot and a human avatar. The human avatar is controlled by the user of the system, and the robot is controlled by a cognitive architecture that determines its behavior. The cognitive architecture establishes its perception of the environment through sensory information supplied by the RHS and modeled by an ontology called OntSense. The use of an ontology guarantees formal rigor for the sensory data, in addition to enabling a high level of abstraction. The RHS simulator is based on the Unity game engine and adheres to the open-source concept, being available on the GitHub online repository.
The validation of the system was performed through experiments that demonstrated the simulator's ability to provide a validation environment for cognitive architectures aimed at social robotics. The RHS simulator pioneers the integration of a simulator and a cognitive architecture. In addition, it is one of the few simulators for social robotics to provide rich sensory information, most notably the unprecedented modeling of the senses of smell and taste.
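To illustrate the kind of formally typed sensory message such an ontology enables, here is a hypothetical event structure passed from simulator to cognitive architecture. The field names are invented for illustration; the real OntSense ontology defines its own vocabulary.

```python
from dataclasses import dataclass, field
import time

@dataclass
class SenseEvent:
    """One sensory reading passed from the RHS simulator to the cognitive
    architecture.  Field names are hypothetical, but they illustrate the
    idea of formally typed sensor data, including the smell and taste
    channels the abstract highlights."""
    modality: str       # "vision" | "hearing" | "touch" | "smell" | "taste"
    source_object: str  # id of the perceived object in the scene
    value: dict         # modality-specific payload, e.g. {"odor": "coffee"}
    timestamp: float = field(default_factory=time.time)
```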
197

Reconhecimento visual de gestos para imitação e correção de movimentos em fisioterapia guiada por robô / Visual gesture recognition for mimicking and correcting movements in robot-guided physiotherapy

Gambirasio, Ricardo Fibe 16 November 2015
O objetivo deste trabalho é tornar possível a inserção de um robô humanoide para auxiliar pacientes em sessões de fisioterapia. Um sistema robótico é proposto que utiliza um robô humanoide, denominado NAO, visando analisar os movimentos feitos pelos pacientes e corrigi-los se necessário, além de motivá-los durante uma sessão de fisioterapia. O sistema desenvolvido permite que o robô, em primeiro lugar, aprenda um exercício correto de fisioterapia observando sua execução por um fisioterapeuta; em segundo lugar, que ele demonstre o exercício para que um paciente possa imitá-lo; e, finalmente, corrija erros cometidos pelo paciente durante a execução do exercício. O exercício correto é capturado por um sensor Kinect e dividido em uma sequência de estados em dimensão espaço-temporal usando k-means clustering. Estes estados então formam uma máquina de estados finitos para verificar se os movimentos do paciente estão corretos. A transição de um estado para o próximo corresponde a movimentos parciais que compõem o movimento aprendido, e acontece somente quando o robô observa o mesmo movimento parcial executado corretamente pelo paciente; caso contrário o robô sugere uma correção e pede que o paciente tente novamente. O sistema foi testado com vários pacientes em tratamento fisioterapêutico para problemas motores. Os resultados obtidos, em termos de precisão e recuperação para cada movimento, mostraram-se muito promissores. Além disso, o estado emocional dos pacientes foi também avaliado por meio de um questionário aplicado antes e depois do tratamento e durante o tratamento com um software de reconhecimento facial de emoções e os resultados indicam um impacto emocional bastante positivo e que pode vir a auxiliar pacientes durante tratamento fisioterapêuticos. / This dissertation develops a robotic system to guide patients through physiotherapy sessions. 
The proposed system uses the humanoid robot NAO, and it analyses patients' movements to guide, correct, and motivate them during a session. Firstly, the system learns a correct physiotherapy exercise by observing a physiotherapist perform it; secondly, it demonstrates the exercise so that the patient can reproduce it; and finally, it corrects any mistakes that the patient might make during the exercise. The correct exercise is captured via a Kinect sensor and divided into a sequence of states in the spatio-temporal domain using k-means clustering. Those states compose a finite state machine that is used to verify whether the patient's movements are correct. The transition from one state to the next corresponds to partial movements that compose the learned exercise. If the patient executes a partial movement incorrectly, the system suggests a correction and returns to the same state, asking the patient to try again. The system was tested with multiple patients undergoing physiotherapeutic treatment for motor impairments. Based on the results obtained, the system achieved high precision and recall across all partial movements. The emotional impact of the treatment on patients was also measured, via before-and-after questionnaires and via software that recognizes emotions from video taken during treatment, showing a positive impact that could help motivate physiotherapy patients, improving their motivation and recovery.
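The finite-state verification described above can be sketched as follows, assuming the ordered state centroids (e.g., from k-means over the therapist's demonstration) are already available. The tolerance test and all names are illustrative assumptions, not the dissertation's exact matching criterion.

```python
def check_exercise(patient_frames, state_centroids, tol=0.2):
    """Finite-state verification of an exercise.

    state_centroids: ordered list of reference poses (tuples of joint
    coordinates), e.g. k-means centroids of the therapist's demonstration.
    The machine advances to the next state only when a patient frame comes
    within `tol` (Euclidean distance) of that state's centroid; otherwise
    it stays put, which is where the robot would suggest a correction and
    ask the patient to try again.  Returns the index of the last state
    reached (== len(state_centroids) when the exercise is completed).
    """
    state = 0
    for frame in patient_frames:
        if state >= len(state_centroids):
            break  # exercise already completed
        target = state_centroids[state]
        dist = sum((f - t) ** 2 for f, t in zip(frame, target)) ** 0.5
        if dist <= tol:
            state += 1  # partial movement executed correctly
    return state
```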
198

Prédiction du mouvement humain pour la robotique collaborative : du geste accompagné au mouvement corps entier / Movement Prediction for human-robot collaboration : from simple gesture to whole-body movement

Dermy, Oriane 17 December 2018
Cette thèse se situe à l’intersection de l’apprentissage automatique et de la robotique humanoïde, dans le domaine de la robotique collaborative. Elle se focalise sur les interactions non verbales humain-robot, en particulier sur l’interaction gestuelle. La prédiction de l’intention, la compréhension et la reproduction de gestes sont les questions centrales de cette thèse. Dans un premier temps, le robot apprend des gestes par démonstration : un utilisateur prend le bras du robot et lui fait réaliser les gestes à apprendre plusieurs fois. Le robot doit alors reproduire ces différents mouvements tout en les généralisant pour les adapter au contexte. Pour cela, à l’aide de ses capteurs proprioceptifs, il interprète les signaux perçus pour comprendre le mouvement guidé par l’utilisateur, afin de pouvoir en générer des similaires. Dans un second temps, le robot apprend à reconnaître l’intention de l’humain avec lequel il interagit, à partir des gestes que ce dernier initie. Le robot produit ensuite des gestes adaptés à la situation et correspondant aux attentes de l’utilisateur. Cela nécessite que le robot comprenne la gestuelle de l’utilisateur. Pour cela, différentes modalités perceptives ont été explorées. À l’aide de capteurs proprioceptifs, le robot ressent les gestes de l’utilisateur au travers de son propre corps : il s’agit alors d’interaction physique humain-robot. À l’aide de capteurs visuels, le robot interprète le mouvement de la tête de l’utilisateur. Enfin, à l’aide de capteurs externes, le robot reconnaît et prédit le mouvement corps entier de l’utilisateur. Dans ce dernier cas, l’utilisateur porte lui-même des capteurs (vêtement X-Sens) qui transmettent sa posture au robot. De plus, le couplage de ces modalités a été étudié. D’un point de vue méthodologique, nous nous sommes focalisés sur les questions d’apprentissage et de reconnaissance de gestes. 
Une première approche permet de modéliser statistiquement des primitives de mouvements representant les gestes : les ProMPs. La seconde, ajoute à la première du Deep Learning, par l’utilisation d’auto-encodeurs, afin de modéliser des gestes corps entier contenant beaucoup d’informations, tout en permettant une prédiction en temps réel mou. Différents enjeux ont notamment été pris en compte, concernant la prédiction des durées des trajectoires, la réduction de la charge cognitive et motrice imposée à l’utilisateur, le besoin de rapidité (temps réel mou) et de précision dans les prédictions / This thesis lies at the intersection between machine learning and humanoid robotics, under the theme of human-robot interaction and within the cobotics (collaborative robotics) field. It focuses on prediction for non-verbal human-robot interactions, with an emphasis on gestural interaction. The prediction of the intention, understanding, and reproduction of gestures are therefore central topics of this thesis. First, the robots learn gestures by demonstration: a user grabs its arm and makes it perform the gestures to be learned several times. The robot must then be able to reproduce these different movements while generalizing them to adapt them to the situation. To do so, using its proprioceptive sensors, it interprets the perceived signals to understand the user's movement in order to generate similar ones later on. Second, the robot learns to recognize the intention of the human partner based on the gestures that the human initiates. The robot can then perform gestures adapted to the situation and corresponding to the user’s expectations. This requires the robot to understand the user’s gestures. To this end, different perceptual modalities have been explored. Using proprioceptive sensors, the robot feels the user’s gestures through its own body: it is then a question of physical human-robot interaction. 
Using visual sensors, the robot interprets the movement of the user’s head. Finally, using external sensors, the robot recognizes and predicts the user’s whole-body movement. In that case, the user wears sensors (in our case, a wearable motion-tracking suit by XSens) that transmit his or her posture to the robot. In addition, the coupling of these modalities was studied. From a methodological point of view, the learning and recognition of time series (gestures) have been central to this thesis. To that end, two approaches have been developed. The first is based on the statistical modeling of movement primitives (corresponding to gestures): ProMPs. The second adds Deep Learning to the first by using auto-encoders, in order to model information-rich whole-body gestures while still allowing prediction in soft real time. Various issues were taken into account regarding the creation and development of our methods: the prediction of trajectory durations, the reduction of the cognitive and motor load imposed on the user, and the need for speed (soft real time) and accuracy in the predictions.
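The first approach represents each gesture as a weighted combination of radial basis functions over a movement phase, with a distribution over the weights learned from demonstrations (the core ProMP idea). A minimal sketch, assuming the per-demonstration weight vectors have already been fitted; basis widths, spacing, and names are illustrative.

```python
import math

def rbf_features(t, n_basis=5, width=0.05):
    """Normalized radial basis functions on phase t in [0, 1] -- the
    standard ProMP trajectory representation (widths and spacing are
    illustrative choices)."""
    centers = [i / (n_basis - 1) for i in range(n_basis)]
    phi = [math.exp(-(t - c) ** 2 / (2 * width)) for c in centers]
    s = sum(phi)
    return [p / s for p in phi]

def promp_mean_trajectory(weight_sets, n_points=50):
    """A ProMP models a distribution over basis weights learned from
    demonstrations; here we reconstruct the mean trajectory from the
    per-demonstration weight vectors (assumed already fitted)."""
    n_basis = len(weight_sets[0])
    mean_w = [sum(ws[i] for ws in weight_sets) / len(weight_sets)
              for i in range(n_basis)]
    traj = []
    for k in range(n_points):
        t = k / (n_points - 1)
        phi = rbf_features(t, n_basis)
        traj.append(sum(w * p for w, p in zip(mean_w, phi)))
    return traj
```

A full ProMP also keeps the weight covariance, which is what allows conditioning on a partially observed gesture to predict its continuation; that step is omitted here for brevity.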
199

Gestures in human-robot interaction

Bodiroža, Saša 16 February 2017
Gesten sind ein Kommunikationsweg, der einem Betrachter Informationen oder Absichten übermittelt. Daher können sie effektiv in der Mensch-Roboter-Interaktion, oder in der Mensch-Maschine-Interaktion allgemein, verwendet werden. Sie stellen eine Möglichkeit für einen Roboter oder eine Maschine dar, um eine Bedeutung abzuleiten. Um Gesten intuitiv benutzen zu können und Gesten, die von Robotern ausgeführt werden, zu verstehen, ist es notwendig, Zuordnungen zwischen Gesten und den damit verbundenen Bedeutungen zu definieren -- ein Gestenvokabular. Ein Menschgestenvokabular definiert, welche Gesten ein Personenkreis intuitiv verwendet, um Informationen zu übermitteln. Ein Robotergestenvokabular zeigt, welche Robotergesten zu welcher Bedeutung passen. Ihre effektive und intuitive Benutzung hängt von Gestenerkennung ab, das heißt von der Klassifizierung der Körperbewegung in diskrete Gestenklassen durch die Verwendung von Mustererkennung und maschinellem Lernen. Die vorliegende Dissertation befasst sich mit beiden Forschungsbereichen. Als eine Voraussetzung für die intuitive Mensch-Roboter-Interaktion wird zunächst ein Aufmerksamkeitsmodell für humanoide Roboter entwickelt. Danach wird ein Verfahren für die Festlegung von Gestenvokabularen vorgelegt, das auf Beobachtungen von Benutzern und Umfragen beruht. Anschließend werden experimentelle Ergebnisse vorgestellt. Eine Methode zur Verfeinerung der Robotergesten wird entwickelt, die auf interaktiven genetischen Algorithmen basiert. Ein robuster und performanter Gestenerkennungsalgorithmus wird entwickelt, der auf Dynamic Time Warping basiert und sich durch die Verwendung von One-Shot-Learning auszeichnet, das heißt durch die Verwendung einer geringen Anzahl von Trainingsgesten. Der Algorithmus kann in realen Szenarien verwendet werden, womit er den Einfluss von Umweltbedingungen und Gesteneigenschaften senkt. Schließlich wird eine Methode für das Lernen der Beziehungen zwischen Selbstbewegung und Zeigegesten vorgestellt.
/ Gestures consist of movements of body parts and are a mean of communication that conveys information or intentions to an observer. Therefore, they can be effectively used in human-robot interaction, or in general in human-machine interaction, as a way for a robot or a machine to infer a meaning. In order for people to intuitively use gestures and understand robot gestures, it is necessary to define mappings between gestures and their associated meanings -- a gesture vocabulary. Human gesture vocabulary defines which gestures a group of people would intuitively use to convey information, while robot gesture vocabulary displays which robot gestures are deemed as fitting for a particular meaning. Effective use of vocabularies depends on techniques for gesture recognition, which considers classification of body motion into discrete gesture classes, relying on pattern recognition and machine learning. This thesis addresses both research areas, presenting development of gesture vocabularies as well as gesture recognition techniques, focusing on hand and arm gestures. Attentional models for humanoid robots were developed as a prerequisite for human-robot interaction and a precursor to gesture recognition. A method for defining gesture vocabularies for humans and robots, based on user observations and surveys, is explained and experimental results are presented. As a result of the robot gesture vocabulary experiment, an evolutionary-based approach for refinement of robot gestures is introduced, based on interactive genetic algorithms. A robust and well-performing gesture recognition algorithm based on dynamic time warping has been developed. Most importantly, it employs one-shot learning, meaning that it can be trained using a low number of training samples and employed in real-life scenarios, lowering the effect of environmental constraints and gesture features. Finally, an approach for learning a relation between self-motion and pointing gestures is presented.
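The one-shot, DTW-based recognition described in this abstract can be illustrated as a nearest-template classifier: each gesture class is represented by a single recorded trajectory, and a query is assigned to the class whose template has the lowest DTW alignment cost. This is a minimal sketch under that assumption; the function names and the nearest-template design are illustrative, not the thesis's actual implementation.

```python
import numpy as np

def dtw_distance(a, b):
    """DTW alignment cost between two trajectories (T x D arrays)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local point distance
            # Accumulate cost along the cheapest warping path.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, templates):
    """One-shot classification: return the label of the nearest template."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))
```

Because DTW aligns sequences nonlinearly in time, a single template per class can absorb variation in gesture speed, which is what makes the one-shot setting workable with so few training samples.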
200

Ankle torque estimation for lower-limb robotic rehabilitation / Estimativa de torque no tornozelo para reabilitação robótica de membros inferiores

Campo Jaimes, Jonathan 15 June 2018 (has links)
In robotic rehabilitation therapies, knowledge of human joint torques is important for patient safety, for providing reliable data for clinical assessment, and for improving the control performance of the device; however, measuring these torques directly can be complex or costly to implement. Most torque estimation techniques have been developed for upper-limb robotic rehabilitation devices and typically require detailed anthropometric and musculoskeletal models. This dissertation presents ankle torque estimation for the Anklebot robot. The estimation uses an ankle/Anklebot dynamic model driven by measurements of the ankle joint's angular displacement and velocity; the ankle's mechanical impedance parameters are obtained from a second-order impedance model, and frictional and gravitational torques are identified. Three approaches to ankle torque estimation were proposed for implementation on the Anklebot: the generalized momentum method, the Kalman filter, and a combination of the two. The approaches were first validated on a physical mock-up configured to reproduce human ankle joint movement; based on their performance, the Kalman filter approach was selected for implementation with a volunteer subject. A set of experiments was performed covering the physical activities the subject might carry out while interacting with the Anklebot. The developed ankle torque estimation proved successful for passive torque and in most of the proposed scenarios involving active torque.
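The Kalman-filter approach selected above can be sketched by augmenting the state of a second-order joint model with the unknown human torque, modeled as a random walk, so the filter infers it from angle and velocity measurements alone. All parameter values, names, and the Euler discretization below are illustrative assumptions, not the dissertation's actual model of the Anklebot.

```python
import numpy as np

# Assumed second-order ankle impedance model:
#   I*theta_ddot + b*theta_dot + k*theta = tau_cmd + tau_human
I, b, k = 0.05, 0.2, 5.0   # inertia, damping, stiffness (hypothetical values)
dt = 0.01                  # sample period [s]

# Augmented state x = [theta, omega, tau_human]; tau_human is a random walk.
A = np.array([[1.0,      dt,            0.0],
              [-k*dt/I,  1.0 - b*dt/I,  dt/I],
              [0.0,      0.0,           1.0]])
B = np.array([0.0, dt/I, 0.0])          # input: commanded robot torque
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])         # measure angle and velocity only
Q = np.diag([1e-8, 1e-6, 1e-2])         # large process noise on tau_human
R = np.diag([1e-6, 1e-4])               # measurement noise

def kf_step(x, P, z, u):
    """One predict/update cycle of the torque-estimating Kalman filter."""
    x_pred = A @ x + B * u
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)     # correct with innovation
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```

The large process-noise entry on the torque state lets the estimate track changes in the human's effort, while the kinematic measurements constrain the rest of the state; this disturbance-observer structure is also the intuition behind the generalized-momentum alternative mentioned in the abstract.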
