251

A simulator for social robotics applied to indoor environments

Belo, José Pedro Ribeiro (26 March 2018)
Social robotics is a branch of human-robot interaction that aims to develop robots that work in unstructured environments in direct partnership with humans. The 2013 report A Roadmap for U.S. Robotics: From Internet to Robotics predicts promising results within 12 years, provided appropriate conditions are made available to the field. One of those conditions is a reference environment in which to develop, evaluate, and compare the performance of cognitive systems; this environment, called Robot City, would comprise actors, scenarios (houses, streets, a city), and auditors. To date this complex environment has not materialized, possibly because of the high cost of building and maintaining a facility of that size. This dissertation proposes an alternative path: the definition and implementation of a cognitive-system simulator called the Robot House Simulator (RHS). RHS provides a residential environment, composed of a living room and a kitchen, in which two agents coexist: a humanoid robot and a human avatar. The avatar is controlled by the user of the system, while the robot is controlled by a cognitive architecture that determines its behavior. The cognitive architecture builds its perception of the environment from sensory information supplied by RHS and modeled by an ontology called OntSense. Using an ontology gives the sensory data formal rigor and enables a high level of abstraction. RHS is built on the Unity game engine and follows the open-source model, being available in an online GitHub repository. The system was validated through experiments that demonstrated the simulator's ability to provide a validation environment for cognitive architectures aimed at social robotics. RHS pioneers the integration of a simulator with a cognitive architecture and is one of the few simulators for social robotics that provides rich sensory information, notably an unprecedented model of the senses of smell and taste.
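
As an illustration of the idea of ontology-modeled sensor data, the sketch below shows how a smell percept could be serialized as subject-predicate-object triples before being handed to a cognitive architecture. It is a minimal, hypothetical example: the predicate names (sensedSmell, hasOdorClass, and so on) are invented for illustration and are not taken from OntSense.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SmellPercept:
    """One olfactory reading a simulator might publish to a cognitive architecture."""
    source_id: str    # object emitting the odor
    odor_class: str   # e.g. "coffee", "smoke"
    intensity: float  # normalized 0..1

def to_triples(p: SmellPercept, robot_id: str = "robot1"):
    """Serialize a percept as subject-predicate-object triples, the kind of
    formal structure an ontology imposes on raw sensor data."""
    stamp = datetime.now(timezone.utc).isoformat()
    return [
        (robot_id, "sensedSmell", p.source_id),          # hypothetical predicate
        (p.source_id, "hasOdorClass", p.odor_class),     # hypothetical predicate
        (p.source_id, "hasIntensity", f"{p.intensity:.2f}"),
        (robot_id, "sensedAt", stamp),
    ]

if __name__ == "__main__":
    for t in to_triples(SmellPercept("cup_03", "coffee", 0.7)):
        print(t)
```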
252

Visual gesture recognition for mimicking and correcting movements in robot-guided physiotherapy

Gambirasio, Ricardo Fibe (16 November 2015)
This dissertation develops a robotic system to guide patients through physiotherapy sessions. The proposed system uses the humanoid robot NAO to analyze patients' movements and to guide, correct, and motivate them during a session. First, the system learns a correct physiotherapy exercise by observing a physiotherapist perform it; second, it demonstrates the exercise so that the patient can reproduce it; and finally, it corrects any mistakes the patient makes during the exercise. The correct exercise is captured with a Kinect sensor and divided into a sequence of spatio-temporal states using k-means clustering. Those states compose a finite state machine used to verify whether the patient's movements are correct. Each transition from one state to the next corresponds to a partial movement of the learned exercise and fires only when the robot observes the patient execute that partial movement correctly; otherwise the system suggests a correction, returns to the same state, and asks the patient to try again. The system was tested with several patients undergoing physiotherapy for motor impairments and achieved high precision and recall across all partial movements. The emotional impact of the treatment was also measured, through questionnaires applied before and after treatment and through software that recognizes emotions in video recorded during the sessions; the results indicate a clearly positive emotional impact that could help motivate patients and support their recovery.
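
The abstract's pipeline (cluster the demonstrated exercise into postural states, then verify the patient against a finite state machine over those states) can be sketched in a few lines. This is a simplified reconstruction under assumed details (one feature vector per frame, a linear state sequence), not the thesis implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_states(frames: np.ndarray, k: int = 5):
    """Cluster skeleton frames (n_frames x n_features) into k postural states
    and return the deduplicated order in which they occur."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(frames)
    order = [km.labels_[0]]
    for label in km.labels_[1:]:
        if label != order[-1]:
            order.append(label)
    return km, order

def follows_exercise(km, order, patient_frames: np.ndarray) -> bool:
    """Accept the patient's repetition only if its states traverse the
    learned sequence in order (a linear finite state machine)."""
    idx = 0
    for label in km.predict(patient_frames):
        if idx < len(order) and label == order[idx]:
            idx += 1                          # correct partial movement
        elif label != order[max(idx - 1, 0)]:
            return False                      # unexpected state: ask for a retry
    return idx == len(order)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = np.concatenate([rng.normal(c, 0.1, (30, 6)) for c in (0.0, 1.0, 2.0)])
    km, order = learn_states(demo, k=3)
    print(follows_exercise(km, order, demo))  # True: same movement
```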
253

Human-help in automated planning under uncertainty

Franch, Ignasi Andrés (21 September 2018)
Planning is the sub-area of artificial intelligence that studies the process of selecting actions to lead an agent, e.g. a robot or a softbot, to a goal state. In many realistic scenarios, any choice of actions can lead the robot into a dead-end state, that is, a state from which the goal cannot be reached. In such cases, the robot can proactively resort to human help in order to reach the goal, an approach called symbiotic autonomy. In this work, we propose two approaches to this problem: (I) contingent planning, where the initial state is partially observable, configuring a belief state, and the outcomes of the robot's actions are non-deterministic; and (II) probabilistic planning, where the initial state may be partially or totally observable and the actions have probabilistic outcomes. In both approaches, human help is considered a scarce resource to be used only when necessary. In contingent planning, the problem is to find a policy (a function mapping belief states into actions) that: (i) guarantees the agent will always reach the goal (a strong policy); (ii) guarantees the agent will eventually reach the goal (a strong cyclic policy); or (iii) does not guarantee achieving the goal (a weak policy). For this setting we propose a contingent planning system that uses human help to turn weak policies into strong (cyclic) policies. Two types of human help are included: (i) human actions that modify states and/or belief states; and (ii) human observations that modify belief states. In probabilistic planning, the problem is to find a policy (a function mapping world states into actions) of one of two types: a proper policy, under which the agent reaches the goal with probability 1; or an improper policy, in the case of unavoidable dead-ends. In general, the goal of the agent is to find a policy that minimizes the expected accumulated cost of the actions while maximizing the probability of reaching the goal. For this setting we propose probabilistic planners that use human help to turn improper policies into proper policies under two new (alternative) criteria: minimizing the probability of using human actions, or minimizing the expected number of human actions. Furthermore, we show that optimal policies under these criteria can be computed efficiently, either by increasing the cost of human actions or by applying a penalty whenever human help is used. The solutions proposed in both scenarios, contingent planning and probabilistic planning with human help, were evaluated on a collection of planning problems with dead-ends. The results show that: (i) all generated policies (strong (cyclic) or proper) include human help only when necessary; and (ii) we were able to find policies for contingent planning problems with up to 10^15000 belief states and for probabilistic planning problems with more than 3*10^18 physical states.
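
One way to see how pricing human help shapes a policy is a toy stochastic shortest-path MDP in which an expensive "ask human" action is the only escape from a dead end. The model below is illustrative only (states, costs, and probabilities are invented) and is not the planners proposed in the thesis:

```python
import numpy as np

# States: 0 = start, 1 = dead-end, 2 = goal (absorbing).
# Actions: "move" (cheap, risky) and "help" (human action, high cost).
# Pricing human help as an expensive action makes the optimal policy
# use it only where the agent would otherwise be stuck.
P = {  # P[action][state] -> list of (next_state, probability)
    "move": {0: [(2, 0.6), (1, 0.4)], 1: [(1, 1.0)], 2: [(2, 1.0)]},
    "help": {0: [(2, 1.0)], 1: [(0, 1.0)], 2: [(2, 1.0)]},
}
COST = {"move": 1.0, "help": 10.0}  # human help is a scarce resource

def value_iteration(eps=1e-6):
    V = np.zeros(3)
    while True:
        Vn = V.copy()
        for s in (0, 1):  # state 2 is the goal, value 0
            Vn[s] = min(
                COST[a] + sum(p * V[s2] for s2, p in P[a][s])
                for a in ("move", "help")
            )
        if np.max(np.abs(Vn - V)) < eps:
            return Vn
        V = Vn

V = value_iteration()
policy = {
    s: min(("move", "help"),
           key=lambda a: COST[a] + sum(p * V[s2] for s2, p in P[a][s]))
    for s in (0, 1)
}
print(V, policy)  # "help" is chosen only in the dead-end state
```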
254

Movement Prediction for human-robot collaboration: from simple gesture to whole-body movement

Dermy, Oriane (17 December 2018)
This thesis lies at the intersection of machine learning and humanoid robotics, in the field of collaborative robotics (cobotics). It focuses on prediction for non-verbal human-robot interaction, with an emphasis on gestural interaction; the prediction of intention and the understanding and reproduction of gestures are its central topics. First, the robot learns gestures by demonstration: a user grabs its arm and makes it perform the gestures to be learned several times. The robot must then reproduce these movements while generalizing them to adapt to the situation. To do so, using its proprioceptive sensors, it interprets the perceived signals to understand the user's guided movement, so that it can generate similar ones later on. Second, the robot learns to recognize the intention of its human partner from the gestures the human initiates, and then performs gestures adapted to the situation and corresponding to the user's expectations. This requires the robot to understand the user's gestures, and to this end different perceptual modalities were explored. With proprioceptive sensors, the robot feels the user's gestures through its own body: this is physical human-robot interaction. With visual sensors, the robot interprets the movement of the user's head. Finally, with external sensors, the robot recognizes and predicts the user's whole-body movement; in that case the user wears sensors (an XSens wearable motion-tracking suit) that transmit his or her posture to the robot. The coupling of these modalities was also studied. Methodologically, the thesis centers on the learning and recognition of time series (gestures), for which two approaches were developed. The first statistically models movement primitives representing the gestures: ProMPs. The second adds deep learning to the first, using autoencoders to model whole-body gestures that carry a large amount of information while still allowing prediction in soft real time. Several issues shaped the design of these methods: predicting trajectory durations, reducing the cognitive and motor load imposed on the user, and the need for speed (soft real time) and accuracy in the predictions.
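
A minimal sketch of the first approach, ProMPs, is given below: fit a Gaussian distribution over basis-function weights from demonstrations, then condition on the first observed frames to predict the rest of a movement. All numbers (basis count, noise levels, toy sine-wave demonstrations) are assumptions for illustration, not values from the thesis:

```python
import numpy as np

def phi(ts, n_basis=10, width=0.02):
    """Normalized Gaussian basis functions over time ts in [0, 1]."""
    centers = np.linspace(0, 1, n_basis)
    b = np.exp(-(ts[:, None] - centers[None, :]) ** 2 / (2 * width))
    return b / b.sum(axis=1, keepdims=True)

# Learn the weight distribution from demonstrations (one DoF for brevity).
T = 100
ts = np.linspace(0, 1, T)
rng = np.random.default_rng(1)
demos = np.stack([np.sin(np.pi * ts) + rng.normal(0, 0.03, T) + d
                  for d in rng.normal(0, 0.1, 20)])
Phi = phi(ts)
W = np.linalg.lstsq(Phi, demos.T, rcond=None)[0].T   # one weight vector per demo
mu_w, Sigma_w = W.mean(axis=0), np.cov(W.T) + 1e-6 * np.eye(W.shape[1])

# Condition on the first 30 observed frames to predict the remainder.
obs = demos[0, :30]
Phi_o, s2 = Phi[:30], 0.03 ** 2
S = Phi_o @ Sigma_w @ Phi_o.T + s2 * np.eye(len(obs))
K = Sigma_w @ Phi_o.T @ np.linalg.solve(S, np.eye(len(obs)))
mu_post = mu_w + K @ (obs - Phi_o @ mu_w)
prediction = Phi @ mu_post                            # full expected trajectory
print(f"error on unseen part: {np.abs(prediction[30:] - demos[0, 30:]).mean():.3f}")
```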
255

Gestures in human-robot interaction

Bodiroža, Saša (16 February 2017)
Gestures consist of movements of body parts and are a means of communication that conveys information or intentions to an observer. They can therefore be used effectively in human-robot interaction, and in human-machine interaction in general, as a way for a robot or a machine to infer meaning. For people to use gestures intuitively and to understand robot gestures, it is necessary to define mappings between gestures and their associated meanings: a gesture vocabulary. A human gesture vocabulary defines which gestures a group of people would intuitively use to convey information, while a robot gesture vocabulary records which robot gestures are deemed fitting for a particular meaning. Effective use of such vocabularies depends on gesture recognition, that is, the classification of body motion into discrete gesture classes using pattern recognition and machine learning. This thesis addresses both research areas, presenting the development of gesture vocabularies as well as gesture recognition techniques, focusing on hand and arm gestures. Attentional models for humanoid robots were developed as a prerequisite for human-robot interaction and a precursor to gesture recognition. A method for defining gesture vocabularies for humans and robots, based on user observations and surveys, is explained, and experimental results are presented. Building on the robot gesture vocabulary experiment, an evolutionary approach for refining robot gestures is introduced, based on interactive genetic algorithms. A robust and well-performing gesture recognition algorithm based on dynamic time warping was also developed. Most importantly, it employs one-shot learning: it can be trained with a small number of samples and used in real-life scenarios, lowering the effect of environmental constraints and gesture features. Finally, an approach for learning a relation between self-motion and pointing gestures is presented.
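
A bare-bones version of DTW-based one-shot recognition looks as follows: store a single template per gesture class and classify a query by its smallest warping distance. This sketch omits the robustness machinery described above and uses invented toy gestures:

```python
import numpy as np

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two sequences (1-D or (T, d))."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query: np.ndarray, templates: dict) -> str:
    """One-shot recognition: a single stored template per gesture class."""
    return min(templates, key=lambda name: dtw(query, templates[name]))

if __name__ == "__main__":
    t = np.linspace(0, 1, 50)
    templates = {"wave": np.sin(4 * np.pi * t), "point": t}
    query = np.sin(4 * np.pi * np.linspace(0, 1, 80))  # same shape, other speed
    print(classify(query, templates))  # -> "wave" despite the time difference
```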
256

Ankle torque estimation for lower-limb robotic rehabilitation

Campo Jaimes, Jonathan (15 June 2018)
In robotic rehabilitation therapies, knowledge of human joint torques is important for patient safety, for providing reliable data for clinical assessment, and for increasing the control performance of the device; nevertheless, measuring those torques directly can be complex or costly to implement. Most torque estimation techniques have been developed for upper-limb robotic rehabilitation devices and typically require detailed anthropometric and musculoskeletal models. This dissertation presents ankle torque estimation for the Anklebot robot. The estimation uses an ankle/Anklebot dynamic model based on measurements of the ankle joint's angular displacement and velocity; the ankle's mechanical impedance parameters are obtained from a second-order impedance model, together with an identification of frictional and gravitational torques. Three approaches to ankle torque estimation were proposed for implementation on the Anklebot: generalized momentum, a Kalman filter, and a combination of the two. The approaches were first validated on a physical mock-up configured to reproduce the movement of the human ankle joint; after their performance was assessed, the Kalman filter approach was selected for implementation with a volunteer subject. A set of experiments was performed covering the physical activity a subject may perform when interacting with the Anklebot, and the developed ankle torque estimation proved successful for passive torque and in most of the proposed scenarios in which active torque is exerted.
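
The Kalman filter approach can be illustrated by augmenting a second-order impedance model with the external torque as an extra state and letting the filter estimate it from angle and velocity measurements. The parameter values below are placeholders, not the identified Anklebot parameters:

```python
import numpy as np

# Second-order ankle impedance model: I*th'' + b*th' + k*th = tau_ext.
# Augment the state with tau_ext (random walk) and estimate it with a
# Kalman filter from angle/velocity measurements. All values are placeholders.
I, b, k, dt = 0.05, 0.5, 8.0, 0.005

A = np.array([[1, dt, 0],
              [-k / I * dt, 1 - b / I * dt, dt / I],
              [0, 0, 1]])            # state: [theta, theta_dot, tau_ext]
H = np.array([[1., 0., 0.],
              [0., 1., 0.]])         # we measure angle and velocity only
Q = np.diag([1e-8, 1e-6, 1e-2])      # process noise: lets tau_ext drift
R = np.diag([1e-6, 1e-4])            # measurement noise

def kf_step(x, P, z):
    x, P = A @ x, A @ P @ A.T + Q                    # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    x = x + K @ (z - H @ x)                          # update
    return x, (np.eye(3) - K @ H) @ P

# Simulate a constant 2 Nm external torque and recover it from noisy data.
rng = np.random.default_rng(0)
x_true, x, P = np.array([0., 0., 2.0]), np.zeros(3), np.eye(3)
for _ in range(2000):
    x_true = A @ x_true
    z = H @ x_true + rng.normal(0, [1e-3, 1e-2])
    x, P = kf_step(x, P, z)
print(f"estimated torque: {x[2]:.2f} Nm")  # close to 2.0
```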
257

Gaze direction in the context of social human-robot interaction

Massé, Benoît (29 October 2018)
Robots are increasingly used in a social context. They are required not only to share physical space with humans but also to interact with them. In this context, the robot is expected to understand some of the verbal and non-verbal ambiguous cues constantly used in natural human interaction. In particular, knowing who or what people are looking at is very valuable information for understanding each individual's mental state as well as the interaction dynamics. This is called the visual focus of attention, or VFOA. In this thesis, we are interested in using the inputs of an active humanoid robot, itself participating in a social interaction, to estimate who is looking at whom or what. On the one hand, we want the robot to look at people, so that it can extract meaningful visual information from its video camera. We propose a novel reinforcement learning method for robotic gaze control, based on a recurrent neural network architecture. The robot autonomously learns a strategy for moving its head (and camera) from audio-visual inputs, and is able to focus on groups of people in a changing environment. On the other hand, information from the camera images is used to infer people's VFOAs over time. We estimate the 3D head pose (location and orientation) of each face, as it is highly correlated with gaze direction, and use it in two tasks. First, we note that objects may be looked at while not being visible from the robot's point of view. Under the assumption that objects of interest are being looked at, we propose to estimate their locations relying solely on the gaze direction of visible people. We formulate an ad hoc spatial representation based on probability heat-maps, and design and train several convolutional neural network models to perform a regression from the space of head poses to the space of object locations, yielding a set of object locations from a sequence of head poses. Second, we suppose that the locations of the objects of interest are known. In this context, we introduce a Bayesian probabilistic model, inspired by psychophysics, that describes the dependency between head poses, object locations, eye-gaze directions, and VFOAs over time. The formulation is based on a switching state-space Markov model; a specific filtering procedure to infer the VFOAs is detailed, as well as an adapted training algorithm. The proposed contributions rely on data-driven machine learning approaches, and all methods have been tested on publicly available datasets. Some training procedures additionally require simulating synthetic scenarios; the generation process is explicitly detailed.
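
As a much simplified stand-in for the VFOA inference described above (the thesis uses a switching state-space Markov model over time), the snippet below scores known object locations against a single 3D head pose, weighting the angle between the head direction and the direction to each object. All positions and the concentration parameter are invented for illustration:

```python
import numpy as np

def vfoa_likelihoods(head_pos, head_dir, objects: dict, kappa=8.0):
    """Score how likely each known object is the visual focus of attention,
    given a 3-D head position and a unit head direction, using a von
    Mises-style angular weighting. A static toy, not the thesis model."""
    scores = {}
    for name, pos in objects.items():
        to_obj = np.asarray(pos, dtype=float) - head_pos
        to_obj /= np.linalg.norm(to_obj)
        scores[name] = np.exp(kappa * float(np.dot(head_dir, to_obj)))
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

head_pos = np.array([0.0, 0.0, 1.6])
head_dir = np.array([0.0, 1.0, 0.0])  # looking along +y
objects = {"screen": (0.0, 2.0, 1.5), "door": (2.0, 0.5, 1.0),
           "robot": (-0.5, 1.5, 1.2)}
probs = vfoa_likelihoods(head_pos, head_dir, objects)
print(max(probs, key=probs.get), probs)  # -> "screen" is the likely VFOA
```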
258

Gesture Segmentation and Recognition for Cognitive Human-Robot Interaction

Simao, Miguel (17 December 2018)
This thesis presents a human-robot interaction (HRI) framework for classifying large vocabularies of static and dynamic hand gestures captured with wearable sensors. Static and dynamic gestures are classified separately thanks to a segmentation process. Experimental tests on the UC2017 hand gesture dataset showed high classification accuracy. For online frame-by-frame classification from raw, incomplete data, Long Short-Term Memory (LSTM) deep networks and convolutional neural networks (CNNs) performed better than static models trained on specially crafted features, at the cost of training and inference time. Online classification of dynamic gestures also enables successful predictive classification. Finally, the rejection of out-of-vocabulary gestures is handled through semi-supervised learning of a network in the Auxiliary Conditional Generative Adversarial Networks framework; the proposed network achieved high accuracy in rejecting untrained patterns from the UC2018 DualMyo dataset.
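
The segmentation step that routes static and dynamic gestures to separate classifiers can be approximated by thresholding frame-to-frame motion energy, as sketched below; the threshold, the jitter filter, and the toy data are assumptions, not the thesis method:

```python
import numpy as np

def segment_gestures(frames: np.ndarray, motion_thresh=0.1, min_len=5):
    """Split a sensor stream (n_frames x n_channels) into static and dynamic
    segments by thresholding frame-to-frame motion energy. Runs shorter than
    min_len are discarded as jitter."""
    energy = np.linalg.norm(np.diff(frames, axis=0), axis=1)
    dynamic = energy > motion_thresh
    segments, start = [], 0
    for i in range(1, len(dynamic)):
        if dynamic[i] != dynamic[i - 1]:
            if i - start >= min_len:
                segments.append(("dynamic" if dynamic[start] else "static",
                                 start, i))
            start = i
    segments.append(("dynamic" if dynamic[start] else "static",
                     start, len(dynamic)))
    return segments

rng = np.random.default_rng(0)
still = rng.normal(0, 0.01, (40, 8))                   # hand held still
moving = np.cumsum(rng.normal(0, 0.2, (40, 8)), axis=0)  # hand in motion
print(segment_gestures(np.vstack([still, moving, still])))
```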
259

Control architecture for human-robot interaction

Alves, Silas Franco dos Reis (1 April 2016)
Assuming that robots will coexist with humans in the near future, the need for intelligent control architectures suited to human-robot interaction is evident. This research therefore developed a behavior-based control architecture organization whose main purpose is to let the robot interact with people intuitively, fostering collaboration between them. To this end, a synthetic emotional module, based on the circumplex (two-dimensional) theory of emotions, drives the adaptation of the robot's behaviors, implemented with motor schema theory, and the communication of its internal state in an intelligible way. This organization supported deploying the control architecture in a healthcare-oriented application: a case study of assistive social robots as an auxiliary tool for special education. The experiments demonstrated that the developed control architecture meets the requirements of the application, as stipulated by the consulted experts. This thesis thus contributes the design of a control architecture able to act upon subjective evaluation based on cognitive beliefs of emotions, the development of a low-cost mobile robot, and a case study in special education.
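
A minimal sketch of the two ideas named above, a two-dimensional (circumplex) emotional state modulating behavior-based (motor schema) control, might look as follows; the quadrant labels and gain formulas are illustrative assumptions, not the thesis design:

```python
from dataclasses import dataclass

@dataclass
class EmotionState:
    """Two-dimensional (circumplex) affect state; both values in [-1, 1]."""
    valence: float
    arousal: float

    def label(self) -> str:
        # Coarse quadrant reading of the circumplex model.
        if self.valence >= 0:
            return "excited" if self.arousal >= 0 else "content"
        return "distressed" if self.arousal >= 0 else "bored"

def blend_motor_schemas(emotion: EmotionState) -> dict:
    """Weight concurrent motor schemas by affect, in the spirit of a
    behavior-based architecture: arousal speeds the robot up, while
    negative valence favors avoidance over engagement."""
    speed = 0.5 + 0.5 * emotion.arousal
    engage = max(0.0, emotion.valence)
    avoid = max(0.0, -emotion.valence)
    return {"speed_gain": round(speed, 2),
            "approach_person": round(engage, 2),
            "keep_distance": round(avoid, 2)}

e = EmotionState(valence=-0.4, arousal=0.6)
print(e.label(), blend_motor_schemas(e))  # a distressed robot keeps distance
```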
