  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1051

Modèles profonds de régression et applications à la vision par ordinateur pour l'interaction homme-robot / Deep Regression Models and Computer Vision Applications for Multiperson Human-Robot Interaction

Lathuiliere, Stéphane 22 May 2018 (has links)
Dans le but d’interagir avec des êtres humains, les robots doivent effectuer des tâches de perception basique telles que la détection de visage, l’estimation de la pose des personnes ou la reconnaissance de la parole. Cependant, pour interagir naturellement avec les hommes, le robot doit modéliser des concepts de haut niveau tels que les tours de parole dans un dialogue, le centre d’intérêt d’une conversation, ou les interactions entre les participants. Dans ce manuscrit, nous suivons une approche descendante (dite « top-down »). D’une part, nous présentons deux méthodes de haut niveau qui modélisent les comportements collectifs. Ainsi, nous proposons un modèle capable de reconnaître les activités qui sont effectuées conjointement par différents groupes de personnes, telles que faire la queue ou discuter. Notre approche gère le cas général où plusieurs activités peuvent se dérouler simultanément et en séquence. D’autre part, nous introduisons une nouvelle approche d’apprentissage par renforcement à base de réseaux de neurones pour le contrôle de la direction du regard du robot. Notre approche permet à un robot d’apprendre et d’adapter sa stratégie de contrôle du regard dans le contexte de l’interaction homme-robot. Le robot est ainsi capable d’apprendre à concentrer son attention sur des groupes de personnes en utilisant seulement ses propres expériences (sans supervision extérieure). Dans un deuxième temps, nous étudions en détail les approches d’apprentissage profond pour les problèmes de régression. Les problèmes de régression sont cruciaux dans le contexte de l’interaction homme-robot afin d’obtenir des informations fiables sur les poses de la tête et du corps des personnes faisant face au robot. Par conséquent, ces contributions sont vraiment générales et peuvent être appliquées dans de nombreux contextes différents. Dans un premier temps, nous proposons de coupler un mélange gaussien de régressions inverses linéaires avec un réseau de neurones convolutionnels.
Deuxièmement, nous introduisons un modèle de mélange gaussien-uniforme afin de rendre l’algorithme d’apprentissage plus robuste aux annotations bruitées. Enfin, nous effectuons une étude à grande échelle pour mesurer l’impact de plusieurs choix d’architecture et extraire des recommandations pratiques lors de l’utilisation d’approches d’apprentissage profond dans des tâches de régression. Pour chacune de ces contributions, une intense validation expérimentale a été effectuée avec des expériences en temps réel sur le robot NAO ou sur de larges et divers ensembles de données. / In order to interact with humans, robots need to perform basic perception tasks such as face detection, human pose estimation or speech recognition. However, in order to have a natural interaction with humans, the robot needs to model high-level concepts such as speech turns, focus of attention or interactions between participants in a conversation. In this manuscript, we follow a top-down approach. On the one hand, we present two high-level methods that model collective human behaviors. We propose a model able to recognize activities that are performed jointly by different groups of people, such as queueing or talking. Our approach handles the general case where several group activities can occur simultaneously and in sequence. On the other hand, we introduce a novel neural network-based reinforcement learning approach for robot gaze control. Our approach enables a robot to learn and adapt its gaze control strategy in the context of human-robot interaction. The robot is able to learn to focus its attention on groups of people from its own audio-visual experiences. Second, we study in detail deep learning approaches for regression problems. Regression problems are crucial in the context of human-robot interaction in order to obtain reliable information about head and body poses or the age of the persons facing the robot.
Consequently, these contributions are really general and can be applied in many different contexts. First, we propose to couple a Gaussian mixture of linear inverse regressions with a convolutional neural network. Second, we introduce a Gaussian-uniform mixture model in order to make the training algorithm more robust to noisy annotations. Finally, we perform a large-scale study to measure the impact of several architecture choices and extract practical recommendations when using deep learning approaches in regression tasks. For each of these contributions, a strong experimental validation has been performed with real-time experiments on the NAO robot or on large and diverse datasets.
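The Gaussian-uniform mixture idea can be illustrated outside deep networks. Below is a minimal EM sketch for a robust linear fit, assuming a uniform outlier density of fixed width `u_range` (the thesis applies the same principle inside deep regression; the specific parameter values and update schedule here are illustrative, not the thesis's):

```python
import numpy as np

def gaussian_uniform_em(x, y, n_iter=50, u_range=20.0):
    """Fit y ~ a*x + b robustly: residuals are modeled as a mixture of a
    Gaussian (inliers) and a uniform density over width u_range (outliers).
    EM alternates inlier responsibilities and weighted least squares."""
    a, b, sigma, pi = 0.0, 0.0, 1.0, 0.9
    X = np.stack([x, np.ones_like(x)], axis=1)
    for _ in range(n_iter):
        e = y - (a * x + b)
        g = pi * np.exp(-0.5 * (e / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        u = (1.0 - pi) / u_range
        r = g / (g + u)                       # E-step: inlier responsibility
        w = np.sqrt(r)
        a, b = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]  # M-step
        sigma = np.sqrt(np.sum(r * e ** 2) / np.sum(r) + 1e-12)
        pi = np.clip(r.mean(), 0.05, 0.95)
    return a, b, r

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 200)
y[:20] += rng.uniform(-10, 10, 20)            # corrupt 10% of the annotations
a, b, r = gaussian_uniform_em(x, y)
```

The corrupted points receive low responsibility and stop influencing the fit, which is the robustness mechanism the abstract describes for noisy annotations.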
1052

Sistema de visão computacional aplicado a um robô cilíndrico acionado pneumaticamente

Medina, Betânia Vargas Oliveira January 2015 (has links)
O reconhecimento da posição e orientação de objetos em uma imagem é importante para diversos segmentos da engenharia, como robótica, automação industrial e processos de fabricação, permitindo às linhas de produção que utilizam sistemas de visão, melhorias na qualidade e redução do tempo de produção. O presente trabalho consiste na elaboração de um sistema de visão computacional para um robô cilíndrico de cinco graus de liberdade acionado pneumaticamente. Como resultado da aplicação do método desenvolvido, obtêm-se a posição e orientação de peças a fim de que as mesmas possam ser capturadas corretamente pelo robô. Para a obtenção da posição e orientação das peças, utilizou-se o método de cálculo dos momentos para extração de características de uma imagem, além da relação entre suas coordenadas em pixels com o sistema de coordenadas do robô. O desenvolvimento do presente trabalho visou também a integrar a esse sistema de visão computacional, um algoritmo de planejamento de trajetórias do robô, o qual, após receber os valores das coordenadas necessárias, gera a trajetória a ser seguida pelo robô, de forma que este possa pegar a peça em uma determinada posição e deslocá-la até outra posição pré-determinada. Também faz parte do escopo deste trabalho, a integração do sistema de visão, incluindo o planejamento de trajetórias, a um algoritmo de controle dos atuadores com compensação de atrito e a realização de testes experimentais com manipulação de peças. Para a demonstração da aplicação do método através de testes experimentais, foi montada uma estrutura para suportar as câmeras e as peças a serem manipuladas, levando em conta o espaço de trabalho do robô. Os resultados obtidos mostram que o algoritmo proposto de visão computacional determina a posição e orientação das peças permitindo ao robô a captação e manipulação das mesmas. 
/ The recognition of the position and orientation of objects in an image is important for several technological areas in engineering, such as robotics, industrial automation and manufacturing processes, allowing production lines that use vision systems to improve quality and reduce production time. The present work consists of the development of a computer vision system for a pneumatically actuated cylindrical robot with five degrees of freedom. The application of the proposed method furnishes the position and orientation of pieces so that the robot can properly capture them. Position and orientation of the pieces are determined by means of a technique based on the calculation of image moments for feature extraction and on the relationship between their pixel coordinates and the robot coordinate system. The scope of the present work also comprises the integration of the computer vision system with a (previously developed) robot trajectory planning algorithm that uses key-point coordinates (transmitted by the vision system) to generate the trajectory that must be followed by the robot, so that, departing from a given position, it moves suitably to another predetermined position. It is also an objective of this work to integrate both the vision system and the trajectory planning algorithm with an (also previously developed) nonlinear control algorithm with friction compensation. To demonstrate the application of the method experimentally, a special apparatus was mounted to support the cameras and the pieces to be manipulated, taking into account the robot workspace. To validate the proposed algorithm, a case study was performed, with the results showing that the proposed computer vision algorithm determines the position and orientation of the pieces, allowing the robot to capture and manipulate them.
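The moment-based pose extraction described above can be sketched in a few lines. OpenCV's `cv2.moments` computes the same quantities; here they are computed directly with NumPy on a binary mask (the mask and its shape are made up for illustration):

```python
import numpy as np

def pose_from_moments(mask):
    """Centroid and in-plane orientation of a binary object mask via image
    moments: the first-order moments give the centroid; the second-order
    central moments mu20, mu02, mu11 give the major-axis angle."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).sum()
    mu02 = ((ys - cy) ** 2).sum()
    mu11 = ((xs - cx) * (ys - cy)).sum()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # radians, image axes
    return (cx, cy), theta

# Elongated horizontal bar: centroid at its middle, orientation ~0 rad.
mask = np.zeros((50, 50), dtype=bool)
mask[20:24, 10:40] = True
(cx, cy), theta = pose_from_moments(mask)
```

The pixel-space pose `(cx, cy, theta)` would then be mapped to the robot coordinate system through the pixel-to-robot calibration mentioned in the abstract.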
1053

Odjehlování vnitřních prostor ventilových bloků / Deburring of inside space of hydraulic valves

Hanuska, Ján January 2014 (has links)
This thesis addresses the deburring of hydraulic valve blocks with an industrial robot. It focuses on deburring the inside space of hydraulic valve blocks, although deburring of outer edges is briefly covered in order to determine the approximate deburring time of all edges on the valve block. A survey of deburring methods and tools suitable for inside and outside edges is made on the basis of an analysis of the valve blocks. The tool paths chosen for deburring valve block B1 are programmed in a CAM program. The CAD program ProEngineer is used to create a simplified model of the robotic workplace and its layout. In accordance with the customer's requirements, a deburring method was chosen that allows the creation of a universal robotic workplace for deburring hydraulic valve blocks. The approximate deburring time of all edges on the B1 block was estimated on the basis of the tool paths, which were checked in the simplified model of the robotic workplace in the PowerMill Robot Interface. The deburring procedure, the estimate of the approximate deburring time of the B1 block, and the layout of the robotic workplace are the main results of this thesis.
1054

Localisation temps-réel d'un robot par vision monoculaire et fusion multicapteurs / Real-time robot location by monocular vision and multi-sensor fusion

Charmette, Baptiste 14 December 2012 (has links)
Ce mémoire présente un système de localisation par vision pour un robot mobile circulant dans un milieu urbain. Pour cela, une première phase d’apprentissage où le robot est conduit manuellement est réalisée pour enregistrer une séquence vidéo. Les images ainsi acquises sont ensuite utilisées dans une phase hors ligne pour construire une carte 3D de l’environnement. Par la suite, le véhicule peut se déplacer dans la zone, de manière autonome ou non, et l’image reçue par la caméra permet de le positionner dans la carte. Contrairement aux travaux précédents, la trajectoire suivie peut être différente de la trajectoire d’apprentissage. L’algorithme développé permet en effet de conserver la localisation malgré des changements de point de vue importants par rapport aux images acquises initialement. Le principe consiste à modéliser les points de repère sous forme de facettes localement planes, surnommées patchs plan, dont l’orientation est connue. Lorsque le véhicule se déplace, une prédiction de la position courante est réalisée et la déformation des facettes induite par le changement de point de vue est reproduite. De cette façon la recherche des amers revient à comparer des images pratiquement identiques, facilitant ainsi leur appariement. Lorsque les positions sur l’image de plusieurs amers sont connues, la connaissance de leur position 3D permet de déduire la position du robot. La transformation de ces patchs plan est complexe et demande un temps de calcul important, incompatible avec une utilisation temps-réel. Pour améliorer les performances de l’algorithme, la localisation a été implémentée sur une architecture GPU offrant de nombreux outils permettant d’utiliser cet algorithme avec des performances utilisables en temps-réel. Afin de prédire la position du robot de manière aussi précise que possible, un modèle de mouvement du robot a été mis en place. Il utilise, en plus de la caméra, les informations provenant des capteurs odométriques. 
Cela permet d’améliorer la prédiction et les expérimentations montrent que cela fournit une plus grande robustesse en cas de pertes d’images lors du traitement. Pour finir ce mémoire détaille les différentes performances de ce système à travers plusieurs expérimentations en conditions réelles. La précision de la position a été mesurée en comparant la localisation avec une référence enregistrée par un GPS différentiel. / This dissertation presents a vision-based localization system for a mobile robot in an urban context. In this goal, the robot is first manually driven to record a learning image sequence. These images are then processed in an off-line way to build a 3D map of the area. Then vehicle can be —either automatically or manually— driven in the area and images seen by the camera are used to compute the position in the map. In contrast to previous works, the trajectory can be different from the learning sequence. The algorithm is indeed able to keep localization in spite of important viewpoint changes from the learning images. To do that, the features are modeled as locally planar features —named patches— whose orientation is known. While the vehicle is moving, its position is predicted and patches are warped to model the viewpoint change. In this way, matching the patches with points in the image is eased because their appearances are almost the same. After the matching, 3D positions of the patches associated with 2D points on the image are used to compute robot position. The warp of the patch is computationally expensive. To achieve real-time performance, the algorithm has been implemented on GPU architecture and many improvements have been done using tools provided by the GPU. In order to have a pose prediction as precise as possible, a motion model of the robot has been developed. This model uses, in addition to the vision-based localization, information acquired from odometric sensors. 
Experiments using this prediction model show that the system is more robust, especially in case of image loss. Finally, many experiments in real situations are described at the end of this dissertation. A differential GPS is used to evaluate the localization result of the algorithm.
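The patch-warp prediction step relies on the standard plane-induced homography between two views. A minimal sketch, assuming a pinhole intrinsic matrix `K` and a plane with unit normal `n` at depth `d` in the first camera (sign conventions for `t` vary between references; the thesis's full patch transformation also handles appearance, which is omitted here):

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """Homography mapping pixels of a plane (normal n, distance d in the
    first view) into a second view related by rotation R and translation t:
    H ~ K (R - t n^T / d) K^-1 (standard plane-induced homography)."""
    return K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
# With no camera motion the homography reduces to the identity.
H = plane_homography(K, np.eye(3), np.zeros(3), np.array([0.0, 0.0, 1.0]), 2.0)
```

Warping each patch with the homography predicted from the motion model is what makes the matching step compare nearly identical appearances, as the abstract explains.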
1057

Otimização de um sistema de patrulhamento por múltiplos robôs utilizando algoritmo genético

Sá, Rafael José Fonseca de 09 September 2016 (has links)
Com a evolução da tecnologia, estão aumentando as aplicabilidades dos robôs em nosso meio. Em alguns casos, a utilização de sistemas com múltiplos robôs autônomos trabalhando em cooperação se torna uma ótima alternativa. Há várias pesquisas em andamento na área de robótica com o intuito de aprimorar estas tarefas. Entre estas pesquisas estão os sistemas de patrulhamento. Neste trabalho, o sistema de patrulhamento utilizando múltiplos robôs é implementado considerando a série de chegada de alertas nas estações de monitoramento e o robô pode andar somente em uma única direção. Devido ao número de estações que podem entrar em alerta e ao número de robôs, o controle desse sistema se torna complexo. Como a finalidade de um sistema de patrulhamento é atender possíveis alertas de invasores, é imprescindível que haja uma resposta rápida do controlador responsável para que um robô logo seja encaminhado com o propósito de atender a esse alerta. No caso de sistemas com múltiplos robôs, é necessário que haja uma coordenação do controlador para que os robôs possam atender o máximo de alertas possíveis em um menor instante de tempo. Para resolver esse problema, foi utilizado um controlador composto por uma técnica inteligente de otimização bioinspirada chamada de “Algoritmo Genético” (AG).
Este controlador centraliza todas as decisões de controle dos robôs, sendo responsável por orientá-los em relação aos movimentos e captação de informação. As decisões são tomadas com o intuito de maximizar a recompensa do sistema. Esta recompensa é composta pelo ganho de informação do sistema e por uma penalização gerada pela demora em atender aos alertas ativados. Foram feitas simulações com a intenção de verificar a eficácia desse controlador, comparando-o com um controlador utilizando heurísticas pré-definidas. Essas simulações comprovaram a eficiência do controlador via Algoritmo Genético. Devido ao fato do controlador via AG analisar o sistema como um todo enquanto que o controlador heurístico analisa apenas o estágio atual, foi possível observar que a distribuição dos robôs no mapa permitia um atendimento mais ágil às estações com alerta ativados, assim como uma maior aquisição de informações do local. Outro fato importante foi em relação à complexidade do sistema. Foi notado que quanto maior a complexidade do sistema, ou seja, quanto maior o número de robôs e de estações, melhor era a eficiência do controlador via Algoritmo Genético em relação ao controlador heurístico. / New technologies have been considerable advances, and consequently, thus allows the robot appearance as an integral part of our daily lives. In recent years, the design of cooperative multi-robot systems has become a highly active research area within robotics. Cooperative multi-robot systems (MRS) have received significant attention by the robotics community for the past two decades, because their successful deployment have unquestionable social and economical relevance in many application domain. There are several advantages of using multi-robot systems in different application and task. The development and conception of patrolling methods using multi-robot systems is a scientific area which has a growing interest. 
In this work, the patrol system using multiple robots is implemented considering the series of arrival of alerts at the monitoring stations; each robot is limited to moving in a single direction. Due to the large number of stations that can assume an alert condition and the large number of robots, the system control becomes extremely complex. Patrol systems are usually designed for surveillance: an efficient controller permits a patrol that maximizes the chances of detecting an adversary trying to penetrate the patrol path. The obvious advantage of multi-robot exploration is its concurrency, which can greatly reduce the time needed for the mission. Coordination among multiple robots is necessary to achieve efficiency in robotic exploration; when working in groups, robots need to coordinate their activities. A Genetic Algorithm approach was therefore implemented to carry out an optimized control action provided by the controller; in effect, the controller determines the robots' behavior. The decision strategies are implemented in order to maximize the system response. The present work is a computational study of a controller based on a Genetic Algorithm and its comparison with a controller based on pre-defined heuristics. The simulation results show the efficiency of the proposed Genetic Algorithm controller when compared with the heuristic controller. The decisions of the Genetic Algorithm controller allowed a better distribution of the robots on the map, leading to faster service of stations with active alerts, as well as increased acquisition of location information. Another important finding concerns the complexity of the system: the larger the number of robots and stations, the greater the efficiency of the Genetic Algorithm controller relative to the heuristic one.
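The controller in the thesis optimizes robot actions over the whole system; a toy GA for the related subproblem of assigning robots to alerted stations gives the flavor. The population size, selection scheme, and mutation rate below are made-up illustrative settings, not the thesis's:

```python
import random

def ga_assign(dist, n_robots, pop=60, gens=120, pm=0.1, seed=1):
    """Toy GA: assign each alerted station to a robot so the total robot-to-
    station travel distance is minimized. A chromosome lists the robot index
    for each station; dist[r][s] is robot r's distance to station s."""
    rng = random.Random(seed)
    n_st = len(dist[0])
    cost = lambda c: sum(dist[c[s]][s] for s in range(n_st))
    popu = [[rng.randrange(n_robots) for _ in range(n_st)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=cost)
        parents = popu[: pop // 2]                 # truncation selection (elitist)
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = [a[s] if rng.random() < 0.5 else b[s] for s in range(n_st)]
            if rng.random() < pm:                  # mutation: reassign one station
                child[rng.randrange(n_st)] = rng.randrange(n_robots)
            children.append(child)
        popu = parents + children
    return min(popu, key=cost)

# 2 robots, 4 stations: robot 0 is near stations 0-1, robot 1 near 2-3.
dist = [[1, 2, 9, 9], [9, 9, 1, 2]]
best = ga_assign(dist, 2)
```

As in the abstract, the appeal of the GA is that the fitness evaluates an assignment globally, whereas a greedy heuristic looks only at the current stage.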
1058

De l'Autonomie des Robots Humanoïdes : Planification de Contacts pour Mouvements de Locomotion et Tâches de Manipulation / On Autonomous Behaviour of Humanoid Robots : Contact Planning for Locomotion and Manipulation

Bouyarmane, Karim 22 November 2011 (has links)
Nous proposons une approche de planification unifiée pour robots humanoïdes réalisant des tâches de locomotion et de manipulation nécessitant une dextérité propre aux systèmes anthropomorphes. Ces tâches sont basées sur des transitions de contacts ; contacts entre les extrémités des membres locomoteurs et l'environnement dans le cas du problème de locomotion par exemple, ou entre les extrémités de l'organe préhensible effecteur et l'objet manipulé dans le cas du problème de manipulation. Nous planifions ces transitions de contacts pour des systèmes abstraits constitués d'autant de robots, d'objets, et de supports dans l'environnement que désiré/nécessaire pour la modélisation du problème. Cette approche permet de s'affranchir de la distinction de nature entre tâches de locomotion et de manipulation et s'étend à une variété d'autres problèmes tels que la coopération entre plusieurs agents. Nous introduisons notre paradigme de planification non-découplée de locomotion et de manipulation en exhibant la stratification induite dans l'espace des configurations de systèmes simplifiés pour lesquels nous résolvons analytiquement le problème en comparant des méthodes de planification géométrique, non-holonome, et dynamique. Nous présentons ensuite l'algorithme de planification de contacts basé sur une recherche best-first. Cet algorithme fait appel à un solveur de cinématique inverse qui prend en compte des configurations de contacts générales dans l'espace pouvant être établis entre robots, objets, et environnement dans toutes les combinaisons possibles, le tout sous contraintes d'équilibre statique et de respect des limitations mécaniques des robots. La génération de mouvement respectant l'équation de dynamique Lagrangienne est obtenue par une formulation en programme quadratique. Enfin nous envisageons une extension à des supports de contact déformables en considérant des comportements linéaires-élastiques résolus par éléments finis. 
/ We propose a unified planning approach for autonomous humanoid robots that perform dexterous locomotion and manipulation tasks. These tasks are based on contact transitions; for instance between the locomotion limbs of the robot and the environment, or between the manipulation end-effector of the robot and the manipulated object. We plan these contact transitions for general abstract systems made of arbitrary numbers of robots, manipulated objects, and environment supports. This approach allows us to erase the distinction between the locomotion and manipulation nature of the tasks and to extend the method to various other planning problems such as collaborative manipulation and locomotion between multiple agents. We introduce our non-decoupled locomotion-and-manipulation planning paradigm by exhibiting the induced stratification of the configuration space of example simplified systems for which we analytically solve the problem comparing geometric path planning, kinematic non-holonomic planning, and dynamic trajectory planning methods. We then present the contact planning algorithm based on best-first search. The algorithm relies on an inverse kinematics solver that handles general robot-robot, robot-object, robot-environment, object-environment, non-horizontal, non-coplanar, friction-based, multi-contact configurations, under static equilibrium and physical limitation constraints. The continuous dynamics-consistent motion is generated in the locomotion case using a quadratic programming formulation. We finally envision the extension to deformable environment contact support by considering linear elasticity behaviours solved using the finite element method.
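The best-first contact search can be sketched independently of the IK and equilibrium checks. In the thesis those checks decide which neighbor stances are feasible; here `successors` and `score` are placeholder callables standing in for them:

```python
import heapq

def best_first_contacts(start, goal, successors, score):
    """Best-first search skeleton over discrete contact stances: pop the most
    promising stance (lowest heuristic score), expand its feasible neighbor
    stances, and stop when the goal stance is reached."""
    frontier = [(score(start), start)]
    parent = {start: None}
    while frontier:
        _, stance = heapq.heappop(frontier)
        if stance == goal:
            path = []
            while stance is not None:              # walk parents back to start
                path.append(stance)
                stance = parent[stance]
            return path[::-1]
        for nxt in successors(stance):
            if nxt not in parent:                  # skip already-visited stances
                parent[nxt] = stance
                heapq.heappush(frontier, (score(nxt), nxt))
    return None

# Toy 1-D example: stances are integers, goal is 5, neighbors are +-1 steps.
path = best_first_contacts(0, 5, lambda s: [s - 1, s + 1],
                           lambda s: abs(5 - s))
```

In the full planner, each expanded stance is validated by the inverse kinematics solver under static equilibrium before being pushed onto the frontier.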
1059

Commande hybride position/force robuste d’un robot manipulateur utilisé en usinage et/ou en soudage / Robust hybrid position/force control of a manipulator used in machining and/or in FSW

Qin, Jinna 02 December 2013
This thesis addresses the control of slightly flexible industrial robot manipulators used to robotize machining and friction stir welding (FSW) processes. The first objective is to model the robots and the processes. The developed models cover the kinematics and dynamics of 6-axis serial robots with flexibility localized at the joints. The dynamic model parameters and the joint stiffnesses are identified with an output-error method that yields good estimation accuracy: after identification, the relative norm of the model residual is 3.2%. The second objective is to improve the performance of the robotized processes. A simulator was developed that integrates the dynamic model of the flexible robot, the process models, and a model of the robot controller, including the real-time axis control laws and the trajectory generator. A nonlinear high-gain observer is proposed to estimate the complete state of the flexible robot as well as the interaction wrenches, and an observer-based compensator then corrects the positioning errors in real time. Experimental validation on a Kuka industrial robot shows very good estimation of the complete state by the observer, and precise FSW welding was performed successfully thanks to the real-time compensation of the manipulator's flexibility.
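The high-gain observer idea mentioned in the abstract can be illustrated on a minimal example. The sketch below is an assumption-laden toy, not the thesis's robot model: a 1-DOF joint is reduced to a double integrator with a known input, and the gains `a1`, `a2` and the scale `eps` are illustrative. Position alone is measured; the correction gains scale as 1/ε and 1/ε², which is the defining feature of a high-gain observer.

```python
def simulate_observer(u, dt=1e-3, steps=4000, eps=0.05, a1=2.0, a2=1.0):
    """Euler simulation of a 1-DOF joint (double integrator, known
    input u(t)) alongside a high-gain observer that sees only the
    position. Returns ((x1, x2), (xh1, xh2)): true vs estimated state."""
    x1, x2 = 0.0, 1.0          # true position and velocity
    xh1, xh2 = 0.0, 0.0        # observer state, deliberately wrong
    for k in range(steps):
        t = k * dt
        e = x1 - xh1           # output injection error (y = x1 measured)
        # Observer: correction gains scale as 1/eps and 1/eps**2.
        xh1 += dt * (xh2 + (a1 / eps) * e)
        xh2 += dt * (u(t) + (a2 / eps ** 2) * e)
        # True plant, integrated with the same explicit Euler step.
        x1 += dt * x2
        x2 += dt * u(t)
    return (x1, x2), (xh1, xh2)
```

With these values the error dynamics are critically damped with time constant ε, so the estimate converges within a fraction of a second of simulated time. Shrinking ε speeds up convergence but amplifies measurement noise, which is the usual high-gain trade-off.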
1060

Analyse par intervalles pour la détection de boucles dans la trajectoire d'un robot mobile / Interval analysis for loop detection in the trajectory of a mobile robot

Aubry, Clément 03 October 2014
The work presented in this thesis develops a new method for detecting loops in the trajectory of a mobile robot. Unlike existing methods, which rely on exteroceptive measurements, it uses only proprioceptive data and is therefore fully decoupled from the simultaneous localization and mapping problems with which loop detection is usually associated. In our bounded-error measurement context, interval analysis allows us to compute two instants at which the robot's position is identical even though a significant motion occurred between them. The method is automatic and requires only the robot's state equations and the velocity measurements recorded during its mission. Results on real data, from an experiment carried out underwater, are very satisfactory: the algorithm proved the existence and uniqueness of more than half of the loops observed in the robot's trajectory.
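The bounded-error reasoning behind this approach can be illustrated with a minimal interval-arithmetic sketch. The class and example below are purely illustrative: integrating interval-valued velocity measurements between two instants yields a guaranteed enclosure of the robot's displacement, and a loop between those instants can be excluded as soon as the enclosure misses zero on some axis. Proving that a loop exists and is unique, as the thesis does, requires additional interval tests beyond this inclusion check.

```python
class Interval:
    """Closed interval [lo, hi] with just the operations needed below."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def scale(self, k):  # multiply by a nonnegative scalar (a time step)
        return Interval(self.lo * k, self.hi * k)
    def contains_zero(self):
        return self.lo <= 0.0 <= self.hi

def loop_possible(vel_boxes, dt):
    """Riemann-sum enclosure of the displacement between two instants
    from bounded-error (vx, vy) velocity measurements; a loop is
    excluded as soon as 0 leaves the enclosure along one axis."""
    dx, dy = Interval(0.0, 0.0), Interval(0.0, 0.0)
    for vx, vy in vel_boxes:
        dx, dy = dx + vx.scale(dt), dy + vy.scale(dt)
    return dx.contains_zero() and dy.contains_zero()

# Out-and-back run: the displacement enclosure straddles 0 on both axes,
# so a loop cannot be excluded; a one-way run is excluded immediately.
out = [(Interval(0.9, 1.1), Interval(-0.1, 0.1))] * 10
back = [(Interval(-1.1, -0.9), Interval(-0.1, 0.1))] * 10
```

Here `loop_possible(out + back, 0.1)` holds while `loop_possible(out + out, 0.1)` does not: the enclosure is conservative, so a negative answer is a guaranteed exclusion even with noisy velocities.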
