
Development of Integration Algorithms for Vision/Force Robot Control with Automatic Decision System

Bdiwi, Mohamad 10 June 2014 (has links)
In advanced robot applications, the challenge today is that the robot should perform different successive subtasks to achieve one or more complicated tasks, similar to a human. Such tasks therefore require combining different kinds of sensors in order to obtain full information about the work environment. From the point of view of control, however, more sensors mean more possible structures for the control system. As shown previously, vision and force sensors are the most common external sensors in robot systems, and the scientific literature consequently offers numerous control algorithms and structures for vision/force robot control, e.g. shared control, traded control, etc. The gaps in the integration of vision/force robot control can be summarized as follows:
• How to define which subspaces should be vision, position or force controlled?
• When should the controller switch from one control mode to another?
• How to ensure that the visual information can be used reliably?
• How to define the most appropriate vision/force control structure?
In many previous works, a single vision/force control structure, pre-defined by the programmer, is used while performing a specified task. Moreover, if the task is modified or changed, it becomes very complicated for the user to describe the task and to define the most appropriate vision/force robot control, especially if the user is inexperienced. Furthermore, vision and force sensors are often used only as simple feedback (e.g. the vision sensor is usually used as a position estimator) or merely to avoid obstacles. As a result, much useful sensor information that could help the robot perform the task autonomously is lost. In our opinion, this failure to define the most appropriate vision/force robot control, together with the weak utilization of all the information the sensors could provide, imposes important limits that prevent the robot from being versatile, autonomous, dependable and user-friendly. The scope of this thesis is therefore to help increase autonomy, versatility, dependability and user-friendliness in areas of robotics that require vision/force integration. More concretely:
1. Autonomy: through an automatic decision system that defines the most appropriate vision/force control modes for different kinds of tasks and chooses the best vision/force control structure depending on the surrounding environment and a priori knowledge.
2. Versatility: by preparing relevant scenarios for different situations in which both visual servoing and force control are necessary and indispensable.
3. Dependability: in the sense that the robot should rely on its own sensors more than on reprogramming and human intervention; in other words, the robot system should use all the information that the vision and force sensors can provide, not only about the target object but also through feature extraction of the whole scene.
4. User-friendliness: by designing a high-level description of the task, the object and the sensor configuration that is also suitable for an inexperienced user.
If these properties are achieved, the proposed robot system can:
• Perform different successive and complex tasks.
• Grasp/contact and track imprecisely placed objects with different poses.
• Automatically decide the most appropriate combination of vision/force feedback for every task and react immediately, from one control cycle to the next, to changes caused by unforeseen events.
• Benefit from all the advantages of the different vision/force control structures.
• Benefit from all the information provided by the sensors.
• Reduce human intervention or reprogramming during the execution of the task.
• Facilitate the task description and the entry of a priori knowledge by the user, even an inexperienced one.
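The shared and traded structures discussed in this abstract are often built around a selection matrix that assigns each Cartesian axis either to vision or to force control. The following is a minimal, generic Python sketch of that idea, not the thesis's automatic decision system; the function name, gains and 6-vector conventions are illustrative assumptions.

import numpy as np

def hybrid_command(v_vision, f_error, selection, k_force=0.002):
    """Blend a vision-based twist with a force-correction twist.

    selection: 6-vector of 1s (vision-controlled axis) and 0s
    (force-controlled axis), i.e. the diagonal of the classical
    selection matrix S used in shared/hybrid control.
    """
    S = np.diag(selection)                     # vision-controlled subspace
    v_force = k_force * np.asarray(f_error)    # simple proportional force law
    return S @ np.asarray(v_vision) + (np.eye(6) - S) @ v_force

# Example: keep contact along z (force-controlled) while vision controls the rest.
v_cam = np.array([0.01, 0.0, 0.0, 0.0, 0.0, 0.02])   # from visual servoing
f_err = np.array([0.0, 0.0, -1.5, 0.0, 0.0, 0.0])    # measured - desired force
u = hybrid_command(v_cam, f_err, selection=[1, 1, 0, 1, 1, 1])

A decision system of the kind described above would, in effect, choose the selection vector and the control structure at each control cycle instead of leaving them fixed by the programmer.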

Commande d'un robot de télé-échographie par asservissement visuel / Control of a tele-echography robot by visual servoing

Li, Tao 14 February 2013 (has links)
The lightweight robots used for robotized tele-echography allow a medical expert to remotely operate a 2D ultrasound probe. The real-time analysis of the patient's ultrasound images, received via a standard communication link, provides the expert with the relevant information to establish a diagnosis. Clinical validations of the robotized tele-echography concept show that it is possible to overcome the lack of sonographers in medically isolated sites. The probe-holder robot is positioned and held on the patient's body by a paramedical assistant, based on the information provided by the specialist via videoconferencing. However, the small mass of the robot, the fact that it is held by an assistant on the patient's body, and the patient's physiological movements disturb the position of the probe and can therefore cause the loss of the region of interest of the organ under investigation during the teleoperated medical act. This thesis develops a visual servoing approach based on 2D ultrasound image moments. Since the 2D moments are computed from the contour points of the section of interest, an efficient image-processing algorithm is needed to detect and track the moving contour of interest. For this purpose, a parametric active contour method based on Fourier descriptors is presented. The control laws corresponding to three autonomous tasks, which allow an organ to be searched for and kept visible within the ultrasound plane during the teleoperated medical act, are implemented and validated on the robotic platform of the ANR Prosit project.
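Because the servoed features are combinations of 2D image moments computed from the tracked contour points, a small illustration of that computation is given below. This sketch uses OpenCV's cv2.moments on a hypothetical contour and is only an example of the principle, not the thesis's actual processing chain.

import cv2
import numpy as np

# Hypothetical closed contour of the organ cross-section (pixel coordinates),
# e.g. the output of an active-contour tracker.
contour = np.array([[120, 80], [180, 90], [200, 150], [150, 190], [110, 140]],
                   dtype=np.float32)

m = cv2.moments(contour)            # area moments of the enclosed polygon
area = m["m00"]
xg, yg = m["m10"] / m["m00"], m["m01"] / m["m00"]   # centroid of the section
# Centred second-order moments, usable as orientation/shape features.
mu20, mu02, mu11 = m["mu20"], m["mu02"], m["mu11"]
orientation = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)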

Going further with direct visual servoing / Aller plus loin avec les asservissements visuels directs

Bateux, Quentin 12 February 2018 (has links)
In this thesis we focus on visual servoing (VS) techniques, which are critical for many robotic vision applications, with an emphasis on direct VS. In order to improve the state of the art of direct methods, we revisit several components of traditional VS control laws. We first propose a generic framework for considering histograms as a new visual servoing feature. It allows efficient control laws to be defined from any type of histogram describing the image, from intensity and color histograms to Histograms of Oriented Gradients. A novel direct visual servoing control law is then proposed, based on a particle filter that replaces the optimization part of classical VS tasks, making it possible to accomplish tasks associated with highly non-linear and non-convex cost functions. The particle filter estimate can be computed in real time through image transfer techniques that evaluate the camera motions associated with suitable displacements of the considered visual features in the image. Lastly, we present a new way of modeling the visual servoing problem using deep learning and Convolutional Neural Networks, in order to alleviate the difficulty of modeling non-convex problems with classical analytical methods. Using image transfer techniques, we propose a method to quickly generate large training datasets so that network architectures pre-trained on related tasks can be fine-tuned to solve VS tasks. We show that this method can be applied both to model known static scenes and, more generally, to model relative pose estimation between pairs of viewpoints taken from arbitrary scenes.
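The particle-filter optimization described in this abstract can be pictured as repeatedly sampling candidate camera displacements, transferring the current image under each candidate and re-weighting the candidates by photometric similarity with the reference image. The sketch below illustrates one such cycle under simplifying assumptions; warp_image is a hypothetical stand-in for the image-transfer step, and the noise and weighting parameters are illustrative, not taken from the thesis.

import numpy as np

def particle_filter_step(particles, weights, current, reference, warp_image,
                         motion_noise=0.005, lam=50.0):
    """One resample / predict / re-weight cycle over candidate camera twists.

    particles : (N, 6) array of candidate camera displacements (assumed parametrisation)
    warp_image: hypothetical image-transfer function warp_image(img, twist) -> image
    """
    N = len(particles)
    # Resample proportionally to the previous weights.
    idx = np.random.choice(N, size=N, p=weights)
    particles = particles[idx] + motion_noise * np.random.randn(N, 6)
    # Re-weight each candidate by photometric similarity after image transfer.
    errors = np.array([np.mean((warp_image(current, p) - reference) ** 2)
                       for p in particles])
    weights = np.exp(-lam * errors / (errors.max() + 1e-12))
    weights /= weights.sum()
    estimate = weights @ particles          # weighted mean displacement
    return particles, weights, estimate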

Asservissement visuel direct utilisant les décompositions en shearlets et en ondelettes de l'image / Direct visual servoing using shearlet and wavelet transforms of the image

Duflot, Lesley-Ann 13 July 2018 (has links)
A visual servoing scheme is a closed-loop control approach that uses visual information as feedback to control the motion of a robotic system. This data, called visual features, can be 2D or 3D. This thesis deals with the development of a new generation of 2D direct visual servoing methods in which the control signal inputs are the coefficients of a multiscale image representation. Specifically, we consider multiscale representations based on discrete wavelet and shearlet transforms. These representations describe the image at several levels, emphasizing either its low or its high frequencies along several orientations: image regions carrying much information, such as contours and singular points, yield large coefficients in the wavelet or shearlet transform of the image, whereas uniform regions yield coefficients close to zero. The thesis began with the development of a shearlet-based visual servoing scheme for ultrasound imaging, which performed well in terms of precision and robustness for this medical application. Its main contribution, however, is a control framework that can use either wavelets or shearlets, together with the validation of this method on a monocular camera and on an optical coherence tomography sensor under both nominal and unfavorable conditions. The wavelet- and shearlet-based methods proved accurate and robust in these various conditions, and this work opens the door to a coupled use of visual servoing and compressed sensing.
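A minimal sketch of the kind of feature vector involved is shown below: the wavelet coefficients of the current and desired images are stacked, and their difference serves as the servoing error. It relies on PyWavelets and only illustrates the principle, not the thesis's control scheme; the wavelet family and decomposition level are arbitrary choices.

import numpy as np
import pywt

def wavelet_features(image, wavelet="db2", level=2):
    """Stack all wavelet coefficients of an image into one feature vector."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet=wavelet, level=level)
    arr, _ = pywt.coeffs_to_array(coeffs)   # flatten the multiscale structure
    return arr.ravel()

# Direct visual-servoing error between current and desired images (assumed to be
# grayscale arrays of the same size); the control law would regulate e to zero.
# e = wavelet_features(I_current) - wavelet_features(I_desired)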

Mobile Magnetic Microrobots Control and Study in Microfluidic Environment: New Tools for Biomedical Applications / Contrôle et étude de microrobots magnétiques mobiles en milieu microfluidique : nouveaux outils pour le biomédicale

Salmon, Hugo 07 October 2014 (has links)
In the development of high-precision micromanipulation tools for biomedical applications, immersed mobile microrobots are an emerging and promising technology for in-vitro applications and, in the longer term, for in-vivo use. This work studies the magnetic propulsion of microrobots in fluids flowing through microchannels, at a scale where adhesion and damping phenomena dominate. Their application to transduction operations is developed in a second part. A visual servoing setup with a high sampling rate (about 5 kHz) was developed, making control possible under either a uniform magnetic field or a field gradient. Achieving this performance notably required a multi-threaded user interface so that images could be acquired and processed in parallel with the actuation of the robot. The analysis of the dynamics gives a better understanding of the sometimes unpredictable phenomena linked to the motion of the robot, MagPol, integrated in a microfluidic chip; conversely, the robot can serve as a sensor in its fluidic environment. This original robot design was conceived for micromanipulation and also makes it possible to explore new displacement strategies. These capabilities were tested on objects of the same characteristic size as those found in cell biology (beads, bubbles, 10 µm to 100 µm). Finally, visual servoing was demonstrated on planned trajectories: provided a sufficiently efficient algorithm is available, high-frequency real-time sampling becomes possible, and good performance on complex trajectories is demonstrated. The performance, portability and reproducibility of the system demonstrate high-throughput transduction capabilities that are very promising for applications.

Etalonnage d'un système de lumière structurée par asservissement visuel / Structured light system calibration using visual servoing

Mosnier, Jérémie 12 December 2011 (has links)
This thesis is part of a national project named SRDViand, whose aim was to develop a robotic system for deboning and cutting meat animals. To determine the cutting paths intelligently, a structured light system was developed; such systems are vision systems that use light-projection patterns for 3D reconstruction tasks. To achieve the best results, a new calibration method for structured light systems was defined. Based on a broad state of the art, and on a proposed classification of existing methods, we propose to calibrate a camera-projector pair using visual servoing. The validity and the results of this method were assessed through numerous experimental tests conducted within the SRDViand project. Following the development of this method, a prototype for bovine cutting was built.

Automatic guidance of robotized 2D ultrasound probes with visual servoing based on image moments.

Mebarki, Rafik 25 March 2010 (has links) (PDF)
This dissertation presents a new 2D ultrasound-based visual servoing method. The main goal is to automatically guide a robotized 2D ultrasound probe held by a medical robot in order to reach a desired cross-section ultrasound image of an object of interest. The method controls both the in-plane and out-of-plane motions of a 2D ultrasound probe, and makes direct use of the 2D ultrasound image in the visual servo scheme, where the feedback visual features are combinations of image moments. To build the servo scheme, we derive the analytical form of the interaction matrix that relates the time variation of the image moments to the probe velocity. This modeling is theoretically verified on simple shapes such as spherical and cylindrical objects. In order to automatically position the 2D ultrasound probe with respect to an observed object, we propose six relevant independent visual features to control the 6 degrees of freedom of the robotic system. The system is then endowed with the capability of automatically interacting with objects without any prior information about their shape, 3D parameters, or 3D location. To do so, we develop on-line estimation methods that identify the parameters involved in the visual servo scheme. We conducted simulation trials on simulated volumetric objects, and experimental trials on objects and soft tissues immersed in a water-filled tank. The successful results obtained show the validity of the developed methods and their robustness to different errors and perturbations, especially those inherent to the ultrasound modality. Keywords: medical robotics, visual servoing, 2D ultrasound imaging, kinematics modeling, model-free servoing.
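Schemes of this kind typically apply the classical visual-servoing law v = -lambda * pinv(L) * (s - s*), with s the vector of moment-based features and L the estimated interaction matrix. The snippet below is a generic sketch of that law; the thesis's six specific moment features and its analytical interaction matrix are not reproduced here, so L_hat stands for whatever estimate is available.

import numpy as np

def visual_servo_velocity(s, s_star, L_hat, gain=0.5):
    """Classical law v = -lambda * pinv(L_hat) @ (s - s_star).

    s, s_star : current / desired feature vectors (e.g. 6 moment combinations)
    L_hat     : estimated interaction matrix such that ds/dt = L_hat @ v
    Returns the probe velocity screw sent to the robot controller.
    """
    error = np.asarray(s) - np.asarray(s_star)
    return -gain * np.linalg.pinv(L_hat) @ error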

Multi-camera uncalibrated visual servoing

Marshall, Matthew Q. 20 September 2013 (has links)
Uncalibrated visual servoing (VS) can improve robot performance without needing camera and robot parameters. Multiple cameras improve the precision of uncalibrated VS, but no previous work uses more than two cameras simultaneously. The first data for uncalibrated VS simultaneously using more than two cameras are presented. VS performance is also compared for two different camera models, a high-cost camera and a low-cost camera, which differ in image noise magnitude and focal length. A Kalman-filter-based control law for uncalibrated VS is introduced and shown to be stable under the assumptions that the robot joint-level servo control can reach the commanded joint offsets and that the servoing path passes through at least one full-column-rank robot configuration. Adaptive filtering by a covariance-matching technique is applied to achieve automatic camera weighting, prioritizing the best available data. A decentralized sensor-fusion architecture is used to ensure continuous servoing under camera occlusion. The decentralized adaptive Kalman filter (DAKF) control law is compared to a classical method, Gauss-Newton, via simulation and experimentation. Numerical results show that the DAKF can improve the average tracking error for moving targets and the convergence time to static targets. The DAKF reduces the system's sensitivity to noise and poor camera placement, yielding smaller outliers than Gauss-Newton, and improves visual servoing performance, simplicity, and reliability.
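In uncalibrated VS, the image Jacobian relating joint increments to feature increments is typically estimated online. The sketch below shows one Kalman-filter update of a vectorized Jacobian estimate from an observed joint/feature increment pair; it is a simplified stand-in for, not a reproduction of, the DAKF control law, and the noise parameters Q and R are placeholder values.

import numpy as np

def kalman_jacobian_update(J, P, dq, ds, Q=1e-6, R=1e-3):
    """One Kalman update of the vectorised image Jacobian.

    State x = vec(J) (column-major), with measurement model ds = (dq^T kron I_m) x.
    J  : (m, n) current Jacobian estimate     P  : (m*n, m*n) state covariance
    dq : (n,) observed joint increment        ds : (m,) observed feature increment
    """
    m, n = J.shape
    x = J.flatten(order="F")
    P = P + Q * np.eye(m * n)                       # prediction (random-walk model)
    H = np.kron(dq.reshape(1, -1), np.eye(m))       # measurement matrix, (m x m*n)
    S = H @ P @ H.T + R * np.eye(m)
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (ds - H @ x)
    P = (np.eye(m * n) - K @ H) @ P
    return x.reshape((m, n), order="F"), P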

Estudo e implementação de um método de cinemática inversa baseado em busca heurística para robôs manipuladores = aplicação em robôs redundantes e controle servo visual / Heuristic search based inverse kinematics for robotic manipulators: application to redundant robots and visual servoing

Nicolato, Fabricio 06 January 2007 (has links)
Advisor: Marconi Kolm Madrid. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação. / Abstract: This thesis deals with the problem of solving the inverse kinematics model of redundant and non-redundant industrial manipulators. The work was developed in a theoretical and a practical part. The problem was approached with a heuristic search method in which the solution of the inverse kinematics is built step by step, computing the contribution of the movement of a single joint at each iteration. In this way, the n-dimensional problem is transformed into simpler one-dimensional problems, whose analytical solution for both rotational and prismatic joints is given in terms of the Denavit-Hartenberg representation. The proposed method has no internal singularities. Furthermore, the method was extended to incorporate information from external sensors in order to make the process more robust to uncertainties in the models involved. Several simulations and comparisons with traditional techniques are presented, highlighting the advantages of the proposed approach. The work also included the design and construction of an experimental environment and the implementation of the techniques developed in the theoretical part. A system with a 3-DOF redundant planar robot was developed, together with its control, drive and interface systems, using hardware-in-the-loop techniques and programmable logic. The developed techniques were applied in this experimental environment, demonstrating their efficiency and characteristics such as ease of dealing with redundancies, real-time capability, and robustness to parameter uncertainties.
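The step-by-step, one-joint-at-a-time construction described in this abstract is close in spirit to cyclic coordinate descent, where each joint receives a closed-form one-dimensional correction in turn. The following planar 3-DOF sketch illustrates that general idea; it is not the thesis's algorithm (which is formulated in the Denavit-Hartenberg representation for both rotational and prismatic joints), and the link lengths and target are illustrative.

import numpy as np

def fk_points(q, lengths):
    """Joint positions of a planar serial arm (base at the origin)."""
    pts, ang, p = [np.zeros(2)], 0.0, np.zeros(2)
    for qi, li in zip(q, lengths):
        ang += qi
        p = p + li * np.array([np.cos(ang), np.sin(ang)])
        pts.append(p)
    return pts

def ccd_ik(target, q, lengths, iters=50, tol=1e-4):
    """Solve IK one joint at a time: rotate each joint so the end effector
    swings toward the target (closed-form 1-D solution per joint)."""
    q = np.array(q, dtype=float)
    target = np.asarray(target, dtype=float)
    for _ in range(iters):
        if np.linalg.norm(fk_points(q, lengths)[-1] - target) < tol:
            break
        for j in reversed(range(len(q))):        # from wrist joint down to base
            pts = fk_points(q, lengths)
            to_eff = pts[-1] - pts[j]            # pivot -> end effector
            to_tgt = target - pts[j]             # pivot -> target
            q[j] += (np.arctan2(to_tgt[1], to_tgt[0])
                     - np.arctan2(to_eff[1], to_eff[0]))
    return q

# Example: a 3-DOF redundant planar arm reaching a 2-D point.
q_sol = ccd_ik(target=[1.2, 0.8], q=[0.1, 0.2, 0.3], lengths=[1.0, 0.8, 0.5])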

Navigation autonome et commande référencée capteurs de robots d'assistance à la personne / Autonomous navigation and sensor-based control of personal assistance robots

Ben Said, Hela 23 March 2018 (has links)
The autonomy of a mobile agent is defined by its ability to navigate in an environment without human intervention. This capability is in strong demand for personal assistance robots, which is why our contribution focuses in particular on the instrumentation of a wheelchair for people with reduced mobility and on increasing its autonomy. The objective of this work is to design control laws that allow a robot to navigate autonomously and in real time in an unknown environment. A unified virtual perception framework is introduced, onto which the navigable space obtained from possibly multiple observations is projected.
We first designed an autonomous and safe navigation approach for environments whose structure can be assimilated to a corridor (lines on the ground, walls, grass edges, roads, ...). We solved this problem using the formalism of visual servoing. The visual features used in the control law are constructed from the virtual representation, namely the position of the vanishing point and the orientation of the center line of the corridor. To ensure safe and smooth navigation even when these parameters cannot be extracted, we designed a finite-time state observer that estimates the visual features in order to keep the robot's control law functional. This approach lets a mobile robot navigate in a corridor even in the case of sensory failure (unreliable data) and/or loss of measurement. We then extended this first contribution to deal with any type of static or dynamic cluttered environment, using the Voronoi diagram. The Generalized Voronoi Diagram (GVD), also called the skeleton, is a powerful representation of the environment since, among other reasons, it defines a set of paths at maximal distance from the obstacles. In this work, a visual servoing approach based on a skeleton extracted in real time is proposed for the safe autonomous navigation of mobile robots. The control is based on an approximation of the local GVD using the Delta Medial Axis, a fast and robust skeletonization algorithm that produces a filtered skeleton of the free space surrounding the robot using a pruning parameter that takes the robot size into account. To cope with measurement noise in the perception and with wheel slip in the control, we designed a visual servoing approach based on a prediction of a linearization of the GVD. A complete analysis was carried out to show the stability of the proposed control laws, and simulations and experimental tests validate the proposed approach.
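For the corridor-following part, the two image features named above (the vanishing-point abscissa and the orientation of the corridor center line) naturally yield a simple proportional steering law. The sketch below is a generic illustration under that assumption; the gains, the unicycle model and the feature normalization are illustrative and not those of the thesis.

import numpy as np

def corridor_steering(x_vp, theta_mid, v_forward=0.3, k1=0.8, k2=0.5):
    """Steering command from two image features:
    x_vp      : horizontal offset of the vanishing point (normalised image coords)
    theta_mid : orientation of the corridor centre line in the image (rad)
    Returns (linear velocity, angular velocity) for a unicycle-like robot."""
    omega = -k1 * x_vp - k2 * theta_mid
    return v_forward, omega

# When the features cannot be extracted (occlusion, sensory failure), an observer
# such as the one described in the abstract would supply estimated values here.
v, w = corridor_steering(x_vp=0.05, theta_mid=-0.1)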
