  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Sistema de controle servo visual de uma câmera pan-tilt com rastreamento de uma região de referência. / Visual servoing system of a pan-tilt camera using region template tracking.

Davi Yoshinobu Kikuchi 19 April 2007 (has links)
Uma câmera pan-tilt é capaz de se movimentar em torno de dois eixos de rotação (pan e tilt), permitindo que sua lente possa ser apontada para um ponto qualquer no espaço. Uma aplicação possível dessa câmera é mantê-la apontada para um determinado alvo em movimento, através de posicionamentos angulares pan e tilt adequados. Este trabalho apresenta uma técnica de controle servo visual, em que, inicialmente, as imagens capturadas pela câmera são utilizadas para determinar a posição do alvo. Em seguida, calculam-se as rotações necessárias para manter a projeção do alvo no centro da imagem, em um sistema em tempo real e malha fechada. A técnica de rastreamento visual desenvolvida se baseia em comparação de uma região de referência, utilizando a soma dos quadrados das diferenças (SSD) como critério de correspondência. Sobre essa técnica, é adicionada uma extensão baseada no princípio de estimação incremental e, em seguida, o algoritmo é mais uma vez modificado através do princípio de estimação em multiresolução. Para cada uma das três configurações, são realizados testes para comparar suas performances. O sistema é modelado através do princípio de fluxo óptico e dois controladores são apresentados para realimentar o sistema: um proporcional integral (PI) e um proporcional com estimação de perturbações externas através de um filtro de Kalman (LQG). Ambos são calculados utilizando um critério linear quadrático e os desempenhos deles também são analisados comparativamente. / A pan-tilt camera can move around two rotational axes (pan and tilt), allowing its lens to be pointed to any point in space. A possible application of the camera is to keep it pointed at a certain moving target through appropriate angular pan-tilt positioning. This work presents a visual servoing technique which first uses the images captured by the camera to determine the target position. Then the method calculates the proper rotations to keep the target projection in the image center, establishing a real-time, closed-loop system. The developed visual tracking technique is based on template region matching, using the sum of squared differences (SSD) as the similarity criterion. An extension based on the incremental estimation principle is added to the technique, and the algorithm is then modified again by the multiresolution estimation method. Experimental results allow a performance comparison between the three configurations. The system is modeled through the optical flow principle, and this work presents two controllers to close the feedback loop: a proportional integral (PI) controller and a proportional controller with external disturbance estimation by a Kalman filter (LQG). Both are determined using a linear quadratic method and their performances are also analyzed comparatively.
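The SSD matching criterion named in the abstract can be sketched in a few lines. This is a generic, naive illustration (exhaustive search over all window positions; function names are ours, not the thesis'):

```python
import numpy as np

def ssd_match(image, template):
    """Exhaustive template search: return the (row, col) of the window whose
    sum of squared differences (SSD) with the template is minimal."""
    ih, iw = image.shape
    th, tw = template.shape
    best_score, best_pos = np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            window = image[r:r + th, c:c + tw]
            score = np.sum((window - template) ** 2)
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Toy example: a bright 2x2 patch embedded in a dark image.
img = np.zeros((8, 8))
img[3:5, 4:6] = 1.0
tmpl = np.ones((2, 2))
pos, score = ssd_match(img, tmpl)
# pos == (3, 4) and score == 0.0: the template is found exactly.
```

A practical tracker searches only a small window around the previous position, which is precisely what makes the incremental and multiresolution extensions described above worthwhile.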
32

Visual Servoing Based on Learned Inverse Kinematics

Larsson, Fredrik January 2007 (has links)
Initially an analytical closed-form inverse kinematics solution for a 5 DOF robotic arm was developed and implemented. This analytical solution proved not to meet the accuracy required for the shape sorting puzzle setup used in the COSPAL (COgnitive Systems using Perception-Action Learning) project [2]. The correctness of the analytic model could be confirmed through a simulated ideal robot, and the source of the problem was deemed to be nonlinearities introduced by weak servos unable to compensate for the effect of gravity. Instead of developing a new analytical model that took the effect of gravity into account, which would become erroneous whenever the characteristics of the robotic arm changed, e.g. when picking up a heavy object, a learning approach was selected. Locally Weighted Projection Regression (LWPR) [27] is used as the learning method. It is an incremental supervised learning method and is considered a state-of-the-art method for function approximation in high-dimensional spaces. LWPR is further combined with visual servoing. This allows for an improvement in accuracy through the use of visual feedback, and the problems introduced by the weak servos can be solved. By combining the trained LWPR model with visual servoing, a high level of accuracy is reached, which is sufficient for the shape sorting puzzle setup used in COSPAL.
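LWPR itself is an involved algorithm; the core idea it builds on, locally weighted regression, can be sketched as follows. This is a simplified, non-incremental illustration with made-up 1-D data standing in for the arm's kinematic training samples, not the thesis' setup:

```python
import numpy as np

def locally_weighted_predict(X, y, x_query, bandwidth=0.1):
    """Locally weighted linear regression at a single query point: weight the
    training samples with a Gaussian kernel centred on the query, then solve
    a weighted least-squares problem for a local affine model."""
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2.0 * bandwidth ** 2))
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])        # affine features
    WX = Xb * w[:, None]
    beta, *_ = np.linalg.lstsq(WX.T @ Xb, WX.T @ y, rcond=None)
    return float(np.append(x_query, 1.0) @ beta)

# A 1-D nonlinear map as a stand-in for joint-angle -> position data.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0])
pred = locally_weighted_predict(X, y, np.array([0.2]))
# pred is close to sin(0.6), the true value at the query point.
```

LWPR extends this idea with incremental updates, local dimensionality reduction, and automatic bandwidth adaptation, which is what makes it practical in high-dimensional spaces.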
33

Vision and visual servoing for nanomanipulation and nanocharacterization using scanning electron microscope / Vision et asservissement visuel pour la nanomanipulation et la nanocaractérisation sous microscope électronique à balayage.

Marturi, Naresh 19 November 2013 (has links)
Avec les dernières avancées en matière de nanotechnologies, il est devenu possible de concevoir, avec une grande efficacité, de nouveaux dispositifs et systèmes nanométriques. Il en résulte la nécessité de développer des méthodes de pointe fiables pour la nanomanipulation et la nanocaractérisation. La détection directe par l'homme n'étant pas une option envisageable à cette échelle, les tâches sont habituellement effectuées par un opérateur humain expert à l'aide d'un microscope électronique à balayage équipé de dispositifs micro-nanorobotiques. Toutefois, en raison de l'absence de méthodes efficaces, ces tâches sont toujours difficiles et souvent fastidieuses à réaliser. Grâce à ce travail, nous montrons que ce problème peut être résolu efficacement jusqu'à une certaine mesure en utilisant les informations extraites des images. Le travail porte sur l'utilisation des images électroniques pour développer des méthodes automatiques fiables permettant d'effectuer des tâches de nanomanipulation et de nanocaractérisation précises et efficaces. En premier lieu, puisque l'imagerie électronique à balayage est affectée par les instabilités de la colonne électronique, des méthodes fonctionnant en temps réel pour surveiller la qualité des images et compenser leur distorsion dynamique ont été développées. Ensuite, des lois d'asservissement visuel ont été développées pour résoudre deux problèmes. La mise au point automatique par asservissement visuel assure une netteté constante tout au long des processus. Elle a permis d'estimer la profondeur inter-objet, habituellement très difficile à calculer dans un microscope électronique à balayage. Deux schémas d'asservissement visuel ont été développés pour le problème du nanopositionnement dans un microscope électronique. Ils sont fondés sur l'utilisation directe des intensités des pixels et sur l'information spectrale, respectivement.
Les précisions obtenues par les deux méthodes dans différentes conditions expérimentales ont été satisfaisantes. Le travail réalisé ouvre la voie à la réalisation d'applications précises et fiables telles que l'analyse topographique, le sondage de nanostructures ou l'extraction d'échantillons pour microscope électronique en transmission. / With the latest advances in nanotechnology, it became possible to design novel nanoscale devices and systems with increasing efficiency. The consequence of this fact is an increased need for reliable, cutting-edge processes for nanomanipulation and nanocharacterization. Since direct human sensing is not a feasible option at this particular scale, the tasks are usually performed by an expert human operator using a scanning electron microscope (SEM) equipped with micro-nanorobotic devices. However, due to the lack of effective processes, these tasks are always challenging and often tiresome to perform. Through this work we show that this problem can be tackled effectively, up to an extent, using microscopic vision information. The work is concerned with using SEM vision to develop reliable automated methods in order to perform accurate and efficient nanomanipulation and nanocharacterization. First, since SEM imaging is affected by the non-linearities and instabilities present in the electron column, real-time methods to monitor the imaging quality and to compensate for the time-varying distortion were developed. These images were then used in the development of visual servoing control laws. The developed visual servoing-based autofocusing method ensures a constant focus throughout the process and was used for estimating the inter-object depth, which is highly challenging to compute using a SEM. Two visual servoing schemes were developed to perform accurate nanopositioning using a nanorobotic station positioned inside the SEM. They are based on the direct use of global pixel intensities and on Fourier spectral information, respectively. The positioning accuracies achieved by both methods under different experimental conditions were satisfactory. The achieved results facilitate the development of accurate and reliable applications such as topographic analysis, nanoprobing and sample lift-out using a SEM.
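A visual servoing-based autofocus ultimately optimizes an image sharpness measure over the focus setting. A minimal sketch, with the normalized variance as the focus measure, a simulated camera, and a coarse search standing in for the thesis' servoing scheme (all names are ours):

```python
import numpy as np

def sharpness(image):
    """Normalized-variance focus measure: high when the image is sharp."""
    m = image.mean()
    return ((image - m) ** 2).mean() / (m + 1e-9)

def autofocus(capture, focus_range):
    """Coarse search: capture an image at each focus setting and keep the
    sharpest one. (A servoing scheme would instead follow the gradient.)"""
    scores = [(sharpness(capture(f)), f) for f in focus_range]
    return max(scores)[1]

# Simulated camera: contrast falls off away from the true focus f* = 5.
def fake_capture(f, true_focus=5):
    x = np.linspace(0.0, 2.0 * np.pi, 64)
    contrast = 1.0 / (1.0 + (f - true_focus) ** 2)   # sharp only near f*
    return 0.5 + 0.5 * contrast * np.sin(5 * x)[None, :] * np.ones((64, 1))

best = autofocus(fake_capture, range(11))
# best == 5: the sharpest focus setting is recovered.
```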
34

Contributions to dense visual tracking and visual servoing using robust similarity criteria / Contributions au suivi visuel et à l'asservissement visuel denses basées sur des critères de similarité robustes

Delabarre, Bertrand 23 December 2014 (has links)
Dans cette thèse, nous traitons les problèmes de suivi visuel et d'asservissement visuel, qui sont des thèmes essentiels dans le domaine de la vision par ordinateur. La plupart des techniques de suivi et d'asservissement visuel présentes dans la littérature se basent sur des primitives géométriques extraites dans les images pour estimer le mouvement présent dans la séquence. Un problème inhérent à ce type de méthode est le fait de devoir extraire et mettre en correspondance des primitives à chaque nouvelle image avant de pouvoir estimer un déplacement. Afin d'éviter cette couche algorithmique et de considérer plus d'information visuelle, de récentes approches ont proposé d'utiliser directement la totalité des informations fournies par l'image. Ces algorithmes, alors qualifiés de directs, se basent pour la plupart sur l'observation des intensités lumineuses de chaque pixel de l'image. Mais ceci a pour effet de limiter le domaine d'utilisation de ces approches, car ce critère de comparaison est très sensible aux perturbations de la scène (telles que les variations de luminosité ou les occultations). Pour régler ces problèmes nous proposons de nous baser sur des travaux récents qui ont montré que des mesures de similarité comme la somme des variances conditionnelles ou l'information mutuelle permettaient d'accroître la robustesse des approches directes dans des conditions perturbées. Nous proposons alors plusieurs algorithmes de suivi et d'asservissement visuels directs qui utilisent ces fonctions de similarité afin d'estimer le mouvement présent dans des séquences d'images et de contrôler un robot grâce aux informations fournies par une caméra. Ces différentes méthodes sont alors validées et analysées dans différentes conditions qui viennent démontrer leur efficacité. / In this document, we address the visual tracking and visual servoing problems. They are crucial topics in the domain of computer and robot vision. Most of these techniques use geometrical primitives extracted from the images in order to estimate the motion in an image sequence. But using geometrical features means having to extract and match them at each new image before performing the tracking or servoing process. In order to get rid of this algorithmic step, recent approaches have proposed to use directly the information provided by the whole image instead of extracting geometrical primitives. Most of these algorithms, referred to as direct techniques, are based on the luminance values of every pixel in the image. But this strategy limits their use, since the criterion is very sensitive to scene perturbations such as luminosity shifts or occlusions. To overcome this problem, we propose in this document to use robust similarity measures, the sum of conditional variances and the mutual information, in order to perform robust direct visual tracking and visual servoing. Several algorithms based on these criteria are then proposed in order to be robust to scene perturbations. These different methods are tested and analyzed in several setups where perturbations occur, demonstrating their efficiency.
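Of the two similarity measures, mutual information is the better known: it can be estimated from the joint intensity histogram of two images, and it is robust to monotonic intensity changes, which is why it is attractive for direct methods under lighting shifts. A minimal sketch (a generic plug-in estimator, not the thesis' exact formulation):

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information (in nats) between two grayscale images, estimated
    from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
img = rng.random((32, 32))
brighter = np.clip(img + 0.3, 0.0, 1.0)   # global illumination shift
noise = rng.random((32, 32))              # unrelated image

mi_shift = mutual_information(img, brighter)
mi_noise = mutual_information(img, noise)
# The shifted copy shares far more information with img than the noise does,
# even though a pixel-wise SSD between img and brighter would be large.
```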
35

Stereo visual servoing from straight lines / Asservissement visuel stéréo à partir de droites

Alkhalil, Fadi 24 September 2012 (has links)
L'emploi d'un retour visuel dans le but d'effectuer une commande en boucle fermée de robot s'est largement répandu et concerne de nos jours tous les domaines de la robotique. Un tel retour permet d'effectuer une comparaison entre un état désiré et l'état actuel, à l'aide de mesures visuelles. L'objectif principal de cette thèse consiste à concevoir plusieurs types de lois de commande cinématiques par vision stéréo. Ceci concerne aussi l'étude de la stabilité du système en boucle fermée et la convergence des fonctions de tâche. C'est essentiellement le découplage des lois de commandes cinématiques en rotation et en translation qui est recherché ici, selon le nombre d'indices visuels considérés. Les mesures visuelles utilisées dans cette thèse sont les lignes droites 3D. Les intérêts apportés à ce type de mesures visuelles sont la robustesse contre le bruit, et la possibilité de représenter d'autres primitives comme des couples de points ou de plans par la modélisation de Plücker. / Closing the control loop of a manipulator robot with vision feedback is widely known and nowadays concerns all areas of robotics. Such feedback makes it possible to compare a desired state with the current state using visual measurements. The main objective of this doctoral thesis is to design several types of kinematic control laws for stereo visual servoing. It strongly involves the formalism of the task function, a well-known and useful mathematical tool to express the visual error as a function of state vectors. We have investigated the decoupling between the rotational and translational velocity control laws, together with the epipolar constraint, under stereo visual feedback. That is why the visual measurements and features used in this thesis are 3D straight lines. The interest of this type of visual feature lies in its robustness against noise, and in the possibility of representing other features such as pairs of points or planes by Plücker coordinates, since a 3D straight line can be represented equally well by two points or by the intersection of two planes. This makes all the control laws designed in this thesis valid for other visual features such as points.
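The Plücker representation relied on above can be illustrated directly: the same 3D line is obtained from two points or from two intersecting planes. A minimal sketch, using one common sign convention among several (direction u, moment v = p × u):

```python
import numpy as np

def plucker_from_points(p1, p2):
    """Plücker coordinates (u, v) of the 3D line through p1 and p2:
    u is the unit direction and v = p1 x u the moment; u . v = 0 always."""
    u = (p2 - p1) / np.linalg.norm(p2 - p1)
    return u, np.cross(p1, u)

def plucker_from_planes(n1, d1, n2, d2):
    """Plücker coordinates of the intersection line of the planes
    n1 . x + d1 = 0 and n2 . x + d2 = 0 (normals assumed non-parallel)."""
    u = np.cross(n1, n2)
    u = u / np.linalg.norm(u)
    # Find one point lying on both planes, then reuse the point construction.
    A = np.vstack([n1, n2, u])
    p = np.linalg.solve(A, np.array([-d1, -d2, 0.0]))
    return u, np.cross(p, u)

# The z-axis, represented once by two points and once by two planes.
u1, v1 = plucker_from_points(np.array([0., 0., 0.]), np.array([0., 0., 2.]))
u2, v2 = plucker_from_planes(np.array([1., 0., 0.]), 0.0,
                             np.array([0., 1., 0.]), 0.0)
# Both constructions yield the same (u, v), as the abstract's argument requires.
```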
36

Ultra Low Latency Visual Servoing for High Speed Object Tracking Using Multi Focal Length Camera Arrays

McCown, Alexander Steven 01 July 2019 (has links)
In high-speed applications of visual servoing, latency from the recognition algorithm can cause significant degradation in response time. Hardware acceleration allows recognition algorithms to be applied directly during the raster scan from the image sensor, thereby removing virtually all video processing latency. This paper examines one such method, along with an analysis of design decisions made to optimize for use during high-speed airborne object tracking tests for the US military. Designing test equipment for defense use involves working around unique challenges that arise from many details being deemed classified or highly sensitive information. Designing a tracking system without knowing any exact numbers for the speeds, mass, distance or nature of the objects being tracked requires a flexible control system that can be easily tuned after installation. To further improve accuracy and allow rapid tuning to a yet undisclosed set of parameters, a machine learning powered auto-tuner is developed and implemented as a control loop optimizer.
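The idea of an auto-tuner wrapped around a control loop can be sketched without any of the undisclosed plant details. Below, a hypothetical first-order plant stands in for the real system and a simple coordinate-descent search stands in for the machine-learning tuner; every name and number here is an assumption for illustration:

```python
def step_cost(kp, ki, steps=200, dt=0.05):
    """Integrated absolute error of a PI-controlled first-order plant
    (dy/dt = -y + u) tracking a unit step, integrated by forward Euler."""
    y = integ = cost = 0.0
    for _ in range(steps):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ
        y += dt * (-y + u)
        cost += abs(e) * dt
    return cost

def autotune(kp=0.5, ki=0.1, rounds=40, delta=0.2):
    """Coordinate-descent auto-tuner: nudge each gain up and down, keeping
    any change that lowers the step-response cost."""
    for _ in range(rounds):
        for i in range(2):
            for d in (+delta, -delta):
                cand = [kp, ki]
                cand[i] = max(0.0, cand[i] + d)
                if step_cost(*cand) < step_cost(kp, ki):
                    kp, ki = cand
    return kp, ki

kp, ki = autotune()
# The tuned gains track the step strictly better than the initial guess.
```

The appeal of such a scheme in the abstract's setting is that nothing about the plant needs to be known in advance: the tuner only needs to run the loop and observe a cost.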
37

Dynamic visual servoing of robot manipulators: optimal framework with dynamic perceptibility and chaos compensation

Pérez Alepuz, Javier 01 September 2017 (has links)
This Thesis presents an optimal framework with dynamic perceptibility and chaos compensation for the control of robot manipulators. The fundamental objective of this framework is to obtain a variety of control laws for implementing dynamic visual servoing systems. In addition, this Thesis presents several contributions, such as the concept of dynamic perceptibility, used to avoid image and robot singularities; the framework itself, which implements a delayed feedback controller for chaos compensation; and the extension of the framework to space robotic systems. Most of the image-based visual servoing systems implemented to date are indirect visual controllers in which the control action consists of joint or end-effector velocities to be applied to the robot in order to achieve a given desired location with respect to an observed object. The direct control of the motors for each joint of the robot is performed by the internal controller of the robot, which translates these velocities into joint torques. This Thesis mainly addresses direct image-based visual servoing systems for trajectory tracking. In this case, in order to follow a given trajectory previously specified in the image space, the control action is defined as a vector of joint torques. The framework detailed in the Thesis allows different kinds of control laws for direct image-based visual servoing systems to be obtained. It also integrates the dynamic perceptibility concept for avoiding image and robot singularities. Furthermore, a delayed feedback controller is integrated so that the chaotic behavior of redundant systems is compensated, thus obtaining a smoother and more efficient movement of the system. As an extension of the framework, the dynamics of free-floating space systems are considered when determining the control laws, making it possible to determine trajectories for systems whose base is not attached to anything. All these steps are described throughout the Thesis. This Thesis describes in detail all the calculations needed to develop the visual servoing framework and the integration of the described optimization techniques. Simulation and experimental results are shown for each step. The controllers are developed on an FPGA for further optimization, since this architecture reduces latency and can be easily adapted to control any jointed robot by simply modifying certain modules that are hardware dependent. The architecture is modular and can accommodate changes that may occur as a consequence of the incorporation or modification of a control driver, or even changes in the configuration of the data acquisition system or its control. This implementation, however, is not a contribution of this Thesis, but a brief description of the architecture is necessary to understand the framework's potential. Two robots were used for the experimental results: a commercial industrial seven-degrees-of-freedom robot, the Mitsubishi PA10, and a three-degrees-of-freedom robot whose design and implementation were carried out in the research group where this Thesis was written.
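The Thesis defines dynamic perceptibility precisely; as a rough static analogue, a singularity measure in the spirit of Yoshikawa's manipulability can be computed from the Jacobian alone. The dynamic version additionally folds in the robot dynamics, which this sketch omits (matrices below are made-up illustrations):

```python
import numpy as np

def perceptibility(J):
    """Static singularity measure of an interaction/Jacobian matrix J:
    sqrt(det(J J^T)). It approaches zero near a singularity, so a controller
    can steer away before the matrix loses rank."""
    return float(np.sqrt(np.linalg.det(J @ J.T)))

# A well-conditioned 2x3 Jacobian versus one close to losing rank
# (its two rows are nearly linearly dependent).
J_good = np.array([[1.0, 0.0, 0.2],
                   [0.0, 1.0, 0.1]])
J_near_singular = np.array([[1.0, 0.0, 0.2],
                            [1.0, 1e-3, 0.2]])
# perceptibility(J_near_singular) is close to zero, flagging the singularity.
```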
38

Vision based navigation in a dynamic environment / Navigation référencée vision dans un environnement dynamique

Futterlieb, Marcus 10 July 2017 (has links)
Cette thèse s'intéresse au problème de la navigation autonome au long cours de robots mobiles à roues dans des environnements dynamiques. Elle s'inscrit dans le cadre du projet FUI Air-Cobot. Ce projet, porté par Akka Technologies, a vu collaborer plusieurs entreprises (Akka, Airbus, 2MORROW, Sterela) ainsi que deux laboratoires de recherche, le LAAS et Mines Albi. L'objectif est de développer un robot collaboratif (ou cobot) capable de réaliser l'inspection d'un avion avant le décollage ou en hangar. Différents aspects ont donc été abordés : le contrôle non destructif, la stratégie de navigation, le développement du système robotisé et de son instrumentation, etc. Cette thèse répond au second problème évoqué, celui de la navigation. L'environnement considéré étant aéroportuaire, il est hautement structuré et répond à des normes de déplacement très strictes (zones interdites, etc.). Il peut être encombré d'obstacles statiques (attendus ou non) et dynamiques (véhicules divers, piétons, ...) qu'il conviendra d'éviter pour garantir la sécurité des biens et des personnes. Cette thèse présente deux contributions. La première porte sur la synthèse d'un asservissement visuel permettant au robot de se déplacer sur de longues distances (autour de l'avion ou en hangar) grâce à une carte topologique et au choix de cibles dédiées. De plus, cet asservissement visuel exploite les informations fournies par toutes les caméras embarquées. La seconde contribution porte sur la sécurité et l'évitement d'obstacles. Une loi de commande basée sur les spirales équiangulaires exploite seulement les données sensorielles fournies par les lasers embarqués. Elle est donc purement référencée capteur et permet de contourner tout obstacle, qu'il soit fixe ou mobile. Il s'agit donc d'une solution générale permettant de garantir la non collision. Enfin, des résultats expérimentaux, réalisés au LAAS et sur le site d'Airbus à Blagnac, montrent l'efficacité de la stratégie développée. 
/ This thesis is directed towards the autonomous long-range navigation of wheeled robots in dynamic environments. It takes place within the Air-Cobot project. This project aims at designing a collaborative robot (cobot) able to perform the preflight inspection of an aircraft. The considered environment is then highly structured (airport runway and hangars) and may be cluttered with both static and dynamic unknown obstacles (luggage or refueling trucks, pedestrians, etc.). Our navigation framework relies on previous works and is based on switching between different control laws (go-to-goal controller, visual servoing, obstacle avoidance) depending on the context. Our contribution is twofold. First of all, we have designed a visual servoing controller able to make the robot move over a long distance thanks to a topological map and to the choice of suitable targets. In addition, multi-camera visual servoing control laws have been built to benefit from the image data provided by the different cameras embedded on the Air-Cobot system. The second contribution is related to obstacle avoidance. A control law based on equiangular spirals has been designed to guarantee non-collision. This control law is fully sensor-based and can avoid static and dynamic obstacles alike. It thus provides a general solution to deal efficiently with the collision problem. Experimental results, performed both at LAAS and in Airbus hangars and runways, show the efficiency of the developed techniques.
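The equiangular (logarithmic) spiral underlying the avoidance law is easy to generate: its radius grows exponentially with the polar angle, and the angle between the radius and the tangent stays constant. A minimal sketch of path generation only, not the control law itself (parameter values are illustrative):

```python
import numpy as np

def equiangular_spiral(r0, alpha, thetas):
    """Points of an equiangular (logarithmic) spiral
    r = r0 * exp(theta / tan(alpha)). Its defining property is that the angle
    between the radius and the tangent is the constant alpha, which is what
    makes it convenient for specifying how a robot winds around an obstacle."""
    r = r0 * np.exp(thetas / np.tan(alpha))
    return np.column_stack([r * np.cos(thetas), r * np.sin(thetas)])

thetas = np.linspace(0.0, np.pi, 50)
path = equiangular_spiral(r0=1.0, alpha=np.deg2rad(80), thetas=thetas)
radii = np.linalg.norm(path, axis=1)
# With alpha < 90 deg the radius grows monotonically: the path spirals
# outward, i.e. away from an obstacle centred at the origin.
```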
39

Robot visual servoing with iterative learning control

Jiang, Ping, Unbehauen, R. January 2002 (has links)
This paper presents an iterative learning scheme for vision-guided robot trajectory tracking. First, a stability criterion for designing an iterative learning controller is proposed. It can be used for a system with initial resetting error. By using the criterion, one can convert the design problem into finding a positive definite discrete matrix kernel, and a more general form of learning control can be obtained. Then, a three-dimensional (3-D) trajectory tracking system with a single static camera to realize robot movement imitation is presented based on this criterion.
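The P-type update rule at the heart of iterative learning control, u_{k+1}(t) = u_k(t) + L e_k(t), can be demonstrated on a toy repetitive plant (a made-up stand-in, not the paper's vision-guided robot):

```python
import numpy as np

def ilc_trials(plant, reference, trials=30, gain=0.5):
    """P-type iterative learning control: after each trial, correct the input
    with the tracking error, u_{k+1}(t) = u_k(t) + gain * e_k(t).
    Returns the maximum tracking error of each trial."""
    u = np.zeros_like(reference)
    errors = []
    for _ in range(trials):
        y = plant(u)
        e = reference - y
        errors.append(np.abs(e).max())
        u = u + gain * e
    return errors

# Toy repetitive plant: a static gain with a constant disturbance.
plant = lambda u: 0.8 * u - 0.1
reference = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
errors = ilc_trials(plant, reference)
# Each trial contracts the error by |1 - gain * 0.8| = 0.6, so it vanishes.
```

The contraction factor |1 - gain * 0.8| < 1 is exactly the kind of condition a stability criterion for ILC has to guarantee; the paper's criterion plays that role for dynamic systems with initial resetting error.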
40

Multistage Localization for High Precision Mobile Manipulation Tasks

Mobley, Christopher James 03 March 2017 (has links)
This paper presents a multistage localization approach for an autonomous industrial mobile manipulator (AIMM). This approach allows tasks with an operational scope outside the range of the robot's manipulator to be completed without having to recalibrate the position of the end-effector each time the robot's mobile base moves to another position. This is achieved by localizing the AIMM within its area of operation (AO) using adaptive Monte Carlo localization (AMCL), which relies on the fused odometry and sensor messages published by the robot, as well as a 2-D map of the AO, which is generated using an optimization-based smoothing simultaneous localization and mapping (SLAM) technique. The robot navigates to a predefined start location in the map, incorporating obstacle avoidance through the use of a technique called trajectory rollout. Once there, the robot uses its RGB-D sensor to localize an augmented reality (AR) tag in the map frame. Once localized, the identity and the 3-D position and orientation, collectively known as pose, of the tag are used to generate a list of initial feature points and their locations based on a priori knowledge. After the end-effector moves to the approximate location of a feature point provided by the AR tag localization, the feature point's location, as well as the end-effector's pose, are refined to within a user-specified tolerance through the use of a control loop, which utilizes images from a calibrated machine vision camera and a laser pointer, simulating stereo vision, to localize the feature point in 3-D space using computer vision techniques and basic geometry. This approach was implemented on two different ROS-enabled robots, the Clearpath Robotics Husky and the Fetch Robotics Fetch, in order to show the utility of the multistage localization approach in executing two tasks which are prevalent in both manufacturing and construction: drilling and sealant application. The proposed approach was able to achieve an average accuracy of ±1 mm in these operations, verifying its efficacy for tasks whose operational scope exceeds the range of the AIMM's manipulator, as well as its robustness for general applications in manufacturing. / Master of Science
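The Monte Carlo localization step used in the first stage can be sketched in 1-D: predict with the motion command, weight each particle by the measurement likelihood, then resample. This is a generic illustration; AMCL additionally adapts the particle count, and the real robot works in 2-D with laser scans (all values below are made up):

```python
import numpy as np

def mcl_step(particles, motion, measurement, landmark, noise=0.2, rng=None):
    """One Monte Carlo localization step in 1-D: diffuse the particles with
    the commanded motion, weight each by how well it explains a range
    measurement to a known landmark, then resample in proportion."""
    if rng is None:
        rng = np.random.default_rng(0)
    particles = particles + motion + rng.normal(0.0, 0.05, len(particles))
    expected = np.abs(landmark - particles)              # predicted range
    w = np.exp(-0.5 * ((measurement - expected) / noise) ** 2)
    w = w / w.sum()
    return particles[rng.choice(len(particles), size=len(particles), p=w)]

rng = np.random.default_rng(42)
landmark, true_pos = 5.0, 0.0
particles = rng.uniform(-3.0, 3.0, 500)
for _ in range(10):                                      # robot drives +0.3/step
    true_pos += 0.3
    particles = mcl_step(particles, 0.3, landmark - true_pos, landmark, rng=rng)
estimate = particles.mean()
# The particle cloud collapses around the true position (3.0 after 10 steps).
```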
