  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Vision and visual servoing for nanomanipulation and nanocharacterization using scanning electron microscope / Vision et asservissement visuel pour la nanomanipulation et la nanocaractérisation sous microscope électronique à balayage.

Marturi, Naresh 19 November 2013 (has links)
Avec les dernières avancées en matière de nanotechnologies, il est devenu possible de concevoir, avec une grande efficacité, de nouveaux dispositifs et systèmes nanométriques. Il en résulte la nécessité de développer des méthodes de pointe fiables pour la nanomanipulation et la nanocaractérisation. La détection directe par l'homme n'étant pas une option envisageable à cette échelle, les tâches sont habituellement effectuées par un opérateur humain expert à l'aide d'un microscope électronique à balayage équipé de dispositifs micro-nanorobotiques. Toutefois, en raison de l'absence de méthodes efficaces, ces tâches sont toujours difficiles et souvent fastidieuses à réaliser. Grâce à ce travail, nous montrons que ce problème peut être résolu efficacement jusqu'à une certaine mesure en utilisant les informations extraites des images. Le travail porte sur l'utilisation des images électroniques pour développer des méthodes automatiques fiables permettant d'effectuer des tâches de nanomanipulation et de nanocaractérisation précises et efficaces. En premier lieu, puisque l'imagerie électronique à balayage est affectée par les instabilités de la colonne électronique, des méthodes fonctionnant en temps réel pour surveiller la qualité des images et compenser leur distorsion dynamique ont été développées. Ensuite, des lois d'asservissement visuel ont été développées pour résoudre deux problèmes. La mise au point automatique par asservissement visuel assure une netteté constante tout au long des processus. Elle a permis d'estimer la profondeur inter-objet, habituellement très difficile à calculer dans un microscope électronique à balayage. Deux schémas d'asservissement visuel ont été développés pour le problème du nanopositionnement dans un microscope électronique. Ils sont fondés sur l'utilisation directe des intensités des pixels et sur l'information spectrale, respectivement.
Les précisions obtenues par les deux méthodes dans différentes conditions expérimentales ont été satisfaisantes. Le travail réalisé ouvre la voie à la réalisation d'applications précises et fiables telles que l'analyse topographique, le sondage de nanostructures ou l'extraction d'échantillons pour microscope électronique en transmission. / With the latest advances in nanotechnology, it has become possible to design novel nanoscale devices and systems with increasing efficiency. A consequence of this is a growing need for reliable, cutting-edge processes for nanomanipulation and nanocharacterization. Since direct human sensing is not a feasible option at this particular scale, the tasks are usually performed by an expert human operator using a scanning electron microscope (SEM) equipped with micro-nanorobotic devices. However, due to the lack of effective processes, these tasks remain challenging and often tiresome to perform. Through this work we show that this problem can be tackled effectively, up to an extent, using microscopic vision information. The work concerns using SEM vision to develop reliable automated methods for accurate and efficient nanomanipulation and nanocharacterization. Since SEM imaging is affected by the non-linearities and instabilities present in the electron column, real-time methods to monitor the imaging quality and to compensate for the time-varying distortion were developed. These images were then used in the development of visual servoing control laws. The developed visual servoing-based autofocusing method ensures a constant focus throughout the process and was used for estimating the inter-object depth, which is highly challenging to compute using a SEM. Two visual servoing schemes were developed to perform accurate nanopositioning using a nanorobotic station positioned inside the SEM.
They are based on the direct use of global pixel intensities and Fourier spectral information, respectively. The positioning accuracies achieved by both methods under different experimental conditions were satisfactory. The achieved results pave the way for accurate and reliable applications such as topographic analysis, nanoprobing and sample lift-out using the SEM.
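The intensity-based nanopositioning scheme summarized in this abstract can be sketched in a few lines: the visual feature is the image itself, and the control is a damped pseudo-inverse law. This is a hypothetical, generic illustration, not the author's implementation; the interaction matrix `L` relating intensities to motion is assumed given.

```python
import numpy as np

def photometric_velocity(I, I_star, L, gain=0.5):
    """One step of direct (photometric) visual servoing: the feature
    vector is the stacked pixel intensities, the error is I - I*."""
    e = (I - I_star).ravel().astype(float)   # photometric error
    return -gain * np.linalg.pinv(L) @ e     # 6-dof velocity twist

# Toy example with a random interaction matrix (N pixels x 6 dof).
rng = np.random.default_rng(0)
I_star = rng.random((8, 8))                  # desired image
L = rng.standard_normal((64, 6))             # assumed interaction matrix
v = photometric_velocity(I_star + 0.01, I_star, L)
```

When the current image matches the desired one the error, and hence the commanded velocity, vanishes; that is the fixed point of the servo loop.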
32

Contributions to dense visual tracking and visual servoing using robust similarity criteria / Contributions au suivi visuel et à l'asservissement visuel denses basées sur des critères de similarité robustes

Delabarre, Bertrand 23 December 2014 (has links)
Dans cette thèse, nous traitons les problèmes de suivi visuel et d'asservissement visuel, qui sont des thèmes essentiels dans le domaine de la vision par ordinateur. La plupart des techniques de suivi et d'asservissement visuel présentes dans la littérature se basent sur des primitives géométriques extraites des images pour estimer le mouvement présent dans la séquence. Un problème inhérent à ce type de méthode est le fait de devoir extraire et mettre en correspondance des primitives à chaque nouvelle image avant de pouvoir estimer un déplacement. Afin d'éviter cette couche algorithmique et de considérer plus d'information visuelle, de récentes approches ont proposé d'utiliser directement la totalité des informations fournies par l'image. Ces algorithmes, alors qualifiés de directs, se basent pour la plupart sur l'observation des intensités lumineuses de chaque pixel de l'image. Mais ceci a pour effet de limiter le domaine d'utilisation de ces approches, car ce critère de comparaison est très sensible aux perturbations de la scène (telles que les variations de luminosité ou les occultations). Pour régler ces problèmes, nous nous basons sur des travaux récents qui ont montré que des mesures de similarité comme la somme des variances conditionnelles ou l'information mutuelle permettent d'accroître la robustesse des approches directes dans des conditions perturbées. Nous proposons alors plusieurs algorithmes de suivi et d'asservissement visuel directs qui utilisent ces fonctions de similarité afin d'estimer le mouvement présent dans des séquences d'images et de contrôler un robot grâce aux informations fournies par une caméra. Ces différentes méthodes sont validées et analysées dans différentes conditions qui démontrent leur efficacité. / In this document, we address the visual tracking and visual servoing problems, which are crucial topics in the domain of computer and robot vision.
Most of these techniques use geometrical primitives extracted from the images in order to estimate motion from an image sequence. But using geometrical features means having to extract and match them in each new image before performing the tracking or servoing process. In order to get rid of this algorithmic step, recent approaches have proposed to directly use the information provided by the whole image instead of extracting geometrical primitives. Most of these algorithms, referred to as direct techniques, are based on the luminance values of every pixel in the image. But this strategy limits their use, since the criterion is very sensitive to scene perturbations such as luminosity shifts or occlusions. To overcome this problem, we propose in this document to use robust similarity measures, the sum of conditional variance and the mutual information, in order to perform robust direct visual tracking and visual servoing. Several algorithms based on these criteria are then proposed in order to be robust to scene perturbations. These different methods are tested and analyzed in several setups where perturbations occur, which demonstrates their efficiency.
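Of the two robust similarity measures named above, mutual information is the easier to sketch: it is computed from the joint histogram of two images, so it rewards statistical dependence rather than equal intensities. A minimal illustrative version (bin count and image sizes are arbitrary choices, not the thesis's settings):

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information of two images from their joint histogram;
    unlike a plain intensity difference it tolerates global
    illumination changes, which is why robust direct methods use it."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of b
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

img = np.random.default_rng(1).random((32, 32))
mi_same = mutual_information(img, img)                # perfectly aligned
mi_unrelated = mutual_information(
    img, np.random.default_rng(2).random((32, 32)))   # independent images
```

In a tracker or servo loop this score is maximized over the warp or camera pose instead of minimizing a pixel-wise difference.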
33

Stereo visual servoing from straight lines / Asservissement visuel stéréo à partir de droites

Alkhalil, Fadi 24 September 2012 (has links)
L'emploi d'un retour visuel dans le but d'effectuer une commande en boucle fermée de robot s'est largement répandu et concerne de nos jours tous les domaines de la robotique. Un tel retour permet d'effectuer une comparaison entre un état désiré et l'état actuel, à l'aide de mesures visuelles. L'objectif principal de cette thèse consiste à concevoir plusieurs types de lois de commande cinématiques par vision stéréo. Ceci concerne aussi l'étude de la stabilité du système en boucle fermée et la convergence des fonctions de tâche. C'est essentiellement le découplage des lois de commande cinématiques en rotation et en translation qui est recherché ici, selon le nombre d'indices visuels considérés. Les mesures visuelles utilisées dans cette thèse sont les lignes droites 3D. Les intérêts apportés à ce type de mesures visuelles sont la robustesse contre le bruit, et la possibilité de représenter d'autres primitives comme des couples de points ou de plans par la modélisation de Plücker. / Closing the control loop of a manipulator robot with vision feedback is widely known and nowadays concerns all areas of robotics. Such feedback makes it possible to compare a desired state with the current state, using visual measurements. The main objective of this doctoral thesis is to design several types of kinematic control laws for stereo visual servoing.
It strongly involves the formalism of the task function, a well-known and useful mathematical tool for expressing the visual error as a function of state vectors. We have investigated the decoupling between the rotational and translational velocity control laws, together with the epipolar constraint, using stereo visual feedback. That is why the visual measurements and features used in this thesis are 3D straight lines. The interest of this type of visual feature lies in its robustness against noise, and in the possibility of representing other primitives, such as pairs of points or planes, by Plücker coordinates, since a 3D straight line can equally be represented by two points or by the intersection of two planes. This makes all the control laws designed in this thesis valid for other visual features such as points.
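The Plücker representation mentioned above can be sketched directly: a 3D line through two points is encoded by its unit direction and its moment about the origin. This is a generic textbook construction, not code from the thesis:

```python
import numpy as np

def plucker_from_points(p1, p2):
    """Plücker coordinates (u, m) of the 3D line through p1 and p2:
    u is the unit direction and m = p1 x u its moment about the origin."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    u = (p2 - p1) / np.linalg.norm(p2 - p1)
    m = np.cross(p1, u)                       # independent of the chosen point
    return u, m

# Line through (0,0,1) and (1,0,1): direction along x, unit height.
u, m = plucker_from_points([0.0, 0.0, 1.0], [1.0, 0.0, 1.0])
```

The orthogonality u·m = 0 is the Plücker constraint, and the same pair (u, m) results from any two distinct points on the line, which is why point pairs and plane intersections map to one common representation.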
34

Ultra Low Latency Visual Servoing for High Speed Object Tracking Using Multi Focal Length Camera Arrays

McCown, Alexander Steven 01 July 2019 (has links)
In high speed applications of visual servoing, latency from the recognition algorithm can cause significant degradation in response time. Hardware acceleration allows recognition algorithms to be applied directly during the raster scan from the image sensor, thereby removing virtually all video processing latency. This paper examines one such method, along with an analysis of design decisions made to optimize for use during high speed airborne object tracking tests for the US military. Designing test equipment for defense use involves working around unique challenges that arise from many details being deemed classified or highly sensitive information. Designing a tracking system without knowing any exact numbers for the speeds, mass, distance or nature of the objects being tracked requires a flexible control system that can be easily tuned after installation. To further improve accuracy and allow rapid tuning to a yet undisclosed set of parameters, a machine learning powered auto-tuner is developed and implemented as a control loop optimizer.
35

Dynamic visual servoing of robot manipulators: optimal framework with dynamic perceptibility and chaos compensation

Pérez Alepuz, Javier 01 September 2017 (has links)
This Thesis presents an optimal framework with dynamic perceptibility and chaos compensation for the control of robot manipulators. The fundamental objective of this framework is to obtain a variety of control laws for implementing dynamic visual servoing systems. In addition, this Thesis presents several contributions, such as the concept of dynamic perceptibility, which is used to avoid image and robot singularities; the framework itself, which implements a delayed feedback controller for chaos compensation; and the extension of the framework to space robotic systems. Most of the image-based visual servoing systems implemented to date are indirect visual controllers, in which the control action consists of joint or end-effector velocities to be applied to the robot in order to reach a given desired location with respect to an observed object. The direct control of the motors for each joint of the robot is performed by the internal controller of the robot, which translates these velocities into joint torques. This Thesis mainly addresses direct image-based visual servoing systems for trajectory tracking. In this case, in order to follow a given trajectory previously specified in the image space, the control action is defined as a vector of joint torques. The framework detailed in the Thesis allows different kinds of control laws for direct image-based visual servoing systems to be obtained. It also integrates the dynamic perceptibility concept for avoiding image and robot singularities. Furthermore, a delayed feedback controller is integrated so that the chaotic behavior of redundant systems is compensated, yielding a smoother and more efficient movement of the system. As an extension of the framework, the dynamics of free-floating space systems is considered when determining the control laws, making it possible to determine trajectories for systems whose base is not attached to anything. All these steps are described throughout the Thesis.
This Thesis describes in detail all the calculations needed to develop the visual servoing framework and to integrate the described optimization techniques. Simulation and experimental results are shown for each step. The controllers were developed on an FPGA for further optimization, since this architecture reduces latency and can easily be adapted to control any jointed robot by simply modifying certain hardware-dependent modules. The architecture is modular and can accommodate changes that may occur as a consequence of incorporating or modifying a control driver, or even changes in the configuration of the data acquisition system or its control. This implementation is not itself a contribution of this Thesis, but a brief description of the architecture is necessary to understand the framework's potential. Two robots were used for the experimental results: a commercial industrial seven-degree-of-freedom robot, the Mitsubishi PA10, and a three-degree-of-freedom robot whose design and implementation were developed in the research group where the Thesis was written.
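The delayed feedback controller this abstract refers to has, in its simplest discrete form, the Pyragas-style law u_n = K(x_{n-1} - x_n), which vanishes on the orbit being stabilized. A minimal sketch on a chaotic logistic map, purely illustrative and unrelated to the robot dynamics of the Thesis (the map, gain K and initial states are made-up values):

```python
def delayed_feedback(r=3.8, K=-0.7, x0=0.72, x1=0.75, steps=400):
    """Pyragas-style delayed feedback u_n = K*(x[n-1] - x[n]) applied to
    the otherwise chaotic logistic map x -> r*x*(1-x); the control term
    vanishes at the fixed point x* = 1 - 1/r, so the stabilized orbit
    is an orbit of the uncontrolled system."""
    xs = [x0, x1]
    for _ in range(steps):
        u = K * (xs[-2] - xs[-1])                 # delayed feedback term
        xs.append(r * xs[-1] * (1 - xs[-1]) + u)  # controlled map
    return xs

xs = delayed_feedback()
```

With this K the linearized closed loop is stable, so a trajectory started near the fixed point settles onto it instead of wandering chaotically, which is the smoothing effect sought for redundant manipulators.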
36

Vision based navigation in a dynamic environment / Navigation référencée vision dans un environnement dynamique

Futterlieb, Marcus 10 July 2017 (has links)
Cette thèse s'intéresse au problème de la navigation autonome au long cours de robots mobiles à roues dans des environnements dynamiques. Elle s'inscrit dans le cadre du projet FUI Air-Cobot. Ce projet, porté par Akka Technologies, a vu collaborer plusieurs entreprises (Akka, Airbus, 2MORROW, Sterela) ainsi que deux laboratoires de recherche, le LAAS et Mines Albi. L'objectif est de développer un robot collaboratif (ou cobot) capable de réaliser l'inspection d'un avion avant le décollage ou en hangar. Différents aspects ont donc été abordés : le contrôle non destructif, la stratégie de navigation, le développement du système robotisé et de son instrumentation, etc. Cette thèse répond au second problème évoqué, celui de la navigation. L'environnement considéré étant aéroportuaire, il est hautement structuré et répond à des normes de déplacement très strictes (zones interdites, etc.). Il peut être encombré d'obstacles statiques (attendus ou non) et dynamiques (véhicules divers, piétons, ...) qu'il conviendra d'éviter pour garantir la sécurité des biens et des personnes. Cette thèse présente deux contributions. La première porte sur la synthèse d'un asservissement visuel permettant au robot de se déplacer sur de longues distances (autour de l'avion ou en hangar) grâce à une carte topologique et au choix de cibles dédiées. De plus, cet asservissement visuel exploite les informations fournies par toutes les caméras embarquées. La seconde contribution porte sur la sécurité et l'évitement d'obstacles. Une loi de commande basée sur les spirales équiangulaires exploite seulement les données sensorielles fournies par les lasers embarqués. Elle est donc purement référencée capteur et permet de contourner tout obstacle, qu'il soit fixe ou mobile. Il s'agit donc d'une solution générale permettant de garantir la non collision. Enfin, des résultats expérimentaux, réalisés au LAAS et sur le site d'Airbus à Blagnac, montrent l'efficacité de la stratégie développée. 
/ This thesis is directed towards the autonomous long range navigation of wheeled robots in dynamic environments. It takes place within the Air-Cobot project. This project aims at designing a collaborative robot (cobot) able to perform the preflight inspection of an aircraft. The considered environment is then highly structured (airport runway and hangars) and may be cluttered with both static and dynamic unknown obstacles (luggage or refueling trucks, pedestrians, etc.). Our navigation framework relies on previous works and is based on switching between different control laws (go-to-goal controller, visual servoing, obstacle avoidance) depending on the context. Our contribution is twofold. First of all, we have designed a visual servoing controller able to make the robot move over a long distance thanks to a topological map and to the choice of suitable targets. In addition, multi-camera visual servoing control laws have been built to benefit from the image data provided by the different cameras embedded on the Air-Cobot system. The second contribution is related to obstacle avoidance. A control law based on equiangular spirals has been designed to guarantee non-collision. It is fully sensor-based and avoids static and dynamic obstacles alike, thus providing a general solution to deal efficiently with the collision problem. Experimental results, performed both at LAAS and in Airbus hangars and runways, show the efficiency of the developed techniques.
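The avoidance law above exploits the defining property of an equiangular (logarithmic) spiral: the angle α between the radius to the obstacle and the path tangent is constant everywhere. A hypothetical sketch of such a spiral, not the controller from the thesis (parameters are arbitrary):

```python
import numpy as np

def equiangular_spiral(r0, alpha, thetas):
    """Points of the equiangular spiral r = r0 * exp(theta / tan(alpha)).
    The angle alpha between radius and tangent is the same everywhere,
    which yields a predictable, smoothly receding avoidance path."""
    r = r0 * np.exp(np.asarray(thetas) / np.tan(alpha))
    return np.c_[r * np.cos(thetas), r * np.sin(thetas)]

# Half a turn around an obstacle at the origin, starting 1 m away.
pts = equiangular_spiral(1.0, np.deg2rad(80.0), np.linspace(0.0, np.pi, 50))
radii = np.linalg.norm(pts, axis=1)
```

Because α stays fixed, a sensor-based controller only needs the range and bearing to the nearest obstacle point to stay on such a curve, with no map or obstacle model required.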
37

Robot visual servoing with iterative learning control

Jiang, Ping, Unbehauen, R. January 2002 (has links)
This paper presents an iterative learning scheme for vision-guided robot trajectory tracking. First, a stability criterion for designing iterative learning controllers is proposed; it can be used for a system with initial resetting error. By using the criterion, one can convert the design problem into finding a positive definite discrete matrix kernel, and a more general form of learning control can be obtained. Then, a three-dimensional (3-D) trajectory tracking system with a single static camera to realize robot movement imitation is presented based on this criterion.
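The idea behind iterative learning control can be illustrated with the classic P-type update u_{k+1}(t) = u_k(t) + γ e_k(t) on a toy scalar plant. This is a deliberately simplified stand-in for the paper's kernel-based law, chosen only to show the trial-to-trial convergence mechanism:

```python
import numpy as np

def ilc(plant_gain, reference, learn_gain, trials):
    """P-type iterative learning control on a static scalar plant
    y = plant_gain * u. Converges when |1 - learn_gain*plant_gain| < 1:
    the tracking error shrinks by that factor on every repeated trial."""
    u = np.zeros_like(reference)      # first trial: no feedforward yet
    e = reference.copy()
    for _ in range(trials):
        e = reference - plant_gain * u   # error over the whole trajectory
        u = u + learn_gain * e           # between-trial learning update
    return u, e

ref = np.sin(np.linspace(0.0, 2.0 * np.pi, 25))
u, e = ilc(plant_gain=2.0, reference=ref, learn_gain=0.4, trials=30)
```

Here the error contracts by |1 - 0.4·2| = 0.2 per trial, so after 30 repetitions the learned feedforward reproduces the reference almost exactly, without any model inversion.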
38

A universal iterative learning stabilizer for a class of MIMO systems.

Jiang, Ping, Chen, H., Bamforth, C.A. January 2006 (has links)
Design of iterative learning control (ILC) often requires some prior knowledge of a system's control matrix. In some applications, such as uncalibrated visual servoing, this knowledge may be unavailable, so a stable learning control cannot always be achieved. In this paper, a universal ILC is proposed for a class of multi-input multi-output (MIMO) uncertain nonlinear systems with no prior knowledge of the system control gain matrix. It consists of a gain matrix selector from the unmixing set and a learned compensator in the form of a positive definite discrete matrix kernel, corresponding to rough gain matrix probing and refined uncertainty compensation, respectively. Asymptotic convergence for trajectory tracking within a finite time interval is achieved through repetitive tracking. Simulations and experiments on uncalibrated visual servoing are carried out to verify the validity of the proposed control method.
39

A Hybrid Tracking Approach for Autonomous Docking in Self-Reconfigurable Robotic Modules

Sohal, Shubhdildeep Singh 02 July 2019 (has links)
Active docking in modular robotic systems has received a lot of interest recently as it allows small versatile robotic systems to coalesce and achieve the structural benefits of larger robotic systems. This feature enables reconfigurable modular robotic systems to bridge the gap between small agile systems and larger robotic systems. The proposed self-reconfigurable mobile robot design exhibits dual mobility using a tracked drive for longitudinal locomotion and wheeled drive for lateral locomotion. The two degrees of freedom (DOF) docking interface referred to as GHEFT (Genderless, High strength, Efficient, Fail-Safe, high misalignment Tolerant) allows for an efficient docking while tolerating misalignments in 6-DOF. In addition, motion along the vertical axis is also achieved via an additional translational DOF, allowing for toggling between tracked and wheeled locomotion modes by lowering and raising the wheeled assembly. This thesis also presents a visual-based onboard Hybrid Target Tracking algorithm to detect and follow a target robot leading to autonomous docking between the modules. As a result of this proposed approach, the tracked features are then used to bring the robots in sufficient proximity for the docking procedure using Image Based Visual Servoing (IBVS) control. Experimental results to validate the robustness of the proposed tracking method, as well as the reliability of the autonomous docking procedure, are also presented in this thesis. / Master of Science / Active docking in modular robotic systems has received a lot of interest recently as it allows small versatile robotic systems to coalesce and achieve the structural benefits of larger robotic systems. This feature enables reconfigurable modular robotic systems to bridge the gap between small agile systems and larger robotic systems. Such robots can prove useful in environments that are either too dangerous or inaccessible to humans. 
Therefore, in this research, several specific hardware and software development aspects related to self-reconfigurable mobile robots are proposed. In terms of hardware development, a robotic module was designed that is symmetrically invertible and exhibits dual mobility, using a tracked drive for longitudinal locomotion and a wheeled drive for lateral locomotion. Such interchangeable mobility is important when the robot operates in a constrained workspace. The mobile robot also has an integrated two-degrees-of-freedom (DOF) docking mechanism referred to as GHEFT (Genderless, High strength, Efficient, Fail-Safe, high misalignment Tolerant). The docking interface allows for efficient docking while tolerating misalignments in 6-DOF. In addition, motion along the vertical axis is performed via an additional translational DOF, allowing for lowering and raising the wheeled assembly. The robot is equipped with sensors to provide positional feedback of the joints relative to the target robot. In terms of software development, a vision-based onboard Hybrid Target Tracking algorithm for high-speed, consistent tracking of colored targets is also presented in this work. The proposed technique is used to detect and follow a colored target attached to the target robot, leading to autonomous docking between the modules using Image Based Visual Servoing (IBVS). Experimental results to validate the robustness of the proposed tracking approach, as well as the reliability of the autonomous docking procedure, are also presented in the thesis. The thesis concludes with a discussion of future research in both structured and unstructured terrains.
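The IBVS step used for the final docking approach can be sketched with the classic interaction matrix of a normalized image point. This is a generic textbook sketch under made-up values (depth Z, point coordinates), not the implementation from this thesis:

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """2x6 interaction matrix of a normalized image point (x, y) at
    depth Z, mapping the camera twist (vx,vy,vz,wx,wy,wz) to the
    point's image-plane velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(points, desired, Z, gain=0.5):
    """Stack one 2x6 block per tracked point, then v = -gain * L^+ e."""
    L = np.vstack([point_interaction_matrix(x, y, Z) for x, y in points])
    e = (np.asarray(points, float) - np.asarray(desired, float)).ravel()
    return -gain * np.linalg.pinv(L) @ e

pts = [(0.1, 0.0), (0.0, 0.1), (-0.1, -0.1)]       # current features
goal = [(0.0, 0.0), (0.1, 0.1), (-0.1, 0.0)]       # desired features
v = ibvs_velocity(pts, goal, Z=1.0)
v_zero = ibvs_velocity(goal, goal, Z=1.0)          # at the goal: no motion
```

Driving the feature error to zero in the image this way brings the chaser module into the alignment envelope that the GHEFT interface can then tolerate mechanically.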
40

Hardware Testbed for Relative Navigation of Unmanned Vehicles Using Visual Servoing

Monda, Mark J. 12 June 2006 (has links)
Future generations of unmanned spacecraft, aircraft, ground, and submersible vehicles will require precise relative navigation capabilities to accomplish missions such as formation operations and autonomous rendezvous and docking. The development of relative navigation sensing and control techniques is quite challenging, in part because of the difficulty of accurately simulating the physical relative navigation problems in which the control systems are designed to operate. A hardware testbed that can simulate the complex relative motion of many different relative navigation problems is being developed. This testbed simulates near-planar relative motion by using software to prescribe the motion of an unmanned ground vehicle and provides the attached sensor packages with realistic relative motion. This testbed is designed to operate over a wide variety of conditions in both indoor and outdoor environments, at short and long ranges, and its modular design allows it to easily test many different sensing and control technologies. / Master of Science
