31 |
Stereo visual servoing from straight lines. Alkhalil, Fadi, 24 September 2012.
Closing the control loop of a manipulator robot with visual feedback is a widely used technique that nowadays concerns all areas of robotics. Such feedback allows a comparison between a desired state and the current state using visual measurements. The main objective of this thesis is to design several types of kinematic control laws for stereo visual servoing, together with a study of the closed-loop stability and of the convergence of the task functions. The work strongly involves the task-function formalism, a well-known and useful mathematical tool for expressing the visual error as a function of the state vectors. We investigate the decoupling between the rotational and translational velocity control laws, together with the epipolar constraint, under stereo visual feedback; the achievable decoupling depends on the number of visual features considered. The visual measurements used throughout the thesis are 3D straight lines. The interest of this type of feature lies in its robustness against noise and in the ability of Plücker coordinates to represent other primitives: since a 3D straight line can equally be defined by two points or by the intersection of two planes, the control laws designed in this thesis remain valid for other visual features such as pairs of points or planes.
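As a brief illustration of why Plücker coordinates unify these primitives (a minimal sketch with hypothetical helper names, not code from the thesis), the same (direction, moment) pair can be built either from two points on the line or from two planes intersecting in it:

```python
import numpy as np

def plucker_from_points(p1, p2):
    """Plücker coordinates (u, h) of the 3D line through points p1 and p2:
    u is the unit direction, h = p x u is the moment about the origin."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    u /= np.linalg.norm(u)
    return u, np.cross(p1, u)

def plucker_from_planes(n1, d1, n2, d2):
    """Same line expressed as the intersection of planes n_i . x + d_i = 0."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    u = np.cross(n1, n2)                 # direction lies in both planes
    norm = np.linalg.norm(u)
    if norm < 1e-9:
        raise ValueError("planes are parallel: no unique line")
    return u / norm, (d1 * n2 - d2 * n1) / norm   # (direction, moment)

# The line x = 1, z = 0 via two points ...
u1, h1 = plucker_from_points([1, 0, 0], [1, 1, 0])
# ... and via the planes z = 0 and x - 1 = 0 yields the same (u, h).
u2, h2 = plucker_from_planes([0, 0, 1], 0.0, [1, 0, 0], -1.0)
```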
|
32 |
Ultra Low Latency Visual Servoing for High Speed Object Tracking Using Multi Focal Length Camera Arrays. McCown, Alexander Steven, 01 July 2019.
In high speed applications of visual servoing, latency from the recognition algorithm can cause significant degradation in response time. Hardware acceleration allows recognition algorithms to be applied directly during the raster scan from the image sensor, thereby removing virtually all video processing latency. This paper examines one such method, along with an analysis of the design decisions made to optimize it for high speed airborne object tracking tests for the US military. Designing test equipment for defense use involves working around unique challenges that arise from many details being deemed classified or highly sensitive information. Designing a tracking system without knowing exact numbers for the speed, mass, distance or nature of the objects being tracked requires a flexible control system that can be easily tuned after installation. To further improve accuracy and allow rapid tuning to an as-yet-undisclosed set of parameters, a machine-learning-powered auto-tuner is developed and implemented as a control loop optimizer.
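To give a flavor of raster-scan-time recognition (a minimal software sketch under our own assumptions; the thesis implements this in hardware), a bright-target centroid can be accumulated while pixels stream in, so a tracking command is ready the moment the frame ends, with no frame buffer and no post-capture processing pass:

```python
def streaming_centroid(pixel_stream, width, threshold=200):
    """Accumulate the centroid of above-threshold pixels during the raster
    scan itself. pixel_stream yields intensities in raster (row-major) order."""
    sum_x = sum_y = count = 0
    for i, intensity in enumerate(pixel_stream):
        if intensity >= threshold:
            sum_x += i % width              # column of this pixel
            sum_y += i // width             # row of this pixel
            count += 1
    if count == 0:
        return None                         # no target seen this frame
    return (sum_x / count, sum_y / count)   # ready right after the last pixel
```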
|
33 |
Vision based navigation in a dynamic environment. Futterlieb, Marcus, 10 July 2017.
This thesis addresses the autonomous long-range navigation of wheeled mobile robots in dynamic environments. It was carried out within the FUI Air-Cobot project, led by Akka Technologies, which brought together several companies (Akka, Airbus, 2MORROW, Sterela) and two research laboratories (LAAS and Mines Albi). The project aims at designing a collaborative robot (cobot) able to perform the preflight inspection of an aircraft, either on the runway or in a hangar. The considered airport environment is highly structured, subject to strict traffic rules (forbidden zones, etc.), and may be cluttered with both static obstacles (expected or not) and dynamic ones (various vehicles, pedestrians, etc.), which must be avoided to guarantee the safety of people and equipment. Our navigation framework relies on previous works and switches between different control laws (go-to-goal controller, visual servoing, obstacle avoidance) depending on the context. Our contribution is twofold. First, we have designed a visual servoing controller able to make the robot travel long distances (around the aircraft or in the hangar) thanks to a topological map and the choice of suitable targets; multi-camera visual servoing control laws were built to exploit the image data provided by all the cameras embedded on the Air-Cobot system. The second contribution is related to safety and obstacle avoidance: a control law based on equiangular spirals, using only the data provided by the embedded lasers, is fully sensor-based and allows the robot to avoid static and dynamic obstacles alike, thus providing a general solution that guarantees non-collision. Experimental results, performed both at LAAS and on the Airbus site at Blagnac (hangars and runways), show the efficiency of the developed strategy.
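As a rough illustration of the equiangular-spiral idea (a generic sketch under our own assumptions, not the thesis's control law), an avoidance path keeps a constant angle alpha between the path tangent and the obstacle-to-robot radius, which yields the logarithmic spiral r(theta) = r0 * exp(cot(alpha) * (theta - theta0)):

```python
import numpy as np

def equiangular_spiral_waypoints(obstacle_xy, robot_xy, alpha_deg=75.0, steps=50):
    """Waypoints along a logarithmic (equiangular) spiral around an obstacle.
    alpha_deg is the constant angle between the path tangent and the radius:
    alpha = 90 deg gives a circular orbit; alpha < 90 deg makes the radius
    grow with theta, spiraling safely away from the obstacle."""
    b = 1.0 / np.tan(np.radians(alpha_deg))          # cot(alpha): spiral pitch
    dx = robot_xy[0] - obstacle_xy[0]
    dy = robot_xy[1] - obstacle_xy[1]
    r0, theta0 = np.hypot(dx, dy), np.arctan2(dy, dx)
    thetas = theta0 + np.linspace(0.0, np.pi / 2, steps)   # quarter turn around
    rs = r0 * np.exp(b * (thetas - theta0))
    return np.stack([obstacle_xy[0] + rs * np.cos(thetas),
                     obstacle_xy[1] + rs * np.sin(thetas)], axis=1)
```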
|
34 |
Robot visual servoing with iterative learning control. Jiang, Ping; Unbehauen, R., January 2002.
This paper presents an iterative learning scheme for vision-guided robot trajectory tracking. First, a stability criterion for designing iterative learning controllers is proposed; it can be used for systems with initial resetting error. Using this criterion, the design problem is converted into finding a positive definite discrete matrix kernel, and a more general form of learning control is obtained. Then, based on this criterion, a three-dimensional (3-D) trajectory tracking system with a single static camera is presented to realize robot movement imitation.
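For readers unfamiliar with iterative learning control, a minimal P-type update (an illustrative sketch of the general idea, not the paper's criterion or kernel) refines a feedforward command from one trial to the next using the previous trial's tracking error:

```python
import numpy as np

def ilc_p_type(u_prev, e_prev, gain=0.5):
    """One P-type ILC trial update: u_{k+1}(t) = u_k(t) + L * e_k(t+1).
    u_prev, e_prev: previous trial's input and tracking error over the
    finite trial duration; gain is the learning gain L."""
    u_next = u_prev.copy()
    u_next[:-1] += gain * e_prev[1:]     # error at t+1 corrects input at t
    return u_next

# Toy repetitive plant: first-order discrete system tracking a ramp.
T, a, b = 50, 0.9, 0.5
ref = np.linspace(0.0, 1.0, T)
u = np.zeros(T)
for trial in range(20):                  # same task repeated trial after trial
    y = np.zeros(T)
    for t in range(T - 1):
        y[t + 1] = a * y[t] + b * u[t]
    u = ilc_p_type(u, ref - y)
print("final max tracking error:", np.max(np.abs(ref - y)))
```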
|
35 |
A Hybrid Tracking Approach for Autonomous Docking in Self-Reconfigurable Robotic Modules. Sohal, Shubhdildeep Singh, 02 July 2019.
Active docking in modular robotic systems has received a lot of interest recently, as it allows small, versatile robotic systems to coalesce and achieve the structural benefits of larger robotic systems. This feature enables reconfigurable modular robotic systems to bridge the gap between small agile systems and larger robotic systems. The proposed self-reconfigurable mobile robot design exhibits dual mobility, using a tracked drive for longitudinal locomotion and a wheeled drive for lateral locomotion. The two-degree-of-freedom (DOF) docking interface, referred to as GHEFT (Genderless, High strength, Efficient, Fail-Safe, high misalignment Tolerant), allows for efficient docking while tolerating misalignments in 6 DOF. In addition, motion along the vertical axis is achieved via an additional translational DOF, allowing toggling between tracked and wheeled locomotion modes by lowering and raising the wheeled assembly. This thesis also presents a vision-based onboard Hybrid Target Tracking algorithm to detect and follow a target robot, leading to autonomous docking between the modules. The tracked features are then used to bring the robots into sufficient proximity for the docking procedure using Image Based Visual Servoing (IBVS) control. Experimental results validating the robustness of the proposed tracking method, as well as the reliability of the autonomous docking procedure, are also presented. / Master of Science / Active docking in modular robotic systems has received a lot of interest recently, as it allows small, versatile robotic systems to coalesce and achieve the structural benefits of larger robotic systems. This feature enables reconfigurable modular robotic systems to bridge the gap between small agile systems and larger robotic systems. Such robots can prove useful in environments that are either too dangerous or inaccessible to humans. Therefore, in this research, several specific hardware and software development aspects related to self-reconfigurable mobile robots are proposed. In terms of hardware development, a robotic module was designed that is symmetrically invertible and exhibits dual mobility, using a tracked drive for longitudinal locomotion and a wheeled drive for lateral locomotion. Such interchangeable mobility is important when the robot operates in a constrained workspace. The mobile robot also integrates a two-degree-of-freedom (DOF) docking mechanism referred to as GHEFT (Genderless, High strength, Efficient, Fail-Safe, high misalignment Tolerant). The docking interface allows for efficient docking while tolerating misalignments in 6 DOF. In addition, motion along the vertical axis is performed via an additional translational DOF, allowing the wheeled assembly to be lowered and raised. The robot is equipped with sensors to provide positional feedback of the joints relative to the target robot. In terms of software development, a vision-based onboard Hybrid Target Tracking algorithm for high-speed, consistent tracking of colored targets is also presented. The proposed technique is used to detect and follow a colored target attached to the target robot, leading to autonomous docking between the modules using Image Based Visual Servoing (IBVS). Experimental results validating the robustness of the proposed tracking approach, as well as the reliability of the autonomous docking procedure, are also presented. The thesis concludes with a discussion of future research in both structured and unstructured terrains.
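As background on the IBVS control used for the final docking approach (a generic textbook sketch, not the thesis's implementation), image-point features drive the camera twist through the pseudoinverse of the interaction matrix, v = -lambda * L^+ (s - s*):

```python
import numpy as np

def ibvs_velocity(points, points_des, depths, lam=0.5):
    """Classic IBVS: camera twist v = -lambda * pinv(L) @ (s - s*).
    points, points_des: Nx2 normalized image coordinates (x, y);
    depths: estimated depth Z of each feature point."""
    L_rows, err = [], []
    for (x, y), (xd, yd), Z in zip(points, points_des, depths):
        # Interaction matrix of an image point (Chaumette & Hutchinson, 2006)
        L_rows.append(np.array([[-1/Z,    0, x/Z, x*y,     -(1 + x*x),  y],
                                [   0, -1/Z, y/Z, 1 + y*y, -x*y,       -x]]))
        err.extend([x - xd, y - yd])
    L = np.vstack(L_rows)                          # (2N x 6) stacked matrix
    return -lam * np.linalg.pinv(L) @ np.array(err)  # (vx,vy,vz,wx,wy,wz)
```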
|
36 |
Hardware Testbed for Relative Navigation of Unmanned Vehicles Using Visual Servoing. Monda, Mark J., 12 June 2006.
Future generations of unmanned spacecraft, aircraft, ground, and submersible vehicles will require precise relative navigation capabilities to accomplish missions such as formation operations and autonomous rendezvous and docking. The development of relative navigation sensing and control techniques is quite challenging, in part because of the difficulty of accurately simulating the physical relative navigation problems in which the control systems are designed to operate. A hardware testbed that can simulate the complex relative motion of many different relative navigation problems is being developed. This testbed simulates near-planar relative motion by using software to prescribe the motion of an unmanned ground vehicle and provides the attached sensor packages with realistic relative motion. This testbed is designed to operate over a wide variety of conditions in both indoor and outdoor environments, at short and long ranges, and its modular design allows it to easily test many different sensing and control technologies. / Master of Science
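A minimal sketch of the prescription idea (hypothetical interfaces, not the testbed's actual software): the ground vehicle's commanded planar pose at each instant is the projection of a simulated chaser-target relative trajectory, so the attached sensor package experiences realistic relative motion:

```python
import numpy as np

def ugv_pose_from_relative_motion(t, chaser_traj, target_traj):
    """Map a simulated 3-D relative trajectory to a near-planar UGV pose.
    chaser_traj / target_traj: callables t -> (x, y, z) in a common frame.
    Returns (x, y, heading): the planar relative pose the UGV reproduces."""
    rel = np.asarray(chaser_traj(t)) - np.asarray(target_traj(t))
    x, y = rel[0], rel[1]                  # drop z: near-planar assumption
    heading = np.arctan2(-y, -x)           # keep the sensor aimed at the target
    return x, y, heading
```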
|
37 |
A universal iterative learning stabilizer for a class of MIMO systems. Jiang, Ping; Chen, H.; Bamforth, C.A., January 2006.
Design of iterative learning control (ILC) often requires some prior knowledge about a system's control matrix. In some applications, such as uncalibrated visual servoing, this kind of knowledge may be unavailable, so a stable learning control cannot always be achieved. In this paper, a universal ILC is proposed for a class of multi-input multi-output (MIMO) uncertain nonlinear systems with no prior knowledge about the system control gain matrix. It consists of a gain matrix selector drawn from the unmixing set and a learned compensator in the form of a positive definite discrete matrix kernel, corresponding to rough gain-matrix probing and refined uncertainty compensation, respectively. Asymptotic convergence for trajectory tracking within a finite time interval is achieved through repetitive tracking. Simulations and experiments on uncalibrated visual servoing are carried out to verify the validity of the proposed control method.
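A toy scalar illustration of the gain-probing idea (our own simplified sketch, not the paper's algorithm): candidate gain directions from the unmixing set are tried in turn, switching whenever the trial error grows instead of shrinking:

```python
import numpy as np

# For a scalar system the "unmixing set" reduces to the two gain signs.
UNMIXING_SET = [np.array([[1.0]]), np.array([[-1.0]])]

def probe_and_learn(plant, ref, trials=30, lam=0.4):
    """Keep a candidate gain while the trial error shrinks; switch to the
    next candidate when it grows (i.e. the probed direction is unstable)."""
    T = len(ref)
    u, idx, prev_err = np.zeros(T), 0, np.inf
    for _ in range(trials):
        e = ref - plant(u)
        err = np.max(np.abs(e))
        if err > prev_err:                          # diverging: wrong direction
            idx = (idx + 1) % len(UNMIXING_SET)
        prev_err = err
        K = UNMIXING_SET[idx]
        u[:-1] += lam * (K @ e[None, 1:]).ravel()   # P-type ILC with probed gain
    return u

def plant(u, a=0.8, b=-0.6):    # input gain sign is unknown to the controller
    y = np.zeros_like(u)
    for t in range(len(u) - 1):
        y[t + 1] = a * y[t] + b * u[t]
    return y

u_final = probe_and_learn(plant, np.linspace(0.0, 1.0, 40))
```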
|
38 |
Multistage Localization for High Precision Mobile Manipulation Tasks. Mobley, Christopher James, 03 March 2017.
This paper presents a multistage localization approach for an autonomous industrial mobile manipulator (AIMM). This approach allows tasks with an operational scope outside the range of the robot's manipulator to be completed without having to recalibrate the position of the end-effector each time the robot's mobile base moves to another position. This is achieved by localizing the AIMM within its area of operation (AO) using adaptive Monte Carlo localization (AMCL), which relies on the fused odometry and sensor messages published by the robot, as well as a 2-D map of the AO generated using an optimization-based smoothing simultaneous localization and mapping (SLAM) technique. The robot navigates to a predefined start location in the map, incorporating obstacle avoidance through a technique called trajectory rollout. Once there, the robot uses its RGB-D sensor to localize an augmented reality (AR) tag in the map frame. Once localized, the identity and the 3-D position and orientation (collectively, the pose) of the tag are used to generate a list of initial feature points and their locations based on a priori knowledge. After the end-effector moves to the approximate location of a feature point provided by the AR tag localization, the feature point's location, as well as the end-effector's pose, are refined to within a user-specified tolerance through a control loop that uses images from a calibrated machine vision camera and a laser pointer, simulating stereo vision, to localize the feature point in 3-D space using computer vision techniques and basic geometry. This approach was implemented on two different ROS-enabled robots, the Clearpath Robotics Husky and the Fetch Robotics Fetch, to show the utility of the multistage localization approach in executing two tasks prevalent in both manufacturing and construction: drilling and sealant application. The proposed approach achieved an average accuracy of ±1 mm in these operations, verifying its efficacy for tasks with a larger operational scope than the range of the AIMM's manipulator and its robustness for general manufacturing applications. / Master of Science / This paper presents a multistage localization approach for an autonomous industrial mobile manipulator (AIMM). This approach allows tasks with an operational scope outside the range of the robot's manipulator to be completed without having to recalibrate the position of the end-effector each time the robot's mobile base moves to another position. This is achieved by first localizing the AIMM within its area of operation (AO) using a probabilistic state estimator. The robot navigates to a predefined start location in the map, incorporating obstacle avoidance through a technique called trajectory rollout, which samples the space of feasible controls, generates trajectories through forward simulation, and chooses the simulated trajectory that minimizes a cost function. Once there, the robot uses a depth camera to localize an augmented reality (AR) tag in the map frame. Once localized, the identity and the 3-D position and orientation (collectively, the pose) of the tag are used to generate a list of initial feature points and their locations based on a priori knowledge of the operation associated with the AR tag's identity. After the end-effector moves to the approximate location of a feature point provided by the AR tag localization, the feature point's location, as well as the end-effector's pose, are refined to within a user-specified tolerance through a control loop. This approach was implemented on two different ROS-enabled robots, the Clearpath Robotics Husky and the Fetch Robotics Fetch, to show the utility of the multistage localization approach in executing two tasks prevalent in both manufacturing and construction: drilling and sealant application. The proposed approach achieved an average accuracy of ±1 mm in these operations, verifying its efficacy for tasks with a larger operational scope than the range of the AIMM's manipulator and its robustness for general manufacturing applications.
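To illustrate trajectory rollout (a generic sketch of the technique, not the exact ROS base_local_planner implementation), candidate velocity commands are forward-simulated over a short horizon and scored, and the cheapest collision-free rollout is executed:

```python
import numpy as np

def trajectory_rollout(pose, goal, occupied, v_samples, w_samples,
                       dt=0.1, horizon=20):
    """Pick (v, w) by forward-simulating each sampled command.
    pose: (x, y, theta); occupied(x, y) -> bool collision test;
    cost = final distance to goal, infinite if the rollout collides."""
    best_cmd, best_cost = (0.0, 0.0), np.inf
    for v in v_samples:
        for w in w_samples:
            x, y, th = pose
            cost = np.inf
            for _ in range(horizon):            # unicycle forward simulation
                x += v * np.cos(th) * dt
                y += v * np.sin(th) * dt
                th += w * dt
                if occupied(x, y):
                    break                       # collision: discard rollout
            else:
                cost = np.hypot(goal[0] - x, goal[1] - y)
            if cost < best_cost:
                best_cost, best_cmd = cost, (v, w)
    return best_cmd
```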
|
39 |
Utilisation of photometric moments in visual servoing. Bakthavatchalam, Manikandan, 17 March 2015.
This thesis is concerned with visual servoing, a feedback control technique for controlling camera-equipped actuated systems such as robots. For visual servoing, it is essential to synthesize visual information from the camera image in the form of visual features and to establish the relationship between their variations and the spatial motion of the camera. The earliest visual features depend on the extraction and visual tracking of geometric primitives such as points and straight lines in the image, and it has been shown that visual tracking and image processing procedures are a bottleneck to the expansion of visual servoing methods. That is why the image intensity distribution has also been used directly as a visual feature. Visual features based on image moments, finally, made it possible to design decoupled control laws, but they are restricted by the availability of well-segmented regions or a discrete set of points in the scene. This work proposes capturing the image intensities not directly, but in the form of moments computed over the whole image plane; these global features are termed photometric moments. Theoretical developments are made to derive the analytical model of the interaction matrix of the photometric moments. Photometric moments enable visual servoing on complex scenes without visual tracking or image matching procedures, as long as there is no severe violation of the zero border assumption (ZBA). A practical issue encountered in such dense visual servoing methods is the appearance and disappearance of portions of the scene during the servoing. Such unmodelled effects strongly violate the ZBA, can disturb the control and, in the worst case, result in complete failure to converge. To handle this important practical problem, an improved modelling scheme for the moments that allows the inclusion of spatial weights is proposed. Spatial weighting functions with a specific structure are then exploited so that an analytical model of the interaction matrix can be obtained as simple functions of the newly formulated moments. A further contribution addresses the problem of simultaneous control of the rotational motions around the image axes. The approach designs the visual feature so that the visual servoing is optimal with respect to specific criteria, and a few selection criteria based on the interaction matrix are proposed. This contribution opens interesting possibilities and finds immediate application in the selection of visual features in image-moments-based visual servoing.
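As a concrete reading of the definition (our own sketch; the weighting function is illustrative, not the thesis's), the photometric moment m_pq is the intensity-weighted image moment computed over the whole image plane, optionally multiplied by a spatial weight that fades near the borders:

```python
import numpy as np

def photometric_moment(image, p, q, weight=None):
    """m_pq = sum_x sum_y w(x, y) * x^p * y^q * I(x, y), computed over the
    whole image: no segmentation or feature tracking is required."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    W = np.ones((h, w)) if weight is None else weight
    return np.sum(W * xs**p * ys**q * image)

def border_fade_weight(h, w, margin=0.1):
    """Illustrative smooth spatial weight that fades to zero at the image
    borders, softening violations of the zero border assumption (ZBA)."""
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    wx = np.clip(np.minimum(xs, w - 1 - xs) / (margin * w), 0.0, 1.0)
    wy = np.clip(np.minimum(ys, h - 1 - ys) / (margin * h), 0.0, 1.0)
    return wx * wy
```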
|
40 |
Development of Integration Algorithms for Vision/Force Robot Control with Automatic Decision System. Bdiwi, Mohamad, 10 June 2014.
In advanced robot applications, the challenge today is that the robot should perform different successive subtasks to achieve one or more complicated tasks, similar to a human. Hence, these kinds of tasks require combining different kinds of sensors to obtain full information about the work environment. However, from the point of view of control, more sensors mean more possibilities for the structure of the control system. As shown previously, vision and force sensors are the most common external sensors in robot systems. As a result, numerous control algorithms and different structures for vision/force robot control, e.g. shared and traded control, can be found in the scientific literature. The shortcomings in the integration of vision/force robot control can be summarized as follows:
• How to define which subspaces should be vision, position or force controlled?
• When the controller should switch from one control mode to another one?
• How to ensure that the visual information can be reliably used?
• How to define the most appropriate vision/force control structure?
In many previous works, a single vision/force control structure, pre-defined by the programmer, is used while performing a specified task. Moreover, if the task is modified or changed, it is very complicated for the user, especially an inexperienced one, to describe the task and to define the most appropriate vision/force robot control. Furthermore, vision and force sensors are often used only as simple feedback (e.g. the vision sensor usually serves as a position estimator) or merely for obstacle avoidance. Accordingly, much useful sensor information that could help the robot perform the task autonomously is wasted.
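As a sketch of the kind of structure such a decision system can switch between (a generic hybrid vision/force illustration under our own assumptions, not the algorithm developed in this thesis), a diagonal selection matrix S assigns each Cartesian axis to either the force-based or the vision-based controller:

```python
import numpy as np

def hybrid_vision_force_command(v_vision, v_force, force_controlled_axes):
    """Combine two 6-DOF twist commands with a selection matrix S:
    v = (I - S) @ v_vision + S @ v_force,
    where S selects the axes handled by the force controller (e.g. the
    axis normal to a contact surface) and vision servoes the rest."""
    S = np.diag([1.0 if ax in force_controlled_axes else 0.0
                 for ax in range(6)])
    return (np.eye(6) - S) @ v_vision + S @ v_force

# Example: force controls translation along z (axis 2); vision does the rest.
v = hybrid_vision_force_command(
    v_vision=np.array([0.02, 0.01, 0.0, 0.0, 0.0, 0.1]),
    v_force=np.array([0.0, 0.0, -0.005, 0.0, 0.0, 0.0]),
    force_controlled_axes={2})
```

An automatic decision system, as proposed here, would then amount to choosing S (and the overall shared/traded structure) online from the task description and sensor context rather than hard-coding it.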
In our opinion, these shortcomings in defining the most appropriate vision/force robot control, together with the weak utilization of all the information the sensors could provide, impose important limits that prevent the robot from being versatile, autonomous, dependable and user-friendly. The scope of this thesis is therefore to help increase autonomy, versatility, dependability and user-friendliness in those areas of robotics that require vision/force integration. More concretely:
1. Autonomy: in terms of an automatic decision system that defines the most appropriate vision/force control modes for different kinds of tasks and chooses the best structure of vision/force control depending on the surrounding environment and a priori knowledge.
2. Versatility: by preparing relevant scenarios for different situations in which both visual servoing and force control are necessary and indispensable.
3. Dependability: in the sense that the robot should rely on its own sensors rather than on reprogramming and human intervention; in other words, the robot system should use all the available information that the vision and force sensors can provide, not only about the target object but also for feature extraction over the whole scene.
4. User-friendliness: by designing a high-level description of the task, the object and the sensor configuration that is suitable even for inexperienced users.
If these properties are achieved, the proposed robot system can:
• Perform different successive and complex tasks.
• Grasp/contact and track imprecisely placed objects with different poses.
• Automatically decide the most appropriate combination of vision/force feedback for every task and react immediately, from one control cycle to the next, to changes caused by unforeseen events.
• Benefit from all the advantages of different vision/force control structures.
• Benefit from all the information provided by the sensors.
• Reduce human intervention and reprogramming during the execution of the task.
• Facilitate the task description and the entry of a priori knowledge, even for inexperienced users.
|