1 |
Autonomous Convoy Study of Unmanned Ground Vehicles using Visual Snakes
Southward II, Charles Michael, 17 May 2007 (has links)
Many applications for unmanned vehicles involve autonomous interaction between two or more craft, and therefore relative navigation is a key issue to explore. Several high-fidelity hardware simulations exist to produce accurate dynamics; however, these simulations are restricted by the size, weight, and power needed to operate them. The use of a small Unmanned Ground Vehicle (UGV) for the relative navigation problem is investigated. The UGV has the ability to traverse large ranges over uneven terrain and in varying lighting conditions, which has interesting applications to relative navigation.
The basic problem of one vehicle following another is researched and a possible solution explored. Statistical pressure snakes are used to gather relative position data at a specified frequency. A cubic spline is then fit to the relative position data using a least-squares algorithm; the spline represents the path the lead vehicle has already traversed. Controlling the UGV onto this relative path using sliding mode control allows the follow vehicle to avoid the same stationary obstacles the lead vehicle avoided, without any other sensor information. The algorithm was run on the UGV hardware with good results: it was able to follow the lead vehicle around a curved course with only centimeter-level position errors. This provides a firm foundation on which to build a more versatile relative motion platform.
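The spline-fitting step described above lends itself to a brief illustration. The following Python sketch is hypothetical rather than the thesis code: it fits a single cubic polynomial segment (standing in for the spline) to noisy relative-position samples by least squares and computes the cross-track error a path-following controller such as the sliding mode law could act on. All function names and the time parameterization are assumptions.

```python
import numpy as np

# Hypothetical sketch: fit a cubic path to logged relative-position samples
# (lead-vehicle positions expressed in the follower frame) by least squares,
# then query the cross-track error used by a path-following controller.

def fit_cubic_path(t, xy):
    """Least-squares cubic fit of x(t) and y(t); xy is an (N, 2) array."""
    cx = np.polyfit(t, xy[:, 0], 3)   # cubic coefficients for x(t)
    cy = np.polyfit(t, xy[:, 1], 3)   # cubic coefficients for y(t)
    return cx, cy

def cross_track_error(cx, cy, t_query, p_follower):
    """Signed distance from the follower position to the fitted path point."""
    path_pt = np.array([np.polyval(cx, t_query), np.polyval(cy, t_query)])
    tangent = np.array([np.polyval(np.polyder(cx), t_query),
                        np.polyval(np.polyder(cy), t_query)])
    tangent /= np.linalg.norm(tangent)
    normal = np.array([-tangent[1], tangent[0]])      # left-hand normal
    return float(np.dot(p_follower - path_pt, normal))

# Example with synthetic, invented data
t = np.linspace(0.0, 5.0, 50)
xy = np.column_stack([t, 0.1 * t**2]) + 0.01 * np.random.randn(50, 2)
cx, cy = fit_cubic_path(t, xy)
print(cross_track_error(cx, cy, 2.5, np.array([2.4, 0.7])))
```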
|
2 |
An Automated Micromanipulation System for 3D Parallel Microassembly
Chu, Henry Kar Hang, 05 January 2012 (has links)
The introduction of microassembly technologies has opened up new avenues for the fabrication of sophisticated, three-dimensional Microelectromechanical System (MEMS) devices. This thesis presents the development of a robotic micromanipulation system and its controller algorithms for conventional pick-and-place microassembly processes. This work incorporated the approach of parallel assembly and automation to improve overall productivity and reduce operating costs of the process. A parallel set of three microgrippers was designed and implemented for the grasping and assembly of three microparts simultaneously. The complete microassembly process was automated through a vision-based control approach. Visual images from two vision systems were used for precise position evaluation and alignment.
Precise alignment between the micropart and the microgripper is critical to the microassembly process. Because of the limited field of view of the vision systems, the micropart could drift out of the microscope's field of view during the re-orientation process. In this work, a tracking algorithm was developed to keep the micropart within the camera view. The unwanted translational motions of the micropart were estimated, and the algorithm then continuously manipulated and repositioned the micropart for vision-based assembly.
In addition, the limited fields of view of the vision systems are not sufficient to concurrently monitor the assembly operation for all three individual grippers. This work presents a strategy to use visual information from only one gripper set for all the necessary alignment and positioning processes. Through proper system calibration and the alignment algorithms developed, grippers that were not visually monitored could also perform the assembly operations.
When visual images from a single camera are used for 3D positioning, the depth dimension lost between the 3D workspace and the 2D image results in errors in position evaluation. Hence, a novel approach is presented that utilizes the image reflection of the micropart for online evaluation of the Jacobian matrix. The relative 3D position between the slot and the micropart was evaluated with high precision.
The developed algorithms were integrated onto the micromanipulation system. Automated parallel microassemblies were conducted successfully.
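As a rough illustration of the vision-based positioning loop, the sketch below uses a generic finite-difference estimate of the image Jacobian obtained from small exploratory stage motions, not the reflection-based evaluation proposed in the thesis, followed by a resolved-rate correction of the image-space error. All names, signatures, and the use of callables for the camera and stage are assumptions.

```python
import numpy as np

# Hypothetical sketch of the vision-based positioning idea: estimate an image
# Jacobian online from small exploratory stage motions, then command stage
# velocities that drive the image-plane error between the micropart and the
# slot toward zero. Function and variable names are illustrative only.

def estimate_jacobian(measure_features, move_stage, q, delta=1e-3):
    """Finite-difference estimate of d(features)/d(stage position)."""
    f0 = measure_features()
    J = np.zeros((f0.size, q.size))
    for i in range(q.size):
        dq = np.zeros_like(q)
        dq[i] = delta
        move_stage(dq)                      # small test motion on axis i
        J[:, i] = (measure_features() - f0) / delta
        move_stage(-dq)                     # undo the test motion
    return J

def servo_step(J, f_current, f_target, gain=0.5):
    """One resolved-rate step: stage correction from image-space error."""
    error = f_target - f_current
    return gain * np.linalg.pinv(J) @ error
```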
|
3 |
High-Performance Visual Closed-Loop Robot Control
Corke, Peter Ian, January 1994 (has links) (PDF)
This thesis addresses the use of monocular eye-in-hand machine vision to control the position of a robot manipulator for dynamically challenging tasks. Such tasks are defined as those where the required robot motion approaches or exceeds the performance limits stated by the manufacturer.
Computer vision systems have been used for robot control for over two decades, but have rarely been used for high-performance visual closed-loop control. This has largely been due to technological limitations in image processing, but since the mid-1980s advances have made it feasible to apply computer vision techniques at a sufficiently high rate to guide a robot or close a feedback control loop. Visual servoing is the use of computer vision for closed-loop control of a robot manipulator, and has the potential to solve a number of problems that currently limit the potential of robots in industry and advanced applications.
This thesis introduces a distinction between visual kinematic and visual dynamic control. The former is well addressed in the literature and is concerned with how the manipulator should move in response to perceived visual features. The latter is concerned with dynamic effects due to the manipulator and machine vision sensor which limit performance and must be explicitly addressed in order to achieve high-performance control. This is the principal focus of the thesis.
In order to achieve high performance it is necessary to have accurate models of the system to be controlled (the robot) and the sensor (the camera and vision system). Despite the long history of research in these areas individually, and combined in visual servoing, it is apparent that many issues have not been addressed in sufficient depth, and that much of the relevant information is spread through a very diverse literature. Another contribution of this thesis is to draw together this disparate information and present it in a systematic and consistent manner. This thesis also has a strong theme of experimentation. Experiments are used to develop realistic models which are used for controller synthesis, and these controllers are then verified experimentally.
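The visual dynamic control issue raised in the abstract, namely that the latency and sample rate of the vision sensor limit achievable closed-loop performance, can be illustrated with a small simulation. The sketch below is a toy discrete-time model under invented parameters, not anything from the thesis: it closes a proportional position loop through a measurement delayed by a fixed number of vision frames and shows how added delay degrades, and eventually destabilizes, the response.

```python
import numpy as np

# Toy discrete-time model: a proportional visual position loop where the
# feedback measurement is delayed by `delay_frames` vision frames. It only
# illustrates the "visual dynamic control" issue; parameters are made up.

def simulate(delay_frames, gain=0.4, steps=60):
    x, target = 0.0, 1.0
    history = [x] * (delay_frames + 1)   # buffer of past measurements
    errors = []
    for _ in range(steps):
        measured = history[0]            # the camera reports an old position
        u = gain * (target - measured)   # proportional control on stale error
        x = x + u                        # simple integrator plant
        history = history[1:] + [x]
        errors.append(abs(target - x))
    return errors

# Final tracking error grows as the vision delay increases.
for d in (0, 2, 5):
    print(d, round(simulate(d)[-1], 4))
```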
|
4 |
Contribution to Specification and Calibration of Multi-View Cameras (original title: Contribution à la spécification et à la calibration des caméras relief)
Ali-Bey, Mohamed, 12 December 2011 (links)
The work presented in this thesis is part of the ANR-Cam-Relief and CPER-CREATIS projects, supported by the French National Research Agency (ANR), the Champagne-Ardenne region and the FEDER, and was carried out in collaboration with the company 3DTV-Solutions and two groups of the CReSTIC laboratory (AUTO and SIC). The objective of this project is, among others, to design, by analogy with current consumer 2D systems, 3D shooting systems able to produce high-quality 3D images for display on autostereoscopic (glasses-free) 3D screens. Our interest focused particularly on shooting systems with a parallel, decentered configuration. The research reported in this thesis is motivated by the inability of static configurations of these shooting systems to correctly capture real dynamic scenes for correct autostereoscopic rendering. To overcome this drawback, an adaptation scheme for the geometrical configuration of the shooting system is proposed. To determine which parameters should be affected by this adaptation, the effect of holding each parameter constant on the rendering quality is studied. The repercussions of the dynamic and mechanical constraints on the 3D rendering are then examined. The positioning accuracy of the structural parameters is addressed through two proposed methods for assessing rendering quality, used to determine the positioning-error thresholds of the structural parameters of the shooting system.
Finally, the problem of calibration is discussed: an approach based on the direct linear transformation (DLT) method is proposed, and perspectives are outlined for the automatic control of these shooting systems by classical control or by visual servoing.
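Since the abstract names the direct linear transformation as the basis of the calibration method, a compact illustration may help. The sketch below is a textbook DLT solver, not the thesis implementation, and the synthetic points are invented: it estimates the 3x4 camera projection matrix from 3D-to-2D correspondences by solving the homogeneous system with an SVD.

```python
import numpy as np

# Generic direct linear transformation (DLT): estimate the 3x4 projection
# matrix P from n >= 6 correspondences between 3D points X and 2D pixels x.
# This is a textbook formulation, not the calibration code from the thesis.

def dlt(X, x):
    """X: (n, 3) world points, x: (n, 2) image points. Returns P (3x4)."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Xh = [Xw, Yw, Zw, 1.0]
        A.append([0, 0, 0, 0] + [-c for c in Xh] + [v * c for c in Xh])
        A.append(Xh + [0, 0, 0, 0] + [-u * c for c in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)          # null-space vector, reshaped
    return P / P[-1, -1]              # fix the overall sign and scale

# Synthetic check: project random points with a known P and recover it
# (up to the overall scale of the projection matrix).
P_true = np.hstack([np.eye(3), np.array([[0.1], [-0.2], [2.0]])])
Xw = np.random.rand(8, 3)
xh = (P_true @ np.c_[Xw, np.ones(8)].T).T
x = xh[:, :2] / xh[:, 2:]
print(np.round(dlt(Xw, x), 3))
```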
|
5 |
Visual Servoing Based on Learned Inverse Kinematics
Larsson, Fredrik, January 2007 (links)
Initially, an analytical closed-form inverse kinematics solution for a 5 DOF robotic arm was developed and implemented. This analytical solution proved not to meet the accuracy required for the shape-sorting puzzle setup used in the COSPAL (COgnitive Systems using Perception-Action Learning) project [2]. The correctness of the analytic model could be confirmed through a simulated ideal robot, and the source of the problem was deemed to be nonlinearities introduced by weak servos unable to compensate for the effect of gravity. Instead of developing a new analytical model that took the effect of gravity into account, which would become erroneous whenever the characteristics of the robotic arm changed, e.g. when picking up a heavy object, a learning approach was selected.
As the learning method, Locally Weighted Projection Regression (LWPR) [27] is used. It is an incremental supervised learning method and is considered a state-of-the-art method for function approximation in high-dimensional spaces. LWPR is further combined with visual servoing. This allows for an improvement in accuracy through the use of visual feedback, and the problems introduced by the weak servos can be solved. By combining the trained LWPR model with visual servoing, a high level of accuracy is reached, which is sufficient for the shape-sorting puzzle setup used in COSPAL.
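A minimal sketch of the overall idea, a coarse learned inverse-kinematics model refined by visual feedback, is given below. A plain nearest-neighbour lookup stands in for LWPR (LWPR itself is not reimplemented here), and the toy 2-link planar arm, gains, and names are all assumptions.

```python
import numpy as np

# Sketch only: a crude learned inverse-kinematics model (nearest-neighbour
# lookup standing in for LWPR) brings a toy 2-link arm near the target, and a
# visual-feedback loop removes the remaining positioning error.

L1, L2 = 1.0, 0.8

def forward(q):                       # "camera" observation of the end effector
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

# Build a training set and a trivial learned inverse model.
Q = np.random.uniform(-np.pi, np.pi, size=(2000, 2))
X = np.array([forward(q) for q in Q])
def learned_ik(x_target):
    return Q[np.argmin(np.linalg.norm(X - x_target, axis=1))]

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0]+q[1]), np.cos(q[0]+q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

target = np.array([1.2, 0.6])
q = learned_ik(target)                       # coarse open-loop estimate
for _ in range(20):                          # visual servoing refinement
    err = target - forward(q)
    q = q + 0.5 * np.linalg.pinv(jacobian(q)) @ err
print(np.round(forward(q) - target, 4))      # residual error close to zero
```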
|
6 |
Recovering Scale in Relative Pose and Target Model Estimation Using Monocular Vision
Tribou, Michael, January 2009 (links)
A combined relative pose and target object model estimation framework using a monocular camera as the primary feedback sensor has been designed and validated in a simulated robotic environment. The monocular camera is mounted on the end-effector of a robot manipulator and measures the image plane coordinates of a set of point features on a target workpiece object. Using this information, the relative position and orientation, as well as the geometry, of the target object are recovered recursively by a Kalman filter process. The Kalman filter facilitates the fusion of supplemental measurements from range sensors with those gathered by the camera. This process allows the estimated system state to remain accurate and to recover the proper environment scale.
Current approaches in the research areas of visual servoing control and mobile robotics are studied in the case where the target object feature point geometry is well-known prior to the beginning of the estimation. In this case, only the relative pose of target object frames is estimated over a sequence of frames from a single monocular camera. An observability analysis was carried out to identify the physical configurations of camera and target object for which the relative pose cannot be recovered by measuring only the camera image plane coordinates of the object point features.
A popular extension is to estimate the target object model concurrently with the relative pose of the camera frame, a process known as Simultaneous Localization and Mapping (SLAM). The recursive framework was augmented to facilitate this larger estimation problem. The scale of the recovered solution is ambiguous using measurements from a single camera. A second observability analysis highlights more configurations for which the relative pose and target object model are unrecoverable from camera measurements alone. Instead, measurements which contain the global scale are required to obtain an accurate solution.
A set of additional sensors are detailed, including range finders and additional cameras. Measurement models for each are given, which facilitate the fusion of this supplemental data with the original monocular camera image measurements. A complete framework is then derived to combine a set of such sensor measurements to recover an accurate relative pose and target object model estimate.
This proposed framework is tested in a simulation environment with a virtual robot manipulator tracking a target object workpiece through a relative trajectory. All of the detailed estimation schemes are executed: the single monocular camera cases when the target object geometry is known and unknown, respectively; a two-camera system in which the measurements are fused within the Kalman filter to recover the scale of the environment; a camera and point range sensor combination which provides a single range measurement at each system time step; and a laser pointer and camera hybrid which concurrently tries to measure the feature point images and a single range metric. The performance of the individual test cases is compared to determine which set of sensors is able to provide robust and reliable estimates for use in real-world robotic applications.
Finally, some conclusions on the performance of the estimators are drawn and directions for future work are suggested. The camera and range finder combination is shown to accurately recover the proper scale for the estimate and warrants further investigation. Further, early results from the multiple monocular camera setup show superior performance to the other sensor combinations and interesting possibilities are available for wide field-of-view super sensors with high frame rates, built from many inexpensive devices.
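The central scale-recovery point, that bearing-only camera measurements fix direction but not distance while a single range measurement fixes the scale, can be shown with a tiny example. The sketch below uses a few Gauss-Newton iterations on one feature point in place of the thesis's Kalman filter, and the geometry and noise-free measurements are invented.

```python
import numpy as np

# Illustrative only: a single feature point at unknown camera-frame position
# p = (X, Y, Z). The camera measures only the bearing (X/Z, Y/Z), which fixes
# the direction of p but not its length; one range measurement fixes the scale.
# A few Gauss-Newton steps play the role of the Kalman filter update here.

p_true = np.array([0.4, -0.2, 2.5])
z_cam = p_true[:2] / p_true[2]            # noiseless pinhole bearing
z_range = np.linalg.norm(p_true)          # noiseless range measurement

def residual_and_jacobian(p):
    X, Y, Z = p
    r = np.linalg.norm(p)
    res = np.array([X / Z - z_cam[0], Y / Z - z_cam[1], r - z_range])
    J = np.array([[1 / Z, 0.0, -X / Z**2],
                  [0.0, 1 / Z, -Y / Z**2],
                  [X / r, Y / r, Z / r]])
    return res, J

p = np.array([0.0, 0.0, 1.0])             # poor initial guess (wrong scale)
for _ in range(10):
    res, J = residual_and_jacobian(p)
    p = p - np.linalg.solve(J.T @ J, J.T @ res)
print(np.round(p - p_true, 6))            # converges to the true point
```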
|
7 |
Visual Servoing of a Surgical Light (original title: Asservissement visuel d'un éclairage opératoire)
Gauvin, Aurélien, 05 June 2012 (links)
The work presented in this manuscript addresses the visual servoing of a surgical light. It is a CIFRE thesis supported by MAQUET SAS, in collaboration with the PRISME laboratory of the University of Orléans. A surgical light provides the surgical team with lighting conditions sufficient to perform their activities. Positioning this equipment during an operation is arduous and frequently causes friction among the members of the team. The solutions already developed to solve this problem are unsatisfactory because of the high level of interaction between the system and the surgical team. This work proposes a visually servoed surgical light that requires no explicit information and is operational whatever the kind of surgery. It is an "intelligent" system, able to designate by itself the region the surgeon is working on, and an "autonomous" one, that is to say the system can move to this target on its own once the coordinates are known. These two points constitute the problem addressed by this study. To make the surgical light "intelligent", we propose a designation process based on the recognition of specific objects: blood, skin, sterile drapes and instruments. To achieve this, we fuse shape, color and motion attributes within a belief-function (evidential) framework. We solve the image inhomogeneity problems caused by the high illumination by adding an intermediate fusion step. Once all the objects are recognized, the region to be lit is designated using decision theory. The "autonomous" part of the system consists of a 2D visual servoing loop that drives the surgical light to converge on the previously designated region.
A prototype was built during this work, which enabled us to validate the proposed approach in real conditions.
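The 2D visual servoing loop mentioned above can be illustrated with a very small image-based control law. The sketch below is a generic centering controller, not the thesis implementation: it assumes a pan/tilt light head, a roughly decoupled interaction between pan/tilt rates and image motion, and invented focal length, gain, and frame rate, and it drives the designated image point toward the image center.

```python
import numpy as np

# Generic 2D (image-based) visual servoing sketch: command pan/tilt rates so
# that the designated target point converges to the image center. The
# decoupled interaction approximation and all parameters are assumptions.

def servo_step(target_px, image_size, focal_px=800.0, gain=0.6):
    """Return (pan_rate, tilt_rate) in rad/s from one image observation."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    # Normalized image-plane error of the designated point.
    ex = (target_px[0] - cx) / focal_px
    ey = (target_px[1] - cy) / focal_px
    # For small errors, pan mainly moves the point along x and tilt along y,
    # so a decoupled proportional law is a reasonable first approximation.
    return -gain * ex, -gain * ey

# Tiny closed-loop simulation: the point moves with the commanded rates.
point = np.array([420.0, 180.0])
for _ in range(120):
    pan, tilt = servo_step(point, (640, 480))
    point += np.array([pan, tilt]) * 800.0 * 0.1   # dt = 0.1 s, f = 800 px
print(np.round(point))                             # ~ (320, 240), image center
```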
|
8 |
A 2 1/2 D Visual Controller for an Autonomous Underwater Vehicle
Cesar, Diego Brito dos Santos, 02 May 2017 (links)
Underwater navigation is affected by the lack of GPS due to the attenuation of electromagnetic signals. Underwater robots therefore rely on dead reckoning as their main navigation system. However, localization via dead reckoning accumulates uncertainty over time. Consequently, visual and acoustic sensors have been used to increase the accuracy of robotic navigation, especially when the vehicle moves relative to a target object. This level of precision is required, for instance, for object manipulation, inspection, monitoring and docking. This work aims to develop and assess a hybrid visual controller for an autonomous underwater vehicle (AUV) using artificial fiducial markers as the reference. Artificial fiducial markers are planar targets designed to be easily detected by computer vision systems, and they provide a means to estimate the robot's pose with respect to the marker. They usually have a high detection rate and a low false-positive rate, which are desirable properties for visual servoing tasks. This master's thesis evaluated, from among the most popular open-source marker systems, the one that performs best in underwater environments in terms of detection rate, false-positive rate, and maximum distance and angle for successful detection. The best marker was then used for visual servoing on an underwater robot. The first experiments were performed in the Gazebo robot simulation environment and, after that, on a real prototype, the FlatFish. Tests in a saltwater tank were performed in order to assess the controller using static and adaptive gains. Finally, sea trials were carried out using the controller that behaved best in the controlled environment, in order to assess its performance in a real environment. The tests showed that the visual controller was capable of station-keeping in front of an artificial fiducial marker. Additionally, the adaptive gain brought improvements, mainly because it smooths the robot's motion at the beginning of the task.
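The adaptive gain mentioned at the end of the abstract is commonly implemented as a gain that stays low while the image error is large, which smooths the start of the motion, and rises as the error shrinks. The sketch below uses a generic exponential form with invented numbers, not the controller from the thesis, and compares it to a static gain on a scalar error.

```python
import numpy as np

# Generic adaptive-gain idea for visual servoing: use a lower gain when the
# error is large (gentle start) and a higher gain near convergence.
# The exponential form and all numeric values below are assumptions.

def adaptive_gain(err_norm, lam0=1.0, lam_inf=0.2, slope=1.0):
    """High gain (lam0) near zero error, low gain (lam_inf) for large errors."""
    return (lam0 - lam_inf) * np.exp(-slope * err_norm) + lam_inf

def run(gain_fn, e0=2.0, dt=0.1, steps=300):
    """Return (final error, largest single correction) for a scalar loop."""
    e, peak_step = e0, 0.0
    for _ in range(steps):
        lam = gain_fn(abs(e))
        step = lam * e * dt                 # commanded correction this cycle
        peak_step = max(peak_step, abs(step))
        e -= step
    return round(e, 6), round(peak_step, 4)

print("static  :", run(lambda e: 1.0))      # converges, but aggressive first command
print("adaptive:", run(adaptive_gain))      # gentler start; gain rises as error shrinks
```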
|