61

Distributed Mobile Robot Localization and Communication System for Special Interventions

Sales Gil, Jorge 28 October 2011 (has links)
This thesis focuses on the development of a distributed mobile robot localization and communication system for special interventions, such as those carried out by fire fighters in fire-ground search and rescue. The use-case scenario is the one described for the GUARDIANS EU project, where a swarm formation of mobile robots accompanies a fire fighter during a rescue intervention in a warehouse. Localizing the robots and the fire fighter during an indoor intervention in the presence of smoke is one of the most interesting challenges in this scenario. Several localization techniques have been developed using ultrasonic sensors, radio-frequency signals, and visual information. Several communication protocols that can improve the efficiency of the system in such a scenario have also been studied, and a cross-layer communication platform is proposed that improves the connectivity of the mobile nodes during an intervention and reduces the number of lost data packets.
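The range-based localization techniques mentioned above (ultrasonic and radio-frequency) typically reduce to multilateration from anchors at known positions. A minimal 2D least-squares sketch, illustrative only and not the thesis's actual algorithm:

```python
import numpy as np

def trilaterate_2d(anchors, ranges):
    """Least-squares 2D position from >= 3 anchor positions and range
    measurements (e.g. ultrasonic or RF time-of-flight).

    Subtracting the first range equation from the others linearizes
    ||p - a_i||^2 = r_i^2 into A p = b, solved by least squares."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

With noisy ranges the least-squares solution degrades gracefully, which is why this formulation is preferred over intersecting circles directly.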
62

From Human to Robot Grasping

Romero, Javier January 2011 (has links)
Imagine that a robot fetched this thesis for you from a book shelf. How do you think the robot would have been programmed? One possibility is that experienced engineers had written low-level descriptions of all imaginable tasks, including grasping a small book from this particular shelf. A second option would be that the robot tried to learn how to grasp books from your shelf autonomously, resulting in hours of trial and error and several books on the floor. In this thesis, we argue in favor of a third approach in which you teach the robot how to grasp books from your shelf through grasping by demonstration. It is based on the idea of robots learning grasping actions by observing humans performing them. This imposes minimum requirements on the human teacher: no programming knowledge and, in this thesis, no need for special sensory devices. It also maximizes the number of sources from which the robot can learn: any video footage showing a task performed by a human could potentially be used in the learning process. And hopefully it reduces the number of books that end up on the floor.

This document explores the challenges involved in the creation of such a system. First, the robot should be able to understand what the teacher is doing with their hands. That is, it needs to estimate the pose of the teacher's hands by visually observing them, in the absence of markers or any other input devices that could interfere with the demonstration. Second, the robot should translate the human representation, acquired in terms of hand poses, to its own embodiment. Since the kinematics of the robot are potentially very different from the human ones, defining a similarity measure applicable to very different bodies becomes a challenge. Third, the execution of the grasp should be continuously monitored to react to inaccuracies in the robot's perception or changes in the grasping scenario. While visual data can help correct the reaching movement toward the object, tactile data enables accurate adaptation of the grasp itself, thereby adjusting the robot's internal model of the scene to reality. Finally, acquiring compact models of human grasping actions can help both in perceiving human demonstrations more accurately and in executing them in a more human-like manner. Moreover, modeling human grasps can provide us with insights about what makes an artificial hand design anthropomorphic, assisting the design of new robotic manipulators and hand prostheses. All these modules address particular subproblems of a grasping-by-demonstration system. We hope the research on these subproblems performed in this thesis will both bring us closer to our dream of a learning robot and contribute to the multiple research fields from which these subproblems come.
63

Posicionamento dinâmico utilizando controle a estrutura variável e servovisão. / Dynamic positioning control using variable structure and visual servoing.

Gustavo Jales Sokal 16 July 2010 (has links)
The design of a dynamic positioning system for a small boat, based on variable structure control and visual servoing, is presented. Several control techniques from the literature were investigated, and variable structure control was chosen mainly because of the operating mode of the motor drivers installed on the boat used in the experiments. The robustness of this control technique was also considered, since the available dynamic model of the boat is uncertain. The design of the sliding surface is presented as well. Computer vision techniques were used to measure the position of the boat from images taken with a webcam; this kind of measurement system was chosen for its high accuracy and low cost. Simulation and experimental results of discrete-time variable structure control with integral action on the boat's position error, included in order to eliminate steady-state error, are shown. To implement this controller, which requires the full state, four discrete-time state estimators are compared: an approximate differentiator; an asymptotic observer sampled at the camera's rate; an asymptotic observer sampled faster than the camera; and a Kalman filter.
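The estimator comparison described above can be illustrated with a minimal discrete-time Kalman filter for a constant-velocity model; the matrices and noise levels below are illustrative assumptions, not the thesis's actual tuning:

```python
import numpy as np

def kalman_1d_const_velocity(zs, dt, q=1e-3, r=1e-2):
    """Discrete-time Kalman filter estimating position and velocity
    from noisy position samples (e.g. webcam measurements) under a
    constant-velocity motion model."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # only position is measured
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([zs[0], 0.0])              # initial state [pos, vel]
    P = np.eye(2)
    est = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        est.append(x.copy())
    return np.array(est)
```

Unlike an approximate differentiator, which amplifies measurement noise, the filter trades responsiveness for smoothness through the ratio of `q` to `r`.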
64

Micro-Robotic Cholesteatoma Surgery: clinical requirements analysis and image-based control under constraints / Micro-Robotique pour la Chirurgie de Cholestéatome

Dahroug, Bassem 16 February 2018 (has links)
A disease called cholesteatoma affects the middle ear; in the absence of treatment, it can lead to serious complications. The only treatment in current medical practice is a surgical procedure. The incidence of residual or recurrent cholesteatoma is high, and the patient may have to undergo more than one surgical procedure. Therefore, a novel robotic system is proposed to eliminate the incidence of residual cholesteatoma by efficiently removing all infected cells during the first surgery, and to make the surgery less invasive. This manuscript presents the different challenges that the surgeon faces during such a micro-procedure. It also specifies the requirements for achieving a futuristic system dedicated to cholesteatoma surgery. In addition, a controller is proposed as a first step toward the ideal system. This controller guides a rigid surgical tool along a reference path under the constraints of the incision hole, and can guide either a straight tool or a curved one. The proposed controller is a high-level control formulated in the task space (or Cartesian space); it is a modular layer that can be added to different robotic structures. The controller showed good results in terms of accuracy when assessed on both a parallel robot and a serial robot.
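The incision-hole constraint is commonly formalized as a remote center of motion (RCM): the rigid tool's shaft must always pass through the fixed incision point. A minimal geometric sketch for a straight tool, an illustration of the constraint rather than the thesis's controller:

```python
import numpy as np

def rcm_tool_pose(tip_target, trocar):
    """Given a desired tool-tip position and a fixed incision point,
    return the shaft direction (unit vector from trocar to tip) and the
    insertion depth, so a straight rigid tool respects the RCM
    constraint: the shaft line always contains the trocar point."""
    d = np.asarray(tip_target, float) - np.asarray(trocar, float)
    depth = np.linalg.norm(d)
    if depth < 1e-9:
        raise ValueError("tip target coincides with the incision point")
    return d / depth, depth
```

A path-following controller would recompute this pose for each reference point, so lateral tip motion is achieved only by pivoting about the trocar.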
66

Desenvolvimento de esquema de controle com realimentação visual para um robô manipulador / Development of a control scheme with visual feedback for a robot manipulator

Soares, Allan Aminadab André Freire 22 March 2005 (has links)
This work proposes a kinematic control scheme using visual feedback for a robot arm with five degrees of freedom. Using computer vision techniques, a method was developed to determine the 3D Cartesian position and orientation (pose) of the robot arm from an image of the robot obtained through a camera. A colored triangular label is placed on the robot manipulator's tool, and efficient heuristic rules are used to locate the vertices of that label in the image. The tool pose is obtained from those vertices through numerical methods. A color calibration scheme based on the K-means algorithm was implemented to guarantee the robustness of the vision system in the presence of lighting variations. The extrinsic camera parameters are computed from the image of four coplanar points whose 3D Cartesian coordinates, relative to a fixed frame, are known. Two distinct tool poses, initial and final, obtained from the image, are interpolated to generate a desired trajectory in Cartesian space. The error signal in the proposed control scheme is the difference between the desired tool pose and the actual tool pose. Gains are applied to the error signal, and the resulting signal is mapped into joint increments using the pseudoinverse of the manipulator's Jacobian matrix. These increments are applied to the manipulator's joints, moving the tool to the desired pose.
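The final step described above, a gained pose error mapped to joint increments through the Jacobian pseudoinverse, can be sketched for an illustrative planar two-link arm (not the 5-DoF manipulator used in the work):

```python
import numpy as np

def fk_2link(q, l1=1.0, l2=1.0):
    """Tip position of a planar two-link arm."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian_2link(q, l1=1.0, l2=1.0):
    """Analytic Jacobian of the tip position w.r.t. the joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def servo_step(q, target, gain=0.5):
    """One kinematic control step: the gained Cartesian error is mapped
    to joint increments via the Jacobian pseudoinverse."""
    e = target - fk_2link(q)
    dq = np.linalg.pinv(jacobian_2link(q)) @ (gain * e)
    return q + dq
```

Iterating `servo_step` drives the tip to any reachable, non-singular target; near singularities a damped pseudoinverse is usually substituted.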
67

Navigation augmentée d'informations de fluorescence pour la chirurgie laparoscopique robot-assistée / Fluorescence-augmented navigation for robot-assisted laparoscopic surgery

Agustinos, Anthony 06 April 2016 (has links)
Laparoscopic surgery faithfully reproduces the principles of conventional surgery with minimal physical aggression. While this surgery appears very beneficial for the patient, it is a difficult procedure in which the complexity of the surgical act is increased compared with conventional surgery. This complexity is partly due to the manipulation of the surgical instruments and the visualization of the surgical scene (including the restricted field of view of a conventional endoscope). The surgeon's decision-making could be improved by identifying critical areas, or areas of interest that are not visible, in the surgical scene. My research aims to combine robotics, computer vision, and fluorescence to address these difficulties: fluorescence imaging provides additional visual information to assist the surgeon in determining areas to operate on or to avoid (for example, visualization of the cystic duct during cholecystectomy). Robotics provides the accuracy and efficiency of the surgeon's gesture, as well as a more intuitive visualization and tracking of the surgical scene. The combination of these two technologies helps guide and secure the surgical gesture. The first part of this work consists in extracting visual information in both imaging modalities (laparoscopy/fluorescence). Methods for real-time 2D/3D localization of surgical instruments in the laparoscopic image and of anatomical targets in the fluorescence image were designed and developed. The second part consists in exploiting the bimodal visual information to develop control laws for robotic endoscope and instrument holders. Visual servoing controls of a robotic endoscope holder, to track one or more instruments in the laparoscopic image or a target of interest in the fluorescence image, were implemented. In order to control a robotic instrument holder from the visual information provided by the imaging system, a calibration method based on the 3D localization of the surgical instruments was also developed. This multimodal environment was evaluated quantitatively on a test bench and then on anatomical specimens. Ultimately, this work could be integrated into lightweight, not rigidly linked robotic architectures, using comanipulation robots with more elaborate controls such as force feedback. Such an "augmentation" of the surgeon's visualization and action capabilities could help optimize patient care.
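Tracking an instrument with a robotic endoscope holder is typically cast as image-based visual servoing: the tracked point is driven toward the image center through the interaction matrix of a point feature. A minimal sketch under the classic normalized-coordinates model, illustrative rather than a reproduction of the thesis's control laws:

```python
import numpy as np

def camera_velocity_to_center(point, depth, gain=0.8):
    """Image-based visual servoing for one point feature: return the
    6-DoF camera velocity v = -gain * pinv(L) @ e, where L is the 2x6
    interaction matrix of a normalized image point (x, y) at depth Z
    and e is the error to the image center (0, 0)."""
    x, y = point
    Z = depth
    L = np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,      -(1 + x**2), y],
        [0.0,      -1.0 / Z, y / Z, 1 + y**2,   -x * y,      -x],
    ])
    e = np.array([x, y])   # desired feature position is the center
    return -gain * (np.linalg.pinv(L) @ e)
```

The depth `Z` is rarely known exactly; a coarse estimate usually suffices because it only scales the translational part of the command.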
68

Vision Based Attitude Control

Hladký, Maroš January 2018 (has links)
The problem of precise pointing, and more specifically attitude control, has been present since the first days of flight and aerospace engineering. Precise attitude control is a necessity for a great variety of applications. In the air, planes and unmanned aerial vehicles need to be able to orient themselves precisely. In space, a telescope or a satellite relies on attitude control to reach the stars or survey the Earth. Attitude control can be based on various principles, pre-calculated variables, and measurements. It is common to use gyroscopes and Sun/star/horizon sensors for attitude determination. While those technologies are well established in the industry, the rise in computational power and efficiency in recent years has enabled the processing of an infinitely richer source of information: vision. In this thesis, a visual system is used for attitude determination and is combined with a control algorithm to form a vision-based attitude control system. A demonstrator was designed, built, and programmed for the purpose of vision-based attitude control. It is based on the principle of visual servoing, a method that links image measurements to the attitude control in the form of a set of joint velocities. The intermediate steps are image acquisition and processing, feature detection, feature tracking, and the computation of joint velocities in a closed-loop control scheme. The system is then evaluated in a series of partial experiments. The results show that the detection algorithms used, Shi-Tomasi and Harris, perform equally well in feature detection and are able to provide a large number of features for tracking. The pyramidal implementation of the Lucas-Kanade tracking algorithm proves to be a capable method for reliable feature tracking, invariant to rotation and scale change. To further evaluate the visual servoing, a complete demonstrator is tested. The demonstrator shows the capability of visual servoing for the purpose of vision-based attitude control. Improvements to the hardware and implementation are recommended and planned, to push the system beyond the demonstrator stage into an applicable system.
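The Shi-Tomasi criterion mentioned above scores a candidate feature by the minimum eigenvalue of the local structure tensor; Harris scores the same tensor with det(M) - k·trace(M)². A compact NumPy sketch, illustrative and without the gradient smoothing and non-maximum suppression a real detector uses:

```python
import numpy as np

def shi_tomasi_score(img, x, y, win=3):
    """Shi-Tomasi 'good feature' score: the minimum eigenvalue of the
    2x2 structure tensor of image gradients in a window around (x, y).
    High scores mean gradients in two independent directions, i.e. a
    corner that can be tracked reliably."""
    gy, gx = np.gradient(img.astype(float))
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    ixx = np.sum(gx[sl] ** 2)
    iyy = np.sum(gy[sl] ** 2)
    ixy = np.sum(gx[sl] * gy[sl])
    M = np.array([[ixx, ixy], [ixy, iyy]])
    return np.linalg.eigvalsh(M)[0]   # smallest eigenvalue
```

Edges score near zero (one gradient direction) and flat regions score zero, which is why the criterion selects trackable corners.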
69

Suivi des structures offshore par commande référencée vision et multi-capteurs / Offshore structure following by means of sensor servoing and sensor fusion

Krupiński, Szymon 10 July 2014 (has links)
This thesis aims to enable the use of autonomous underwater vehicles (AUVs) for the visual inspection of offshore structures. Two tasks are identified: following rectilinear structures and stabilizing in front of planar targets. Fully actuated AUVs equipped with an inertial measurement unit, a DVL, an altimeter, and a video camera are targeted. The proposed solution takes advantage of the cascade structure of the 6-DoF vehicle dynamics and divides the controller into an inner loop, which servoes the vehicle velocity to its setpoint, and an outer loop, which computes the reference velocity from visual information. Pipe following is achieved by a 2D visual servoing scheme based on the binormalized Plücker coordinates of the parallel lines corresponding to the pipe contours detected in the image; global asymptotic and locally exponential convergence of position, orientation, and velocity is obtained. The stabilization controller relies on the homography matrix relating the observed features of a planar target to the image of the same target at the desired pose; only imprecise knowledge of the target's orientation is needed, and the Cartesian depth of the target is estimated with an observer. Quasi-global and locally exponential convergence of this controller is demonstrated. To test these methods, a custom simulator was developed: high-fidelity synthetic images generated by the MORSE simulator are processed in real time with the OpenCV library, while a Simulink model computes the full 6-DoF vehicle dynamics, the components being bound together by ROS message passing. Detailed results are presented and support the theoretical findings.
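The Plücker representation used for line following can be sketched as follows; the convention below (unit direction plus unit moment/normal and distance to the origin) is one common choice and may differ in detail from the thesis:

```python
import numpy as np

def binormalized_pluecker(p1, p2):
    """Plücker coordinates of the 3D line through p1 and p2 in a
    (bi)normalized form: a unit direction u, the unit normal n of the
    plane containing the line and the origin, and the orthogonal
    distance d from the origin to the line. The raw moment vector is
    m = p x u for any point p on the line."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    u = (p2 - p1) / np.linalg.norm(p2 - p1)
    m = np.cross(p1, u)                  # line moment
    d = np.linalg.norm(m)                # distance from origin to line
    n = m / d if d > 1e-12 else np.zeros(3)
    return u, n, d
```

Representing the two pipe contours this way makes the lateral offset and heading error of the vehicle directly readable from the image measurements.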
70

Robust micro/nano-positioning by visual servoing / Micro et nano-positionnement robuste par l'asservissement visuel

Cui, Le 26 January 2016 (has links)
With the development of nanotechnology, it has become possible to design and assemble nano-objects. For robust and reliable automation, handling and manipulation tasks at the nanoscale have been increasingly required over the last decade. Vision is one of the most indispensable ways to observe the world at the microscale and nanoscale, and vision-based control is an efficient solution to control problems in robotics. In this thesis, we address the issue of micro- and nano-positioning by visual servoing in a scanning electron microscope (SEM). As fundamental background, SEM image formation and SEM vision geometry models are studied first, and a nonlinear optimization process for SEM calibration is presented, considering both the perspective and the parallel projection model. In this study, it is found that it is difficult to observe depth information from the variation of the sample's pixel position in the SEM image at high magnification. To deal with the fact that motion along the depth direction is not observable in a SEM, image defocus information is used as a visual feature to control the motion along that direction. A hybrid visual servoing scheme is proposed for the 6-DoF micro-positioning task, using both image defocus information and image photometric information; it is validated using a parallel robot inside a SEM. Based on a similar idea, a closed-loop control scheme for the SEM autofocusing task is introduced and validated by experiments. In order to achieve visual guidance in a SEM, a template-based visual tracking and 3D pose estimation framework is proposed. This method is robust to the defocus blur caused by motion along the depth direction, since the defocus level is modeled within the visual tracking framework.
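A common scalar proxy for the image defocus information mentioned above is a sharpness score such as the variance of the Laplacian; defocus blur suppresses high spatial frequencies, so the score drops as the sample leaves the focal plane. A minimal sketch, illustrative rather than the thesis's actual defocus feature:

```python
import numpy as np

def focus_measure(img):
    """Sharpness score: variance of a discrete 5-point Laplacian over
    the image interior. Such a scalar can drive an autofocus loop or
    serve as the depth-axis visual feature."""
    img = img.astype(float)
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()
```

An autofocus controller would sweep the working distance (or electron-beam focus) and climb the gradient of this measure until it peaks.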
