  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Visual Servoing for Precision Shipboard Landing of an Autonomous Multirotor Aircraft System

Wynn, Jesse Stewart 01 September 2018 (has links)
Precision landing capability is a necessary development before unmanned aircraft systems (UAS) can realize their full potential in today's society. Current multirotor UAS rely heavily on GPS data for positioning information during landing. While GPS is generally accurate to within several meters, much higher accuracy is needed to ensure safe and trouble-free operations in several UAS applications currently being pursued, including package delivery, automatic docking and recharging, and landing on moving vehicles. The specific problem we consider is precision landing of a multirotor unmanned aircraft on a small barge at sea, which presents several significant challenges. Not only must we land on a moving vehicle, but the vessel also experiences random rotational and translational motion as a result of waves and wind. Because maritime operations often span long periods of time, it is also desirable that precision landing can occur at any time, day or night. In this work we present a complete approach for precision shipboard landing that addresses each of the aforementioned challenges. Our method leverages an on-board camera and a specialized landing target that can be detected in light or dark conditions. Features belonging to the target are extracted from camera imagery and used to compute image-based visual servoing velocity commands that lead to precise alignment between the multirotor and the landing target. To enable the multirotor to match the horizontal velocities of the barge, an extended Kalman filter is used to generate feed-forward velocity reference commands. The complete landing procedure is guided by a state-machine architecture that incorporates corrections to account for wind and can quickly reacquire the landing target after a loss event.
Our approach is thoroughly validated through full-scale outdoor flight tests and is shown to be reliable, timely, and accurate to within 4 to 10 centimeters.
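The image-based visual servoing step described above can be sketched with the classic point-feature control law; the interaction-matrix form and gain below are conventional textbook choices, not the thesis's implementation:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point
    at (x, y) with depth Z, relating feature motion to camera velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classic IBVS law: v = -gain * pinv(L) * (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```

When the observed features coincide with the desired ones the commanded 6-DOF camera velocity is zero, which is what lets the multirotor hold alignment over the target.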
62

Distributed Mobile Robot Localization and Communication System for Special Interventions

Sales Gil, Jorge 28 October 2011 (has links)
This thesis focuses on the development of a distributed mobile robot localization and communication system for special interventions such as those carried out by fire-fighters in fire-ground search and rescue. The use-case scenario is related to the one described for the GUARDIANS EU project, in which a swarm formation of mobile robots accompanies a fire-fighter during a rescue intervention in a warehouse. Localizing the robots and the fire-fighter during an indoor intervention in the presence of smoke is one of the most interesting challenges in this scenario. Several localization techniques have been developed using ultrasonic sensors, radio-frequency signals, and visual information. Communication protocols that can improve the efficiency of the system in such a scenario have also been studied, together with a proposal for a cross-layer communication platform that improves the connectivity of the mobile nodes during an intervention and reduces the number of lost data packets.
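Range-based localization of the kind mentioned above (ultrasonic or radio-frequency ranging to known beacons) is often solved by least-squares trilateration; the sketch below assumes known anchor positions and is an illustration of the idea, not the thesis's algorithm:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position estimate from anchor positions and
    measured distances, by linearizing the range equations against
    the first anchor."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    p1, r1 = anchors[0], ranges[0]
    # 2*(p_i - p_1) . x = r_1^2 - r_i^2 + |p_i|^2 - |p_1|^2
    A = 2.0 * (anchors[1:] - p1)
    b = (r1 ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(p1 ** 2))
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol
```

With noisy ranges the same least-squares formulation simply returns the best fit instead of the exact intersection.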
63

From Human to Robot Grasping

Romero, Javier January 2011 (has links)
Imagine that a robot fetched this thesis for you from a bookshelf. How do you think the robot would have been programmed? One possibility is that experienced engineers wrote low-level descriptions of all imaginable tasks, including grasping a small book from this particular shelf. A second option would be for the robot to try to learn how to grasp books from your shelf autonomously, resulting in hours of trial and error and several books on the floor. In this thesis, we argue in favor of a third approach, where you teach the robot how to grasp books from your shelf through grasping by demonstration. It is based on the idea of robots learning grasping actions by observing humans performing them. This imposes minimal requirements on the human teacher: no programming knowledge and, in this thesis, no need for special sensory devices. It also maximizes the number of sources from which the robot can learn: any video footage showing a task performed by a human could potentially be used in the learning process. And, hopefully, it reduces the number of books that end up on the floor. This document explores the challenges involved in creating such a system. First, the robot should be able to understand what the teacher is doing with their hands; that is, it needs to estimate the pose of the teacher's hands by visually observing them, in the absence of markers or any other input devices that could interfere with the demonstration. Second, the robot should translate the human representation, acquired in terms of hand poses, to its own embodiment. Since the kinematics of the robot are potentially very different from those of the human, defining a similarity measure applicable to very different bodies becomes a challenge. Third, the execution of the grasp should be continuously monitored to react to inaccuracies in the robot's perception or changes in the grasping scenario. While visual data can help correct the reaching movement toward the object, tactile data enables accurate adaptation of the grasp itself, thereby adjusting the robot's internal model of the scene to reality. Finally, acquiring compact models of human grasping actions can help both to perceive human demonstrations more accurately and to execute them in a more human-like manner. Moreover, modeling human grasps can provide insights into what makes an artificial hand design anthropomorphic, assisting the design of new robotic manipulators and hand prostheses. All these modules address particular subproblems of a grasping-by-demonstration system. We hope the research on these subproblems performed in this thesis will both bring us closer to our dream of a learning robot and contribute to the multiple research fields from which these subproblems come. / QC 20111125
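One simple way to make the embodiment-mapping problem above concrete is a fingertip-based similarity measure; the metric below (centred, scale-normalized fingertip point sets) is purely illustrative and is not the measure developed in the thesis:

```python
import numpy as np

def fingertip_similarity(human_tips, robot_tips):
    """Compare two hand configurations by their fingertip point sets:
    centre both sets, normalize their scale, and return the mean
    residual distance (lower means more similar), so that hands of
    different sizes can still match."""
    def normalize(tips):
        tips = np.asarray(tips, dtype=float)
        tips = tips - tips.mean(axis=0)   # remove position
        scale = np.linalg.norm(tips)
        return tips / scale if scale > 0 else tips  # remove size
    a, b = normalize(human_tips), normalize(robot_tips)
    return float(np.mean(np.linalg.norm(a - b, axis=1)))
```

Because position and scale are factored out, a small robot hand mimicking a large human hand can still score as similar; only the relative fingertip arrangement matters.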
64

Posicionamento dinâmico utilizando controle a estrutura variável e servovisão. / Dynamic positioning control using variable structure and visual servoing.

Gustavo Jales Sokal 16 July 2010 (has links)
This work presents the design of a dynamic positioning system for a small boat based on variable structure control with computer vision feedback. Several control techniques from the literature were investigated, and variable structure control was chosen mainly because of the operating mode of the motor drivers installed on the boat used in the experiments. The robustness of this control technique was also an important factor, since the available dynamic model of the boat is uncertain. The design of the sliding surface for the variable structure controller is presented as well. Computer vision techniques were used to measure the position of the boat from images captured with a webcam; this kind of measurement system was chosen for its high accuracy and low cost. Simulation and experimental results of discrete-time variable structure control with integral action on the boat's position error, included to eliminate steady-state error, are presented. Because this controller requires the full state, four discrete-time state estimators are compared: an approximate differentiator; an asymptotic observer sampled at the camera's frame rate; an asymptotic observer sampled faster than the camera; and a Kalman filter.
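A discrete-time variable structure law with integral action, as described in the abstract, can be sketched for a single axis; the gains, the sliding-surface shape, and the boundary-layer saturation below are illustrative assumptions, not the thesis's design:

```python
import numpy as np

def smc_step(pos_err, vel_err, int_err, dt, lam=1.0, ki=0.5,
             gain=2.0, boundary=0.05):
    """One step of a discrete-time variable structure control law.
    The sliding surface combines velocity, position, and integral
    position errors; the switching term is saturated inside a
    boundary layer to limit chattering."""
    int_err = int_err + pos_err * dt              # integral of position error
    s = vel_err + lam * pos_err + ki * int_err    # sliding surface
    u = -gain * np.clip(s / boundary, -1.0, 1.0)  # saturated switching control
    return u, int_err
```

On the sliding surface (s = 0) the control is zero; any position error drives the surface away from zero and produces a restoring command, with the integral term removing steady-state offset.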
65

Micro-Robotic Cholesteatoma Surgery : clinical requirements analysis and image-based control under constraints / Micro-Robotique pour la Chirurgie de Cholestéatome

Dahroug, Bassem 16 February 2018 (has links)
A disease called cholesteatoma affects the middle ear and, in the absence of treatment, can lead to serious complications. The only treatment in current medical practice is a surgical procedure. The incidence of residual or recurrent cholesteatoma is high, and the patient often has to undergo more than one operation. A novel robotic system is therefore proposed to eliminate residual cholesteatoma by removing all infected cells efficiently during the first surgery, and to make the surgery less invasive. This manuscript presents the various challenges the surgeon faces in such a micro-procedure and specifies the requirements for a future system dedicated to cholesteatoma surgery. In addition, a controller is proposed as a first step toward that ideal system. It guides a rigid surgical tool along a reference path under the constraints imposed by the incision hole, and can guide either a straight tool or a curved one. The proposed controller is a high-level control law formulated in the task space (Cartesian space); it is a modular layer that can be added to different robotic structures. It showed good accuracy when evaluated on both a parallel robot and a serial robot.
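The incision-hole constraint above is commonly formalized as a remote centre of motion (RCM): the tool shaft must always pass through the incision point. The decomposition below is a generic kinematic sketch of that idea, not the controller proposed in the manuscript:

```python
import numpy as np

def rcm_decompose(p_tip, p_rcm, v_tip):
    """Split a desired tool-tip velocity into motions compatible with a
    remote centre of motion at the incision point p_rcm: translation
    along the tool shaft plus rotation about the incision point."""
    shaft = np.asarray(p_tip, float) - np.asarray(p_rcm, float)
    d = np.linalg.norm(shaft)
    a = shaft / d                       # unit shaft direction
    v_tip = np.asarray(v_tip, float)
    v_along = np.dot(v_tip, a) * a      # slide the tool through the trocar
    v_perp = v_tip - v_along            # must come from pivoting
    omega = np.cross(a, v_perp) / d     # angular velocity about the trocar
    return v_along, omega
```

The tip then moves with v_along + omega x (p_tip - p_rcm), which reproduces the desired tip velocity while the shaft keeps passing through the incision point.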
67

Desenvolvimento de esquema de controle com realimentação visual para um robô manipulador / Development of a control scheme with visual feedback for a robot manipulator

Soares, Allan Aminadab André Freire 22 March 2005 (has links)
This work proposes a kinematic control scheme using visual feedback for a robot arm with five degrees of freedom. A method based on computer vision techniques was developed to determine the three-dimensional Cartesian position and orientation (pose) of the robot arm from an image of the robot captured by a camera. A colored triangular label is placed on the manipulator's tool, and efficient heuristic rules are used to locate the label's vertices in the image; the tool pose is then recovered from those vertices through numerical methods. A color calibration scheme based on the K-means algorithm was implemented to keep the vision system robust to lighting variations. The extrinsic camera parameters are computed from the image of four coplanar points whose 3D Cartesian coordinates with respect to a fixed frame are known. Two distinct tool poses, initial and final, obtained from the image are interpolated to generate a desired trajectory in Cartesian space. The error signal in the proposed control scheme is the difference between the desired and actual tool poses. Gains are applied to the error signal, and the result is mapped into joint increments using the pseudoinverse of the manipulator's Jacobian matrix. These increments are applied to the manipulator joints, moving the tool to the desired pose.
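The pseudoinverse-Jacobian mapping from pose error to joint increments described above can be sketched with a planar two-link arm standing in for the five-DOF manipulator; the link lengths and gain are illustrative assumptions:

```python
import numpy as np

def fk_2link(q, l1=1.0, l2=1.0):
    """Forward kinematics: tool position of a planar two-link arm."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jacobian_2link(q, l1=1.0, l2=1.0):
    """Position Jacobian of the same planar two-link arm."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def joint_increment(q, pose_error, gain=0.5):
    """Map a task-space pose error to joint increments through the
    pseudoinverse of the Jacobian, as in resolved-rate control."""
    return np.linalg.pinv(jacobian_2link(q)) @ (gain * np.asarray(pose_error))
```

Iterating these increments drives the tool toward the desired pose, which is the closed-loop behavior the abstract describes.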
68

Navigation augmentée d'informations de fluorescence pour la chirurgie laparoscopique robot-assistée / Fluorescence-augmented navigation for robot-assisted laparoscopic surgery

Agustinos, Anthony 06 April 2016 (has links)
Laparoscopic surgery faithfully reproduces the principles of conventional surgery with minimal physical trauma. While this approach is very beneficial for the patient, it is a difficult intervention in which the complexity of the surgical act is increased compared with conventional surgery. This complexity stems partly from the manipulation of the surgical instruments and from the visualization of the surgical scene (notably the restricted field of view of a conventional endoscope). The surgeon's decision-making could be improved by identifying critical areas, or areas of interest, that are not visible in the surgical scene. This research combines robotics, computer vision, and fluorescence to address these difficulties: fluorescence imaging provides additional visual information to help the surgeon determine areas to operate on or to avoid (for example, visualizing the cystic duct during a cholecystectomy), while robotics provides accuracy and efficiency of the surgeon's gesture as well as a more intuitive visualization and tracking of the surgical scene. Combining the two technologies helps guide and secure the surgical gesture. The first part of this work consisted of extracting visual information from both imaging modalities (laparoscopy/fluorescence): methods for real-time 2D/3D localization of surgical instruments in the laparoscopic image and of anatomical targets in the fluorescence image were designed and developed. The second part consisted of exploiting this bimodal visual information to derive control laws for robotic endoscope and instrument holders. Visual servoing controls were implemented for a robotic endoscope holder to track one or more instruments in the laparoscopic image, or a target of interest in the fluorescence image. To control a robotic instrument holder from the visual information provided by the imaging system, a calibration method based on the 3D localization of the surgical instruments was also developed. This multimodal environment was evaluated quantitatively on a test bench and then on anatomical specimens. Ultimately, this work could be integrated into lightweight, non-rigidly linked robotic architectures using comanipulation robots with more elaborate controls such as force feedback. Such an augmentation of the surgeon's visualization and action capabilities could help optimize patient care.
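A visual servoing law for an endoscope holder that keeps a tracked instrument in view can be sketched as proportional control on the normalized pixel error; the gain and the pan/tilt interpretation are assumptions for illustration, not the control laws developed in the thesis:

```python
import numpy as np

def centering_command(tip_px, image_size, gain=0.8):
    """Proportional 2D visual servoing: drive an endoscope holder's
    pan/tilt velocities so a tracked instrument tip moves toward the
    image centre. Pixel error is normalized to [-1, 1] per axis."""
    tip = np.asarray(tip_px, dtype=float)
    size = np.asarray(image_size, dtype=float)
    error = (tip - size / 2.0) / (size / 2.0)  # signed, normalized offset
    return -gain * error                        # pan, tilt velocity commands
```

With several instruments, the same law can be applied to the centroid of their tip positions so that all of them stay in the field of view.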
69

Touch driven dexterous robot arm control / Commande de bras de robot dextre conduit par le toucher

Kappassov, Zhanat 06 March 2017 (has links)
Robots have improved industrial processes, most recognizably in conveyor-belt assembly systems, and have the potential to bring even more benefits to society in transportation, exploration of dangerous zones, the deep sea, or even other planets, in health care, and in everyday life. A major barrier to their escape from fenced industrial areas into environments shared with humans is their poor skill in physical interaction tasks, including the manipulation of objects. While dexterity in manipulation is not affected by blindness in humans, it decreases dramatically in robots: without visual perception, robot operations are limited to static environments, whereas the real world is highly variable. In this thesis, we propose a different approach that controls the contact between a robot and the environment during physical interaction. Current physical interaction control approaches, however, are limited in the range of tasks they can perform. To allow robots to perform more tasks, we derive tactile features representing deformations of the mechanically compliant sensing surface of a tactile sensor, and incorporate these features into a robot controller via touch-dependent and task-dependent tactile feature mapping matrices. As a first contribution, we show how image processing algorithms can be used to discover the underlying three-dimensional structure of a contact frame between an object and an array of pressure-sensing elements with a mechanically compliant surface, attached to the end-effector of a robot arm interacting with that object. These algorithms produce the so-called tactile features. As a second contribution, we design a tactile servoing controller that combines these tactile features with a position/torque controller of the robot arm; it allows the arm's end-effector to steer the contact frame in a desired manner by regulating errors in these features. Finally, as a last contribution, we extend this controller by adding a task description layer to address four common problems in robotics: exploration, manipulation, recognition, and co-manipulation of objects. Throughout this thesis, we emphasize developing algorithms that work not only with simulated robots but also with real ones; all of these contributions have therefore been evaluated in experiments conducted with at least one real robot. Overall, this work aims to provide the robotics community with a unified framework that allows robot arms to be more dexterous and autonomous. Preliminary work is proposed for extending this framework to tasks that involve multi-contact control with multi-fingered robot hands.
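Tactile features of the kind described above, derived from a pressure-sensing array, can be sketched with standard image moments (total force, contact centroid, and contact patch orientation); this is an illustration of the idea, not the thesis's feature set:

```python
import numpy as np

def tactile_features(pressure):
    """Extract simple tactile features from a pressure-sensing array:
    total force, contact centroid, and contact patch orientation
    from second-order image moments."""
    p = np.asarray(pressure, dtype=float)
    total = p.sum()
    ys, xs = np.mgrid[0:p.shape[0], 0:p.shape[1]]
    cy, cx = (ys * p).sum() / total, (xs * p).sum() / total  # centroid
    mu20 = (p * (xs - cx) ** 2).sum()      # central moments of the
    mu02 = (p * (ys - cy) ** 2).sum()      # pressure distribution
    mu11 = (p * (xs - cx) * (ys - cy)).sum()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)  # patch orientation
    return total, (cx, cy), theta
```

A tactile servoing loop can then regulate errors in these scalar features (e.g., recenter the contact or rotate the patch) much as visual servoing regulates image-feature errors.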
70

Vision Based Attitude Control

Hladký, Maroš January 2018 (has links)
The problem of precise pointing, and more specifically attitude control, has been present since the first days of flight and aerospace engineering. Precise attitude control is a necessity for a great variety of applications. In the air, planes and unmanned aerial vehicles need to orient themselves precisely; in space, a telescope or a satellite relies on attitude control to reach the stars or survey the Earth. Attitude control can be based on various principles, pre-calculated variables, and measurements; it is common to use gyroscopes and Sun/star/horizon sensors for attitude determination. While those technologies are well established in industry, the rise in computational power and efficiency in recent years has enabled the processing of a far richer source of information: vision. In this thesis, a visual system is used for attitude determination and is blended with a control algorithm to form a Vision Based Attitude Control system. A demonstrator was designed, built, and programmed for the purpose of Vision Based Attitude Control. It is based on the principle of visual servoing, a method that links image measurements to attitude control in the form of a set of joint velocities. The intermediate steps are image acquisition and processing, feature detection, feature tracking, and the computation of joint velocities in a closed-loop control scheme. The system is then evaluated in a series of partial experiments. The results show that the detection algorithms used, Shi-Tomasi and Harris, perform equally well in feature detection and provide a large number of features for tracking. The pyramidal implementation of the Lucas-Kanade tracking algorithm proves to be a capable method for reliable feature tracking, invariant to rotation and scale change. To further evaluate the visual servoing, a complete demonstrator is tested. The demonstrator shows the capability of visual servoing for the purpose of Vision Based Attitude Control. Improvements to the hardware and implementation are recommended and planned to push the system beyond the demonstrator stage into an applicable system.
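The Harris detector evaluated in the demonstrator scores corners via the structure tensor of image gradients; below is a pure-NumPy sketch of the response map (the 3x3 window and k = 0.04 are conventional choices, and the thesis presumably uses a library implementation):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 computed from
    the gradient structure tensor M, smoothed with a 3x3 box window.
    High R marks corners; negative R marks edges."""
    img = np.asarray(img, dtype=float)
    iy, ix = np.gradient(img)               # image gradients (rows, cols)
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box(a):  # 3x3 box filter via zero-padded neighbourhood sums
        p = np.pad(a, 1)
        return sum(p[r:r + a.shape[0], c:c + a.shape[1]]
                   for r in range(3) for c in range(3))

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace
```

The Shi-Tomasi score differs only in using the smaller eigenvalue of the same structure tensor, which is why the two detectors behave so similarly in practice.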
