1

Design and implementation of robotic end-effectors for a prototype precision assembly system

Schöndorfer, Sebastian January 2016 (has links)
Manufacturers face increasing pressure to reduce the development costs and deployment times of automated assembly systems, especially for precision mechatronic products. To meet new and changing market needs, the difficulty of integrating such systems must be significantly reduced. Since 1994, the Microdynamic Systems Laboratory at Carnegie Mellon University has been developing an automation framework called the Agile Assembly Architecture (AAA). In addition to the concept, a prototype instantiation has been developed in the form of a modular tabletop precision assembly system termed Minifactory. The platform provided by Minifactory and AAA can support and integrate the various precision manufacturing processes needed to assemble a large variety of small mechatronic products. In this thesis, various enhancements for a second-generation agent-based micro-assembly system are designed, implemented, tested and improved. The project includes devising methods for tray feeding of high-value precision parts, micro-fastening techniques, and additional work on visual and force servoing. To support these functions, modular and reconfigurable robot end-effectors for handling millimeter-sized parts have been designed and built for the existing robotic agents. New concepts for robot end-effectors to grasp and release tiny parts, including image processing and intelligent control software, were required and were implemented in the prototype setup. These concepts must depart substantially from traditional handling paradigms in order to overcome the electrostatic and surface-tension forces that dominate when manipulating parts a millimeter or less in size. To keep the factory modular, a main part of this project was the initialization and auto-calibration of the different agents.
The main focus of this research is improving the design, deployment and reconfiguration capabilities of automated assembly systems for precision mechatronic products, which shortens both the development process and the assembly of factory systems. A strategic application for this approach is the automated assembly of small sensors, actuators, medical devices and chip-scale atomic systems such as atomic clocks, magnetometers and gyroscopes.
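The claim that surface-tension and electrostatic forces dominate at the millimeter scale can be illustrated quantitatively. The sketch below (not from the thesis; it uses generic textbook constants for steel and water) compares a cube-shaped part's weight, which scales with L³, against a rough capillary adhesion force, which scales with L:

```python
# Weight scales with volume (L^3); capillary adhesion scales with length (L),
# so their ratio grows as 1/L^2 as parts shrink.
RHO_STEEL = 7850.0   # kg/m^3, generic value
G = 9.81             # m/s^2
GAMMA_WATER = 0.072  # N/m, surface tension of water at room temperature

def weight(side_m):
    """Weight of a cube-shaped part with the given side length (N)."""
    return RHO_STEEL * side_m ** 3 * G

def capillary_force(side_m):
    """Rough capillary adhesion estimate: surface tension acting along one edge (N)."""
    return GAMMA_WATER * side_m

for side in (1e-3, 1e-2):  # a 1 mm part vs a 10 mm part
    ratio = capillary_force(side) / weight(side)
    print(f"side = {side * 1e3:.0f} mm, adhesion/weight = {ratio:.3f}")
```

For the 1 mm part the adhesion force is comparable to its weight, while for the 10 mm part the weight dominates by roughly two orders of magnitude — which is why gravity-based release strategies stop working for tiny parts.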
2

Reconstruction of 3D Points From Uncalibrated Underwater Video

Cavan, Neil January 2011 (has links)
This thesis presents a 3D reconstruction software pipeline that is capable of generating point cloud data from uncalibrated underwater video. This research project was undertaken as a partnership with 2G Robotics, and the pipeline described in this thesis will become the 3D reconstruction engine for a software product that can generate photo-realistic 3D models from underwater video. The pipeline proceeds in three stages: video tracking, projective reconstruction, and autocalibration. Video tracking serves two functions: tracking recognizable feature points, as well as selecting well-spaced keyframes with a wide enough baseline to be used in the reconstruction. Video tracking is accomplished using Lucas-Kanade optical flow as implemented in the OpenCV toolkit. This simple and widely used method is well-suited to underwater video, which is taken by carefully piloted and slow-moving underwater vehicles. Projective reconstruction is the process of simultaneously calculating the motion of the cameras and the 3D location of observed points in the scene. This is accomplished using a geometric three-view technique. Results are presented showing that the projective reconstruction algorithm detailed here compares favourably to state-of-the-art methods. Autocalibration is the process of transforming a projective reconstruction, which is not suitable for visualization or measurement, into a metric space where it can be used. This is the most challenging part of the 3D reconstruction pipeline, and this thesis presents a novel autocalibration algorithm. Results are shown for two existing cost function-based methods in the literature which failed when applied to underwater video, as well as the proposed hybrid method. The hybrid method combines the best parts of its two parent methods, and produces good results on underwater video. 
Final results are shown for the 3D reconstruction pipeline operating on short underwater video sequences to produce visually accurate 3D point clouds of the scene, suitable for photorealistic rendering. Although further work remains to extend and improve the pipeline for operation on longer sequences, this thesis presents a proof-of-concept method for 3D reconstruction from uncalibrated underwater video.
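The Lucas-Kanade step this pipeline relies on (via OpenCV's `calcOpticalFlowPyrLK` in practice) can be sketched for a single window in plain NumPy. This is an illustrative minimal version, not the thesis code:

```python
import numpy as np

def lucas_kanade_window(img0, img1, center, win=7):
    """Estimate the optical flow (u, v) of one window by solving the
    Lucas-Kanade least-squares system A d = b, where the rows of A are the
    spatial gradients [Ix, Iy] and b stacks the negated temporal
    differences -It over the window."""
    y, x = center
    r = win // 2
    Iy, Ix = np.gradient(img0.astype(float))       # spatial gradients
    It = img1.astype(float) - img0.astype(float)   # temporal difference
    sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d  # (u, v): flow along x and y
```

On an intensity ramp shifted by one pixel, the recovered horizontal flow is 1; the vertical component is undetermined (the aperture problem), and `lstsq` returns its minimum-norm value, 0.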
4

Robust Self-Calibration and Fundamental Matrix Estimation in 3D Computer Vision

Rastgar, Houman 30 September 2013 (has links)
Recent advances in the field of computer vision have brought many laboratory algorithms into the realm of industry. However, one problem that remains open in 3D vision is noise: the challenging problem of recovering 3D structure from images is highly sensitive to input data contaminated by errors that do not conform to ideal assumptions. Tackling the problem of extreme data, or outliers, has led to many robust methods that can handle moderate outlier levels and still provide accurate outputs. The problem remains open, however, especially at higher noise levels, so the goal of this thesis is to address robustness with respect to two central problems in 3D computer vision. The two problems are closely related and are presented together within a Structure from Motion (SfM) context. The first is robustly estimating the fundamental matrix from images whose correspondences contain high outlier levels. Even though this area has been extensively studied, two algorithms are proposed that significantly speed up the computation of the fundamental matrix and achieve accurate results in scenarios containing more than 50% outliers. The presented algorithms draw on ideas from robust statistics to develop guided sampling techniques based on information inferred from residual analysis. The second problem addressed in this thesis is the robust estimation of camera intrinsic parameters from fundamental matrices, or self-calibration. Self-calibration algorithms are notoriously unreliable in the general case, and it is shown that existing methods are highly sensitive to noise. In spite of this, robustness in self-calibration has received little attention in the literature. Experimental results show that a real-world self-calibration algorithm must be robust.
To introduce robustness into the existing methods, three robust algorithms are proposed that use existing constraints for self-calibration from the fundamental matrix, yet are less affected by noise than previous algorithms based on the same constraints. This is an important milestone, since self-calibration offers many possibilities by providing estimates of camera parameters without requiring access to the image acquisition device. The proposed algorithms rely on perturbation theory, guided sampling methods and a robust root-finding method for systems of higher-order polynomials. By adding robustness to self-calibration, it is hoped that this idea moves one step closer to being a practical method of camera calibration rather than merely a theoretical possibility.
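A minimal sketch of the robust fundamental-matrix pipeline the abstract describes: normalized 8-point estimation inside a RANSAC loop scored by the Sampson residual. This uses plain random sampling; the guided sampling the thesis proposes is not reproduced here, and the tight default threshold is only meaningful for noise-free synthetic data:

```python
import numpy as np

def _normalize(pts):
    # Hartley normalization: centroid at the origin, mean distance sqrt(2).
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    return np.column_stack([pts, np.ones(len(pts))]) @ T.T, T

def eight_point(x1, x2):
    """Normalized 8-point estimate of F, with x2^T F x1 = 0."""
    p1, T1 = _normalize(x1)
    p2, T2 = _normalize(x2)
    # Each correspondence gives one linear constraint on the 9 entries of F.
    A = np.column_stack([p2[:, :1] * p1, p2[:, 1:2] * p1, p1])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt      # enforce rank 2
    F = T2.T @ F @ T1                            # undo the normalization
    return F / np.linalg.norm(F)

def sampson(F, x1, x2):
    """First-order geometric (Sampson) residual of each correspondence."""
    p1 = np.column_stack([x1, np.ones(len(x1))])
    p2 = np.column_stack([x2, np.ones(len(x2))])
    Fp1, Ftp2 = p1 @ F.T, p2 @ F
    num = (p2 * Fp1).sum(axis=1) ** 2
    den = (Fp1[:, :2] ** 2).sum(axis=1) + (Ftp2[:, :2] ** 2).sum(axis=1)
    return num / den

def ransac_fundamental(x1, x2, iters=200, thresh=1e-8, rng=None):
    # thresh suits exact synthetic data; real pixel data needs a larger value.
    rng = np.random.default_rng(0) if rng is None else rng
    best_inl = None
    for _ in range(iters):
        idx = rng.choice(len(x1), 8, replace=False)
        inl = sampson(eight_point(x1[idx], x2[idx]), x1, x2) < thresh
        if best_inl is None or inl.sum() > best_inl.sum():
            best_inl = inl
    return eight_point(x1[best_inl], x2[best_inl]), best_inl  # refit on consensus set
```

With 20% gross outliers, an all-inlier minimal sample is found within a few hundred iterations and the refit F satisfies the epipolar constraint on the true inliers to machine precision.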
5

Manual And Auto Calibration Of Stereo Camera Systems

Ozuysal, Mustafa 01 September 2004 (has links) (PDF)
To make three-dimensional measurements with a stereo camera system, the intrinsic and extrinsic calibration of the system must be obtained. Furthermore, to allow zooming, the intrinsic parameters should be re-estimated using only scene constraints. In this study both manual and autocalibration algorithms are implemented and tested. The implemented manual calibration system estimates the calibration parameters with the help of a planar calibration object. The method is tested on different internal calibration settings, and results of 3D measurements using the obtained calibration are presented. Two autocalibration methods have been implemented: the first requires a general motion, while the second requires a pure rotation of the cameras. The autocalibration methods require point matches between images, so robust point-matching algorithms have been implemented to achieve a fully automated process. For the case of general motion the fundamental matrix relation is used in the matching algorithm; when there is only rotation between views, the homography relation is used. Results of variations on the autocalibration methods are also presented. The manual calibration has been found to be very reliable. The results of the first autocalibration method are not accurate enough, but it has been shown that calibration from rotating cameras is sufficiently precise when the rotation between images is sufficiently large.
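The core step of such planar (Zhang-style) calibration is estimating the homography between the calibration plane and the image. A minimal DLT sketch, illustrative rather than the thesis implementation:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: estimate H such that dst ~ H @ src in
    homogeneous coordinates, from >= 4 point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear equations in the nine
        # entries of H (cross-product form of dst x (H @ src) = 0).
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    H = np.linalg.svd(A)[2][-1].reshape(3, 3)  # null vector of A
    return H / H[2, 2]
```

In Zhang's method, one such homography per view of the planar object yields two constraints on the image of the absolute conic, from which the intrinsic parameters follow in closed form.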
6

Robust Self-Calibration and Fundamental Matrix Estimation in 3D Computer Vision

Rastgar, Houman January 2013 (has links)
The recent advances in the field of computer vision have brought many of the laboratory algorithms into the realm of industry. However, one problem that still remains open in the field of 3D vision is the problem of noise. The challenging problem of 3D structure recovery from images is highly sensitive to the presence of input data that are contaminated by errors that do not conform to ideal assumptions. Tackling the problem of extreme data, or outliers has led to many robust methods in the field that are able to handle moderate levels of outliers and still provide accurate outputs. However, this problem remains open, especially for higher noise levels and so it has been the goal of this thesis to address the issue of robustness with respect to two central problems in 3D computer vision. The two problems are highly related and they have been presented together within a Structure from Motion (SfM) context. The first, is the problem of robustly estimating the fundamental matrix from images whose correspondences contain high outlier levels. Even though this area has been extensively studied, two algorithms have been proposed that significantly speed up the computation of the fundamental matrix and achieve accurate results in scenarios containing more than 50% outliers. The presented algorithms rely on ideas from the field of robust statistics in order to develop guided sampling techniques that rely on information inferred from residual analysis. The second, problem addressed in this thesis is the robust estimation of camera intrinsic parameters from fundamental matrices, or self-calibration. Self-calibration algorithms are notoriously unreliable for general cases and it is shown that the existing methods are highly sensitive to noise. In spite of this, robustness in self-calibration has received little attention in the literature. Through experimental results, it is shown that it is essential for a real-world self-calibration algorithm to be robust. 
In order to introduce robustness to the existing methods, three robust algorithms have been proposed that utilize existing constraints for self-calibration from the fundamental matrix. However, the resulting algorithms are less affected by noise than existing algorithms based on these constraints. This is an important milestone since self-calibration offers many possibilities by providing estimates of camera parameters without requiring access to the image acquisition device. The proposed algorithms rely on perturbation theory, guided sampling methods and a robust root finding method for systems of higher order polynomials. By adding robustness to self-calibration it is hoped that this idea is one step closer to being a practical method of camera calibration rather than merely a theoretical possibility.
7

3D infrared reconstruction with active perception

Ducarouge, Benoit 26 September 2011 (has links)
This thesis was carried out in the context of the ANR project "Real Time and True Temperature measurement" (R3T), dedicated to thermal metrology from infrared measurements. Estimating a true temperature from the apparent temperature measured by an infrared camera uses a radiometric model whose factors depend on the nature and shape of the observed object. This work addresses the construction of a geometric model of the object from infrared cameras moved by a robot around it. Compared with standard cameras, these cameras have specific characteristics: low resolution and little texture. To ease implementation and minimize the complexity of the final system, an uncalibrated stereovision approach was chosen: an infrared stereo rig mounted on a Cartesian robot acquires several views of the object of interest. The main steps are the uncalibrated rectification of the images acquired by the stereo rig, the calibration of the rectified cameras and of the hand-eye transformation without a calibration pattern, the construction of dense local 3D models, and the registration of these partial models into a global model of the object.
The contributions concern the first two steps, rectification and calibration for stereovision; for the other reconstruction steps, existing algorithms were tested and the best was chosen for this application. For uncalibrated rectification, a constrained-optimization approach is proposed that estimates the rectifying homographies without prior computation of the fundamental matrix, while minimizing the projective distortion between the original and rectified images. The cost function is based on the Sampson distance with a decomposition of the fundamental matrix, and two types of constraints, geometric and algebraic, are compared for minimizing the projective distortion. The proposed approach is compared with the methods of Loop and Zhang, Hartley, and Mallon et al. on standard data sets from the literature; the results are at least equivalent on conventional images and better on low-quality images such as infrared images. For calibration without a pattern, the author proposes to calibrate the cameras and the hand-eye transformation, indispensable whenever the stereo rig is carried by a robot, in a single step. One originality of this method is that it calibrates the previously rectified cameras, thereby minimizing the number of parameters to estimate. Several criteria are proposed and evaluated through extensive results on synthetic and real data. Finally, the stereovision methods tested for this application are briefly described, and experimental results acquired on objects are presented and compared against a known ground truth.
8

Autocalibration of a vibrating or deformed antenna array

Santori, Agnès 09 September 2008 (has links) (PDF)
Autocalibration of the sensor positions of a large airborne antenna array relies on recordings of narrowband sources of opportunity with unknown directions of arrival, emitting simultaneously on the same carrier frequency. This non-observable problem can become locally observable given enough sources of opportunity or a model of wing deformation. Two approaches from the literature are studied. The first, based on the maximum-likelihood principle, is iterative; the second, a subspace/constant-modulus method (SEMC), algebraically identifies the array transfer matrix. Their limits are demonstrated when the deformation exceeds half a wavelength: phase ambiguities then produce erroneous positions. Original solutions are proposed to estimate the sensor positions under large static deformations. Three sources of opportunity, combined with either a polynomial deformation model or simply physical constraints coupled with a phase-ambiguity resolution method, allow the antenna to be autocalibrated. Finally, to autocalibrate a strongly deformed vibrating antenna, an approach based on SEMC is proposed: it resolves the phase ambiguities by integrating enough samples and then tracks the antenna during vibration using a shorter integration time. An extension to sources with different carrier frequencies is finally presented.
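The half-wavelength limit mentioned above can be illustrated numerically: the phase measured at a sensor is only known modulo 2π, so a sensor displacement estimated from phase alone is ambiguous by integer multiples of λ/sin θ. The sketch below uses made-up numbers and is not the thesis's algorithm:

```python
import numpy as np

def steering_phase(pos, theta, wavelength):
    """Phase (radians) at a sensor at coordinate pos (meters), relative to
    the origin, for a far-field narrowband source at direction of arrival theta."""
    return 2 * np.pi * pos * np.sin(theta) / wavelength

lam = 0.3           # wavelength (m)
theta = np.pi / 6   # direction of arrival, sin(theta) = 0.5
d_true = 0.8        # actual sensor displacement, well above lam / 2

phi = steering_phase(d_true, theta, lam)
phi_measured = np.angle(np.exp(1j * phi))  # the array only sees the wrapped phase
d_est = phi_measured * lam / (2 * np.pi * np.sin(theta))
# The raw phase-based estimate is off by an integer number of lam / sin(theta):
k = round((d_true - d_est) * np.sin(theta) / lam)
```

Here `d_est` comes out to 0.2 m instead of 0.8 m, wrong by exactly one ambiguity step λ/sin θ = 0.6 m, which is why extra sources of opportunity or a deformation model are needed to resolve the integer k.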
9

3D Reconstruction in Scanning Electron Microscope: from image acquisition to dense point cloud

Kudryavtsev, Andrey 31 October 2017 (has links)
The goal of this work is to obtain a 3D model of an object from multiple views acquired with a Scanning Electron Microscope (SEM). For this, the well-known computer vision technique of 3D reconstruction is used. However, owing to the specifics of image formation in the SEM, and at the microscale in general, existing techniques are not applicable to SEM images. The main reasons are the parallel projection and the difficulty of calibrating the SEM as a camera. In this work, we therefore developed a new algorithm that achieves 3D reconstruction in the SEM while taking these issues into account. Moreover, since the reconstruction is obtained through camera autocalibration, no calibration object is required. The final output of the presented techniques is a dense point cloud corresponding to the surface of the object, which may contain millions of points.
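Under the parallel (affine) projection that the abstract highlights, the classical reconstruction tool is Tomasi-Kanade factorization, sketched below. This illustrates the affine camera model in general rather than the thesis's specific algorithm; the recovered motion and shape are defined only up to an affine ambiguity:

```python
import numpy as np

def affine_factorization(W):
    """Tomasi-Kanade factorization. W is the 2F x P measurement matrix of P
    tracked points over F frames (two rows per frame). After subtracting the
    per-row centroids, W has rank 3 under noise-free affine projection, so a
    truncated SVD splits it into motion M (2F x 3) and shape S (3 x P)."""
    t = W.mean(axis=1, keepdims=True)  # per-row centroid = projected 3D centroid
    U, s, Vt = np.linalg.svd(W - t, full_matrices=False)
    M = U[:, :3] * s[:3]
    S = Vt[:3]
    return M, S, t                     # W ~= M @ S + t, up to an affine ambiguity
```

A metric upgrade (solving for the 3×3 mixing matrix that makes the camera rows orthonormal) would follow this step in a full pipeline.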
10

Visual perception for lightweight UAVs

Skowronski, Robin 03 November 2011 (has links)
The last decade has seen the emergence of many unmanned aerial vehicles (UAVs) that are increasingly cheap and miniaturized, and a mounted video camera is standard equipment on such UAVs. In this thesis, carried out in collaboration with the company AéroDRONES, the Laboratoire Bordelais de Recherche en Informatique and INRIA, we address the problem of perceiving the environment from a camera embedded on a lightweight UAV. We designed, developed and validated new processing methods that improve the exploitation of data produced by low-cost airborne imaging systems. First, we present an autocalibration method for the camera and its orientation turret, based on the analysis of 2D invariants of central projection, with no specific requirement on the observed scene, and compare it with the classical Dual Image of the Absolute Conic (DIAC) technique; we also present a method to detect and calibrate the turret's effector hierarchy. We then propose a new algorithm to extract the self-rotation of a calibrated camera between two images (a visual gyroscope) and apply it to real-time video stabilization with full perspective correction. Finally, we propose a method for georeferencing images by fusing them with an existing cartographic base map; this method enriches aerial photo databases while handling non-planar terrain.
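The visual-gyroscope idea rests on the fact that, for a purely rotating calibrated camera, the inter-image homography satisfies H ~ K R K⁻¹. A sketch of recovering R, with made-up intrinsics; the thesis's actual estimator is not reproduced here:

```python
import numpy as np

def rotation_from_homography(H, K):
    """Recover the camera rotation R from the homography of a purely
    rotating camera, H ~ K R K^-1, given the intrinsics K. Conjugating by
    K^-1 yields a scaled rotation; projecting onto SO(3) via SVD removes
    the unknown homography scale (and any small numerical drift)."""
    A = np.linalg.inv(K) @ H @ K
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt
    if np.linalg.det(R) < 0:  # guard against a reflection
        R = -R
    return R
```

With the per-frame rotation recovered, stabilization can warp each frame by the inverse homography K Rᵀ K⁻¹ to cancel the camera's rotation.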
