21 |
Vision et reconstruction 3D : application à la robotique mobile / Vision and 3D reconstruction : application to Mobile Robotics. Hmida, Rihab, 16 December 2016
With the development of technological processes, interest in mobile robotics has been growing steadily in recent years, particularly to replace humans in hazardous environments (radioactive areas, military robots), in areas that are inaccessible to them (planetary or underwater exploration), or at different scales (a robot inside a pipeline, or a surgical robot inside the human body). In this context, navigation systems designed specifically for underwater exploration are attracting growing interest from geologists, roboticists and other scientists who want to better understand and characterize submarine environments. For safety reasons, new technologies (radar, sonar, camera systems, ...) have been developed to replace human divers.
The work of this thesis fits into this framework; its objective is the implementation of a stereoscopic vision system for acquiring useful information and the development of an algorithm for recovering the 3D structure of a confined aquatic environment. Our system consists of a pair of catadioptric sensors and a belt of laser pointers, which together identify visual landmarks in the scene, and a platform on which the processing of the acquired images is executed. The processing chain is preceded by an offline calibration phase that produces the geometric model of the complete system. The processing algorithm performs a pixel-wise analysis of the stereoscopic images to extract the 2D laser projections and reconstructs their 3D counterparts from the calibration parameters.
The implementation of the complete system on a software platform requires an execution time higher than the application allows. The work closing this thesis addresses this problem and proposes a solution that simplifies the development and implementation of real-time applications on FPGA-based platforms. Our application was implemented on such a platform, and a performance study is presented with respect to the requirements of the application in terms of precision, speed and efficiency.
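To make the processing step concrete, the following sketch (Python with OpenCV, assuming rectified pinhole-style projection matrices rather than the catadioptric model actually used in the thesis, and with all function names and thresholds hypothetical) shows how saturated laser spots could be extracted pixel-wise from a stereo pair and triangulated into 3D points using calibration data:

```python
import cv2
import numpy as np

def extract_laser_spots(image_bgr, min_brightness=200):
    """Return centroids (u, v) of bright laser spots in one image."""
    # Keep only very bright pixels in the red channel (hypothetical laser colour).
    red = image_bgr[:, :, 2]
    _, mask = cv2.threshold(red, min_brightness, 255, cv2.THRESH_BINARY)
    _n, _labels, _stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(c) for c in centroids[1:]]  # skip the background component

def triangulate_spots(P_left, P_right, spots_left, spots_right):
    """Triangulate matched 2D laser spots into Euclidean 3D points."""
    pts_l = np.float64(spots_left).T   # 2 x N
    pts_r = np.float64(spots_right).T  # 2 x N
    X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)  # 4 x N homogeneous
    return (X_h[:3] / X_h[3]).T        # N x 3
```

In practice the two spot lists would first have to be put into correspondence, for example by ordering them along the laser belt or by an epipolar search, before being passed to the triangulation step.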
|
22 |
Contributions to accurate and efficient cost aggregation for stereo matching. Chen, Dongming, 12 March 2015
3D-related applications such as 3D movies, 3D printing, 3D maps and 3D object recognition are becoming more and more common in our daily life. Many of them require realistic 3D models, and 3D reconstruction is the key technique behind them. In this thesis we focus on a basic problem of 3D reconstruction, stereo matching, which searches for correspondences between a stereo pair or more images of a 3D scene. Although many stereo matching methods have been published in the past decades, it remains a challenging task because of the high requirements on accuracy and efficiency in practical applications: autonomous driving demands real-time stereo matching, while 3D object modelling demands high-quality solutions. This thesis is dedicated to developing efficient and accurate stereo matching methods.
The well-known adaptive support weight method based on the bilateral filter represents the state of the art among local methods, but it can hardly resolve the ambiguity induced by nearby pixels that lie at different disparities but have similar colours. We therefore propose a novel trilateral filter based method that remedies such ambiguities by introducing a boundary strength term. Evaluated on the commonly accepted Middlebury benchmark, the proposed method proved to be the most accurate local stereo matching method at the time of submission (April 2013). The computational complexity of the trilateral filter based method is high and depends on the support window size. To improve its efficiency, we propose a recursive trilateral filter method, inspired by recursive filters: the raw costs are aggregated on a grid graph by four one-dimensional passes, and the computational complexity becomes O(N), independent of the support window size. In practice, the recursive method processes a 375 × 450 image in roughly 260 ms on a PC with a 3.4 GHz Intel Core i7 CPU, hundreds of times faster than the original trilateral filter based method. The boundary strength term is computed from colour edges, which comprise both depth edges and texture edges; only the depth edges are actually useful for the term. We therefore present a depth edge detection method that aims to pick out depth edges, and propose a depth edge trilateral filter based method. Evaluation on the Middlebury benchmark confirms the effectiveness of this method, which is more accurate than the original trilateral filter method and other local stereo matching methods.
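As an illustration of the kind of cost aggregation discussed above, the sketch below computes adaptive support weights over one window from a colour term, a spatial term and an additional boundary-strength term, and then aggregates the raw costs; the exact weighting function and parameters of the thesis's trilateral filter may differ, so treat this purely as an assumed form:

```python
import numpy as np

def support_weights(window_bgr, center_bgr, window_edge,
                    gamma_c=10.0, gamma_s=10.0, gamma_b=5.0):
    """Adaptive support weights for one window: colour term, spatial term and an
    illustrative boundary-strength term (window_edge is an edge-strength map)."""
    h, w, _ = window_bgr.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    d_color = np.linalg.norm(window_bgr.astype(float) - np.asarray(center_bgr, float), axis=2)
    d_space = np.hypot(yy - cy, xx - cx)
    return np.exp(-(d_color / gamma_c + d_space / gamma_s + window_edge / gamma_b))

def aggregate_cost(raw_cost_window, weights_left, weights_right):
    """Weighted average of raw matching costs over the support window."""
    w = weights_left * weights_right
    return np.sum(w * raw_cost_window) / np.sum(w)
```

The recursive variant described in the abstract replaces this windowed sum with four one-dimensional passes over the image, which is what removes the dependence on the window size.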
|
23 |
Lens Distortion Calibration Using Point Correspondences. Stein, Gideon P., 1 December 1996
This paper describes a new method for lens distortion calibration using only point correspondences in multiple views, without the need to know either the 3D location of the points or the camera locations. The standard lens distortion model is a model of the deviations of a real camera from the ideal pinhole or projective camera model. Given multiple views of a set of corresponding points taken by ideal pinhole cameras, there exist epipolar and trilinear constraints among pairs and triplets of these views. In practice, due to noise in the feature detection and due to lens distortion, these constraints do not hold exactly and we get some error. The calibration is a search for the lens distortion parameters that minimize this error. Using simulation and experimental results with real images, we explore the properties of this method. We describe the use of this method with the standard lens distortion model, radial and decentering, but it could also be used with any other parametric distortion models. Finally, we demonstrate that lens distortion calibration improves the accuracy of 3D reconstruction.
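For reference, the radial and decentering (Brown-Conrady) distortion model referred to above, and the calibration-as-minimization idea, can be written as follows; the notation and the choice of an epipolar residual are mine and are not necessarily the exact error term used in the paper:

```latex
\begin{aligned}
x_d &= x_u\,(1 + k_1 r^2 + k_2 r^4) + 2\,p_1 x_u y_u + p_2\,(r^2 + 2x_u^2),\\
y_d &= y_u\,(1 + k_1 r^2 + k_2 r^4) + p_1\,(r^2 + 2y_u^2) + 2\,p_2 x_u y_u,
\qquad r^2 = x_u^2 + y_u^2,\\[4pt]
\hat{\theta} &= \arg\min_{\theta}\;\min_{F}\;
\sum_{j} d\!\left(u(\mathbf{x}'_j;\theta),\; F\,u(\mathbf{x}_j;\theta)\right)^2,
\qquad \theta = (k_1, k_2, p_1, p_2),
\end{aligned}
```

where (x_u, y_u) are ideal undistorted image coordinates, (x_d, y_d) the observed distorted ones, u(.;θ) undistorts a point with parameters θ, and d(.,.) is a point-to-epipolar-line distance; the search varies θ so that the undistorted correspondences best satisfy the epipolar (or trilinear) constraints.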
|
24 |
Temporal Surface Reconstruction. Heel, Joachim, 1 May 1991
This thesis investigates the problem of estimating the three-dimensional structure of a scene from a sequence of images. Structure information is recovered from images continuously using shading, motion or other visual mechanisms. A Kalman filter represents structure in a dense depth map. With each new image, the filter first updates the current depth map by a minimum variance estimate that best fits the new image data and the previous estimate. Then the structure estimate is predicted for the next time step by a transformation that accounts for relative camera motion. Experimental evaluation shows the significant improvement in quality and computation time that can be achieved using this technique.
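A minimal per-pixel version of the update/predict cycle described above, written as scalar operations applied to whole depth and variance maps (a simplified sketch, not the exact filter formulation of the thesis), might look like:

```python
import numpy as np

def kalman_update(depth_pred, var_pred, depth_meas, var_meas):
    """Minimum-variance fusion of the predicted depth map with a new measurement,
    applied independently at every pixel (simplified scalar Kalman update)."""
    gain = var_pred / (var_pred + var_meas)
    depth_new = depth_pred + gain * (depth_meas - depth_pred)
    var_new = (1.0 - gain) * var_pred
    return depth_new, var_new

def kalman_predict(depth, var, warp, process_noise=1e-3):
    """Predict the next depth map by warping the current one according to the
    (assumed known) relative camera motion and inflating the uncertainty."""
    depth_pred = warp(depth)          # hypothetical resampling for the camera motion
    var_pred = warp(var) + process_noise
    return depth_pred, var_pred
```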
|
25 |
Geometric and Algebraic Aspects of 3D Affine and Projective Structures from Perspective 2D Views. Shashua, Amnon, 1 July 1993
We investigate the differences --- conceptually and algorithmically --- between affine and projective frameworks for the tasks of visual recognition and reconstruction from perspective views. It is shown that an affine invariant exists between any view and a fixed view chosen as a reference view. This implies that for tasks for which a reference view can be chosen, such as in alignment schemes for visual recognition, projective invariants are not really necessary. We then use the affine invariant to derive new algebraic connections between perspective views. It is shown that three perspective views of an object are connected by certain algebraic functions of image coordinates alone (no structure or camera geometry needs to be involved).
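In the now-standard trifocal-tensor notation (a later formulation of such algebraic functions, not necessarily the parameterization used in this report), the relation among corresponding points x, x', x'' in three perspective views can be written as

```latex
[\mathbf{x}']_{\times}\left(\sum_{i=1}^{3} x^{i}\,\mathbf{T}_{i}\right)[\mathbf{x}'']_{\times} \;=\; \mathbf{0}_{3\times 3},
```

where the T_i are the 3x3 slices of the trifocal tensor and [.]_x denotes the skew-symmetric cross-product matrix; every entry of this matrix equation is an algebraic function of image coordinates alone, with no explicit 3D structure or camera geometry.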
|
26 |
Optical Flow Based Structure from Motion. Zucchelli, Marco, January 2002
No description available.
|
27 |
A Contour Grouping Algorithm for 3D Reconstruction of Biological Cells. Leung, Tony Kin Shun, January 2009
Advances in computational modelling offer unprecedented potential for obtaining insights into the mechanics of cell-cell interactions. With the aid of such models, cell-level phenomena such as cell sorting and tissue self-organization are now being understood in terms of forces generated by specific sub-cellular structural components. Three-dimensional systems can behave differently from two-dimensional ones and since models cannot be validated without corresponding data, it is crucial to build accurate three-dimensional models of real cell aggregates. The lack of automated methods to determine which cell outlines in successive images of a confocal stack or time-lapse image set belong to the same cell is an important unsolved problem in the reconstruction process. This thesis addresses this problem through a contour grouping algorithm (CGA) designed to lead to unsupervised three-dimensional reconstructions of biological cells.
The CGA associates contours obtained from fluorescently-labeled cell membranes in individual confocal slices using concepts from the fields of machine learning and combinatorics. The feature extraction step results in a set of association metrics. The algorithm then uses a probabilistic grouping step and a greedy-cost optimization step to produce grouped sets of contours. Groupings are representative of imaged cells and are manually evaluated for accuracy.
The CGA presented here is able to produce accuracies greater than 96% when properly tuned. Parameter studies show that the algorithm is robust; that is, acceptable results are obtained under moderately varied probabilistic constraints and reasonable cost weightings. Image properties, such as slicing distance and image quality, affect the results. Sources of error are identified and enhancements based on fuzzy logic and other optimization methods are considered. The successful grouping of cell contours, as realized here, is an important step toward the development of realistic, three-dimensional, cell-based finite element models.
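As a rough sketch of the grouping idea (the association metrics, the probabilistic step and the cost weightings of the actual CGA are richer than shown here; everything below is an assumed simplification), contours could be chained across consecutive slices as follows:

```python
import numpy as np

def association_score(contour_a, contour_b, max_dist=50.0):
    """Illustrative association metric between contours in adjacent slices, using
    only centroid distance and area similarity as placeholder features."""
    ca, cb = contour_a["centroid"], contour_b["centroid"]
    dist = np.hypot(ca[0] - cb[0], ca[1] - cb[1])
    area_ratio = min(contour_a["area"], contour_b["area"]) / max(contour_a["area"], contour_b["area"])
    return max(0.0, 1.0 - dist / max_dist) * area_ratio

def greedy_group(slices, threshold=0.5):
    """Greedily chain contours across consecutive slices into per-cell groups."""
    groups = [[c] for c in slices[0]]
    for next_slice in slices[1:]:
        unused = set(range(len(next_slice)))
        for group in groups:
            scores = [(association_score(group[-1], next_slice[j]), j) for j in unused]
            if scores:
                best, j = max(scores)
                if best >= threshold:
                    group.append(next_slice[j])
                    unused.discard(j)
        groups.extend([next_slice[j]] for j in unused)  # contours starting new cells
    return groups
```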
|
29 |
Reconstruction of 3D Points From Uncalibrated Underwater Video. Cavan, Neil, January 2011
This thesis presents a 3D reconstruction software pipeline that is capable of generating point cloud data from uncalibrated underwater video. This research project was undertaken as a partnership with 2G Robotics, and the pipeline described in this thesis will become the 3D reconstruction engine for a software product that can generate photo-realistic 3D models from underwater video. The pipeline proceeds in three stages: video tracking, projective reconstruction, and autocalibration.
Video tracking serves two functions: tracking recognizable feature points, as well as selecting well-spaced keyframes with a wide enough baseline to be used in the reconstruction. Video tracking is accomplished using Lucas-Kanade optical flow as implemented in the OpenCV toolkit. This simple and widely used method is well-suited to underwater video, which is taken by carefully piloted and slow-moving underwater vehicles.
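A minimal sketch of such a tracker, using the OpenCV functions the abstract mentions but with illustrative parameter values and a hypothetical keyframe criterion, is:

```python
import cv2
import numpy as np

def track_features(prev_gray, next_gray, prev_pts, min_tracked=200):
    """Track feature points between frames with pyramidal Lucas-Kanade optical flow;
    the parameter values here are illustrative, not those used in the thesis."""
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    good = status.ravel() == 1
    # Re-detect corners if too many tracks were lost (a hypothetical policy).
    if good.sum() < min_tracked:
        fresh = cv2.goodFeaturesToTrack(next_gray, maxCorners=500,
                                        qualityLevel=0.01, minDistance=7)
        return fresh, good
    return next_pts[good].reshape(-1, 1, 2), good

def is_keyframe(prev_pts, next_pts, min_median_disparity=20.0):
    """Declare a new keyframe once the median displacement of matched features
    (a proxy for baseline) exceeds a threshold; criterion and value are assumptions."""
    disp = np.linalg.norm(next_pts.reshape(-1, 2) - prev_pts.reshape(-1, 2), axis=1)
    return np.median(disp) > min_median_disparity
```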
Projective reconstruction is the process of simultaneously calculating the motion of the cameras and the 3D location of observed points in the scene. This is accomplished using a geometric three-view technique. Results are presented showing that the projective reconstruction algorithm detailed here compares favourably to state-of-the-art methods.
Autocalibration is the process of transforming a projective reconstruction, which is not suitable for visualization or measurement, into a metric space where it can be used. This is the most challenging part of the 3D reconstruction pipeline, and this thesis presents a novel autocalibration algorithm. Results are shown for two existing cost function-based methods in the literature which failed when applied to underwater video, as well as the proposed hybrid method. The hybrid method combines the best parts of its two parent methods, and produces good results on underwater video.
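For context, a common way to state the metric upgrade that autocalibration computes is the absolute dual quadric formulation from the literature (given here as background; it is not necessarily one of the two cost functions compared in the thesis):

```latex
\omega^{*}_{i} \simeq P_{i}\,Q^{*}_{\infty}\,P_{i}^{\top},
\qquad
Q^{*}_{\infty} = H\,\operatorname{diag}(1,1,1,0)\,H^{\top},
\qquad
P_{i}^{\text{metric}} = P_{i}H,\quad
\mathbf{X}^{\text{metric}} = H^{-1}\mathbf{X},
```

where the P_i are the projective camera matrices, ω*_i = K_i K_i^T encodes the unknown intrinsics of view i, Q*_∞ is the absolute dual quadric, and H is the 4x4 upgrading homography; cost-function-based methods search for the H (equivalently Q*_∞) that best satisfies assumed constraints on the intrinsics K_i.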
Final results are shown for the 3D reconstruction pipeline operating on short underwater video sequences to produce visually accurate 3D point clouds of the scene, suitable for photorealistic rendering. Although further work remains to extend and improve the pipeline for operation on longer sequences, this thesis presents a proof-of-concept method for 3D reconstruction from uncalibrated underwater video.
|
30 |
Development of an iterative 3D reconstruction method for monitoring heavy-ion tumour treatment with positron emission tomography (original title: Entwicklung eines iterativen 3D-Rekonstruktionsverfahrens für die Kontrolle der Tumorbehandlung mit Schwerionen mittels der Positronen-Emissions-Tomographie). Lauckner, Kathrin, 31 March 2010
At the Gesellschaft für Schwerionenforschung in Darmstadt, a therapy unit for heavy-ion cancer treatment has been established in collaboration with the Deutsches Krebsforschungszentrum Heidelberg, the Radiologische Universitätsklinik Heidelberg and the Forschungszentrum Rossendorf. For quality assurance, the dual-head positron camera BASTEI (Beta Activity meaSurements at the Therapy with Energetic Ions) has been integrated into this facility. It measures β+-activity distributions generated via nuclear fragmentation reactions within the target volume. BASTEI has about 4 million coincidence channels. The emission data are acquired in a 3D regime and stored in a list mode data format. Typically, the counting statistics are two to three orders of magnitude lower than those of typical PET scans in nuclear medicine. Two iterative 3D reconstruction algorithms, based on ISRA (Image Space Reconstruction Algorithm) and MLEM (Maximum Likelihood Expectation Maximization) respectively, have been adapted to this imaging geometry. The major advantage of the developed approaches is the use of run-time Monte Carlo simulations to calculate the transition matrix. The influences of detector sensitivity variations, random coincidences, activity from outside the field of view and attenuation are corrected for the individual coincidence channels. Performance studies show that the implementation based on MLEM is the algorithm of merit. Since 1997 it has been applied successfully to patient data. The localization of distal and lateral gradients of the β+-activity distribution is guaranteed in the longitudinal sections; outside the longitudinal sections, the lateral gradients of the β+-activity distribution should be interpreted using a priori knowledge.
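For reference, the standard MLEM update on which the second algorithm is based (written here in generic emission-tomography notation; the actual implementation folds in the channel-wise corrections listed above) is

```latex
\lambda_{j}^{(n+1)} \;=\; \frac{\lambda_{j}^{(n)}}{\sum_{i} a_{ij}}\;
\sum_{i} a_{ij}\,\frac{y_{i}}{\sum_{k} a_{ik}\,\lambda_{k}^{(n)}},
```

where λ_j^(n) is the current estimate of the activity in voxel j, y_i is the number of counts measured in coincidence channel i, and a_ij is the transition-matrix element giving the probability that a decay in voxel j is detected in channel i.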
|