191
New Methods for Triangulation-based Shape Acquisition using Laser Scanners. Forest Collado, Josep, 11 December 2004.
Traditionally, the reproduction of the real world has been shown to us by means of flat images. These images used to be materialised by means of paint on canvas, drawings or the like. Today we still see hand-made pictures, fortunately, although most images are acquired by cameras and are either shown directly to an audience, as in cinema, television or photography, or processed by a computer system in order to obtain a particular result, as in industrial quality assurance or bleeding-edge artificial intelligence research. By applying mid-level processing algorithms, 3D images can be obtained from 2D ones, using well-known techniques called Shape From X, where X is the method for obtaining the third dimension. While the evolution towards the 3D camera began in the 90s, the techniques for obtaining accurate 3D shape need to keep improving. The application of 3D scanners has spread significantly in recent years, especially in fields like entertainment, assisted diagnosis/surgery, robotics, etc. One of the most widely used techniques for obtaining 3D information from a scene is triangulation, and more concretely, triangulation-based laser scanners. Since their formal appearance in scientific publications in 1971 [SS71], there have been contributions for solving inherent problems such as occlusion avoidance, accuracy improvement, acquisition speed and shape description.
Every method for obtaining 3D points of a scene is accompanied by a calibration procedure, and this procedure plays a decisive role in the performance of the acquisition device. The goal of this thesis is to provide a holistic approach to the problem of shape acquisition: giving a wide survey of triangulation laser scanners, testing the performance of different systems, and contributing both improved acquisition accuracy under adverse conditions and a solution to the calibration problem. The calibration approach builds on previous works that used projective geometry to this end.
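The triangulation principle behind these scanners reduces, in the simplest planar case, to intersecting the camera's viewing ray with the known laser beam. The sketch below is a generic single-point illustration under assumed geometry (a pinhole camera at the origin, a laser at baseline b, a beam tilted by angle theta), not the calibration method of the thesis:

```python
import numpy as np

def triangulate_depth(u, f, b, theta):
    """Depth of a laser spot imaged at pixel offset u.

    Assumed geometry: a pinhole camera at the origin looks along +z
    with focal length f (pixels); the laser sits at (b, 0) and its
    beam, tilted by theta (radians) toward the optical axis, contains
    the points x = b - z * tan(theta).  Substituting into the
    projection u = f * x / z and solving for z gives
    z = f * b / (u + f * tan(theta)).
    """
    return f * b / (u + f * np.tan(theta))

# round trip: project a point at a known depth, then recover that depth
f, b, theta, z_true = 800.0, 0.5, 0.3, 2.0
x = b - z_true * np.tan(theta)   # point on the laser beam
u = f * x / z_true               # its image coordinate
z = triangulate_depth(u, f, b, theta)
```

In a real scanner, f, b and theta come out of the calibration procedure the thesis addresses; errors in them propagate directly into z, which is why calibration dominates the accuracy of the device.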
192
The God-like Interaction Framework: tools and techniques for communicating in mixed-space collaboration. Stafford, Aaron, January 2008.
This dissertation presents the god-like interaction framework, consisting of tools and techniques for remote communication of situational and navigational information. The framework aims to facilitate intuitive and effective communication between a group of experts and remote field workers in military, fire-fighting, and search-and-rescue contexts.
193
Improving Conventional Image-based 3D Reconstruction of Man-made Environments Through Line Cloud Integration. Gråd, Martin, January 2018.
Image-based 3D reconstruction refers to the capture and virtual reconstruction of real scenes through the use of ordinary camera sensors. A common approach is the pipeline of Structure from Motion, Multi-view Stereo and Poisson Surface Reconstruction, which fares well for many types of scenes. However, this pipeline often falters on texture-less surfaces and areas, such as those found in man-made environments. Building facades, roads and walls often lack detail and easily trackable feature points, making the approach less than ideal for such scenes. To remedy this weakness, this thesis investigates an expanded approach, incorporating line segment detection and line cloud generation into the existing point cloud-based pipeline. Texture-less objects such as building facades, windows and roofs are well-suited to line segment detection, and line clouds are fitting for encoding 3D positional data in scenes consisting mostly of objects with many straight lines. A number of approaches have been explored in order to determine the usefulness of line clouds in this context, each addressing a different aspect of the reconstruction procedure.
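As a toy illustration of the line-detection first step, a minimal Hough transform (a simplification, not the LSD-style line segment detector typically used to build line clouds) finds dominant straight lines by letting every edge pixel vote in (rho, theta) space; it assumes a binary edge image and integer rho bins:

```python
import numpy as np

def hough_lines(edge_img, n_theta=180, peak_frac=0.5):
    """Minimal Hough transform: each edge pixel votes for all lines
    rho = x*cos(theta) + y*sin(theta) passing through it; bins whose
    votes exceed a fraction of the maximum are returned as lines."""
    ys, xs = np.nonzero(edge_img)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    diag = int(np.ceil(np.hypot(*edge_img.shape)))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int64)
    for x, y in zip(xs, ys):
        rhos = np.round(x * cos_t + y * sin_t).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    peaks = np.argwhere(acc >= peak_frac * acc.max())
    return [(int(r) - diag, float(thetas[t])) for r, t in peaks]

# demo: a horizontal run of edge pixels at row y = 20
edge = np.zeros((64, 64))
edge[20, 5:60] = 1
peaks = hough_lines(edge)
```

A horizontal line at y = 20 corresponds to theta = pi/2 and rho = 20, which is the strongest peak here; real detectors additionally recover segment endpoints, which is what makes line clouds possible.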
194
Analysis of optical stereo Time Lapse and radar satellite images: application to the measurement of glacier displacement. Pham, Ha Thai, 24 February 2015.
Earth observation by image acquisition systems allows the survey of the temporal evolution of natural phenomena such as earthquakes, volcanoes or gravitational movements. Various techniques exist, including satellite imagery, terrestrial photogrammetry and in-situ measurements. Image time series from automatic cameras (Time Lapse) are a growing source of information, since they offer an interesting compromise in terms of spatial coverage and observation frequency for measuring surface motion in specific areas. This PhD thesis is devoted to the analysis of image time series from terrestrial photography and satellite radar imagery to measure the displacement of Alpine glaciers. We are particularly interested in Time Lapse stereo processing problems for monitoring geophysical objects in conditions unfavourable to photogrammetry. We propose a single-camera processing chain that includes the steps of automatic photograph selection, coregistration and calculation of two-dimensional (2D) displacement fields. The information provided by the stereo pairs is then processed using the MICMAC software to reconstruct the relief and obtain the three-dimensional (3D) displacement. Several pairs of synthetic aperture radar (SAR) images were also processed with the EFIDIR tools to obtain 2D displacement fields in the radar geometry on ascending or descending orbits. Combining measurements obtained almost simultaneously on these two types of orbits allows the reconstruction of the 3D displacement.
These methods have been applied to time series of stereo pairs acquired by two automatic cameras installed on the right bank of the Argentière glacier, and to TerraSAR-X satellite images covering the Mont-Blanc massif. The results are presented on data acquired during a multi-instrument experiment conducted in collaboration with the French National Geographic Institute (IGN) during the fall of 2013, including the deployment of a network of Géocubes which provided GPS measurements. These are used to evaluate the accuracy of the results obtained by proximal and spatial remote sensing on this type of glacier.
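The displacement-field step in such processing chains comes down to registering image patches between acquisition dates. A generic building block (an illustration, not the MICMAC/EFIDIR processing used in the thesis) is phase correlation, which recovers a translation from the normalized cross-power spectrum:

```python
import numpy as np

def phase_correlation(a, b):
    """Integer-pixel translation taking image a to image b, from the
    normalized cross-power spectrum.  Dense 2D displacement-field
    pipelines typically apply this block-wise and refine to sub-pixel."""
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    cross /= np.abs(cross) + 1e-12          # keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # indices wrap around: map the upper half back to negative shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

# demo: apply a known circular displacement and recover it
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, (7, -4), axis=(0, 1))
shift = phase_correlation(a, b)
```

On glacier imagery the textured ice surface plays the role of the random pattern here; decorrelation between dates is what limits how far apart the image pairs can be.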
195
Durability of solid oxide cells: an experimental and modelling investigation based on synchrotron X-ray nano-tomography characterization. Hubert, Maxime, 24 May 2017.
This work aims at a better understanding of the degradation of high-temperature solid oxide cells. An approach based on electrochemical tests, advanced post-test characterizations and multi-scale models has been used to investigate the links between the performances, the electrode microstructures and their degradation.
To that end, long-term durability tests have been performed over a thousand hours in different operating conditions. Electrode microstructures have been reconstructed by X-ray nano-holotomography for the pristine and the aged cells. Special attention has been paid to improving both the process reliability of the tomographic experiments and the spatial resolution of the 3D reconstructed images. Thanks to these 3D volumes, the Ni-YSZ microstructural properties of the H2 electrode have been quantified for the fresh and the aged samples. A physically-based model for nickel particle agglomeration has then been adjusted on the microstructural parameters obtained by the 3D analysis and implemented in an in-house multi-scale modelling framework. Beforehand, it was necessary to enrich the available numerical tool with a specific module dedicated to the oxygen electrode, made of mixed ionic-electronic conducting materials. Once validated on polarisation curves, the completed model has been used to quantify the contribution of nickel agglomeration to the experimental degradation rates recorded in fuel cell and electrolysis modes.
196
Living in a dynamic world: semantic segmentation of large scale 3D environments. Miksik, Ondrej, January 2017.
As we navigate the world, for example when driving a car from our home to the workplace, we continuously perceive the 3D structure of our surroundings and intuitively recognise the objects we see. Such capabilities help us in our everyday lives and enable free and accurate movement even in completely unfamiliar places. We largely take these abilities for granted, but for robots, the task of understanding large outdoor scenes remains extremely challenging. In this thesis, I develop novel algorithms for (near) real-time dense 3D reconstruction and semantic segmentation of large-scale outdoor scenes from passive cameras. Motivated by "smart glasses" for partially sighted users, I show how such modeling can be integrated into an interactive augmented reality system which puts the user in the loop and allows her to physically interact with the world to learn personalized semantically segmented dense 3D models. In the next part, I show how sparse but very accurate 3D measurements can be incorporated directly into the dense depth estimation process, and propose a probabilistic model for incremental dense scene reconstruction. To relax the assumption of a stereo camera, I address dense 3D reconstruction in its monocular form and show how the local model can be improved by joint optimization over depth and pose. The world around us is not stationary; however, reconstructing dynamically moving and potentially non-rigidly deforming texture-less objects typically requires "contour correspondences" for shape-from-silhouettes. Hence, I propose a video segmentation model which encodes a single object instance as a closed curve, maintains correspondences across time, and provides very accurate segmentation close to object boundaries.
Finally, instead of evaluating the performance in an isolated setup (IoU scores) which does not measure the impact on decision-making, I show how semantic 3D reconstruction can be incorporated into standard Deep Q-learning to improve decision-making of agents navigating complex 3D environments.
197
Widening the basin of convergence for the bundle adjustment type of problems in computer vision. Hong, Je Hyeong, January 2018.
Bundle adjustment is the process of simultaneously optimizing camera poses and 3D structure given image point tracks. In structure-from-motion, it is typically used as the final refinement step due to the nonlinearity of the problem, meaning that it requires sufficiently good initialization. Contrary to this belief, recent literature has shown that useful solutions can be obtained even from arbitrary initialization for fixed-rank matrix factorization problems, including bundle adjustment with affine cameras. This property of a wide basin of convergence to high-quality optima is desirable for any nonlinear optimization algorithm, since obtaining good initial values can often be non-trivial. The aim of this thesis is to find the key factor behind the success of these recent matrix factorization algorithms and to explore the applicability of the findings to bundle adjustment, which is closely related to matrix factorization. The thesis begins by unifying a handful of matrix factorization algorithms and comparing their similarities and differences. The theoretical analysis shows that the set of successful algorithms actually stems from the same root, an optimization method called variable projection (VarPro). The investigation then extends to address why VarPro outperforms joint optimization, the technique widely used in computer vision. This algorithmic comparison yields a larger unification, leading to the conclusion that VarPro benefits from an unequal trust region assumption between the two matrix factors. The thesis then explores ways to incorporate VarPro into bundle adjustment problems using projective and perspective cameras. Unfortunately, the added nonlinearity causes a substantial decrease in the convergence basin of VarPro, and therefore a bootstrapping strategy is proposed to bypass this issue.
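VarPro's core idea, eliminating the linear unknowns in closed form and searching only over the nonlinear ones, is easiest to see on the classic separable least-squares toy problem. This is an illustration of the principle only, not the matrix-factorization or bundle adjustment solvers analysed in the thesis; the exponential model and the grid search over alpha are arbitrary choices for the sketch:

```python
import numpy as np

def varpro_fit(t, y, alphas):
    """Variable projection for y ~ c1*exp(-alpha*t) + c2: for each
    candidate alpha the linear coefficients c are eliminated via
    least squares, so the search runs over alpha alone."""
    best = None
    for a in alphas:
        Phi = np.column_stack([np.exp(-a * t), np.ones_like(t)])
        c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        r = y - Phi @ c
        cost = r @ r                      # reduced objective f(alpha)
        if best is None or cost < best[0]:
            best = (cost, a, c)
    return best[1], best[2]

# synthetic data generated with alpha = 1.5, c = (2.0, 0.5)
t = np.linspace(0.0, 4.0, 50)
y = 2.0 * np.exp(-1.5 * t) + 0.5
alpha, c = varpro_fit(t, y, np.linspace(0.1, 3.0, 291))
```

The point of the construction is that the reduced objective depends only on the nonlinear variables, which is what gives VarPro-style methods their wide basin of convergence on factorization problems.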
Experimental results show that it is possible to yield feasible metric reconstructions and pose estimations from arbitrary initialization given relatively clean point tracks, taking one step towards initialization-free structure-from-motion.
198
Multidimensional Multicolor Image Reconstruction Techniques for Fluorescence Microscopy. Dilipkumar, Shilpa, January 2015.
Fluorescence microscopy is an indispensable tool in the areas of cell biology, histology and material science, as it enables non-invasive observation of specimens in their natural environment. The main advantage of fluorescence microscopy is that it is non-invasive and capable of imaging with very high contrast and visibility. It is dynamic, sensitive and allows high selectivity. The specificity and sensitivity of antibody-conjugated probes and genetically-engineered fluorescent protein constructs allow the user to label multiple targets and the precise location of intracellular components. However, its spatial resolution is limited to one-quarter of the excitation wavelength (Abbe's diffraction limit). The advent of new and sophisticated optics and the availability of fluorophores have made fluorescence imaging a flourishing field. Several advanced techniques like TIRF, 4PI, STED, SIM, SPIM, PALM, fPALM, GSDIM and STORM have enabled high-resolution imaging by breaking the diffraction barrier and are a boon to medical and biological research. The invention of confocal and multi-photon microscopes has enabled observation of specimens embedded at depth. All these advances have made fluorescence microscopy a much sought-after technique.
The first chapter provides an overview of the fundamental concepts in fluorescence imaging. A brief history of the emergence of the field is provided, along with the evolution of different super-resolution microscopes. The concept of fluorophores, their broad classification and their characteristics are discussed, and different fluorescence imaging techniques, including some trending ones, are introduced. This chapter provides a thorough foundation for the research work presented in the thesis.
The second chapter deals with different microscopy techniques that have changed the face of biophotonics and nanoscale imaging. The resolution of an optical imaging system is dictated by an inherent property of the system, known as the impulse response or, more popularly, the "point spread function". A basic fluorescence imaging system is presented in this chapter to introduce the concepts of point spread function and resolution. The introduction of the confocal and multi-photon microscopes brought about improved optical sectioning. The 4PI microscopy technique was invented to improve the axial resolution of the optical imaging system; using this modality, an axial resolution of up to ≈100 nm was made possible. The basic concepts of these techniques are provided in this chapter, which concludes with a discussion of the optical engineering techniques that aid lateral and axial resolution improvements, taken up in detail in the next chapter.
The introduction of spatial masks at the back aperture of the objective lens results in the generation of a Bessel-like beam, which enhances our ability to see deeper inside a specimen with reduced aberrations and improved lateral resolution. Bessel beams have non-diffracting and self-reconstructing properties, which reduce scattering when observing cells embedded deep in a thick tissue. By coupling this with the 4PI super-resolution microscopy technique, multiple excitation spots can be generated along the optical axis of the two opposing high-NA objective lenses. This technique is known as the multiple excitation spot optical (MESO) microscopy technique. It provides a lateral resolution improvement up to 150 nm. A detailed description of the technique and a thorough analysis of its polarization properties are given in chapter 3.
Chapters 4 and 5 bring the focus of the thesis to the main topic of research: multi-dimensional image reconstruction for fluorescence microscopy employing statistical techniques. We begin with an introduction to filtering techniques in chapter 4 and concentrate on an edge-preserving denoising filter, the bilateral filter, for fluorescence microscopy images. The bilateral filter is a non-linear combination of two Gaussian filters, one based on the proximity of two pixels and the other on the similarity of their intensities. These two sub-filters give the filter its edge-preserving capability. The technique is very popular in image processing, and we demonstrate its application to fluorescence microscopy images. The chapter presents a thorough description of the technique along with comparisons with Poisson noise modeling. Chapters 4 and 5 also provide a detailed introduction to statistical iterative reconstruction algorithms like expectation maximization-maximum likelihood (EM-ML) and maximum a-posteriori (MAP) techniques. The main objective of an image reconstruction algorithm is to recover an object from its noisy, degraded images; deconvolution methods are generally used to denoise and recover the true object. The choice of an appropriate prior function is the crux of the MAP algorithm, and the remainder of chapter 5 introduces different potential functions. We show some results of the MAP algorithm in comparison with the ML algorithm.
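The bilateral filter described above can be sketched in a few lines of NumPy. This is a generic illustration of the two coupled Gaussian weights (spatial and range), not the thesis implementation; the window radius and the two sigmas are arbitrary choices:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving bilateral filter: each output pixel is a
    weighted mean of its neighbours, with weights that fall off both
    with spatial distance (sigma_s) and with intensity difference
    (sigma_r), so averaging does not cross strong edges."""
    H, W = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img, dtype=float)
    norm = np.zeros_like(img, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy: radius + dy + H,
                          radius + dx: radius + dx + W]
            w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                       - (shifted - img) ** 2 / (2 * sigma_r ** 2))
            out += w * shifted
            norm += w
    return out / norm

# demo: a noisy step edge is smoothed without blurring the edge
rng = np.random.default_rng(1)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
filt = bilateral_filter(noisy)
```

Because the intensity jump across the step (1.0) is much larger than sigma_r, the range weight suppresses averaging across the edge, while the flat regions are denoised almost as a plain Gaussian filter would.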
In chapter 6, we continue the discussion on MAP reconstruction, where two new potential functions are introduced and demonstrated. The first is based on applying a Taylor series expansion to the image: the image field is considered analytic, and hence the Taylor series produces an accurate estimation of the field being reconstructed. The second half of the chapter introduces an interpolation function to approximate the value of a pixel in its neighborhood. Cubic B-splines are widely used as basis functions for interpolation and are a popular technique in computer vision and medical imaging. These novel algorithms are tested on different microscopy data, such as confocal and 4PI, and the results are shown in the final part of the chapter.
Tagging cell organelles with fluorescent probes enables their visualization and analysis non-invasively. In recent times, it is common to tag more than one organelle of interest and simultaneously observe their structures and functions. Multicolor fluorescence imaging has become a key technique to study specific processes like pH sensing and cell metabolism with nanoscale precision. However, this process is hindered by various problems like optical artifacts, noise, autofluorescence, photobleaching and leakage of fluorescence from one channel to the other. Chapter 7 deals with an image reconstruction technique to obtain noise-free and distortion-less data from multiple channels when imaging a multicolor sample. This technique is easily adaptable to existing imaging systems and has potential applications in biological imaging and biophysics, where multiple probes are used to tag the features of interest.
The fact that the lateral resolution of an optical system is better than its axial resolution is well known. Conventional microscopes focus on cells that are very close to the cover-slip or a few microns into the specimen. However, cells that are embedded deep in a thick sample (e.g. tissues) are difficult to visualize using a conventional microscope. A number of factors, like scattering, optical aberrations, mismatch of refractive index between the objective lens and the mounting medium, and noise, cause distortion of the images of samples at large depths. The system PSF gets distorted due to diffraction, and its shape changes rapidly at large depths. The aim of chapter 8 is to introduce a technique to reduce the distortion of images acquired at depth by employing image reconstruction techniques. The key to this methodology is the modeling of the PSF at large depths. A maximum likelihood technique is then employed to reduce the streaking effects of the PSF and remove noise from raw images. This technique enables the visualization of cells embedded at a depth of 150 µm.
Several biological processes within the cell occur at a rate faster than the rate of acquisition, and hence vital information is missed during imaging. The recorded images of these dynamic events are corrupted by motion blur, noise and other optical aberrations. Chapter 9 deals with two techniques that address the temporal resolution improvement of the fluorescence imaging system. The first technique focuses on accelerating the data acquisition process. This includes employing the concept of time-multiplexing to acquire sequential images from a dynamic sample using two cameras, and generating multiple sheets of light using a diffraction grating, resulting in multi-plane illumination. The second technique involves the use of parallel processing units to enable real-time image reconstruction of the acquired data. A multi-node GPU and CUDA architecture efficiently reduces the computation time of the reconstruction algorithms. Faster implementation of iterative image reconstruction techniques can aid in low-light imaging and dynamic monitoring of rapidly moving samples in real time. Employing rapid acquisition and rapid image reconstruction aids in real-time visualization of cells and has immense potential in the fields of microbiology and bio-mechanics. Finally, we conclude the thesis with a brief section on the contributions of the thesis and the future scope of the work presented.
199
Image-based deformable 3D reconstruction using differential geometry and Cartan's connections. Parashar, Shaifali, 23 November 2017.
Reconstructing the 3D shape of objects from multiple images is an important goal in computer vision and has been extensively studied for both rigid and non-rigid (or deformable) objects. Structure-from-Motion (SfM) is an algorithm that performs the 3D reconstruction of rigid objects using the inter-image visual motion from multiple images obtained from a moving camera. SfM is a very accurate and stable solution. Deformable 3D reconstruction, however, has been widely studied for monocular images (obtained from a single camera) and still remains an open research problem.
Current methods exploit visual cues such as the inter-image visual motion and shading in order to formalise a reconstruction algorithm. This thesis focuses on the use of the inter-image visual motion for solving this problem. Two types of scenarios exist in the literature: 1) Non-Rigid Structure-from-Motion (NRSfM) and 2) Shape-from-Template (SfT). The goal of NRSfM is to reconstruct multiple shapes of a deformable object as viewed in multiple images, while SfT (also referred to as template-based reconstruction) uses a single image of a deformed object and its 3D template (a textured 3D shape of the object in one configuration) to recover the deformed shape of the object. We propose an NRSfM method to reconstruct deformable surfaces undergoing isometric deformations (the object does not stretch or shrink under an isometric deformation) using Riemannian geometry. This allows NRSfM to be expressed in terms of Partial Differential Equations (PDEs) and solved algebraically. We show that the problem has linear complexity and that the reconstruction algorithm has a very low computational cost compared to existing NRSfM methods. This work motivated us to use differential geometry and Cartan's theory of connections to model NRSfM, which led to the possibility of extending the solution to deformations other than isometry. In fact, this led to a unified theoretical framework for modelling and solving both NRSfM and SfT for various types of deformations. In addition, it also makes it possible to have a solution to SfT which does not require an explicit model of deformation. An important point is that most NRSfM and SfT methods reconstruct the thin-shell surface of the object; reconstruction of the entire volume (the thin-shell surface and the interior) has not been explored. We propose the first SfT method that reconstructs the entire volume of a deformable object.
200
Visual monocular SLAM for minimally invasive surgery and its application to augmented reality. Ali, Nader Mahmoud Elshahat Elsayed, 19 June 2018.
Recovering dense 3D information from intra-operative endoscopic images, together with the relative endoscope camera pose, are fundamental building blocks for accurate guidance and navigation in image-guided surgery. They have several important applications, e.g. augmented-reality overlay of pre-operative models. This thesis provides a systematic approach for estimating these two pieces of information based on pure-vision Simultaneous Localization And Mapping (SLAM). We decouple the dense reconstruction from the camera trajectory estimation, resulting in a system that combines the accuracy and robustness of feature-based SLAM with the more complete reconstruction of direct SLAM methods. The proposed solutions have been validated on real porcine sequences from different datasets and proved to be fast, requiring no external tracking hardware and no significant intervention from medical staff. The sole input is the video frames of a standard monocular endoscope.