  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Optical Flow Based Structure from Motion

Zucchelli, Marco January 2002 (has links)
No description available.
32

THREE DIMENSIONAL RECONSTRUCTION OF OBJECTS BASED ON DIGITAL FRINGE PROJECTION

Talebi, Reza 09 October 2013 (has links)
Three-dimensional reconstruction of small objects has been one of the most challenging problems of the last decade. Computer graphics researchers and photography professionals have been working to improve 3D reconstruction algorithms to meet the high demands of various real-life applications. In this thesis, we implemented a 3D scanner system based on the fringe projection method. Two different methods were implemented and used as the unwrapping solution in the fringe projection method. A parameterization tool was created to generate different fringe patterns for the distinct needs of fringe projection. In our first practical implementation (based on phase shifting and multi-wavelength techniques), the number of pictures used in the phase-shifting method was reduced, and the effect of reducing the fringe patterns on the precision of the 3D model was investigated. The optical arrangement and calibration of the system were studied, and numerous suggestions are proposed to improve its precision. In addition, an evaluation method was implemented based on calibration techniques, and the error in both the surface and the height of the 3D model, compared with the real object, was calculated.
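As an illustration of the phase-shifting technique the abstract mentions, here is a minimal sketch of the standard four-step phase retrieval for one pixel. The fringe amplitudes, offsets and the recovered phase value are illustrative assumptions, not values from the thesis:

```python
import math

def recover_phase(i1, i2, i3, i4):
    """Four-step phase shifting: intensities captured under fringe
    patterns shifted by 0, 90, 180 and 270 degrees give the wrapped
    phase via a quadrant-correct arctangent."""
    return math.atan2(i4 - i2, i1 - i3)

# Simulate one pixel under four shifted fringes:
# I_k = A + B*cos(phi + k*pi/2), with illustrative A=0.5, B=0.4.
true_phi = 1.2
A, B = 0.5, 0.4
I = [A + B * math.cos(true_phi + k * math.pi / 2) for k in range(4)]

phi = recover_phase(*I)
print(round(phi, 6))  # recovers 1.2
```

The recovered phase is wrapped to (−π, π]; the unwrapping step the thesis investigates (e.g. multi-wavelength methods) is what resolves the resulting 2π ambiguities.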
33

Transmission Electron Tomography: Imaging Nanostructures in 3D

Wang, Xiongyao Unknown Date
No description available.
34

Gray Code Composite Pattern Structured Light Illumination

Gupta, Pratibha 01 January 2007 (has links)
Structured light is the most common 3D data acquisition technique used in industry. Traditional structured light methods project multiple patterns, such as phase measuring profilometry, gray code and binary patterns, to obtain reliable 3D reconstructions of an object. These multiple patterns achieve non-ambiguous depth and are insensitive to ambient light; however, their application is limited to motion much slower than their projection time. The multiple patterns can be combined into a single composite pattern, based on modulation and demodulation techniques, and used to obtain depth information. In this way the patterns are applied simultaneously, which supports rapid object motion. In this thesis we combine multiple gray-coded patterns into a single Gray code Composite Pattern. The composite pattern is projected, and the deformation produced by the target object is captured by a camera. By demodulating the distorted patterns, the 3D world coordinates are reconstructed.
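A short sketch of the binary-reflected Gray code underlying gray-coded stripe patterns (the modulation scheme that combines the patterns into a composite is beyond this snippet):

```python
def to_gray(n: int) -> int:
    """Binary-reflected Gray code: adjacent values differ in exactly
    one bit, so a decoding error at a stripe boundary is off by at
    most one stripe index."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Invert the Gray code by cascading XORs from the top bit down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

codes = [to_gray(i) for i in range(8)]
print(codes)  # [0, 1, 3, 2, 6, 7, 5, 4]
```

In a stripe-pattern system, bit k of each code determines whether a pixel is lit in the k-th projected pattern; decoding the observed bit sequence recovers the stripe index.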
35

Reconstruction of 3D Points From Uncalibrated Underwater Video

Cavan, Neil January 2011 (has links)
This thesis presents a 3D reconstruction software pipeline that is capable of generating point cloud data from uncalibrated underwater video. This research project was undertaken as a partnership with 2G Robotics, and the pipeline described in this thesis will become the 3D reconstruction engine for a software product that can generate photo-realistic 3D models from underwater video. The pipeline proceeds in three stages: video tracking, projective reconstruction, and autocalibration. Video tracking serves two functions: tracking recognizable feature points, as well as selecting well-spaced keyframes with a wide enough baseline to be used in the reconstruction. Video tracking is accomplished using Lucas-Kanade optical flow as implemented in the OpenCV toolkit. This simple and widely used method is well-suited to underwater video, which is taken by carefully piloted and slow-moving underwater vehicles. Projective reconstruction is the process of simultaneously calculating the motion of the cameras and the 3D location of observed points in the scene. This is accomplished using a geometric three-view technique. Results are presented showing that the projective reconstruction algorithm detailed here compares favourably to state-of-the-art methods. Autocalibration is the process of transforming a projective reconstruction, which is not suitable for visualization or measurement, into a metric space where it can be used. This is the most challenging part of the 3D reconstruction pipeline, and this thesis presents a novel autocalibration algorithm. Results are shown for two existing cost function-based methods in the literature which failed when applied to underwater video, as well as the proposed hybrid method. The hybrid method combines the best parts of its two parent methods, and produces good results on underwater video. 
Final results are shown for the 3D reconstruction pipeline operating on short underwater video sequences to produce visually accurate 3D point clouds of the scene, suitable for photorealistic rendering. Although further work remains to extend and improve the pipeline for operation on longer sequences, this thesis presents a proof-of-concept method for 3D reconstruction from uncalibrated underwater video.
36

Fisheye Camera Calibration and Applications

January 2014 (has links)
abstract: Fisheye cameras are special cameras with a much larger field of view than conventional cameras. The large field of view comes at the price of non-linear distortions introduced near the boundaries of the captured images. Despite this drawback, they are used increasingly in computer vision, robotics, reconnaissance, astrophotography, surveillance and automotive applications. The images captured by such cameras can be corrected for distortion if the cameras are calibrated and the distortion function is determined. Calibration also allows fisheye cameras to be used in tasks involving metric scene measurement, metric scene reconstruction and other simultaneous localization and mapping (SLAM) algorithms. This thesis presents a calibration toolbox (FisheyeCDC Toolbox) that implements a collection of some of the most widely used techniques for fisheye camera calibration in one package. This enables an inexperienced user to calibrate his or her own camera without the need for a theoretical understanding of computer vision and camera calibration. The thesis also explores applications of calibration, such as distortion correction and 3D reconstruction. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2014
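To illustrate why fisheye images need a dedicated calibration model, here is a sketch contrasting the pinhole projection with the common equidistant fisheye model. The focal length and angles are illustrative; the thesis's toolbox covers several such models, not necessarily this exact one:

```python
import math

def pinhole_radius(theta, f):
    """Image radius of a ray at angle theta in an ideal pinhole camera:
    r = f*tan(theta), which diverges as theta approaches 90 degrees."""
    return f * math.tan(theta)

def equidistant_radius(theta, f):
    """Equidistant fisheye model r = f*theta: the radius grows linearly
    with ray angle, so even a ~180-degree field of view stays finite."""
    return f * theta

f = 100.0  # focal length in pixels (illustrative)
for deg in (10, 45, 80):
    th = math.radians(deg)
    print(deg, round(pinhole_radius(th, f), 1), round(equidistant_radius(th, f), 1))
```

Calibration estimates f and the distortion function, after which pixels can be remapped between the two projections to produce an undistorted image.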
37

Cortical thickness estimation of the proximal femur from multi-view, dual-energy X-ray absorptiometry

Tsaousis, Nikolaos January 2015 (has links)
Hip fracture is the leading cause of acute orthopaedic hospital admission amongst the elderly, with around a third of patients not surviving one-year post-fracture. Current risk assessment tools ignore cortical bone thinning, a focal structural defect characterizing hip fragility. Cortical thickness can be measured using computed tomography, but this is expensive and involves a significant radiation dose. Dual-energy X-ray absorptiometry (DXA) is the preferred imaging modality for assessing fracture risk, and is used routinely in clinical practice. This thesis proposes two novel methods which measure the cortical thickness of the proximal femur from multi-view DXA scans. First, a data-driven algorithm is designed, implemented and evaluated. It relies on a femoral B-spline template which can be deformed to fit an individual's scans. In a series of experiments on the trochanteric regions of 120 proximal femurs, the algorithm's performance limits were established using twenty views in the range 0° to 171°: estimation errors were 0.00 ± 0.50 mm. In a clinically viable protocol using four views in the range −20° to 40°, measurement errors were −0.05 ± 0.54 mm. The second algorithm accomplishes the same task by deforming statistical shape and thickness models, both trained using Principal Component Analysis (PCA). Three training cohorts are used to investigate (a) the estimation efficacy as a function of the diversity in the training set and (b) the possibility of improving performance by building tailored models for different populations. In a series of cross-validation experiments involving 120 femurs, minimum estimation errors were 0.00 ± 0.59 mm and −0.01 ± 0.61 mm for the twenty- and four-view experiments respectively, when fitting the tailored models.
Statistical significance tests reveal that the template algorithm is more precise than the statistical one, and that both are superior to a blind estimator which naively assumes the population mean, but only in regions of thicker cortex. It is concluded that cortical thickness measured from DXA is unlikely to assist fracture prediction in the femoral neck and trochanters, but might have applicability in the sub-trochanteric region.
38

Motion Segmentation for Autonomous Robots Using 3D Point Cloud Data

Kulkarni, Amey S. 13 May 2020 (has links)
Achieving robot autonomy is an extremely challenging task, and it starts with developing algorithms that help the robot understand how humans perceive the environment around them. Once the robot understands how to make sense of its environment, it is easy to make efficient decisions about safe movement. It is hard for robots to perform tasks that come naturally to humans, such as understanding signboards, classifying traffic lights and planning paths around dynamic obstacles. In this work, we take up one such challenge: motion segmentation using Light Detection and Ranging (LiDAR) point clouds. Motion segmentation is the task of classifying a point as either moving or static. As the ego-vehicle moves along the road, it needs to detect moving cars with very high certainty, as they are the areas of interest that provide cues for the ego-vehicle to plan its motion. Motion segmentation algorithms segregate moving cars from static cars to give more importance to dynamic obstacles. In contrast to the usual LiDAR scan representations, such as range images and regular grids, this work uses a modern representation of LiDAR scans based on permutohedral lattices. This representation makes it easy to embed unstructured LiDAR points in an efficient lattice structure. We propose a machine learning approach to motion segmentation. The network architecture takes two sequential point clouds and performs convolutions on them to estimate whether 3D points from the first point cloud are moving or static. Using two temporal point clouds helps the network learn which features constitute motion. We have trained and tested our learning algorithm on the FlyingThings3D dataset and a modified KITTI dataset with simulated motion.
39

Tvorba 3D modelů / 3D reconstruction

Musálek, Martin January 2014 (has links)
This thesis addresses 3D reconstruction of an object by the pattern-projection method. A projector illuminates the measured object with a defined pattern, and two cameras measure 2D points from it. The pedestal holding the object rotates, so data are acquired from different angles during the measurement. Points are identified in the captured images, transformed to 3D using stereovision, merged into a 3D model and displayed.
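The stereovision step the abstract mentions reduces, for a rectified camera pair, to triangulation from disparity. A minimal sketch with illustrative numbers (the thesis's actual camera parameters are not given here):

```python
def stereo_depth(f_px: float, baseline: float, disparity_px: float) -> float:
    """Depth of a point from a rectified stereo pair: Z = f * B / d,
    where f is the focal length in pixels, B the camera baseline,
    and d the disparity between the two image positions."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline / disparity_px

# A point seen 32 px apart by two cameras 0.1 m apart, f = 800 px:
print(stereo_depth(800.0, 0.1, 32.0))  # 2.5 (metres)
```

Rotating the pedestal then yields such depth maps from many angles, which are registered into one model.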
40

Entwicklung eines iterativen 3D-Rekonstruktionsverfahrens für die Kontrolle der Tumorbehandlung mit Schwerionen mittels der Positronen-Emissions-Tomographie / Development of an iterative 3D reconstruction method for monitoring heavy-ion tumour treatment with positron emission tomography

Lauckner, Kathrin January 1999 (has links)
At the Gesellschaft für Schwerionenforschung in Darmstadt, a therapy unit for heavy-ion cancer treatment has been established in collaboration with the Deutsches Krebsforschungszentrum Heidelberg, the Radiologische Universitätsklinik Heidelberg and the Forschungszentrum Rossendorf. For quality assurance, the dual-head positron camera BASTEI (Beta Activity meaSurements at the Therapy with Energetic Ions) has been integrated into this facility. It measures β+-activity distributions generated via nuclear fragmentation reactions within the target volume. BASTEI has about 4 million coincidence channels. The emission data are acquired in a 3D regime and stored in a list-mode data format. Counting statistics are typically two to three orders of magnitude lower than those of typical PET scans in nuclear medicine. Two iterative 3D reconstruction algorithms, based on ISRA (Image Space Reconstruction Algorithm) and MLEM (Maximum Likelihood Expectation Maximization) respectively, have been adapted to this imaging geometry. The major advantage of the developed approaches is the use of run-time Monte Carlo simulations to calculate the transition matrix. The influences of detector sensitivity variations, randoms, activity from outside the field of view and attenuation are corrected for in the individual coincidence channels. Performance studies show that the MLEM-based implementation is the algorithm of merit. Since 1997 it has been applied successfully to patient data. The localization of distal and lateral gradients of the β+-activity distribution is guaranteed in the longitudinal sections. Outside the longitudinal sections, the lateral gradients of the β+-activity distribution should be interpreted using a priori knowledge.
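The MLEM algorithm named in the abstract has a compact multiplicative update. A toy sketch on a two-voxel system with noiseless data (the system matrix, corrections and Monte Carlo transition-matrix calculation of the thesis are far beyond this illustration):

```python
def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM for emission tomography. A is the system
    matrix (rows: coincidence channels, cols: voxels), y the measured
    counts. The multiplicative update keeps activity non-negative:
        x <- x / s * A^T (y / (A x)),  with sensitivities s = A^T 1."""
    n_vox = len(A[0])
    x = [1.0] * n_vox                                  # uniform positive start
    sens = [sum(row[j] for row in A) for j in range(n_vox)]
    for _ in range(n_iter):
        proj = [sum(a * xi for a, xi in zip(row, x)) for row in A]   # forward-project
        ratio = [yi / pi if pi > 0 else 0.0 for yi, pi in zip(y, proj)]
        back = [sum(row[j] * r for row, r in zip(A, ratio)) for j in range(n_vox)]
        x = [xi * b / s if s > 0 else 0.0 for xi, b, s in zip(x, back, sens)]
    return x

# Tiny 2-voxel toy: three channels see the voxels with known weights.
A = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
true_x = [2.0, 4.0]
y = [sum(a * t for a, t in zip(row, true_x)) for row in A]  # noiseless data
x = mlem(A, y, n_iter=200)
print([round(v, 3) for v in x])  # converges to about [2.0, 4.0]
```

With consistent noiseless data the iteration recovers the true activity; with the low counting statistics described in the abstract, the same update converges to a maximum-likelihood estimate instead.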
