  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Lens Distortion Calibration Using Point Correspondences

Stein, Gideon P. 01 December 1996 (has links)
This paper describes a new method for lens distortion calibration using only point correspondences in multiple views, without the need to know either the 3D location of the points or the camera locations. The standard lens distortion model is a model of the deviations of a real camera from the ideal pinhole or projective camera model. Given multiple views of a set of corresponding points taken by ideal pinhole cameras, there exist epipolar and trilinear constraints among pairs and triplets of these views. In practice, due to noise in feature detection and due to lens distortion, these constraints do not hold exactly, and we get some error. The calibration is a search for the lens distortion parameters that minimize this error. Using simulation and experimental results with real images, we explore the properties of this method. We describe the use of this method with the standard lens distortion model, radial and decentering, but it could also be used with any other parametric distortion model. Finally, we demonstrate that lens distortion calibration improves the accuracy of 3D reconstruction.
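The search described in the abstract can be illustrated with a minimal sketch: synthetic correspondences from two pinhole views are radially distorted, and a one-parameter distortion coefficient is scanned to find the value whose undistorted points best satisfy the epipolar constraint, measured as the algebraic residual of an eight-point fundamental-matrix fit. The scene, the pure-translation motion, and the one-parameter radial model are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic scene: 3D points seen by two pinhole cameras (illustrative setup)
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(60, 3))
t = np.array([0.5, 0.1, 0.0])                 # second camera: pure translation
x1 = X[:, :2] / X[:, 2:3]
x2 = (X + t)[:, :2] / (X + t)[:, 2:3]

def distort(x, k):
    """One-parameter radial model: x_d = x_u * (1 + k * r^2)."""
    r2 = np.sum(x**2, axis=1, keepdims=True)
    return x * (1 + k * r2)

def undistort(x, k):
    """First-order inverse of the radial model above."""
    r2 = np.sum(x**2, axis=1, keepdims=True)
    return x * (1 - k * r2)

def epipolar_residual(a, b):
    """Mean algebraic residual |x2^T F x1| of an 8-point fundamental fit."""
    h1 = np.hstack([a, np.ones((len(a), 1))])
    h2 = np.hstack([b, np.ones((len(b), 1))])
    A = np.einsum('ni,nj->nij', h2, h1).reshape(len(a), 9)
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)  # unit-norm least-squares F
    return np.mean(np.abs(np.einsum('ni,ij,nj->n', h2, F, h1)))

# distort both views, then search for the coefficient minimizing the error
k_true = 0.2
d1, d2 = distort(x1, k_true), distort(x2, k_true)
ks = np.linspace(0.0, 0.4, 41)
errs = [epipolar_residual(undistort(d1, k), undistort(d2, k)) for k in ks]
k_best = float(ks[int(np.argmin(errs))])
```

With moderate distortion the residual curve has its minimum near the true coefficient; a real implementation would use a continuous optimizer and the full radial-plus-decentering model instead of a grid over one parameter.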
22

Geometric and Algebraic Aspects of 3D Affine and Projective Structures from Perspective 2D Views

Shashua, Amnon 01 July 1993 (has links)
We investigate the differences --- conceptually and algorithmically --- between affine and projective frameworks for the tasks of visual recognition and reconstruction from perspective views. It is shown that an affine invariant exists between any view and a fixed view chosen as a reference view. This implies that for tasks for which a reference view can be chosen, such as in alignment schemes for visual recognition, projective invariants are not really necessary. We then use the affine invariant to derive new algebraic connections between perspective views. It is shown that three perspective views of an object are connected by certain algebraic functions of image coordinates alone (no structure or camera geometry needs to be involved).
23

Optical Flow Based Structure from Motion

Zucchelli, Marco January 2002 (has links)
No description available.
24

A Contour Grouping Algorithm for 3D Reconstruction of Biological Cells

Leung, Tony Kin Shun January 2009 (has links)
Advances in computational modelling offer unprecedented potential for obtaining insights into the mechanics of cell-cell interactions. With the aid of such models, cell-level phenomena such as cell sorting and tissue self-organization are now being understood in terms of forces generated by specific sub-cellular structural components. Three-dimensional systems can behave differently from two-dimensional ones, and since models cannot be validated without corresponding data, it is crucial to build accurate three-dimensional models of real cell aggregates. The lack of automated methods to determine which cell outlines in successive images of a confocal stack or time-lapse image set belong to the same cell is an important unsolved problem in the reconstruction process. This thesis addresses this problem through a contour grouping algorithm (CGA) designed to lead to unsupervised three-dimensional reconstructions of biological cells. The CGA associates contours obtained from fluorescently-labeled cell membranes in individual confocal slices using concepts from the fields of machine learning and combinatorics. The feature extraction step results in a set of association metrics. The algorithm then uses a probabilistic grouping step and a greedy-cost optimization step to produce grouped sets of contours. Groupings are representative of imaged cells and are manually evaluated for accuracy. The CGA presented here is able to produce accuracies greater than 96% when properly tuned. Parameter studies show that the algorithm is robust; that is, acceptable results are obtained under moderately varied probabilistic constraints and reasonable cost weightings. Image properties, such as slicing distance and image quality, affect the results. Sources of error are identified, and enhancements based on fuzzy logic and other optimization methods are considered.
The successful grouping of cell contours, as realized here, is an important step toward the development of realistic, three-dimensional, cell-based finite element models.
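As a hedged illustration of the grouping idea only (not the thesis's actual probabilistic and greedy-cost formulation), the sketch below chains contours across consecutive slices by greedy nearest-centroid association; `group_contours` and its `max_dist` threshold are hypothetical names.

```python
import numpy as np

def group_contours(slices, max_dist=5.0):
    """Greedily chain contours across consecutive slices into cell groups.

    slices: list of lists of 2D contour centroids, one inner list per
    confocal slice. A contour joins the group of the nearest unclaimed
    centroid in the previous slice if it lies within max_dist; otherwise
    it starts a new group (a new cell entering the stack).
    """
    groups = []                    # each group: list of (slice_index, centroid)
    prev = []                      # (group_index, centroid) pairs, last slice
    for z, centroids in enumerate(slices):
        current, taken = [], set()
        for c in centroids:
            best, best_d = None, max_dist
            for gi, pc in prev:
                d = np.hypot(c[0] - pc[0], c[1] - pc[1])
                if d < best_d and gi not in taken:
                    best, best_d = gi, d
            if best is None:       # no close contour above: start a new cell
                best = len(groups)
                groups.append([])
            taken.add(best)
            groups[best].append((z, c))
            current.append((best, c))
        prev = current
    return groups

# two cells traced through three slices
stack = [[(0.0, 0.0), (10.0, 10.0)],
         [(0.5, 0.2), (10.3, 9.8)],
         [(1.0, 0.5), (10.0, 10.5)]]
cells = group_contours(stack)
```

Each returned group is one cell's stack of contours, ready for surface reconstruction; the thesis's association metrics would replace the bare centroid distance used here.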
26

Development of an Iterative 3D Reconstruction Method for Monitoring Heavy-Ion Tumour Therapy with Positron Emission Tomography (Entwicklung eines iterativen 3D-Rekonstruktionsverfahrens für die Kontrolle der Tumorbehandlung mit Schwerionen mittels der Positronen-Emissions-Tomographie)

Lauckner, Kathrin 31 March 2010 (has links) (PDF)
At the Gesellschaft für Schwerionenforschung in Darmstadt a therapy unit for heavy ion cancer treatment has been established in collaboration with the Deutsches Krebsforschungszentrum Heidelberg, the Radiologische Universitätsklinik Heidelberg and the Forschungszentrum Rossendorf. For quality assurance, the dual-head positron camera BASTEI (Beta Activity meaSurements at the Therapy with Energetic Ions) has been integrated into this facility. It measures β+-activity distributions generated via nuclear fragmentation reactions within the target volume. BASTEI has about 4 million coincidence channels. The emission data are acquired in a 3D regime and stored in a list-mode data format. Typically, counting statistics are two to three orders of magnitude lower than those of typical PET scans in nuclear medicine. Two iterative 3D reconstruction algorithms, based on ISRA (Image Space Reconstruction Algorithm) and MLEM (Maximum Likelihood Expectation Maximization) respectively, have been adapted to this imaging geometry. The major advantage of the developed approaches is the use of run-time Monte Carlo simulations to calculate the transition matrix. The influences of detector sensitivity variations, random coincidences, activity from outside the field of view, and attenuation are corrected for the individual coincidence channels. Performance studies show that the MLEM-based implementation is the algorithm of merit. Since 1997 it has been applied successfully to patient data. The localization of distal and lateral gradients of the β+-activity distribution is guaranteed in the longitudinal sections. Outside the longitudinal sections, the lateral gradients of the β+-activity distribution should be interpreted using a priori knowledge.
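For reference, a minimal MLEM iteration on a toy system (not the BASTEI geometry or its Monte Carlo transition matrix): each update multiplies the current image by the back-projected ratio of measured to predicted counts, normalized by the per-voxel detector sensitivity.

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum Likelihood Expectation Maximization for y ≈ A @ x, x >= 0.

    A: system (transition) matrix, rows = coincidence channels, cols = voxels.
    y: measured counts per channel.
    """
    x = np.ones(A.shape[1])               # flat non-negative start image
    sens = A.sum(axis=0)                  # sensitivity A^T 1 per voxel
    for _ in range(n_iter):
        proj = A @ x                      # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# toy system: 3 coincidence channels observing 2 voxels
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 5.0])
y = A @ x_true                            # noiseless "measured" counts
x_hat = mlem(A, y)
```

On consistent noiseless data the iteration converges to the true activity; in the thesis's setting the matrix `A` additionally encodes the corrections listed above for each coincidence channel.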
28

Gray Code Composite Pattern Structured Light Illumination

Gupta, Pratibha 01 January 2007 (has links)
Structured light is the most common 3D data acquisition technique used in industry. Traditional structured light methods project multiple patterns, such as phase measuring profilometry, gray code, and binary patterns, to obtain reliable 3D reconstructions of an object. These multiple patterns achieve non-ambiguous depth and are insensitive to ambient light, but their application is limited to motion much slower than the projection time. Using modulation and demodulation techniques, the multiple patterns can be combined into a single composite pattern from which depth information is obtained; because all patterns are applied simultaneously, rapid object motion is supported. In this thesis we combine multiple gray code patterns into a single Gray Code Composite Pattern. The composite pattern is projected, the deformation produced by the target object is captured by a camera, and the 3D world coordinates are reconstructed by demodulating the distorted patterns.
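A minimal sketch of the binary-reflected Gray code underlying such patterns: each bit plane of the Gray code of a projector column becomes one stripe image, and a pixel's column is recovered by decoding the bit sequence it observes across the patterns. The helper names are illustrative, not from the thesis.

```python
def gray_encode(n: int) -> int:
    """Binary-reflected Gray code: successive values differ in one bit."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Invert gray_encode by cumulative XOR of right-shifted copies."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def stripe_patterns(width: int, bits: int):
    """One stripe image per bit plane: patterns[b][x] is bit b (MSB first)
    of the Gray code of projector column x."""
    return [[(gray_encode(x) >> (bits - 1 - b)) & 1 for x in range(width)]
            for b in range(bits)]

def decode_column(pattern_bits) -> int:
    """Recover the projector column from the bit sequence seen at a pixel."""
    g = 0
    for bit in pattern_bits:
        g = (g << 1) | bit
    return gray_decode(g)
```

The one-bit-per-step property is what makes Gray codes robust at stripe boundaries; the composite-pattern method modulates these bit planes onto carriers so a single projected image carries all of them at once.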
29

Fisheye Camera Calibration and Applications

January 2014 (has links)
Fisheye cameras have a much larger field of view than conventional cameras. The large field of view comes at the price of non-linear distortions introduced near the boundaries of the captured images. Despite this drawback, fisheye cameras are used increasingly in computer vision, robotics, reconnaissance, astrophotography, surveillance, and automotive applications. Images captured by such cameras can be corrected for distortion if the cameras are calibrated and the distortion function is determined. Calibration also allows fisheye cameras to be used in tasks involving metric scene measurement, metric scene reconstruction, and other simultaneous localization and mapping (SLAM) algorithms. This thesis presents a calibration toolbox (FisheyeCDC Toolbox) that implements a collection of some of the most widely used techniques for fisheye camera calibration under one package, enabling an inexperienced user to calibrate a camera without a theoretical background in computer vision. The thesis also explores applications of calibration such as distortion correction and 3D reconstruction. (Masters Thesis, Electrical Engineering, 2014)
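As a hedged illustration of what a calibrated distortion function enables, the sketch below maps points between an equidistant fisheye model (image radius r = f·θ, a common but here assumed model) and the ideal pinhole projection (r = f·tan θ); the function names are invented for this example.

```python
import math

def fisheye_to_pinhole(x: float, y: float, f: float):
    """Map an equidistant-fisheye image point to pinhole coordinates.

    Equidistant model: r_fish = f * theta, where theta is the angle of the
    incoming ray from the optical axis; the pinhole radius is f * tan(theta).
    """
    r = math.hypot(x, y)
    if r == 0.0:
        return (0.0, 0.0)
    theta = r / f
    scale = f * math.tan(theta) / r       # radial rescaling, direction kept
    return (x * scale, y * scale)

def pinhole_to_fisheye(x: float, y: float, f: float):
    """Inverse mapping: pinhole coordinates back to the equidistant model."""
    r = math.hypot(x, y)
    if r == 0.0:
        return (0.0, 0.0)
    theta = math.atan(r / f)
    scale = f * theta / r
    return (x * scale, y * scale)
```

Real fisheye calibration fits `f` and higher-order angular terms from images of a known target; once fitted, exactly this kind of per-pixel remapping performs the distortion correction the abstract mentions.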
30

Cortical thickness estimation of the proximal femur from multi-view, dual-energy X-ray absorptiometry

Tsaousis, Nikolaos January 2015 (has links)
Hip fracture is the leading cause of acute orthopaedic hospital admission amongst the elderly, with around a third of patients not surviving one year post-fracture. Current risk assessment tools ignore cortical bone thinning, a focal structural defect characterizing hip fragility. Cortical thickness can be measured using computed tomography, but this is expensive and involves a significant radiation dose. Dual-energy X-ray absorptiometry (DXA) is the preferred imaging modality for assessing fracture risk, and is used routinely in clinical practice. This thesis proposes two novel methods which measure the cortical thickness of the proximal femur from multi-view DXA scans. First, a data-driven algorithm is designed, implemented and evaluated. It relies on a femoral B-spline template which can be deformed to fit an individual's scans. In a series of experiments on the trochanteric regions of 120 proximal femurs, the algorithm's performance limits were established using twenty views in the range 0° to 171°: estimation errors were 0.00 ± 0.50 mm. In a clinically viable protocol using four views in the range −20° to 40°, measurement errors were −0.05 ± 0.54 mm. The second algorithm accomplishes the same task by deforming statistical shape and thickness models, both trained using Principal Component Analysis (PCA). Three training cohorts are used to investigate (a) the estimation efficacy as a function of the diversity in the training set and (b) the possibility of improving performance by building tailored models for different populations. In a series of cross-validation experiments involving 120 femurs, minimum estimation errors were 0.00 ± 0.59 mm and −0.01 ± 0.61 mm for the twenty- and four-view experiments respectively, when fitting the tailored models.
Statistical significance tests reveal that the template algorithm is more precise than the statistical, and that both are superior to a blind estimator which naively assumes the population mean, but only in regions of thicker cortex. It is concluded that cortical thickness measured from DXA is unlikely to assist fracture prediction in the femoral neck and trochanters, but might have applicability in the sub-trochanteric region.
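A minimal sketch of the statistical-model idea used by the second algorithm (PCA over training examples, then fitting a new case by projection onto the retained modes); the toy one-dimensional "thickness profiles" and all dimensions here are invented for illustration, not the thesis's femur data.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy training set: thickness profiles generated from two latent modes
n_samples, n_points = 50, 40
modes = rng.normal(size=(2, n_points))            # hidden modes of variation
coeffs = rng.normal(size=(n_samples, 2)) * np.array([3.0, 1.0])
data = 5.0 + coeffs @ modes                       # baseline 5 mm + variation

# build the statistical model: mean profile + principal components
mean = data.mean(axis=0)
_, _, Vt = np.linalg.svd(data - mean, full_matrices=False)
components = Vt[:2]                               # retain two PCA modes

# fit the model to a new noisy profile by projecting onto the components
new = 5.0 + np.array([2.0, -1.0]) @ modes + rng.normal(scale=0.01, size=n_points)
b = (new - mean) @ components.T                   # mode weights for this case
recon = mean + b @ components                     # model-constrained estimate
```

Because the new profile lies (up to noise) in the span of the training modes, the model reconstructs it closely; the thesis fits analogous shape and thickness models to DXA projections rather than to profiles directly.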
