About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Estimating the effects of lens distortion on serial section electron microscopy images

Lindsey, Laurence Francis 30 October 2012 (has links)
Section-to-section alignment is a preliminary step in the creation of three-dimensional reconstructions from serial section electron micrographs. Typically, the micrograph of one section is aligned to its neighbors by analyzing a set of fiducial points to calculate an appropriate polynomial transform. This transform is then used to map all of the pixels of the micrograph into alignment. Such transforms are usually linear or piecewise linear in order to limit the accumulation of small errors, which may occur with the use of higher-order approximations. Linear alignment is unable to correct common higher-order geometric distortions, such as lens distortion in the case of TEM, and scan distortion in the case of transmission-mode SEM. Here, we attempt to show that standard calibration replicas may be used to calculate a high-order distortion model despite the irregularities that are often present in them. We show that SEM scan distortion has much less of an effect than TEM lens distortion; however, the effect of TEM distortion on prior geometric measurements made over three-dimensional reconstructions of dendrites, axons, and synapses and their subcellular compartments is negligible.
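To make the alignment step described above concrete, a linear (affine) transform can be fit to a set of fiducial correspondences by least squares. The following is a minimal sketch, not code from the thesis; the function names are illustrative:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src fiducials onto dst.

    src, dst: (N, 2) arrays of corresponding fiducial points, N >= 3.
    Returns a 2x3 matrix A such that dst ~= A @ [x, y, 1].
    """
    n = src.shape[0]
    M = np.hstack([src, np.ones((n, 1))])        # design matrix, (N, 3)
    X, *_ = np.linalg.lstsq(M, dst, rcond=None)  # (3, 2) solution
    return X.T                                   # (2, 3)

def apply_affine(A, pts):
    """Apply a 2x3 affine transform to (N, 2) points."""
    return pts @ A[:, :2].T + A[:, 2]
```

With exact correspondences the fit recovers the transform exactly; with noisy fiducials it returns the least-squares best fit, which is why such linear models accumulate little error across sections.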
2

Lens Distortion Correction Without Camera Access / Linsdistorsionskorrigering utan kameratillgång

Olsson, Emily January 2022 (has links)
Lens distortions appear in almost all digital images and cause straight lines to appear curved in the image. This can contribute to errors in position estimation and 3D reconstruction, and it is therefore of interest to correct for the distortion. If the camera is available, the distortion parameters can be obtained when calibrating the camera. However, when the camera is unavailable, the distortion parameters cannot be found with the standard camera calibration technique, and other approaches must be used. Recently, variants of Perspective-n-Point (PnP) extended with lens distortion and focal length parameters have been proposed. Given a set of 2D-3D point correspondences, the PnP-based methods can estimate distortion parameters without the camera being available or with modified settings. In this thesis, the performance of PnP-based methods is compared to Zhang’s camera calibration method. The methods are compared both quantitatively, using the errors in reprojection and distortion parameters, and qualitatively, by comparing images before and after lens distortion correction. A test set for the comparison was obtained from a camera and a 3D laser scanner in an indoor scene. The results indicate that one of the PnP-based models can achieve a reprojection error similar to the baseline method for one of the cameras. It could also be seen that two PnP-based models could reduce lens distortion when visually comparing the test images to the baseline. Moreover, it was noted that a model can have a small reprojection error even though the distortion coefficient error is large and the lens distortion is not completely removed. This indicates that it is important to include both quantitative measures, such as reprojection and distortion coefficient errors, and qualitative results when comparing lens distortion correction methods. It could also be seen that PnP-based models with more parameters in the estimation are more sensitive to noise.
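As background to the quantities compared in this abstract, a generic two-coefficient radial distortion model and an RMS reprojection-error metric can be sketched as follows. This is a standard Brown-style model, not necessarily the exact formulation used in the thesis:

```python
import numpy as np

def distort_radial(pts, k1, k2):
    """Apply a two-coefficient radial distortion to normalized image points."""
    r2 = np.sum(pts ** 2, axis=1, keepdims=True)   # squared radius per point
    return pts * (1 + k1 * r2 + k2 * r2 ** 2)

def rms_reprojection_error(observed, projected):
    """Root-mean-square Euclidean distance between corresponding points."""
    return np.sqrt(np.mean(np.sum((observed - projected) ** 2, axis=1)))
```

The abstract's observation that a small reprojection error can coexist with a large coefficient error makes sense in this model: different (k1, k2) pairs can produce nearly identical warps over the region where the correspondences happen to lie.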
3

Image optimization algorithms on an FPGA

Ericsson, Kenneth, Grann, Robert January 2009 (has links)
In this thesis a method to compensate for camera distortion is developed for an FPGA platform as part of a complete vision system. Several methods and models are presented and described to give a good introduction to the complexity of the problems that are overcome with the developed method. The solution to the core problem is shown to have good precision at the sub-pixel level.
4

Calibration of Laser Triangulating Cameras in Small Fields of View / Kalibrering av lasertriangulerande 3D-kamera för användning i små synfält

Rydström, Daniel January 2013 (has links)
A laser triangulating camera system projects a laser line onto an object to create height curves on the object surface. By moving the object, height curves from different parts of the object can be observed and combined to produce a three-dimensional representation of the object. The calibration of such a camera system involves transforming received data to get real-world measurements instead of pixel-based measurements. The calibration method presented in this thesis focuses specifically on small fields of view. The goal is to provide an easy-to-use and robust calibration method that can complement already existing calibration methods. The tool should produce measurements in metric units that are as accurate as possible, while still keeping the complexity and production costs of the calibration object low. The implementation uses only data from the laser plane itself, making it usable also in environments where no external light exists. The proposed implementation utilises a complete scan of a three-dimensional calibration object and returns a calibration for three dimensions. The results of the calibration have been evaluated against synthetic and real data.
5

Visual Stereo Odometry for Indoor Positioning

Johansson, Fredrik January 2012 (has links)
In this master's thesis a visual odometry system is implemented and explained. Visual odometry is a technique that can be used on autonomous vehicles to determine their current position, and it is preferably used indoors where GPS is not working. The only inputs to the system are the images from a stereo camera, and the output is the current location given as a relative position. In the C++ implementation, image features are found and matched between the stereo images and the previous stereo pair, which gives a range of 150-250 verified feature matches. The image coordinates are triangulated into a 3D point cloud. The distance between two subsequent point clouds is minimized with respect to rigid transformations, which gives the motion described by six parameters: three for the translation and three for the rotation. Noise in the image coordinates gives reconstruction errors, which makes the motion estimation very sensitive. The results from six experiments show that the weakness of the system is its ability to distinguish rotations from translations. However, if the system has additional knowledge of how it is moving, the minimization can be done with only three parameters and the system can estimate its position with less than 5 % error.
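The minimization over rigid transformations that this abstract describes has a well-known closed-form solution when point correspondences are given: the Kabsch algorithm. A minimal sketch (the thesis may use a different solver, and this assumes known correspondences between the two clouds):

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ~= P @ R.T + t.

    P, Q: (N, 3) corresponding 3D point clouds (Kabsch algorithm).
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

The six motion parameters mentioned in the abstract correspond to the three degrees of freedom in R plus the three in t; the rotation/translation ambiguity the experiments reveal shows up here as near-degenerate cross-covariance when the point cloud is shallow relative to the camera.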
6

A calibration method for laser-triangulating 3D cameras / En kalibreringsmetod för lasertriangulerande 3D-kameror

Andersson, Robert January 2008 (has links)
A laser-triangulating range camera uses a laser plane to light an object. If the position of the laser relative to the camera as well as certain properties of the camera are known, it is possible to calculate the coordinates for all points along the profile of the object. If either the object or the camera and laser has a known motion, it is possible to combine several measurements to get a three-dimensional view of the object.

Camera calibration is the process of finding the properties of the camera and enough information about the setup so that the desired coordinates can be calculated. Several methods for camera calibration exist, but this thesis proposes a new method with the advantages that the objects needed are relatively inexpensive and that only objects in the laser plane need to be observed. Each part of the method is given a thorough description. Several mathematical derivations have also been added as appendices for completeness.

The proposed method is tested using both synthetic and real data. The results show that the method is suitable even when high accuracy is needed. A few suggestions are also made about how the method can be improved further.
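Since only points in the laser plane need to be observed, the calibration ultimately amounts to a plane-to-plane mapping, which a 3x3 homography can represent. The abstract does not detail the thesis's own formulation; the sketch below simply assumes a calibrated image-to-laser-plane homography H_inv is available:

```python
import numpy as np

def laser_plane_coords(H_inv, pixels):
    """Map laser-line pixel coordinates to metric laser-plane coordinates.

    H_inv: 3x3 homography from the image plane to the laser plane
    (assumed to come from calibration). pixels: (N, 2) pixel coordinates.
    """
    homog = np.hstack([pixels, np.ones((len(pixels), 1))])  # homogeneous
    mapped = homog @ H_inv.T
    return mapped[:, :2] / mapped[:, 2:3]                   # dehomogenize
```

Each detected laser-line pixel then yields a metric (position, height) pair in the laser plane, and known motion of the object stacks these profiles into a 3D surface.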
7

Performance Improvement of a 3-D Configuration Reconstruction Algorithm for an Object Using a Single Camera Image

Ozkilic, Sibel 01 January 2004 (has links) (PDF)
Performance improvement of a 3-D configuration reconstruction algorithm using a passive secondary target has been the focus of this study. In earlier studies, a theoretical development of the 3-D configuration reconstruction algorithm was achieved, and it was implemented by a computer program on a system consisting of an optical bench and a digital imaging system. The passive secondary target used was a circle with two internal spots. In order to use this reconstruction algorithm in autonomous systems, an automatic target recognition algorithm has been developed in this study. Starting from a pre-captured and stored 8-bit gray-level image, the algorithm automatically detects the elliptical image of a circular target and determines its contour in the scene. It was shown that the algorithm can also be used for partially captured elliptical images. Another improvement achieved in this study is the determination of the internal camera parameters of the vision system.
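Detecting the elliptical image of a circular target, including a partially captured one, typically reduces to fitting a conic to contour points. A minimal algebraic least-squares sketch (not the thesis's algorithm; the normalization assumes the conic does not pass through the coordinate origin):

```python
import numpy as np

def fit_conic(pts):
    """Algebraic conic fit: a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1.

    pts: (N, 2) contour points, N >= 5. Works on partial arcs too,
    since any five or more independent points determine the conic.
    """
    x, y = pts[:, 0], pts[:, 1]
    D = np.stack([x * x, x * y, y * y, x, y], axis=1)
    coeffs, *_ = np.linalg.lstsq(D, np.ones(len(pts)), rcond=None)
    return coeffs
```

Because the fit needs only five independent contour points, it degrades gracefully to the partially captured ellipses the abstract mentions, although noise sensitivity grows as the visible arc shrinks.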
9

Correction of radially asymmetric lens distortion with a closed form solution and inverse function

De Villiers, Jason Peter 23 January 2009 (has links)
The current paradigm in the lens distortion characterization industry is to use simple radial distortion models with only one or two radial terms. Tangential terms and the optimal distortion centre are also seldom determined. Inherent in the models currently used is the assumption that lens distortion is radially symmetrical. The reason for the use of these models is partly the perceived instability of more complex lens distortion models. This dissertation shows, in the first of its three hypotheses, that higher-order models are indeed beneficial when their parameters are determined using modern numerical optimization techniques: they are both stable and provide superior characterization. Although it is true that the first two radial terms dominate the distortion characterization, this work proves superior characterization is possible for those applications that may require it. The third hypothesis challenges the assumption of the radial symmetry of lens distortion. Building on the foundation provided by the first hypothesis, a sample of lens distortion models of similar and greater complexity to those found in the literature are modified to have a radial gain, allowing the distortion corrections to vary both with polar angle and with distance from the distortion centre. Four angular gains are evaluated, and two provide better characterization. The elliptical gain was the only method to both consistently improve the characterization and not 'skew' the corrected images. This gain was shown to improve characterization by as much as 50% for simple (single radial term) models and by 7% for even the most complex models. To create an undistorted image from a distorted image captured through a lens whose distortion has been characterized, one needs to find the corresponding distorted pixel for each undistorted pixel in the corrected image.
This is done either iteratively or using a simplified model, typically based on the Taylor expansion of a simple (one or two radial coefficients) distortion model. The first method is accurate yet slow, and the second the opposite. The second hypothesis of this research successfully combines the advantages of both methods without any of their disadvantages. It was shown that, using the superior characterization of high-order radial models (when fitted with modern numerical optimization methods) together with the 'side-effect' undistorted image points created in the lens distortion characterization, it is possible to fit a 'reverse' model from the undistorted to the distorted domain. This reverse characterization is of similar complexity to the simplified models yet provides characterization equivalent to the iterative techniques. Compared to using simplified models, the reverse mapping yields an improvement of more than tenfold, from many tenths of a pixel to a few hundredths. Dissertation (MEng), University of Pretoria, 2009. Electrical, Electronic and Computer Engineering.
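The idea of fitting a 'reverse' model from point pairs generated by the forward characterization can be illustrated in one dimension on radii. This is a simplified sketch of the general approach, not the dissertation's models, and uses only a single forward radial coefficient:

```python
import numpy as np

def undistorted_radius(r_d, k):
    """Forward radial model: undistorted radius from distorted radius."""
    r2 = r_d ** 2
    return r_d * (1 + sum(ki * r2 ** (i + 1) for i, ki in enumerate(k)))

def fit_reverse_radial(k, n_terms=3, r_max=1.0, n_samples=200):
    """Fit a reverse polynomial r_d ~= r_u * (1 + m1*r_u^2 + ...).

    Uses radius pairs sampled from the forward model, mirroring the
    'side-effect' undistorted points produced during characterization.
    Linear least squares: r_d - r_u = m1*r_u^3 + m2*r_u^5 + ...
    """
    r_d = np.linspace(1e-4, r_max, n_samples)
    r_u = undistorted_radius(r_d, k)
    A = np.stack([r_u ** (2 * i + 3) for i in range(n_terms)], axis=1)
    m, *_ = np.linalg.lstsq(A, r_d - r_u, rcond=None)
    return m

def distorted_radius(r_u, m):
    """Reverse model: distorted radius from undistorted radius."""
    r2 = r_u ** 2
    return r_u * (1 + sum(mi * r2 ** (i + 1) for i, mi in enumerate(m)))
```

The fitted reverse model is evaluated once per output pixel, so undistorting an image costs the same as the simplified Taylor-based models while tracking the forward model far more closely than a truncated series.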
10

Generování testovacích vzorů / Test pattern generation

Hašek, Martin January 2010 (has links)
This thesis is focused on the development of an application for simulating the optical distortions of lenses and for creating custom test patterns. The first part discusses common problems of optical distortion and the concept of the software design. The realization and implementation of the particular modules of the application are then described. Finally, the graphical user interface and its functionality are presented.
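The abstract does not specify which test patterns the application generates; a checkerboard is a common choice for distortion work because its straight edges make curvature visible. A minimal, purely illustrative generator:

```python
import numpy as np

def checkerboard(rows, cols, square=8):
    """Binary checkerboard test pattern as a 2D uint8 array.

    rows, cols: number of squares; square: side length in pixels.
    """
    r = np.arange(rows * square) // square   # square index per pixel row
    c = np.arange(cols * square) // square   # square index per pixel column
    return (((r[:, None] + c[None, :]) % 2) * 255).astype(np.uint8)
```

Rendering such a pattern, warping it with a distortion model, and inspecting the bent edges is the basic workflow an application like the one described would support.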