1 |
Towards fully autonomous visual navigation
Knight, Joss G. H. January 2002 (has links)
No description available.
|
2 |
The use of zoom within active vision
Hayman, Eric January 2000 (has links)
No description available.
|
3 |
Line scan camera calibration for fabric imaging
Zhao, Zuyun 03 December 2013 (has links)
Fabric defect inspection is a vital step in fabric quality assessment. Many vision-based automatic fabric defect detection methods have been proposed to detect fabric flaws efficiently and accurately. Because these inspection methods are vision-based, image quality is of great importance to the accuracy of the detection result. To our knowledge, most camera lenses exhibit radial distortion, so our goal in this project is to remove the radial distortion and obtain undistorted images. Much research has been done on 2-D image correction, but 1-D line scan camera image correction has rarely been studied, even though line scan cameras are finding ever wider application thanks to their high resolution and efficient 1-D data processing. A novel line scan camera correction method is proposed in this project. We first propose a pattern object consisting of mutually parallel lines with oblique lines between each pair of parallel ones. The pattern design reflects the fact that a line scan camera acquires an image one line at a time, which makes it difficult for a single scan line to hit "0-D" marked points on a pattern. We detect the intersection points between the pattern lines and one scan line and calculate their positions from the pattern geometry. Since calibration of 2-D cameras is well established, we propose a method to calibrate the 1-D camera. A least-squares method is applied to solve the pinhole projection equation and estimate the camera parameter matrix. Finally, we refine the data with maximum-likelihood estimation and obtain the camera lens distortion coefficients. We re-project the data from image coordinates to world coordinates using the obtained camera matrix; the re-projection error is 0.68 pixels. With the distortion coefficients in hand, we correct captured images with an undistortion equation. We introduce the term "unit distance" in the discussion to better assess the proposed method.
When testing the undistortion results, we observe that the corrected image has an almost identical unit distance, with a standard deviation of 0.29 pixels. Compared to the ideal distortion-free unit distance, the corrected image is off by only 0.09 pixels on average, which demonstrates the validity of the proposed method. / text
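The least-squares fitting step described in this abstract can be sketched in a few lines. This is a minimal illustration, not the thesis's method: it assumes a two-coefficient polynomial radial model for a 1-D sensor, x_d - c = (x - c)(1 + k1*r^2 + k2*r^4) with r = x - c and an assumed known principal point c, and it omits the full camera parameter matrix and the maximum-likelihood refinement.

```python
import numpy as np

def fit_1d_distortion(x_ideal, x_observed, cx):
    """Least-squares fit of radial coefficients (k1, k2) for a 1-D sensor.
    Model: x_obs - cx = (x - cx)*(1 + k1*r^2 + k2*r^4), r = x - cx.
    Rearranged, x_obs - x = k1*r^3 + k2*r^5, which is linear in k1, k2."""
    r = x_ideal - cx
    A = np.column_stack([r**3, r**5])
    b = x_observed - x_ideal
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs  # array [k1, k2]

def undistort_1d(x_observed, cx, k1, k2, iters=5):
    """Invert the distortion model by fixed-point iteration."""
    x = np.asarray(x_observed, dtype=float).copy()
    for _ in range(iters):
        r = x - cx
        x = cx + (x_observed - cx) / (1 + k1 * r**2 + k2 * r**4)
    return x

# Synthetic check: distort known points, then recover the coefficients.
cx = 512.0
x_ideal = np.linspace(0.0, 1023.0, 40)
k1_true, k2_true = 1.2e-7, -3.0e-14
r = x_ideal - cx
x_obs = cx + r * (1 + k1_true * r**2 + k2_true * r**4)
k1, k2 = fit_1d_distortion(x_ideal, x_obs, cx)
x_corr = undistort_1d(x_obs, cx, k1, k2)
```

On noise-free synthetic points the fit recovers the coefficients essentially exactly; with real detections the residual after correction plays the role of the re-projection error quoted above.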
|
4 |
The integration of an ultrasonic phased array and a vision system for the 3D measurement of multiple targets
Chou, Tsung-nan January 1997 (has links)
No description available.
|
5 |
Digital camera calibration for mining applications
Jiang, Lingen Unknown Date
No description available.
|
6 |
Digital camera calibration for mining applications
Jiang, Lingen 11 1900 (has links)
This thesis examines the issues related to calibrating digital cameras and lenses, which is an essential prerequisite for the extraction of precise and reliable 3D metric information from 2D images. The techniques used to calibrate a Canon PowerShot A70 camera with a 5.4 mm zoom lens and a professional single lens reflex camera, the Canon EOS 1Ds Mark II, with 35 mm, 85 mm, 135 mm and 200 mm prime lenses are described. The test results demonstrate that a high correlation exists among some interior and exterior orientation parameters; the correlations depend on the parameters being adjusted and the network configuration. Not all of the 11 interior orientation parameters are significant for modelling the camera and lens behaviour. The first two coefficients K1 and K2 would be sufficient to describe the radial distortion effect for most digital cameras. Furthermore, the interior orientation parameters of a digital camera and lens can change between calibration tests. This work has demonstrated that, given a functional model that represents physical effects, a reasonably large number of 3D targets that are well distributed in three-dimensional space, and a highly convergent imaging network, all of the usual parameters can be estimated to reasonable values. / Mining Engineering
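As a rough sketch of why two radial coefficients suffice in practice: once image coordinates are normalised, the K1/K2 model is linear in the unknowns, so the coefficients can be estimated directly by least squares. The model form and the synthetic point set below are illustrative assumptions, not the calibration procedure used in the thesis.

```python
import numpy as np

def apply_radial(xn, yn, k1, k2):
    """Two-coefficient radial model on normalised image coordinates:
    x_d = x*(1 + K1*r^2 + K2*r^4), and likewise for y."""
    r2 = xn**2 + yn**2
    f = 1 + k1 * r2 + k2 * r2**2
    return xn * f, yn * f

def estimate_k(xn, yn, xd, yd):
    """Direct linear least-squares estimate of (K1, K2) from matched
    ideal and distorted normalised coordinates."""
    r2 = xn**2 + yn**2
    # Residual equations: xd - xn = xn*(K1*r2 + K2*r2^2), same for y.
    A = np.concatenate([np.column_stack([xn * r2, xn * r2**2]),
                        np.column_stack([yn * r2, yn * r2**2])])
    b = np.concatenate([xd - xn, yd - yn])
    k, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k  # array [K1, K2]

# Synthetic check with a mild barrel distortion.
rng = np.random.default_rng(0)
xn = rng.uniform(-0.5, 0.5, 200)
yn = rng.uniform(-0.4, 0.4, 200)
xd, yd = apply_radial(xn, yn, -0.25, 0.08)
K1, K2 = estimate_k(xn, yn, xd, yd)
```

In a full bundle adjustment these two coefficients would be estimated jointly with the other interior and exterior orientation parameters, which is where the correlations discussed above arise.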
|
7 |
Why Stereo Vision is Not Always About 3D Reconstruction
Grimson, W. Eric L. 01 July 1993 (has links)
It is commonly assumed that the goal of stereo vision is computing explicit 3D scene reconstructions. We show that very accurate camera calibration is needed to support this, and that such accurate calibration is difficult to achieve and maintain. We argue that for tasks like recognition, figure/ground separation is more important than 3D depth reconstruction, and demonstrate a stereo algorithm that supports figure/ground separation without 3D reconstruction.
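A short numeric sketch of the sensitivity argument: depth from disparity in a rectified pinhole stereo pair is Z = fB/d, so a fixed disparity bias from miscalibration produces a depth error that grows roughly as Z squared. The focal length and baseline values below are arbitrary assumptions for illustration.

```python
# Depth from disparity: Z = f*B/d, so a disparity bias dd gives a depth
# error of roughly Z^2 * dd / (f*B) -- worse the farther the scene point.
f_px = 800.0   # focal length in pixels -- assumed for illustration
B = 0.12       # stereo baseline in metres -- assumed for illustration

def depth(d_px):
    """Depth in metres from disparity in pixels."""
    return f_px * B / d_px

def rel_depth_error(Z, bias_px=0.5):
    """Relative depth error caused by a half-pixel disparity bias,
    e.g. from an imperfect or drifted calibration."""
    d = f_px * B / Z          # true disparity at depth Z
    return abs(depth(d - bias_px) - Z) / Z

errors = {Z: rel_depth_error(Z) for Z in (1.0, 5.0, 20.0)}
# The same half-pixel bias is harmless at 1 m but severe at 20 m.
```

Figure/ground separation, by contrast, only needs disparities to be ordered or thresholded, not converted into metric depth, which is why it tolerates much coarser calibration.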
|
8 |
Online Calibration of Camera Roll Angle / Dynamisk kalibrering av kamerarollvinkeln
de Laval, Astrid January 2013 (has links)
Modern cars are often equipped with a vision system that collects information about the car and its surroundings. Camera calibration is extremely important for maintaining high accuracy in automotive safety applications. The cameras are calibrated offline in the factory, but the mounting of the camera may change slowly over time. If the angles of the actual camera mounting are known, they can be compensated for in software; online calibration is therefore desirable. This master's thesis describes how to dynamically calibrate the roll angle. Two different methods have been implemented and compared. The first detects vertical edges in the image, such as houses and lamp posts. The second detects license plates on cars in front of the camera in order to calculate the roll angle. The two methods are evaluated and the results are discussed. The results of the two methods vary considerably, and the method that detects vertical edges turned out to give the best results.
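The vertical-edge method can be sketched roughly as follows: measure the signed deviation of near-vertical line segments from the image's vertical axis and take a robust statistic of those deviations as the roll estimate. This is an illustrative simplification under assumed sign conventions, not the thesis's implementation; segment detection itself (e.g. by a Hough transform) is omitted.

```python
import math
import numpy as np

def roll_from_edges(segments, max_dev_deg=15.0):
    """Estimate camera roll as the median signed deviation (radians) of
    near-vertical line segments from the image's vertical axis.
    segments: iterable of ((x1, y1), (x2, y2)) endpoint pairs."""
    devs = []
    for (x1, y1), (x2, y2) in segments:
        ang = math.atan2(x2 - x1, y2 - y1)                 # 0 for a vertical segment
        ang = (ang + math.pi / 2) % math.pi - math.pi / 2  # fold to [-pi/2, pi/2)
        if abs(ang) < math.radians(max_dev_deg):           # reject non-vertical edges
            devs.append(ang)
    return float(np.median(devs)) if devs else None

# Synthetic check: vertical posts seen through a camera rolled by 2 degrees.
theta = math.radians(2.0)
segs = []
for x0 in range(-100, 101, 25):
    p1 = (x0 * math.cos(theta), x0 * math.sin(theta))
    p2 = (x0 * math.cos(theta) - 50.0 * math.sin(theta),
          x0 * math.sin(theta) + 50.0 * math.cos(theta))
    segs.append((p1, p2))
roll = roll_from_edges(segs)   # magnitude recovers theta
```

The median makes the estimate robust to the occasional genuinely slanted edge that slips through the angular gate.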
|
9 |
A study of augmented reality for posting information to building images
Yang, Yi-Jang 08 September 2010 (has links)
Geographical image data efficiently help people with wayfinding in an unfamiliar environment. However, since display modes for geographical image data such as 2D maps and 3D virtual reality no longer meet users' needs, the newer technique of augmented reality (AR) has become a better and more effective graphical solution. Augmented reality is a 3D display technique based on computer vision, in which 3D virtual objects are combined with the 3D real environment interactively, dynamically, and in real time. It brings particular advantages to the display of building spatial data. This research aims to reveal more spatial information by seeing through buildings while standing outside them. The approach first uses a single camera to capture building features in serial images; second, it performs building recognition and tracking between reference images and serial images with the Speeded-Up Robust Features (SURF) algorithm. Third, the point correspondences between serial images are used to estimate camera parameters via computer vision techniques. Finally, the 3D model map of the buildings can be augmented onto the building images according to the camera parameters.
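The matching step between reference and serial images can be sketched as nearest-neighbour descriptor matching with Lowe's ratio test, a standard step after SURF extraction. The pure-NumPy sketch below assumes descriptor arrays are already available; it is illustrative, not the thesis's implementation.

```python
import numpy as np

def ratio_match(desc_ref, desc_img, ratio=0.75):
    """Nearest-neighbour descriptor matching with Lowe's ratio test:
    accept a match only if the best distance is clearly smaller than
    the second best.  Returns a list of (ref_idx, img_idx) pairs."""
    matches = []
    for i, d in enumerate(desc_ref):
        dists = np.linalg.norm(desc_img - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Synthetic check: reference descriptors are a shuffled, slightly noisy
# copy of the image descriptors, so the true matching is known.
rng = np.random.default_rng(1)
desc_img = rng.normal(size=(20, 32))
perm = rng.permutation(20)
desc_ref = desc_img[perm] + 0.01 * rng.normal(size=(20, 32))
matches = ratio_match(desc_ref, desc_img)
```

The surviving correspondences would then feed the camera-parameter estimation step, typically via a robust (RANSAC-style) fit to discard the remaining outliers.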
|
10 |
Recognition using tagged objects
Soh, Ling Min January 2000 (has links)
This thesis describes a method for the recognition of objects in an unconstrained environment with widely ranging illumination, imaged from unknown viewpoints against complicated backgrounds. The general problem is simplified by placing specially designed patterns on the object, which allows the pose determination problem to be solved easily. Several key components are involved in the proposed recognition approach, including pattern detection, pose estimation, model acquisition and matching, and searching and indexing the model database. Other crucial issues pertaining to the individual components of the recognition system, such as the choice of pattern, the reliability and accuracy of the pattern detector, pose estimator and matching, and the speed of the overall system, are also addressed. After establishing the methodological framework, experiments are carried out on a wide range of both synthetic and real data to illustrate the validity and usefulness of the proposed methods. The principal contribution of this research is a methodology for Tagged Object Recognition (TOR) in unconstrained conditions. A robust pattern (calibration chart) detector is developed for off-the-shelf use. To empirically assess the effectiveness of the pattern detector and the pose estimator under various scenarios, simulated data generated by a graphics rendering process is used. This simulated data provides ground truth, which is difficult to obtain in projected images. Using the ground truth, the detection error, which is usually ignored, can be analysed. For model matching, the Chamfer matching algorithm is modified to give a more reliable matching score. The technique facilitates reliable TOR. Finally, the results of extensive quantitative and qualitative tests are presented, showing the plausibility of practical use of TOR.
The features characterising the enabling technology developed are the ability to a) recognise an object tagged with the calibration chart, b) establish the camera position with respect to a landmark, and c) test any camera calibration and 3D pose estimation routine, thus facilitating future research and applications in mobile robot navigation, 3D reconstruction and stereo vision.
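Chamfer matching, the model-matching technique named above, can be sketched as follows: precompute a distance transform of the image's edge map, then score a translated template by the mean distance from its edge points to the nearest image edge. The brute-force distance transform below is an illustrative simplification (real systems use a linear-time transform), and it does not reproduce the thesis's specific modification of the matching score.

```python
import numpy as np

def distance_transform(edge_map):
    """Brute-force Euclidean distance transform of a small binary edge
    map: distance from every pixel to its nearest edge pixel."""
    ys, xs = np.nonzero(edge_map)
    pts = np.stack([ys, xs], axis=1)
    h, w = edge_map.shape
    gy, gx = np.mgrid[0:h, 0:w]
    grid = np.stack([gy.ravel(), gx.ravel()], axis=1)
    d = np.sqrt(((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).min(1)
    return d.reshape(h, w)

def chamfer_score(template_pts, dist_map, offset):
    """Mean distance from the translated template's edge points to the
    nearest image edge: lower means a better match."""
    oy, ox = offset
    return float(dist_map[template_pts[:, 0] + oy,
                          template_pts[:, 1] + ox].mean())

# Synthetic check: an L-shaped template hidden in the image at (3, 4).
template = np.array([(0, 0), (0, 1), (0, 2), (1, 0), (2, 0)])
img = np.zeros((16, 16), dtype=bool)
for y, x in template:
    img[y + 3, x + 4] = True
dmap = distance_transform(img)
best = min((chamfer_score(template, dmap, (oy, ox)), (oy, ox))
           for oy in range(10) for ox in range(10))
```

Because the distance map is computed once, scanning many candidate template poses is cheap, which is what makes Chamfer matching attractive for the search over the model database.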
|