51 |
Lens Distortion Correction Without Camera Access / Linsdistorsionskorrigering utan kameratillgång Olsson, Emily January 2022 (has links)
Lens distortions appear in almost all digital images and cause straight lines to appear curved. This contributes to errors in position estimation and 3D reconstruction, so it is of interest to correct for the distortion. If the camera is available, the distortion parameters can be obtained by calibrating it. When the camera is unavailable, however, the distortion parameters cannot be found with the standard camera calibration technique and other approaches must be used. Recently, variants of Perspective-n-Point (PnP) extended with lens distortion and focal length parameters have been proposed. Given a set of 2D-3D point correspondences, these PnP-based methods can estimate distortion parameters without access to the camera, or for a camera with modified settings. In this thesis, the performance of PnP-based methods is compared to Zhang's camera calibration method. The methods are compared both quantitatively, using the errors in reprojection and distortion parameters, and qualitatively, by comparing images before and after lens distortion correction. A test set for the comparison was obtained from a camera and a 3D laser scanner of an indoor scene. The results indicate that one of the PnP-based models can achieve a reprojection error similar to the baseline method for one of the cameras. Two PnP-based models could also reduce lens distortion when visually comparing the test images to the baseline. Moreover, a model can have a small reprojection error even though the distortion coefficient error is large and the lens distortion is not completely removed. This indicates that it is important to include both quantitative measures, such as reprojection and distortion coefficient errors, and qualitative results when comparing lens distortion correction methods. Finally, PnP-based models with more parameters in the estimation are more sensitive to noise.
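The radial distortion model underlying both Zhang-style calibration and the PnP variants discussed here can be sketched in a few lines. This is a minimal illustration of the standard two-coefficient polynomial model; the coefficient values used are invented for the example, not taken from the thesis:

```python
import numpy as np

def distort(p, k1, k2):
    """Apply the two-parameter radial distortion model to a
    normalized image point p = (x, y)."""
    r2 = p[0]**2 + p[1]**2
    scale = 1.0 + k1 * r2 + k2 * r2**2
    return p * scale

def undistort(p_d, k1, k2, iters=10):
    """Invert the radial model by fixed-point iteration: repeatedly
    divide the distorted point by the scale evaluated at the current
    estimate. Converges quickly for mild distortion."""
    p = p_d.copy()
    for _ in range(iters):
        r2 = p[0]**2 + p[1]**2
        scale = 1.0 + k1 * r2 + k2 * r2**2
        p = p_d / scale
    return p
```

Correcting an image amounts to applying `undistort` to every (normalized) pixel coordinate; the quality of the result depends entirely on how well k1 and k2 were estimated, which is exactly what the compared methods differ on.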
|
52 |
Multiple View Geometry For Video Analysis And Post-production Cao, Xiaochun 01 January 2006 (has links)
Multiple view geometry is the foundation of an important class of computer vision techniques for the simultaneous recovery of camera motion and scene structure from a set of images. There are numerous important applications in this area, including video post-production, scene reconstruction, registration, surveillance, tracking, and segmentation. In video post-production, the topic addressed in this dissertation, computer analysis of the camera's motion can replace the manual methods currently used to correctly align an artificially inserted object in a scene. However, existing single view methods typically require multiple vanishing points, and therefore fail when only one vanishing point is available. In addition, current multiple view techniques, making use of either epipolar geometry or the trifocal tensor, do not fully exploit the properties of constant or known camera motion. Finally, there is no general solution to the problem of synchronizing N video sequences of distinct general scenes captured by cameras undergoing similar ego-motions, a necessary step for video post-production across different input videos. This dissertation proposes several advancements that overcome these limitations, and uses them to develop an efficient framework for video analysis and post-production with multiple cameras. In the first part of the dissertation, novel inter-image constraints are introduced that are particularly useful for scenes where minimal information is available. This result extends the current state of the art in single view geometry to situations where only one vanishing point is available. The property of constant or known camera motion is also exploited for applications such as calibration of a network of cameras in video surveillance systems, and Euclidean reconstruction from turn-table image sequences in the presence of zoom and focus.
We then propose a new framework for the estimation and alignment of camera motions, including both simple (panning, tracking and zooming) and complex (e.g. hand-held) camera motions. Accuracy of these results is demonstrated by applying our approach to video post-production applications such as video cut-and-paste and shadow synthesis. As realistic image-based rendering problems, these applications require extreme accuracy in the estimation of camera geometry, the position and the orientation of the light source, and the photometric properties of the resulting cast shadows. In each case, the theoretical results are fully supported and illustrated by both numerical simulations and thorough experimentation on real data.
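The single-vanishing-point setting discussed above is easy to illustrate with homogeneous coordinates: the images of parallel 3D lines intersect at their vanishing point, and both the joining line and the intersection are cross products. A minimal sketch (the point coordinates are invented for illustration):

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points given as (x, y)."""
    return np.cross([*p, 1.0], [*q, 1.0])

def vanishing_point(l1, l2):
    """Intersection of two homogeneous lines; for the images of
    parallel 3D lines this is their common vanishing point."""
    v = np.cross(l1, l2)
    return v[:2] / v[2]
```

For example, the image lines through (0, 0)-(4, 1) and (0, 2)-(4, 2.5) converge toward the right of the frame; `vanishing_point` returns where they meet.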
|
53 |
A Fully Automated Geometric Lens Distortion Correction Method Mannuru, Sravanthi January 2011 (has links)
No description available.
|
54 |
Principal Point Determination for Camera Calibration Alturki, Abdulrahman S. 24 August 2017 (has links)
No description available.
|
55 |
Layered Sensing Using Master-Slave Cameras McLemore, Donald Rodney, Jr. 01 October 2009 (has links)
No description available.
|
56 |
Position and Orientation of a Front Loader Bucket using Stereo Vision Moin, Asad Ibne January 2011 (has links)
Stereopsis, or stereo vision, is a technique widely used in computer vision to perceive the 3D structure and distance of a scene from two images taken at different viewpoints, much as a human visualizes a scene using both eyes. The research involves object matching by extracting features from images, and includes preliminary tasks such as camera calibration, correspondence, reconstruction of images taken by a stereo vision unit, and 3D construction of an object. The main goal of this work is to estimate the position and orientation of the front loader bucket of an autonomous mobile robot built on a work machine named 'Avant', which carries a stereo vision unit and several other sensors and is designed for outdoor operations such as excavation. Several feature-detection algorithms, including the two most prominent, SIFT and SURF, have been considered for image matching and object recognition. Both algorithms find interest points in an image in different ways, which accelerates the feature extraction procedure, but the time required for matching remains an important issue in both cases. Since the machine performs loading and unloading tasks, dust and other particles can be a major obstacle to recognizing the bucket in the workspace; it has also been observed that the hydraulic arm and other equipment enter the field of view of the cameras, which makes the task more challenging. The use of markers has been considered as a solution to these problems. Moreover, the outdoor environment is very different from an indoor one, and object matching is far more challenging due to factors such as light, shadows, and the environment that change the features in a scene very rapidly.
Although the work focuses on position and orientation estimation, fuller use of stereo vision, such as environment perception or ground modeling, can be an interesting avenue for future research.
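The SIFT/SURF matching step discussed above typically ends with nearest-neighbour descriptor matching under Lowe's ratio test, which discards ambiguous correspondences. A minimal numpy sketch of that test (the descriptors below are toy 2-D vectors; real SIFT descriptors are 128-D, but the logic is identical):

```python
import numpy as np

def match_ratio_test(desc1, desc2, ratio=0.75):
    """Brute-force nearest-neighbour matching with Lowe's ratio test:
    keep a match only when the best distance is clearly smaller than
    the second best, suppressing ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j, k = np.argsort(dists)[:2]   # best and second-best candidate
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches
```

Tightening `ratio` trades recall for precision, which matters in the dusty, cluttered workspace described above, where many descriptors look alike.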
|
57 |
Kalman Filter Based Fusion Of Camera And Inertial Sensor Measurements For Body State Estimation Aslan Aydemir, Gokcen 01 September 2009 (has links) (PDF)
The focus of the present thesis is on the joint use of cameras and inertial sensors, a
recent area of active research. Within our scope, the performance of body state
estimation is investigated with isolated inertial sensors, isolated cameras and finally
with a fusion of two types of sensors within a Kalman Filtering framework. The
study consists of both simulation and real hardware experiments. The body state
estimation problem is restricted to a single axis rotation where we estimate turn angle
and turn rate. This experimental setup provides a simple but effective means of
assessing the benefits of the fusion process. Additionally, a sensitivity analysis is
carried out in our simulation experiments to explore the sensitivity of the estimation
performance to varying levels of calibration errors. It is shown by experiments that
state estimation is more robust to calibration errors when the sensors are used jointly.
For the fusion of sensors, the Indirect Kalman Filter is considered as well as the
Direct Form Kalman Filter. This comparative study allows us to assess the
contribution of an accurate system dynamical model to the final state estimates.
Our simulation and real hardware experiments effectively show that the fusion of the
sensors eliminates the unbounded error growth characteristic of inertial sensors, while
the final state estimation outperforms the use of cameras alone. Overall, we
demonstrate that the Kalman-based fusion results in bounded-error, high-performance
estimation of the body state. The results are promising and suggest that these benefits
can be extended to body state estimation for multiple degrees of freedom.
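The single-axis setup described above (state = turn angle and turn rate, with the camera contributing an angle measurement) can be sketched as one predict/update cycle of a standard Kalman filter. This is a minimal direct-form sketch, not the thesis's implementation; the noise levels `q` and `r_cam` are illustrative placeholders, not tuned values:

```python
import numpy as np

def kf_step(x, P, z, dt, q=1e-3, r_cam=1e-2):
    """One predict/update cycle of a constant-rate Kalman filter.
    State x = [angle, rate]; z is a camera angle measurement."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant turn-rate dynamics
    Q = q * np.eye(2)                      # process noise
    H = np.array([[1.0, 0.0]])             # camera observes angle only
    R = np.array([[r_cam]])
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the camera measurement
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Gyro readings would enter either through the prediction step (driving `F` with the measured rate) or as a second measurement row in `H`; the bounded-error behaviour reported above comes from the periodic angle updates arresting the drift that pure rate integration accumulates.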
|
58 |
Parameter Extraction And Image Enhancement For Catadioptric Omnidirectional Cameras Bastanlar, Yalin 01 April 2005 (has links) (PDF)
In this thesis, catadioptric omnidirectional imaging systems are analyzed in detail. Omnidirectional image (ODI) formation characteristics of different camera-mirror configurations are examined and geometrical relations for panoramic and perspective image generation with common mirror types are summarized.
A method is developed to determine the unknown parameters of a hyperboloidal-mirrored system using the world coordinates of a set of points and their corresponding image points on the ODI. A linear relation between the parameters of the hyperboloidal mirror is determined as well. Conducted research and findings are instrumental for calibration of such imaging systems.
The resolution problem caused by up-sampling when transferring pixels from the ODI to the panoramic image is defined. The enhancing effects of standard interpolation methods on panoramic images are analyzed, and edge detection-based techniques are developed to improve the resolution quality of the panoramic images. The projection surface alternatives for generating panoramic images are also evaluated.
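The ODI-to-panorama transfer that causes the up-sampling problem can be sketched as a polar-to-Cartesian resampling: each panorama column is an azimuth, each row a radius between the mirror's inner and outer image circles. This minimal version uses nearest-neighbour sampling (the crude baseline whose artifacts the interpolation techniques above aim to reduce); the centre and radii are assumed values, not those of a calibrated system:

```python
import numpy as np

def odi_to_panorama(odi, cx, cy, r_in, r_out, width, height):
    """Unwrap a catadioptric omnidirectional image into a panorama by
    sampling along radial lines (nearest-neighbour)."""
    pano = np.zeros((height, width), dtype=odi.dtype)
    for v in range(height):
        # panorama row -> radius between the outer and inner circles
        r = r_out - (r_out - r_in) * v / (height - 1)
        for u in range(width):
            theta = 2.0 * np.pi * u / width   # panorama column -> azimuth
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if 0 <= y < odi.shape[0] and 0 <= x < odi.shape[1]:
                pano[v, u] = odi[y, x]
    return pano
```

Near the inner circle the same few ODI pixels are stretched across the full panorama width, which is precisely the up-sampling resolution loss the thesis addresses.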
|
59 |
Camera Controlled Pick And Place Application With Puma 760 Robot Durusu, Deniz 01 December 2005 (links) (PDF)
This thesis analyzes the kinematic structure of the PUMA 760 arm and presents an image-based pick-and-place application that accounts for obstacles in the environment. Forward and inverse kinematic solutions of the PUMA 760 are derived. Control software has been developed to compute both the forward and inverse kinematics of this manipulator. The control program enables the user to perform both offline programming and real-time execution by transmitting VAL (Variable Assembly Language) commands to the control computer.
Using the proposed inverse kinematics solutions, an interactive application is implemented on the PUMA 760 arm. A picture of the workspace is taken with a fixed camera mounted above the robot workspace. The captured image is then processed to find the position and distribution of all objects in the workspace. The target is differentiated from the obstacles by analyzing specific properties of the objects, e.g. roundness. After determining the configuration of the workspace, a clustering-based search algorithm is executed to find a path that picks up the target object and places it at the desired location. The trajectory points, in pixel coordinates, are mapped into robot workspace coordinates using the camera calibration matrix obtained when calibrating the robot arm with respect to the attached camera. The joint angles required to bring the end effector to the desired location are computed with a Jacobian-type inverse kinematics algorithm. The VAL commands are then generated and sent to the control computer of the PUMA 760 to pick up the object and place it at a user-defined location.
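The pixel-to-workspace mapping described above can be illustrated with a planar homography fitted by the direct linear transform (DLT), since the objects lie on the workspace plane. This is a sketch of the general idea, not the thesis's actual calibration matrix; the correspondences below are invented:

```python
import numpy as np

def fit_homography(px, world):
    """Estimate the 3x3 planar homography mapping pixel coordinates to
    workspace coordinates from >= 4 correspondences (DLT, via SVD)."""
    A = []
    for (u, v), (x, y) in zip(px, world):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)   # null vector = homography up to scale

def pixel_to_world(H, u, v):
    """Map one pixel into workspace coordinates."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]
```

With the four corners of a calibration plate as correspondences, every trajectory point found in the image can be mapped into metric workspace coordinates before the inverse kinematics step.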
|
60 |
Um sistema de calibração de câmera / A camera calibration system Marques, Clarissa Codá dos Santos Cavalcanti 05 February 2007 (has links)
A camera calibration procedure determines the digital geometric and optical characteristics of the camera from a known initial data set. The problem can be divided into three steps: a) acquisition of the initial data; b) the calibration process itself; and c) optimization. This work presents the development of a calibration tool based on a generic architecture suitable for any calibration approach. To this end, the system allows each calibration step to be customized, and new calibration procedures are introduced dynamically, allowing greater integration and flexibility between the modules of the system. / Fundação de Amparo a Pesquisa do Estado de Alagoas
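The plug-in architecture the abstract describes, where new calibration procedures are registered dynamically and run through an acquisition/calibration/optimization pipeline, might look like this in outline. All names here are hypothetical illustrations, not taken from the actual system:

```python
# Registry of calibration back-ends; new procedures are added at
# runtime, mirroring the dynamic plug-in architecture described above.
CALIBRATORS = {}

def register(name):
    """Decorator that registers a calibration procedure under a name."""
    def wrap(fn):
        CALIBRATORS[name] = fn
        return fn
    return wrap

def run_pipeline(method, acquire, optimize):
    """Three-step pipeline: acquisition -> calibration -> optimization.
    Each step is a pluggable callable, so any of them can be swapped."""
    data = acquire()
    params = CALIBRATORS[method](data)
    return optimize(params)

@register("pinhole_stub")
def pinhole_stub(data):
    # hypothetical back-end: returns a trivial parameter dict
    return {"focal": sum(data) / len(data)}
```

A new calibration method is added simply by decorating another function with `@register(...)`; the pipeline itself never changes, which is the integration-and-flexibility property the abstract claims.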
|