71 |
Object Detection and Tracking Using Uncalibrated Cameras. Amara, Ashwini, 14 May 2010.
This thesis considers the problem of tracking an object in world coordinates using measurements obtained from multiple uncalibrated cameras. A general approach to tracking the location of a target involves several phases: calibrating the cameras, detecting the object's feature points across frames, tracking the object over frames, and analyzing the object's motion and behavior. The approach presented here consists of two stages. First, the problem of camera calibration using a calibration object is studied. This step recovers the camera parameters from the known 3D locations of ground data and their corresponding image coordinates. The second part of this work develops an automated system to estimate the trajectory of the object in 3D from image sequences, achieved by combining, adapting and integrating several state-of-the-art algorithms. Synthetic data based on a nearly constant velocity object motion model is used to evaluate the performance of the camera calibration and state estimation algorithms.
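Recovering camera parameters from known 3D locations and their image projections is typically posed as a linear estimation problem. The thesis does not spell out the exact formulation here, so the sketch below shows a generic Direct Linear Transform (DLT) estimate of a 3x4 projection matrix; the function names and the reprojection check are illustrative, not taken from the thesis.

```python
import numpy as np

def calibrate_dlt(points_3d, points_2d):
    """Estimate a 3x4 camera projection matrix from known 3D points and
    their 2D image projections using the Direct Linear Transform (DLT).

    points_3d: (N, 3) world coordinates of the calibration object
    points_2d: (N, 2) corresponding pixel coordinates (N >= 6)
    """
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(A, dtype=float)
    # The projection matrix is the right singular vector of A with the
    # smallest singular value, reshaped to 3x4. It is defined only up to scale.
    _, _, vt = np.linalg.svd(A)
    P = vt[-1].reshape(3, 4)
    return P / np.linalg.norm(P)

def reproject(P, points_3d):
    """Project the 3D points with P to check the reprojection error."""
    X_h = np.hstack([np.asarray(points_3d, float), np.ones((len(points_3d), 1))])
    x_h = (P @ X_h.T).T
    return x_h[:, :2] / x_h[:, 2:3]
```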
|
72 |
Vehicle detection and tracking using wireless sensors and video cameras. Bandarupalli, Sowmya, 6 August 2009.
This thesis presents the development of a surveillance testbed using wireless sensors and video cameras for vehicle detection and tracking. The experimental study includes the testbed design and discusses some of the implementation issues in using wireless sensors and video cameras for a practical application. A group of sensor devices equipped with light sensors is used to detect and localize the position of a moving vehicle. A background subtraction method is used to detect the moving vehicle in the video sequences, and the vehicle centroid is calculated in each frame. A non-linear minimization method is used to estimate the perspective transformation that projects 3D points to 2D image points. Vehicle location estimates from three cameras are fused to form a single trajectory representing the vehicle motion. Experimental results using both sensors and cameras are presented. The average error between vehicle location estimates from the cameras and the wireless sensors is around 0.5 ft.
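The abstract names background subtraction and per-frame centroid computation but not a specific algorithm. The sketch below assumes an OpenCV MOG2 subtractor followed by a largest-contour centroid as one plausible realisation; the video file name and thresholds are placeholders.

```python
import cv2

cap = cv2.VideoCapture("traffic.avi")   # hypothetical video source
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

centroids = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Suppress noise and shadow pixels (MOG2 marks shadows as 127), keep the largest blob.
    mask = cv2.medianBlur(mask, 5)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        continue
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] > 0:
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        centroids.append((cx, cy))   # per-frame vehicle centroid in pixels
```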
|
73 |
Vision Based Control for Industrial Robots: Research and implementation. Morilla Cabello, David, January 2019.
The automation revolution already helps with many tasks that are now performed by robots. The increasing complexity of problems involving robot manipulators requires new approaches or alternatives to solve them. This project comprises research into the available software for implementing easy and fast visual servoing tasks to control a robot manipulator, focusing on out-of-the-box solutions. The tools found are then applied to implement a solution for controlling an arm from Universal Robots; the task is to follow a moving object on a plane with the robot manipulator. The research compares the most popular software and state-of-the-art alternatives, especially for computer vision and robot control. The implementation aims to be a proof of concept of a system divided by functionality (computer vision, path generation and robot control) in order to allow software modularity and exchangeability. The results show various options for each subsystem to take into consideration. The implementation was completed successfully, showing the effectiveness of the alternatives examined. The chosen software is MATLAB and Simulink for computer vision and trajectory calculation, interfacing with the Robot Operating System (ROS). ROS is used to control a UR3 arm through the ros_control and ur_modern_driver packages. Both the research and the implementation present a first approach for further applications and an understanding of current technologies for visual servoing tasks. These alternatives offer easy, fast, and flexible methods for confronting complex computer vision and robot control problems.
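On the robot-control side, ros_control and ur_modern_driver expose the arm through the standard FollowJointTrajectory action interface. The sketch below shows how a small rospy node might send a joint target produced by the vision/path-generation stage; the node name, action name and target values are assumptions, not taken from the thesis.

```python
#!/usr/bin/env python
import rospy
import actionlib
from control_msgs.msg import FollowJointTrajectoryAction, FollowJointTrajectoryGoal
from trajectory_msgs.msg import JointTrajectoryPoint

UR_JOINTS = ["shoulder_pan_joint", "shoulder_lift_joint", "elbow_joint",
             "wrist_1_joint", "wrist_2_joint", "wrist_3_joint"]

def send_joint_target(client, joint_positions, seconds):
    """Send a single-point joint trajectory to the arm's trajectory controller."""
    goal = FollowJointTrajectoryGoal()
    goal.trajectory.joint_names = UR_JOINTS
    point = JointTrajectoryPoint()
    point.positions = joint_positions
    point.time_from_start = rospy.Duration(seconds)
    goal.trajectory.points.append(point)
    client.send_goal(goal)
    client.wait_for_result()

if __name__ == "__main__":
    rospy.init_node("vision_based_commander")
    # The action name depends on how the driver/controllers are launched;
    # "follow_joint_trajectory" is an assumption here.
    client = actionlib.SimpleActionClient("follow_joint_trajectory",
                                          FollowJointTrajectoryAction)
    client.wait_for_server()
    # Example target, standing in for output of the vision and path-generation stages.
    send_joint_target(client, [0.0, -1.57, 1.57, -1.57, -1.57, 0.0], 3.0)
```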
|
74 |
[en] A STUDY OF TECHNIQUES FOR SHAPE ACQUISITION USING STEREO AND STRUCTURED LIGHT AIMED FOR ENGINEERING / [pt] UM ESTUDO DAS TÉCNICAS DE OBTENÇÃO DE FORMA A PARTIR DE ESTÉREO E LUZ ESTRUTURADA PARA ENGENHARIA. Gabriel Tavares Malizia Alves, 26 August 2005.
[en] There is a growing demand for the creation of computer models of real objects for engineering projects. A cheap and effective alternative is to use Computer Vision techniques based on cameras and projectors available on the personal computer market. This work evaluates an active stereo optical system for capturing the geometric shape of objects using a pair of cameras and a digital projector. The system builds on ideas from earlier work, and this dissertation makes two contributions. The first is a more robust technique for detecting salient points in the camera calibration patterns. The second is a new method for fitting cylinders, aimed at applying the studied system to the inspection of industrial piping installations. The conclusions assess the robustness and precision of the proposed system as a measurement instrument in Engineering.
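As a point of reference for the calibration-pattern contribution, the sketch below shows the standard chessboard-corner detection and camera calibration pipeline, using OpenCV, that such a technique improves upon; the pattern geometry, square size and file names are assumptions.

```python
import cv2
import numpy as np
import glob

# Pattern geometry is an assumption; the dissertation does not fix a board size here.
PATTERN = (9, 6)          # inner corners per row / column
SQUARE_SIZE = 25.0        # mm

objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):          # hypothetical file names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        continue
    # Refine corner locations to sub-pixel accuracy.
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```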
|
75 |
The Application of Index Based, Region Segmentation, and Deep Learning Approaches to Sensor Fusion for Vegetation Detection. Stone, David L., 1 January 2019.
This thesis investigates the application of index based, region segmentation, and deep learning methods to the fusion of omnidirectional (O-D) infrared (IR) sensors, Kinect sensors, and O-D vision sensors, with the aim of increasing the level of intelligent perception for unmanned robotic platforms. The first goal of this work is to provide a more robust calibration approach and improve the calibration of low-resolution, noisy O-D IR cameras. The second goal is to explore the best approach to sensor fusion for vegetation detection. We examine index based, region segmentation, and deep learning methods and compare them, seeking a significant reduction in false positives while maintaining reasonable vegetation detection.
The results are as follows:
Direct Spherical Calibration of the IR camera provided more consistent and robust calibration board capture and gave the best overall calibration results, with sub-pixel accuracy.
The best approach to sensor fusion for vegetation detection was the deep learning approach; the three methods are detailed in the following chapters, with the results summarized here.
The modified Normalized Difference Vegetation Index (NDVI) approach achieved 86.74% recognition with 32.5% false positives, with peaks up to 80%.
Thermal Region Fusion (TRF) achieved a lower recognition rate of 75.16% but reduced false positives to 11.75% (a 64% reduction).
Our Deep Learning Fusion Network (DeepFuseNet) showed the best results, with a significant (92%) reduction in false positives compared to our modified NDVI approach; recognition was 95.6% with 2% false positives.
Current approaches are primarily focused on O-D color vision for localization, mapping, and tracking, and do not adequately address the application of these sensors to vegetation detection. We demonstrate the contrast between current approaches and our deep sensor fusion (DeepFuseNet) for vegetation detection. The combination of O-D IR and O-D color vision, coupled with deep learning for the extraction of vegetation material type, has great potential for robot perception. This thesis examines two architectures for the fusion of O-D IR and O-D visual sensors: 1) autoencoder feature extractors feeding a deep convolutional neural network (CNN) fusion network (DeepFuseNet), and 2) bottleneck CNN feature extractors feeding a deep CNN fusion network (DeepFuseNet). We show that the vegetation recognition rate is greatly improved, and the number of false detects inherent in classical index-based spectral decomposition greatly reduced, using our DeepFuseNet architecture.
We first investigate the calibration of an omnidirectional infrared (IR) camera for intelligent perception applications. The edge boundaries in low-resolution omnidirectional (O-D) IR images are not as sharp as in color vision cameras, and as a result the standard calibration methods were harder to use and less accurate with the low definition of the omnidirectional IR camera. To more fully address omnidirectional IR camera calibration, we propose a new control point discovery methodology based on calibration grid center coordinates, and a Direct Spherical Calibration (DSC) approach for a more robust and accurate calibration. DSC addresses the limitations of the existing methods by using the spherical coordinates of the centroid of the calibration board to directly triangulate the location of the camera center and iteratively solve for the camera parameters. We compare DSC to three baseline visual calibration methodologies and augment them with additional output of the spherical results for comparison. We also look at the optimum number of calibration boards, using an evolutionary algorithm and Pareto optimization to find the best method and the best combination of accuracy, methodology and number of calibration boards. The benefits of DSC are more efficient calibration board geometry selection and better accuracy than the three baseline visual calibration methodologies.
In the context of vegetation detection, the fusion of omnidirectional (O-D) infrared (IR) and color vision sensors may increase the level of vegetation perception for unmanned robotic platforms. A literature search found no significant research in our area of interest: the fusion of O-D IR and O-D color vision sensors for the extraction of feature material type has not been adequately addressed. We look at augmenting index-based spectral decomposition with IR region-based spectral decomposition to address the number of false detects inherent in index-based spectral decomposition alone. Our work shows that fusing the Normalized Difference Vegetation Index (NDVI) from the O-D color camera with the thresholded IR signature region associated with the vegetation minimizes the number of false detects seen with NDVI alone. The contribution of this work is the demonstration of two new techniques, including the Thresholded Region Fusion (TRF) technique for the fusion of O-D IR and O-D color. We also look at the Kinect vision sensor fused with the O-D IR camera. Our experimental validation demonstrates a 64% reduction in false detects with our method compared to classical index-based detection.
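As an illustration of index/region fusion, the sketch below combines a per-pixel vegetation index with a thresholded thermal mask. Note the caveats: it uses the textbook NDVI definition with a near-infrared band and made-up thresholds, whereas the thesis works with a modified NDVI computed from the O-D color camera and its own thresholding scheme.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index, computed per pixel."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)

def fused_vegetation_mask(nir, red, thermal,
                          ndvi_thresh=0.3, thermal_lo=280.0, thermal_hi=305.0):
    """Keep only pixels that look like vegetation in both modalities.

    All thresholds here are illustrative assumptions, not values from the thesis:
    vegetation-like NDVI above ~0.3, and a thermal band (in kelvin) consistent
    with living plant material rather than hot background clutter.
    """
    veg_index = ndvi(nir, red) > ndvi_thresh
    veg_thermal = (thermal >= thermal_lo) & (thermal <= thermal_hi)
    # Fusion: an index detection only survives if the registered IR pixel agrees,
    # which is what suppresses most of the index-only false positives.
    return veg_index & veg_thermal
```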
We finally compare our DeepFuseNet results with our previous work using the Normalized Difference Vegetation Index (NDVI) and IR region-based spectral fusion. This work shows that fusing the O-D IR and O-D visual streams with our DeepFuseNet deep learning approach outperforms the previous NDVI fused with far-infrared region segmentation. Our experimental validation demonstrates a 92% reduction in false detects with our method compared to classical index-based detection. This work contributes a new technique for the fusion of O-D vision and O-D IR sensors using two deep CNN feature extractors feeding into a fully connected CNN fusion network (DeepFuseNet).
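A minimal two-branch fusion network in the spirit of this architecture might look like the PyTorch sketch below: one CNN encodes the IR stream, another encodes the color stream, and their concatenated features feed a small fully connected head. All layer sizes and names are illustrative, not the DeepFuseNet configuration.

```python
import torch
import torch.nn as nn

class BranchEncoder(nn.Module):
    """Small CNN feature extractor for one modality (IR or visual).
    Stands in for the autoencoder/bottleneck extractors described in the thesis."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),                      # -> 32 * 4 * 4 features per image
        )

    def forward(self, x):
        return self.net(x)

class FusionNet(nn.Module):
    """Concatenate per-modality features and classify vegetation vs. background."""
    def __init__(self):
        super().__init__()
        self.ir_branch = BranchEncoder(in_channels=1)    # thermal image
        self.rgb_branch = BranchEncoder(in_channels=3)   # color image
        self.head = nn.Sequential(
            nn.Linear(2 * 32 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, 2),               # vegetation / not vegetation
        )

    def forward(self, ir, rgb):
        fused = torch.cat([self.ir_branch(ir), self.rgb_branch(rgb)], dim=1)
        return self.head(fused)

# Shape check with dummy inputs (batch of 8 registered IR/RGB patches).
logits = FusionNet()(torch.randn(8, 1, 64, 64), torch.randn(8, 3, 64, 64))
```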
|
76 |
Lateral Position Detection Using a Vehicle-Mounted Camera. Ågren, Elisabeth, January 2003.
A complete prototype system for measuring vehicle lateral position has been set up during the course of this master's thesis project. In the development of the software, images acquired from a backward-looking video camera mounted on the roof of the vehicle were used.

The problem of using computer vision to measure lateral position can be divided into road marking detection and lateral position extraction. Since the strongest characteristics of a road marking image are the edges of the road markings, the road marking detection step is based on edge detection. For the detection of the straight edge lines, a Hough-based method was chosen. Due to peak spreading in Hough space, the difficulty of detecting the correct peak in Hough space was encountered, and a flexible Hough peak detection algorithm was developed based on an adaptive window that takes peak spreading into account. The road marking candidate found by the system is verified before the lateral position data is generated. Good performance of the road marking tracking algorithm was obtained by exploiting temporal correlation to update a search region within the image. A camera calibration made the extraction of real-world lateral position information and yaw angle data possible.

This vision-based method proved to be very accurate. The standard deviation of the error in the position detection is 0.012 m within an operating range of ±2 m from the image centre. During continuous road markings the rate of valid data is on average 96%, whereas it drops to around 56% for sections with intermittent road markings. The system performs well during lane change manoeuvres, which is an indication that the system tracks the correct road marking. This prototype system is a robust and automatic measurement system, which will benefit VTI in its many driving behaviour research programs.
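For reference, a baseline edge-detection plus standard Hough line transform stage, which the adaptive peak-detection window described above refines, could look like the following OpenCV sketch; the thresholds and file name are assumptions.

```python
import cv2
import numpy as np

def detect_marking_lines(gray, canny_lo=50, canny_hi=150, hough_thresh=120):
    """Baseline edge + Hough line detection for road markings.

    Thresholds are illustrative; the thesis adds an adaptive peak-detection
    window in Hough space on top of this kind of pipeline.
    """
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    # Standard Hough transform: returns (rho, theta) for each detected line.
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=hough_thresh)
    return [] if lines is None else [tuple(l[0]) for l in lines]

frame = cv2.imread("road_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frame
for rho, theta in detect_marking_lines(frame):
    print(f"line: rho={rho:.1f} px, theta={np.degrees(theta):.1f} deg")
```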
|
77 |
Variable-aperture Photography. Hasinoff, Samuel William, 19 January 2009.
While modern digital cameras incorporate sophisticated engineering, in terms of their core functionality, cameras have changed remarkably little in more than a hundred years. In particular, from a given viewpoint, conventional photography essentially remains limited to manipulating a basic set of controls: exposure time, focus setting, and aperture setting.
In this dissertation we present three new methods in this domain, each based on capturing multiple photos with different camera settings. In each case, we show how defocus can be exploited to achieve different goals, extending what is possible with conventional photography. These methods are closely connected, in that all rely on analyzing changes in aperture.
First, we present a 3D reconstruction method especially suited for scenes with high geometric complexity, for which obtaining a detailed model is difficult using previous approaches. We show that by controlling both the focus and aperture setting, it is possible to compute depth for each pixel independently. To achieve this, we introduce the "confocal constancy" property, which states that as the aperture setting varies, the pixel intensity of an in-focus scene point will vary in a scene-independent way that can be predicted by prior calibration.
Second, we describe a method for synthesizing photos with adjusted camera settings in post-capture, to achieve changes in exposure, focus setting, etc. from very few input photos. To do this, we capture photos with varying aperture and other settings fixed, then recover the underlying scene representation best reproducing the input. The key to the approach is our layered formulation, which handles occlusion effects but is tractable to invert. This method works with the built-in "aperture bracketing" mode found on most digital cameras.
Finally, we develop a "light-efficient" method for capturing an in-focus photograph in the shortest time, or with the highest quality for a given time budget. While the standard approach involves reducing the aperture until the desired region is in focus, we show that by "spanning" the region with multiple large-aperture photos, we can reduce the total capture time and generate the in-focus photo synthetically. Beyond more efficient capture, our method provides 3D shape at no additional cost.
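A simplified back-of-the-envelope argument (an illustrative sketch, not the dissertation's derivation) for why spanning with large apertures saves time: assume depth of field scales roughly as 1/D and light gathered per unit time as the aperture area D^2, for aperture diameter D and a fixed target exposure level L.

```latex
\begin{align*}
  \tau_{\text{single}} &\propto \frac{L}{D^2}
    && \text{one small-aperture photo covering the whole depth range,} \\
  \tau_{\text{span}}   &\propto n \cdot \frac{L}{(nD)^2} = \frac{L}{n D^2}
    && \text{$n$ photos at diameter $nD$, each covering $1/n$ of the range.}
\end{align*}
```

Under these assumptions, spanning the region with n large-aperture photos is roughly n times faster than a single small-aperture exposure, at the cost of synthesizing the final in-focus image from the stack.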
|
80 |
Structure-from-motion For Systems With Perspective And Omnidirectional Cameras. Bastanlar, Yalin, 1 July 2009.
In this thesis, a pipeline for structure-from-motion with mixed camera types is described and methods for the steps of this pipeline to make it effective and automatic are proposed. These steps can be summarized as calibration, feature point matching, epipolar geometry and pose estimation, triangulation and bundle adjustment. We worked with catadioptric omnidirectional and perspective cameras and employed the sphere camera model, which encompasses single-viewpoint catadioptric systems as well as perspective cameras.
For calibration of the sphere camera model, a new technique that has the advantage of linear and automatic parameter initialization is proposed. The projection of 3D points on a catadioptric image is represented linearly with a 6x10 projection matrix using lifted coordinates. This projection matrix is computed with an adequate number of 3D-2D correspondences and decomposed to obtain intrinsic and extrinsic parameters. Then, a non-linear optimization is performed to refine the parameters.
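For readers unfamiliar with lifted coordinates, the standard second-order lifting used with the sphere camera model can be written as below; the monomial ordering is a notational convention assumed here, not quoted from the thesis.

```latex
% Second-order lifting of an image point x = (x_1, x_2, x_3)^T and a 3D point
% X = (X_1, ..., X_4)^T, giving the linear catadioptric projection model.
\hat{\mathbf{x}} =
  \begin{pmatrix} x_1^2 & x_1 x_2 & x_2^2 & x_1 x_3 & x_2 x_3 & x_3^2 \end{pmatrix}^{\!\top}
  \in \mathbb{R}^{6},
\qquad
\hat{\mathbf{X}} = \bigl( X_i X_j \bigr)_{1 \le i \le j \le 4} \in \mathbb{R}^{10},
\qquad
\hat{\mathbf{x}} \simeq \mathbf{P}_{6 \times 10}\, \hat{\mathbf{X}},
```

where the last relation holds up to scale, so P can be estimated linearly from enough 3D-2D correspondences and then decomposed and refined as described above.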
For feature point matching between hybrid camera images, scale invariant feature transform (SIFT) is employed and a method is proposed to improve the SIFT matching output. With the proposed approach, omnidirectional-perspective matching performance significantly increases to enable automatic point matching. In addition, the use of virtual camera plane (VCP) images is evaluated, which are perspective images produced by unwarping the corresponding region in the omnidirectional image.
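A raw SIFT matching baseline with Lowe's ratio test, on top of which such an improvement step would operate, might look like the following OpenCV sketch; the ratio threshold is the usual illustrative value.

```python
import cv2

def sift_matches(img_omni, img_persp, ratio=0.75):
    """SIFT keypoint matching with Lowe's ratio test.

    A generic baseline; the thesis adds its own post-processing on top of the
    raw SIFT output to make omnidirectional-perspective matching reliable.
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_omni, None)
    kp2, des2 = sift.detectAndCompute(img_persp, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Keep the two nearest neighbours per descriptor, then apply the ratio test.
    good = []
    for m, n in matcher.knnMatch(des1, des2, k=2):
        if m.distance < ratio * n.distance:
            good.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
    return good   # list of ((x, y) in omni image, (x, y) in perspective image)
```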
The hybrid epipolar geometry is estimated using random sample consensus (RANSAC), and alternative pose estimation methods are evaluated. A weighting strategy for iterative linear triangulation that improves the structure estimation accuracy is proposed. Finally, multi-view structure-from-motion (SfM) is performed by adding views to the structure one by one. To refine the structure estimated from multiple views, the sparse bundle adjustment method is employed, with a modification to use the sphere camera model.
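The sketch below shows the classic iterative reweighting of two-view linear (DLT) triangulation, where each view's equations are rescaled by the point's projective depth from the previous iteration; the thesis proposes its own weighting strategy for the hybrid omnidirectional-perspective case, so this is only a baseline illustration.

```python
import numpy as np

def triangulate_iterative(P1, P2, x1, x2, n_iters=10):
    """Two-view linear (DLT) triangulation with iterative reweighting.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image points.
    The reweighting shown here is the classic 1/depth scheme; the thesis
    uses its own weighting for the hybrid camera case.
    """
    w1 = w2 = 1.0
    X = None
    for _ in range(n_iters):
        A = np.vstack([
            (x1[0] * P1[2] - P1[0]) / w1,
            (x1[1] * P1[2] - P1[1]) / w1,
            (x2[0] * P2[2] - P2[0]) / w2,
            (x2[1] * P2[2] - P2[1]) / w2,
        ])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        X = X / X[3]                      # homogeneous 3D point
        # New weights: the projective depth of X in each camera.
        w1, w2 = float(P1[2] @ X), float(P2[2] @ X)
    return X[:3]
```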
Experiments on simulated and real images for the proposed approaches are conducted. Also, the results of hybrid multi-view SfM with real images are demonstrated, emphasizing the cases where it is advantageous to use omnidirectional cameras with perspective cameras.
|