91. Zpracování stereoskopické videosekvence / Processing of Stereoscopic Video Sequence. Hasmanda, Martin (January 2010)
The main goal of this master's thesis was to become acquainted with methods for capturing a stereoscopic scene with a single pair of cameras, and to find a suitable way of processing the resulting images for two-view and multi-view autostereoscopic displays providing three-dimensional perception. Two acquisition methods were introduced: the "off-axis" method with parallel camera axes and the "toe-in" method with intersecting axes. The off-axis method was chosen because it does not produce vertical parallax; its principle is described in detail. The principles of the methods used for three-dimensional perception are then described, from the oldest method, the anaglyph, through to methods for viewing on autostereoscopic displays. Autostereoscopic displays are the main focus of this thesis, so their principles are described in detail. To produce the final image for an autostereoscopic display, intermediate images between the left and right cameras are generated. The resulting videos were acquired from a test scene created in the 3D studio Blender, where the camera system could be set with exactly parallel axes. Next, the processing of video extracted from a pair of cameras is described, first with cameras connected to a PC through a digitizing card and later with two web cameras. In this case an exactly parallel axis system is not guaranteed, so this work tries to achieve it for real cameras by transforming the frames with stereo rectification. Stereo rectification was implemented with the OpenCV libraries using two methods. Both are based on epipolar geometry, which is also described in detail in this work. The first method rectifies the images from the fundamental matrix and corresponding points found in the two views of the scene; the second rectifies them from the known intrinsic and extrinsic parameters of the stereoscopic camera system.
Finally, an application implementing the introduced methods is described.
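The first rectification method rests on estimating the fundamental matrix from point correspondences. A numpy-only sketch of the classic normalized 8-point algorithm on synthetic parallel-axis data is given below; in practice OpenCV's cv2.findFundamentalMat and cv2.stereoRectifyUncalibrated perform these steps.

```python
import numpy as np

def eight_point(x1, x2):
    """Estimate the fundamental matrix from >= 8 correspondences
    (normalized 8-point algorithm with Hartley normalization)."""
    def normalize(x):
        mean = x.mean(axis=0)
        scale = np.sqrt(2) / np.mean(np.linalg.norm(x - mean, axis=1))
        T = np.array([[scale, 0, -scale * mean[0]],
                      [0, scale, -scale * mean[1]],
                      [0, 0, 1.0]])
        return np.c_[x, np.ones(len(x))] @ T.T, T
    x1h, T1 = normalize(x1)
    x2h, T2 = normalize(x2)
    # Each correspondence contributes one row of the linear system A f = 0,
    # with f = vec(F) in row-major order, since x2^T F x1 = kron(x2, x1) . f
    A = np.stack([np.kron(p2, p1) for p1, p2 in zip(x1h, x2h)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2, a defining property of any fundamental matrix.
    U, s, Vt = np.linalg.svd(F)
    F = U @ np.diag([s[0], s[1], 0]) @ Vt
    F = T2.T @ F @ T1          # undo the normalization
    return F / np.linalg.norm(F)

# Synthetic parallel-axis ("off-axis") stereo pair.
rng = np.random.default_rng(0)
K = np.array([[800., 0, 320], [0, 800., 240], [0, 0, 1]])
X = rng.uniform([-1, -1, 4], [1, 1, 8], (20, 3))   # 3D scene points
t = np.array([0.1, 0, 0])                          # horizontal baseline only
x1 = X @ K.T;       x1 = x1[:, :2] / x1[:, 2:]     # left-image projections
x2 = (X - t) @ K.T; x2 = x2[:, :2] / x2[:, 2:]     # right-image projections

F = eight_point(x1, x2)
# The epipolar constraint x2^T F x1 = 0 should hold for every correspondence.
res = [abs(np.r_[p2, 1] @ F @ np.r_[p1, 1]) for p1, p2 in zip(x1, x2)]
```

The resulting F feeds the uncalibrated rectification, which computes a pair of homographies that map corresponding epipolar lines to the same scanline.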
92. Projekce dat do scény / Projector camera cooperation. Walter, Viktor (January 2016)
The focus of this thesis is the cooperation of cameras and projectors in projection of data into a scene. It describes the means and theory necessary to achieve such cooperation, and suggests tasks for demonstration. A part of this project is also a program capable of using a camera and a projector to obtain necessary parameters of these devices. The program can demonstrate the quality of this calibration by projecting a pattern onto an object according to its current pose, as well as reconstruct the shape of an object with structured light. The thesis also describes some challenges and observations from development and testing of the program.
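The key modelling step in projector-camera cooperation is that a calibrated projector behaves like an inverse camera: the same pinhole equation that maps a 3D point to a camera pixel tells us which projector pixel illuminates that point. A minimal sketch with hypothetical calibration values (not the program's actual parameters):

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection: map 3D world points X (N, 3) to pixel coords (N, 2).
    For a projector, light travels outward, but the geometry is identical,
    so this also gives the projector pixel that hits each scene point."""
    x = (X @ R.T + t) @ K.T
    return x[:, :2] / x[:, 2:]

# Hypothetical projector calibration (for illustration only).
K_proj = np.array([[1400., 0, 512], [0, 1400., 384], [0, 0, 1]])
R = np.eye(3)                     # projector aligned with the world axes
t = np.array([0.2, 0.0, 0.0])     # 20 cm lateral offset

X = np.array([[0.0, 0.0, 2.0]])   # a scene point 2 m in front of the setup
px = project(K_proj, R, t, X)     # projector pixel illuminating that point
```

Given the object's current pose, projecting its model vertices this way yields the pattern that appears glued to the object, which is exactly how the calibration quality can be demonstrated visually.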
93. Cloudová aplikace pro analýzu dopravy / Cloud Application for Traffic Analysis. Valchář, Vít (January 2016)
The aim of this thesis is to create a cloud application for traffic analysis that requires no prior knowledge of the scene; the only input is the address of a web camera pointing at traffic. The application builds on an existing solution, which is further enhanced. New modules were added for removing obstacles (such as a lamppost covering part of the road) and for splitting overlapping cars. The whole cloud solution consists of multiple components that communicate by HTTP messages and are controlled through a web interface.
94. On the suitability of conic sections in a single-photo resection, camera calibration, and photogrammetric triangulation. Seedahmed, Gamal H. (03 February 2004)
No description available.
95. Structureless Camera Motion Estimation of Unordered Omnidirectional Images. Sastuba, Mark (08 August 2022)
This work aims at providing a novel camera motion estimation pipeline for large collections of unordered omnidirectional images. In order to keep the pipeline as general and flexible as possible, cameras are modelled as unit spheres, allowing any central camera type to be incorporated. For each camera, an unprojection lookup called a P2S-map (Pixel-to-Sphere map) is generated from the intrinsics, mapping pixels to their corresponding positions on the unit sphere. The camera geometry thus becomes independent of the underlying projection model. The pipeline also generates P2S-maps from world map projections with fewer distortion effects, as known from cartography. Using P2S-maps from both camera calibration and world map projection allows omnidirectional camera images to be converted into an appropriate world map projection, so that standard feature extraction and matching algorithms can be applied for data association. The proposed estimation pipeline combines the flexibility of SfM (Structure from Motion), which handles unordered image collections, with the efficiency of PGO (Pose Graph Optimization), which is used as the back-end in graph-based Visual SLAM (Simultaneous Localization and Mapping) approaches to optimize camera poses over large image sequences. SfM uses BA (Bundle Adjustment) to jointly optimize camera poses (motion) and 3D feature locations (structure), which becomes computationally expensive in large-scale scenarios. In contrast, PGO solves only for camera poses (motion) from measured transformations between cameras, keeping the optimization manageable. The proposed estimation algorithm combines both worlds: it obtains up-to-scale transformations between image pairs using two-view constraints, which are jointly scaled using trifocal constraints. A pose graph is generated from the scaled two-view transformations and solved by PGO to obtain camera motion efficiently, even for large image collections.
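The P2S-map idea, a per-pixel lookup of unit-sphere bearing vectors, can be illustrated for the simplest central projection, an equirectangular image. This is only a sketch; the thesis derives such maps from the calibrated intrinsics of arbitrary central cameras.

```python
import numpy as np

def p2s_map_equirectangular(width, height):
    """Build a P2S-map style lookup for an equirectangular image:
    every pixel center is mapped to its bearing vector on the unit sphere."""
    u, v = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)
    lon = (u / width) * 2 * np.pi - np.pi      # longitude in [-pi, pi)
    lat = np.pi / 2 - (v / height) * np.pi     # latitude in [-pi/2, pi/2]
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)        # shape (H, W, 3)

lut = p2s_map_equirectangular(8, 4)
norms = np.linalg.norm(lut, axis=-1)           # all bearings are unit length
```

Once every camera type is reduced to such a lookup, downstream geometry (matching, two-view estimation) never needs to know the projection model.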
The obtained results can be used as input data to provide initial pose estimates for further 3D reconstruction purposes, e.g., to build a sparse structure from feature correspondences in an SfM or SLAM framework with further refinement via BA.
The pipeline also incorporates fixed extrinsic constraints from multi-camera setups as well as depth information provided by RGBD sensors. The entire camera motion estimation pipeline does not need to generate a sparse 3D structure of the captured environment and is therefore called SCME (Structureless Camera Motion Estimation).

Contents:

1 Introduction
1.1 Motivation
1.1.1 Increasing Interest of Image-Based 3D Reconstruction
1.1.2 Underground Environments as Challenging Scenario
1.1.3 Improved Mobile Camera Systems for Full Omnidirectional Imaging
1.2 Issues
1.2.1 Directional versus Omnidirectional Image Acquisition
1.2.2 Structure from Motion versus Visual Simultaneous Localization and Mapping
1.3 Contribution
1.4 Structure of this Work
2 Related Work
2.1 Visual Simultaneous Localization and Mapping
2.1.1 Visual Odometry
2.1.2 Pose Graph Optimization
2.2 Structure from Motion
2.2.1 Bundle Adjustment
2.2.2 Structureless Bundle Adjustment
2.3 Corresponding Issues
2.4 Proposed Reconstruction Pipeline
3 Cameras and Pixel-to-Sphere Mappings with P2S-Maps
3.1 Types
3.2 Models
3.2.1 Unified Camera Model
3.2.2 Polynomial Camera Model
3.2.3 Spherical Camera Model
3.3 P2S-Maps - Mapping onto Unit Sphere via Lookup Table
3.3.1 Lookup Table as Color Image
3.3.2 Lookup Interpolation
3.3.3 Depth Data Conversion
4 Calibration
4.1 Overview of Proposed Calibration Pipeline
4.2 Target Detection
4.3 Intrinsic Calibration
4.3.1 Selected Examples
4.4 Extrinsic Calibration
4.4.1 3D-2D Pose Estimation
4.4.2 2D-2D Pose Estimation
4.4.3 Pose Optimization
4.4.4 Uncertainty Estimation
4.4.5 Pose Graph Representation
4.4.6 Bundle Adjustment
4.4.7 Selected Examples
5 Full Omnidirectional Image Projections
5.1 Panoramic Image Stitching
5.2 World Map Projections
5.3 World Map Projection Generator for P2S-Maps
5.4 Conversion between Projections based on P2S-Maps
5.4.1 Proposed Workflow
5.4.2 Data Storage Format
5.4.3 Real World Example
6 Relations between Two Camera Spheres
6.1 Forward and Backward Projection
6.2 Triangulation
6.2.1 Linear Least Squares Method
6.2.2 Alternative Midpoint Method
6.3 Epipolar Geometry
6.4 Transformation Recovery from Essential Matrix
6.4.1 Cheirality
6.4.2 Standard Procedure
6.4.3 Simplified Procedure
6.4.4 Improved Procedure
6.5 Two-View Estimation
6.5.1 Evaluation Strategy
6.5.2 Error Metric
6.5.3 Evaluation of Estimation Algorithms
6.5.4 Concluding Remarks
6.6 Two-View Optimization
6.6.1 Epipolar-Based Error Distances
6.6.2 Projection-Based Error Distances
6.6.3 Comparison between Error Distances
6.7 Two-View Translation Scaling
6.7.1 Linear Least Squares Estimation
6.7.2 Non-Linear Least Squares Optimization
6.7.3 Comparison between Initial and Optimized Scaling Factor
6.8 Homography to Identify Degeneracies
6.8.1 Homography for Spherical Cameras
6.8.2 Homography Estimation
6.8.3 Homography Optimization
6.8.4 Homography and Pure Rotation
6.8.5 Homography in Epipolar Geometry
7 Relations between Three Camera Spheres
7.1 Three View Geometry
7.2 Crossing Epipolar Planes Geometry
7.3 Trifocal Geometry
7.4 Relation between Trifocal, Three-View and Crossing Epipolar Planes
7.5 Translation Ratio between Up-To-Scale Two-View Transformations
7.5.1 Structureless Determination Approaches
7.5.2 Structure-Based Determination Approaches
7.5.3 Comparison between Proposed Approaches
8 Pose Graphs
8.1 Optimization Principle
8.2 Solvers
8.2.1 Additional Graph Solvers
8.2.2 False Loop Closure Detection
8.3 Pose Graph Generation
8.3.1 Generation of Synthetic Pose Graph Data
8.3.2 Optimization of Synthetic Pose Graph Data
9 Structureless Camera Motion Estimation
9.1 SCME Pipeline
9.2 Determination of Two-View Translation Scale Factors
9.3 Integration of Depth Data
9.4 Integration of Extrinsic Camera Constraints
10 Camera Motion Estimation Results
10.1 Directional Camera Images
10.2 Omnidirectional Camera Images
11 Conclusion
11.1 Summary
11.2 Outlook and Future Work
Appendices
A.1 Additional Extrinsic Calibration Results
A.2 Linear Least Squares Scaling
A.3 Proof Rank Deficiency
A.4 Alternative Derivation Midpoint Method
A.5 Simplification of Depth Calculation
A.6 Relation between Epipolar and Circumferential Constraint
A.7 Covariance Estimation
A.8 Uncertainty Estimation from Epipolar Geometry
A.9 Two-View Scaling Factor Estimation: Uncertainty Estimation
A.10 Two-View Scaling Factor Optimization: Uncertainty Estimation
A.11 Depth from Adjoining Two-View Geometries
A.12 Alternative Three-View Derivation
A.12.1 Second Derivation Approach
A.12.2 Third Derivation Approach
A.13 Relation between Trifocal Geometry and Alternative Midpoint Method
A.14 Additional Pose Graph Generation Examples
A.15 Pose Graph Solver Settings
A.16 Additional Pose Graph Optimization Examples
Bibliography
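The transformation recovery from the essential matrix covered in Section 6.4 follows the standard procedure: decompose E into four (R, t) candidates, after which the cheirality check selects the one placing triangulated points in front of both cameras. A numpy sketch of the candidate generation (illustrative, not the thesis code):

```python
import numpy as np

def decompose_essential(E):
    """Return the four (R, t) candidates encoded by an essential matrix."""
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations; E is only defined up to scale, so sign flips
    # of U or Vt are harmless.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 1]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]                      # translation direction (up to scale)
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

def skew(v):
    """Cross-product matrix, so that skew(v) @ x == np.cross(v, x)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

# Build E from a known motion and check that it comes back as a candidate.
R_true = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 1.]])   # 90 degree yaw
t_true = np.array([1., 0, 0])                              # unit baseline
E = skew(t_true) @ R_true
cands = decompose_essential(E)
ok = any(np.allclose(R, R_true) and np.allclose(t, t_true)
         for R, t in cands)
```

For unit-sphere cameras the same decomposition applies directly to bearing vectors, which is what makes the pipeline projection-model independent.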
96. Fusão de informações obtidas a partir de múltiplas imagens visando à navegação autônoma de veículos inteligentes em ambiente agrícola / Data fusion obtained from multiple images aiming the navigation of autonomous intelligent vehicles in agricultural environment. Utino, Vítor Manha (08 April 2015)
This work presents a support system for the autonomous navigation of ground vehicles, focused on structured environments in an agricultural scenario. Obstacle position estimates are generated by fusing the detections obtained from processing the data of two cameras, one stereo and one thermal. Three obstacle detection modules were developed. The first module uses monocular images from the stereo camera to detect novelties in the environment by comparing the current state with the previous state. The second module uses the Stixel technique to delimit obstacles above the ground plane. Finally, the third module uses the thermal images to find signatures that reveal the presence of an obstacle. The detection modules are fused using Dempster-Shafer theory, which provides an estimate of the presence of obstacles in the environment. The experiments were executed in a real agricultural environment. The system was validated in well-lit scenarios with uneven terrain and various obstacles. It showed satisfactory performance considering the use of an approach based on only three detection modules, with methods that prioritize the search for new obstacles rather than the confirmation of known ones. This dissertation presents the main components of an obstacle detection system and the steps necessary for its design, as well as results of experiments using a real vehicle.
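Dempster's rule of combination, used here to fuse the modules, multiplies two mass functions and renormalizes by the non-conflicting mass. A minimal sketch with hypothetical module outputs (not the dissertation's implementation):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over a frame of discernment.
    Masses are dicts mapping frozenset hypotheses to belief mass."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb            # incompatible hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    norm = 1.0 - conflict                  # renormalize by non-conflict
    return {h: m / norm for h, m in combined.items()}

OBST, FREE = frozenset({"obstacle"}), frozenset({"free"})
BOTH = OBST | FREE                         # mass on the whole frame: "unknown"

# Hypothetical per-cell outputs of two detection modules.
stereo  = {OBST: 0.6, FREE: 0.1, BOTH: 0.3}
thermal = {OBST: 0.5, FREE: 0.2, BOTH: 0.3}
fused = dempster_combine(stereo, thermal)
```

Fusing a third module is just another application of the same (associative) rule, which is why adding or removing detectors does not change the fusion machinery.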
97. Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces. Macknojia, Rizwan (21 March 2013)
This thesis presents an approach for configuring and calibrating a network of RGB-D sensors used to guide a robotic arm to interact with objects that get rapidly modeled in 3D. The system is based on Microsoft Kinect sensors for 3D data acquisition. The work presented here also details an analysis and experimental study of the Kinect's depth sensor capabilities and performance. The study comprises an examination of the resolution, quantization error, and random distribution of depth data. In addition, the effects of the color and reflectance characteristics of an object are also analyzed. The study examines two versions of Kinect sensors, one designed to operate with the Xbox 360 video game console and the more recent Microsoft Kinect for Windows version.
The study of the Kinect sensor is extended to the design of a rapid acquisition system dedicated to large workspaces by linking multiple Kinect units to collect 3D data over a large object, such as an automotive vehicle. A customized calibration method for this large workspace is proposed which takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy between local sections of point clouds that is within the range of the depth measurement accuracy permitted by the Kinect technology. The method is developed to calibrate all Kinect units with respect to a reference Kinect. The internal calibration of the sensor between the color and depth measurements is also performed to optimize the alignment between the modalities. The calibration of the 3D vision system is also extended to formally estimate its configuration with respect to the base of a manipulator robot, therefore allowing for seamless integration between the proposed vision platform and the kinematic control of the robot. The resulting vision-robotic system defines the comprehensive calibration of the reference Kinect with the robot. The latter can then be used to interact under visual guidance with large objects, such as vehicles, that are positioned within a significantly enlarged field of view created by the network of RGB-D sensors.
The proposed design and calibration method is validated in a real-world scenario where five Kinect sensors operate collaboratively to rapidly and accurately reconstruct 180-degree coverage of the surface shape of various types of vehicles from a set of individual acquisitions performed in a semi-controlled environment, that is, an underground parking garage. The vehicle geometrical properties generated from the acquired 3D data are compared with the original dimensions of the vehicle.
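Calibrating every Kinect with respect to a reference unit amounts to composing pairwise rigid transforms expressed as 4x4 homogeneous matrices. A small sketch with hypothetical poses (the actual calibration values come from the measurement procedure described above):

```python
import numpy as np

def to_h(R, t):
    """Pack rotation R (3x3) and translation t into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_z(a):
    """Rotation by angle a (radians) about the z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Hypothetical pairwise calibrations: each unit is calibrated against its
# neighbour, then chained so every sensor is expressed in the reference
# frame (Kinect 0), mirroring "calibrate all units w.r.t. a reference".
T_0_1 = to_h(rot_z(np.pi / 4), [1.0, 0.0, 0.0])  # Kinect 1 in Kinect 0 frame
T_1_2 = to_h(rot_z(np.pi / 4), [1.0, 0.0, 0.0])  # Kinect 2 in Kinect 1 frame
T_0_2 = T_0_1 @ T_1_2                            # chained: Kinect 2 in frame 0

p_k2 = np.array([0.0, 0.0, 1.0, 1.0])  # a point measured by Kinect 2
p_k0 = T_0_2 @ p_k2                    # the same point in the reference frame
```

Expressing every local point cloud through such chained transforms is what merges the individual acquisitions into one registered model of the vehicle.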
98. Development of a 3-Camera Vision System and the Saddle Motion Analysis of Horses via This System. Dogan, Gozde (01 September 2009, PDF)
One of the purposes of this study is to develop a vision system consisting of three inexpensive, commercial cameras. The system is intended to be used for tracking the motion of objects in a large calibration volume, typically 6.5 m wide and 0.7 m high. Hence, a mechanism is designed and constructed for the calibration of the cameras.
The second purpose of the study is to develop an algorithm which can be used to obtain the kinematic data associated with a rigid body using a vision system. Special filters are implemented in the algorithm to identify the three markers attached to the body. Optimal curves are fitted to the position data of the markers after smoothing the data appropriately. The outputs of the algorithm are the position, velocity and acceleration of any point (visible or invisible) on the body, and the angular velocity and acceleration of the body. The singularities associated with the algorithm are also determined.
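Recovering a rigid body's pose from three (or more) tracked markers is commonly done with a Kabsch/SVD least-squares fit; the sketch below illustrates that standard approach (the thesis' own algorithm may differ in detail).

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rigid transform (Kabsch/SVD) with B ~= R @ A + t.
    Three non-collinear markers are the minimum needed to fix a pose."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)              # cross-covariance of markers
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Marker positions on the body, then the same markers after a known motion.
A = np.array([[0., 0, 0], [1., 0, 0], [0., 1, 0]])
R_true = np.array([[0., -1, 0], [1., 0, 0], [0., 0, 1]])  # 90 degrees about z
t_true = np.array([0.5, -0.2, 1.0])
B = A @ R_true.T + t_true
R, t = rigid_transform(A, B)

# Any point fixed to the body, visible or not, can now be tracked:
p_body = np.array([2., 3., -1.])
p_world = R @ p_body + t
```

Differentiating the fitted pose trajectory over time then yields the linear and angular velocities and accelerations that the abstract describes.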
Using the vision setup and the developed algorithm for tracking the kinematics of a rigid body, the motions of the saddles of different horses are investigated for different gaits. Similarities and differences between horses and/or gaits are analyzed to obtain quantitative results. Using the limits defined for the whole-body vibration of humans, for the first time in the world, allowable daily riding times and riding distances are determined for different horses and gaits. Furthermore, novel, quantitative horse comfort indicators are proposed. Via the experiments performed, these indicators are shown to be consistent with the comfort assessments of experienced riders.
Finally, in order to implement the algorithms proposed in this study, a computer code is developed using MATLAB®.
99. Deux problèmes dans la formation des images numériques : l'estimation du noyau local de flou d'une caméra et l'accélération de rendus stochastiques par filtrage auto-similaire / Two problems in digital image formation: estimation of a camera's local blur kernel and acceleration of stochastic renderings by self-similar filtering. Delbracio, Mauricio (25 March 2013, PDF)
This thesis addresses two fundamental problems in the formation of digital images: modelling and estimating the blur introduced by an optical digital camera, and the fast generation of photorealistic synthetic images. Accurately evaluating a camera's intrinsic blur is a recurrent problem in image processing. Recent technological progress has had a significant impact on image quality, so improving the accuracy of calibration procedures is imperative to push this evolution further. The first part of this thesis presents a mathematical theory of physical image acquisition by a digital camera. Based on this model, two automatic algorithms for estimating the camera's intrinsic blur are proposed. In the first, the estimation is performed from a photograph of a calibration target specially designed for this purpose. One of the main contributions of this thesis is the proof that a target bearing a white-noise image is close to optimal for estimating the blur kernel. The second algorithm avoids the use of a calibration target, a procedure that can become somewhat cumbersome. Indeed, we show that two photographs of a textured planar scene, taken at two different distances with the same camera configuration, suffice to produce an accurate estimate. In the second part of this thesis, we propose an algorithm to accelerate realistic image synthesis. Several hours, or even days, may be needed to produce high-quality images. In a typical rendering, the pixels of an image are formed by averaging the contributions of stochastic rays cast from a virtual camera.
The acceleration principle, simple but powerful, is to detect similar pixels by comparing their ray histograms and to let them share their rays. The results show a significant speed-up that preserves image quality.
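The acceleration principle, comparing per-pixel ray histograms and letting similar pixels pool their rays, can be sketched as follows. This is a greedy toy version using a chi-square histogram distance; the thesis' actual similarity test and filtering scheme differ.

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-12):
    """Chi-square distance between two ray histograms (normalized first)."""
    h1, h2 = h1 / h1.sum(), h2 / h2.sum()
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def share_rays(histograms, rays, threshold=0.05):
    """For each pixel, find pixels with a similar ray histogram and average
    their ray estimates together, effectively multiplying the sample count."""
    n = len(histograms)
    shared = rays.copy()
    for i in range(n):
        group = [j for j in range(n)
                 if chi2_distance(histograms[i], histograms[j]) < threshold]
        shared[i] = np.mean([rays[j] for j in group])  # pooled ray average
    return shared

# Two pixels with nearly identical ray histograms and one clearly different.
h_a = np.array([10., 20, 30, 40])
h_b = np.array([11., 19, 31, 39])
h_c = np.array([40., 30, 20, 10])
rays = np.array([0.50, 0.54, 0.90])   # per-pixel mean radiance estimates
out = share_rays([h_a, h_b, h_c], rays)
```

Pixels a and b pool their rays (reducing noise), while the dissimilar pixel c keeps its own estimate, which is why the speed-up does not blur genuine image detail.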