About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Servo Tracking with Parallel Trinocular Cameras

Jiang, Jian-hung 30 June 2005 (has links)
No abstract available.
2

GPU Based Real-Time Trinocular Stereovision

Yao, Yuanbin 24 August 2012 (has links)
"Stereovision has been applied in many fields including UGV (Unmanned Ground Vehicle) navigation and surgical robotics. Traditionally most stereovision applications are binocular which uses information from a horizontal 2-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a 3-camera array has been proved to provide higher accuracy in stereo matching which could benefit application like distance finding, object recognition and detection. However, as a result of an extra camera, additional information to be processed would increase computational burden and hence not practical in many time critical applications like robotic navigation and surgical robot. Due to the nature of GPUÂ’s highly parallelized SIMD (Single Instruction Multiple Data) architecture, GPGPU (General Purpose GPU) computing can effectively be used to parallelize the large data processing and greatly accelerate the computation of algorithms used in trinocular stereovision. So the combination of trinocular stereovision and GPGPU would be an innovative and effective method for the development of stereovision application. This work focuses on designing and implementing a real-time trinocular stereovision algorithm with GPU (Graphics Processing Unit). The goal involves the use of Open Source Computer Vision Library (OpenCV) in C++ and NVidia CUDA GPGPU Solution. Algorithms were developed with many different basic image processing methods and a winner-take-all method is applied to perform fusion of disparities in different directions. The results are compared in accuracy and speed to verify the improvement."
3

Servo Tracking with Divergent Trinocular Cameras

Lin, Ssu-yin 13 July 2006 (has links)
Early studies and applications of machine vision mostly focused on a single camera, but research has recently trended toward multiple cameras. Because the correlation among multiple images is highly complicated, the arrangement of multiple cameras has usually been restricted to an encircling layout that acquires more than one view of a target object. Furthermore, it is well known that the special architecture of insect compound eyes gives insects an outstanding capability for precise and efficient observation of moving objects. If this principle can be transferred to the domain of engineering applications, significant improvements in visual tracking of moving objects can be expected. This thesis builds a visual servo system with trinocular cameras that mimics the configuration of insect compound eyes for tracking an object moving in 2D space. The arrangement of the trinocular cameras is divergent, and the system can function properly without knowing the distance between the object and the cameras.
4

Specialised global methods for binocular and trinocular stereo matching

Horna Carranza, Luis Alberto January 2017 (has links)
The problem of estimating depth from two or more images is a fundamental problem in computer vision, commonly referred to as stereo matching. The applications of stereo matching range from 3D reconstruction to autonomous robot navigation. Stereo matching is particularly attractive for real-life applications because of its simplicity and low cost, especially compared to costly laser range finders/scanners, as in the case of 3D reconstruction. However, stereo matching has its own problems, such as convergence issues in the optimisation methods and the difficulty of finding accurate matches under changing lighting conditions, occluded areas, noisy images, etc. It is precisely because of these challenges that stereo matching continues to be a very active field of research. In this thesis we develop a binocular stereo matching algorithm that works with rectified images (i.e. scan lines in the two images are aligned) to find the real-valued displacement (i.e. disparity) that best matches two pixels. To accomplish this, our research has developed techniques to efficiently explore a 3D search space and compare potential matches, together with an inference algorithm to assign the optimal disparity to each pixel in the image. The proposed approach is also extended to the trinocular case. In particular, the trinocular extension deals with a binocular pair of images captured at the same time and a third image displaced in time. This approach is referred to as t+1 trinocular stereo matching, and it poses the challenge of recovering camera motion, which is addressed by a novel technique we call baseline recovery. We have extensively validated our binocular and trinocular algorithms using the well-known KITTI and Middlebury data sets. The performance of our algorithms is consistent across the different data sets and is among the top performers on the KITTI and Middlebury benchmarks.
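As a point of reference for the kind of search involved (a minimal local block-matching sketch, not the global method developed in this thesis; the function name block_match_disparity and its parameters are illustrative only): because the images are rectified, the disparity search for each pixel reduces to a 1-D sweep along its scan line.

import numpy as np

def block_match_disparity(left, right, max_disp=64, win=5):
    # Naive SAD block matching on a rectified grayscale pair (illustration only).
    # Rectification aligns scan lines, so each pixel's search is a 1-D sweep
    # over candidate disparities d along the same row.
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))  # winner-take-all over candidates
    return disp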
5

Robust Servo Tracking with Divergent Trinocular Cameras

Chang, Chin-Kuei 30 July 2007 (has links)
It is well known that the architecture of insect compound eyes gives insects an outstanding capability for precise and efficient observation of moving objects. If this principle can be transferred to the domain of engineering applications, significant improvements in visual tracking of moving objects can be expected. The apparent motion of brightness patterns in a sequence of images, caused by the relative velocity between the camera and the environment, is called optical flow. The advantage of optical-flow-based visual servo methods is that features of the moving object do not have to be known in advance, so they can be applied to general positioning and tracking tasks. The purpose of this thesis is to develop a visual servo system with trinocular cameras. To mimic the configuration of insect compound eyes, a divergent arrangement of the trinocular cameras is adopted. To overcome possible difficulties with unknown or uncertain parameters, an image servo technique using a robust discrete-time sliding-mode control algorithm is developed to track an object moving in 2D space.
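To give a flavour of the control law named here, a generic discrete-time sliding-mode step for a scalar tracking error might look like the sketch below. This is not the controller actually derived in the thesis; the gains lam, K, q and the function name dsmc_step are illustrative assumptions.

import numpy as np

def dsmc_step(e, e_prev, dt, lam=1.0, K=0.5, q=0.2):
    # One step of a generic discrete-time sliding-mode control law.
    # e, e_prev: current and previous tracking error (e.g. image-plane error
    # of the tracked object); dt: sampling period.
    e_dot = (e - e_prev) / dt       # finite-difference error derivative
    s = e + lam * e_dot             # sliding surface s = e + lam * de/dt
    u = -(K * np.sign(s) + q * s)   # switching term plus proportional reaching term
    return u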
6

Estimation of translational motion by simplified planar compound-like eye schemes

Lin, Gwo-Long 14 December 2007 (has links)
This dissertation presents a technique for recovering translational motion parameters using two simplified planar compound-like eye schemes, namely a parallel trinocular system and a single-row Superposition-type Planar Compound-like Eye (SPCE). In the parallel trinocular scheme, a least squares estimation algorithm is developed for recovering the translational motion parameters. The proposed approach resolves the matrix singularity problem encountered when attempting to recover motion parameters using a conventional binocular scheme. To further reduce the computational complexity of the motion estimation process, a compact closed-form scheme is also proposed to estimate the translational motion parameters. The closed-form algorithm not only resolves the matrix singularity problem, but also avoids the requirement for matrix manipulation. As a result, it has a low computational complexity and is therefore an ideal solution for performing motion estimation in complex, real-world visual imaging applications following an initial image filtering process. The performance of the closed-form algorithm is evaluated by performing a series of numerical simulations in which translational displacements of various magnitudes in three-dimensional space are recovered in both noise-free and perturbed environments. In general, the results demonstrate that the translational motion parameters can be reconstructed with a high degree of accuracy provided that the motion in the depth direction is limited to small displacements only. Having developed a motion estimation scheme for the parallel trinocular system, the dissertation then adds further charge-coupled device (CCD) cameras in the horizontal direction to create a single-row SPCE. Translational motion models for the SPCE are constructed by stacking the optical flow equations in the horizontal direction, and the ego-translational parameters are then extracted using a simple least squares estimation algorithm. The simulation results reveal that the introduction of additional cameras to the machine vision system ensures an excellent motion estimation performance without the need for filters of any kind, even when the viewing field is characterized by significant noise or the CCD deployment within the SPCE configuration has a non-uniform distribution. Overall, the parallel trinocular scheme and single-row SPCE configuration presented in this dissertation demonstrate a high degree of robustness toward noise and enable the motion estimation process to be performed in a rapid and computationally efficient manner using a simple least squares approximation approach. Whilst science cannot realistically hope to improve upon the vision capabilities found in the insect world, the techniques presented in this dissertation nonetheless provide a sound foundation for the development of artificial planar-array compound-like eyes which mimic the mechanisms at work in biological compound eyes and attain an enhanced vision performance as a result.
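The least-squares recovery of translation from stacked optical-flow equations can be sketched for a generic pinhole model as follows (an illustration using the standard translational flow model with known per-point depth, not the dissertation's exact formulation; the name estimate_translation is hypothetical). For a point at image position (x, y) with depth Z and focal length f, a pure camera translation T = (Tx, Ty, Tz) induces the flow u = (-f*Tx + x*Tz)/Z and v = (-f*Ty + y*Tz)/Z, so each tracked point contributes two linear equations in T.

import numpy as np

def estimate_translation(points, flows, depths, f=1.0):
    # Stack two optical-flow equations per point into A @ T = b and solve
    # for the translation T = (Tx, Ty, Tz) in the least-squares sense.
    rows, rhs = [], []
    for (x, y), (u, v), Z in zip(points, flows, depths):
        rows.append([-f / Z, 0.0, x / Z]); rhs.append(u)
        rows.append([0.0, -f / Z, y / Z]); rhs.append(v)
    A, b = np.asarray(rows), np.asarray(rhs)
    T, *_ = np.linalg.lstsq(A, b, rcond=None)
    return T

In a parallel trinocular or single-row SPCE configuration, the equations from all cameras are stacked into one such system; according to the abstract, this is what avoids the matrix singularity encountered in the binocular case.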
7

A Novel Approach for Spherical Stereo Vision / Ein Neuer Ansatz für Sphärisches Stereo Vision

Findeisen, Michel 27 April 2015 (has links) (PDF)
The Professorship of Digital Signal Processing and Circuit Technology of Chemnitz University of Technology conducts research in the field of three-dimensional space measurement with optical sensors. In recent years this field has made major progress. For example, innovative active techniques such as the “structured light” principle are able to measure even homogeneous surfaces and are currently finding their way into the consumer electronics market in the form of Microsoft's Kinect®. Furthermore, high-resolution optical sensors enable powerful passive stereo vision systems in the field of indoor surveillance, opening up new application domains such as security and assistance systems for domestic environments. However, the constrained field of view remains an essential limitation of all these technologies. For instance, in order to measure a volume the size of a living room, two to three 3D sensors currently have to be deployed, because the commonly used perspective projection principle restricts the visible area to a field of view of approximately 120°. In contrast, novel fish-eye lenses allow the realization of omnidirectional projection models, with which the visible field of view can be enlarged to more than 180°. Combined with a 3D measurement approach, the number of sensors required to cover an entire room can thus be reduced considerably. Motivated by the requirements of indoor surveillance, the present work focuses on combining the established stereo vision principle with omnidirectional projection methods; the 3D measurement of an entire living space by means of a single sensor is the major objective. As a starting point, Chapter 1 discusses the underlying requirements with reference to various relevant fields of application and states the specific purpose of the present work. Chapter 2 then reviews the necessary mathematical foundations of computer vision: based on the geometry of the optical imaging process, the projection characteristics of the relevant principles are discussed and a generic method for modelling fish-eye cameras is selected. Chapter 3 deals with the extraction of depth information using classical (perspectively imaging) binocular stereo vision configurations; in addition to a complete recap of the processing chain, the measurement uncertainties that occur are investigated. Chapter 4 then addresses methods for converting between different projection models. The example of mapping an omnidirectional to a perspective projection is used to develop a method for accelerating this process and thereby reducing the associated computational load. The errors that occur, as well as the necessary adjustment of image resolution, are an integral part of the investigation. As a practical example, a person-tracking application is used to demonstrate to what extent the use of “virtual views” can increase the recognition rate of people detectors in the context of omnidirectional monitoring. Subsequently, an extensive survey of omnidirectional stereo vision techniques is conducted in Chapter 5. It turns out that the complete 3D capture of a room is achievable by generating a hemispherical depth map, for which three cameras have to be combined into a trinocular stereo vision system.
As a basis for further research, a known trinocular stereo vision method is selected. It is hypothesized that, by applying a modified geometric constellation of cameras, specifically in the form of an equilateral triangle, and by using an alternative method to determine the depth map, the performance can be increased considerably. A novel method is presented which requires fewer operations to calculate the distance information and avoids the computationally costly depth-map fusion step needed in the comparative method. To evaluate the presented approach and the hypotheses, a hemispherical depth map is generated in Chapter 6 by means of the new method. Simulation results, based on artificially generated 3D space information and realistic system parameters, are presented and subjected to an error estimate. A demonstrator for generating real measurement data is introduced in Chapter 7, together with the methods applied to calibrate the system intrinsically and extrinsically. It turns out that the calibration procedure used cannot estimate the extrinsic parameters sufficiently. Initial measurements produce a hemispherical depth map and thus confirm the viability of the concept, but also reveal the drawbacks of the calibration used. The current implementation of the algorithm shows almost real-time behaviour. Finally, Chapter 8 summarizes the results obtained in the course of the studies and discusses them in the context of comparable binocular and trinocular stereo vision approaches. For example, the simulations carried out show a saving of up to 30% in stereo correspondence operations compared with a reference trinocular method. Furthermore, the concept introduced avoids a weighted-averaging step for depth-map fusion based on precision values that are costly to compute, while the achievable accuracy remains comparable for both trinocular approaches. In summary, a measurement system has been developed within the scope of this thesis which has great potential for future applications in industry, in security for public spaces, and in home environments.
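The omnidirectional-to-perspective conversion discussed in Chapter 4 can be illustrated as a backward mapping: for every pixel of the desired virtual perspective view, compute the viewing ray, project it through the fish-eye model, and sample the source image there. The sketch below assumes an ideal equiangular lens (r = f*theta), a grayscale source image, and nearest-neighbour sampling, and ignores the acceleration and resolution-adjustment issues treated in the thesis; the function name and parameters are illustrative.

import numpy as np

def virtual_perspective_view(fisheye, f_fish, f_virt, out_size, R=np.eye(3)):
    # Backward-map a perspective "virtual view" out of an equiangular fish-eye image.
    # fisheye: grayscale source image with the optical axis at its centre;
    # f_fish:  fish-eye focal length in pixels (equiangular model r = f * theta);
    # f_virt:  focal length of the virtual pinhole camera in pixels;
    # out_size: (h, w) of the virtual view; R: rotation of the virtual camera.
    h, w = out_size
    H, W = fisheye.shape
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Ray for each virtual pixel, rotated into the fish-eye camera frame.
    rays = np.stack([(xs - (w - 1) / 2.0) / f_virt,
                     (ys - (h - 1) / 2.0) / f_virt,
                     np.ones_like(xs, dtype=float)], axis=-1)
    rays = rays @ R.T
    norm_xy = np.linalg.norm(rays[..., :2], axis=-1)
    theta = np.arctan2(norm_xy, rays[..., 2])          # angle from the optical axis
    r = f_fish * theta                                  # equiangular projection radius
    scale = r / np.maximum(norm_xy, 1e-12)
    u = cx + rays[..., 0] * scale
    v = cy + rays[..., 1] * scale
    # Nearest-neighbour sampling; out-of-bounds pixels stay zero.
    out = np.zeros(out_size, dtype=fisheye.dtype)
    ui, vi = np.round(u).astype(int), np.round(v).astype(int)
    valid = (ui >= 0) & (ui < W) & (vi >= 0) & (vi < H)
    out[valid] = fisheye[vi[valid], ui[valid]]
    return out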
8

A Novel Approach for Spherical Stereo Vision

Findeisen, Michel 23 April 2015 (has links)
Abstract as in record 7 above. Contents of the thesis:

1 Introduction: Visual Surveillance; Challenges in Visual Surveillance; Outline of the Thesis
2 Fundamentals of Computer Vision Geometry: Projective Geometry; Camera Geometry (projection, intrinsic, extrinsic and distortion models; pinhole, equiangular and generic camera models); Camera Calibration Methods; Two-View Geometry (epipolar geometry, the fundamental matrix, epipolar curves)
3 Fundamentals of Stereo Vision: Stereo Calibration; Stereo Rectification; Stereo Correspondence; Triangulation (depth measurement, range field of measurement, measurement accuracy, quantization errors)
4 Virtual Cameras: Omni to Perspective Vision (forward, backward and fast backward mapping); Error Analysis; Accuracy Analysis; Performance Measurements; Virtual Perspective Views for Real-Time People Detection
5 Omnidirectional Stereo Vision: Geometrical Configurations (H-/V-binocular and trinocular omnistereo); Epipolar Rectification (cylindrical, equi-distance, stereographic); A Novel Spherical Stereo Vision Setup
6 A Novel Spherical Stereo Vision Algorithm: Matlab Simulation Environment; Extrinsic, Physical and Virtual Camera Configuration; Spherical Depth Map Generation; Error Analysis
7 Stereo Vision Demonstrator: Physical System Setup; System Calibration Strategy; Virtual Camera Setup; Software Realization; Experimental Results
8 Discussion and Outlook: Discussion of the Current Results and Further Need for Research; Review of the Different Approaches for Hemispherical Depth Map Generation; A Sample Algorithm for Human Behaviour Analysis; Closing Remarks
Appendices: Relevant Mathematics; Further Relevant Publications; Bibliography
