41

A Novel Approach for Spherical Stereo Vision

Findeisen, Michel 23 April 2015 (has links)
The Professorship of Digital Signal Processing and Circuit Technology at Chemnitz University of Technology conducts research in three-dimensional space measurement with optical sensors, a field that has made major progress in recent years. For example, innovative active techniques such as the “structured light” principle can measure even homogeneous surfaces and have found their way into the consumer electronics market with Microsoft’s Kinect®. Furthermore, high-resolution optical sensors enable powerful passive stereo vision systems for indoor surveillance, opening up new application domains such as security and assistance systems for domestic environments. However, a constrained field of view remains an essential limitation of all these technologies. For instance, to measure a volume the size of a living space, two to three 3D sensors currently have to be deployed, because the commonly used perspective projection principle limits the visible area to a field of view of approximately 120°. In contrast, fish-eye lenses allow the realization of omnidirectional projection models, with which the visible field of view can be enlarged to more than 180°. Combined with a 3D measurement approach, the number of sensors required for full room coverage can thus be reduced considerably. Motivated by the requirements of indoor surveillance, the present work focuses on combining the established stereo vision principle with omnidirectional projection methods; the complete 3D measurement of a living space by means of a single sensor is the major objective. As a starting point, Chapter 1 discusses the underlying requirements with reference to various relevant fields of application and states the specific purpose of the present work. Chapter 2 then reviews the necessary mathematical foundations of computer vision: based on the geometry of the optical imaging process, the projection characteristics of the relevant principles are discussed and a generic method for modeling fish-eye cameras is selected. Chapter 3 deals with the extraction of depth information using classical (perspectively imaging) binocular stereo vision configurations; in addition to a complete recap of the processing chain, the measurement uncertainties that occur are investigated. Chapter 4 then addresses methods for converting between different projection models. Using the example of mapping an omnidirectional to a perspective projection, a method is developed for accelerating this process and thereby reducing the associated computational load; the errors that occur, as well as the necessary adjustment of image resolution, are an integral part of the investigation. As a practical example, a person-tracking application demonstrates to what extent the use of “virtual views” can increase the recognition rate of people detectors in the context of omnidirectional monitoring. Subsequently, an extensive survey of omnidirectional stereo vision techniques is conducted in Chapter 5. It turns out that the complete 3D capture of a room is achievable by generating a hemispherical depth map; to this end, three cameras have to be combined into a trinocular stereo vision system.
As a basis for further research, a known trinocular stereo vision method is selected. It is hypothesized that a modified geometric constellation of the cameras, more precisely an equilateral triangle, together with an alternative method for determining the depth map, can increase the performance considerably. A novel method is presented that requires fewer operations to calculate the distance information and avoids the computationally costly depth-map fusion step needed by the comparative method. To evaluate the approach and the hypotheses, a hemispherical depth map is generated in Chapter 6 using the new method; simulation results, based on artificially generated 3D scene information and realistic system parameters, are presented and subjected to an error estimate. Chapter 7 introduces a demonstrator for generating real measurements and explains the methods applied for calibrating the system intrinsically and extrinsically. It turns out that the calibration procedure used cannot estimate the extrinsic parameters sufficiently well. Initial measurements produce a hemispherical depth map and thus confirm the viability of the concept, but also reveal the drawbacks of the calibration used. The current implementation of the algorithm shows almost real-time behaviour. Finally, Chapter 8 summarizes the results obtained in the course of these studies and discusses them in the context of comparable binocular and trinocular stereo vision approaches. For example, the simulations carried out showed a saving of up to 30% in stereo correspondence operations compared with the reference trinocular method. Furthermore, the concept introduced avoids a weighted-averaging step for depth-map fusion based on precision values that are costly to compute, while the achievable accuracy remains comparable for both trinocular approaches. In summary, within the scope of this thesis a measurement system has been developed that has great potential for future applications in industry, security in public spaces, and home environments.
Contents: Abstract; Zusammenfassung; Acronyms; Symbols; Acknowledgement
1 Introduction: Visual Surveillance; Challenges in Visual Surveillance; Outline of the Thesis
2 Fundamentals of Computer Vision Geometry: Projective Geometry (Euclidean and Projective Space); Camera Geometry (Geometrical Imaging Process: Projection, Intrinsic, Extrinsic and Distortion Models; Pinhole Camera Model: Complete Forward Model and Back Projection; Equiangular Camera Model; Generic Camera Models: Complete Forward Model and Back Projection); Camera Calibration Methods (Perspective and Omnidirectional Camera Calibration); Two-View Geometry (Epipolar Geometry; The Fundamental Matrix; Epipolar Curves)
3 Fundamentals of Stereo Vision: Introduction (The Concept Stereo Vision; Overview of a Stereo Vision Processing Chain); Stereo Calibration (Extrinsic Stereo Calibration With Respect to the Projective Error); Stereo Rectification (A Compact Algorithm for Rectification of Stereo Pairs); Stereo Correspondence (Disparity Computation; The Correspondence Problem); Triangulation (Depth Measurement; Range Field of Measurement; Measurement Accuracy; Measurement Errors: Quantization Error and its Statistical Distribution)
4 Virtual Cameras: Introduction and Related Works; Omni to Perspective Vision (Forward, Backward and Fast Backward Mapping); Error Analysis; Accuracy Analysis (Intrinsics of the Source and Target Cameras; Marginal Virtual Pixel Size); Performance Measurements; Virtual Perspective Views for Real-Time People Detection
5 Omnidirectional Stereo Vision: Introduction and Related Works (Geometrical Configurations: H-Binocular and V-Binocular Omni-Stereo with Panoramic Views; Binocular Omnistereo with Hemispherical Views; Trinocular Omnistereo; Miscellaneous Configurations); Epipolar Rectification (Cylindrical, Epipolar Equi-Distance and Epipolar Stereographic Rectification; Comparison of Rectification Methods); A Novel Spherical Stereo Vision Setup (Physical Omnidirectional Camera Configuration; Virtual Rectified Cameras)
6 A Novel Spherical Stereo Vision Algorithm: Matlab Simulation Environment; Extrinsic Configuration; Physical Camera Configuration; Virtual Camera Configuration (Focal Length; Prediscussion and Calculation of the Field of View; Marginal Virtual Pixel Sizes; Virtual Pixel Size Ratios; Resulting Virtual Camera Parameters); Spherical Depth Map Generation (Omnidirectional Imaging Process; Rectification Process; Rectified and Spherical Depth Map Generation; 3D Reprojection); Error Analysis
7 Stereo Vision Demonstrator: Physical System Setup; System Calibration Strategy (Intrinsic Calibration of the Physical Cameras; Extrinsic Calibration of the Physical and Virtual Cameras: Extrinsic Initialization, Two-View Stereo Calibration and Rectification, Three-View Stereo Rectification, Extrinsic Calibration Results); Virtual Camera Setup; Software Realization; Experimental Results (Qualitative Assessment; Performance Measurements)
8 Discussion and Outlook: Discussion of the Current Results and Further Need for Research (Assessment of the Geometrical Camera Configuration, the Depth Map Computation, the Depth Measurement Error and the Spherical Stereo Vision Demonstrator); Review of the Different Approaches for Hemispherical Depth Map Generation (Comparison of the Equilateral and the Right-Angled Three-View Approach; Review of the Three-View Approach in Comparison with the Two-View Method); A Sample Algorithm for Human Behaviour Analysis; Closing Remarks
Appendix A Relevant Mathematics: Cross Product by Skew Symmetric Matrix; Derivation of the Quantization Error; Derivation of the Statistical Distribution of Quantization Errors; Approximation of the Quantization Error for Equiangular Geometry
Appendix B Further Relevant Publications: H-Binocular, V-Binocular and Binocular (Hemispherical Views) Omnidirectional Stereo Vision; Trinocular Omnidirectional Stereo Vision; Miscellaneous Configurations
Bibliography; List of Figures; List of Tables; Affidavit; Theses; Thesen; Curriculum Vitae
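As a rough illustration of the omnidirectional-to-perspective backward mapping summarized above (the "virtual views" of Chapter 4), the following Python sketch renders a virtual perspective view from a fish-eye frame under an assumed equidistant projection model (r = f·θ). It is not the thesis implementation; the parameters, the rotation, and the stand-in image are placeholders.

```python
import numpy as np
import cv2

def virtual_perspective_view(omni_img, f_fish, cx, cy, f_virt, size, R):
    """Backward mapping: for every pixel of a virtual pinhole camera, build its
    viewing ray, rotate it into the fish-eye frame and project it with the
    equidistant model r = f_fish * theta, then sample the fish-eye image."""
    h, w = size
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    rays = np.stack([(u - w / 2) / f_virt, (v - h / 2) / f_virt, np.ones((h, w))], axis=-1)
    rays = rays @ R.T                                   # rotate into the fish-eye frame
    cos_t = np.clip(rays[..., 2] / np.linalg.norm(rays, axis=-1), -1.0, 1.0)
    theta = np.arccos(cos_t)                            # angle to the optical axis
    phi = np.arctan2(rays[..., 1], rays[..., 0])
    r = f_fish * theta                                  # equidistant fish-eye projection
    map_x = (cx + r * np.cos(phi)).astype(np.float32)
    map_y = (cy + r * np.sin(phi)).astype(np.float32)
    return cv2.remap(omni_img, map_x, map_y, cv2.INTER_LINEAR)

# Stand-in fish-eye frame and a virtual camera looking 45 degrees off the optical axis.
omni = np.zeros((960, 1280, 3), np.uint8)               # replace with a real fish-eye image
R = cv2.Rodrigues(np.array([0.0, np.pi / 4, 0.0]))[0]
view = virtual_perspective_view(omni, f_fish=320.0, cx=640.0, cy=480.0,
                                f_virt=400.0, size=(480, 640), R=R)
```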
42

Design, implementation & analysis of a low-cost, portable, medical measurement system through computer vision

Van der Westhuizen, Gareth 03 1900 (has links)
Thesis (MScEng (Mechanical and Mechatronic Engineering))--University of Stellenbosch, 2011. / ENGLISH ABSTRACT: In the Physiotherapy Division of the Faculty of Health Sciences on the Tygerberg Hospital Campus of the University of Stellenbosch, the challenge arose to develop a portable, affordable and yet accurate 3D measurement machine for the assessment of posture in school children in their classroom environment. The Division currently uses a state-of-the-art VICON commercial medical measuring machine to measure human posture in 3D in its physiotherapy clinic, but the system is not portable and is too expensive to cart around to different places for testing. To respond to this challenge, this Master’s thesis designed and analyzed a machine and its supporting system through both research on stereo-vision methodologies and empirical appraisal in the field. In the development process, the research had to overcome the limitations posed by small image resolutions and lens distortions that are typical of cheap cameras. The academic challenge lay in the development of an error prediction model, through Jacobian derivation and the Error Propagation Law, to predict the uncertainties of the angular measurements calculated by the system. The research culminated in a system whose accuracy is comparable to that of the VICON to within 3 mm, and which achieves 1.5 mm absolute accuracy within its own system for a measurement volume radius of 2.5 m. In addition, the developed error model predicts the angular error to within 0.02°. These results, for both system accuracy and the error model, exceed the expectations set by the initial challenge. The development of the machine was successful in providing a prototype tool that is suitable for commercial development for use by physiotherapists in human posture measurement and assessment. In its current incarnation, the machine will also serve the Engineering Faculty as the most fundamental form of a three-dimensional measuring apparatus using only basic theories and algorithms of stereo vision, thereby providing a basic experimental platform from which further scientific research on the theory and application of computer vision can be conducted. / AFRIKAANSE OPSOMMING: Die Fisioterapie Afdeling van die Fakulteit Gesondheidswetenskappe op die Tygerberg kampus van die Universiteit van Stellenbosch gebruik ’n allernuutste VICON kommersiële mediese meettoestel om menslike postuur in drie dimensies te meet. Vanuit hierdie Afdeling het die uitdaging ontstaan om ’n draagbare, bekostigbare, maar tog akkurate, drie-dimensionele meetapparaat geskik vir die meet van die postuur van skoolkinders in die klaskamer te ontwikkel. In aanvaarding van hierdie uitdaging, het hierdie Magistertesis ’n toestel en ondersteuningstels ontwerp en ontleed deur beide navorsing in stereo-visie metodiek en terplaatse beoordeling. In die ontwikkelingsproses moes die navorsing die beperkings wat deur klein-beeld resolusie en lens-distorsie (tipies van goedkoop kameras) meegebring word, oorkom. Die akademiese uitdaging lê in die ontwikkeling van ’n voorspellende foutmodel deur van die Jacobianse-afleiding en die Fout Propageringswet gebruik te maak om onsekerheid van hoeksberekening deur die stelsel te voorspel. Die navorsing het gelei tot ’n stelsel wat binne 3mm vergelykbaar is in akkuraatheid met dié van die VICON en ook 1.5mm absolute interne akkuraatheid het in ’n meet-volume radius van 2.5m radius.
Die ontwikkelde foutmodel is dus ’n presiese voorspeller van hoekfout tot binne 0.02° van boog. Die resultate met betrekking tot beide die akkuraatheid en die foutmodel het die oorspronklike verwagtinge van die uitdaging oortref. Die ontwikkeling was suksesvol in die skep van ’n prototipe-toestel geskik vir kommersiële ontwikkeling, vir gebruik deur fisioterapeute in die meting en evaluering van menslike postuur. Die stelsel is in sy fundamentele vorm, deur die gebruik van slegs basiese teorieë en algoritmes van stereo-visie, funksioneer as ’n drie-dimensionele meetapparaat. In die fundamentele vorm sal die stelsel die Ingenieursfakulteit dien as ’n basiese eksperimentele platform waarop verdere wetenskaplike navorsing in die teorie en toepassing van rekenaar-visie gedoen kan word.
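The error prediction model is described above only at a high level; the sketch below illustrates the general Jacobian and error-propagation-law idea on a hypothetical joint-angle measurement, assuming independent per-axis noise of 1.5 mm. It is not the thesis's actual model; the marker layout and noise assumptions are illustrative only.

```python
import numpy as np

def joint_angle(p):
    """Angle (radians) at marker B formed by 3D markers A, B, C;
    p is a flat vector [Ax, Ay, Az, Bx, ..., Cz]."""
    a, b, c = p[0:3], p[3:6], p[6:9]
    u, v = a - b, c - b
    cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def propagated_angle_std(p, sigma_xyz, eps=1e-6):
    """Error propagation law: var(theta) = J * Cov * J^T, with a numerical Jacobian J."""
    J = np.zeros(9)
    for i in range(9):
        dp = np.zeros(9)
        dp[i] = eps
        J[i] = (joint_angle(p + dp) - joint_angle(p - dp)) / (2 * eps)
    cov = np.eye(9) * sigma_xyz**2          # assume independent, equal per-axis noise
    return float(np.sqrt(J @ cov @ J))

# Hypothetical markers (metres) and a 1.5 mm per-axis measurement uncertainty.
p = np.array([0.0, 0.5, 2.5,   0.0, 0.0, 2.5,   0.3, -0.4, 2.5])
print(np.degrees(propagated_angle_std(p, sigma_xyz=0.0015)), "deg (1-sigma)")
```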
43

Compression of visual data into symbol-like descriptors in terms of a cognitive real-time vision system / Die Verdichtung der Videoeingabe in symbolische Deskriptoren im Rahmen des kognitiven Echtzeitvisionsystems

Abramov, Alexey 18 July 2012 (has links)
No description available.
44

Real-Time Stereo Vision for Resource Limited Systems

Tippetts, Beau J. 01 March 2012 (has links) (PDF)
A significant amount of research in the field of stereo vision has been published in the past decade. Considerable progress has been made in improving the accuracy of results as well as achieving real-time performance in obtaining those results. Although much of the literature does not address it, many applications are sensitive to the tradeoff between accuracy and speed that exists among stereo vision algorithms. Overall, this work aims to organize existing efforts and encourage new ones in the development of stereo vision algorithms for resource limited systems. It does this through a review of the status quo as well as providing both software and hardware designs of new stereo vision algorithms that offer an efficient tradeoff between speed and accuracy. A comprehensive review and analysis of stereo vision algorithms is provided with specific emphasis on real-time performance and suitability for resource limited systems. An attempt has been made to compile and present accuracy and runtime performance data for all stereo vision algorithms developed in the past decade. The tradeoff in accuracy that is typically made to achieve real-time performance is examined with the example of an existing highly accurate stereo vision algorithm that is modified to see how much speedup can be achieved. Two new stereo vision algorithms, GA Spline and Profile Shape Matching, are presented, with a hardware design of the latter also provided, making Profile Shape Matching available to both embedded processor-based and programmable hardware-based resource limited systems.
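To make the accuracy/runtime tradeoff concrete, the following sketch times two standard OpenCV matchers on a synthetic rectified pair and reports a bad-pixel rate. It is an illustrative benchmark harness only; the matchers, parameters, and synthetic data are not taken from the review or from the GA Spline and Profile Shape Matching algorithms.

```python
import time
import numpy as np
import cv2

def bad_pixel_rate(disp, gt, thresh=1.0):
    """Fraction of valid pixels whose disparity error exceeds thresh (a common accuracy metric)."""
    valid = disp > 0
    return float(np.mean(np.abs(disp[valid] - gt[valid]) > thresh))

# Synthetic rectified pair: the right image is the left shifted by a known disparity.
rng = np.random.default_rng(1)
true_disp = 12
left = cv2.GaussianBlur(rng.integers(0, 255, (240, 320), dtype=np.uint8), (5, 5), 0)
right = np.roll(left, -true_disp, axis=1)
gt = np.full(left.shape, float(true_disp))

matchers = {
    "StereoBM (fast, local)": cv2.StereoBM_create(numDisparities=64, blockSize=15),
    "StereoSGBM (slower, semi-global)": cv2.StereoSGBM_create(numDisparities=64, blockSize=5),
}
for name, m in matchers.items():
    t0 = time.perf_counter()
    disp = m.compute(left, right).astype(np.float32) / 16.0   # OpenCV disparities are fixed-point x16
    dt = time.perf_counter() - t0
    print(f"{name}: {1000 * dt:.1f} ms, bad-pixel rate {bad_pixel_rate(disp, gt):.3f}")
```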
45

Smooth Central and Non-Central Camera Models in Object Space

Rueß, Dominik 24 January 2024 (has links)
In den letzten Jahren sind immer mehr erschwingliche Kamera-Sensoren mit einer zunehmenden Vielfalt optischer Abbildungsfunktionen verfügbar geworden. Low-Cost-Optiken können aufgrund höherer Toleranzen und unterschiedlicher optischer Materialien von der gewünschten Lochkamera Metrik abweichen. Weitwinkel- und Fischaugenobjektive, verzerrende katadioptrische Objektive (spiegelnd und refraktiv) und andere ungewöhnliche Objektive weichen von der Annahme des Modells einer Lochkamera mit einer Brennweite ab. Actionkameras können die gesamte Umgebung mit zwei Objektiven abbilden, diese entsprechen meist nicht mehr dem Lochkameramodell. Kameras werden auch für Messaufgaben hinter zusätzlichen optischen Elementen eingesetzt. Die vorliegende Arbeit erweitert die ersten Erkenntnisse im Bereich der differenzierbaren (glatten) Kameramodelle ohne Einschränkungen. Viele existierende Modelle sind auf bestimmte Objektivtypen spezialisiert. In dieser Arbeit werden mehrere solcher allgemeinen Modelle eingeführt, ohne dass eine global feste Brennweite und spezielle Anforderungen an die Symmetrie der Abbildung erforderlich sind. Eine Einführung alternativer Fehlermetriken im Objektraum bringt auch enorme Rechenvorteile, da eine Abbildungsrichtung analytisch berechnet und viele der Berechnungsergebnisse konstant gehalten werden können. Zur Initialisierung solcher Modelle wird in dieser Arbeit eine generische lineare Kamera vorgestellt. Das wesentliche Merkmal dabei ist eine künstliche Transformation in höhere Dimensionen, welche mit linearen Verfahren weiterverwendet werden. Sie modellieren bereits nichtlineare Verzerrungen und Asymmetrien. Eine Multikamera-Kalibrierungssoftware wird ebenfalls beschrieben und implementiert. Das Ergebnis der Arbeit ist ein theoretischer Rahmen für glatte Kameramodelle im Objektraum selbst – anstelle der Abbildung in den Bildraum – mit mehreren konkreten Modellvorschlägen, Implementierungen und dem angepassten und erweiterten Kalibrierungsprozess. / In recent years, more and more affordable camera sensors with an increasing variety of optical imaging features have become available. Low-cost optics may deviate from the desired pinhole metric due to higher tolerances and different optical materials. Wide-angle and fisheye lenses, distorting catadioptric lenses (specular and refractive) and other unusual lenses deviate from the single-focal-length pinhole camera model assumption, which is sometimes intentional. Action cameras can image the entire environment using two lenses; these usually no longer correspond to the pinhole camera model. Cameras are also used for measuring tasks behind additional optical elements, with unforeseeable deviations in the line of sight. The present work expands the first findings in the field of differentiable (smooth) camera models without constraints. Many existing models specialise in certain types of lenses; in this work, several such general models are introduced without requiring a globally fixed focal length or special symmetry assumptions about the mapping. Introducing alternative error metrics in object space also brings enormous computational advantages, since one imaging direction can be calculated analytically and many of the intermediate results can be kept constant. To generate meaningful starting values for such models, this work introduces a generic linear camera. Its essential feature is an artificial transformation into higher dimensions; the transformed coordinates can then be used with linear methods and already model non-linear distortions and asymmetries. Multi-camera calibration software that efficiently implements these models is also described and implemented. The result of the work is a theoretical framework for smooth camera models in the object space itself, instead of the established mapping into the image space, with several concrete model proposals, implementations, and the adapted and extended calibration process.
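The object-space error metric is only described in prose above; the sketch below shows one generic way such a metric can be formulated, representing a (possibly non-central) camera as a per-pixel ray field and measuring the point-to-ray distance. The class, its interface, and the toy ray field are assumptions for illustration, not the models proposed in the thesis.

```python
import numpy as np

class RayCamera:
    """Generic non-central camera: every pixel stores a ray (origin o, unit direction d)
    in the camera frame, e.g. obtained from a smooth per-pixel model or a calibration."""
    def __init__(self, origins, directions):
        self.o = origins                       # (H, W, 3)
        self.d = directions / np.linalg.norm(directions, axis=-1, keepdims=True)

    def object_space_error(self, pixel, point_cam):
        """Distance from a 3D point (camera frame) to the viewing ray of `pixel`;
        this object-space residual replaces the usual image-space reprojection error."""
        o = self.o[pixel[1], pixel[0]]
        d = self.d[pixel[1], pixel[0]]
        w = point_cam - o
        return float(np.linalg.norm(w - (w @ d) * d))   # reject the component along the ray

# Toy example: a central, pinhole-like ray field for a 4x4 image with f = 2 pixels.
H, W, f = 4, 4, 2.0
u, v = np.meshgrid(np.arange(W) - W / 2, np.arange(H) - H / 2)
dirs = np.stack([u / f, v / f, np.ones_like(u, dtype=float)], axis=-1)
cam = RayCamera(np.zeros((H, W, 3)), dirs)
print(cam.object_space_error(pixel=(1, 2), point_cam=np.array([0.0, 0.0, 5.0])))
```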
46

Avaliação e proposta de sistemas de câmeras estéreo para detecção de pedestres em veículos inteligentes / Stereo cameras systems evaluation and proposal for pedestrian detection on intelligent vehicles

Nakamura, Angelica Tiemi Mizuno 06 December 2017 (has links)
Detecção de pedestres é uma importante área em visão computacional com o potencial de salvar vidas quando aplicada em veículos. Porém, essa aplicação exige detecções em tempo real, com alta acurácia e menor quantidade de falsos positivos possível. Durante os últimos anos, diversas ideias foram exploradas e os métodos mais recentes que utilizam arquiteturas profundas de redes neurais possibilitaram um grande avanço nesta área, melhorando significativamente o desempenho das detecções. Apesar desse progresso, a detecção de pedestres que estão distantes do veículo continua sendo um grande desafio devido às suas pequenas escalas na imagem, sendo necessária a avaliação da eficácia dos métodos atuais em evitar ou atenuar a gravidade dos acidentes de trânsito que envolvam pedestres. Dessa forma, como primeira proposta deste trabalho, foi realizado um estudo para avaliar a aplicabilidade dos métodos estado-da-arte para evitar colisões em cenários urbanos. Para isso, a velocidade e dinâmica do veículo, o tempo de reação e desempenho dos métodos de detecção foram considerados. Através do estudo, observou-se que em ambientes de tráfego rápido ainda não é possível utilizar métodos visuais de detecção de pedestres para assistir o motorista, pois nenhum deles é capaz de detectar pedestres que estão distantes do veículo e, ao mesmo tempo, operar em tempo real. Mas, ao considerar apenas pedestres em maiores escalas, os métodos tradicionais baseados em janelas deslizantes já conseguem atingir um bom desempenho e rápida execução. Dessa forma, com a finalidade de restringir a operação dos detectores apenas para pedestres em maiores escalas e assim, possibilitar a aplicação de métodos visuais em veículos, foi proposta uma configuração de câmeras que possibilitou obter imagens para um maior intervalo de distância à frente do veículo com pedestres em resolução quase duas vezes maior em comparação à uma câmera comercial. Resultados experimentais mostraram considerável melhora no desempenho das detecções, possibilitando superar a dificuldade causada pelas pequenas escalas dos pedestres nas imagens. / Pedestrian detection is an important area in computer vision with the potential to save lives when applied in vehicles. This application requires accurate detections and real-time operation while keeping the number of false positives as low as possible. Over the past few years, several ideas have been explored, including approaches with deep network architectures, which have reached considerably better performance. However, detecting pedestrians far from the camera is still challenging due to their small size in images, making it necessary to evaluate the effectiveness of existing approaches at avoiding or mitigating traffic accidents that involve pedestrians. Thus, as the first contribution of this work, a study was conducted to assess the applicability of state-of-the-art methods for collision avoidance in urban scenarios, taking into account the speed and dynamics of the vehicle, the reaction time, and the performance of the detection methods. The results show that it is still not possible to use a vision-based pedestrian detector for driver assistance on urban roads with fast-moving traffic, since none of the methods can detect pedestrians far from the vehicle while operating in real time. However, for large-scale pedestrians in images, traditional methods based on the sliding-window approach already perform reliably well with fast inference times.
Thus, in order to restrict the operation of the detectors to pedestrians at larger scales and thereby enable the application of vision-based methods in vehicles, a camera setup was proposed that captures images over a larger range of distances in front of the vehicle, with pedestrian resolution almost twice that of a commercial camera. Experimental results show a considerable improvement in detection performance, overcoming the difficulty caused by the small scale of distant pedestrians in images.
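To illustrate why the proposed camera setup helps, the following back-of-the-envelope pinhole calculation shows how a pedestrian's image height scales with focal length and distance; doubling the focal length roughly doubles the pedestrian's resolution in pixels. The lens and pixel values are hypothetical, not the thesis hardware.

```python
def pedestrian_height_px(height_m, dist_m, focal_mm, pixel_um):
    """Pinhole model: image height in pixels of an object of height_m at dist_m."""
    f_px = focal_mm * 1e3 / pixel_um          # focal length expressed in pixels
    return f_px * height_m / dist_m

# Hypothetical numbers: a 1.7 m pedestrian seen by a 3.6 mm wide-angle lens
# versus a 7.2 mm lens, both with 3.75 um pixels.
for dist in (10, 30, 60):
    wide = pedestrian_height_px(1.7, dist, focal_mm=3.6, pixel_um=3.75)
    tele = pedestrian_height_px(1.7, dist, focal_mm=7.2, pixel_um=3.75)
    print(f"{dist:3d} m: {wide:6.1f} px vs {tele:6.1f} px  (x{tele / wide:.1f})")
```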
47

Análise comparativa de algoritmos de correlação local baseados em intensidade luminosa. / Comparative analysis of intensity-based local correlation algorithms.

Nishimura, Claudio Massumi Oda 05 May 2008 (has links)
Este trabalho apresentou uma análise comparativa de algumas técnicas de correlações locais baseadas em intensidade luminosa, as quais são: Soma das Diferenças Absolutas, Soma dos Quadrados das Diferenças, Correlação Cruzada Normalizada, Transformada Rank e Transformada Censo. Para as comparações foram adotadas imagens estéreos disponíveis em repositórios de universidades e suas variantes com a inclusão de ruído e variação de intensidade luminosa. Após a implementação dos algoritmos escolhidos e a comparação de seus resultados, foi obtido que a Transformada Censo é um dos métodos com os piores resultados apresentando grande quantidade de correlações erradas. Foram apresentadas modificações para melhorar a performance desse método e os resultados obtidos foram melhores. / This work presents a comparative analysis of several local, intensity-based correlation techniques: Sum of Absolute Differences, Sum of Squared Differences, Normalized Cross-Correlation, Rank Transform, and Census Transform. For the comparisons, stereo data sets available from university repositories were adopted, together with variants created by adding noise and luminosity variation. After implementing the chosen algorithms and comparing their results, the Census Transform turned out to be one of the methods with the worst results, producing a large number of false correspondences. Modifications to improve the performance of this method are presented, and the results obtained were better than those of the original Census Transform.
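The compared cost measures are standard; the sketch below shows minimal SAD and Census/Hamming costs on a pair of toy windows, including the brightness-offset case where the Census Transform is unaffected while SAD is not. It is a generic illustration, not the dissertation's implementation or its proposed modifications.

```python
import numpy as np

def sad_cost(left_win, right_win):
    """Sum of Absolute Differences between two equally sized image windows."""
    return int(np.sum(np.abs(left_win.astype(np.int32) - right_win.astype(np.int32))))

def census(win):
    """Census transform of a window: bit i is 1 where pixel i is brighter than the
    centre; being comparison-based, it is robust to gain and offset changes."""
    c = win[win.shape[0] // 2, win.shape[1] // 2]
    return np.packbits((win > c).flatten())

def census_cost(left_win, right_win):
    """Hamming distance between the Census signatures of two windows."""
    diff = np.bitwise_xor(census(left_win), census(right_win))
    return int(np.unpackbits(diff).sum())

# Toy 5x5 windows: identical structure, but the right one has a brightness offset.
rng = np.random.default_rng(0)
L = rng.integers(0, 200, (5, 5), dtype=np.uint8)
R = np.clip(L.astype(np.int32) + 40, 0, 255).astype(np.uint8)
print("SAD:", sad_cost(L, R), " Census/Hamming:", census_cost(L, R))
```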
48

Criação de mapas de disparidades empregando análise multi-resolução e agrupamento perceptual / Disparity maps generation employing multi-resolution analysis and Gestalt Grouping

Laureano, Gustavo Teodoro 06 March 2008 (has links)
O trabalho apresentado por essa dissertação busca contribuir com a atenuação do problema da correspondência em visão estéreo a partir de uma abordagem local de soluções. São usadas duas estratégias como solução às ambigüidades e às oclusões da cena: a análise multi-resolução das imagens empregando a estrutura piramidal, e a força de agrupamento perceptual, conhecida como Gestalt theory na psicologia. Inspirado no sistema visual humano, a visão estéreo é uma área de grande interesse em visão computacional, e está relacionada à recuperação de informações tridimensionais de uma cena a partir de imagens da mesma. Para isso, as imagens são capturadas em posições diferentes para o futuro relacionamento das várias projeções de um mesmo ponto 3D. Apesar de ser estudada há quase quatro décadas, ela ainda apresenta problemas de difícil solução devido às dificuldades relacionadas às distorções produzidas pela mudança da perspectiva de visualização. Dentre esses problemas destacam-se os relacionados à oclusão de pontos e também à ambigüidade gerada pela repetição ou ausência de textura nas imagens. Esses por sua vez compõem a base do problema estéreo, chamado de problema da correspondência. Os resultados obtidos são equivalentes aos obtidos por técnicas globais, com a vantagem de requerer menor complexidade computacional. O uso da teoria de agrupamento perceptual faz desse trabalho um método moderno de estimação de disparidades, visto que essa técnica é alvo de atenção especial em recentes estudos na área de visão computacional. / This work aims to contribute to mitigating the correspondence problem in stereo vision through a local approach. Two strategies are used to handle ambiguities and occlusions: multi-resolution analysis with image pyramids, and perceptual grouping strength, known in psychology as Gestalt theory. Inspired by the human visual system, stereo vision is an area of great interest in computer vision and concerns the recovery of 3D information about a scene from images of it. To this end, the images are captured from different positions so that the several projections of the same 3D point can later be associated. Although it has been studied for almost four decades, stereo vision still presents problems that are difficult to solve because of the distortions produced by the change of viewing perspective. Among these problems, occlusions and the ambiguity generated by repeated or absent texture deserve special attention; together they form the core of the stereo problem, known as the correspondence problem. The results obtained are equivalent to those produced by global techniques, with the advantage of requiring lower computational complexity. The use of perceptual grouping theory makes this a modern disparity estimation method, as this technique has received special attention in recent computer vision research.
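The multi-resolution strategy is described only in prose; the sketch below illustrates the general coarse-to-fine idea with a brute-force SAD matcher whose full-resolution search is restricted around a disparity prior from the coarser pyramid level. The Gestalt-grouping term is not reproduced; window sizes, search ranges, and the synthetic image pair are assumptions.

```python
import numpy as np
import cv2

def sad_disparity(left, right, max_disp, win=5, prior=None, radius=2):
    """Brute-force SAD block matching; if a coarse `prior` disparity map is given,
    the search at each pixel is restricted to prior +/- radius (coarse-to-fine idea)."""
    h, w = left.shape
    pad = win // 2
    L = cv2.copyMakeBorder(left, pad, pad, pad, pad, cv2.BORDER_REPLICATE).astype(np.int32)
    R = cv2.copyMakeBorder(right, pad, pad, pad, pad, cv2.BORDER_REPLICATE).astype(np.int32)
    disp = np.zeros((h, w), np.float32)
    for y in range(h):
        for x in range(w):
            lw = L[y:y + win, x:x + win]
            lo, hi = (0, max_disp) if prior is None else \
                     (max(0, int(prior[y, x]) - radius), int(prior[y, x]) + radius + 1)
            costs = [(np.abs(lw - R[y:y + win, x - d:x - d + win]).sum(), d)
                     for d in range(lo, min(hi, x + 1))]
            disp[y, x] = min(costs)[1] if costs else 0
    return disp

# Synthetic rectified pair with a known shift of 8 pixels.
rng = np.random.default_rng(0)
left = cv2.GaussianBlur(rng.integers(0, 255, (120, 160), dtype=np.uint8), (5, 5), 0)
right = np.roll(left, -8, axis=1)

# Coarse-to-fine: full search at half resolution, then a narrow prior-guided search.
coarse = sad_disparity(cv2.pyrDown(left), cv2.pyrDown(right), max_disp=16)
prior = cv2.resize(coarse, (left.shape[1], left.shape[0]), interpolation=cv2.INTER_NEAREST) * 2.0
fine = sad_disparity(left, right, max_disp=32, prior=prior)
print("median fine disparity:", float(np.median(fine)))
```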
49

Stereo Vision-based Autonomous Vehicle Navigation

Meira, Guilherme Tebaldi 26 April 2016 (has links)
Research efforts on the development of autonomous vehicles date back to the 1920s and recent announcements indicate that those cars are close to becoming commercially available. However, the most successful prototypes that are currently being demonstrated rely on an expensive set of sensors. This study investigates the use of an affordable vision system as a planner for the Robocart, an autonomous golf cart prototype developed by the Wireless Innovation Laboratory at WPI. The proposed approach relies on a stereo vision system composed of a pair of Raspberry Pi computers, each one equipped with a Camera Module. They are connected to a server and their clocks are synchronized using the Precision Time Protocol (PTP). The server uses timestamps to obtain a pair of simultaneously captured images. Images are processed to generate a disparity map using stereo matching and points in this map are reprojected to the 3D world as a point cloud. Then, an occupancy grid is built and used as input for an A* graph search that finds a collision-free path for the robot. Due to the non-holonomic constraints of a car-like robot, a Pure Pursuit algorithm is used as the control method to guide the robot along the computed path. The cameras are also used by a Visual Odometry algorithm that tracks points on a sequence of images to estimate the position and orientation of the vehicle. The algorithms were implemented using the C++ language and the open source library OpenCV. Tests in a controlled environment show promising results and the interfaces between the server and the Robocart have been defined, so that the proposed method can be used on the golf cart as soon as the mechanical systems are fully functional.
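The processing chain described above (dense disparity, reprojection to a point cloud, and an occupancy grid fed to an A* search) can be sketched with standard OpenCV calls as below. The Q matrix, matcher settings, grid dimensions, and synthetic images are placeholders rather than the Robocart's actual calibration or code; the A* planner and Pure Pursuit controller are not shown.

```python
import numpy as np
import cv2

# Synthetic rectified pair standing in for the two Raspberry Pi camera frames; the Q
# matrix below is hypothetical (in the real pipeline it comes from cv2.stereoRectify).
rng = np.random.default_rng(2)
left = cv2.GaussianBlur(rng.integers(0, 255, (480, 640), dtype=np.uint8), (5, 5), 0)
right = np.roll(left, -20, axis=1)
Q = np.float32([[1, 0, 0, -320], [0, 1, 0, -240], [0, 0, 0, 500], [0, 0, 1.0 / 0.12, 0]])

# 1. Dense disparity with semi-global matching (OpenCV returns fixed-point x16 values).
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disp = sgbm.compute(left, right).astype(np.float32) / 16.0

# 2. Reproject valid disparities to a 3D point cloud in the camera frame.
cloud = cv2.reprojectImageTo3D(disp, Q)
pts = cloud[disp > 0]

# 3. Rasterise the cloud into a bird's-eye occupancy grid for the path planner;
#    a real system would also remove the ground plane before counting points.
cell, half_w, depth = 0.1, 10.0, 20.0                  # 10 cm cells, 20 m x 20 m area
sel = (np.abs(pts[:, 0]) < half_w) & (pts[:, 2] > 0.5) & (pts[:, 2] < depth)
gx = ((pts[sel, 0] + half_w) / cell).astype(int)
gz = (pts[sel, 2] / cell).astype(int)
grid = np.zeros((int(depth / cell), int(2 * half_w / cell)), np.int32)
np.add.at(grid, (gz, gx), 1)
occupied = grid > 5                                    # cells the planner treats as blocked
print("occupied cells:", int(occupied.sum()))
```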
50

Pedestrian Detection Based on Data and Decision Fusion Using Stereo Vision and Thermal Imaging

Sun, Roy 25 April 2016 (has links)
Pedestrian detection is a canonical instance of object detection that remains a popular topic of research and a key problem in computer vision due to its diverse applications, which have the potential to improve quality of life. In recent years, the number of approaches to detecting pedestrians in monocular and binocular images has grown steadily. However, the use of multispectral imaging is still uncommon. This thesis presents a novel approach to data and feature fusion in a multispectral imaging system for pedestrian detection. It also includes the design and building of a test rig that allows quick data collection during real-world driving. The mathematical theory of the trifocal tensor is applied to post-process these data, allowing pixel-level data fusion across the multispectral data set. Performance results based on commonly used SVM classification architectures are evaluated against the collected data set. Lastly, a novel cascaded SVM architecture used in both classification and detection is discussed. Performance improvements through the use of feature fusion are demonstrated.
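As a hedged illustration of the feature-fusion idea, the sketch below concatenates HOG descriptors from registered visible and thermal crops and trains a single linear SVM; the cascaded SVM architecture and the trifocal-tensor registration are not reproduced here, and all data are random stand-ins.

```python
import numpy as np
import cv2
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

hog = cv2.HOGDescriptor()          # default 64x128 pedestrian detection window

def fused_descriptor(visible_crop, thermal_crop):
    """Feature-level fusion: concatenate the HOG descriptors of the registered
    visible and thermal crops of the same candidate window."""
    return np.concatenate([hog.compute(visible_crop).ravel(),
                           hog.compute(thermal_crop).ravel()])

# Stand-in data: random 64x128 crops playing the role of registered visible/thermal
# windows, half labelled pedestrian (1), half background (0).
rng = np.random.default_rng(0)
X = np.array([fused_descriptor(rng.integers(0, 255, (128, 64), dtype=np.uint8),
                               rng.integers(0, 255, (128, 64), dtype=np.uint8))
              for _ in range(200)])
y = np.array([1] * 100 + [0] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LinearSVC(C=1.0).fit(X_tr, y_tr)       # a single linear SVM on the fused features
print("held-out accuracy:", clf.score(X_te, y_te))
```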
