271

Experimental comparison of a macroscopic draping simulation for dry non-crimp fabric preforming on a complex geometry by means of optical measurement

Mallach, Annegret, Härtel, Frank, Heieck, Frieder, Fuhr, Jan-Philipp, Middendorf, Peter, Gude, Maik 29 October 2019 (has links)
Scope of the presented work is a detailed comparison of a macroscopic draping model with the real fibre architecture of a complex non-crimp-fabric preform using a new robot-based optical measurement system. By means of a preliminary analytical process design approach, a preforming test centre is set up to manufacture dry non-crimp-fabric preforms. A variable blank holder setup is used to investigate the effect of different process parameters on the fibre architecture. The real fibre architecture of these preforms is captured by the optical measurement system, which generates a three-dimensional model containing information about the fibre orientation along the entire surface of the preform. The measured fibre orientations are then compared with the calculated ones from the simulation in a three-dimensional overlay file. The results show that the analytical approach is able to predict local hot spots with high shear angles on the preform. The macroscopic simulation shows a higher sensitivity towards changes in blank holder pressure than the real process, which limits the approach's ability to precisely predict fibre architecture parameters on complex geometries.
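The shear angle referred to above is the key draping parameter: the deviation of the measured angle between warp and weft fibre directions from the undeformed 90°. A minimal sketch of this standard definition (function names are illustrative, not taken from the paper):

```python
import numpy as np

def shear_angle_deg(warp_dir, weft_dir):
    """Shear angle of a fabric unit cell: deviation of the angle between
    the measured warp and weft fibre directions from the undeformed 90 degrees."""
    w = np.asarray(warp_dir, dtype=float)
    f = np.asarray(weft_dir, dtype=float)
    cos_a = np.dot(w, f) / (np.linalg.norm(w) * np.linalg.norm(f))
    return 90.0 - np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Weft sheared from 90 to 65 degrees relative to the warp -> 25 degrees shear
print(shear_angle_deg([1.0, 0.0], [np.cos(np.radians(65)), np.sin(np.radians(65))]))
```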
272

Superpixels and their Application for Visual Place Recognition in Changing Environments

Neubert, Peer 01 December 2015 (has links)
Superpixels are the result of an image oversegmentation. They are an established intermediate-level image representation used for various applications including object detection, 3D reconstruction, and semantic segmentation. While there are various approaches to create such segmentations, there is a lack of knowledge about their properties; in particular, contradicting results have been published in the literature. This thesis identifies segmentation quality, stability, compactness, and runtime as important properties of superpixel segmentation algorithms. While established evaluation methodologies exist for some of these properties, this is not the case for segmentation stability and compactness. This thesis therefore presents two novel metrics for their evaluation based on ground-truth optical flow. These two metrics are used together with other novel and existing measures to create a standardized benchmark for superpixel algorithms, which is used for an extensive comparison of available algorithms. The evaluation results motivate two novel segmentation algorithms that better balance the trade-offs of existing algorithms: the proposed Preemptive SLIC algorithm incorporates a local preemption criterion into the established SLIC algorithm and saves about 80% of the runtime; the proposed Compact Watershed algorithm combines seeded watershed segmentation with compactness constraints to create regularly shaped, compact superpixels at the even higher speed of the plain watershed transformation.

Operating autonomous systems based on visual navigation over the course of days, weeks, or months requires repeated recognition of places despite severe appearance changes, as induced for example by illumination changes, day-night cycles, changing weather, or seasons - a severe problem for existing methods. The second part of this thesis therefore presents two novel approaches that incorporate superpixel segmentations into place recognition in changing environments. The first is the learning of systematic appearance changes: instead of matching images between, for example, summer and winter directly, an additional prediction step is proposed. Based on superpixel vocabularies, a predicted image is generated that shows how the summer scene might look in winter, or vice versa. The presented results show that, if certain assumptions on the appearance changes and the available training data are met, existing holistic place recognition approaches can benefit from this additional prediction step. Holistic approaches to place recognition are known to fail in the presence of viewpoint changes. This thesis therefore presents a new place recognition system based on local landmarks and Star-Hough, a novel approach that incorporates the spatial arrangement of local image features into the computation of image similarities. It is based on star graph models and Hough voting and is particularly suited for local features with low spatial precision and high outlier rates, as expected in the presence of appearance changes. The novel landmarks combine local region detectors with descriptors based on convolutional neural networks. This thesis presents and evaluates several new approaches to incorporate superpixel segmentations into local region detection. While the proposed system can be used with different types of local regions, in particular the combination with regions obtained from the novel multiscale superpixel grid proves superior to state-of-the-art methods - a promising basis for practical applications.
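For a flavour of the trade-offs discussed above, a short usage sketch with scikit-image, whose `slic` function and compactness-constrained `watershed` expose the same quality-versus-regularity trade-off (parameter values here are arbitrary; this is illustrative usage, not the thesis's own code):

```python
import numpy as np
from skimage import data, segmentation, util
from skimage.color import rgb2gray
from skimage.filters import sobel

img = util.img_as_float(data.astronaut())

# SLIC: local k-means clustering in combined colour/position space;
# `compactness` trades colour fidelity against regular superpixel shape.
slic_labels = segmentation.slic(img, n_segments=400, compactness=10.0,
                                start_label=1)

# Seeded watershed on an image gradient with a compactness constraint
# produces regularly shaped regions, in the spirit of Compact Watershed.
gradient = sobel(rgb2gray(img))
cw_labels = segmentation.watershed(gradient, markers=400, compactness=0.001)

print(slic_labels.max(), cw_labels.max())  # number of superpixels per method
```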
273

MEMS-Laser-Display-System / MEMS Laser Display System

Specht, Hendrik 19 October 2011 (has links) (PDF)
In this thesis, the system aspects of MEMS-scanner-based laser display technology related to beam deflection are analysed theoretically, and from the results a laser display system is implemented in practice as a test platform. Two variants are realized: one approach based on two 1D scanners and another using a single 2D scanner. In addition, an image-based multi-parameter test method is developed that is suitable both for testing completed beam deflection units and projection modules and for comprehensive, time-efficient wafer-level testing of MEMS scanners. This method is used to characterize the two realized variants of the laser display. Starting from the properties of the human visual system and the resulting requirements on the image, together with a system-theoretical analysis of the mechanical behaviour of MEMS scanners, a key focus is the generation of drive signals for the resonant operation of the fast axis and the quasistatic operation of the slow axis. Besides the purely digital controller and filter design and several linearization measures, this also includes the derivation of an FPGA-based video signal processing chain for converting scan pattern, timing regime, and resolution, with corresponding synchronization of beam deflection and laser modulation. Based on the resulting insights into the relationship between scanner/system parameters and image parameters, combinations of test images and image processing algorithms are developed and, arranged in a sequence and complemented by a calibration procedure, completed into a test method for MEMS scanners. The results of this work originated in industrially commissioned R&D projects and feed into the client's ongoing continuation of the topic.
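To illustrate the interplay of resonant beam deflection and laser modulation described above: the fast axis of a resonant MEMS scanner follows a sinusoid, so producing equally spaced pixel columns requires an arcsine-warped pixel clock. A minimal sketch of this relationship (values and names are illustrative, not from the thesis):

```python
import numpy as np

def pixel_times(num_pixels, f_res, amplitude=1.0):
    """Firing times for equally spaced pixel columns during one rising
    half-sweep of a resonant axis x(t) = A*sin(2*pi*f*t). Inverting the
    trajectory gives the arcsine-warped clock t = arcsin(x/A)/(2*pi*f)."""
    x = np.linspace(-amplitude, amplitude, num_pixels)
    return np.arcsin(x / amplitude) / (2.0 * np.pi * f_res)

t = pixel_times(num_pixels=800, f_res=18e3)  # e.g. an 18 kHz fast axis
# The pixel clock slows near the turning points, where the mirror moves
# slowly, and is fastest at the sweep centre:
print(np.diff(t)[0], np.diff(t)[400])
```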
274

Modelling cortical laminae with 7T magnetic resonance imaging

Wähnert, Miriam 12 May 2014 (has links)
To fully understand how the brain works, it is necessary to relate the brain's function to its anatomy. Cortical anatomy is subject-specific. It is characterized by the thickness and number of intracortical layers, which differ from one cortical area to the next, and each cortical area fulfills a certain function. With magnetic resonance imaging (MRI) it is possible to study structure and function in-vivo within the same subject. The resolution of ultra-high field MRI at 7T makes it possible to resolve intracortical anatomy. This opens the possibility to relate the cortical function of a subject to its corresponding individual structural area, which is one of the main goals of neuroimaging.

To parcellate the cortex based on its intracortical structure in-vivo, the images firstly have to be quantitative and homogeneous so that they can be processed fully automatically, and the resolution has to be high enough to resolve intracortical layers. Therefore, the in-vivo MR images acquired for this work are quantitative T1 maps at 0.5 mm isotropic resolution. Secondly, computational tools are needed to analyze the cortex observer-independently. The most recent tools designed for this task are presented in this thesis. They comprise the segmentation of the cortex and the construction of a novel equi-volume coordinate system of cortical depth. The equi-volume model is not restricted to in-vivo data, but is also used on ultra-high resolution post-mortem MRI data, and it could equally be used on 3D volumes reconstructed from 2D histological stains.

An equi-volume coordinate system firstly yields intracortical surfaces that follow anatomical layers all along the cortex, even within severely folded areas where previous models fail. MR intensities can be mapped onto these equi-volume surfaces to identify the location and size of some structural areas. Surfaces computed with previous coordinate systems are shown to cross into different anatomical layers and therefore show artefactual patterns. Secondly, with the coordinate system one can compute cortical traverses perpendicular to the intracortical surfaces. Sampling intensities along equi-volume traverses results in cortical profiles that reflect an anatomical layer pattern specific to every structural area. It is shown that profiles constructed with previous coordinate systems of cortical depth disguise the anatomical layer pattern or even show a wrong pattern; in contrast to equi-volume profiles, these profiles from previous models are not suited to analyze the cortex observer-independently and hence cannot be used for automatic delineations of cortical areas. Equi-volume profiles from four different structural areas are presented. These profiles show area-specific shapes that are to a certain degree preserved across subjects. Finally, the profiles are used to classify primary areas observer-independently.

Contents: 1 Introduction; 2 Theoretical Background (neuroanatomy of the human cerebral cortex; nuclear magnetic resonance; magnetic resonance imaging); 3 Image analysis with computed cortical laminae; 4 Materials and methods (histology; MR scanning; image preprocessing and experiments); 5 Computational cortical layering models (implementation of standard models; the novel anatomically motivated equi-volume model); 6 Validation of the novel equi-volume model (versus previous models on post-mortem and in-vivo data; dependence of computed surfaces on cortical curvature); 7 Applying the equi-volume model: analyzing cortical architecture in-vivo in different structural areas; 8 Discussion; 9 Conclusion and outlook.
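The equi-volume idea above can be made concrete with a small worked example: if the local cross-sectional area of a cortical column is assumed to vary linearly between its white-surface area and its pial-surface area, the depth enclosing a given volume fraction follows from a quadratic equation. A sketch under exactly that assumption, not the thesis's implementation:

```python
import numpy as np

def equivolume_depth(alpha, area_white, area_pial):
    """Depth fraction (0 = white surface, 1 = pial surface) at which a
    cortical column encloses the volume fraction `alpha`, assuming the
    cross-sectional area varies linearly with depth. Solving
      rho*Aw + rho**2*(Ap - Aw)/2 = alpha*(Aw + Ap)/2
    for rho gives the closed form below (rho -> alpha as Ap -> Aw,
    i.e. flat cortex recovers the equidistant model)."""
    aw = np.asarray(area_white, dtype=float)
    ap = np.asarray(area_pial, dtype=float)
    return (-aw + np.sqrt((1 - alpha) * aw**2 + alpha * ap**2)) / (ap - aw)

# In a gyral crown where the pial surface has twice the white-surface area,
# the mid-volume surface sits at ~0.58 of the thickness from the white
# surface, shifted toward the larger side (vs. 0.5 for equidistant layers):
print(equivolume_depth(0.5, 1.0, 2.0))
```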
275

Schlussbericht zum InnoProfile Forschungsvorhaben sachsMedia - Cooperative Producing, Storage, Retrieval, and Distribution of Audiovisual Media (FKZ: 03IP608)

Berger, Arne, Eibl, Maximilian, Heinich, Stephan, Knauf, Robert, Kürsten, Jens, Kurze, Albrecht, Rickert, Markus, Ritter, Marc January 2012 (has links)
Over the past 20 years, around 60 private regional television stations have established themselves in Saxony, more than in any other German federal state. They often take on information-provision tasks that the public broadcasters fulfil only insufficiently. The InnoProfile research project sachsMedia focused on the existential and multifaceted transition faced by small and medium-sized enterprises in the field of regional media distribution. Particularly critical for the media industry was the switchover from analogue to digital television broadcasting in 2010. The sachsMedia research initiative addressed the underlying problems and worked on fundamental research questions in the two thematic areas Annotation & Retrieval and Media Distribution. This report summarizes the results achieved.
276

A Novel Approach for Spherical Stereo Vision

Findeisen, Michel 23 April 2015 (has links)
The Professorship of Digital Signal Processing and Circuit Technology of Chemnitz University of Technology conducts research in the field of three-dimensional space measurement with optical sensors. In recent years this field has made major progress. For example, innovative active techniques such as the "structured light" principle can measure even homogeneous surfaces and are finding their way into the consumer electronics market, most visibly in Microsoft's Kinect®. Furthermore, high-resolution optical sensors enable powerful, passive stereo vision systems in the field of indoor surveillance, opening up new application domains such as security and assistance systems for domestic environments. However, the constrained field of view remains an essential limitation of all these technologies. For instance, to measure a volume the size of a living space, two to three 3D sensors have to be deployed today, because the commonly used perspective projection principle limits the visible area to a field of view of approximately 120°. Novel fish-eye lenses, on the contrary, allow the realization of omnidirectional projection models, which enlarge the visible field of view to more than 180°. In combination with a 3D measurement approach, the number of sensors required for entire room coverage can thus be reduced considerably. Motivated by the requirements of indoor surveillance, the present work focuses on the combination of the established stereo vision principle with omnidirectional projection methods; the entire 3D measurement of a living space by means of one single sensor is the major objective.

As a starting point, Chapter 1 discusses the underlying requirements with reference to various relevant fields of application and states the distinct purpose of the present work. Chapter 2 subsequently reviews the necessary mathematical foundations of computer vision: based on the geometry of the optical imaging process, the projection characteristics of relevant principles are discussed and a generic method for modeling fish-eye cameras is selected. Chapter 3 deals with the extraction of depth information using classical (perspectively imaging) binocular stereo vision configurations; in addition to a complete recap of the processing chain, the measurement uncertainties that occur are investigated. Chapter 4 addresses methods to convert between different projection models. Using the example of mapping an omnidirectional onto a perspective projection, a method is developed for accelerating this process and thereby reducing the associated computational load; the errors that occur, as well as the necessary adjustment of image resolution, are an integral part of the investigation. As a practical example, a person-tracking application demonstrates to what extent the use of "virtual views" can increase the recognition rate of people detectors in the context of omnidirectional monitoring. Chapter 5 then conducts an extensive survey of omnidirectional stereo vision techniques. It turns out that the complete 3D capture of a room is achievable by generating a hemispherical depth map, for which three cameras have to be combined into a trinocular stereo vision system. A known trinocular stereo vision method is selected as a basis for further research. Furthermore, it is hypothesized that a modified geometric constellation of the cameras, namely an equilateral triangle, together with an alternative method of determining the depth map, can increase the performance considerably. A novel method is presented that requires fewer operations to calculate the distance information and avoids the computationally costly depth-map fusion step needed by the comparative method. In order to evaluate the presented approach as well as the hypotheses, Chapter 6 generates a hemispherical depth map by means of the new method; simulation results, based on artificially generated 3D space information and realistic system parameters, are presented and subjected to an error estimate. Chapter 7 introduces a demonstrator for generating real measurement data and explains the methods applied for calibrating the system intrinsically as well as extrinsically. It turns out that the calibration procedure utilized cannot estimate the extrinsic parameters sufficiently. Initial measurements produce a hemispherical depth map and thus confirm the operativeness of the concept, but also reveal the drawbacks of the calibration used. The current implementation of the algorithm shows almost real-time behaviour. Finally, Chapter 8 summarizes the results obtained in the course of the studies and discusses them in the context of comparable binocular and trinocular stereo vision approaches. For example, the simulations carried out produced a saving of up to 30% in stereo correspondence operations in comparison with the reference trinocular method. Furthermore, the concept introduced avoids a weighted-averaging step for depth map fusion based on precision values that would have to be calculated at considerable cost, while the achievable accuracy remains comparable for both trinocular approaches. In summary, a measurement system has been developed within this thesis that has great potential for future application fields in industry, security in public spaces, and home environments.

Contents: 1 Introduction; 2 Fundamentals of Computer Vision Geometry (projective geometry; camera geometry; camera calibration methods; two-view geometry); 3 Fundamentals of Stereo Vision (stereo calibration; stereo rectification; stereo correspondence; triangulation and measurement errors); 4 Virtual Cameras (omni-to-perspective mapping; fast backward mapping; error and accuracy analysis; performance measurements; virtual perspective views for real-time people detection); 5 Omnidirectional Stereo Vision (geometrical configurations; epipolar rectification; a novel spherical stereo vision setup); 6 A Novel Spherical Stereo Vision Algorithm (simulation environment; camera configurations; spherical depth map generation; error analysis); 7 Stereo Vision Demonstrator (physical setup; calibration strategy; software realization; experimental results); 8 Discussion and Outlook; A Relevant Mathematics; B Further Relevant Publications.
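For context on the depth measurement errors analysed in Chapter 3 above: in a rectified two-view setup, depth follows from disparity as z = f·b/d, so a one-step disparity quantization produces a depth error that grows quadratically with distance. A small sketch of these standard relations (not the thesis's novel spherical algorithm):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classical rectified two-view triangulation: z = f*b/d."""
    return focal_px * baseline_m / disparity_px

def quantization_error(z, focal_px, baseline_m, disparity_step_px=1.0):
    """First-order depth uncertainty caused by quantized disparity:
    |dz| ~ z**2 / (f*b) * dd -- it grows quadratically with distance."""
    return z**2 / (focal_px * baseline_m) * disparity_step_px

z = depth_from_disparity(disparity_px=20.0, focal_px=600.0, baseline_m=0.2)
print(z, quantization_error(z, 600.0, 0.2))  # 6.0 m, ~0.3 m per disparity step
```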
277

Visual Place Recognition in Changing Environments using Additional Data-Inherent Knowledge

Schubert, Stefan 15 November 2023 (has links)
Visual place recognition is the task of finding the same places in a set of database images for a given set of query images. This becomes particularly challenging in long-term applications when the environmental condition changes between or within the database and query set, e.g., from day to night. Visual place recognition in changing environments can be used when global position data like GPS is unavailable or very inaccurate, or for redundancy. It is required for tasks like loop closure detection in SLAM, candidate selection for global localization, or multi-robot/multi-session mapping and map merging. In contrast to pure image retrieval, visual place recognition can often build upon additional information and data to improve performance, runtime, or memory usage. This includes additional data-inherent knowledge: information that is contained in the image sets themselves because of the way they were recorded. Using data-inherent knowledge avoids dependency on other sensors, which increases the generality of the methods and eases their integration into many existing place recognition pipelines. This thesis focuses on the usage of such additional data-inherent knowledge. After a discussion of the basics of visual place recognition, the thesis gives a systematic overview of existing data-inherent knowledge and corresponding methods. Subsequently, it concentrates on a deeper consideration and exploitation of four types of additional data-inherent knowledge: 1) sequences, i.e., the database and query set are recorded as spatio-temporal sequences so that consecutive images are also adjacent in the world; 2) knowledge of whether the environmental conditions within the database and query set are constant or continuously changing; 3) intra-database similarities between the database images; and 4) intra-query similarities between the query images. Except for sequences, all of these have received only little attention in the literature so far. For the exploitation of knowledge about constant conditions within the database and query set (e.g., database: summer, query: winter), the thesis evaluates different descriptor standardization techniques. For the alternative scenario of continuous condition changes (e.g., database: sunny to rainy, query: sunny to cloudy), the thesis first investigates the qualitative and quantitative impact on the performance of image descriptors; it then proposes and evaluates four unsupervised learning methods, including our novel clustering-based descriptor standardization method K-STD and three PCA-based methods from the literature. To address the high computational effort of descriptor comparisons during place recognition, our novel method EPR for efficient place recognition is proposed: given a query descriptor, EPR uses sequence information and intra-database similarities to identify nearly all matching descriptors in the database. For a structured combination of several sources of additional knowledge in a single graph, the thesis presents our novel graphical framework for place recognition; after minimization of the graph's error with our proposed ICM-based optimization, the place recognition performance can be improved significantly. For an extensive experimental evaluation of all methods in this thesis and beyond, a benchmark for visual place recognition in changing environments is presented, composed of six datasets with thirty sequence combinations.
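As a flavour of the descriptor standardization mentioned above for constant conditions within database and query set, here is a minimal sketch: plain feature-wise standardization of each set, followed by cosine similarities. The thesis's K-STD method is a clustering-based refinement not shown here, and all names are illustrative:

```python
import numpy as np

def standardize(descriptors):
    """Feature-wise standardization of a descriptor matrix (rows = images).
    Standardizing each set separately can remove condition-induced offsets
    that shift all descriptors of one set, e.g. all winter images, similarly."""
    mu = descriptors.mean(axis=0, keepdims=True)
    sigma = descriptors.std(axis=0, keepdims=True) + 1e-12
    return (descriptors - mu) / sigma

def similarity_matrix(db_desc, q_desc):
    """Cosine similarity between every database and query descriptor."""
    db = db_desc / np.linalg.norm(db_desc, axis=1, keepdims=True)
    q = q_desc / np.linalg.norm(q_desc, axis=1, keepdims=True)
    return db @ q.T

rng = np.random.default_rng(0)
db, q = rng.normal(size=(100, 64)), rng.normal(size=(30, 64))
S = similarity_matrix(standardize(db), standardize(q))
print(S.shape)  # (100, 30); place matches = argmax over the database axis
```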
278

MEMS-Laser-Display-System: Analyse, Implementierung und Testverfahrenentwicklung / MEMS Laser Display System: Analysis, Implementation, and Test Method Development

Specht, Hendrik 20 May 2011 (has links)
In this thesis, the system aspects of MEMS-scanner-based laser display technology related to beam deflection are analysed theoretically, and from the results a laser display system is implemented in practice as a test platform. Two variants are realized: one approach based on two 1D scanners and another using a single 2D scanner. In addition, an image-based multi-parameter test method is developed that is suitable both for testing completed beam deflection units and projection modules and for comprehensive, time-efficient wafer-level testing of MEMS scanners. This method is used to characterize the two realized variants of the laser display. Starting from the properties of the human visual system and the resulting requirements on the image, together with a system-theoretical analysis of the mechanical behaviour of MEMS scanners, a key focus is the generation of drive signals for the resonant operation of the fast axis and the quasistatic operation of the slow axis. Besides the purely digital controller and filter design and several linearization measures, this also includes the derivation of an FPGA-based video signal processing chain for converting scan pattern, timing regime, and resolution, with corresponding synchronization of beam deflection and laser modulation. Based on the resulting insights into the relationship between scanner/system parameters and image parameters, combinations of test images and image processing algorithms are developed and, arranged in a sequence and complemented by a calibration procedure, completed into a test method for MEMS scanners. The results of this work originated in industrially commissioned R&D projects and feed into the client's ongoing continuation of the topic.
