261 |
Towards Dense Visual SLAM. Pietzsch, Tobias. 07 June 2011.
Visual Simultaneous Localisation and Mapping (SLAM) is concerned with simultaneously estimating the pose of a camera and a map of the environment from a sequence of images. Traditionally, sparse maps comprising isolated point features have been employed; these facilitate robust localisation but are not well suited to advanced applications. In this thesis, we present map representations that allow a denser description of the environment. In one approach, planar features are used to represent textured planar surfaces in the scene. This model is applied within a visual SLAM framework based on the Extended Kalman Filter. We present solutions to several challenges which arise from this approach.
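For orientation, the core of such a framework is the standard EKF measurement update; a minimal sketch in Python/NumPy follows, in which the state layout, measurement function and noise parameters are illustrative assumptions rather than the thesis' implementation.

    import numpy as np

    def ekf_update(x, P, z, h, H, R):
        """One generic EKF measurement update (illustrative sketch).
        x : state mean (e.g. camera pose plus map feature parameters)
        P : state covariance
        z : observed measurement (e.g. image projection of a feature)
        h : measurement function, h(x) -> predicted measurement
        H : Jacobian of h evaluated at x
        R : measurement noise covariance
        """
        y = z - h(x)                          # innovation
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x_new = x + K @ y                     # corrected state
        P_new = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
        return x_new, P_new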
|
262 |
Studies on two specific inverse problems from imaging and finance. Rückert, Nadja. 20 July 2012.
This thesis deals with regularization parameter selection methods in the context of Tikhonov-type regularization with Poisson-distributed data, in particular the reconstruction of images, as well as with the identification of the volatility surface from observed option prices.
In Part I we examine the choice of the regularization parameter when reconstructing an image corrupted by Poisson noise with Tikhonov-type regularization. This type of regularization is a generalization of classical Tikhonov regularization to the Banach space setting and is often called variational regularization. After a general consideration of Tikhonov-type regularization for data corrupted by Poisson noise, we examine the parameter choice methods numerically on the basis of two test images and real PET data.
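A common instance of such a Tikhonov-type functional for Poisson data minimizes the Kullback-Leibler divergence as data term plus a weighted penalty; the thesis' exact data term and penalty R may differ, so the following is a hedged illustration:

    u_\alpha \in \operatorname*{arg\,min}_{u \ge 0} \; D_{\mathrm{KL}}(g, Au) + \alpha\, R(u),
    \qquad
    D_{\mathrm{KL}}(g, v) = \int_\Omega \left( g \ln \frac{g}{v} - g + v \right) \mathrm{d}x,

where g denotes the observed (Poisson-distributed) data, A the forward operator, and \alpha > 0 the regularization parameter whose choice is the subject of Part I.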
In Part II we consider the estimation of the volatility function from observed call option prices using the explicit formula derived by Dupire from the Black-Scholes partial differential equation. The option prices are only available as discrete noisy observations, so the main difficulty is the ill-posedness of the numerical differentiation. Finite difference schemes, as regularization by discretization of this inverse and ill-posed problem, do not overcome these difficulties when used to evaluate the partial derivatives. We therefore construct an alternative algorithm based on the weak formulation of the dual Black-Scholes partial differential equation and evaluate the performance of the finite difference schemes and the new algorithm on synthetic and real option prices.
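For reference, the Dupire formula in its standard zero-dividend form (notation assumed here, not quoted from the thesis) expresses the local volatility directly through partial derivatives of the call price surface C(K, T):

    \sigma^2(K, T) = \frac{\dfrac{\partial C}{\partial T} + r K \dfrac{\partial C}{\partial K}}{\dfrac{1}{2} K^2 \dfrac{\partial^2 C}{\partial K^2}},

with strike K, maturity T and risk-free rate r; the second derivative in the denominator is precisely where the ill-posed numerical differentiation of noisy prices enters.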
|
264 |
Data Fusion for Multi-Sensor Nondestructive Detection of Surface Cracks in Ferromagnetic Materials. Heideklang, René. 28 November 2018.
Fatigue cracking is a dangerous and cost-intensive phenomenon that requires early detection. But at high test sensitivity, the abundance of false indications limits the reliability of conventional materials testing. This thesis therefore exploits the diversity of physical principles offered by different nondestructive surface inspection methods, applying data fusion techniques to increase the reliability of defect detection.
The first main contribution is a set of novel approaches for the fusion of NDT images. These surface scans are obtained from state-of-the-art inspection procedures in Eddy Current Testing, Thermal Testing and Magnetic Flux Leakage Testing. The implemented image fusion strategy demonstrates that simple algebraic fusion rules are sufficient for high performance, given adequate signal normalization. Data fusion reduces the rate of false positives by a factor of six compared with the best individual sensor at a 10 μm deep groove.
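To make the notion of an algebraic fusion rule concrete, here is a minimal sketch in Python/NumPy; the z-score normalization and the particular rule set are illustrative assumptions, not the exact pipeline of the thesis.

    import numpy as np

    def fuse_images(images, rule="mean"):
        """Fuse co-registered single-sensor scans into one detection image.
        The normalization and rules here are illustrative choices."""
        # z-score normalization per sensor image, so that simple
        # algebraic rules operate on comparable scales
        normed = [(im - im.mean()) / (im.std() + 1e-12) for im in images]
        stack = np.stack(normed)
        if rule == "mean":
            return stack.mean(axis=0)
        if rule == "max":
            return stack.max(axis=0)
        if rule == "product":
            return np.prod(np.abs(stack), axis=0)  # magnitude product
        raise ValueError(rule)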
Moreover, the utility of state-of-the-art image representations, such as the Shearlet domain, is explored. However, the theoretical advantages of such directional transforms are not attained in practice with the given data. Nevertheless, the benefit of fusion over single-sensor inspection is confirmed a second time.
Furthermore, this work proposes novel techniques for fusion at a high level of signal abstraction. A kernel-based approach is introduced to integrate spatially scattered detection hypotheses. This method explicitly deals with registration errors that are unavoidable in practice. Surface discontinuities as shallow as 30 μm are reliably found by fusion, whereas the best individual sensor requires depths of 40–50 μm for successful detection. The experiment is replicated on a similar second test specimen.
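A minimal sketch of this kernel idea in Python/NumPy (the Gaussian kernel, its bandwidth value, and the 2D hypothesis format are illustrative assumptions):

    import numpy as np

    def kde_fusion_score(detections, query_xy, bandwidth=0.5):
        """Pool detection hypotheses (x, y) from all sensors and score a
        query location by a Gaussian kernel density; the bandwidth absorbs
        registration errors between sensors (value here is an assumption)."""
        d = np.asarray(detections, dtype=float) - np.asarray(query_xy, dtype=float)
        sq = (d ** 2).sum(axis=1)                 # squared distances to query
        return float(np.exp(-sq / (2.0 * bandwidth ** 2)).sum())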
Practical guidelines are given at the end of the thesis, and the need for a data sharing initiative is stressed to promote future research on this topic.
|
265 |
Superpixels and their Application for Visual Place Recognition in Changing Environments. Neubert, Peer. 03 December 2015.
Superpixels are the result of an image oversegmentation. They are an established intermediate-level image representation used for various applications including object detection, 3D reconstruction and semantic segmentation. While there are various approaches to creating such segmentations, there is a lack of knowledge about their properties; in particular, contradictory results have been published in the literature. This thesis identifies segmentation quality, stability, compactness and runtime as important properties of superpixel segmentation algorithms. While established evaluation methodologies exist for some of these properties, this is not the case for segmentation stability and compactness. This thesis therefore presents two novel metrics for their evaluation based on ground-truth optical flow. These two metrics are used together with other novel and existing measures to create a standardized benchmark for superpixel algorithms, which is then used for an extensive comparison of available algorithms. The evaluation results motivate two novel segmentation algorithms that better balance the trade-offs of existing algorithms: the proposed Preemptive SLIC algorithm incorporates a local preemption criterion into the established SLIC algorithm and saves about 80 % of the runtime; the proposed Compact Watershed algorithm combines seeded watershed segmentation with compactness constraints to create regularly shaped, compact superpixels at the even higher speed of the plain watershed transform.
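As an illustration of the quality/compactness/runtime trade-off these algorithms negotiate, the baseline SLIC algorithm is available in scikit-image; Preemptive SLIC and Compact Watershed themselves are contributions of the thesis and are not part of that library.

    from skimage import data, segmentation

    img = data.astronaut()
    # Higher `compactness` yields more regular, grid-like superpixels at
    # the cost of boundary adherence, the trade-off benchmarked above.
    labels = segmentation.slic(img, n_segments=400, compactness=10)
    overlay = segmentation.mark_boundaries(img, labels)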
Operating autonomous systems based on visual navigation over the course of days, weeks or months requires repeated recognition of places despite severe appearance changes, induced for example by changing illumination, day-night cycles, weather or seasons - a severe problem for existing methods. The second part of this thesis therefore presents two novel approaches that incorporate superpixel segmentations into place recognition in changing environments. The first is the learning of systematic appearance changes: instead of matching images between, for example, summer and winter directly, an additional prediction step is proposed. Based on superpixel vocabularies, a predicted image is generated that shows how the summer scene might look in winter, or vice versa. The presented results show that, if certain assumptions on the appearance changes and the available training data are met, existing holistic place recognition approaches can benefit from this additional prediction step. Holistic approaches to place recognition are, however, known to fail in the presence of viewpoint changes. This thesis therefore presents a new place recognition system based on local landmarks and Star-Hough. Star-Hough is a novel approach for incorporating the spatial arrangement of local image features into the computation of image similarities. It is based on star graph models and Hough voting and is particularly suited for local features with low spatial precision and high outlier rates, as are expected in the presence of appearance changes. The novel landmarks combine local region detectors with descriptors based on convolutional neural networks. This thesis presents and evaluates several new approaches for incorporating superpixel segmentations into local region detection. While the proposed system can be used with different types of local regions, the combination with regions obtained from the novel multiscale superpixel grid in particular proves superior to state-of-the-art methods - a promising basis for practical applications.
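The following Python sketch illustrates the voting idea behind Star-Hough in a deliberately simplified, translation-only form; the actual method is based on star graph models and handles more general feature arrangements.

    import numpy as np

    def hough_similarity(matches, pos_a, pos_b, bins=20, span=200.0):
        """Translation-only Hough voting over matched local features
        (a strong simplification of Star-Hough): each match votes for the
        pixel offset between its positions in the two images; the maximum
        bin count serves as an outlier-robust image similarity."""
        votes = np.zeros((bins, bins))
        for ia, ib in matches:                     # index pairs into pos_a/pos_b
            dx, dy = pos_b[ib] - pos_a[ia]
            u = int((dx + span) / (2 * span) * (bins - 1))
            v = int((dy + span) / (2 * span) * (bins - 1))
            if 0 <= u < bins and 0 <= v < bins:
                votes[u, v] += 1                   # consistent offsets accumulate
        return votes.max()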
|
266 |
New Algorithms for Macromolecular Structure Determination / Neue Algorithmen zur Strukturbestimmung von Makromolekülen. Heisen, Burkhard Clemens. 08 September 2009.
No description available.
|
267 |
A Novel Approach for Spherical Stereo Vision / Ein Neuer Ansatz für Sphärisches Stereo Vision. Findeisen, Michel. 27 April 2015.
The Professorship of Digital Signal Processing and Circuit Technology of Chemnitz University of Technology conducts research in the field of three-dimensional space measurement with optical sensors. In recent years this field has made major progress.
For example, innovative active techniques such as the "structured light" principle are able to measure even homogeneous surfaces and are currently finding their way into the consumer electronics market in the form of Microsoft's Kinect®. Furthermore, high-resolution optical sensors enable powerful passive stereo vision systems in the field of indoor surveillance, opening up new application domains such as security and assistance systems for domestic environments.
However, the constrained field of view remains an essential limitation of all these technologies. For instance, measuring a volume the size of a living room currently requires two to three deployed 3D sensors. This is because the commonly used perspective projection principle constrains the visible area to a field of view of approximately 120°. Novel fish-eye lenses, by contrast, allow the realization of omnidirectional projection models, enlarging the visible field of view to more than 180°. Combined with a 3D measurement approach, the number of sensors required for full room coverage can thus be reduced considerably.
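The enlarged field of view is visible directly in the projection equations: assuming an equidistant fish-eye model (one common omnidirectional model; the thesis selects its own generic model), the image radius grows only linearly with the off-axis angle and stays finite beyond 90°, whereas the perspective model diverges.

    import numpy as np

    f = 1.0                                        # focal length (arbitrary units)
    theta_p = np.deg2rad(np.linspace(0, 85, 18))   # perspective: must stay < 90 deg
    theta_e = np.deg2rad(np.linspace(0, 100, 21))  # equidistant: valid beyond 90 deg
    r_perspective = f * np.tan(theta_p)            # diverges as theta -> 90 deg
    r_equidistant = f * theta_e                    # grows only linearly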
Motivated by the requirements of indoor surveillance, the present work focuses on combining the established stereo vision principle with omnidirectional projection methods. The major objective is the complete 3D measurement of a living space by means of one single sensor.
As a starting point, Chapter 1 discusses the underlying requirements with reference to various relevant fields of application, and on this basis states the specific purpose of the present work.
Chapter 2 then reviews the necessary mathematical foundations of computer vision. Based on the geometry of the optical imaging process, the projection characteristics of the relevant principles are discussed and a generic method for modeling fish-eye cameras is selected.
Chapter 3 deals with the extraction of depth information using classical (perspectively imaging) binocular stereo vision configurations. In addition to a complete recap of the processing chain, particular attention is paid to the measurement uncertainties that occur.
In the following, Chapter 4 addresses methods for converting between different projection models. The example of mapping an omnidirectional to a perspective projection is used to develop a method for accelerating this process and thereby reducing the associated computational load. The errors that occur, as well as the necessary adjustment of image resolution, are an integral part of the investigation. As a practical example, a person tracking application demonstrates to what extent the use of "virtual views" can increase the recognition rate of people detectors in the context of omnidirectional monitoring.
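A minimal sketch of such a model conversion, rendering a virtual perspective view from an equidistant fish-eye image with OpenCV; the camera models and parameters are illustrative assumptions, and the thesis' accelerated method differs.

    import numpy as np
    import cv2

    def perspective_from_fisheye(fish, f_fish, f_persp, out_size, cx, cy):
        """Render a 'virtual' perspective view from an equidistant fish-eye
        image (illustrative sketch; models and parameters assumed)."""
        w, h = out_size
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        # Rays through the virtual pinhole camera, looking along +z
        x = (u - w / 2) / f_persp
        y = (v - h / 2) / f_persp
        z = np.ones((h, w))
        theta = np.arctan2(np.sqrt(x * x + y * y), z)   # angle off optical axis
        phi = np.arctan2(y, x)                          # azimuth
        r = f_fish * theta                              # equidistant model
        map_x = (cx + r * np.cos(phi)).astype(np.float32)
        map_y = (cy + r * np.sin(phi)).astype(np.float32)
        return cv2.remap(fish, map_x, map_y, cv2.INTER_LINEAR)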
Subsequently, Chapter 5 conducts an extensive survey of omnidirectional stereo vision techniques. It turns out that the complete 3D capture of a room is achievable through the generation of a hemispherical depth map, for which three cameras have to be combined into a trinocular stereo vision system. A known trinocular stereo vision method is selected as a basis for further research. Furthermore, it is hypothesized that performance can be increased considerably by applying a modified geometric constellation of cameras, more precisely an equilateral triangle, and by using an alternative method to determine the depth map. A novel method is presented which requires fewer operations to calculate the distance information and avoids the computationally costly depth map fusion step necessary in the comparative method.
In order to evaluate the presented approach as well as the hypotheses, Chapter 6 generates a hemispherical depth map by means of the new method. Simulation results, based on artificially generated 3D space information and realistic system parameters, are presented and subjected to a subsequent error estimate.
A demonstrator for generating real measurement data is introduced in Chapter 7, together with the methods applied to calibrate the system intrinsically as well as extrinsically. It turns out that the calibration procedure used cannot estimate the extrinsic parameters sufficiently well. Initial measurements yield a hemispherical depth map and thus confirm that the concept works, but they also reveal the drawbacks of the calibration used. The current implementation of the algorithm shows almost real-time behaviour.
Finally, Chapter 8 summarizes the results obtained in the course of the studies and discusses them in the context of comparable binocular and trinocular stereo vision approaches. For example, the simulations show a saving of up to 30% in stereo correspondence operations compared with a reference trinocular method. Furthermore, the concept introduced avoids a weighted averaging step for depth map fusion based on precision values that would have to be computed at considerable cost. The achievable accuracy remains comparable for both trinocular approaches.
In summary, it can be stated that, in the context of the present thesis, a measurement system has been developed, which has great potential for future application fields in industry, security in public spaces as well as home environments.
|
268 |
Modelling cortical laminae with 7T magnetic resonance imaging. Wähnert, Miriam. 28 January 2015.
To fully understand how the brain works, it is necessary to relate the brain's function to its anatomy. Cortical anatomy is subject-specific. It is characterized by the thickness and number of intracortical layers, which differ from one cortical area to the next. Each cortical area fulfills a certain function. With magnetic resonance imaging (MRI) it is possible to study structure and function in-vivo within the same subject. The resolution of ultra-high field MRI at 7T makes it possible to resolve intracortical anatomy. This opens the possibility of relating the cortical function of a subject to its corresponding individual structural area, which is one of the main goals of neuroimaging.
To parcellate the cortex based on its intracortical structure in-vivo, images must first be quantitative and homogeneous so that they can be processed fully automatically. Moreover, the resolution has to be high enough to resolve intracortical layers. Therefore, the in-vivo MR images acquired for this work are quantitative T1 maps at 0.5 mm isotropic resolution.
Secondly, computational tools are needed to analyze the cortex observer-independently. The most recent tools designed for this task are presented in this thesis. They comprise the segmentation of the cortex and the construction of a novel equi-volume coordinate system of cortical depth. The equi-volume model is not restricted to in-vivo data, but is also used on ultra-high resolution post-mortem MRI data. It could likewise be used on 3D volumes reconstructed from 2D histological stains.
An equi-volume coordinate system firstly yields intracortical surfaces that follow anatomical layers all along the cortex, even within severely folded areas where previous models fail. MR intensities can be mapped onto these equi-volume surfaces to identify the location and size of some structural areas. Surfaces computed with previous coordinate systems are shown to cross into different anatomical layers and therefore to show artefactual patterns. Secondly, with the coordinate system one can compute cortical traverses perpendicular to the intracortical surfaces. Sampling intensities along equi-volume traverses yields cortical profiles that reflect an anatomical layer pattern specific to every structural area. It is shown that profiles constructed with previous coordinate systems of cortical depth disguise the anatomical layer pattern or even show a wrong pattern. In contrast to equi-volume profiles, profiles from previous models are not suited to analyzing the cortex observer-independently and hence cannot be used for automatic delineations of cortical areas.
Equi-volume profiles from four different structural areas are presented. These profiles show area-specific shapes that are to a certain degree preserved across subjects. Finally, the profiles are used to classify primary areas observer-independently.
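To make the equi-volume principle concrete, the following Python sketch solves for the depth of an equi-volume surface in a simplified local frustum model, in which the cross-sectional area interpolates linearly between the outer (pial) and inner (white matter) surface; this is a simplified reading of the idea, not the thesis' exact model.

    import numpy as np

    def equivolume_depth(A_outer, A_inner, alpha):
        """Depth x in [0, 1] (0 = pial, 1 = white matter) at which the
        cumulative volume fraction between the surfaces equals alpha,
        assuming the local area interpolates linearly with depth.
        Simplified illustration of the equi-volume principle."""
        # cumulative volume: V(x) = A_outer*x + (A_inner - A_outer)*x^2/2
        # total volume:      V(1) = (A_outer + A_inner)/2
        a = (A_inner - A_outer) / 2.0
        b = A_outer
        c = -alpha * (A_outer + A_inner) / 2.0
        if abs(a) < 1e-12:
            return -c / b                     # flat cortex: depth is linear
        return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

In a gyral crown (A_outer > A_inner) this places the surface deeper than the equidistant model would, and in a sulcal fundus shallower, which is why equi-volume surfaces can follow anatomical layers through folded cortex.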
|
269 |
Workshop Audiovisuelle Medien. Eibl, Maximilian; Kürsten, Jens; Ritter, Marc. 03 June 2009.
Audiovisual media confront archives with growing problems. A rapidly growing (web) TV market with broadcast and raw material, the increasing use of media-based teaching material in schools, universities and companies, the spread of video analysis as a research and teaching method, the proliferation of surveillance cameras, and ever cheaper production conditions, from professional producers down to home video, are only a few keywords outlining the new quantitative dimensions. Today's archival and documentation tools are overwhelmed by this situation.
The workshop attempts to outline problems and possible solutions and addresses the technological questions surrounding the archiving of audiovisual media, be they analogue, digitized or born-digital. On the one hand, it addresses the technological problems that must be overcome to build and manage an archive; on the other, it discusses practical use, from the design of the user interface to the question of how to handle critical material.
|
270 |
Schlussbericht zum InnoProfile Forschungsvorhaben sachsMedia - Cooperative Producing, Storage, Retrieval, and Distribution of Audiovisual Media (FKZ: 03IP608). Berger, Arne; Eibl, Maximilian; Heinich, Stephan; Knauf, Robert; Kürsten, Jens; Kurze, Albrecht; Rickert, Markus; Ritter, Marc. 29 September 2012.
Over the past 20 years, Saxony has seen the establishment of around 60 private regional television stations, more than any other German federal state. These stations often take on information-provision tasks that the public broadcasters fulfill only inadequately. The InnoProfile research project sachsMedia focused on the existential and multifaceted transition facing small and medium-sized enterprises in the field of regional media distribution. Particularly critical for the media industry was the switchover from analogue to digital television broadcasting in 2010. The sachsMedia research initiative took on the underlying problems and addressed fundamental research questions in the two thematic areas of annotation & retrieval and media distribution. This report summarizes the results achieved.
|