1

A novel approach to restoration of Poissonian images

Shaked, Elad 09 February 2010 (has links)
The reconstruction of digital images from degraded measurements is a problem of central importance in many fields of engineering and the imaging sciences. The degradation is typically caused by the resolution limitations of the imaging device in use and/or by the destructive influence of measurement noise. Specifically, when the noise obeys a Poisson probability law, standard approaches to image reconstruction rely on fixed-point algorithms following the methodology proposed by Richardson and Lucy in the early 1970s. In practice, however, the convergence of such methods tends to deteriorate at relatively high noise levels, as typically occur in so-called low-count settings. This work introduces a novel method for de-noising and/or de-blurring of digital images corrupted by Poisson noise. The proposed method is derived within the framework of maximum-a-posteriori (MAP) estimation, under the assumption that the image of interest can be sparsely represented in the domain of a properly designed linear transform. On this basis, a shrinkage-based iterative procedure is proposed that is guaranteed to maximize the associated MAP criterion. A series of both computer-simulated and real-life experiments shows that the proposed method outperforms a number of existing alternatives in terms of stability, precision, and computational efficiency.
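To make the general idea concrete: a minimal sketch of such a shrinkage-based MAP iteration might alternate a gradient step on the Poisson negative log-likelihood with soft-thresholding in a fixed linear transform domain. This is not the thesis algorithm; the choice of transform (a 2-D DCT), the step size, and the threshold are illustrative assumptions.

```python
# Minimal sketch of a shrinkage-based MAP iteration for Poisson denoising.
# NOT the thesis method: the transform (2-D DCT), step size, and threshold
# are illustrative assumptions only.
import numpy as np
from scipy.fft import dctn, idctn

def soft_threshold(c, t):
    # Shrinkage operator promoting sparsity of the transform coefficients.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def poisson_denoise(y, n_iter=100, step=0.5, lam=0.05):
    # y: non-negative photon-count image.
    x = np.clip(y.astype(float), 1e-6, None)    # positive initial estimate
    for _ in range(n_iter):
        grad = 1.0 - y / x                      # gradient of Poisson neg-log-likelihood
        z = np.clip(x - step * grad, 1e-6, None)
        c = soft_threshold(dctn(z, norm="ortho"), step * lam)  # sparsity prior
        x = np.clip(idctn(c, norm="ortho"), 1e-6, None)
    return x

# Toy usage: denoise a Poisson-corrupted intensity ramp.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(5, 50, 64), (64, 1))
noisy = rng.poisson(clean).astype(float)
restored = poisson_denoise(noisy)
```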
2

Accuracy and variability of item parameter estimates from marginal maximum a posteriori estimation and Bayesian inference via Gibbs samplers

Wu, Yi-Fang 01 August 2015 (has links)
Item response theory (IRT) comprises a family of statistical models for estimating stable characteristics of items and examinees and for describing how these characteristics interact to produce item and test performance. Focusing on the three-parameter logistic (3PL) IRT model (Birnbaum, 1968; Lord, 1980), the current study examines the accuracy and variability of item parameter estimates obtained from marginal maximum a posteriori estimation via an expectation-maximization algorithm (MMAP/EM) and from a Markov chain Monte Carlo Gibbs sampling (MCMC/GS) approach. The factors that affect the accuracy and variability of the item parameter estimates are discussed and then evaluated through a large-scale simulation. The factors of interest include the composition and length of tests, the distribution of the underlying latent traits, sample size, and the prior distributions of the discrimination, difficulty, and pseudo-guessing parameters. The results of the two estimation methods are compared to determine the lower limit, in terms of test length, sample size, test characteristics, and prior distributions of item parameters, at which the methods can satisfactorily recover item parameters and function efficiently in practice. For practitioners, the results help to define limits on the appropriate use of BILOG-MG (which implements MMAP/EM) and to assess the utility of OpenBUGS (which carries out MCMC/GS) for item parameter estimation.
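For reference, the 3PL model gives the probability of a correct response as P(theta) = c + (1 - c) / (1 + exp(-a(theta - b))), with ability theta, discrimination a, difficulty b, and pseudo-guessing lower asymptote c. A short sketch of the model and its response log-likelihood follows; the parameter values are illustrative, not from the study.

```python
import numpy as np

def p_3pl(theta, a, b, c):
    # Three-parameter logistic model (Birnbaum, 1968): probability of a
    # correct response given ability theta, discrimination a, difficulty b,
    # and pseudo-guessing lower asymptote c.
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def log_likelihood(theta, a, b, c, u):
    # Log-likelihood of a 0/1 response vector u for one examinee.
    p = p_3pl(theta, a, b, c)
    return np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))

# Illustrative values: a 5-item test and an examinee of average ability.
a = np.array([1.2, 0.8, 1.5, 1.0, 0.6])    # discrimination
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # difficulty
c = np.array([0.2, 0.2, 0.25, 0.2, 0.2])   # pseudo-guessing
u = np.array([1, 1, 1, 0, 0])              # observed responses
print(log_likelihood(theta=0.0, a=a, b=b, c=c, u=u))
```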
3

The Stixel World

Pfeiffer, David 31 August 2012 (has links)
The Stixel World is a novel and versatile medium-level representation that efficiently bridges the gap between pixel-based processing and high-level vision. Modern stereo matching schemes make it possible to obtain a depth measurement for almost every pixel of an image in real time, enabling the application of new and powerful algorithms. However, this also produces a large amount of measurement data that has to be processed and evaluated. In vision-based driver assistance, these algorithms run on highly integrated low-power processing units that leave no room for computationally intensive approaches. At the same time, the growing number of independently executed vision tasks calls for new concepts to manage the resulting system complexity. These challenges are tackled by introducing a pre-processing step that extracts all required information in advance. Each Stixel approximates a part of an object along with its distance and height, segmenting the scene into free space and obstacles. The Stixel World is computed in a single unified optimization scheme that makes strong use of physically motivated a priori knowledge about our man-made three-dimensional environment. Relying on dynamic programming guarantees the globally optimal segmentation for the entire scene. Kalman filtering techniques are used to precisely estimate the motion state of all tracked objects. Particular emphasis is put on a thorough performance evaluation: the comparative strategies include LIDAR, RADAR, and IMU reference sensors, manually created ground-truth data, and real-world tests.
Altogether, the Stixel World is ideally suited to serve as the basic building block for today's increasingly complex vision systems. It is an extremely compact abstraction of the actual world that gives access to the most essential information about the current scene. As a result of this work, the efficiency of subsequently executed vision algorithms and applications has improved significantly.
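For readers unfamiliar with the representation, each Stixel can be pictured as a thin, vertically oriented image segment carrying the attributes named in the abstract. A hypothetical minimal record is sketched below; the field names are illustrative, not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class Stixel:
    # Hypothetical, minimal Stixel record: one thin vertical column segment
    # of the image describing one object part (field names are my own).
    column: int        # image column (or column group) index
    v_top: int         # top image row of the segment
    v_base: int        # bottom image row, where the object meets the ground
    distance_m: float  # distance to the object part, in meters

# A scene then becomes a compact list of such segments instead of a dense
# per-pixel depth map, e.g. a few hundred stixels for a full image.
scene = [Stixel(column=320, v_top=180, v_base=400, distance_m=23.5)]
```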
