About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Automatic blind deconvolution

Ahmed, Alaa Eldin Abdel-Rehim January 1994 (has links)
No description available.
2

An investigation into spatially and temporally varying blind deconvolution

Kerry, Michael P. January 1999 (has links)
No description available.
3

Algorithms in image reconstruction from projections

Drossos, S. N. January 1984 (has links)
No description available.
4

Image Restoration in consideration of thermal noise

Zeng, Ping-Cheng 06 September 2007 (has links)
The Kalman filter has recently been applied with success to image restoration problems. In this thesis, we apply a Kalman filter to estimate the optical transfer function of an imaging system. The signal model is the optical transfer function obtained as the ratio of the degraded and clean pictures in the frequency domain. Thermal noise is introduced when sampling the optical image signal; we model it as additive measurement noise and remove it by Wiener filtering. The filtered image is then restored using the estimated optical transfer function. The experimental setup consists of a video camera, a capture card, and a personal computer. Experimental results, including the estimation of gamma and noise power, demonstrate that the estimated optical transfer function is useful for image restoration.
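A minimal sketch of the frequency-domain restoration step described above, assuming the optical transfer function `H` and a scalar noise-to-signal ratio `nsr` have already been estimated (here they are hypothetical inputs standing in for the Kalman-filter estimates the thesis derives):

```python
import numpy as np

def wiener_restore(degraded: np.ndarray, H: np.ndarray, nsr: float) -> np.ndarray:
    """Restore `degraded` given the OTF `H` (same shape, frequency domain)
    and a scalar noise-to-signal power ratio `nsr` (both assumed known)."""
    G = np.fft.fft2(degraded)
    # Classic Wiener deconvolution: conj(H) / (|H|^2 + NSR)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    restored = np.fft.ifft2(W * G).real
    return np.clip(restored, 0.0, 255.0)
```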
5

IMAGE RESTORATIONS USING DEEP LEARNING TECHNIQUES

Chi, Zhixiang January 2018 (has links)
Conventional methods for solving image restoration problems are typically built on an image degradation model and on some priors of the latent image. The degradation model and the prior knowledge of the latent image are necessary because restoration is an ill-posed inverse problem. However, for some applications, such as those addressed in this thesis, the image degradation process is too complex to model precisely; in addition, mathematical priors, such as low rank and sparsity of the image signal, are often too idealistic for real-world images. These difficulties limit the performance of existing image restoration algorithms, but they can be, to a certain extent, overcome by machine learning techniques, particularly deep convolutional neural networks. Machine learning allows large-sample statistics far beyond what is available in a single input image to be exploited. More importantly, large datasets can be used to train deep neural networks to learn the complex non-linear mapping between the degraded and original images. This circumvents the difficulty of building an explicit, realistic mathematical model when the degradation causes are complex and compounded. In this thesis, we design and implement deep convolutional neural networks (DCNNs) for two challenging image restoration problems: reflection removal and joint demosaicking-deblurring. The first problem is one of blind source separation; its DCNN solution requires a large set of paired clean and mixed images for training. As these paired training images are very difficult, if not impossible, to acquire in the real world, we develop a novel technique to synthesize training images that satisfactorily approximate real ones. For the joint demosaicking-deblurring problem, we propose a new multiscale DCNN architecture consisting of a cascade of subnetworks, so that the underlying blind deconvolution task can be broken into smaller subproblems and solved more effectively and robustly. In both cases, extensive experiments are carried out, and the results demonstrate clear advantages of the proposed DCNN methods over existing ones. / Thesis / Master of Applied Science (MASc)
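For illustration, a minimal PyTorch sketch of the coarse-to-fine cascade idea: each subnetwork solves a downsampled subproblem and the next refines it at a finer scale. The layer widths, two-stage depth, and residual form are assumptions for the sketch, not the thesis architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubNet(nn.Module):
    """One cascade stage: a small residual CNN at a single scale."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual: predict a correction to the input

class CascadeRestorer(nn.Module):
    """Coarse-to-fine cascade: restore at half resolution first,
    then upsample and refine at full resolution."""
    def __init__(self):
        super().__init__()
        self.coarse, self.fine = SubNet(), SubNet()

    def forward(self, x):
        low = F.interpolate(x, scale_factor=0.5, mode="bilinear", align_corners=False)
        low = self.coarse(low)
        up = F.interpolate(low, size=x.shape[-2:], mode="bilinear", align_corners=False)
        return self.fine(up)
```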
6

Superresolution techniques for passive millimetre wave images

Rollason, Malcolm January 1999 (has links)
No description available.
7

Image Restoration for Multiplicative Noise with Unknown Parameters

Chen, Ren-Chi 28 July 2006 (has links)
First, we study a Poisson model of a polluted random screen. In this model, the defects on the screen are assumed to be Poisson-distributed and may overlap, with the transmittance effects of overlapping defects combining multiplicatively. The autocorrelation function of the screen can be computed from the defects' density, radius, and transmittance. Using this autocorrelation function, we then restore astronomical telescope images, whose signals are generally degraded by random scattering in the atmosphere during propagation. To restore the images, we estimate the three key parameters by three methods: the expectation-maximization (EM) method and two maximum-entropy (ME) methods based on two different definitions. The restorations are successful, as demonstrated in this thesis.
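As an illustration of the screen model, the sketch below computes one standard closed form for the autocorrelation of a multiplicative Poisson random screen with disk-shaped defects, obtained by splitting the Poisson counts over the overlap geometry of two disks. It is a hedged reconstruction from the model description, not necessarily the thesis's exact expression.

```python
import numpy as np

def disk_overlap_area(d: float, r: float) -> float:
    """Area common to two disks of radius r whose centres are d apart."""
    if d >= 2 * r:
        return 0.0
    return 2 * r**2 * np.arccos(d / (2 * r)) - 0.5 * d * np.sqrt(4 * r**2 - d**2)

def screen_autocorrelation(d: float, lam: float, r: float, t: float) -> float:
    """E[S(x) S(x+d)] for a screen S built from Poisson defects of density lam,
    radius r, and per-defect multiplicative transmittance t."""
    A = np.pi * r**2
    Aov = disk_overlap_area(d, r)
    # Defects covering only one of the two points contribute a factor (1 - t)
    # each; defects covering both points contribute (1 - t**2), since both
    # samples pick up the same factor t there.
    return np.exp(-lam * (2 * (A - Aov) * (1 - t) + Aov * (1 - t**2)))
```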
8

Design of an Automated Book Reader as an Assistive Technology for Blind Persons

Wang, Lu 13 November 2007 (has links)
This dissertation introduces a novel automated book reader as an assistive technology tool for persons with blindness. The literature shows extensive work in the area of optical character recognition, but the current methodologies available for the automated reading of books or bound volumes remain inadequate and are severely constrained during document scanning or image acquisition processes. The goal of the book reader design is to automate and simplify the task of reading a book while providing a user-friendly environment with a realistic but affordable system design. This design responds to the main concerns of (a) providing a method of image acquisition that maintains the integrity of the source, (b) overcoming optical character recognition errors created by inherent imaging issues such as curvature effects and barrel distortion, and (c) determining a suitable method for accurate recognition of characters that yields an interface with the ability to read from any open book with a high reading accuracy nearing 98%. The initial aim of this research endeavor is the development of an assistive technology tool to help persons with blindness in the reading of books and other bound volumes. Its secondary and broader aim is to provide in this design a suitable platform for the digitization of bound documentation, in line with the mission of the Open Content Alliance (OCA), a nonprofit alliance aimed at making reading materials available in digital form. The theoretical perspective of this research relates to the mathematical developments made to resolve both the inherent distortions due to the properties of the camera lens and the anticipated distortions of the changing page curvature as one leafs through the book. This is evidenced by the significant increase in the character recognition rate and a high-accuracy read-out through text-to-speech processing. This reasonably priced interface, with its high performance and its compatibility with any computer or laptop through universal serial bus connectors, greatly extends the prospects for universal accessibility to documentation.
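To illustrate the lens-distortion step, here is a hedged sketch of barrel distortion correction using the common single-coefficient radial model; the coefficient `k1`, the grayscale assumption, and the use of scipy resampling are illustrative, not the dissertation's calibration method.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def undistort_barrel(img: np.ndarray, k1: float = -2e-7) -> np.ndarray:
    """Correct barrel distortion in a 2D grayscale image using the radial
    model r_d = r_u * (1 + k1 * r_u**2), with k1 < 0 for barrel distortion."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    dy, dx = yy - cy, xx - cx
    r2 = dx**2 + dy**2
    # For each undistorted output pixel, sample the distorted input
    # at the radially scaled position.
    scale = 1.0 + k1 * r2
    return map_coordinates(img, [cy + dy * scale, cx + dx * scale], order=1)
```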
9

Enhancement of noisy planar nuclear medicine images using mean field annealing

Falk, Daniyel Lennard 29 February 2008 (has links)
Nuclear Medicine (NM) images inherently suffer from large amounts of noise and blur. The purpose of this research is to reduce the noise and blur while maintaining image integrity for improved diagnosis. The proposal is to further improve image quality after the standard pre- and post-processing undertaken by a gamma camera system. Mean Field Annealing (MFA), the technique used in this research, is a well-known image processing approach. The MFA algorithm uses two techniques to achieve image restoration: gradient descent as the minimisation technique, and a deterministic approximation to Simulated Annealing (SA) for optimisation. The algorithm anisotropically diffuses an image, iteratively smoothing regions considered to be non-edges while preserving edge integrity, until a global minimum is obtained. A known advantage of MFA is that it is able to reach this global minimum, skipping over local minima, while still providing results comparable to SA with significantly less computational effort. Image blur is measured using either a point or a line source; both allow for the derivation of a Point Spread Function (PSF) that is used to de-blur the image. The noise variance can be measured using a flood source; the noise is due to random fluctuations in the environment as well as other contributors. Noisy, blurred NM images can be difficult to diagnose, particularly at regions with steep intensity gradients, and for this reason MFA is considered suitable for image restoration. From the literature it is evident that MFA can be applied successfully to digital phantom images, providing improved performance over Wiener filters. In this research, MFA is shown to yield enhancement of planar NM images by implementing a sharpening filter as a post-MFA processing technique.
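The edge-preserving diffusion at the heart of MFA can be illustrated with a classic Perona-Malik-style iteration, which smooths low-gradient (non-edge) regions while leaving steep edges largely intact. This sketch shows the idea only; it does not reproduce the thesis's MFA objective or annealing schedule.

```python
import numpy as np

def diffuse(img: np.ndarray, n_iter: int = 50, kappa: float = 20.0,
            step: float = 0.2) -> np.ndarray:
    """Edge-preserving diffusion of a 2D image; kappa sets the gradient
    magnitude treated as an edge, step <= 0.25 keeps the update stable."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # conductance: small at steep edges
    for _ in range(n_iter):
        # Finite differences toward the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Gradient-descent update: diffuse strongly where gradients are small
        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```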
10

Color Aware Neural ISP

Souza, Matheus 03 1900 (has links)
Image signal processors (ISPs) are historically grown legacy software systems for reconstructing color images from noisy raw sensor measurements. They are usually composed of many heuristic blocks for denoising, demosaicking, and color restoration. Color reproduction in this context is of particular importance, since the raw colors are often severely distorted, and each smartphone manufacturer has developed its own characteristic heuristics for improving the color rendition, for example of skin tones and other visually important colors. In recent years there has been strong interest in replacing these historically grown ISP systems with deep-learned pipelines, and much progress has been made in approximating legacy ISPs with such learned models. However, so far the focus of these efforts has been on reproducing the structural features of the images, with less attention paid to color rendition. Here we present Color Rendition ISP (CRISPnet), the first learned ISP model to specifically target color rendition accuracy relative to a complex, legacy smartphone ISP. We achieve this by utilizing image metadata (as a legacy ISP would), as well as by learning simple global semantics based on image classification, similar to what a legacy ISP does to determine the scene type. We also contribute a new ISP image dataset consisting of both high-dynamic-range monitor data and real-world data, both captured with an actual cell phone ISP pipeline under a variety of lighting conditions, exposure times, and gain settings.
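A minimal PyTorch sketch of the conditioning idea described above: a small restoration network whose features are modulated by a global vector built from capture metadata (e.g. gain, exposure) and image-level semantics. The FiLM-style modulation and the layer sizes are assumptions for the sketch, not CRISPnet's actual design.

```python
import torch
import torch.nn as nn

class ConditionedISP(nn.Module):
    def __init__(self, meta_dim: int = 8, ch: int = 32):
        super().__init__()
        self.head = nn.Conv2d(4, ch, 3, padding=1)   # packed 2x2 Bayer raw input
        self.film = nn.Linear(meta_dim, 2 * ch)      # per-channel scale and shift
        self.body = nn.Sequential(
            nn.ReLU(inplace=True), nn.Conv2d(ch, ch, 3, padding=1),
            nn.ReLU(inplace=True), nn.Conv2d(ch, 12, 3, padding=1),
        )
        self.up = nn.PixelShuffle(2)                 # 12 -> 3 channels, 2x spatial

    def forward(self, raw: torch.Tensor, meta: torch.Tensor) -> torch.Tensor:
        f = self.head(raw)
        # Modulate features with the global metadata/semantics vector
        scale, shift = self.film(meta).chunk(2, dim=1)
        f = f * scale[:, :, None, None] + shift[:, :, None, None]
        return self.up(self.body(f))                 # full-resolution RGB out
```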
