41

Mathematical approaches to digital color image denoising

Deng, Hao 14 September 2009 (has links)
Many mathematical models have been designed to remove noise from images. Most of them focus on grey-value images with additive artificial noise; only very few specifically target natural color photos taken by a digital camera with real noise. Noise in natural color photos has special characteristics that are substantially different from artificially added noise. In this thesis previous denoising models are reviewed. We analyze the strengths and weaknesses of existing denoising models by showing where they perform well and where they do not, with special focus on two models: the steering kernel regression model and the non-local model. For the kernel regression model, an adaptive bilateral filter is introduced as a complement to enhance it. A non-local bilateral filter is also proposed as an application of the idea of the non-local means filter. The idea of cross-channel denoising is then proposed in this thesis. It is effective in denoising monochromatic images by exploiting the characteristics of digital noise in natural color images. A non-traditional color space is also introduced specifically for this purpose. The cross-channel paradigm can be applied to most existing models to greatly improve their performance for denoising natural color images.
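The adaptive bilateral filter mentioned above weights neighboring pixels by both spatial distance and intensity difference, which is what lets it smooth noise while preserving edges. Below is a minimal NumPy sketch of a plain (non-adaptive) bilateral filter on a grayscale image; the parameter names sigma_s and sigma_r and the synthetic test image are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Denoise a 2D float image in [0, 1] with a naive bilateral filter."""
    pad = np.pad(img, radius, mode='reflect')
    out = np.zeros_like(img)
    # Precompute the spatial Gaussian weights for the window.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # Range weights penalize pixels whose intensity differs from the center pixel.
            rangew = np.exp(-((window - img[i, j])**2) / (2 * sigma_r**2))
            weights = spatial * rangew
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.tile(np.linspace(0, 1, 64), (64, 1))        # synthetic ramp image
    noisy = np.clip(clean + rng.normal(0, 0.05, clean.shape), 0, 1)
    denoised = bilateral_filter(noisy)
    print("noise std before:", np.std(noisy - clean))
    print("noise std after: ", np.std(denoised - clean))
```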
42

Denoising And Inpainting Of Images : A Transform Domain Based Approach

Gupta, Pradeep Kumar 07 1900 (has links)
Many scientific data sets are contaminated by noise, either because of the data acquisition process or because of naturally occurring phenomena. A first step in analyzing such data sets is denoising, i.e., removing additive noise from a noisy image. For images, noise suppression is a delicate and difficult task: a trade-off between noise reduction and the preservation of actual image features has to be made in a way that enhances the relevant image content. The opening chapter of this thesis is introductory in nature and discusses popular denoising techniques in the spatial and frequency domains. The wavelet transform has wide applications in image processing, especially in image denoising. Wavelet systems are a set of building blocks that represent a signal in an expansion set involving indices for time and scale, allowing a multi-resolution representation of signals. Several well-known denoising algorithms exist in the wavelet domain which penalize noisy coefficients by thresholding them. We discuss wavelet-transform-based denoising of images using bit planes. This approach preserves the edges in an image and relies on the fact that the wavelet transform allows the denoising strategy to adapt itself to the directional features of coefficients in the respective sub-bands. Further, issues related to a low-complexity implementation of this algorithm are discussed. The proposed approach has been tested on different sets of images under different noise intensities. Studies show that this approach provides a significant reduction in normalized mean square error (NMSE), and the denoised images are visually pleasing.

Many image compression techniques still use the redundancy-reduction property of the discrete cosine transform (DCT), so the development of a denoising algorithm in the DCT domain has practical significance. In chapter 3, a DCT-based denoising algorithm is presented. In general, the design of filters depends largely on a priori knowledge about the type of noise corrupting the image and about image features, which makes standard filters application- and image-specific. The most popular filters, such as the average, Gaussian and Wiener filters, reduce noisy artifacts by smoothing; however, this operation normally smooths the edges as well. On the other hand, sharpening filters enhance high-frequency details, making the image non-smooth. An integrated approach to designing filters based on the DCT is proposed in chapter 3. The algorithm reorganizes DCT coefficients in a wavelet-like manner to obtain better energy clustering at the desired spatial locations. An adaptive threshold is chosen because such adaptivity can improve threshold performance by allowing additional local information about the image to be incorporated into the algorithm. Evaluation results show that the proposed filter is robust under various noise distributions and does not require any a priori knowledge about the image.

Inpainting is another image processing application; it provides a way to reconstruct small damaged portions of an image. Filling in missing data in digital images has a number of applications, such as image coding and wireless image transmission for recovering lost blocks, special effects (e.g., removal of objects), and image restoration (e.g., removal of solid lines and scratches, and noise removal).
In chapter 4, a wavelet-based inpainting algorithm is presented for the reconstruction of small missing and damaged portions of an image while preserving the overall image quality. This approach exploits the directional features of wavelet coefficients in the respective sub-bands. The concluding chapter presents a brief review of the three new approaches: the wavelet- and DCT-based denoising schemes and the wavelet-based inpainting method.
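As a point of reference for the wavelet-domain penalization described above, the following sketch shows standard soft-threshold wavelet denoising (not the thesis's bit-plane/directional scheme), assuming the PyWavelets package is available; the MAD noise estimate and the universal threshold are common textbook choices, not the thesis's exact settings.

```python
import numpy as np
import pywt

def wavelet_denoise(noisy, wavelet='db4', level=3):
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    # Estimate the noise level from the finest diagonal sub-band (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(noisy.size))        # universal threshold
    new_coeffs = [coeffs[0]]                              # keep the approximation band untouched
    for (cH, cV, cD) in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(c, thr, mode='soft') for c in (cH, cV, cD)))
    return pywt.waverec2(new_coeffs, wavelet)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = np.zeros((128, 128)); clean[32:96, 32:96] = 1.0   # simple square test image
    noisy = clean + rng.normal(0, 0.1, clean.shape)
    out = wavelet_denoise(noisy)[:clean.shape[0], :clean.shape[1]]
    print("MSE before:", np.mean((noisy - clean)**2))
    print("MSE after: ", np.mean((out - clean)**2))
```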
43

Odšumování obrazu pomocí vážené lokální regrese / Image Denoising Using Weighted Local Regression

Šťasta, Jakub January 2017 (has links)
The problem of accurately simulating light transport using Monte Carlo integration can be very difficult. In particular, scenes with complex illumination effects or complex materials can converge very slowly and demand a lot of computational time. To overcome this problem, image denoising algorithms have become popular in recent years. In this work we first review known approaches to denoising and adaptive rendering. We implement one of the promising algorithms, by Moon et al. [2014], in the commercial rendering system Corona Standalone Renderer and evaluate its performance, strengths and weaknesses on 14 test scenes. These include rendering effects that are difficult to denoise and slow to converge, such as fine sub-pixel geometry, participating media, extreme depth of field of highlights, motion blur, and others. We propose corrections which make the algorithm more stable and robust. We show that it is possible to denoise renderings with weighted local regression using only a CPU. However, even after our corrections, it is not possible to filter scenes in a consistent manner without over-blurring or leaving regions unfiltered where filtering is desired.
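A toy illustration of the core filtering idea, weighted local (linear) regression, is sketched below: in each window a weighted plane is fitted to the pixel values and evaluated at the center. The full algorithm of Moon et al. [2014] also regresses against auxiliary feature buffers (normals, textures, depth), which this sketch omits; the window size and bandwidth are illustrative assumptions.

```python
import numpy as np

def wlr_filter(img, radius=3, sigma=1.5):
    """Filter a 2D image with weighted local linear regression (a plane per window)."""
    pad = np.pad(img, radius, mode='reflect')
    ax = np.arange(-radius, radius + 1, dtype=float)
    xx, yy = np.meshgrid(ax, ax)
    w = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)).ravel()          # Gaussian window weights
    # Design matrix for a plane: [1, x, y] per window pixel.
    X = np.stack([np.ones_like(xx).ravel(), xx.ravel(), yy.ravel()], axis=1)
    out = np.zeros_like(img)
    h, wid = img.shape
    for i in range(h):
        for j in range(wid):
            y = pad[i:i + 2*radius + 1, j:j + 2*radius + 1].ravel()
            # Weighted least squares: solve (X^T W X) beta = X^T W y.
            Xw = X * w[:, None]
            beta, *_ = np.linalg.lstsq(Xw.T @ X, Xw.T @ y, rcond=None)
            out[i, j] = beta[0]            # plane value at the window center (x = y = 0)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    clean = np.tile(np.linspace(0, 1, 48), (48, 1))
    noisy = clean + rng.normal(0, 0.1, clean.shape)     # stand-in for Monte Carlo noise
    print("residual std after filtering:", np.std(wlr_filter(noisy) - clean))
```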
44

Um algoritmo genético híbrido para supressão de ruídos em imagens / A hybrid genetic algorithm for image denoising

Jônatas Lopes de Paiva 01 December 2015 (has links)
Digital images are used for many purposes, ranging from a simple picture with friends to the identification of diseases in medical exams. Even though image acquisition technology has evolved, every digitally acquired image carries an intrinsic noise, normally introduced during the capture or transmission processes. The big challenge in this kind of problem is to recover the image while losing as few of its important features as possible, such as corners, edges and textures. This work proposes an approach based on a Hybrid Genetic Algorithm (HGA) to deal with this kind of problem. The HGA combines a genetic algorithm with some of the best image denoising methods found in the literature, using them as local search operators. The HGA was tested on benchmark images corrupted with zero-mean additive white Gaussian noise at several standard deviation levels. Its results, measured by the PSNR and SSIM metrics, were compared with the results obtained by different methods. The HGA was also tested on SAR (Synthetic Aperture Radar) images corrupted by multiplicative speckle noise, and its results were compared against methods specialized in restoring SAR images. Through this hybrid approach, the HGA obtained competitive results in both types of tests, and in many cases better results than the methods from the literature.
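The evaluation protocol described above (corrupt benchmark images with zero-mean additive white Gaussian noise at several standard deviations, then score restorations) can be sketched as follows; the sigma values and PSNR-only scoring are illustrative simplifications, with SSIM omitted for brevity.

```python
import numpy as np

def add_awgn(img, sigma, rng):
    """Corrupt an image with zero-mean additive white Gaussian noise."""
    return img + rng.normal(0.0, sigma, img.shape)

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    mse = np.mean((reference - test) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak**2 / mse)

rng = np.random.default_rng(42)
clean = np.tile(np.linspace(0, 1, 256), (256, 1))       # stand-in benchmark image
for sigma in (0.05, 0.1, 0.2):                          # several noise levels, as in the tests
    noisy = np.clip(add_awgn(clean, sigma, rng), 0, 1)
    print(f"sigma={sigma:.2f}  PSNR(noisy vs clean) = {psnr(clean, noisy):.2f} dB")
```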
45

Restauration d'images de noyaux cellulaires en microscopie 3D par l'introduction de connaissance a priori / Denoising 3D microscopy images of cell nuclei using shape priors

Bouyrie, Mathieu 29 November 2016 (has links)
This thesis addresses the restoration of 3D images of fluorescent cell nuclei acquired by 2-photon laser scanning microscopy of animals observed in vivo and in toto during their early embryonic development. Image degradation stems from the limitations of the optical system, the intrinsic noise of the detection system, and the absorption and scattering of light through the depth of the tissue. Unlike state-of-the-art denoising proposals, the method presented here takes the particularities of the biological data into account. It is a 3D adaptation of an algorithm so far applied to astronomical images and exploits prior knowledge about the studied images. The assumptions concern both the degradation of the signal, modeled as mixed Poisson-Gaussian (MPG) noise, and the nature of the observed objects: embryonic cell nuclei, which we assume to be quasi-spherical. The 3D implementation must take into account the dimensions of the image sampling grid. These dimensions are not identical in the three directions of space, and a spherical object sampled on such a grid loses this property. To adapt the method to this grid, we re-interpreted the filtering process at the core of the original theory as a physical diffusion process.
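A small sketch of the mixed Poisson-Gaussian (MPG) noise model assumed above: photon counting contributes Poisson noise scaled by a gain, and the detector adds Gaussian read noise. The gain, read-noise level, and the quasi-spherical 2D test object below are illustrative assumptions.

```python
import numpy as np

def simulate_mpg(signal, gain=0.1, read_sigma=0.02, rng=None):
    """signal: nonnegative array of expected intensities (e.g., in [0, 1])."""
    rng = rng or np.random.default_rng()
    photons = rng.poisson(signal / gain)                 # Poisson photon counts
    return gain * photons + rng.normal(0.0, read_sigma, signal.shape)

rng = np.random.default_rng(7)
nucleus = np.zeros((64, 64))
yy, xx = np.mgrid[:64, :64]
nucleus[(yy - 32)**2 + (xx - 32)**2 < 15**2] = 0.8        # quasi-spherical object (2D slice)
observed = simulate_mpg(nucleus, rng=rng)
print("observed mean inside / outside the nucleus:",
      observed[nucleus > 0].mean(), observed[nucleus == 0].mean())
```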
46

Waveletová analýza a zvýrazňování MR tomografických a ultrazvukových obrazů / Wavelet analysis and enhancement of MR tomography and ultrasound images

Matoušek, Luděk January 2008 (has links)
Tomographic MR (Magnetic Resonance) and sonographic biosignal processing are important non-invasive diagnostic methods used in medicine. Noise added to the processed data by the amplifier in the tomograph's receiving chain and by the circuits of the sonograph degrades the diagnostic value of the imaged organs. Image data are stored in the standardized DICOM medical file format. In this work, methods using wavelet analysis for noise suppression in image data have been designed and compared with classical methods. MATLAB was used for data processing and for writing the data back to the DICOM format.
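The read-denoise-write-back workflow described above can be sketched in Python with the pydicom package (the thesis itself used MATLAB); the file name and the Gaussian filter below stand in for the actual data and the wavelet method, and real DICOM round-tripping also has to respect tags such as BitsAllocated and any rescale attributes.

```python
import numpy as np
import pydicom
from scipy.ndimage import gaussian_filter

ds = pydicom.dcmread("mr_slice.dcm")                  # hypothetical input file
img = ds.pixel_array.astype(np.float64)

denoised = gaussian_filter(img, sigma=1.0)            # stand-in for the wavelet denoiser

# Write the result back into the same dataset, keeping the original integer type.
out = np.clip(np.rint(denoised), img.min(), img.max()).astype(ds.pixel_array.dtype)
ds.PixelData = out.tobytes()
ds.save_as("mr_slice_denoised.dcm")
```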
47

Moderní směrové způsoby reprezentace obrazů / Modern methods of directional image representation

Mucha, Martin January 2013 (has links)
Transformation methods describe an image in terms of defined shapes called bases or frames. With these shapes it is possible to transform the image via the computed transform coefficients and to work further with the transformed representation: the image can be denoised, reconstructed, and otherwise processed. There are several types of image transform methods, and this field has seen significant development. This study analyses the characteristics of individual well-known transform methods, such as the Fourier and wavelet transforms. For comparison, newer transform methods are also described: Ripplet, Curvelet, Surelet, Tetrolet, Contourlet and Shearlet. Functional toolboxes were used to compare the individual methods and their characteristics; these toolboxes were modified to allow limiting the transform coefficients for potential use in subsequent reconstruction.
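The coefficient limitation mentioned at the end, keeping only the largest transform coefficients and reconstructing, can be illustrated generically. The sketch below uses the 2D FFT as a stand-in transform, since the directional transforms compared in the thesis (ripplet, curvelet, shearlet, and so on) require dedicated toolboxes; the keep fraction is an illustrative parameter.

```python
import numpy as np

def keep_largest_coeffs(img, keep_fraction=0.05):
    """Zero all but the largest-magnitude transform coefficients, then reconstruct."""
    coeffs = np.fft.fft2(img)
    mags = np.abs(coeffs).ravel()
    k = max(1, int(keep_fraction * mags.size))
    thr = np.partition(mags, -k)[-k]                  # magnitude of the k-th largest coefficient
    limited = np.where(np.abs(coeffs) >= thr, coeffs, 0)
    return np.real(np.fft.ifft2(limited))

rng = np.random.default_rng(3)
img = rng.normal(size=(128, 128))
img[40:90, 40:90] += 4.0                              # a block the transform should capture
approx = keep_largest_coeffs(img, keep_fraction=0.02)
print("reconstruction RMSE:", np.sqrt(np.mean((img - approx)**2)))
```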
48

Odstranění šumu z obrazů kalibračních vzorků získaných elektronovým mikroskopem / Denoising of Images from Electron Microscope

Holub, Zbyněk January 2017 (has links)
This master's thesis focuses on removing noise from images acquired with a transmission electron microscope. It describes the principles of digitizing the resulting images and the individual noise components that arise during digitization; these unwanted components degrade the quality of the resulting image. Filtering methods based on total variation minimization were therefore chosen, and their principles are described in this work. Filtering with the non-local means filter was chosen as the reference method, since it is currently one of the most widely used and most effective approaches. The objective quality of the filtering was evaluated using the SNR, PSNR and SSIM criteria. The conclusion of the thesis presents all obtained results and discusses the effectiveness of the individual filtering methods.
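A minimal sketch of total-variation-based denoising, the family of methods evaluated above: explicit gradient descent on the (smoothed) ROF functional ||u - f||^2 / (2*lam) + TV(u). The step size, regularization weight, and periodic boundary handling are simplifying assumptions made for brevity.

```python
import numpy as np

def tv_denoise(noisy, lam=0.2, tau=0.05, eps=1e-8, n_iter=300):
    """Gradient descent on the smoothed ROF functional ||u - f||^2/(2*lam) + TV(u)."""
    u = noisy.copy()
    for _ in range(n_iter):
        # Forward differences (periodic boundaries keep the sketch short).
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        norm = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / norm, uy / norm
        # Backward-difference divergence of the normalized gradient field.
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u += tau * (div - (u - noisy) / lam)
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    clean = np.zeros((96, 96)); clean[24:72, 24:72] = 1.0
    noisy = clean + rng.normal(0, 0.15, clean.shape)
    out = tv_denoise(noisy)
    print("MSE noisy:", np.mean((noisy - clean)**2))
    print("MSE TV:   ", np.mean((out - clean)**2))
```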
49

MACHINE LEARNING-BASED ARTERIAL SPIN LABELING PERFUSION MRI SIGNAL PROCESSING

Xie, Danfeng January 2020 (has links)
Arterial spin labeling (ASL) perfusion Magnetic Resonance Imaging (MRI) is a noninvasive technique for measuring quantitative cerebral blood flow (CBF), but it suffers from an inherently low signal-to-noise ratio (SNR), which poses a big challenge for data processing. Traditional post-processing methods have been proposed to reduce artifacts, suppress non-local noise, and remove outliers. However, these methods are based on either implicit or explicit models of the data, which may not be accurate and may change across subjects. Deep learning (DL) is an emerging machine learning technique that can learn a transform function from acquired data without using any explicit hypothesis about that function. Such flexibility may be particularly beneficial for ASL denoising. In this dissertation, three different machine learning-based methods are proposed to improve the image quality of ASL MRI: 1) a learning-from-noise method, which does not require noise-free references for DL training, was proposed for DL-based ASL denoising and BOLD-to-ASL prediction; 2) a novel deep learning neural network that combines dilated convolution and wide activation residual blocks was proposed to improve the image quality of ASL CBF while reducing ASL acquisition time; 3) a prior-guided and slice-wise adaptive outlier cleaning algorithm was developed for ASL MRI.

In the first part of this dissertation, a learning-from-noise method is proposed for DL-based ASL denoising. The proposed method shows that DL-based ASL denoising models can be trained using only noisy image pairs, without any deliberate post-processing for obtaining a quasi-noise-free reference during the training process. This learning-from-noise method can also be applied to DL-based ASL perfusion prediction from BOLD fMRI, since the ASL references in this BOLD-to-ASL prediction are extremely noisy. Experimental results demonstrate that this learning-from-noise method can reliably denoise ASL MRI and predict ASL perfusion from BOLD fMRI, resulting in an improved signal-to-noise ratio (SNR) of ASL MRI. Moreover, more training data can be generated with this method, as it requires fewer samples to generate quasi-noise-free references, which is particularly useful when ASL CBF data are limited.

In the second part of this dissertation, we propose a novel deep learning neural network, the Dilated Wide Activation Network (DWAN), that is optimized for ASL denoising. Our method presents two novelties: first, we incorporated wide activation residual blocks into a dilated convolutional neural network to achieve improved denoising performance in terms of several quantitative and qualitative measurements; second, we evaluated our proposed model with different inputs and references to show that the denoising model generalizes to inputs with different SNR levels and yields images of better quality than other methods.

In the final part of this dissertation, a prior-guided and slice-wise adaptive outlier cleaning (PAOCSL) method is proposed to improve the original Adaptive Outlier Cleaning (AOC) method. Prior-information-guided reference CBF maps are used to avoid bias from extreme outliers in the early iterations of outlier cleaning, ensuring correct identification of the true outliers. Slice-wise outlier rejection is adopted to preserve slices with CBF values in a reasonable range even if they belong to outlier volumes.
Experimental results show that the proposed outlier cleaning method improves both CBF quantification quality and CBF measurement stability. / Electrical and Computer Engineering
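A heavily simplified, toy sketch of slice-wise adaptive outlier cleaning in the spirit described above: volumes that correlate poorly with a prior-guided reference CBF map are flagged, and within flagged volumes only slices with physiologically plausible mean CBF are kept, the rest falling back to the reference. Array shapes, thresholds and the reference map are illustrative assumptions, not the PAOCSL algorithm's actual parameters.

```python
import numpy as np

def clean_cbf_series(cbf, reference, corr_thr=0.8, cbf_range=(0.0, 150.0)):
    """cbf: (n_volumes, n_slices, h, w) CBF maps; reference: (n_slices, h, w) prior map."""
    kept = []
    for vol in cbf:
        r = np.corrcoef(vol.ravel(), reference.ravel())[0, 1]
        if r >= corr_thr:
            kept.append(vol)                            # volume agrees with the reference
            continue
        # Outlier volume: keep only slices whose mean CBF looks physiological,
        # and fall back to the reference for the rest.
        ok = np.array([cbf_range[0] < s.mean() < cbf_range[1] for s in vol])
        kept.append(np.where(ok[:, None, None], vol, reference))
    return np.mean(kept, axis=0)                        # cleaned mean CBF map

rng = np.random.default_rng(0)
reference = 60.0 + 5.0 * rng.standard_normal((16, 32, 32))       # illustrative prior CBF map
series = reference[None] + rng.normal(0, 2.0, (20, 16, 32, 32))
series[3, 5] += 500.0                                            # inject one extreme slice outlier
print("cleaned mean CBF:", round(float(clean_cbf_series(series, reference).mean()), 2))
```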
50

SSIM-Inspired Quality Assessment, Compression, and Processing for Visual Communications

Rehman, Abdul January 2013 (has links)
Objective Image and Video Quality Assessment (I/VQA) measures predict image/video quality as perceived by human beings - the ultimate consumers of visual data. Existing research in the area is mainly limited to benchmarking and monitoring of visual data. The use of I/VQA measures in the design and optimization of image/video processing algorithms and systems is more desirable, challenging and fruitful but has not been well explored. Among the recently proposed objective I/VQA approaches, the structural similarity (SSIM) index and its variants have emerged as promising measures that show superior performance as compared to the widely used mean squared error (MSE) and are computationally simple compared with other state-of-the-art perceptual quality measures. In addition, SSIM has a number of desirable mathematical properties for optimization tasks. The goal of this research is to break the tradition of using MSE as the optimization criterion for image and video processing algorithms. We tackle several important problems in visual communication applications by exploiting SSIM-inspired design and optimization to achieve significantly better performance. Firstly, the original SSIM is a Full-Reference IQA (FR-IQA) measure that requires access to the original reference image, making it impractical in many visual communication applications. We propose a general purpose Reduced-Reference IQA (RR-IQA) method that can estimate SSIM with high accuracy with the help of a small number of RR features extracted from the original image. Furthermore, we introduce and demonstrate the novel idea of partially repairing an image using RR features. Secondly, image processing algorithms such as image de-noising and image super-resolution are required at various stages of visual communication systems, starting from image acquisition to image display at the receiver. We incorporate SSIM into the framework of sparse signal representation and non-local means methods and demonstrate improved performance in image de-noising and super-resolution. Thirdly, we incorporate SSIM into the framework of perceptual video compression. We propose an SSIM-based rate-distortion optimization scheme and an SSIM-inspired divisive optimization method that transforms the DCT domain frame residuals to a perceptually uniform space. Both approaches demonstrate the potential to largely improve the rate-distortion performance of state-of-the-art video codecs. Finally, in real-world visual communications, it is a common experience that end-users receive video with significantly time-varying quality due to the variations in video content/complexity, codec configuration, and network conditions. How human visual quality of experience (QoE) changes with such time-varying video quality is not yet well-understood. We propose a quality adaptation model that is asymmetrically tuned to increasing and decreasing quality. The model improves upon the direct SSIM approach in predicting subjective perceptual experience of time-varying video quality.
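For reference, the SSIM index underlying this work combines luminance, contrast and structure comparisons. A single-window (global) version is sketched below, whereas the practical index is computed over local Gaussian-weighted windows and averaged; the constants follow the usual K1 = 0.01, K2 = 0.03 convention for a dynamic range L.

```python
import numpy as np

def global_ssim(x, y, L=1.0, K1=0.01, K2=0.03):
    """Single-window SSIM between two images with dynamic range L."""
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + C1) * (2*cov + C2)) / ((mx**2 + my**2 + C1) * (vx + vy + C2))

rng = np.random.default_rng(5)
ref = np.tile(np.linspace(0, 1, 64), (64, 1))
noisy = np.clip(ref + rng.normal(0, 0.1, ref.shape), 0, 1)
shifted = np.clip(ref + 0.2, 0, 1)
print("SSIM(ref, ref)     =", round(global_ssim(ref, ref), 4))
print("SSIM(ref, noisy)   =", round(global_ssim(ref, noisy), 4))
print("SSIM(ref, shifted) =", round(global_ssim(ref, shifted), 4))
```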
