51

Analyzing the efficacy of the FFT in Image Denoising and Digital Watermarking

Fagerström, Emil January 2023 (has links)
The FFT has been a staple of mathematics and computer science for almost 60 years, and it still endures as an efficient algorithm in a multitude of fields. However, as significant technical advances have been made since its inception, the demands placed on methods grow constantly higher, and the FFT on its own is seldom enough to solve the problems of this day and age. So how does the FFT perform on its own by today's standards? This thesis uses the FFT to create two algorithms, an Image Denoising algorithm and a Digital Watermarking algorithm, and analyses their efficacy against today's standards. The results showed that the FFT on its own tackles problems competently; however, as the demands on the algorithms increased, the limitations of the FFT became apparent. This underscores the prevalent trend of integrating the FFT with other specialized methods, ensuring its continued relevance in an era of continuously advancing technological demands.
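As a rough illustration of the kind of frequency-domain denoising evaluated here, the sketch below low-pass filters a noisy image with NumPy's FFT. It is a minimal example of the general technique under an assumed circular cutoff, not the author's implementation; the cutoff fraction is an arbitrary choice.

```python
import numpy as np

def fft_lowpass_denoise(image, cutoff=0.15):
    """Suppress high-frequency noise by zeroing FFT coefficients
    outside a centered low-pass radius (fraction of image size)."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    # Distance of each frequency bin from the spectrum center.
    dist = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
    mask = dist <= cutoff * min(rows, cols)
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

# Usage: a clean gradient corrupted with Gaussian noise.
clean = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))
noisy = clean + np.random.default_rng(0).normal(0.0, 0.1, clean.shape)
denoised = fft_lowpass_denoise(noisy)
print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
print("denoised MSE:", np.mean((denoised - clean) ** 2))
```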
52

Statistical image modeling in the contourlet domain with application to texture segmentation

Long, Zhiling 15 December 2007 (has links)
The contourlet transform is an emerging multiscale, multidirectional image processing technique. It effectively represents the smooth curvature details typical of natural images, overcoming a major drawback of the 2-D wavelet transform. To further exploit its potential, in this research a statistical model, the contourlet contextual hidden Markov model (C-CHMM), has been developed to characterize contourlet images, together with a systematic mutual-information-based procedure for constructing an appropriate context for the model. With this contourlet image model, a multiscale segmentation method has also been established for application to texture images. The segmentation method combines a model-comparison approach with multiscale fusion and a multi-neighbor combination process, and features a neighborhood selection scheme based on a smoothed context map, used both for model estimation and for neighbor combination. The effectiveness of the image model has been verified through a series of denoising and segmentation experiments. As demonstrated by the denoising performance, the new model is more promising than the state of the art, the contourlet hidden Markov tree (C-HMT) model; it also shows better robustness against noise than the wavelet contextual hidden Markov model (W-CHMM), the other model compared in this work. Moreover, the new model demonstrates its superiority to the wavelet model in segmentation performance. The segmentation experiments prove the value of the systematic context construction procedure and validate the C-CHMM-based segmentation method: in comparison with state-of-the-art methods of the same type, the presented technique shows improved accuracy in segmenting texture patterns of a diversified nature. This success in segmentation further manifests the potential of the newly developed contourlet image model.
53

Online Denoising Solutions for Forecasting Applications

Khadivi, Pejman 08 September 2016 (has links)
Dealing with noisy time series is a crucial task in many data-driven real-time applications. Due to inaccuracies in data acquisition, time series suffer from noise and instability, which lead to inaccurate forecasting results. Therefore, to improve the performance of time series forecasting, an important pre-processing step is denoising the data before performing any action. In this research, we propose various approaches to tackle noisy time series in forecasting applications. For this purpose, we use different machine learning methods and information-theoretic approaches to develop online denoising algorithms. In this dissertation, we propose four categories of time series denoising methods that can be used in different situations, depending on the noise and time series properties. In the first category, a seasonal regression technique is proposed for denoising time series with seasonal behavior. In the second category, multiple discrete universal denoisers are developed that can be used for the online denoising of discrete-valued time series. In the third category, we develop a noisy channel reversal model based on the similarities between time series forecasting and data communication, and use that model to deploy out-of-band noise filtering in forecasting applications. The last category of proposed methods is deep-learning-based denoisers. We use information-theoretic concepts to analyze a general feed-forward deep neural network and to prove theoretical bounds on deep neural network behavior. Furthermore, we propose a denoising deep neural network method for the online denoising of time series. Real-world and synthetic time series are used for numerical experiments and performance evaluations. Experimental results show that the proposed methods can efficiently denoise time series and improve their quality. / Ph. D.
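As a hedged illustration of the first category, the sketch below denoises a seasonal series by least-squares regression onto a linear trend plus seasonal dummy variables, keeping the fitted curve as the denoised signal. It shows the general idea under simplifying assumptions (known period, linear trend, batch rather than online processing), not the dissertation's exact method.

```python
import numpy as np

def seasonal_regression_denoise(y, period):
    """Least-squares fit of a linear trend plus seasonal dummy
    variables; the fitted values serve as the denoised series."""
    n = len(y)
    t = np.arange(n)
    X = np.zeros((n, 1 + period))      # trend + one column per season slot
    X[:, 0] = t
    X[t, 1 + (t % period)] = 1.0       # seasonal dummies (absorb intercept)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ coef

# Usage: noisy series with weekly seasonality (period 7) and a slow trend.
rng = np.random.default_rng(0)
t = np.arange(200)
clean = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 7)
noisy = clean + rng.normal(0.0, 0.8, t.size)
denoised = seasonal_regression_denoise(noisy, period=7)
print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
print("denoised MSE:", np.mean((denoised - clean) ** 2))
```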
54

Applications of Fourier Transform and Wavelet Transform in ECG Signal Denoising

Falk, Jonathan January 2024 (has links)
Both the fast Fourier transform and the discrete wavelet transform have been used extensively for signal denoising, so comparing the two for the purpose of denoising an electrocardiogram is of high interest. In this report, we outline the theory that both methods are built on and develop MATLAB code able to denoise an electrocardiogram using each method. The discrete wavelet transform was shown to perform significantly better in this context, explaining why it is the preferred method for ECG denoising.
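A minimal sketch of the wavelet side of such a comparison, using PyWavelets rather than the report's MATLAB code; the wavelet name, decomposition level, and universal threshold are standard textbook choices, not taken from the report.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold detail coefficients with the universal
    threshold sigma * sqrt(2 log N), sigma estimated via MAD."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745  # noise std, finest scale
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Usage: a crude ECG-like train of sharp peaks plus Gaussian noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 4, 1024)
clean = np.exp(-((t % 1.0 - 0.5) ** 2) / 0.002)     # periodic narrow peaks
noisy = clean + rng.normal(0.0, 0.1, t.size)
print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
print("denoised MSE:", np.mean((dwt_denoise(noisy) - clean) ** 2))
```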
55

Denoising And Inpainting Of Images: A Transform Domain Based Approach

Gupta, Pradeep Kumar 07 1900 (has links)
Many scientific data sets are contaminated by noise, either because of the data acquisition process or because of naturally occurring phenomena. A first step in analyzing such data sets is denoising, i.e., removing additive noise from a noisy image. For images, noise suppression is a delicate and difficult task: a trade-off between noise reduction and the preservation of actual image features has to be made in a way that enhances the relevant image content. The opening chapter of this thesis is introductory in nature and discusses popular denoising techniques in the spatial and frequency domains.
The wavelet transform has wide applications in image processing, especially in denoising of images. Wavelet systems are a set of building blocks that represent a signal in an expansion set involving indices for time and scale, allowing multi-resolution representation of signals. Several well-known denoising algorithms in the wavelet domain penalize noisy coefficients by thresholding them. We discuss wavelet-transform-based denoising of images using bit planes. This approach preserves the edges in an image and relies on the fact that the wavelet transform allows the denoising strategy to adapt itself to the directional features of coefficients in the respective sub-bands. Further, issues related to low-complexity implementation of this algorithm are discussed. The proposed approach has been tested on different sets of images under different noise intensities. Studies have shown that it provides a significant reduction in normalized mean square error (NMSE), and the denoised images are visually pleasing.
Many image compression techniques still use the redundancy reduction property of the discrete cosine transform (DCT), so the development of a denoising algorithm in the DCT domain has practical significance. In chapter 3, a DCT-based denoising algorithm is presented. In general, the design of filters largely depends on a-priori knowledge about the type of noise corrupting the image and about image features, which makes standard filters application- and image-specific. The most popular filters, such as the average, Gaussian and Wiener filters, reduce noisy artifacts by smoothing, but this operation normally smooths the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image non-smooth. An integrated approach to designing filters based on the DCT is proposed in chapter 3. This algorithm reorganizes DCT coefficients in a wavelet-transform manner to obtain better energy clustering at desired spatial locations. An adaptive threshold is chosen because such adaptivity can improve wavelet-threshold performance, as it allows additional local information of the image to be incorporated into the algorithm. Evaluation results show that the proposed filter is robust under various noise distributions and does not require any a-priori knowledge about the image.
Inpainting is another image processing application: it provides a way to reconstruct small damaged portions of an image. Filling in missing data in digital images has a number of applications, such as image coding and wireless image transmission (recovering lost blocks), special effects (e.g., removal of objects) and image restoration (e.g., removal of solid lines, scratches and noise). In chapter 4, a wavelet-based inpainting algorithm is presented for reconstruction of small missing and damaged portions of an image while preserving the overall image quality. This approach exploits the directional features that exist in the wavelet coefficients of the respective sub-bands. The concluding chapter presents a brief review of the three new approaches: the wavelet- and DCT-based denoising schemes and the wavelet-based inpainting method.
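As a hedged sketch of DCT-domain denoising in the spirit of chapter 3, the snippet below hard-thresholds the 2-D DCT coefficients of a noisy image using SciPy. The global (block-free) DCT and the fixed threshold are simplifying assumptions, not the thesis's adaptive, wavelet-style reorganization scheme.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_threshold_denoise(image, sigma, k=3.0):
    """Hard-threshold global 2-D DCT coefficients: for an orthonormal
    transform of AWGN, coefficient noise std equals sigma, so entries
    below k * sigma are presumed noise and zeroed."""
    coeffs = dctn(image, norm="ortho")
    dc = coeffs[0, 0]                        # preserve mean intensity
    coeffs[np.abs(coeffs) < k * sigma] = 0.0
    coeffs[0, 0] = dc
    return idctn(coeffs, norm="ortho")

# Usage: a smooth test image plus Gaussian noise of known std.
rng = np.random.default_rng(2)
clean = np.outer(np.hanning(64), np.hanning(64))
noisy = clean + rng.normal(0.0, 0.05, clean.shape)
denoised = dct_threshold_denoise(noisy, sigma=0.05)
print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
print("denoised MSE:", np.mean((denoised - clean) ** 2))
```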
56

[en] LOW RATE CODECS OPERATING IN NOISY ENVIRONMENT AND IP NETWORKS / [pt] CODIFICADORES DE VOZ A BAIXAS TAXAS OPERANDO EM AMBIENTES RUIDOSOS E REDES IP

FRED BERKOWICZ BORGES 19 April 2005 (has links)
[en] This work investigates the impact of LSF vector quantisation on voice quality in low-rate codecs operating in IP networks and in several noisy environments. Tree-structured multistage vector quantisation (VQ) schemes are considered, involving memoryless VQ and switched-predictive VQ with 2 and 4 classes. The frame-loss distribution in IP networks was modelled according to the Gilbert model, and the performance evaluation was carried out both in terms of spectral distortion and of the speech quality at the output of low-rate codecs. This work also evaluated the quality of the coded speech after employing a wavelet-based noise suppression technique (wavelet denoising).
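A minimal sketch of the two-state Gilbert loss model used to simulate frame loss: a Markov chain alternates between a good state (no loss) and a bad state (loss), producing the bursty loss patterns typical of IP networks. The transition probabilities below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def gilbert_loss_pattern(n_frames, p_gb=0.05, p_bg=0.4, rng=None):
    """Two-state Gilbert model: p_gb = P(good -> bad),
    p_bg = P(bad -> good). Returns True where a frame is lost."""
    rng = rng or np.random.default_rng()
    lost = np.zeros(n_frames, dtype=bool)
    bad = False
    for i in range(n_frames):
        # Stay bad with prob 1 - p_bg; leave good with prob p_gb.
        bad = rng.random() < (1 - p_bg if bad else p_gb)
        lost[i] = bad
    return lost

# Usage: long-run loss rate should approach p_gb / (p_gb + p_bg) ~ 11%.
pattern = gilbert_loss_pattern(100_000, rng=np.random.default_rng(3))
print("simulated loss rate:", pattern.mean())
```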
57

On Maximizing The Performance Of The Bilateral Filter For Image Denoising

Kishan, Harini 03 1900 (has links) (PDF)
We address the problem of image denoising for additive white Gaussian noise (AWGN), Poisson noise, and Chi-squared noise scenarios. Thermal noise in the electronic circuitry of camera hardware can be modeled as AWGN. Poisson noise models the randomness associated with photon counting during image acquisition. Chi-squared noise statistics are appropriate in imaging modalities such as Magnetic Resonance Imaging (MRI). AWGN is additive, while Poisson noise is neither additive nor multiplicative; although Chi-squared noise is derived from AWGN statistics, it is non-additive. Mean-square error (MSE) is the most widely used metric to quantify denoising performance, and in parametric denoising approaches the optimal parameters of the denoising function are chosen by a minimum mean-square-error (MMSE) criterion. However, the dependence of the MSE on the noise-free signal makes MSE computation infeasible in practical scenarios. We circumvent the problem by adopting an MSE estimation approach. The ground-truth-independent estimates of MSE are Stein's unbiased risk estimate (SURE), the Poisson unbiased risk estimate (PURE) and the Chi-squared unbiased risk estimate (CURE) for the AWGN, Poisson and Chi-squared noise models, respectively. The denoising function is optimized to achieve maximum noise suppression by minimizing these MSE estimates. We choose the bilateral filter as the denoising function: a nonlinear, edge-preserving smoother whose performance is governed by the choice of its parameters, which can be optimized to minimize the MSE or its estimate. We derive SURE, PURE, and CURE in the context of bilateral filtering and compute the parameters of the bilateral filter that yield the minimum cost (SURE/PURE/CURE). On processing the noisy input with the bilateral filter whose optimal parameters are chosen by minimizing the MSE estimates, we obtain the estimate closest to the ground truth. We denote the bilateral filter with optimal parameters as the SURE-optimal bilateral filter (SOBF), PURE-optimal bilateral filter (POBF) and CURE-optimal bilateral filter (COBF) for the AWGN, Poisson and Chi-squared noise scenarios, respectively. In addition to the globally optimal bilateral filters (SOBF and POBF), we propose spatially adaptive variants, namely the SURE-optimal patch-based bilateral filter (SPBF) and the PURE-optimal patch-based bilateral filter (PPBF). SPBF and PPBF yield significant improvements in performance and preserve edges better than their globally optimal counterparts. We also propose the SURE-optimal multiresolution bilateral filter (SMBF), which couples SOBF with wavelet thresholding, and, for Poisson noise suppression, its Poisson counterpart, the PURE-optimal multiresolution bilateral filter (PMBF). We compare the performance of SMBF and PMBF with state-of-the-art denoising algorithms for AWGN and Poisson noise, respectively; the proposed multiresolution-based bilateral filtering techniques yield denoising performance competitive with the state of the art.
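The core idea, tuning a filter's parameters by minimizing a ground-truth-free MSE estimate, can be sketched with Monte Carlo SURE. The snippet below tunes the width of a Gaussian smoother (a simpler stand-in for the bilateral filter) under AWGN; the divergence term is estimated with a random probe, a standard trick rather than the thesis's closed-form derivations.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def monte_carlo_sure(noisy, denoise, sigma, eps=1e-3, rng=None):
    """SURE = ||f(y) - y||^2 / N - sigma^2 + (2 sigma^2 / N) div f(y),
    with the divergence estimated by a random-probe finite difference."""
    rng = rng or np.random.default_rng()
    n = noisy.size
    fy = denoise(noisy)
    b = rng.choice([-1.0, 1.0], size=noisy.shape)   # Rademacher probe
    div = np.sum(b * (denoise(noisy + eps * b) - fy)) / eps
    return np.mean((fy - noisy) ** 2) - sigma**2 + 2 * sigma**2 * div / n

# Usage: pick the Gaussian width minimizing SURE, no ground truth needed.
rng = np.random.default_rng(4)
clean = np.outer(np.hanning(128), np.hanning(128))
sigma = 0.1
noisy = clean + rng.normal(0.0, sigma, clean.shape)
widths = [0.5, 1.0, 2.0, 4.0, 8.0]
scores = [monte_carlo_sure(noisy, lambda y, w=w: gaussian_filter(y, w),
                           sigma, rng=rng) for w in widths]
best = widths[int(np.argmin(scores))]
print("SURE-optimal width:", best)
print("true MSE at best:  ",
      np.mean((gaussian_filter(noisy, best) - clean) ** 2))
```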
58

Improved Direction Of Arrival Estimation By Nonlinear Wavelet Denoising And Application To Source Localization In Ocean

Pramod, N C 12 1900 (has links) (PDF)
No description available.
59

Fully Unsupervised Image Denoising, Diversity Denoising and Image Segmentation with Limited Annotations

Prakash, Mangal 06 April 2022 (has links)
Understanding the processes of cellular development and the interplay of cell shape changes, division, and migration requires investigation of developmental processes at the spatial resolution of a single cell. Biomedical imaging experiments enable the study of dynamic processes as they occur in living organisms. While biomedical imaging is essential, a key component of exposing unknown biological phenomena is quantitative image analysis. Biomedical images, especially microscopy images, are usually noisy owing to practical limitations such as the available photon budget, sample sensitivity, etc. Additionally, microscopy images often contain artefacts due to optical aberrations in microscopes or imperfections in the camera sensor and internal electronics. The noisy nature of images as well as the artefacts prohibit accurate downstream analysis such as cell segmentation. Although countless approaches have been proposed for image denoising, artefact removal and segmentation, supervised Deep Learning (DL) based content-aware algorithms are currently the best performing for all these tasks. Supervised DL based methods are, however, plagued by practical limitations: supervised denoising and artefact removal algorithms require paired corrupted and high-quality images for training, and obtaining such image pairs can be very hard, even virtually impossible, in most biomedical imaging applications owing to photosensitivity and the dynamic nature of the samples being imaged. Similarly, supervised DL based segmentation methods need copious amounts of annotated data for training, which is often very expensive to obtain. Owing to these restrictions, it is imperative to look beyond supervised methods. The objective of this thesis is to develop novel unsupervised alternatives for image denoising and artefact removal, as well as semi-supervised approaches for image segmentation.
The first part of this thesis deals with unsupervised image denoising and artefact removal. For the unsupervised image denoising task, this thesis first introduces a probabilistic approach for training DL based methods using parametric models of imaging noise. Next, a novel unsupervised diversity denoising framework is presented which addresses the fundamentally non-unique inverse nature of image denoising by generating multiple plausible denoised solutions for any given noisy image. Finally, interesting properties of the diversity denoising methods are presented which make them suitable for unsupervised spatial artefact removal in microscopy and medical imaging applications.
In the second part of this thesis, the problem of cell/nucleus segmentation is addressed, with a focus on practical scenarios where ground-truth annotations for training DL based segmentation methods are scarcely available. Unsupervised denoising is used as an aid to improve segmentation performance in the presence of limited annotations: several training strategies are presented that leverage the representations learned by unsupervised denoising networks to enable better cell/nucleus segmentation in microscopy data. Apart from DL based segmentation methods, a proof-of-concept is introduced which views cell/nucleus segmentation from the perspective of solving a label fusion problem. This method, through limited human interaction, learns to choose the best possible segmentation for each cell/nucleus using only a pool of diverse (and possibly faulty) segmentation hypotheses as input.
In summary, this thesis seeks to introduce new unsupervised denoising and artefact removal methods as well as semi-supervised segmentation methods which can be easily deployed to directly and immediately benefit biomedical practitioners with their research.
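A hedged sketch of the blind-spot idea underlying much of this line of self-supervised denoising: hide a few pixels of a noisy image and train a network to predict them from their surroundings, so that only the structure shared with neighbors (the signal, not the pixel-wise noise) is learnable. The masking-only data preparation shown below is a generic Noise2Void-style illustration, not the thesis's probabilistic or diversity formulation.

```python
import numpy as np

def blind_spot_batch(noisy, n_masked=64, rng=None):
    """Build one self-supervised training pair from a single noisy
    image: an input with masked pixels replaced by random neighbors,
    and the original noisy values at those pixels as targets."""
    rng = rng or np.random.default_rng()
    h, w = noisy.shape
    ys = rng.integers(1, h - 1, n_masked)
    xs = rng.integers(1, w - 1, n_masked)
    inputs = noisy.copy()
    # Replace each masked pixel by a randomly chosen neighbor so the
    # network cannot trivially copy the (noisy) center value.
    # (For simplicity this sketch does not exclude the zero offset.)
    dy = rng.integers(-1, 2, n_masked)
    dx = rng.integers(-1, 2, n_masked)
    inputs[ys, xs] = noisy[ys + dy, xs + dx]
    targets = noisy[ys, xs]          # the loss is evaluated only here
    return inputs, (ys, xs), targets

# Usage: one training pair from a single noisy image.
rng = np.random.default_rng(5)
noisy = rng.normal(0.5, 0.1, (64, 64))
inp, coords, tgt = blind_spot_batch(noisy, rng=rng)
print("masked pixels:", tgt.shape[0])
```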
60

[en] A SELF-SUPERVISED METHOD FOR BLIND DENOISING OF SEISMIC SHOT GATHERS / [pt] UM MÉTODO AUTOSUPERVISIONADO PARA ATENUAÇÃO CEGA DE RUÍDOS DE SISMOGRAMAS

ANTONIO JOSE GRANDSON BUSSON 24 May 2022 (has links)
[en] In recent years, the geophysics community has been devoted to seismic data quality enhancement through noise attenuation and seismogram interpolation using CNN-based methods. Discriminative CNN-based methods can achieve state-of-the-art denoising results; however, they do not apply to scenarios without paired training data (i.e., noisy seismic data and corresponding ground-truth noise-free seismic data). In this work, we treat seismic data denoising as a blind denoising problem: removing unknown noise from noisy shot gathers without ground-truth training data, where the basis used by the denoiser model is learned from the noisy samples themselves during training. Motivated by this context, the main goal of this work is to propose a self-supervised method for blind denoising of seismic data that requires no prior seismic signal analysis, no estimate of the noise, and no paired training data. Our proposed self-supervised method assumes two given datasets: one containing noisy shot gathers and the other noise-free shot gathers. From this data, we train two models: (1) Seismic Noise Transfer (SNT), which learns to produce synthetic-noisy shot gathers containing the noise from the noisy shot gathers and the signal from the noise-free shot gathers; and (2) Seismic Neural Denoiser (SND), which learns to map the synthetic-noisy shot gathers back to the original noise-free shot gathers. After training, SND alone is used to remove the noise from the original noisy shot gathers. Our SNT model adapts the Neural Style Transfer (NST) algorithm to the seismic domain, and our SND model consists of a novel multi-scale feature-fusion-based CNN architecture for seismic shot gather denoising. Our method produced promising results in a holdout experiment, achieving a PSNR gain of 0.9 compared to other state-of-the-art models.
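A hedged sketch of the supervised half of this pipeline, stage (2): given synthetic-noisy/clean pairs (here random arrays standing in for shot gathers that an SNT-like model would produce), train a small CNN to map the noisy input back to the clean target. The tiny architecture and training settings are illustrative assumptions, not the multi-scale SND architecture described above.

```python
import torch
import torch.nn as nn

# Stand-ins for data an SNT-like model would supply: clean shot gathers
# and their synthetic-noisy counterparts (batch, 1 channel, H, W).
clean = torch.randn(16, 1, 64, 64)
synthetic_noisy = clean + 0.3 * torch.randn_like(clean)

# A deliberately tiny denoising CNN; the real SND is a multi-scale
# feature-fusion architecture.
denoiser = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):                 # map synthetic-noisy -> clean
    optimizer.zero_grad()
    loss = loss_fn(denoiser(synthetic_noisy), clean)
    loss.backward()
    optimizer.step()

# After training, the denoiser alone is applied to (real) noisy gathers.
with torch.no_grad():
    denoised = denoiser(synthetic_noisy)
print("final training loss:", float(loss))
```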
