171

Mathematical theory of the Flutter Shutter: its paradoxes and their solution

Tendero, Yohann 22 June 2012 (has links) (PDF)
This thesis provides theoretical and practical solutions to two problems raised by digital photography of moving scenes and by infrared photography. Until recently, photographing moving objects could only be done using short exposure times. Yet, two recent groundbreaking works have proposed new camera designs allowing arbitrary exposure times. The flutter shutter of Agrawal et al. creates an invertible motion blur by using a clever shutter technique to interrupt the photon flux during the exposure time according to a well-chosen binary sequence. The motion-invariant photography of Levin et al. obtains the same result by accelerating the camera at a constant rate. Both methods are instances of the new computational photography paradigm, in which camera design is rethought to include sophisticated digital processing. This thesis proposes a method for evaluating the image quality of these new cameras. The leitmotiv of the analysis is the SNR (signal-to-noise ratio) of the image after deconvolution, which quantifies the efficiency of these new camera designs in terms of image quality. The theory provides explicit formulas for the SNR. It raises two paradoxes of these cameras, and resolves them. It also provides the underlying motion model of each flutter shutter, including patented ones. A shorter second part addresses the main quality problem in infrared video imaging: non-uniformity. This perturbation is a time-dependent noise caused by the infrared sensor, structured in columns. The conclusion of this work is that it is not only possible but also efficient and robust to perform the correction on a single image. This makes it possible to ensure the absence of "ghost artifacts", a classic problem in the literature on the subject, arising from processing that is inadequate relative to the acquisition model.
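As a rough illustration of this SNR analysis, the sketch below (plain NumPy; the binary code is invented for illustration and is not Agrawal et al.'s published sequence) compares the deconvolution noise amplification of a coded exposure with that of an ordinary box exposure of the same length:

```python
import numpy as np

# Hypothetical binary shutter code -- purely illustrative; the published
# flutter-shutter sequences are longer and searched for a flat spectrum.
code = np.array([1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1], dtype=float)
box = np.ones_like(code)            # ordinary open shutter (box blur)

def noise_amplification(kernel, n_fft=256):
    """Mean noise amplification of direct deconvolution.

    For constant-velocity motion the blur kernel equals the exposure code,
    and inverting it multiplies the noise power at frequency f by
    1/|H(f)|^2; the average of that factor is a figure of merit closely
    related to the SNR formulas discussed above.
    """
    power = np.abs(np.fft.rfft(kernel, n_fft)) ** 2
    return np.mean(1.0 / np.maximum(power, 1e-12))

# The box blur's spectrum has an exact zero (at Nyquist for an even length),
# so its amplification explodes, while a well-chosen code keeps |H(f)|
# bounded away from zero and its amplification orders of magnitude smaller.
print("coded exposure:", noise_amplification(code))
print("box exposure  :", noise_amplification(box))
```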
172

SSIM-Inspired Quality Assessment, Compression, and Processing for Visual Communications

Rehman, Abdul January 2013 (has links)
Objective Image and Video Quality Assessment (I/VQA) measures predict image/video quality as perceived by human beings - the ultimate consumers of visual data. Existing research in the area is mainly limited to benchmarking and monitoring of visual data. The use of I/VQA measures in the design and optimization of image/video processing algorithms and systems is more desirable, challenging, and fruitful, but has not been well explored. Among the recently proposed objective I/VQA approaches, the structural similarity (SSIM) index and its variants have emerged as promising measures that show superior performance compared to the widely used mean squared error (MSE) while remaining computationally simple compared with other state-of-the-art perceptual quality measures. In addition, SSIM has a number of desirable mathematical properties for optimization tasks. The goal of this research is to break the tradition of using MSE as the optimization criterion for image and video processing algorithms. We tackle several important problems in visual communication applications by exploiting SSIM-inspired design and optimization to achieve significantly better performance. Firstly, the original SSIM is a full-reference IQA (FR-IQA) measure that requires access to the original reference image, making it impractical in many visual communication applications. We propose a general-purpose reduced-reference IQA (RR-IQA) method that can estimate SSIM with high accuracy with the help of a small number of RR features extracted from the original image. Furthermore, we introduce and demonstrate the novel idea of partially repairing an image using RR features. Secondly, image processing algorithms such as image denoising and image super-resolution are required at various stages of visual communication systems, from image acquisition to image display at the receiver. We incorporate SSIM into the frameworks of sparse signal representation and non-local means methods and demonstrate improved performance in image denoising and super-resolution. Thirdly, we incorporate SSIM into the framework of perceptual video compression. We propose an SSIM-based rate-distortion optimization scheme and an SSIM-inspired divisive normalization method that transforms the DCT-domain frame residuals into a perceptually uniform space. Both approaches demonstrate the potential to substantially improve the rate-distortion performance of state-of-the-art video codecs. Finally, in real-world visual communications, it is a common experience that end-users receive video with significantly time-varying quality due to variations in video content/complexity, codec configuration, and network conditions. How human visual quality of experience (QoE) changes with such time-varying video quality is not yet well understood. We propose a quality adaptation model that is asymmetrically tuned to increasing and decreasing quality. The model improves upon the direct SSIM approach in predicting the subjective perceptual experience of time-varying video quality.
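For reference, a minimal sketch of the SSIM index in its standard form is given below; it substitutes a uniform local window for the 11x11 Gaussian window of the original definition and uses the usual constants. It is the plain FR index, not the RR or video variants developed in the thesis:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_map(x, y, win=8, L=255.0, k1=0.01, k2=0.03):
    """Local SSIM map of two grayscale images (standard Wang et al. form)."""
    x, y = x.astype(float), y.astype(float)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = uniform_filter(x, win), uniform_filter(y, win)
    var_x = uniform_filter(x * x, win) - mu_x ** 2      # local variances
    var_y = uniform_filter(y * y, win) - mu_y ** 2
    cov_xy = uniform_filter(x * y, win) - mu_x * mu_y   # local covariance
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (64, 64))
noisy = img + rng.normal(0, 10, img.shape)
print("SSIM:", ssim_map(img, noisy).mean())   # 1.0 means identical images
```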
173

Contributions to Signal Processing for MRI

Björk, Marcus January 2015 (has links)
Magnetic Resonance Imaging (MRI) is an important diagnostic tool for imaging soft tissue without the use of ionizing radiation. Moreover, through advanced signal processing, MRI can provide more than just anatomical information, such as estimates of tissue-specific physical properties. Signal processing lies at the very core of the MRI process, which involves input design, information encoding, image reconstruction, and advanced filtering. Based on signal modeling and estimation, it is possible to further improve the images, reduce artifacts, mitigate noise, and obtain quantitative tissue information. In quantitative MRI, different physical quantities are estimated from a set of collected images. The optimization problems solved are typically nonlinear, and require intelligent and application-specific algorithms to avoid suboptimal local minima. This thesis presents several methods for efficiently solving different parameter estimation problems in MRI, such as multi-component T2 relaxometry, temporal phase correction of complex-valued data, and minimizing banding artifacts due to field inhomogeneity. The performance of the proposed algorithms is evaluated using both simulated and in-vivo data. The results show improvements over previous approaches, while maintaining a relatively low computational complexity. Using new and improved estimation methods enables better tissue characterization and diagnosis. Furthermore, a sequence design problem is treated, where the radio-frequency excitation is optimized to minimize image artifacts when using amplifiers of limited quality. In turn, obtaining higher-fidelity images enables improved diagnosis and can increase the estimation accuracy in quantitative MRI.
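As a hedged example of the kind of parameter estimation involved, the sketch below fits a single-component T2 decay by nonlinear least squares with simulated echo times and amplitudes; the thesis treats the much harder multi-component case, where the cost surface has local minima:

```python
import numpy as np
from scipy.optimize import curve_fit

def t2_decay(te, s0, t2):
    """Mono-exponential spin-echo model S(TE) = S0 * exp(-TE / T2)."""
    return s0 * np.exp(-te / t2)

# Simulated magnitudes at several echo times (ms). A sum of such
# exponentials (multi-component relaxometry) makes the least-squares
# cost non-convex, which is why specialized algorithms are needed.
te = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
rng = np.random.default_rng(1)
signal = t2_decay(te, 1000.0, 70.0) + rng.normal(0, 5, te.size)

(p_s0, p_t2), _ = curve_fit(t2_decay, te, signal, p0=[signal[0], 50.0])
print(f"estimated S0 = {p_s0:.1f}, T2 = {p_t2:.1f} ms")
```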
174

Urban Change Detection Using Multitemporal SAR Images

Yousif, Osama January 2015 (has links)
Multitemporal SAR images have been increasingly used for the detection of different types of environmental changes. The detection of urban changes using SAR images is complicated by the complex mixture of the urban environment and the special characteristics of SAR images, for example, the existence of speckle. This thesis investigates urban change detection using multitemporal SAR images with the following specific objectives: (1) to investigate unsupervised change detection, (2) to investigate effective methods for reducing the speckle effect in change detection, (3) to investigate spatio-contextual change detection, (4) to investigate object-based unsupervised change detection, and (5) to investigate a new technique for object-based change image generation. Beijing and Shanghai, the largest cities in China, were selected as study areas. Multitemporal SAR images acquired by the ERS-2 SAR and ENVISAT ASAR sensors were used for pixel-based change detection. For the object-based approaches, TerraSAR-X images were used. In Paper I, the unsupervised detection of urban change was investigated using the Kittler-Illingworth algorithm. A modified ratio operator that combines positive and negative changes was used to construct the change image. Four density function models were tested and compared. Among them, the log-normal and Nakagami ratio models achieved the best results. Despite the good performance of the algorithm, the obtained results generally suffer from the loss of fine geometric detail, a consequence of the use of local adaptive filters for speckle suppression. Paper II addresses this problem using the nonlocal means (NLM) denoising algorithm for speckle suppression and detail preservation. In this algorithm, denoising is achieved through a moving weighted average, where the weights are a function of the similarity of small image patches defined around each pixel in the image. To decrease the computational complexity, principal component analysis (PCA) was used to reduce the dimensionality of the neighbourhood feature vectors. Simple methods to estimate the number of significant PCA components to be retained for weight computation and the required noise variance were proposed. The experimental results showed that the NLM algorithm successfully suppressed speckle effects while preserving fine geometric detail in the scene. The analysis also indicates that filtering the change image, instead of the individual SAR images, was effective in terms of both the quality of the results and the computation time. The Markov random field (MRF) change detection algorithm showed limited capacity to simultaneously maintain fine geometric detail in urban areas and combat the effect of speckle. To overcome this problem, Paper III utilizes the NLM theory to define a nonlocal constraint on pixel class labels. The iterated conditional modes (ICM) scheme for the optimization of the MRF criterion function is extended to include a new step that maximizes the nonlocal probability model. Compared with the traditional MRF algorithm, the experimental results showed that the proposed algorithm was superior in preserving fine structural detail, effective in reducing the effect of speckle, less sensitive to the value of the contextual parameter, and less affected by the quality of the initial change map. Paper IV investigates object-based unsupervised change detection using very high resolution TerraSAR-X images over urban areas. Three algorithms, i.e., Kittler-Illingworth, Otsu, and outlier detection, were tested and compared. The multitemporal images were segmented using a multidate segmentation strategy. The analysis reveals that the three algorithms achieved similar accuracies, very close to the maximum possible given the modified ratio image as an input. This maximum, however, was not very high, which was attributed, partially, to the low capacity of the modified ratio image to accentuate the difference between changed and unchanged areas. Consequently, Paper V proposes a new object-based change image generation technique. The strong intensity variations associated with high resolution and speckle effects render the object mean intensity an unreliable feature, so the modified ratio image is less efficient at emphasizing the contrast between the classes. An alternative representation of the change data was proposed. To measure the intensity of change at an object in isolation from disturbances caused by strong intensity variations and speckle effects, two techniques based on the Fourier transform and the wavelet transform of the change signal were developed. Qualitative and quantitative analyses of the results show that improved change detection accuracies can be obtained by classifying the proposed change variables.
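To make the unsupervised thresholding step concrete, here is a minimal sketch on toy data using a plain log-ratio change image (standing in for the modified ratio operator of the papers) and Otsu's method, one of the three algorithms compared in Paper IV; echoing Paper II, the change image itself is filtered before thresholding, with a simple local mean standing in for the NLM filter:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def otsu_threshold(values, nbins=256):
    """Otsu's threshold: maximize the between-class variance of a histogram."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)               # probability of class 0 up to each bin
    m = np.cumsum(p * centers)      # cumulative mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (m[-1] * w0 - m) ** 2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(between)]

# Toy multitemporal pair: same reflectance except a "new building" block,
# each acquisition multiplied by independent 4-look Gamma speckle (unit mean).
rng = np.random.default_rng(2)
before = np.ones((128, 128))
after = before.copy()
after[40:70, 40:70] = 4.0
im1 = before * rng.gamma(4.0, 0.25, before.shape)
im2 = after * rng.gamma(4.0, 0.25, after.shape)

# Log-ratio turns multiplicative speckle into roughly additive noise;
# filtering the change image (not the inputs) follows Paper II's finding.
log_ratio = uniform_filter(np.log(im2) - np.log(im1), size=5)
change_map = log_ratio > otsu_threshold(log_ratio.ravel())
print("changed pixels:", int(change_map.sum()))  # roughly the 30x30 block
```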
175

Ultra-wideband channel estimation with application towards time-of-arrival estimation

Liu, Ted C.-K. 25 August 2009 (has links)
Ultra-wideband (UWB) technology is the next viable solution for applications in wireless personal area networks (WPAN), body area networks (BAN), and wireless sensor networks (WSN). However, as applications evolve toward more realistic scenarios, wideband channel characteristics such as pulse distortion must be accounted for in channel modeling. Furthermore, application-oriented services such as ranging and localization demand fast prototyping, real-time processing of measured data, and good performance at low signal-to-noise ratio (SNR). Despite the tremendous effort invested in devising new receivers by the global research community, the channel-estimating Rake receiver remains one of the most promising receivers, offering superior performance to its suboptimal counterparts. However, acquiring Nyquist-rate samples incurs substantial power and resource consumption and is a major obstacle to a feasible implementation of the asymptotic maximum likelihood (ML) channel estimator. In this thesis, we address all three aspects of UWB impulse radio (UWB-IR) in three separate contributions. First, we study the a priori dependency of CLEAN deconvolution on real-world measurements, and propose a high-resolution, multi-template deconvolution algorithm to enhance the channel estimation accuracy. This algorithm is shown to outperform its predecessors in terms of accuracy, energy capture, and computational speed. Second, we propose a regularized least-squares time-of-arrival (ToA) estimator with wavelet denoising for the problem of ranging and localization with UWB-IR. We devise a threshold selection framework based on the Neyman-Pearson (NP) criterion, and show the robustness of our algorithm by comparing it with other ToA algorithms in both computer simulation and ranging measurements when advanced digital signal processing (DSP) is available. Finally, we propose a low-complexity ML (LC-ML) channel estimator to fully exploit the multipath diversity with a Rake receiver under sub-Nyquist-rate sampling. We derive the Cramér-Rao lower bound (CRLB) for the LC-ML estimator, and perform simulations to compare our estimator with both the ℓ1-norm minimization technique and the conventional ML estimator.
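A minimal sketch of the single-template CLEAN baseline that the multi-template algorithm improves upon is given below; the toy pulse and two-path channel are invented for illustration:

```python
import numpy as np

def clean_deconvolve(received, template, n_paths=5, loop_gain=1.0):
    """Single-template CLEAN: iteratively subtract the strongest echo.

    The channel is modeled as discrete multipath taps; each iteration
    finds the delay with the highest correlation against the template,
    estimates the tap amplitude by least squares, and removes it.
    """
    residual = received.copy()
    t_energy = np.dot(template, template)
    taps = []
    for _ in range(n_paths):
        corr = np.correlate(residual, template, mode="valid")
        delay = int(np.argmax(np.abs(corr)))
        amp = loop_gain * corr[delay] / t_energy
        residual[delay:delay + len(template)] -= amp * template
        taps.append((delay, amp))
    return taps, residual

# Toy two-path channel probed with a short pulse.
pulse = np.array([0.2, 1.0, -0.6, 0.1])
rx = np.zeros(64)
for d, a in [(10, 1.0), (23, -0.5)]:
    rx[d:d + len(pulse)] += a * pulse
rx += np.random.default_rng(3).normal(0, 0.01, rx.size)
print(clean_deconvolve(rx, pulse, n_paths=2)[0])  # ~[(10, 1.0), (23, -0.5)]
```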
176

HARDI Denoising using Non-local Means on the ℝ³ × 𝕊² Manifold

Kuurstra, Alan 20 December 2011 (has links)
Magnetic resonance imaging (MRI) has become one of the most powerful and accurate tools of medical diagnostic imaging. Central to the diagnostic capabilities of MRI is the notion of contrast, which is determined by the biochemical composition of the examined tissue as well as by its morphology. Despite the importance of the prevalent T₁, T₂, and proton density contrast mechanisms to clinical diagnosis, none of them has demonstrated effectiveness in delineating the morphological structure of the white matter - information known to be related to a wide spectrum of brain-related disorders. It is only with the recent advent of diffusion-weighted MRI that scientists have been able to perform quantitative measurements of the diffusivity of white matter, making possible the structural delineation of neural fibre tracts in the human brain. One diffusion imaging technique in particular, high angular resolution diffusion imaging (HARDI), has inspired a substantial number of processing methods capable of obtaining the orientational information of multiple fibres within a single voxel while boasting minimal acquisition requirements. HARDI characterization of fibre morphology can be enhanced by increasing the spatial and angular resolutions; however, doing so drastically reduces the signal-to-noise ratio. Since pronounced measurement noise tends to obscure and distort diagnostically relevant details of diffusion-weighted MR signals, increasing spatial or angular resolution necessitates the application of efficient and reliable image denoising tools. The aim of this work is to develop an effective framework for filtering HARDI measurement noise which takes into account both the manifold to which the HARDI signal belongs and the statistical nature of MRI noise. These goals are accomplished using an approach rooted in non-local means (NLM) weighted averaging. The average includes samples, and therefore dependencies, from the entire manifold, and its result is used to deduce an estimate of the original signal value in accordance with MRI statistics. The NLM averaging weights are determined adaptively based on a neighbourhood similarity measure. The novel neighbourhood comparison proposed in this thesis is one of spherical neighbourhoods, which assigns large weights to samples with similar local orientational diffusion characteristics. Moreover, the weights are designed to be invariant both to spatial rotations and to the particular sampling scheme in use. This thesis provides a detailed description of the proposed filtering procedure as well as experimental results on synthetic and real-life data. It is demonstrated that the proposed filter has substantially better denoising capabilities compared to a number of alternative methods.
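The core NLM weighting mechanism can be sketched in a few lines. The 1-D version below uses ordinary linear patches, whereas the thesis replaces them with spherical neighbourhoods on the ℝ³ × 𝕊² manifold and corrects the estimate for MRI noise statistics:

```python
import numpy as np

def nlm_denoise_1d(signal, patch=3, search=10, h=0.5):
    """Plain non-local means on a 1-D signal.

    Each sample is replaced by a weighted average over a search window,
    with weights decaying in the mean-squared distance between the
    patches surrounding the two samples.
    """
    n = len(signal)
    padded = np.pad(signal, patch, mode="reflect")
    out = np.empty(n)
    for i in range(n):
        p_i = padded[i:i + 2 * patch + 1]          # patch centered at i
        lo, hi = max(0, i - search), min(n, i + search + 1)
        weights, values = [], []
        for j in range(lo, hi):
            p_j = padded[j:j + 2 * patch + 1]
            d2 = np.mean((p_i - p_j) ** 2)         # patch similarity
            weights.append(np.exp(-d2 / h ** 2))
            values.append(signal[j])
        weights = np.asarray(weights)
        out[i] = np.dot(weights, values) / weights.sum()
    return out

rng = np.random.default_rng(4)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))
noisy = clean + rng.normal(0, 0.3, 200)
print("noise std before/after:",
      np.std(noisy - clean), np.std(nlm_denoise_1d(noisy) - clean))
```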
177

Advanced Computational Methods for Power System Data Analysis in an Electricity Market

Ke Meng Unknown Date (has links)
The power industry has undergone significant restructuring throughout the world since the 1990s. In particular, its traditional, vertically monopolistic structures have been reformed into competitive markets in pursuit of increased efficiency in electricity production and utilization. However, along with market deregulation, power systems presently face severe challenges. One is power system stability, a problem that has attracted widespread concern because of severe blackouts experienced in the USA, the UK, Italy, and other countries. Another is that electricity market operation warrants more effective planning, management, and direction techniques due to the ever-expanding large-scale interconnection of power grids. Moreover, many external constraints, such as environmental protection influences and associated government regulations, now need to be taken into consideration. All these factors have made existing challenges even more complex. One consequence is that more advanced power system data analysis methods are required in the deregulated, market-oriented environment. At the same time, the computational power of modern computers and the application of databases have facilitated the effective employment of new data analysis techniques. In this thesis, the reported research is directed at developing computational-intelligence-based techniques to solve several power system problems that emerge in deregulated electricity markets. Four major contributions are included in the thesis: a newly proposed quantum-inspired particle swarm optimization and a self-adaptive learning scheme for radial basis function neural networks; online wavelet denoising techniques; electricity regional reference price forecasting methods for the electricity market; and power system security assessment approaches for deregulated markets, including fault analysis, voltage profile prediction under contingencies, and a machine learning based load shedding scheme for voltage stability enhancement. Evolutionary algorithms (EAs) inspired by biological evolution mechanisms have had great success in power system stability analysis and operation planning. Here, a new quantum-inspired particle swarm optimization (QPSO) is proposed. Its inspiration stems from quantum computation theory, whose mechanism is entirely different from that of the original EAs. Benchmark data sets and economic load dispatch results show that the QPSO improves on other versions of evolutionary algorithms in terms of both speed and accuracy. Compared to the original PSO, it greatly enhances the searching ability and efficiently manages system constraints. Then, fuzzy C-means (FCM) and QPSO are applied to train radial basis function (RBF) neural networks with the capacity to auto-configure the network structure and obtain the model parameters. Benchmark test results suggest that the proposed training algorithms ensure good performance on data clustering and also improve the training and generalization capabilities of RBF neural networks. Wavelet analysis has been widely used in signal estimation, classification, and compression. Denoising with traditional wavelet transforms always exhibits visual artefacts because the transforms are translation-variant. Furthermore, in most cases, wavelet denoising of real-time signals is performed offline, which limits its efficacy in real-time applications. In the present context, an online wavelet denoising method using a moving-window technique is proposed.
Problems that may occur in real-time wavelet denoising, such as border distortion and pseudo-Gibbs phenomena, are effectively solved by using window extension and window circle spinning methods. This provides an effective data pre-processing technique for the online application of other data analysis approaches. In a competitive electricity market, price forecasting is one of the essential functions required of a generation company and the system operator. It provides critical information for building effective risk management plans by market participants, especially those companies that generate and retail electrical power. Here, an RBF neural network is adopted as a predictor of the electricity market regional reference price in the Australian national electricity market (NEM). Furthermore, the wavelet denoising technique is adopted to pre-process the historical price data. The promising prediction performance on price data demonstrates the efficiency of the proposed method, with real-time wavelet denoising making the online application of the proposed price prediction method feasible. Along with market deregulation, power system security assessment has attracted great concern from both academic and industry analysts, especially after several devastating blackouts in the USA, the UK, and Russia. This thesis goes on to propose an efficient composite method for cascading failure prevention comprising three major stages. Firstly, a hybrid method based on principal component analysis (PCA) and specific statistical measures is used to detect system faults. Secondly, the RBF neural network is used for power network bus voltage profile prediction. Tests are carried out by means of the "N-1" and "N-1-1" methods applied to the New England power system through PSS/E dynamic simulations. Results show that system faults can be reliably detected and voltage profiles can be correctly predicted. In contrast to traditional methods involving phase calculation, this technique uses raw data from the time domain and is computationally inexpensive in terms of both memory and speed for practical applications. This establishes a connection between power system fault analysis and cascading analysis. Finally, a multi-stage model predictive control (MPC) based load shedding scheme for ensuring power system voltage stability is proposed. It is demonstrated that optimal action in the process of load shedding for voltage stability during emergencies can be achieved as a consequence. Based on the above discussions, a framework for analysing power system voltage stability and ensuring its enhancement is proposed; such a framework can be used as an effective means of cascading failure analysis. In summary, the research reported in this thesis provides a composite framework for power system data analysis in a market environment. It covers advanced techniques of computational intelligence and machine learning, and proposes effective solutions for both the market operation and system stability problems facing today's power industry.
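A minimal sketch of such a moving-window scheme is given below (PyWavelets; the window length, wavelet, and threshold rule are illustrative choices, and the circle-spinning step against pseudo-Gibbs artifacts is omitted for brevity):

```python
import numpy as np
import pywt

def online_wavelet_denoise(stream, win=64, step=32, wavelet="db4"):
    """Moving-window wavelet denoising of a streaming signal.

    Each window is extended by symmetric padding to limit border
    distortion, soft-thresholded with the universal threshold, and the
    central reconstruction overwrites the output as the window advances.
    """
    out = np.zeros_like(stream, dtype=float)
    pad = win // 4
    for start in range(0, len(stream) - win + 1, step):
        block = np.pad(stream[start:start + win], pad, mode="symmetric")
        coeffs = pywt.wavedec(block, wavelet, level=3)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise level
        thr = sigma * np.sqrt(2 * np.log(block.size))    # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft")
                                for c in coeffs[1:]]
        rec = pywt.waverec(coeffs, wavelet)[pad:pad + win]
        out[start:start + win] = rec                     # overwrite overlap
    return out

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 512)
noisy = np.sign(np.sin(8 * np.pi * t)) + rng.normal(0, 0.2, t.size)
denoised = online_wavelet_denoise(noisy)
print("residual std:", np.std(denoised - np.sign(np.sin(8 * np.pi * t))))
```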
178

Sub-Nyquist Sampling and Super-Resolution Imaging

Mulleti, Satish January 2017 (has links) (PDF)
The Shannon sampling framework is widely used for the discrete representation of analog bandlimited signals, starting from samples taken at the Nyquist rate. In many practical applications, signals are not bandlimited. In order to accommodate such signals within the Shannon-Nyquist framework, one typically passes the signal through an anti-aliasing filter, which essentially performs bandlimiting. In applications such as RADAR, SONAR, ultrasound imaging, optical coherence tomography, multiband signal communication, wideband spectrum sensing, etc., the signals to be sampled have a certain structure, which could manifest in one of the following forms: (i) sparsity or parsimony in certain bases; (ii) a shift-invariant representation; (iii) a multi-band spectrum; (iv) the finite-rate-of-innovation property. By using such structure as a prior, one can devise efficient sampling strategies that operate at sub-Nyquist rates. In this Ph.D. thesis, we consider the problem of sampling and reconstruction of finite-rate-of-innovation (FRI) signals, which fall in one of two classes: (i) sums of weighted and time-shifted (SWTS) pulses; and (ii) sums of weighted exponentials (SWE). Finite-rate-of-innovation signals are not necessarily bandlimited, but they are specified by a finite number of free parameters per unit time interval. Hence, the FRI reconstruction problem can be solved by estimating the parameters starting from measurements of the signal. Typically, parameter estimation is done using high-resolution spectral estimation (HRSE) techniques such as the annihilating filter, the matrix pencil method, estimation of signal parameters via rotational invariance techniques (ESPRIT), etc. The sampling issues include the design of the sampling kernel and the choice of the sampling grid structure. Following a frequency-domain reconstruction approach, we propose a novel technique to design compactly supported sampling kernels. The key idea is to cancel aliasing at a certain set of uniformly spaced frequencies and to ensure that the rest of the frequency response is specified such that the kernel satisfies the Paley-Wiener criterion for compactly supported functions. To assess robustness in the presence of noise, we consider a particular class of the proposed kernels whose impulse response has the form of a sum of modulated splines (SMS). In the presence of continuous-time and digital noise, we show that the reconstruction accuracy is improved by 5 to 25 dB by using the SMS kernel compared with state-of-the-art compactly supported kernels. Apart from noise robustness, the SMS kernel also has a polynomial-exponential reproducing property where the exponents are harmonically related. An interesting feature of the SMS kernel, in contrast with E-splines, is that its support is independent of the number of exponentials. In a typical SWTS signal reconstruction mechanism, the SWTS signal is first transformed to an SWE signal, followed by uniform sampling, and then discrete-domain annihilation is applied for parameter estimation. In this thesis, we develop a continuous-time annihilation approach using the shift operator for estimating the parameters of SWE signals. Instead of using uniform-sampling-based HRSE techniques, operator-based annihilation allows us to estimate parameters from structured non-uniform samples (SNS), and gives more accurate parameter estimates.
On the application front, we first consider the problem of curve fitting and curve completion, specifically, ellipse fitting to uniform or non-uniform samples. In general, the ellipse fitting problem is solved by minimizing distance metrics such as the algebraic distance, the geometric distance, etc. It is known that when the samples are measured from an incomplete ellipse, such fitting techniques tend to estimate biased ellipse parameters, and the estimated ellipses are smaller than the ground truth. By taking into account the FRI property of an ellipse, we show how accurate ellipse fitting can be performed even on data measured from a partial ellipse. Our fitting technique first estimates the underlying sampling rate using the annihilating filter and then carries out least-squares regression to estimate the ellipse parameters. The estimated ellipses have less bias compared with the state-of-the-art methods, and the mean-squared error is lower by about 2 to 10 dB. We show applications of ellipse fitting to iris images starting from partial edge contours, where the proposed method localizes the iris/pupil more accurately than conventional methods. In a related application, we demonstrate curve completion for partial ellipses drawn on a touch-screen tablet. We also applied the FRI principle to imaging applications such as frequency-domain optical coherence tomography (FDOCT) and nuclear magnetic resonance (NMR) spectroscopy. In these applications, the resolution is limited by the uncertainty principle, which, in turn, is limited by the number of measurements. By establishing the FRI property of the measurements, we show that one can attain super-resolved tomograms and NMR spectra using the same or a smaller number of samples compared with classical Fourier-based techniques. In the case of FDOCT, by assuming a piecewise-constant refractive index in the specimen, we show that the measurements have SWE form, and we show how super-resolved tomograms can be achieved using the SNS-based reconstruction technique. To demonstrate clinical relevance, we consider FDOCT measurements obtained from the retinal pigment epithelium (RPE) and photoreceptor inner/outer segments (IS/OS) of the retina, and show that the proposed method is able to resolve the RPE and IS/OS layers using only 40% of the available samples. In the context of NMR spectroscopy, the measured signal or free induction decay (FID) can be modelled as an SWE signal. Due to the exponential decay, the FIDs are non-stationary; hence, one cannot directly apply autocorrelation-based methods such as ESPRIT. We develop DEESPRIT, a counterpart of ESPRIT for decaying exponentials. We consider FID measurements from an amino acid mixture and show that the proposed method is able to resolve two closely spaced frequencies using only 40% of the measurements. In summary, this thesis focuses on various aspects of sub-Nyquist sampling and demonstrates concrete applications to super-resolution imaging.
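To illustrate the discrete-domain annihilation step mentioned above, the noise-free sketch below recovers the exponents of a sum of exponentials from uniform samples; the thesis goes well beyond this textbook baseline, with operator-based annihilation on structured non-uniform samples and DEESPRIT for decaying exponentials:

```python
import numpy as np

def annihilating_filter_exponents(x, K):
    """Estimate the K exponents of x[n] = sum_k c_k * u_k**n.

    Classic annihilating-filter step of FRI reconstruction: a filter h of
    length K+1 whose convolution with x vanishes has the u_k as the roots
    of its z-transform. The filter is the null vector of a Toeplitz system.
    """
    N = len(x)
    # Rows enforce sum_l h[l] * x[m + K - l] = 0 for m = 0 .. N-K-1.
    T = np.array([[x[m + K - l] for l in range(K + 1)] for m in range(N - K)])
    _, _, Vh = np.linalg.svd(T)
    h = Vh[-1].conj()               # right singular vector of smallest sigma
    return np.roots(h)              # polynomial roots are the u_k

# Two closely spaced tones: u_k = exp(j*2*pi*f_k) with f = 0.11 and 0.13.
n = np.arange(32)
x = 1.0 * np.exp(2j * np.pi * 0.11 * n) + 0.7 * np.exp(2j * np.pi * 0.13 * n)
u = annihilating_filter_exponents(x, K=2)
print(np.sort(np.angle(u) / (2 * np.pi)))   # ~[0.11, 0.13]
```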
179

Image denoising by fusion of spatial and frequency-domain filtering

Barry, Djenabou 03 1900 (has links)
No description available.
180

Speckle noise filtering in synthetic aperture radar images using non-local means filters with homomorphic transformation and stochastic distances

Penna, Pedro Augusto de Alagão 23 January 2014 (has links)
The development of new methods and techniques for filtering noisy images continues to attract researchers, who seek to reduce noise with minimal loss of detail, edges, resolution, and fine structures of the image. Moreover, it is extremely important to extend the capacity of filters to the different noise models present in the image and signal processing literature, such as the multiplicative speckle noise present in synthetic aperture radar (SAR) images. This Master's thesis aims to take a recent denoising algorithm, non-local means (NLM), developed for additive white Gaussian noise (AWGN), and to extend, analyze, and compare its capacity for denoising (despeckling) intensity SAR images, which are contaminated with speckle. This extension of the NLM filter is based on the use of stochastic distances and on the comparison of parameters estimated under the G0 and inverse Gamma distributions. Finally, this work compares the results of the proposed filter on synthetic and real data with several filters from the literature.
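A minimal sketch of the homomorphic route described above follows; a Gaussian filter stands in for the NLM-with-stochastic-distances filter of the thesis, and the bias term assumes L-look Gamma-distributed speckle:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import digamma

def homomorphic_despeckle(intensity,
                          denoise=lambda im: gaussian_filter(im, 2.0),
                          looks=1.0):
    """Homomorphic despeckling wrapper.

    The log transform turns multiplicative speckle I = R * s into additive
    noise log I = log R + log s, so any additive-noise filter can be
    applied before mapping back. The mean of log-speckle is nonzero,
    E[log s] = psi(L) - log(L) for L-look intensity, hence the correction.
    """
    log_im = np.log(np.maximum(intensity, 1e-10))
    filtered = denoise(log_im)
    bias = digamma(looks) - np.log(looks)   # E[log s] for L-look speckle
    return np.exp(filtered - bias)

rng = np.random.default_rng(6)
reflectance = np.ones((64, 64))
reflectance[16:48, 16:48] = 4.0
noisy = reflectance * rng.gamma(1.0, 1.0, reflectance.shape)  # single look
despeckled = homomorphic_despeckle(noisy)
print("noisy block mean     :", noisy[16:48, 16:48].mean())
print("despeckled block mean:", despeckled[16:48, 16:48].mean())
```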
