1 |
Savitzky-Golay Filters and Application to Image and Signal Denoising. Menon, Seeram V. January 2015 (has links) (PDF)
We explore the applicability of local polynomial approximation of signals for noise suppression. In the context of data regression, Savitzky and Golay showed that least-squares approximation of data with a polynomial of fixed order, together with a constant window length, is identical to convolution with a finite impulse response filter, whose characteristics depend entirely on two parameters, namely, the order and window length. Schafer’s recent article in IEEE Signal Processing Magazine provides a detailed account of one-dimensional Savitzky-Golay (SG) filters. Drawing motivation from this idea, we present an elaborate study of two-dimensional SG filters and employ them for image denoising by optimizing the filter response to minimize the mean-squared error (MSE) between the original image and the filtered output. The key contribution of this thesis is a method for optimal selection of order and window length of SG filters for denoising images. First, we apply the denoising technique for images contaminated by additive Gaussian noise. Owing to the absence of ground truth in practice, direct minimization of the MSE is infeasible. However, the classical work of C. Stein provides a statistical method to overcome the hurdle. Based on Stein’s lemma, an estimate of the MSE, namely Stein’s unbiased risk estimator (SURE), is derived, and the two critical parameters of the filter are optimized to minimize the cost. The performance of the technique improves when a regularization term, which penalizes fast variations in the estimate, is added to the optimization cost. In the next three chapters, we focus on non-Gaussian noise models.
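The Savitzky-Golay equivalence described above can be sketched directly: the least-squares fit of a fixed-order polynomial over a fixed window reduces to a set of pre-computable filter taps. The following minimal NumPy sketch (the function name and test signal are illustrative, not from the thesis) derives the taps from the pseudoinverse of a Vandermonde design matrix and verifies that filtering an exact quadratic reproduces it.

```python
import numpy as np
from math import factorial

def sg_coeffs(window, order, deriv=0):
    """Savitzky-Golay taps: LS fit of a degree-`order` polynomial over
    `window` samples, evaluated (or differentiated) at the center."""
    half = window // 2
    x = np.arange(-half, half + 1)
    # Vandermonde design matrix: columns 1, x, x^2, ..., x^order
    A = np.vander(x, order + 1, increasing=True)
    # Row `deriv` of the pseudoinverse gives the fitted coefficient
    # c_deriv; the deriv-th derivative at the center is deriv! * c_deriv.
    return factorial(deriv) * np.linalg.pinv(A)[deriv]

h = sg_coeffs(7, 2)               # 7-tap quadratic smoother
n = np.arange(20.0)
y = 0.5 * n**2 - 3 * n + 2        # exact quadratic: the LS fit is exact
smoothed = np.correlate(y, h, mode="valid")
```

Because the fit is exact for polynomials up to the chosen order, `smoothed` matches the interior samples of `y`, which is the defining property of the SG smoother.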
In Chapter 3, image degradation in the presence of a compound noise model, where images are corrupted by mixed Poisson-Gaussian noise, is addressed. Inspired by Hudson’s identity, an estimate of MSE, namely Poisson unbiased risk estimator (PURE), which is analogous to SURE, is developed. Combining both lemmas, Poisson-Gaussian unbiased risk estimator (PGURE) minimization is performed to obtain the optimal filter parameters. We also show that SG filtering provides better lowpass approximation for a multiresolution denoising framework.
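As a rough sanity check of the unbiased-risk idea behind PURE, the sketch below treats the pure-Poisson case only (the mixed Poisson-Gaussian PGURE of the thesis adds Stein-type correction terms, omitted here). For a linear denoiser H, Hudson's identity E[y_n f_n(y - e_n)] = E[x_n f_n(y)] yields an MSE estimate that never touches the clean signal; a Monte Carlo average is compared against the exact risk. The moving-average denoiser and test intensities are illustrative choices, not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = 20 + 5 * np.sin(2 * np.pi * np.arange(N) / N)   # clean intensities
H = np.zeros((N, N))
for n in range(N):
    H[n, np.arange(n - 2, n + 3) % N] = 1 / 5       # circular 5-tap average

def pure(y, H):
    Hy = H @ y
    # unbiased surrogates under Poisson noise:
    #   <x, Hy>  ->  y.(Hy) - y.diag(H)     (Hudson's identity)
    #   ||x||^2  ->  y.y - sum(y)
    return (Hy @ Hy - 2 * (y @ Hy - y @ np.diag(H)) + (y @ y - y.sum())) / len(y)

est = np.mean([pure(rng.poisson(x).astype(float), H) for _ in range(2000)])
# exact MSE of the linear denoiser: squared bias + output variance
true_mse = ((H @ x - x) @ (H @ x - x) + np.sum(H**2 @ x)) / N
```

Averaged over many noise draws, the PURE surrogate concentrates around the true MSE, which is what licenses minimizing it over the filter parameters in place of the inaccessible oracle risk.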
In Chapter 4, we employ SG filters for reducing multiplicative noise in images. The standard SG filter frequency response can be controlled along horizontal or vertical directions. This limits its ability to capture oriented features and texture that lie at other angles. Here, we introduce the idea of steering the SG filter kernel and perform mean-squared error minimization based on the new concept of multiplicative noise unbiased risk estimation (MURE).
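The steering idea admits a compact sketch: rotate the coordinate axes of the local polynomial fit before solving the least-squares problem, so the kernel aligns with oriented features. The construction below (a plain illustration of kernel steering; the MURE-based parameter selection is not reproduced) builds a 2-D SG smoothing kernel from a total-degree monomial basis on rotated coordinates.

```python
import numpy as np

def steered_sg2d(window, order, theta=0.0):
    """2-D Savitzky-Golay smoothing kernel whose polynomial-fit axes
    are rotated by `theta` radians."""
    half = window // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    u = np.cos(theta) * xx + np.sin(theta) * yy     # rotated coordinates
    v = -np.sin(theta) * xx + np.cos(theta) * yy
    # monomial basis u^i v^j of total degree <= order
    cols = [(u**i * v**j).ravel()
            for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.stack(cols, axis=1)
    # row 0 of the pseudoinverse = weights for the constant term,
    # i.e. the smoothed value at the window center
    return np.linalg.pinv(A)[0].reshape(window, window)

K = steered_sg2d(7, 3, theta=np.pi / 6)
```

Since rotation preserves total degree, the steered kernel still reproduces any polynomial patch of degree up to the fit order exactly at the window center, and its taps sum to one.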
Finally, we propose a method to robustify SG filters, that is, to make them robust to deviations from Gaussian noise statistics. SG filters work on the principle of least-squares error minimization, and are hence compatible with maximum-likelihood (ML) estimation in the context of Gaussian statistics. However, for heavy-tailed noise such as the Laplacian, where ML estimation requires mean-absolute error minimization in lieu of MSE minimization, standard SG filter performance deteriorates. ℓ1 minimization is a challenge since there is no closed-form solution. We solve the problem by inducing the ℓ1-norm criterion using the iteratively reweighted least-squares (IRLS) method. At every iteration, we solve an ℓ2 problem, which is equivalent to optimizing a weighted SG filter, but, as iterations progress, the solution converges to that corresponding to ℓ1 minimization. The results thus obtained are superior to those obtained using the standard SG filter.
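The IRLS mechanism can be sketched in a few lines: each pass solves a weighted least-squares problem (a weighted SG fit when A is the local polynomial design matrix), with weights set to the reciprocal of the current residual magnitudes. The toy below (an illustration, not the thesis's filter) fits a constant in the ℓ1 sense, which should converge to the median and thus shrug off an outlier that badly skews the ℓ2 mean.

```python
import numpy as np

def irls_l1(A, y, iters=100, eps=1e-8):
    """Iteratively reweighted least squares for min_c ||A c - y||_1.
    Every iteration is an ordinary weighted LS solve."""
    c = np.linalg.lstsq(A, y, rcond=None)[0]            # l2 start
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - A @ c), eps)    # reweight by residuals
        Aw = A * w[:, None]
        c = np.linalg.solve(A.T @ Aw, Aw.T @ y)         # weighted normal eqs.
    return c

# Toy check: the l1 fit of a constant is the median (3), while the
# l2 fit is the mean (22), dragged far away by the outlier 100.
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
A = np.ones((5, 1))
c = irls_l1(A, y)
```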
|
2 |
Método para detecção e compensação dos efeitos causados pela saturação dos TCs de proteção com meios adaptativos para mitigação da influência do ruído e dos desvios de frequência (A method for detection and compensation of the effects caused by the saturation of protection CTs, with adaptive means to mitigate the influence of noise and frequency deviations). Schettino, Bruno Montesano. 08 December 2015 (has links)
This work proposes a method for detecting the saturation of the current-transformer (CT) cores used in the protection of electric power systems, and for compensating its effects by correcting the secondary current signal distorted by saturation. Signal-processing techniques based on the second-order Savitzky-Golay differentiator filter are used to locate the transition points between the distorted and undistorted parts of the current signal. An estimation process based on the least-squares criterion, using exclusively signal samples from the undistorted regions, is then performed to extract the parameters needed to correct the signal. The influences of noise and of deviations in the power-system operating frequency were analysed, and adaptive means to mitigate their effects were developed and incorporated. The algorithms were implemented in MATLAB, and performance was evaluated using signals taken from fault simulations on a system modeled in a real-time digital simulator (RTDS). The results indicate that the proposed method achieves satisfactory performance regardless of the CT parameters and across a wide range of fault scenarios. Moreover, the method proved robust to noise and effective in mitigating the errors caused by frequency deviations. Furthermore, the technical and computational resources required for its execution indicate that the proposed method can be implemented in the protection devices currently available from industry.
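The transition-point idea lends itself to a small sketch: a Savitzky-Golay second-order differentiator responds strongly where the waveform kinks, as it does at the onset of CT saturation. The idealized collapsing-sine waveform and threshold-free argmax detection below are illustrative assumptions, not the thesis's full method.

```python
import numpy as np
from math import factorial

def sg_deriv_coeffs(window, order, deriv):
    """SG differentiator taps from the Vandermonde pseudoinverse."""
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, order + 1, increasing=True)
    return factorial(deriv) * np.linalg.pinv(A)[deriv]

n = np.arange(256)
i_sec = np.sin(2 * np.pi * n / 256)   # idealized secondary current
i_sec[128:] = 0.0                     # waveform collapses at sample 128

# second-derivative estimate: small on the smooth sine, large at the kink
d2 = np.correlate(i_sec, sg_deriv_coeffs(9, 2, 2), mode="same")
onset = int(np.argmax(np.abs(d2[5:-5])) + 5)   # skip edge-contaminated taps
```

The slope discontinuity at the transition dominates the sine's own curvature, so the peak of |d2| lands at (or within a filter half-width of) the true onset.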
|
3 |
Optimum Savitzky-Golay Filtering for Signal Estimation. Krishnan, Sunder Ram. January 2013 (has links) (PDF)
Motivated by the classic works of Charles M. Stein, we focus on developing risk-estimation frameworks for denoising problems in both one and two dimensions. We assume a standard additive noise model, and formulate the denoising problem as one of estimating the underlying clean signal from noisy measurements by minimizing a risk corresponding to a chosen loss function. Our goal is to incorporate perceptually motivated loss functions wherever applicable, as in the case of speech enhancement, with the squared error loss being considered for the other scenarios. Since the true risks depend on the unknown parameter of interest, we circumvent the roadblock by deriving finite-sample unbiased estimators of the corresponding risks based on Stein’s lemma. We establish the link between the multivariate parameter estimation problem addressed by Stein and our denoising problem, and derive estimators of the oracle risks. In all cases, optimum values of the parameters characterizing the denoising algorithm are determined by minimizing Stein’s unbiased risk estimator (SURE).
The key contribution of this thesis is the development of a risk-estimation approach for choosing the two critical parameters affecting the quality of nonparametric regression, namely, the order and bandwidth/smoothing parameters. This is a classic problem in statistics, and certain algorithms relying on the derivation of suitable finite-sample risk estimators for minimization have been reported in the literature (note that all these works consider the mean squared error (MSE) objective). We show that a SURE-based formalism is well-suited to the regression parameter selection problem, and that the optimum solution guarantees near-minimum MSE (MMSE) performance. We develop algorithms for both globally and locally choosing the two parameters, the latter referred to as spatially-adaptive regression. We observe that the parameters are so chosen as to trade off the squared-bias and variance quantities that constitute the MSE. We also indicate the advantages accruing out of incorporating a regularization term in the cost function in addition to the data error term. In the more general case of kernel regression, which uses a weighted least-squares (LS) optimization, we consider the applications of image restoration from very few random measurements, in addition to denoising of uniformly sampled data. We show that local polynomial regression (LPR) becomes a special case of kernel regression, and extend our results for LPR on uniform data to non-uniformly sampled data as well. The denoising algorithms are compared with other standard, performant methods available in the literature, both in terms of estimation error and computational complexity.
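The parameter-selection loop can be sketched concretely for the window length of an SG smoother (order fixed at 2 for brevity). Under Gaussian noise of known variance and an (approximately) circular convolution, tr(H) = N h[center], so SURE is computable from the noisy data alone; the test signal, window grid, and noise level below are illustrative assumptions.

```python
import numpy as np

def sg_coeffs(window, order):
    """Center-tap SG smoothing coefficients via the Vandermonde pseudoinverse."""
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, order + 1, increasing=True)
    return np.linalg.pinv(A)[0]

rng = np.random.default_rng(1)
N = 512
t = np.arange(N)
clean = np.sin(2 * np.pi * t / 32)
sigma = 0.3
noisy = clean + sigma * rng.standard_normal(N)

windows = [5, 9, 13, 17, 21, 25, 29, 33]
sure, mse = [], []
for w in windows:
    h = sg_coeffs(w, 2)
    den = np.convolve(noisy, h, mode="same")
    # SURE = residual + 2 sigma^2 tr(H)/N - sigma^2, with tr(H)/N = h[center]
    sure.append(np.mean((den - noisy) ** 2) + 2 * sigma**2 * h[w // 2] - sigma**2)
    mse.append(np.mean((den - clean) ** 2))   # oracle, for comparison only

best = windows[int(np.argmin(sure))]
```

Minimizing SURE, which uses only `noisy`, picks a window whose oracle MSE is close to the minimum over the grid, illustrating the near-MMSE behavior claimed above.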
A major perspective provided in this thesis is that the problem of optimum parameter choice in nonparametric regression can be viewed as the selection of optimum parameters of a linear, shift-invariant filter. This interpretation draws motivation from the hallmark paper of Savitzky and Golay and from Schafer’s recent article in IEEE Signal Processing Magazine. It is worth noting that Savitzky and Golay had shown in their original Analytical Chemistry journal article that LS fitting of a fixed-order polynomial over a neighborhood of fixed size is equivalent to convolution with an impulse response that is fixed and can be pre-computed. They had provided tables of impulse response coefficients for computing the smoothed function and smoothed derivatives for different orders and neighborhood sizes, the resulting filters being referred to as Savitzky-Golay (S-G) filters. Thus, we provide the new perspective that the regression parameter choice is equivalent to optimizing the filter impulse response length / 3-dB bandwidth, which are inversely related. We observe that the MMSE solution is such that the S-G filter chosen is of longer impulse response length (equivalently, smaller cutoff frequency) at relatively flat portions of the noisy signal so as to smooth noise, and vice versa at locally fast-varying portions of the signal so as to capture the signal patterns. Also, we provide a generalized S-G filtering viewpoint in the case of kernel regression.
Building on the S-G filtering perspective, we turn to the problem of dynamic feature computation in speech recognition. We observe that the methodology employed for computing dynamic features from the trajectories of static features is in fact derivative S-G filtering. With this perspective, we note that the filter coefficients can be pre-computed, and that the whole problem of delta feature computation becomes efficient. Indeed, we observe an advantage by a factor of 10⁴ on making use of S-G filtering over actual LS polynomial fitting and evaluation. Thereafter, we study the properties of first- and second-order derivative S-G filters of certain orders and lengths experimentally. The derivative filters are bandpass due to the combined effects of LPR and derivative computation, which are lowpass and highpass operations, respectively. The first- and second-order S-G derivative filters are also observed to exhibit an approximately constant-Q property. We perform a TIMIT phoneme recognition experiment comparing the recognition accuracies obtained using S-G filters and the conventional approach followed in HTK, where Furui’s regression formula is used. The recognition accuracies for both cases are almost identical, with S-G filters of certain bandwidths and orders registering a marginal improvement. The accuracies are also observed to improve with longer filter lengths, for a particular order. In terms of computation latency, we note that S-G filtering achieves delta and delta-delta feature computation in parallel by linear filtering, whereas they need to be obtained sequentially in the case of the standard regression formulas used in the literature.
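The identification of delta features with derivative S-G filtering can be checked in a few lines: Furui's regression formula is the LS slope of the static-feature trajectory over a (2K+1)-frame window, i.e. a first-order-polynomial S-G differentiator. The sketch below confirms that the two sets of weights coincide (K = 2 is a common choice, used here for illustration).

```python
import numpy as np
from math import factorial

def sg_deriv(window, order, deriv):
    """SG differentiator taps via the Vandermonde pseudoinverse."""
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, order + 1, increasing=True)
    return factorial(deriv) * np.linalg.pinv(A)[deriv]

K = 2
k = np.arange(-K, K + 1)
# Furui's delta weights: coefficient k / (2 * sum_{j=1}^K j^2) on frame t+k
furui = k / (2 * np.sum(np.arange(1, K + 1) ** 2))
```

Since the weights are identical, delta (and, by the same argument, delta-delta) features can be obtained by fixed linear filtering of the static-feature trajectories, with no per-frame polynomial fit.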
Finally, we turn to the problem of speech enhancement, where we are interested in denoising using perceptually motivated loss functions such as the Itakura-Saito (IS) distortion. We propose to perform enhancement in the discrete cosine transform domain using risk minimization. The cost functions considered are non-quadratic, and the derivation of the unbiased estimator of the risk corresponding to the IS distortion is achieved using an approximate Taylor-series analysis under a high signal-to-noise-ratio assumption. The exposition is general, since we focus on an additive noise model with the noise density assumed to fall within the exponential class of density functions, which comprises most of the common densities. The denoising function is assumed to be pointwise linear (a modified James-Stein (MJS) estimator), and parallels between Wiener filtering and the optimum MJS estimator are discussed.
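The DCT-domain, pointwise-linear setup can be sketched as follows. Note the caveats: the thesis derives the gains by minimizing an Itakura-Saito risk estimate, whereas the per-coefficient gain rule below is a plain James-Stein-style stand-in under squared error, and the test signal and noise level are illustrative assumptions.

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II basis, so white noise stays white in the transform."""
    n, k = np.meshgrid(np.arange(N), np.arange(N))
    C = np.cos(np.pi * (2 * n + 1) * k / (2 * N)) * np.sqrt(2 / N)
    C[0] /= np.sqrt(2)
    return C

rng = np.random.default_rng(2)
N = 256
t = np.arange(N)
clean = np.sin(2 * np.pi * t / 64) + 0.5 * np.sin(2 * np.pi * t / 16)
sigma = 0.5
noisy = clean + sigma * rng.standard_normal(N)

C = dct_matrix(N)
Y = C @ noisy
# pointwise-linear shrinkage: keep strong coefficients, kill noise-level ones
gain = np.maximum(0.0, 1.0 - sigma**2 / np.maximum(Y**2, 1e-12))
denoised = C.T @ (gain * Y)
```

Because the clean signal concentrates in few DCT coefficients while the noise spreads evenly, the pointwise gains suppress noise-only coefficients and leave signal-dominated ones nearly untouched, which is the mechanism the MJS estimator exploits.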
|