  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Experimental Testing and Evaluation of Orthogonal Waveforms for MIMO Radar with an Emphasis on Modified Golay Codes

Burwell, Alex 26 August 2014 (has links)
No description available.
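
As general background only (the record itself provides no abstract), waveform designs of this kind typically build on Golay complementary pairs, whose aperiodic autocorrelations sum to a single peak with zero sidelobes. The Python sketch below is not material from the thesis; it only illustrates the standard recursive construction and that complementary property, with the sequence length chosen arbitrarily.

```python
import numpy as np

def golay_pair(m):
    """Recursively build a Golay complementary pair of length 2**m."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(m):
        # If (a, b) is complementary, so is ([a b], [a -b]).
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def aperiodic_autocorr(x):
    """Full aperiodic autocorrelation of a real sequence."""
    return np.correlate(x, x, mode="full")

a, b = golay_pair(6)                       # length-64 pair (length chosen arbitrarily)
total = aperiodic_autocorr(a) + aperiodic_autocorr(b)

# Complementary property: all sidelobes cancel, leaving a single peak of 2*N.
assert np.allclose(total[:len(a) - 1], 0.0)
print("Peak of summed autocorrelations:", total[len(a) - 1])   # -> 128.0 for N = 64
```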
2

Proposta e implementação de uma Micro-PMU / Proposal and implementation of a Micro-PMU

Aleixo, Renato Ribeiro 01 March 2018 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This work proposes a low-cost Phasor Measurement Unit (PMU) aimed at monitoring the electric power distribution system. The proposed meter can be connected at the low-voltage level, making it possible to monitor both the distribution and the transmission systems. The phasor-estimation algorithm in the equipment's embedded software uses the Savitzky-Golay filter to approximate the derivative required when estimating the frequency of the signal's fundamental component. The hardware comprises a Texas Instruments TM4C1294NCPDT ARM microcontroller, a uBlox NEO-6M GPS module, an ESP8266 Wi-Fi module and an analog signal-conditioning circuit. Synchronism of the measurements is guaranteed by the one-pulse-per-second signal provided by the GPS module. The data generated by the meter are transmitted using the protocol defined in the current PMU standard. The estimates can be stored and visualized in real time through a synchrophasor data monitoring software. The results cover the tests required by the IEEE C37.118.1 standard, evaluating the total vector error, the frequency error and the rate-of-change-of-frequency error. Finally, to confirm the synchronism between measurements taken by more than one unit, phasors and frequency were estimated at distinct points of the IEEE 4-bus system, simulated in real time on an RTDS, where the phase difference between two buses of that system was estimated correctly.
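
The sketch below (in Python, not the meter's embedded firmware) only illustrates the general idea referenced in the abstract: differentiating the unwrapped phase of an analytic signal with a Savitzky-Golay derivative filter to estimate the fundamental frequency. The sampling rate, window length, polynomial order and test tone are all assumptions made for illustration.

```python
import numpy as np
from scipy.signal import hilbert, savgol_filter

fs = 4800.0                                # assumed sampling rate (Hz), not from the thesis
t = np.arange(0, 0.5, 1 / fs)
x = np.cos(2 * np.pi * 60.2 * t + 0.3)     # test tone slightly off the nominal 60 Hz

# Instantaneous phase of the analytic signal.
phase = np.unwrap(np.angle(hilbert(x)))

# Savitzky-Golay differentiation of the phase gives d(phi)/dt, hence frequency.
dphi = savgol_filter(phase, window_length=31, polyorder=3,
                     deriv=1, delta=1 / fs)
freq = dphi / (2 * np.pi)

print("Estimated frequency (middle of record): %.3f Hz" % freq[len(freq) // 2])
```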
3

Near-Infrared Spectral Measurements and Multivariate Analysis for Predicting Glass Contamination of Boiler Fuel

Winn, Olivia, Thekkemadathil Sivaram, Kiran January 2017 (has links)
This degree project investigates how glass contamination in refuse-derived fuel for a fluidised-bed boiler can be detected using near-infrared spectroscopy. It is motivated by the potential to reduce greenhouse-gas emissions by replacing fossil fuels with refuse-derived fuel. The intent was to develop a multivariate predictive model of near-infrared spectral data to detect the presence of glass cullet against a background material representing refuse-derived fuel. Existing literature was reviewed to confirm the use of near-infrared spectroscopy as a sensing technology and to establish the need for glass detection. Four background materials were chosen to represent the main components of municipal solid waste: wood shavings, shredded coconut, dry rice and whey powder. Samples of glass mixed with the background materials were imaged using near-infrared spectroscopy, and the resulting data were pre-processed and analysed using partial least squares regression. A predictive model for quantifying coloured glass cullet content across several background materials was reasonably accurate, with a validation coefficient of determination of 0.81 between the predicted and reference data. Models that used data from a single background material, wood shavings, were more accurate. Models for quantifying clear glass cullet content were significantly less accurate. Models of this kind could be applied to predict coloured glass content in different background materials; however, the presence of clear glass in municipal solid waste, and thus in refuse-derived fuel, limits the opportunities to apply these methods to the detection of glass contamination in fuel.
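
The modelling step described above — partial least squares regression of pre-processed near-infrared spectra against glass content, judged by a validation coefficient of determination — can be reproduced in outline with scikit-learn. The synthetic spectra, the SNV pre-processing step and the component count in this sketch are illustrative assumptions, not the project's actual data or pipeline.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 256))            # stand-in NIR spectra: 200 samples, 256 bands

# Standard normal variate (SNV) pre-processing: centre and scale each spectrum.
X_snv = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# Stand-in "glass content" reference values, tied to two arbitrary bands.
y = 0.8 * X_snv[:, 40] + 0.5 * X_snv[:, 120] + rng.normal(scale=0.1, size=200)

X_tr, X_val, y_tr, y_val = train_test_split(X_snv, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=10)       # component count chosen arbitrarily here
pls.fit(X_tr, y_tr)
y_hat = pls.predict(X_val).ravel()
print("Validation R^2: %.3f" % r2_score(y_val, y_hat))
```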
4

Método para detecção e compensação dos efeitos causados pela saturação dos TCs de proteção com meios adaptativos para mitigação da influência do ruído e dos desvios de frequência / Method for detecting and compensating the effects of protection CT saturation, with adaptive means to mitigate the influence of noise and frequency deviations

Schettino, Bruno Montesano 08 December 2015 (has links)
This work proposes a method for detecting saturation of the cores of current transformers (CTs) used in power-system protection, and for compensating its effects by correcting the secondary-current signal distorted by the saturation. Signal-processing techniques based on the second-order Savitzky-Golay differentiator filter are used to locate the transition points between the distorted and undistorted portions of the current signal. An estimation process based on the least-squares criterion, using only signal samples from the undistorted regions, is then performed to extract the parameters needed to correct the signal. The influences of noise and of power-system frequency deviations were analysed, and adaptive means to mitigate their effects were developed and incorporated. The algorithms were implemented in MATLAB, and performance was evaluated using signals taken from fault simulations of a system modelled on a real-time digital simulator (RTDS). The results indicate that the proposed method achieves satisfactory performance regardless of the CT parameters and over a wide range of analysed fault scenarios. Moreover, the method proved robust to noise and effective in mitigating the errors caused by frequency deviations. Finally, the technical and computational resources required for its execution indicate that the proposed method can be implemented on the protection devices currently offered by industry.
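
As a rough illustration of the detection step described above (not the thesis's implementation), a second-order Savitzky-Golay differentiator can be used to flag the sharp transitions that mark the boundaries between distorted and undistorted portions of the secondary current. The toy waveform, filter settings and threshold below are arbitrary assumptions; the thesis develops adaptive, noise-aware mechanisms rather than a fixed threshold.

```python
import numpy as np
from scipy.signal import savgol_filter

fs, f0 = 3840.0, 60.0                      # assumed sampling rate and nominal frequency
t = np.arange(0, 0.1, 1 / fs)
i_ideal = 10 * np.sin(2 * np.pi * f0 * t)

# Crude stand-in for severe CT saturation: the secondary current is clipped after t = 30 ms.
i_sec = np.where(t > 0.03, np.clip(i_ideal, -2, 2), i_ideal)

# Second-order SG differentiator: its output spikes at the corners where
# distorted and undistorted portions of the waveform meet.
d2 = savgol_filter(i_sec, window_length=7, polyorder=3, deriv=2, delta=1 / fs)

threshold = 0.5 * np.max(np.abs(d2))       # fixed toy threshold, unlike the adaptive thesis scheme
transitions = np.flatnonzero(np.abs(d2) > threshold)
print("Samples flagged as transition points:", transitions)
```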
5

Optimum Savitzky-Golay Filtering for Signal Estimation

Krishnan, Sunder Ram January 2013 (has links) (PDF)
Motivated by the classic works of Charles M. Stein, we focus on developing risk-estimation frameworks for denoising problems in both one and two dimensions. We assume a standard additive noise model, and formulate the denoising problem as one of estimating the underlying clean signal from noisy measurements by minimizing a risk corresponding to a chosen loss function. Our goal is to incorporate perceptually motivated loss functions wherever applicable, as in the case of speech enhancement, with the squared-error loss being considered for the other scenarios. Since the true risks depend on the unknown parameter of interest, we circumvent the roadblock by deriving finite-sample unbiased estimators of the corresponding risks based on Stein’s lemma. We establish the link between the multivariate parameter estimation problem addressed by Stein and our denoising problem, and derive estimators of the oracle risks. In all cases, optimum values of the parameters characterizing the denoising algorithm are determined by minimizing Stein’s unbiased risk estimator (SURE). The key contribution of this thesis is the development of a risk-estimation approach for choosing the two critical parameters affecting the quality of nonparametric regression, namely, the order and the bandwidth/smoothing parameter. This is a classic problem in statistics, and certain algorithms relying on the derivation of suitable finite-sample risk estimators for minimization have been reported in the literature (note that all these works consider the mean-squared-error (MSE) objective). We show that a SURE-based formalism is well suited to the regression parameter-selection problem, and that the optimum solution guarantees near-minimum-MSE (MMSE) performance. We develop algorithms for both globally and locally choosing the two parameters, the latter referred to as spatially adaptive regression. We observe that the parameters are chosen so as to trade off the squared bias and variance quantities that constitute the MSE. We also indicate the advantages accruing from incorporating a regularization term in the cost function in addition to the data-error term. In the more general case of kernel regression, which uses a weighted least-squares (LS) optimization, we consider the application of image restoration from very few random measurements, in addition to denoising of uniformly sampled data. We show that local polynomial regression (LPR) becomes a special case of kernel regression, and extend our results for LPR on uniform data to non-uniformly sampled data as well. The denoising algorithms are compared with other standard, performant methods available in the literature, both in terms of estimation error and computational complexity. A major perspective provided in this thesis is that the problem of optimum parameter choice in nonparametric regression can be viewed as the selection of the optimum parameters of a linear, shift-invariant filter. This interpretation draws motivation from the hallmark paper of Savitzky and Golay and from Schafer’s recent article in IEEE Signal Processing Magazine. It is worth noting that Savitzky and Golay had shown, in their original Analytical Chemistry article, that LS fitting of a fixed-order polynomial over a neighborhood of fixed size is equivalent to convolution with an impulse response that is fixed and can be pre-computed. 
They had provided tables of impulse-response coefficients for computing the smoothed function and smoothed derivatives for different orders and neighborhood sizes, the resulting filters being referred to as Savitzky-Golay (S-G) filters. Thus, we provide the new perspective that the regression parameter choice is equivalent to optimizing the filter impulse-response length/3 dB bandwidth, which are inversely related. We observe that the MMSE solution is such that the chosen S-G filter has a longer impulse response (equivalently, a smaller cutoff frequency) at relatively flat portions of the noisy signal, so as to smooth noise, and vice versa at locally fast-varying portions of the signal, so as to capture the signal patterns. We also provide a generalized S-G filtering viewpoint in the case of kernel regression. Building on the S-G filtering perspective, we turn to the problem of dynamic feature computation in speech recognition. We observe that the methodology employed for computing dynamic features from the trajectories of static features is in fact derivative S-G filtering. With this perspective, we note that the filter coefficients can be pre-computed, and the whole problem of delta-feature computation becomes efficient; indeed, we observe an advantage by a factor of 10⁴ on making use of S-G filtering over actual LS polynomial fitting and evaluation. Thereafter, we experimentally study the properties of first- and second-order derivative S-G filters of certain orders and lengths. The derivative filters are bandpass due to the combined effects of LPR and derivative computation, which are lowpass and highpass operations, respectively. The first- and second-order S-G derivative filters are also observed to exhibit an approximately constant-Q property. We perform a TIMIT phoneme-recognition experiment comparing the recognition accuracies obtained using S-G filters with the conventional approach followed in HTK, where Furui’s regression formula is used. The recognition accuracies for both cases are almost identical, with S-G filters of certain bandwidths and orders registering a marginal improvement. The accuracies are also observed to improve with longer filter lengths, for a particular order. In terms of computation latency, we note that S-G filtering achieves delta and delta-delta feature computation in parallel by linear filtering, whereas they need to be obtained sequentially with the standard regression formulas used in the literature. Finally, we turn to the problem of speech enhancement, where we are interested in denoising using perceptually motivated loss functions such as Itakura-Saito (IS). We propose to perform enhancement in the discrete cosine transform domain using risk minimization. The cost functions considered are non-quadratic, and the derivation of the unbiased estimator of the risk corresponding to the IS distortion is achieved using an approximate Taylor-series analysis under a high signal-to-noise-ratio assumption. The exposition is general, since we focus on an additive noise model with the noise density assumed to fall within the exponential class of density functions, which comprises most of the common densities. The denoising function is assumed to be pointwise linear (a modified James-Stein (MJS) estimator), and parallels between Wiener filtering and the optimum MJS estimator are discussed.
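
The central idea of the thesis — selecting the Savitzky-Golay order and window length by minimizing Stein's unbiased risk estimate instead of the unobservable MSE — can be illustrated compactly. For a linear, shift-invariant smoother, the divergence term in SURE reduces to the signal length times the centre tap of the impulse response. The Python sketch below is a simplification under those assumptions (known noise variance, periodic boundary handling, a plain grid search), not the thesis's full framework.

```python
import numpy as np
from scipy.signal import savgol_filter, savgol_coeffs

rng = np.random.default_rng(1)
N, sigma = 1024, 0.2                       # sigma assumed known, as in the SURE setting
t = np.linspace(0, 1, N)
clean = np.sin(6 * np.pi * t) + 0.4 * np.sign(np.sin(14 * np.pi * t))
y = clean + sigma * rng.normal(size=N)

def sure(y, window, order, sigma):
    """SURE for SG smoothing; divergence term = N * centre filter coefficient."""
    y_hat = savgol_filter(y, window, order, mode="wrap")
    h0 = savgol_coeffs(window, order)[window // 2]
    return np.sum((y - y_hat) ** 2) - len(y) * sigma**2 + 2 * sigma**2 * len(y) * h0

# Grid-search the two SG parameters by minimizing SURE (a stand-in for the MSE).
best = min(((sure(y, w, p, sigma), w, p)
            for w in range(5, 101, 2)
            for p in range(2, 6) if p < w), key=lambda s: s[0])
print("SURE-optimal window length and order:", best[1], best[2])
```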
7

Automatically measuring the resistive loss of a transformer: A project in cooperation with Alstom Power Sweden

Rakk, Adrian January 2015 (has links)
In order to develop more economical and ecologically friendly transformers, it is necessary to know the losses throughout the product-development process. There are several losses associated with transformers, but in this case the focus is on the resistive loss. To measure this loss, the resonant frequency of the transformer is determined first, since at resonance the secondary side of the transformer is considered purely resistive. The aim of this paper is to design and build a closed-loop measurement system able to perform this task.
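
As a back-of-the-envelope illustration of that principle (not the paper's closed-loop measurement system), one can sweep frequency on an assumed equivalent circuit, locate the point where the impedance phase crosses zero, and confirm that the load seen there is purely resistive, so the power drawn at that frequency corresponds to the resistive loss. All component values below are invented.

```python
import numpy as np

# Invented equivalent circuit for the winding under test: series R-L
# shunted by a stray capacitance C. None of these values come from the paper.
R, L, C = 200.0, 5e-3, 2e-9

f = np.logspace(3, 6, 20000)                 # sweep 1 kHz .. 1 MHz
w = 2 * np.pi * f
z = 1 / (1 / (R + 1j * w * L) + 1j * w * C)  # terminal impedance

# At resonance the impedance phase crosses zero: the load looks purely resistive.
idx = np.argmin(np.abs(np.angle(z)))
print("Resonant frequency: %.1f kHz" % (f[idx] / 1e3))
print("Impedance there: %.2f %+.4fj kohm" % (z[idx].real / 1e3, z[idx].imag / 1e3))
```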
8

Rozšířený binární Golayův kód / Extended binary Golay code

Uchytilová, Vendula January 2011 (has links)
This work deals with three different constructions of the extended binary Golay code G24. The first construction is based on a projective plane of order four, from which a Steiner system S(5, 8, 24) is built. The linear span of its blocks forms a linear binary [24, 12, 8] code C. Every binary [24, 12, 8] code is isomorphic to C, which is known as the extended binary Golay code G24. The second construction uses the so-called Miracle Octad Generator (MOG); all MOG words of weight eight form a Steiner system S(5, 8, 24). The third construction uses the impartial combinatorial game Mogul: from its P-positions one can create a linear binary [24, 12, 8] code, and the fact that it is also a lexicographic code is useful for estimating its parameters.
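
As a complement to the three constructions above (and not one of them), G24 can also be obtained by extending the cyclic [23, 12, 7] Golay code with an overall parity bit. The Python sketch below builds the code from one standard choice of generator polynomial and verifies the [24, 12, 8] parameters by enumerating all 4096 codewords.

```python
import numpy as np
from itertools import product

# One standard generator polynomial of the cyclic [23, 12, 7] Golay code:
# g(x) = x^11 + x^10 + x^6 + x^5 + x^4 + x^2 + 1 (coefficients listed from degree 0 up).
g = np.array([1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1], dtype=int)

def encode(msg):
    """Multiply the degree-11 message polynomial by g(x) over GF(2), then append a parity bit."""
    word = np.convolve(msg, g) % 2          # length-23 codeword of the cyclic code
    return np.append(word, word.sum() % 2)  # overall parity bit -> length 24

weights = [encode(np.array(m)).sum() for m in product([0, 1], repeat=12)]
nonzero = [w for w in weights if w > 0]
print("Number of codewords:", len(weights))          # 4096 = 2^12
print("Minimum nonzero weight:", min(nonzero))       # 8
print("Weights occurring:", sorted(set(weights)))    # [0, 8, 12, 16, 24]
```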
9

Validation of Black-and-White Topology Optimization Designs

Garla Venkatakrishnaiah, Sharath Chandra, Varadaraju, Harivinay January 2021 (has links)
Topology optimization has seen rapid developments in its field, with algorithms becoming better and faster all the time. These new algorithms help reduce the lead time from concept development to a finished product. Simulation and post-processing of geometry are among the major development costs. Post-processing of the geometry also takes considerable time and depends on the quality of the geometry output by the solver to make the product ready for rapid prototyping or final production. The work done in this thesis deals with post-processing of the results obtained from topology optimization algorithms that output the result as a 2D image. A methodology is discussed in which this image is processed and converted into a CAD geometry while minimizing deviation in geometry, compliance and volume fraction. Further on, a validation of the designs is performed to measure the extracted geometry's deviation from the post-processed result. The workflow is coded in MATLAB and uses an image-based post-processing approach. The proposed workflow is tested on several numerical examples to assess its performance, limitations and numerical instabilities. The code written for the entire workflow is included as an appendix and can be downloaded from https://github.com/M87K452b/postprocessing-topopt.
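
The geometry-extraction step described above — turning the optimizer's grey-scale density image into clean boundary curves that can be exported to CAD — can be sketched with standard image tools. The thesis workflow itself is written in MATLAB; the Python analogue below (thresholding, marching-squares contour extraction, polygon simplification) is only illustrative, with the density field and all parameter values invented.

```python
import numpy as np
from skimage import measure

# Stand-in for a topology-optimization density field (values in [0, 1]):
# here simply a thick ring. A real field would come from the optimizer.
y, x = np.mgrid[0:200, 0:400]
r = np.hypot(x - 200, y - 100)
density = np.clip(1.0 - np.abs(r - 60) / 25.0, 0.0, 1.0)

threshold = 0.5                                         # arbitrary solid/void cut-off
contours = measure.find_contours(density, threshold)    # marching squares

# Simplify each contour so it can be exported as CAD-friendly polylines/splines.
simplified = [measure.approximate_polygon(c, tolerance=1.0) for c in contours]

solid_fraction = (density >= threshold).mean()
print("Contours found:", len(contours))
print("Points before/after simplification:",
      sum(len(c) for c in contours), sum(len(c) for c in simplified))
print("Volume fraction of the thresholded design: %.3f" % solid_fraction)
```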
10

Savitzky-Golay Filters and Application to Image and Signal Denoising

Menon, Seeram V January 2015 (has links) (PDF)
We explore the applicability of local polynomial approximation of signals for noise suppression. In the context of data regression, Savitzky and Golay showed that least-squares approximation of data with a polynomial of fixed order, together with a constant window length, is identical to convolution with a finite impulse response filter whose characteristics depend entirely on two parameters, namely, the order and the window length. Schafer’s recent article in IEEE Signal Processing Magazine provides a detailed account of one-dimensional Savitzky-Golay (SG) filters. Drawing motivation from this idea, we present an elaborate study of two-dimensional SG filters and employ them for image denoising by optimizing the filter response to minimize the mean-squared error (MSE) between the original image and the filtered output. The key contribution of this thesis is a method for the optimal selection of the order and window length of SG filters for denoising images. First, we apply the denoising technique to images contaminated by additive Gaussian noise. Owing to the absence of ground truth in practice, direct minimization of the MSE is infeasible. However, the classical work of C. Stein provides a statistical method to overcome the hurdle. Based on Stein’s lemma, an estimate of the MSE, namely Stein’s unbiased risk estimator (SURE), is derived, and the two critical parameters of the filter are optimized to minimize this cost. The performance of the technique improves when a regularization term, which penalizes fast variations in the estimate, is added to the optimization cost. In the next three chapters, we focus on non-Gaussian noise models. In Chapter 3, image degradation under a compound noise model, where images are corrupted by mixed Poisson-Gaussian noise, is addressed. Inspired by Hudson’s identity, an estimate of the MSE, namely the Poisson unbiased risk estimator (PURE), analogous to SURE, is developed. Combining both lemmas, Poisson-Gaussian unbiased risk estimator (PGURE) minimization is performed to obtain the optimal filter parameters. We also show that SG filtering provides a better lowpass approximation for a multiresolution denoising framework. In Chapter 4, we employ SG filters for reducing multiplicative noise in images. The standard SG filter frequency response can be controlled along the horizontal or vertical direction, which limits its ability to capture oriented features and texture that lie at other angles. Here, we introduce the idea of steering the SG filter kernel and perform mean-squared-error minimization based on the new concept of multiplicative-noise unbiased risk estimation (MURE). Finally, we propose a method to robustify SG filters, that is, to make them robust to deviations from Gaussian noise statistics. SG filters work on the principle of least-squares error minimization, and are hence compatible with maximum-likelihood (ML) estimation in the context of Gaussian statistics. However, for heavy-tailed noise such as the Laplacian, where ML estimation requires mean-absolute-error minimization in lieu of MSE minimization, the standard SG filter performance deteriorates. ℓ1 minimization is a challenge since there is no closed-form solution. We solve the problem by inducing the ℓ1-norm criterion using the iteratively reweighted least-squares (IRLS) method. At every iteration, we solve an ℓ2 problem, which is equivalent to optimizing a weighted SG filter, but, as the iterations progress, the solution converges to that corresponding to ℓ1 minimization. 
The results thus obtained are superior to those obtained using the standard SG filter.
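
The robustification step described at the end — replacing the squared-error criterion behind the Savitzky-Golay fit with an ℓ1 criterion via iteratively reweighted least squares — can be sketched for a one-dimensional signal as follows. This is an illustrative re-implementation, not the thesis code; the window length, order, iteration count and Laplacian-noise test signal are arbitrary choices.

```python
import numpy as np

def robust_sg(y, window=21, order=3, iters=10, eps=1e-6):
    """Pointwise polynomial smoothing with an l1 criterion enforced by IRLS."""
    half = window // 2
    k = np.arange(-half, half + 1)
    V = np.vander(k, order + 1, increasing=True)      # local polynomial design matrix
    ypad = np.pad(y, half, mode="edge")
    out = np.empty_like(y, dtype=float)
    for i in range(len(y)):
        seg = ypad[i:i + window]
        w = np.ones(window)
        for _ in range(iters):
            # Weighted LS solve; weights 1/|residual| steer the fit toward l1.
            W = np.sqrt(w)[:, None]
            coef, *_ = np.linalg.lstsq(V * W, seg * W.ravel(), rcond=None)
            resid = seg - V @ coef
            w = 1.0 / np.maximum(np.abs(resid), eps)
        out[i] = coef[0]                              # fitted value at the window centre
    return out

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + rng.laplace(scale=0.15, size=t.size)  # heavy-tailed noise
den = robust_sg(noisy)
print("RMSE after robust SG smoothing: %.4f" % np.sqrt(np.mean((den - clean) ** 2)))
```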
