41

Some Contributions on Probabilistic Interpretation For Nonlinear Stochastic PDEs / Quelques contributions dans la représentation probabiliste des solutions d'EDPs non linéaires

Sabbagh, Wissal 08 December 2014 (has links)
L'objectif de cette thèse est l'étude de la représentation probabiliste des différentes classes d'EDPSs non linéaires (semi-linéaires, complètement non-linéaires, réfléchies dans un domaine) en utilisant les équations différentielles doublement stochastiques rétrogrades (EDDSRs). Cette thèse contient quatre parties différentes. Nous traitons dans la première partie les EDDSRs du second ordre (2EDDSRs). Nous montrons l'existence et l'unicité des solutions des EDDSRs en utilisant des techniques de contrôle stochastique quasi-sûr. La motivation principale de cette étude est la représentation probabiliste des EDPSs complètement non-linéaires. Dans la deuxième partie, nous étudions les solutions faibles de type Sobolev du problème d'obstacle pour les équations à dérivées partielles intégro-différentielles (EDPIDs). Plus précisément, nous montrons la formule de Feynman-Kac pour les EDPIDs par l'intermédiaire des équations différentielles stochastiques rétrogrades réfléchies avec sauts (EDSRRs). Nous établissons l'existence et l'unicité de la solution du problème d'obstacle, qui est considérée comme un couple constitué de la solution et de la mesure de réflexion. L'approche utilisée est basée sur les techniques de flots stochastiques développées dans Bally et Matoussi (2001), mais les preuves sont beaucoup plus techniques. Dans la troisième partie, nous traitons l'existence et l'unicité pour les EDDSRRs dans un domaine convexe D sans aucune condition de régularité sur la frontière. De plus, en utilisant l'approche basée sur les techniques de flots stochastiques, nous démontrons l'interprétation probabiliste de la solution faible de type Sobolev d'une classe d'EDPSs réfléchies dans un domaine convexe via les EDDSRRs. Enfin, nous nous intéressons à la résolution numérique des EDDSRs à temps terminal aléatoire. La motivation principale est de donner une représentation probabiliste des solutions de Sobolev d'EDPSs semi-linéaires avec condition de Dirichlet nulle au bord. Dans cette partie, nous étudions l'approximation forte de cette classe d'EDDSRs quand le temps terminal aléatoire est le premier temps de sortie d'une EDS d'un domaine cylindrique. Ainsi, nous donnons des bornes pour l'erreur d'approximation en temps discret. Cette partie se conclut par des tests numériques qui démontrent que cette approche est efficace. / The objective of this thesis is to study the probabilistic representation (Feynman-Kac formula) of different classes of nonlinear stochastic PDEs (semilinear, fully nonlinear, reflected in a domain) by means of backward doubly stochastic differential equations (BDSDEs). This thesis contains four different parts. In the first part we deal with second-order BDSDEs (2BDSDEs). We show the existence and uniqueness of solutions of 2BDSDEs using quasi-sure stochastic control techniques. The main motivation of this study is the probabilistic representation of solutions of fully nonlinear SPDEs. First, under regularity assumptions on the coefficients, we give a Feynman-Kac formula for classical solutions of fully nonlinear SPDEs and we generalize the work of Soner, Touzi and Zhang (2010-2012) for deterministic fully nonlinear PDEs. Then, under weaker assumptions on the coefficients, we prove the probabilistic representation for stochastic viscosity solutions of fully nonlinear SPDEs. In the second part, we study the Sobolev solution of the obstacle problem for partial integro-differential equations (PIDEs). 
Specifically, we show the Feynman-Kac formula for PIDEs via reflected backward stochastic differential equations with jumps (RBSDEs). We establish the existence and uniqueness of the solution of the obstacle problem, which is regarded as a pair consisting of the solution and the measure of reflection. The approach is based on the stochastic flow techniques developed in Bally and Matoussi (2001), but the proofs are more technical. In the third part, we discuss existence and uniqueness for RBDSDEs in a convex domain D without any regularity condition on the boundary. In addition, using the approach based on stochastic flow techniques, we provide the probabilistic interpretation of the Sobolev solution of a class of reflected SPDEs in a convex domain via RBDSDEs. Finally, we are interested in the numerical solution of BDSDEs with random terminal time. The main motivation is to give a probabilistic representation of Sobolev solutions of semilinear SPDEs with null Dirichlet boundary condition. In this part, we study the strong approximation of this class of BDSDEs when the random terminal time is the first exit time of an SDE from a cylindrical domain. We then give bounds for the discrete-time approximation error. We conclude this part with numerical tests showing that this approach is effective.
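For orientation only (this display is a standard background sketch in the notation of Pardoux and Peng, not an equation reproduced from the thesis), a backward doubly stochastic differential equation couples a forward Itô integral driven by a Brownian motion W with a backward Itô integral driven by an independent Brownian motion B; the unknown is the pair (Y, Z) and xi is the terminal condition:

```latex
% Generic BDSDE (background sketch; the assumptions on f, g and \xi are those of the thesis).
% W and B are independent Brownian motions; the dB integral is a backward Ito integral.
\begin{equation*}
  Y_t \;=\; \xi
  \;+\; \int_t^T f(s, Y_s, Z_s)\,\mathrm{d}s
  \;+\; \int_t^T g(s, Y_s, Z_s)\,\overleftarrow{\mathrm{d}B_s}
  \;-\; \int_t^T Z_s\,\mathrm{d}W_s,
  \qquad 0 \le t \le T.
\end{equation*}
```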
42

Directors’ perceptions of parent involvement in the Early Head Start and Sure Start early intervention programs : a cross-Atlantic study

Ross, K. B. January 2010 (has links)
This research is a cross-Atlantic study of Sure Start and Early Head Start program directors' perceptions of parent involvement in their early intervention programs, with a focus on the provision and take-up of parenting and employability-focused services. The review of the literature, which informed the survey design and the later data chapters, focuses on poverty and parenting, working parents, welfare reform, and early intervention programs, including early childhood education and care policies in England and the United States. Data were collected via an online survey administered to all individuals directing either a Sure Start Local Programme (including those that had been designated as Children's Centres) in England or an Early Head Start program in the USA. There was a 40.3% response rate (231 English and 236 American directors, for a total of 467 respondents). The survey questioned directors on their background and sought their views of the area in which their program operated, the characteristics of their programs, and their perceptions of the families accessing the parent-focused services offered by their early intervention program. The resulting data were used to address the primary theme of parenting and employability, drawing associations between reported parent involvement and directors' perceptions of area, program and family characteristics. The findings also led to the establishment of secondary themes: the targeting and catchment-area approach to service provision, engaging disadvantaged families, relationships with partner agencies, issues of funding and resources (particularly for staff), and the expansion of Children's Centres. A summary report was sent to all participating directors. It is hoped that this research has benefited program directors by providing insights into the local-level experiences of their colleagues, both within their own country and across the Atlantic, particularly with respect to parent involvement in early intervention programs.
43

Estimation utilisant les polynômes de Bernstein

Tchouake Tchuiguep, Hervé 03 1900 (has links)
Ce mémoire porte sur la présentation des estimateurs de Bernstein qui sont des alternatives récentes aux différents estimateurs classiques de fonctions de répartition et de densité. Plus précisément, nous étudions leurs différentes propriétés et les comparons à celles de la fonction de répartition empirique et à celles de l'estimateur par la méthode du noyau. Nous déterminons une expression asymptotique des deux premiers moments de l'estimateur de Bernstein pour la fonction de répartition. Comme pour les estimateurs classiques, nous montrons que cet estimateur vérifie la propriété de Chung-Smirnov sous certaines conditions. Nous montrons ensuite que l'estimateur de Bernstein est meilleur que la fonction de répartition empirique en termes d'erreur quadratique moyenne. En s'intéressant au comportement asymptotique des estimateurs de Bernstein, pour un choix convenable du degré du polynôme, nous montrons que ces estimateurs sont asymptotiquement normaux. Des études numériques sur quelques distributions classiques nous permettent de confirmer que les estimateurs de Bernstein peuvent être préférables aux estimateurs classiques. / This thesis focuses on the presentation of the Bernstein estimators, which are recent alternatives to the conventional estimators of the distribution function and the density. More precisely, we study their various properties and compare them with those of the empirical distribution function and of the kernel estimator. We determine an asymptotic expression for the first two moments of the Bernstein estimator of the distribution function. As for the conventional estimators, we show that this estimator satisfies the Chung-Smirnov property under certain conditions. We then show that the Bernstein estimator is better than the empirical distribution function in terms of mean squared error. Examining the asymptotic behavior of the Bernstein estimators, we show that, for a suitable choice of the degree of the polynomial, these estimators are asymptotically normal. Numerical studies on some classical distributions confirm that the Bernstein estimators may be preferable to the conventional estimators.
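As a purely illustrative sketch (not code from the thesis), the Bernstein estimator of a distribution function supported on [0, 1] smooths the empirical distribution function with Bernstein polynomials; the degree m is the smoothing parameter whose choice is discussed above. The degree m = 30 and the Beta(2, 5) test distribution below are arbitrary choices for the demonstration.

```python
import numpy as np
from scipy.stats import binom

def empirical_cdf(sample, x):
    """Empirical distribution function F_n evaluated at the points x."""
    sample = np.sort(np.asarray(sample))
    return np.searchsorted(sample, x, side="right") / sample.size

def bernstein_cdf(sample, x, m):
    """Bernstein estimator of the CDF on [0, 1] with degree m:
    F_{m,n}(x) = sum_{k=0}^{m} F_n(k/m) * C(m, k) * x^k * (1 - x)^(m - k)."""
    x = np.atleast_1d(x)
    k = np.arange(m + 1)
    fn = empirical_cdf(sample, k / m)               # F_n on the grid k/m
    weights = binom.pmf(k[None, :], m, x[:, None])  # Bernstein basis = Binomial(m, x) pmf
    return weights @ fn

# Toy comparison on simulated Beta(2, 5) data.
rng = np.random.default_rng(0)
data = rng.beta(2, 5, size=200)
grid = np.linspace(0, 1, 11)
print(np.round(bernstein_cdf(data, grid, m=30), 3))
print(np.round(empirical_cdf(data, grid), 3))
```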
44

Terminaison en temps moyen fini de systèmes de règles probabilistes / Termination within a finite mean time of probabilistic rules based systems

Garnier, Florent 17 September 2007 (has links)
Nous avons dans cette thèse cherché à définir un formalisme simple pour pouvoir modéliser des systèmes où se combinent des phénomènes non-déterministes et des comportements aléatoires. Nous avons choisi d'étendre le formalisme de la réécriture pour lui permettre d'exprimer des phénomènes probabilistes, puis nous avons étudié la terminaison en temps moyen fini de ce modèle. Nous avons également présenté une notion de stratégie pour contrôler l'application des règles de réécriture probabilistes et nous présentons des critères généraux permettant d'identifier des classes de stratégies sous lesquelles les systèmes de réécriture probabilistes terminent en temps moyen fini. Afin de mettre en valeur notre formalisme et les méthodes de preuve de terminaison en temps moyen fini, nous avons modélisé un réseau de stations WiFi et nous montrons que toutes les stations parviennent à émettre leurs messages dans un temps moyen fini. / In this thesis we define a new formalism for modelling transition systems whose transitions can be either probabilistic or non-deterministic. We choose to extend the rewriting formalism because it allows non-deterministic behavior to be expressed simply. We then study the termination of such systems and give criteria that imply termination within a finite mean number of rewrite steps. We also study the termination of such systems when the firing of probabilistic rules is controlled by strategies. In this document, we use our techniques to model the WiFi protocol and show that a pool of stations successfully emits all its messages within a finite mean time.
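The following toy simulation (an illustration under stated assumptions, not the formalism or the WiFi model of the thesis) conveys the notion of termination within a finite mean time: a single probabilistic rule rewrites a term of size n to size n - 1 with probability p and to size n + 1 otherwise, and for p > 1/2 the expected number of firings before reaching the normal form (size 0) is finite, namely n / (2p - 1).

```python
import random

def rewrite_steps(n, p, max_steps=100_000):
    """Number of rule firings before the term reaches its normal form (size 0).
    Each firing shrinks the term with probability p and grows it otherwise."""
    steps = 0
    while n > 0 and steps < max_steps:
        n += -1 if random.random() < p else 1
        steps += 1
    return steps

def mean_termination_time(n0, p, trials=2_000):
    """Monte Carlo estimate of the mean number of firings until termination."""
    return sum(rewrite_steps(n0, p) for _ in range(trials)) / trials

random.seed(1)
for p in (0.6, 0.7, 0.8):
    estimate = mean_termination_time(10, p)
    print(f"p = {p}: simulated {estimate:.1f}, theory {10 / (2 * p - 1):.1f}")
```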
45

Estimation de paramètres pour des processus autorégressifs à bifurcation

Blandin, Vassili 26 June 2013 (has links)
Les processus autorégressifs à bifurcation (BAR) ont été au centre de nombreux travaux de recherche ces dernières années. Ces processus, qui sont l'adaptation à un arbre binaire des processus autorégressifs, sont en effet d'intérêt en biologie puisque la structure de l'arbre binaire permet une analogie aisée avec la division cellulaire. L'objectif de cette thèse est l'estimation des paramètres de variantes de ces processus autorégressifs à bifurcation, à savoir les processus BAR à valeurs entières et les processus BAR à coefficients aléatoires. Dans un premier temps, nous nous intéressons aux processus BAR à valeurs entières. Nous établissons, via une approche martingale, la convergence presque sûre des estimateurs des moindres carrés pondérés considérés, ainsi qu'une vitesse de convergence de ces estimateurs, une loi forte quadratique et leur comportement asymptotiquement normal. Dans un second temps, on étudie les processus BAR à coefficients aléatoires. Cette étude permet d'étendre le concept de processus autorégressifs à bifurcation en généralisant le côté aléatoire de l'évolution. Nous établissons les mêmes résultats asymptotiques que pour la première étude. Enfin, nous concluons cette thèse par une autre approche des processus BAR à coefficients aléatoires où l'on ne pondère plus nos estimateurs des moindres carrés, en tirant parti du théorème de Rademacher-Menchov. / Bifurcating autoregressive (BAR) processes have been widely investigated these past few years. These processes, which adapt autoregressive processes to a binary tree structure, are of interest in biology since the binary tree structure allows an easy analogy with cell division. The aim of this thesis is to estimate the parameters of some variants of these BAR processes, namely the integer-valued BAR processes and the random-coefficient BAR processes. First, we look at integer-valued BAR processes. We establish, via a martingale approach, the almost sure convergence of the weighted least squares estimators of interest, together with a rate of convergence, a quadratic strong law and their asymptotic normality. Secondly, we study the random-coefficient BAR processes. This study extends the concept of bifurcating autoregressive processes by generalizing the randomness of the evolution. We establish the same asymptotic results as in the first study. Finally, we conclude this thesis with another approach to random-coefficient BAR processes in which we no longer weight our least squares estimators, making use of the Rademacher-Menchov theorem.
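As a hedged illustration of the objects studied here (the first-order BAR recursion below is standard, but the parameter values and the plain least-squares fit are arbitrary choices, not the weighted estimators of the thesis), a BAR process indexed by a binary tree gives each cell k two daughters 2k and 2k + 1 whose traits are noisy affine functions of the mother's trait:

```python
import numpy as np

def simulate_bar(generations, a0, b0, a1, b1, sigma, rng):
    """First-order BAR process on a binary tree: cell k has daughters 2k and 2k + 1,
    with X_{2k} = a0 + b0*X_k + eps_{2k} and X_{2k+1} = a1 + b1*X_k + eps_{2k+1}."""
    n = 2 ** (generations + 1)
    x = np.zeros(n)                      # node 1 is the ancestor cell, trait 0
    for k in range(1, n // 2):
        x[2 * k] = a0 + b0 * x[k] + sigma * rng.standard_normal()
        x[2 * k + 1] = a1 + b1 * x[k] + sigma * rng.standard_normal()
    return x

def ls_estimate_even(x):
    """Plain least-squares estimate of (a0, b0) from mother / even-daughter pairs
    (shown for brevity; the thesis works with weighted least squares)."""
    mothers = np.arange(1, x.size // 2)
    design = np.column_stack([np.ones(mothers.size), x[mothers]])
    coef, *_ = np.linalg.lstsq(design, x[2 * mothers], rcond=None)
    return coef

rng = np.random.default_rng(42)
x = simulate_bar(12, a0=0.5, b0=0.6, a1=0.2, b1=0.7, sigma=1.0, rng=rng)
print(ls_estimate_even(x))   # should be close to (0.5, 0.6)
```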
46

EVALUATING THE IMPORTANCE OF A STRUCTURED METHODOLOGY BY MANAGEMENT OF CRITICAL RISK/FAILURE FACTORS IN ERP IMPLEMENTATION

Bayir, Arzu, Shetty, Bhavya January 2011 (has links)
Studies in recent years have revealed the challenges involved in deploying ERP solutions due to their complexity. Before attempting to implement ERP systems, it is essential to study various aspects such as project management, training, and change management in detail in order to manage the associated risks. When an ERP project is undertaken with insufficient planning, it may result in failure to integrate business processes and in substantial financial loss. Research has been pursued to identify critical risk/failure factors that may arise during implementation and the measures that should be taken to manage them. However, there is a lack of research on identifying the management of critical risk/failure factors using a structured methodology. This raises the question: can a structured methodology identify and manage critical risk/failure factors and support deploying ERP solutions with better quality? A study of the Microsoft Sure Step Methodology is performed to identify critical risk/failure factors that frequently occur during ERP implementation. These factors are derived from 8 articles. Having determined the critical risk/failure factors, we investigated whether the Sure Step Methodology contains procedures that address these factors.
47

Limit theorems for statistical functionals with applications to dimension estimation / Grenzwertsätze für statistische Funktionale mit Anwendungen auf Dimensionsschätzungen

Min, Aleksey 23 June 2004 (has links)
No description available.
49

Optimum Savitzky-Golay Filtering for Signal Estimation

Krishnan, Sunder Ram January 2013 (has links) (PDF)
Motivated by the classic works of Charles M. Stein, we focus on developing risk-estimation frameworks for denoising problems in both one and two dimensions. We assume a standard additive noise model, and formulate the denoising problem as one of estimating the underlying clean signal from noisy measurements by minimizing a risk corresponding to a chosen loss function. Our goal is to incorporate perceptually-motivated loss functions wherever applicable, as in the case of speech enhancement, with the squared error loss being considered for the other scenarios. Since the true risks are observed to depend on the unknown parameter of interest, we circumvent the roadblock by deriving finite-sample unbiased estimators of the corresponding risks based on Stein's lemma. We establish the link between the multivariate parameter estimation problem addressed by Stein and our denoising problem, and derive estimators of the oracle risks. In all cases, optimum values of the parameters characterizing the denoising algorithm are determined by minimizing Stein's unbiased risk estimator (SURE). The key contribution of this thesis is the development of a risk-estimation approach for choosing the two critical parameters affecting the quality of nonparametric regression, namely, the order and the bandwidth/smoothing parameter. This is a classic problem in statistics, and certain algorithms relying on the derivation of suitable finite-sample risk estimators for minimization have been reported in the literature (note that all these works consider the mean squared error (MSE) objective). We show that a SURE-based formalism is well suited to the regression parameter selection problem, and that the optimum solution guarantees near-minimum-MSE (MMSE) performance. We develop algorithms for both globally and locally choosing the two parameters, the latter referred to as spatially-adaptive regression. We observe that the parameters are chosen so as to trade off the squared bias and variance quantities that constitute the MSE. We also indicate the advantages accruing from incorporating a regularization term in the cost function in addition to the data error term. In the more general case of kernel regression, which uses a weighted least-squares (LS) optimization, we consider the applications of image restoration from very few random measurements, in addition to denoising of uniformly sampled data. We show that local polynomial regression (LPR) becomes a special case of kernel regression, and extend our results for LPR on uniform data to non-uniformly sampled data as well. The denoising algorithms are compared with other standard, performant methods available in the literature, both in terms of estimation error and computational complexity. A major perspective provided in this thesis is that the problem of optimum parameter choice in nonparametric regression can be viewed as the selection of optimum parameters of a linear, shift-invariant filter. This interpretation is motivated by the hallmark paper of Savitzky and Golay and by Schafer's recent article in IEEE Signal Processing Magazine. It is worth noting that Savitzky and Golay had shown in their original Analytical Chemistry article that LS fitting of a fixed-order polynomial over a neighborhood of fixed size is equivalent to convolution with an impulse response that is fixed and can be pre-computed. 
They had provided tables of impulse response coefficients for computing the smoothed function and smoothed derivatives for different orders and neighborhood sizes, the resulting filters being referred to as Savitzky-Golay (S-G) filters. Thus, we provide the new perspective that the regression parameter choice is equivalent to optimizing for the filter impulse response length/3 dB bandwidth, which are inversely related. We observe that the MMSE solution is such that the S-G filter chosen has a longer impulse response (equivalently, a smaller cutoff frequency) at relatively flat portions of the noisy signal so as to smooth noise, and vice versa at locally fast-varying portions of the signal so as to capture the signal patterns. Also, we provide a generalized S-G filtering viewpoint in the case of kernel regression. Building on the S-G filtering perspective, we turn to the problem of dynamic feature computation in speech recognition. We observe that the methodology employed for computing dynamic features from the trajectories of static features is in fact derivative S-G filtering. With this perspective, we note that the filter coefficients can be pre-computed, and the whole problem of delta feature computation becomes efficient. Indeed, we observe an advantage by a factor of about 10^4 on making use of S-G filtering over actual LS polynomial fitting and evaluation. Thereafter, we experimentally study the properties of first- and second-order derivative S-G filters of certain orders and lengths. The derivative filters are bandpass due to the combined effects of LPR and derivative computation, which are lowpass and highpass operations, respectively. The first- and second-order S-G derivative filters are also observed to exhibit an approximately constant-Q property. We perform a TIMIT phoneme recognition experiment comparing the recognition accuracies obtained using S-G filters with those of the conventional approach followed in HTK, where Furui's regression formula is used. The recognition accuracies for both cases are almost identical, with S-G filters of certain bandwidths and orders registering a marginal improvement. The accuracies are also observed to improve with longer filter lengths for a particular order. In terms of computation latency, we note that S-G filtering achieves delta and delta-delta feature computation in parallel by linear filtering, whereas they need to be obtained sequentially in the case of the standard regression formulas used in the literature. Finally, we turn to the problem of speech enhancement, where we are interested in denoising using perceptually-motivated loss functions such as the Itakura-Saito (IS) distortion. We propose to perform enhancement in the discrete cosine transform domain using risk minimization. The cost functions considered are non-quadratic, and the derivation of the unbiased estimator of the risk corresponding to the IS distortion is achieved using an approximate Taylor-series analysis under a high signal-to-noise-ratio assumption. The exposition is general since we focus on an additive noise model with the noise density assumed to fall within the exponential class of density functions, which comprises most of the common densities. The denoising function is assumed to be pointwise linear (a modified James-Stein (MJS) estimator), and parallels between Wiener filtering and the optimum MJS estimator are discussed.
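The sketch below illustrates the SURE-driven choice of the Savitzky-Golay window length described above; it is not the thesis code, the noise level sigma is assumed known, and boundary effects of the filter are ignored (for a linear shift-invariant smoother the divergence term in SURE reduces, up to those edge effects, to the filter's center tap). The same savgol_filter call with deriv=1 would give the derivative filtering used later for delta features.

```python
import numpy as np
from scipy.signal import savgol_coeffs, savgol_filter

def sure_score(y, window, order, sigma):
    """Per-sample SURE for S-G smoothing with known noise level sigma:
    ||y - y_hat||^2 / n + 2*sigma^2*h0 - sigma^2, where h0 is the center
    coefficient of the S-G impulse response (approximate trace of the smoother)."""
    y_hat = savgol_filter(y, window_length=window, polyorder=order)
    h0 = savgol_coeffs(window, order)[window // 2]
    return np.sum((y - y_hat) ** 2) / y.size + 2 * sigma**2 * h0 - sigma**2

# Toy signal: noisy sinusoid; pick the window length minimizing SURE.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)
sigma = 0.3
noisy = clean + sigma * rng.standard_normal(t.size)

order = 3
candidates = range(5, 101, 2)            # window length must be odd and > order
sure = {w: sure_score(noisy, w, order, sigma) for w in candidates}
mse = {w: np.mean((savgol_filter(noisy, w, order) - clean) ** 2) for w in candidates}
print("SURE pick:", min(sure, key=sure.get), " oracle-MSE pick:", min(mse, key=mse.get))
```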