  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Three Essays on Estimation and Testing of Nonparametric Models

Ma, Guangyi 2012 August 1900 (has links)
In this dissertation, I focus on the development and application of nonparametric methods in econometrics. First, a constrained nonparametric regression method is developed to estimate a function and its derivatives subject to shape restrictions implied by economic theory. The constrained estimators can be viewed as a set of empirical likelihood-based reweighted local polynomial estimators. They are shown to be weakly consistent and to have the same first-order asymptotic distribution as the unconstrained estimators. When the shape restrictions are correctly specified, the constrained estimators can achieve a large degree of finite-sample bias reduction and thus outperform the unconstrained estimators. The constrained nonparametric regression method is applied to the estimation of the daily option pricing function and the state-price density function. Second, a modified Cumulative Sum of Squares (CUSQ) test is proposed to test for structural changes in the unconditional volatility in a time-varying coefficient model. The proposed test is based on nonparametric residuals from local linear estimation of the time-varying coefficients. Asymptotic theory is provided to show that the new CUSQ test has a standard null distribution and diverges at the standard rate under the alternatives. Compared with a test based on least squares residuals, the new test enjoys correct size and good power properties, because estimating the model nonparametrically circumvents the size distortion caused by potential structural changes in the mean. Empirical results from both simulation experiments and real data applications are presented to demonstrate the test's size and power properties. Third, an empirical study testing the Purchasing Power Parity (PPP) hypothesis is conducted in a functional-coefficient cointegration model, which is consistent with equilibrium models of exchange rate determination in the presence of transaction costs in international trade.
Supporting evidence for PPP is found in the recent floating exchange rate era. The cointegration relation between the nominal exchange rate and price levels varies conditional on real exchange rate volatility. The cointegration coefficients are more stable and numerically closer to the value implied by PPP theory when real exchange rate volatility is relatively low.
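The abstract describes reweighted local polynomial estimators but gives no formulas. As a rough illustration of the unconstrained building block, a local linear regression estimator of a function and its first derivative, here is a minimal Python sketch. The Gaussian kernel, the bandwidth value, and all names are illustrative assumptions, not code from the dissertation.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of m(x0) (and m'(x0)) with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # kernel weights around x0
    X = np.column_stack([np.ones_like(x), x - x0])  # design: intercept + local slope
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]  # intercept estimates m(x0); beta[1] estimates m'(x0)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=200)
est = local_linear(0.5, x, y, h=0.05)  # true value is sin(pi) = 0
```

A constrained version in the spirit of the abstract would reweight the observations (e.g., via empirical likelihood weights) so that the fitted function satisfies monotonicity or convexity restrictions; the unconstrained estimator above is the baseline it perturbs.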
52

Flexibilnost, robustnost a nespojitost v neparametrických regresních postupech / Flexibility, Robustness and Discontinuities in Nonparametric Regression Approaches

Maciak, Matúš January 2011 (has links)
Thesis title: Flexibility, Robustness and Discontinuity in Nonparametric Regression Approaches Author: Mgr. Matúš Maciak, M.Sc. Department: Department of Probability and Mathematical Statistics, Charles University in Prague Supervisor: Prof. RNDr. Marie Hušková, DrSc. huskova@karlin.mff.cuni.cz Abstract: In this thesis we focus on local polynomial estimation of an unknown regression function while also taking robustness issues into account, such as the presence of outlying observations or heavy-tailed distributions of the random errors. We discuss the most common method used in such settings, so-called local polynomial M-smoothers, and present the main statistical properties and asymptotic inference for this method. The M-smoothers method is especially suitable for such cases because of its natural robust flavour, which can deal with outliers as well as heavy-tailed random errors. Another important issue we focus on in this thesis is discontinuity, where we allow for sudden changes (discontinuity points) in the unknown regression function or its derivatives. We propose a discontinuity model with different variability structures for both independent and dependent random errors, while the discontinuity points are treated in a...
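To make the M-smoother idea concrete, here is a minimal sketch of a local linear M-smoother with a Huber loss, computed by iteratively reweighted least squares. The kernel, tuning constant, scale estimate, and all names are illustrative assumptions, not the thesis's actual procedure.

```python
import numpy as np

def huber_weight(r, k=1.345):
    """Huber weights w(r) = psi(r)/r: 1 for |r| <= k, k/|r| beyond."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / np.maximum(a, 1e-12))

def local_m_smoother(x0, x, y, h, iters=20):
    """Local linear M-smoother at x0: kernel-weighted Huber regression via IRLS."""
    kern = np.exp(-0.5 * ((x - x0) / h) ** 2)       # locality weights
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.zeros(2)
    for _ in range(iters):
        r = y - X @ beta
        scale = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust MAD scale
        w = kern * huber_weight(r / scale)          # kernel * robustness weights
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
    return beta[0]

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-1, 1, 300))
y = x**2 + 0.1 * rng.normal(size=300)
y[::25] += 5.0                                      # gross outliers
est = local_m_smoother(0.0, x, y, h=0.15)           # true m(0) = 0
```

The Huber reweighting is what gives the estimator its "natural robust flavour": outlying residuals receive weights proportional to 1/|r| instead of 1, so they cannot dominate the local fit.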
53

Optimum Savitzky-Golay Filtering for Signal Estimation

Krishnan, Sunder Ram January 2013 (has links) (PDF)
Motivated by the classic works of Charles M. Stein, we focus on developing risk-estimation frameworks for denoising problems in both one and two dimensions. We assume a standard additive noise model, and formulate the denoising problem as one of estimating the underlying clean signal from noisy measurements by minimizing a risk corresponding to a chosen loss function. Our goal is to incorporate perceptually motivated loss functions wherever applicable, as in the case of speech enhancement, with the squared error loss being considered for the other scenarios. Since the true risks depend on the unknown parameter of interest, we circumvent this roadblock by deriving finite-sample unbiased estimators of the corresponding risks based on Stein’s lemma. We establish the link between the multivariate parameter estimation problem addressed by Stein and our denoising problem, and derive estimators of the oracle risks. In all cases, optimum values of the parameters characterizing the denoising algorithm are determined by minimizing Stein’s unbiased risk estimator (SURE). The key contribution of this thesis is the development of a risk-estimation approach for choosing the two critical parameters affecting the quality of nonparametric regression, namely the order and the bandwidth/smoothing parameter. This is a classic problem in statistics, and certain algorithms relying on the derivation of suitable finite-sample risk estimators for minimization have been reported in the literature (note that all these works consider the mean squared error (MSE) objective). We show that a SURE-based formalism is well suited to the regression parameter selection problem, and that the optimum solution guarantees near-minimum-MSE (MMSE) performance. We develop algorithms for both globally and locally choosing the two parameters, the latter referred to as spatially adaptive regression.
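As a hedged illustration of SURE-based parameter selection (not the thesis's algorithm): for a linear smoother f_hat = H y with known noise variance, SURE = ||y - Hy||^2 + 2*sigma^2*tr(H) - n*sigma^2 is an unbiased estimate of the MSE, so the smoothing parameter can be chosen by minimizing it from the noisy data alone. The sketch below selects the window length of a Savitzky-Golay smoother this way, approximating tr(H) by n times the filter's center coefficient (exact for the interior rows of the convolution matrix; edge rows differ slightly).

```python
import numpy as np
from scipy.signal import savgol_coeffs, savgol_filter

def sure_risk(y, win, order, sigma2):
    """SURE for the (approximately) linear S-G smoother:
    ||y - Hy||^2 + 2*sigma^2*tr(H) - n*sigma^2, with tr(H) ~ n * h_center."""
    yhat = savgol_filter(y, win, order)
    h = savgol_coeffs(win, order)
    tr_H = len(y) * h[win // 2]          # interior diagonal of the convolution matrix
    return np.sum((y - yhat) ** 2) + 2 * sigma2 * tr_H - len(y) * sigma2

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 512)
clean = np.sin(2 * np.pi * 4 * t)
sigma = 0.3
noisy = clean + sigma * rng.normal(size=t.size)

wins = range(5, 61, 2)                   # candidate odd window lengths
best = min(wins, key=lambda w: sure_risk(noisy, w, 2, sigma**2))
```

Because SURE is an unbiased surrogate for the true (oracle) MSE, the window it selects should land near the minimum of the true risk curve, without access to the clean signal.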
We observe that the parameters are chosen so as to trade off the squared bias and variance terms that constitute the MSE. We also indicate the advantages of incorporating a regularization term in the cost function in addition to the data error term. In the more general case of kernel regression, which uses a weighted least-squares (LS) optimization, we consider the application of image restoration from very few random measurements, in addition to denoising of uniformly sampled data. We show that local polynomial regression (LPR) is a special case of kernel regression, and extend our results for LPR on uniform data to non-uniformly sampled data as well. The denoising algorithms are compared with other standard, performant methods available in the literature, both in terms of estimation error and computational complexity. A major perspective provided in this thesis is that the problem of optimum parameter choice in nonparametric regression can be viewed as the selection of the optimum parameters of a linear, shift-invariant filter. This interpretation draws motivation from the landmark paper of Savitzky and Golay and from Schafer’s recent article in IEEE Signal Processing Magazine. It is worth noting that Savitzky and Golay showed in their original Analytical Chemistry article that LS fitting of a fixed-order polynomial over a neighborhood of fixed size is equivalent to convolution with an impulse response that is fixed and can be pre-computed. They provided tables of impulse response coefficients for computing the smoothed function and smoothed derivatives for different orders and neighborhood sizes, the resulting filters being referred to as Savitzky-Golay (S-G) filters. Thus, we provide the new perspective that the regression parameter choice is equivalent to optimizing the filter impulse response length/3 dB bandwidth, which are inversely related.
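The Savitzky-Golay equivalence described above (per-window LS polynomial fitting equals convolution with a fixed, precomputable impulse response) can be checked directly with SciPy; the window length, order, and test signal below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.signal import savgol_coeffs, savgol_filter

# S-G smoothing: LS fit of a degree-3 polynomial over an 11-sample window,
# expressed as convolution with a fixed, precomputed impulse response.
win, order = 11, 3
h = savgol_coeffs(win, order)            # the fixed impulse response

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.2 * rng.normal(size=t.size)

smoothed = savgol_filter(noisy, win, order)   # library S-G filter
manual = np.convolve(noisy, h, mode="same")   # direct convolution with h
# Away from the edges (which savgol_filter handles by polynomial extrapolation),
# the two results coincide: per-window LS fitting IS linear shift-invariant filtering.
```

This is exactly the "filter view" of regression: once `win` and `order` are fixed, no per-sample optimization is needed, only a dot product per output sample.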
We observe that the MMSE solution is such that the chosen S-G filter has a longer impulse response (equivalently, a smaller cutoff frequency) at relatively flat portions of the noisy signal, so as to smooth out noise, and vice versa at locally fast-varying portions of the signal, so as to capture the signal patterns. We also provide a generalized S-G filtering viewpoint in the case of kernel regression. Building on the S-G filtering perspective, we turn to the problem of dynamic feature computation in speech recognition. We observe that the methodology employed for computing dynamic features from the trajectories of static features is in fact derivative S-G filtering. With this perspective, we note that the filter coefficients can be pre-computed, and the whole problem of delta feature computation becomes efficient. Indeed, we observe a speedup by a factor of 10^4 on using S-G filtering instead of actual LS polynomial fitting and evaluation. Thereafter, we experimentally study the properties of first- and second-order derivative S-G filters of certain orders and lengths. The derivative filters are bandpass due to the combined effects of LPR and derivative computation, which are lowpass and highpass operations, respectively. The first- and second-order S-G derivative filters are also observed to exhibit an approximately constant-Q property. We perform a TIMIT phoneme recognition experiment comparing the recognition accuracies obtained using S-G filters with the conventional approach followed in HTK, which uses Furui’s regression formula. The recognition accuracies in both cases are almost identical, with S-G filters of certain bandwidths and orders registering a marginal improvement. The accuracies are also observed to improve with longer filter lengths, for a particular order.
In terms of computation latency, we note that S-G filtering achieves delta and delta-delta feature computation in parallel by linear filtering, whereas they have to be obtained sequentially with the standard regression formulas used in the literature. Finally, we turn to the problem of speech enhancement, where we are interested in denoising using perceptually motivated loss functions such as the Itakura-Saito (IS) distortion. We propose to perform enhancement in the discrete cosine transform domain using risk minimization. The cost functions considered are non-quadratic, and the derivation of the unbiased estimator of the risk corresponding to the IS distortion is achieved using an approximate Taylor-series analysis under a high signal-to-noise-ratio assumption. The exposition is general, since we focus on an additive noise model with the noise density assumed to fall within the exponential class of density functions, which comprises most of the common densities. The denoising function is assumed to be pointwise linear (a modified James-Stein (MJS) estimator), and parallels between Wiener filtering and the optimum MJS estimator are discussed.
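The derivative S-G filtering view of delta features can be sketched in a few lines: a first-derivative S-G filter applied to a static-feature trajectory yields the delta trajectory in one linear filtering pass. The trajectory below is a synthetic stand-in (a cosine, whose derivative is known in closed form), not speech data; window length and order are illustrative choices.

```python
import numpy as np
from scipy.signal import savgol_filter

# Delta (first-derivative) features via derivative S-G filtering:
# one precomputed linear filter pass instead of a per-frame LS polynomial fit.
rng = np.random.default_rng(3)
t = np.arange(400) * 0.01                # frame times, 10 ms apart
traj = np.cos(2 * np.pi * t)             # stand-in for one static-feature track
delta = savgol_filter(traj, window_length=9, polyorder=2, deriv=1, delta=0.01)

# Analytic derivative of cos(2*pi*t) for comparison
true = -2 * np.pi * np.sin(2 * np.pi * t)
```

A delta-delta track would use `deriv=2` with the same call; since both are plain convolutions, the two feature streams can be computed in parallel, which is the latency advantage the abstract points to.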
55

Quantifying Gait Characteristics and Neurological Effects in people with Spinal Cord Injury using Data-Driven Techniques / Kvantifiering av gångens egenskaper och neurologisk funktionens effekt hos personer med ryggmärgsskada med hjälp av datadrivna metoder

Truong, Minh January 2024 (has links)
Spinal cord injury, whether traumatic or nontraumatic, can partially or completely damage sensorimotor pathways between the brain and the body, leading to heterogeneous gait abnormalities. Mobility impairments also depend on other factors such as age, weight, time since injury, pain, and the walking aids used. The ASIA Impairment Scale is recommended for classifying injury severity, but it is not designed to characterize individual ambulatory capacity. Other standardized tests based on subjective or timing/distance assessments likewise have only a limited ability to determine an individual's capacity. Data-driven techniques have demonstrated effectiveness in analysing complexity in many domains and may provide additional perspectives on the complexity of gait performance in persons with spinal cord injury. The studies in this thesis aimed to address the complexity of gait and functional abilities after spinal cord injury using data-driven approaches. The aim of the first manuscript was to characterize the heterogeneous gait patterns of persons with incomplete spinal cord injury. Dissimilarities among gait patterns in the study population were quantified with multivariate dynamic time warping, and the gait patterns were classified into six distinct clusters using hierarchical agglomerative clustering, four of which showed lower gait speed than the controls. Using random forest classifiers with explainable AI, peak ankle plantarflexion during swing was identified as the feature that most often distinguished clusters from the controls. By combining clinical evaluation with the proposed methods, it was possible to provide a comprehensive analysis of the six gait clusters. The aim of the second manuscript was to quantify sensorimotor effects on walking performance in persons with spinal cord injury. The relationships between 11 input features and 2 walking outcome measures (distance walked in 6 minutes and net energy cost of transport) were captured using 2 Gaussian process regression models.
Explainable AI revealed the importance of muscle strength for both outcome measures. The use of walking aids also influenced distance walked, and cardiovascular capacity influenced energy cost. Per-person analyses also gave useful insights into individual performance. The findings from these studies demonstrate the large potential of advanced machine learning and explainable AI for addressing the complexity of gait function in persons with spinal cord injury.
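Multivariate dynamic time warping, the dissimilarity measure used in the first manuscript, can be sketched with the classic dynamic-programming recursion. The sequences below are synthetic single-channel "gait-like" traces for illustration; the thesis's actual features, distance settings, and any step-pattern constraints are not reproduced here.

```python
import numpy as np

def dtw_distance(a, b):
    """Multivariate DTW distance between sequences a (n x d) and b (m x d),
    with Euclidean local cost and the standard 3-way recursion."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame-to-frame cost
            D[i, j] = cost + min(D[i - 1, j],            # insertion
                                 D[i, j - 1],            # deletion
                                 D[i - 1, j - 1])        # match
    return D[n, m]

# Gait-like joint-angle traces: s2 is a slightly time-shifted copy of s1,
# s3 is a quarter-cycle out of phase.
t = np.linspace(0, 1, 100)
s1 = np.column_stack([np.sin(2 * np.pi * t)])
s2 = np.column_stack([np.sin(2 * np.pi * (t - 0.05))])
s3 = np.column_stack([np.cos(2 * np.pi * t)])
```

Because DTW warps the time axis, a small timing shift (s1 vs. s2) costs far less than a genuine shape difference (s1 vs. s3), which is what makes the resulting dissimilarity matrix suitable input for hierarchical agglomerative clustering of gait cycles.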
