1

Multiscale local polynomial transforms in smoothing and density estimation

Amghar, Mohamed 22 December 2017 (has links)
A major challenge in multiscale nonlinear estimation methods, such as wavelet thresholding, is the extension of these methods to settings where the observations are irregular and not equidistant. When applying these techniques to data smoothing or density estimation, it is crucial to work in a function space that imposes a certain degree of smoothness. We therefore follow a different approach, using the so-called lifting scheme. In order to combine smoothness with good numerical conditioning, we adopt a scheme similar to the Laplacian pyramid, which can be regarded as a slightly redundant wavelet transform. Whereas the classical lifting scheme relies on interpolation as its basic operation, this scheme allows smoothing to be used instead, for instance with local polynomials. The kernel of the smoothing operation is chosen in a multiscale fashion. The first chapter of this project develops the multiscale local polynomial transform, which combines the advantages of local polynomial smoothing with the sparsity of a multiscale decomposition. The contribution of this part is twofold. First, it focuses on the bandwidths used throughout the transform. These bandwidths act as user-controlled scales in a multiscale analysis, which is of particular interest in the case of non-equidistant data. This part presents both a likelihood-based optimal bandwidth selection and a fast heuristic approach. The second contribution combines local polynomial smoothing with orthogonal prefilters in order to reduce the variance of the reconstruction. In the second chapter, the project addresses density estimation via the multiscale local polynomial transform, proposing a more advanced reconstruction, called weighted reconstruction, to control the propagation of the variance. In the last chapter, we consider the extension of the multiscale local polynomial transform to the bivariate case, while listing some advantages of this transform (sparsity, no triangulations) compared with the classical two-dimensional wavelet transform. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
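
To make the construction above concrete, here is a minimal Python sketch (not the author's implementation; all names and the bandwidth are illustrative assumptions) of one level of a Laplacian-pyramid-style decomposition in which the prediction step is a local-linear smoother rather than an interpolator. The bandwidth plays the role of the user-controlled scale, and the sample points may be non-equidistant.

    import numpy as np

    def local_linear_smooth(x, y, bandwidth):
        """Local-linear fit evaluated at every sample point x (possibly irregular)."""
        y_hat = np.empty_like(y, dtype=float)
        for i, x0 in enumerate(x):
            w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)    # Gaussian kernel weights
            X = np.column_stack([np.ones_like(x), x - x0])    # local design: 1, (x - x0)
            beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            y_hat[i] = beta[0]                                # fitted value at x0
        return y_hat

    def pyramid_level(x, y, bandwidth):
        """One slightly redundant analysis step: coarse approximation plus detail residual."""
        approx = local_linear_smooth(x, y, bandwidth)
        detail = y - approx                                   # reconstruction is approx + detail
        return approx, detail

    # usage on irregular samples
    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 1, 200))
    y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
    approx, detail = pyramid_level(x, y, bandwidth=0.05)

Because the detail signal is simply the residual of the smooth, reconstruction is exact, which mirrors the slight redundancy of the Laplacian pyramid mentioned above.
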
2

Methods for Quantitatively Describing Tree Crown Profiles of Loblolly pine (Pinus taeda L.)

Doruska, Paul F. 17 July 1998 (has links)
Physiological process models, productivity studies, and wildlife abundance studies all require accurate representations of tree crowns. In the past, geometric shapes or flexible mathematical equations approximating geometric shapes were used to represent crown profiles. The crown profile of loblolly pine (Pinus taeda L.) was described using single-regressor, nonparametric regression analysis in an effort to improve crown representations. The resulting profiles were compared to more traditional representations. Nonparametric regression may be applicable when an underlying parametric model cannot be identified. The modeler does not specify a functional form. Rather, a data-driven technique is used to determine the shape of the curve. The modeler determines the amount of local curvature to be depicted in the curve. A class of local-polynomial estimators which contains the popular kernel estimator as a special case was investigated. Kernel regression appears to fit closely to the interior data points but often possesses bias problems at the boundaries of the data, a feature less exhibited by local linear or local quadratic regression. When using nonparametric regression, decisions must be made regarding polynomial order and bandwidth. Such decisions depend on the presence of local curvature, the desired degree of smoothing, and, for bandwidth in particular, the minimization of some global error criterion. In the present study, a penalized PRESS criterion (PRESS*) was selected as the global error criterion. When individual-tree, crown profile data are available, the technique of nonparametric regression appears capable of capturing more of the tree-to-tree variation in crown shape than multiple linear regression and other published functional forms. Thus, modelers should consider the use of nonparametric regression when describing crown profiles, as well as in any regression situation where traditional techniques perform unsatisfactorily or fail. / Ph. D.
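
As a rough illustration of the comparison discussed above (and not the dissertation's code), the sketch below fits a local polynomial of arbitrary order at a point using Gaussian kernel weights: order 0 reproduces the kernel (Nadaraya-Watson) estimator, while order 1 gives the local linear estimator, which typically shows less bias near the boundaries of the data. The bandwidth and test function are assumed for illustration; they are not the penalized PRESS (PRESS*) selection used in the study.

    import numpy as np

    def local_poly_fit(x, y, x0, bandwidth, degree):
        """Local polynomial fit at x0; degree 0 is the Nadaraya-Watson (kernel) estimate."""
        sw = np.sqrt(np.exp(-0.5 * ((x - x0) / bandwidth) ** 2))   # sqrt of kernel weights
        X = np.vander(x - x0, N=degree + 1, increasing=True)       # columns 1, (x - x0), ...
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        return beta[0]                                              # intercept = fit at x0

    rng = np.random.default_rng(1)
    x = np.sort(rng.uniform(0, 1, 300))
    y = np.exp(-3 * x) + 0.05 * rng.standard_normal(x.size)
    grid = np.linspace(0, 1, 50)
    kernel_fit = [local_poly_fit(x, y, g, 0.08, degree=0) for g in grid]   # local constant
    linear_fit = [local_poly_fit(x, y, g, 0.08, degree=1) for g in grid]   # local linear

Comparing kernel_fit and linear_fit near the endpoints of the grid illustrates the boundary-bias behaviour described above.
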
3

Sequential Procedures for Nonparametric Kernel Regression

Dharmasena, Tibbotuwa Deniye Kankanamge Lasitha Sandamali, Sandamali.dharmasena@rmit.edu.au January 2008 (has links)
In a nonparametric setting, the functional form of the relationship between the response variable and the associated predictor variables is unspecified; however, it is assumed to be a smooth function. The main aim of nonparametric regression is to highlight important structure in the data without any assumptions about the shape of the underlying regression function. In regression, the random and fixed design models should be distinguished. Among the variety of nonparametric regression estimators currently in use, kernel-type estimators are the most popular. Kernel-type estimators provide a flexible class of nonparametric procedures by estimating the unknown function as a weighted average using a kernel function. The bandwidth, which determines the influence of the kernel, has to be chosen for any kernel-type estimator. Our focus is on the Nadaraya-Watson estimator and the local linear estimator, which belong to a class of kernel-type regression estimators called local polynomial kernel estimators. A closely related problem is the determination of an appropriate sample size that would be required to achieve a desired level of accuracy for the nonparametric regression estimators. Since sequential procedures allow an experimenter to make decisions based on the smallest number of observations without compromising accuracy, application of sequential procedures to a nonparametric regression model at a given point or series of points is considered. The motivation for using such procedures is that, in many applications, the quality of estimating an underlying regression function in a controlled experiment is paramount; thus, it is reasonable to invoke a sequential procedure of estimation that chooses a sample size, based on recorded observations, that guarantees a preassigned accuracy. We have employed sequential techniques to develop a procedure for constructing a fixed-width confidence interval for the predicted value at a specific point of the independent variable. These fixed-width confidence intervals are developed using asymptotic properties of both the Nadaraya-Watson and local linear kernel estimators of nonparametric kernel regression with data-driven bandwidths, and are studied for both fixed and random design contexts. The sample sizes for a preset confidence coefficient are optimized using sequential procedures, namely a two-stage procedure, a modified two-stage procedure and a purely sequential procedure. The proposed methodology is first tested by employing a large-scale simulation study. The performance of each kernel estimation method is assessed by comparing its coverage accuracy with the corresponding preset confidence coefficient, by how closely the computed sample sizes match the optimal sample sizes, and by contrasting the estimated values obtained from the two nonparametric methods with actual values at a given series of design points of interest. We also employed the symmetric bootstrap method, which is considered an alternative method of estimating properties of unknown distributions. Resampling is done from a suitably estimated residual distribution, and the percentiles of the approximate distribution are used to construct confidence intervals for the curve at a set of given design points. A methodology is developed for determining whether it is advantageous to use the symmetric bootstrap method to reduce the extent of oversampling that is normally known to plague Stein's two-stage sequential procedure.
The procedure developed is validated using an extensive simulation study, and we also explore the asymptotic properties of the relevant estimators. Finally, our proposed sequential nonparametric kernel regression methods are applied to some problems in software reliability and finance.
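
The Python sketch below conveys the flavor of a purely sequential fixed-width interval rule of the kind described above: observations arrive in batches, the Nadaraya-Watson estimate and a plug-in standard error are updated, and sampling stops once the half-width of the asymptotic confidence interval falls below a preset value. The batch mechanism, the difference-based noise estimate, and all names are illustrative assumptions rather than the thesis procedures (which also include two-stage and modified two-stage rules).

    import numpy as np

    def nw_estimate_and_se(x, y, x0, h):
        """Nadaraya-Watson estimate at x0 and a plug-in standard error."""
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)
        w /= w.sum()
        m_hat = w @ y
        order = np.argsort(x)                          # difference-based (Rice-type) noise estimate
        sigma2 = np.mean(np.diff(y[order]) ** 2) / 2.0
        se = np.sqrt(sigma2 * np.sum(w ** 2))          # variance of a weighted average, normalized weights
        return m_hat, se

    def sequential_fixed_width(draw_batch, x0, h, half_width, z=1.96, batch=25, max_n=5000):
        """Keep sampling in batches until the interval estimate +/- z*se is narrow enough."""
        x, y = np.empty(0), np.empty(0)
        while x.size < max_n:
            xb, yb = draw_batch(batch)
            x, y = np.append(x, xb), np.append(y, yb)
            m_hat, se = nw_estimate_and_se(x, y, x0, h)
            if z * se <= half_width:                   # stopping rule
                break
        return m_hat, x.size

    rng = np.random.default_rng(2)
    def draw_batch(n):                                 # hypothetical controlled experiment
        xb = rng.uniform(0, 1, n)
        return xb, np.sin(2 * np.pi * xb) + 0.2 * rng.standard_normal(n)

    estimate, n_used = sequential_fixed_width(draw_batch, x0=0.5, h=0.1, half_width=0.05)
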
4

Three essays on econometrics

Yi, Kun 23 March 2023 (has links)
Kyoto University / New system, course-based doctorate / Doctor of Economics / Kō No. 24375 / Keihaku No. 662 / 新制||経||302 (University Library) / Department of Economics, Graduate School of Economics, Kyoto University / (Chief examiner) Professor 西山 慶彦, Professor 江上 雅彦, Lecturer 柳 貴英 / Eligible under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Economics / Kyoto University / DFAM
5

Nonstationary Techniques For Signal Enhancement With Applications To Speech, ECG, And Nonuniformly-Sampled Signals

Sreenivasa Murthy, A January 2012 (has links) (PDF)
For time-varying signals such as speech and audio, short-time analysis becomes necessary to compute specific signal attributes and to keep track of their evolution. The standard technique is the short-time Fourier transform (STFT), using which one decomposes a signal in terms of windowed Fourier bases. An advancement over the STFT is wavelet analysis, in which a function is represented in terms of shifted and dilated versions of a localized function called the wavelet. A specific modeling approach, particularly in the context of speech, is based on short-time linear prediction or short-time Wiener filtering of noisy speech. In most nonstationary signal processing formalisms, the key idea is to analyze the properties of the signal locally, either by first truncating the signal and then performing a basis expansion (as in the case of the STFT), or by choosing compactly-supported basis functions (as in the case of wavelets). We retain the same motivation as these approaches, but use polynomials to model the signal on a short-time basis (“short-time polynomial representation”). To emphasize the local nature of the modeling aspect, we refer to it as “local polynomial modeling (LPM).” We pursue two main threads of research in this thesis: (i) Short-time approaches for speech enhancement; and (ii) LPM for enhancing smooth signals, with applications to ECG, noisy nonuniformly-sampled signals, and voiced/unvoiced segmentation in noisy speech.

Improved iterative Wiener filtering for speech enhancement. A constrained iterative Wiener filter solution for speech enhancement was proposed by Hansen and Clements. Sreenivas and Kirnapure improved the performance of the technique by imposing codebook-based constraints in the process of parameter estimation. The key advantage is that the optimal parameter search space is confined to the codebook. These signal enhancement solutions assume stationary noise. However, in practical applications, noise is not stationary and hence updating the noise statistics becomes necessary. We present a new approach to perform reliable noise estimation based on spectral subtraction. We first estimate the signal spectrum and perform signal subtraction to estimate the noise power spectral density. We further smooth the estimated noise spectrum to ensure reliability. The key contributions are: (i) Adaptation of the technique for non-stationary noises; (ii) A new initialization procedure for faster convergence and higher accuracy; (iii) Experimental determination of the optimal LP-parameter space; and (iv) Objective criteria and speech recognition tests for performance comparison.

Optimal local polynomial modeling and applications. We next address the problem of fitting a piecewise-polynomial model to a smooth signal corrupted by additive noise. Since the signal is smooth, it can be represented using low-order polynomial functions provided that they are locally adapted to the signal. We choose the mean-square error as the criterion of optimality. Since the model is local, it preserves the temporal structure of the signal and can also handle nonstationary noise. We show that there is a trade-off between the adaptability of the model to local signal variations and robustness to noise (bias-variance trade-off), which we solve using a stochastic optimization technique known as the intersection of confidence intervals (ICI) technique. The key trade-off parameter is the duration of the window over which the optimum LPM is computed.
Within the LPM framework, we address three problems: (i) Signal reconstruction from noisy uniform samples; (ii) Signal reconstruction from noisy nonuniform samples; and (iii) Classification of speech signals into voiced and unvoiced segments. The generic signal model is x(t_n) = s(t_n) + d(t_n), 0 ≤ n ≤ N - 1. In problems (i) and (iii) above, t_n = nT (uniform sampling); in (ii) the samples are taken at nonuniform instants. The signal s(t) is assumed to be smooth; i.e., it should admit a local polynomial representation. The problem in (i) and (ii) is to estimate s(t) from x(t_n); i.e., we are interested in optimal signal reconstruction on a continuous domain starting from uniform or nonuniform samples. We show that, in both cases, the bias and variance take a general form, and together they determine the mean square error (MSE), in terms of L, the length of the window over which the polynomial fitting is performed; f, a function of s(t) that typically comprises the higher-order derivatives of s(t), the order itself dependent on the order of the polynomial; and g, a function of the noise variance. The bias and variance have complementary characteristics with respect to L. Directly optimizing for the MSE would give a value of L that involves the functions f and g. The function g may be estimated, but f is not known since s(t) is unknown. Hence, it is not practical to compute the minimum MSE (MMSE) solution. Therefore, we obtain an approximate result by solving the bias-variance trade-off in a probabilistic sense using the ICI technique. We also propose a new approach to optimally select the ICI technique parameters, based on a new cost function that is the sum of the probability of false alarm and the area covered over the confidence interval. In addition, we address issues related to optimal model-order selection, the search space for window lengths, the accuracy of noise estimation, etc.

The next issue addressed is that of voiced/unvoiced segmentation of the speech signal. Speech segments show different spectral and temporal characteristics based on whether the segment is voiced or unvoiced. Most speech processing techniques process the two segments differently. The challenge lies in making detection techniques offer robust performance in the presence of noise. We propose a new technique for voiced/unvoiced classification by taking into account the fact that voiced segments have a certain degree of regularity, whereas unvoiced segments do not possess any smoothness. In order to capture the regularity in voiced regions, we employ the LPM. The key idea is that regions where the LPM is inaccurate are more likely to be unvoiced than voiced. Within this framework, we formulate a hypothesis testing problem based on the accuracy of the LPM fit and devise a test statistic for performing V/UV classification. Since the technique is based on LPM, it is capable of adapting to nonstationary noises. We present Monte Carlo results to demonstrate the accuracy of the proposed technique.
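
A minimal sketch of the ICI rule referred to above (with an assumed noise level, window grid, and threshold, rather than the optimized parameters proposed in the thesis): local polynomial fits are computed at one point for an increasing sequence of window lengths, the confidence intervals estimate ± gamma·std are intersected, and the largest window whose interval still intersects all of the previous ones is retained.

    import numpy as np

    def lpm_at_point(t, x, t0, half_window, degree, sigma):
        """OLS polynomial fit on the window |t - t0| <= half_window; returns (value, std)."""
        mask = np.abs(t - t0) <= half_window
        T = np.vander(t[mask] - t0, N=degree + 1, increasing=True)
        beta, *_ = np.linalg.lstsq(T, x[mask], rcond=None)
        var00 = np.linalg.inv(T.T @ T)[0, 0]           # variance factor of the fitted value at t0
        return beta[0], sigma * np.sqrt(var00)

    def ici_window(t, x, t0, windows, degree=1, sigma=0.1, gamma=2.0):
        """Largest window whose confidence interval still intersects all smaller ones."""
        lo, hi = -np.inf, np.inf
        best = windows[0]
        for h in windows:                              # windows in increasing order
            est, sd = lpm_at_point(t, x, t0, h, degree, sigma)
            lo, hi = max(lo, est - gamma * sd), min(hi, est + gamma * sd)
            if lo > hi:                                # intervals no longer intersect: stop
                break
            best = h
        return best

    rng = np.random.default_rng(3)
    t = np.linspace(0, 1, 400)
    x = np.sin(6 * np.pi * t) + 0.1 * rng.standard_normal(t.size)
    h_star = ici_window(t, x, t0=0.5, windows=np.linspace(0.02, 0.3, 15))
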
6

Statistical Methods for Dating Collections of Historical Documents

Tilahun, Gelila 31 August 2011 (has links)
The problem in this thesis was originally motivated by problems presented by the documents of the Early England Data Set (DEEDS). The central problem with these medieval documents is the lack of methods to assign accurate dates to those documents which bear no date. With the problems of the DEEDS documents in mind, we present two methods to impute missing features of texts. In the first method, we suggest a new class of metrics for measuring distances between texts. We then show how to combine the distances between the texts using statistical smoothing. This method can be adapted to settings where the features of the texts are ordered or unordered categoricals (as in the case of, for example, authorship assignment problems). In the second method, we estimate the probability of occurrence of words in texts by applying nonparametric regression techniques of local polynomial fitting with kernel weights to generalized linear models. We combine the estimated probabilities of occurrence of the words of a text to estimate the probability of occurrence of the text as a function of its feature -- the feature in this case being the date on which the text was written. The application and results of our methods to the DEEDS documents are presented.
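
As an illustration of the second method, the sketch below (a generic stand-in, not the thesis code; the synthetic data and all names are assumptions) performs a kernel-weighted logistic (generalized linear model) fit of a single word's occurrence against document date and returns the estimated occurrence probability at a target date. Combining such per-word probability estimates across the vocabulary is what would then score candidate dates for an undated text.

    import numpy as np

    def local_logistic_prob(dates, occurs, d0, bandwidth, n_newton=25):
        """P(word occurs | date = d0), local-linear logistic fit with kernel weights."""
        w = np.exp(-0.5 * ((dates - d0) / bandwidth) ** 2)   # kernel weights in time
        X = np.column_stack([np.ones_like(dates), dates - d0])
        beta = np.zeros(2)
        for _ in range(n_newton):                            # Newton-Raphson (IRLS) steps
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            grad = X.T @ (w * (occurs - p))
            hess = X.T @ (X * (w * p * (1 - p))[:, None])
            beta += np.linalg.solve(hess, grad)
        return 1.0 / (1.0 + np.exp(-beta[0]))                # fitted probability at d0

    # hypothetical usage: occurrence of one word across dated documents
    rng = np.random.default_rng(4)
    dates = rng.uniform(1150, 1300, 1000)
    occurs = (rng.random(1000) < 1 / (1 + np.exp(-(dates - 1225) / 20))).astype(float)
    p_1200 = local_logistic_prob(dates, occurs, d0=1200.0, bandwidth=25.0)
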
7

Inference Of Piecewise Linear Systems With An Improved Method Employing Jump Detection

Selcuk, Ahmet Melih 01 September 2007 (has links) (PDF)
Inference of regulatory relations in dynamical systems is a promising and active research area. Recently, most of the investigations in this field have been stimulated by research in functional genomics. In this thesis, the inferential modeling problem for switching hybrid systems is studied. Hybrid systems are dynamical systems in which discrete and continuous variables regulate each other; in other words, the jumps and flows are interrelated. In this study, piecewise linear approximations are used for modeling purposes, and it is shown that piecewise linear models are capable of displaying the evolutionary characteristics of switching hybrid systems approximately. For such systems, detection of switching instants and inference of locally linear parameters from empirical data provide a solid understanding of the system dynamics. Thus, the inference methodology is based on these issues. The primary distinguishing feature of the inference algorithm is the idea of transforming the switching detection problem into a jump detection problem by derivative estimation from discrete data. The jump detection problem has been studied extensively in the signal processing literature, so related techniques in the literature have been analyzed carefully and suitable ones adopted in this thesis. The primary advantage of the proposed method is its robustness in switching detection and derivative estimation. The theoretical background of this robustness claim and the importance of robustness for real-world applications are explained in detail.
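
The following sketch (illustrative only; the window size and threshold are assumed) shows the core idea in miniature: one-sided local-linear slopes are estimated from the discrete samples on either side of each instant, and instants where the left and right slope estimates disagree sharply are flagged as candidate switching points, i.e. jumps in the derivative.

    import numpy as np

    def side_slope(t, x, t0, half_width, side):
        """Local-linear slope estimated on one side of t0."""
        m = (t >= t0 - half_width) & (t < t0) if side == "left" else (t > t0) & (t <= t0 + half_width)
        T = np.column_stack([np.ones(m.sum()), t[m] - t0])
        beta, *_ = np.linalg.lstsq(T, x[m], rcond=None)
        return beta[1]

    def detect_switches(t, x, half_width=0.05, threshold=1.0):
        """Indices where the derivative estimate jumps by more than the threshold."""
        jumps = []
        inner = np.where((t >= t[0] + half_width) & (t <= t[-1] - half_width))[0]
        for i in inner:
            dl = side_slope(t, x, t[i], half_width, "left")
            dr = side_slope(t, x, t[i], half_width, "right")
            if abs(dr - dl) > threshold:               # derivative jump -> candidate switch
                jumps.append(i)
        return np.array(jumps)

    # usage on a piecewise-linear signal whose slope switches at t = 0.5
    rng = np.random.default_rng(5)
    t = np.linspace(0, 1, 500)
    x = np.where(t < 0.5, 2 * t, 1.0 - 3 * (t - 0.5)) + 0.01 * rng.standard_normal(t.size)
    switch_indices = detect_switches(t, x)

In this toy example the detector flags a small cluster of indices around the true switching instant; taking, for example, the center of that cluster then localizes the switch.
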
8

Volatility estimation in financial series: semiparametric additive models and GARCH

Santos, Douglas Gomes dos January 2008 (has links)
Volatility estimation and forecasting are very important matters for the financial markets. Themes like risk and uncertainty in modern economic theory have encouraged the search for methods that allow for the modeling of time-varying variances. The main objective of this dissertation is to compare global and local regressions in terms of their capacity to extract the volatility of the Ibovespa and Standard and Poor's 500 indexes. To achieve this aim, estimation and forecasting are performed with parametric GARCH models and with semiparametric additive models. The former, traditionally applied in the estimation of conditional second moments, have their capacity suggested in many papers. The latter provide high flexibility and visually informative descriptions of the relationships between the variables, such as asymmetries and nonlinearities. Therefore, testing the performance of the latter against the established parametric structures is an appropriate investigation. Comparisons are made in selected periods of high volatility in the international financial market (crises), measuring the models' performance both in-sample and out-of-sample. The results suggest the capacity of the semiparametric models to estimate and forecast the volatility of the index returns over the periods analyzed.
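
For reference, the parametric benchmark mentioned above can be sketched in a few lines: the GARCH(1,1) conditional-variance recursion and its Gaussian quasi-log-likelihood, fitted here (by way of illustration only) to simulated returns standing in for the index series. The starting values, bounds, and optimizer are assumed choices, not the dissertation's estimation setup.

    import numpy as np
    from scipy.optimize import minimize

    def garch11_variance(returns, omega, alpha, beta):
        """sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2."""
        sigma2 = np.empty_like(returns)
        sigma2[0] = returns.var()                      # simple initialization
        for t in range(1, returns.size):
            sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
        return sigma2

    def neg_loglik(params, returns):
        """Gaussian quasi-negative-log-likelihood (up to an additive constant)."""
        omega, alpha, beta = params
        sigma2 = garch11_variance(returns, omega, alpha, beta)
        return 0.5 * np.sum(np.log(sigma2) + returns ** 2 / sigma2)

    # usage with simulated daily returns as stand-ins for the index returns
    rng = np.random.default_rng(6)
    r = 0.01 * rng.standard_normal(2000)
    fit = minimize(neg_loglik, x0=[1e-5, 0.05, 0.90], args=(r,),
                   bounds=[(1e-8, None), (0.0, 1.0), (0.0, 1.0)])
    volatility = np.sqrt(garch11_variance(r, *fit.x))
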
9

Regression Discontinuity Design with Covariates

Kramer, Patrick 07 November 2023 (has links)
This thesis studies regression discontinuity designs with the use of additional covariates for estimation of the average treatment effect. We prove asymptotic normality of the covariate-adjusted estimator under sufficient regularity conditions. For a high-dimensional setting in which the number of covariates may grow with the number of observations, we discuss a Lasso-based selection approach as well as alternatives based on calculated correlation thresholds. We present simulation results on those alternative selection strategies. Contents: 1. Introduction; 2. Preliminaries; 3. Regression Discontinuity Designs; 4. Setup and Notation; 5. Computing the Bias; 6. Asymptotic Behavior; 7. Asymptotic Normality of the Estimator; 8. Including Potentially Many Covariates; 9. Simulations; 10. Conclusion
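
A compact sketch of the kind of covariate-adjusted estimator studied here (a hedged illustration under an assumed bandwidth, kernel, and simulated design; the Lasso and correlation-threshold selection strategies are not implemented): a kernel-weighted local-linear regression of the outcome on a treatment indicator, the centered running variable on each side of the cutoff, and additional covariates, with the coefficient on the indicator serving as the estimate of the average treatment effect at the cutoff.

    import numpy as np

    def rdd_with_covariates(y, x, z, cutoff, bandwidth):
        """Local-linear sharp-RD estimate of the treatment effect at the cutoff."""
        r = x - cutoff
        d = (r >= 0).astype(float)                           # treatment indicator
        w = np.clip(1 - np.abs(r) / bandwidth, 0, None)      # triangular kernel weights
        keep = w > 0
        X = np.column_stack([np.ones(keep.sum()), d[keep], r[keep],
                             d[keep] * r[keep], z[keep]])    # intercept, jump, slopes, covariates
        sw = np.sqrt(w[keep])
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y[keep] * sw, rcond=None)
        return beta[1]                                        # coefficient on the indicator

    # simulated example with one covariate and a true effect of 0.5
    rng = np.random.default_rng(7)
    n = 3000
    x = rng.uniform(-1, 1, n)
    z = rng.standard_normal((n, 1))
    y = 0.5 * (x >= 0) + 0.3 * x + 0.4 * z[:, 0] + 0.1 * rng.standard_normal(n)
    tau_hat = rdd_with_covariates(y, x, z, cutoff=0.0, bandwidth=0.3)
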
