1

A Survey of Applications of Spline Functions to Statistics.

Mawk, Russell Lynn, 01 August 2001
This thesis surveys applications of spline functions to statistics. We start with a brief history of splines and then discuss their application to statistics as it stands today. Topics covered include splines, spline regression, spline smoothing, and estimating the smoothing parameter for spline regression. We also give a very brief discussion of multivariate splines in statistics and of wavelets in statistics; both topics remain subjects of continuing research by many mathematicians.
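
As a quick illustration of spline smoothing with a data-driven smoothing parameter (an R sketch on simulated data, not taken from the thesis), stats::smooth.spline can pick its penalty by leave-one-out cross-validation:

    # Illustrative sketch: cubic smoothing spline with the smoothing
    # parameter chosen by ordinary leave-one-out cross-validation.
    set.seed(1)
    x <- seq(0, 1, length.out = 200)
    y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)
    fit <- smooth.spline(x, y, cv = TRUE)  # cv = TRUE: leave-one-out CV
    fit$lambda                             # the selected smoothing parameter
    plot(x, y, col = "grey"); lines(fit, lwd = 2)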
2

Selection of smoothing parameters with application in causal inference

Häggström, Jenny, January 2011
This thesis is a contribution to the research area concerned with the selection of smoothing parameters in the framework of nonparametric and semiparametric regression. Selection of smoothing parameters is one of the most important issues in this framework, and the choice can heavily influence subsequent results. A nonparametric or semiparametric approach is often desirable when large datasets are available, since it allows us to make fewer and weaker assumptions than a parametric approach requires. In the first paper we consider smoothing parameter selection in nonparametric regression when the purpose is to accurately predict future or unobserved data. We study the use of accumulated prediction errors and make comparisons to leave-one-out cross-validation, which is widely used by practitioners. In the second paper a general semiparametric additive model is considered, and the focus is on selection of smoothing parameters when optimal estimation of some specific parameter is of interest. We introduce a double smoothing estimator of a mean squared error and propose to select smoothing parameters by minimizing this estimator. Our approach is compared with existing methods. The third paper is concerned with the selection of smoothing parameters optimal for estimating average treatment effects defined within the potential outcome framework. For this estimation problem we propose double smoothing methods similar to the method proposed in the second paper. Theoretical properties of the proposed methods are derived, and comparisons with existing methods are made by simulation. In the last paper we apply our results from the third paper, using a double smoothing method to select smoothing parameters when estimating average treatment effects on the treated. We estimate the effect of divorcing in middle age on BMI, using rich data on socioeconomic conditions, health, and lifestyle from Swedish longitudinal registers.
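
To make the comparator concrete, here is a minimal R sketch of leave-one-out cross-validation for a kernel-regression bandwidth; the simulated data, Gaussian kernel, and bandwidth grid are illustrative assumptions, not the paper's setup:

    # Nadaraya-Watson estimator with a Gaussian kernel.
    nw <- function(x0, x, y, h) {
      w <- dnorm((x - x0) / h)
      sum(w * y) / sum(w)
    }
    # Leave-one-out CV score for a candidate bandwidth h.
    loocv <- function(h, x, y) {
      mean(sapply(seq_along(x), function(i)
        (y[i] - nw(x[i], x[-i], y[-i], h))^2))
    }
    set.seed(2)
    x <- runif(150); y <- sin(2 * pi * x) + rnorm(150, sd = 0.25)
    hs <- seq(0.01, 0.2, by = 0.005)
    hs[which.min(sapply(hs, loocv, x = x, y = y))]  # CV-selected bandwidth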
3

An Additive Bivariate Hierarchical Model for Functional Data and Related Computations

Redd, Andrew Middleton, August 2010
The work presented in this dissertation centers on the theme of regression and computational methodology. Functional data is an important class of longitudinal data, and principal component analysis is an important approach to regression with this type of data. Here we present an additive hierarchical bivariate functional data model employing principal components to identify random effects. This additive model extends the univariate functional principal component model. These models are implemented in the pfda package for R. To fit the curves from this class of models, orthogonalized spline bases are used to reduce the dimensionality of the fit while retaining flexibility. Methods for handling spline basis functions in a purely analytical manner, including the orthogonalization process and the computation of the penalty matrices used to fit the principal component models, are presented. The methods are implemented in the R package orthogonalsplinebasis. The projects discussed involve complicated coding for the implementations in R. To facilitate this, I created the NppToR utility to add R functionality to the popular Windows code editor Notepad++. A brief overview of the use of the utility is also included.
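
A rough R sketch of the idea behind an orthogonalized spline basis; here the B-spline basis is orthonormalized numerically over a grid by QR decomposition, whereas the orthogonalsplinebasis package performs the orthogonalization analytically with exact inner products:

    library(splines)
    grid  <- seq(0, 1, length.out = 401)
    # Cubic B-spline knots, clamped at the boundaries.
    knots <- c(rep(0, 4), seq(0.1, 0.9, by = 0.1), rep(1, 4))
    B <- splineDesign(knots, grid, ord = 4)  # 401 x 13 basis matrix
    Q <- qr.Q(qr(B))                         # orthonormal columns over the grid
    round(crossprod(Q)[1:3, 1:3], 10)        # approximately the identity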
4

Two Essays on Single-index Models

Wu, Zhou, 24 September 2008
No description available.
5

Essays on High-dimensional Nonparametric Smoothing and Its Applications to Asset Pricing

Wu, Chaojiang, 25 October 2013
No description available.
6

Smoothing Parameter Selection In Nonparametric Functional Estimation

Amezziane, Mohamed, 01 January 2004
This study develops new techniques for obtaining completely data-driven choices of the smoothing parameter in functional estimation, under minimal assumptions. The focus is on estimation of the distribution function, the density function, and their multivariate extensions, along with some of their functionals, such as the location and the integrated squared derivatives.
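
For illustration only (these are standard selectors shipped with base R's stats package, not the techniques developed in the thesis), two fully data-driven bandwidth choices for a kernel density estimate:

    set.seed(3)
    x <- c(rnorm(300), rnorm(200, mean = 4))  # simulated bimodal sample
    d.ucv <- density(x, bw = "ucv")           # least-squares cross-validation
    d.sj  <- density(x, bw = "SJ")            # Sheather-Jones plug-in
    c(ucv = d.ucv$bw, SJ = d.sj$bw)           # the two selected bandwidths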
7

Extending covariance structure analysis for multivariate and functional data

Sheppard, Therese, January 2010
For multivariate data, when testing homogeneity of covariance matrices arising from two or more groups, Bartlett's (1937) modified likelihood ratio test statistic is appropriate under the null hypothesis of equal covariance matrices, where the null distribution of the test statistic rests on the restrictive assumption of normality. Zhang and Boos (1992) provide a pooled bootstrap approach for when the data cannot be assumed to be normally distributed. We give three alternative bootstrap techniques for testing homogeneity of covariance matrices when it is inappropriate to pool the data into one single population, as the pooled bootstrap procedure does, and when the data are not normally distributed. We further show that our alternative bootstrap methodology can be extended to testing Flury's (1988) hierarchy of covariance structure models. Where deviations from normality exist, we show by simulation that the normal-theory log-likelihood ratio test statistic is less viable than our bootstrap methodology. For functional data, Ramsay and Silverman (2005) and Lee et al. (2002) together provide four computational techniques for functional principal component analysis (PCA) followed by covariance structure estimation. When individual profiles are smoothed using least-squares cubic B-splines or regression splines, we find that the ensuing covariance matrix estimate suffers from loss of dimensionality. We show that ridge regression can resolve this problem, but only for the discretisation and numerical quadrature approaches to estimation, and that the choice of a suitable ridge parameter is not arbitrary. We further show the unsuitability of regression splines when deciding on the optimal degree of smoothing to apply to individual profiles. To gain insight into smoothing parameter choice for functional data, we compare kernel and spline approaches to smoothing individual profiles in a nonparametric regression context. Our simulation results justify a kernel approach using a new criterion based on predicted squared error. We also show by simulation that, when taking account of correlation, a kernel approach using a generalized cross-validatory type criterion performs well. These data-based methods for selecting the smoothing parameter are illustrated prior to a functional PCA on a real data set.
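
A compact R sketch of a pooled bootstrap test of covariance homogeneity in the spirit of Zhang and Boos (1992), the baseline this thesis builds alternatives to; the two-group setup, sample sizes, and number of resamples are illustrative assumptions:

    # Bartlett's modified LRT statistic for a list of data matrices.
    bartlett.M <- function(xs) {
      ns <- sapply(xs, nrow); Ss <- lapply(xs, cov)
      Sp <- Reduce(`+`, Map(function(S, n) (n - 1) * S, Ss, ns)) /
            (sum(ns) - length(xs))                     # pooled covariance
      (sum(ns) - length(xs)) * log(det(Sp)) -
        sum((ns - 1) * log(sapply(Ss, det)))
    }
    set.seed(4)
    x1 <- matrix(rnorm(60 * 3), 60); x2 <- matrix(rnorm(80 * 3, sd = 1.3), 80)
    obs <- bartlett.M(list(x1, x2))
    pooled <- rbind(scale(x1, scale = FALSE),          # centre each group,
                    scale(x2, scale = FALSE))          # then pool
    Mb <- replicate(999, {
      idx <- sample(nrow(pooled), replace = TRUE)
      bartlett.M(list(pooled[idx[1:60], ], pooled[idx[61:140], ]))
    })
    mean(Mb >= obs)                                    # bootstrap p-value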
8

Goodness-Of-Fit Test for Hazard Rate

Vital, Ralph Antoine, 14 December 2018
In certain areas such as pharmacokinetics (PK) and pharmacodynamics (PD), the hazard rate function, denoted by λ, plays a central role in modeling the instantaneous risk of failure-time data. In the context of assessing the appropriateness of a given parametric hazard rate model, Huh and Hutmacher [22] showed that their hazard-based visual predictive check is as good as a visual predictive check based on the survival function. Even though Huh and Hutmacher's visual method is simple to implement and interpret, the final decision it reaches depends on the personal experience of the user. In this thesis, our primary aim is to develop nonparametric goodness-of-fit tests for hazard rate functions, to help bring objectivity to hazard rate model selection or to augment subjective procedures like Huh and Hutmacher's visual predictive check. Toward that aim, two nonparametric goodness-of-fit test statistics are proposed, referred to as the chi-square goodness-of-fit test and the kernel-based nonparametric goodness-of-fit test for hazard rate functions, respectively. On one hand, the asymptotic distribution of the chi-square goodness-of-fit test is derived under the null hypothesis H₀ : λ(t) = λ₀(t) for all t ∈ ℝ⁺, as well as under the fixed alternative hypothesis H₁ : λ(t) = λ₁(t) for all t ∈ ℝ⁺. The results, as expected, are asymptotically similar to those of the usual Pearson chi-square test: under the null hypothesis the proposed test converges to a chi-square distribution, and under the fixed alternative hypothesis it converges to a non-central chi-square distribution. On the other hand, we show that the power properties of the kernel-based nonparametric goodness-of-fit test are equivalent to those of the Bickel and Rosenblatt test, meaning the proposed test can detect alternatives converging to the null at the rate n^(−δ), δ < 1/2, where n is the sample size. Unlike the latter, the kernel-based test converges to its asymptotic distribution much faster, so one does not need a very large sample size to be able to use the asymptotic distribution of the test in practice.
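
As a toy illustration of the chi-square idea (a simple Pearson-type relative of the proposed statistic, not the thesis's test), one can bin event times and compare observed with expected counts under a fully specified null hazard, here the constant hazard λ₀ = 1:

    set.seed(5)
    t.obs  <- rexp(400, rate = 1.1)            # data mildly off the null
    breaks <- c(0, 0.25, 0.5, 1, 1.5, 2.5, Inf)
    O  <- table(cut(t.obs, breaks))            # observed bin counts
    p0 <- diff(pexp(breaks, rate = 1))         # cell probabilities under H0
    E  <- length(t.obs) * p0                   # expected counts under H0
    X2 <- sum((O - E)^2 / E)                   # Pearson chi-square statistic
    pchisq(X2, df = length(p0) - 1, lower.tail = FALSE)   # p-value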
9

Multivariate EWMA Control Chart and Application to a Semiconductor Manufacturing Process

Huh, Ick, 09 1900
The multivariate cumulative sum (MCUSUM) and the multivariate exponentially weighted moving average (MEWMA) control charts are the two leading methods for monitoring a multivariate process. This thesis focuses on the MEWMA control chart. Specifically, using the Markov chain method, we study in detail several aspects of the run length distribution for both the on- and off-target cases. Regarding the on-target run length analysis, we express the probability mass function of the run length distribution, the average run length (ARL), the variance of the run length (VRL), and higher moments of the run length distribution in mathematically closed form. In previous studies of the off-target performance of the MEWMA control chart, the process mean shift was usually assumed to take place at the beginning of the process. We extend the classical off-target case and introduce a generalization of the probability mass function of the run length distribution, the ARL, and the VRL; what Prabhu and Runger (1996) proposed can be derived from our new model. By evaluating the off-target ARL values for the MEWMA control chart, we determine the optimal smoothing parameters using the partition method, which provides an easy algorithm for finding the optimal smoothing parameters, and we study how they respond as the process mean shift time changes. We compare the ARL performance of the MEWMA control chart with that of the multivariate Shewhart control chart to see whether the MEWMA chart remains effective in detecting a small mean shift as the process mean shift time changes. To apply the model to semiconductor manufacturing processes, we use a bivariate normal distribution to generate sample data and compare the MEWMA control chart with the multivariate Shewhart control chart to evaluate how the MEWMA control chart behaves when a delayed mean shift happens. We also apply the variation transmission model introduced by Lawless et al. (1999) to the semiconductor manufacturing process and show an extension of the model that makes our application to semiconductor manufacturing processes more realistic. All programming and calculations were done in R. / Master of Science (MS)
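
A bare-bones R sketch of the MEWMA recursion and its plotting statistic in the standard form of Lowry et al. (1992); the smoothing constant r, the shift point, and the control limit h are illustrative placeholders, not the optimized values studied in the thesis:

    # T2 statistic of the MEWMA chart: z_i = r*x_i + (1-r)*z_{i-1}.
    mewma.T2 <- function(X, Sigma, r = 0.1) {
      z <- rep(0, ncol(X))
      Sz.inv <- solve((r / (2 - r)) * Sigma)   # asymptotic covariance of z
      apply(X, 1, function(x) {
        z <<- r * x + (1 - r) * z
        drop(t(z) %*% Sz.inv %*% z)
      })
    }
    set.seed(6)
    Sigma <- diag(2)
    X <- rbind(MASS::mvrnorm(30, c(0, 0), Sigma),      # in control
               MASS::mvrnorm(20, c(0.7, 0.7), Sigma))  # delayed mean shift
    T2 <- mewma.T2(X, Sigma)
    h <- 8.64               # illustrative limit; in practice from ARL tables
    which(T2 > h)[1]        # first out-of-control signal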
10

Optimal choice of the smoothing parameter in nonparametric density estimation for continuous-time stationary processes

El Heda, Khadijetou, 25 October 2018
This thesis concerns the choice of the smoothing parameter in the nonparametric estimation of the density function for stationary ergodic continuous-time processes. The accuracy of the estimation depends heavily on this choice. The main goal is to build an automatic bandwidth-selection procedure and to establish its asymptotic properties under a dependence framework general enough to be easily used in practice. The contribution consists of three parts. The first part reviews the literature on the problem and situates our contribution within it. In the second part, we construct an automatic method for selecting the smoothing parameter when the density is estimated by the kernel method; this choice, derived from the cross-validation method, is asymptotically optimal. In the third part, we establish asymptotic properties of the cross-validated bandwidth, given by almost-sure convergence results with rates.
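
To make the cross-validation criterion concrete, a short R sketch of least-squares cross-validation for a kernel density bandwidth; it assumes i.i.d. data and a Gaussian kernel, simplifications the thesis does not make, since it treats dependent continuous-time observations:

    # Least-squares CV criterion; the integral of fhat^2 has a closed
    # form for the Gaussian kernel.
    lscv <- function(h, x) {
      n <- length(x)
      d <- outer(x, x, "-")
      int.f2 <- sum(dnorm(d, sd = sqrt(2) * h)) / n^2
      loo    <- (sum(dnorm(d, sd = h)) - n * dnorm(0, sd = h)) / (n * (n - 1))
      int.f2 - 2 * loo                        # estimates ISE up to a constant
    }
    set.seed(7)
    x  <- rnorm(300)
    hs <- seq(0.05, 1, by = 0.01)
    hs[which.min(sapply(hs, lscv, x = x))]    # LSCV-selected bandwidth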
