1

Time varying-coefficient models

Ambler, Gareth January 1996 (has links)
No description available.
2

Model Robust Regression Based on Generalized Estimating Equations

Clark, Seth K. 04 April 2002 (has links)
One form of model robust regression (MRR) predicts the mean response as a convex combination of a parametric and a nonparametric prediction. MRR is a semiparametric method by which an incompletely or incorrectly specified parametric model can be improved by adding an appropriate amount of a nonparametric fit. The combined predictor can have less bias than the parametric estimate alone and less variance than the nonparametric estimate alone. Additionally, as shown in previous work for uncorrelated data with a linear mean function, MRR can converge faster than the nonparametric predictor alone. We extend the MRR technique to the problem of predicting the mean response for clustered non-normal data. We combine a nonparametric method based on local estimation with a global, parametric generalized estimating equations (GEE) estimate through a mixing parameter, on both the mean scale and the linear predictor scale. As a special case, when data are uncorrelated, this amounts to mixing a local likelihood estimate with predictions from a global generalized linear model. Cross-validation bandwidth and optimal mixing parameter selectors are developed. The global fits and the optimal and data-driven local and mixed fits are studied under no, some, and substantial model misspecification via simulation. The methods are then illustrated through application to data from a longitudinal study. / Ph. D.
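The convex-combination idea behind MRR can be sketched in a few lines. This is an illustrative special case for uncorrelated data only: an ordinary linear fit stands in for the GEE/GLM parametric part, a Nadaraya-Watson kernel fit stands in for the local estimate, and the bandwidth `h` and mixing parameter `lam` are fixed by hand rather than selected by cross-validation as in the dissertation.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, h):
    # Gaussian-kernel local-constant (Nadaraya-Watson) regression.
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

def mrr_predict(x_train, y_train, x_eval, h, lam):
    # Parametric part: a global linear fit (a stand-in for the GEE/GLM fit).
    X = np.column_stack([np.ones_like(x_train), x_train])
    beta, *_ = np.linalg.lstsq(X, y_train, rcond=None)
    parametric = beta[0] + beta[1] * x_eval
    nonparametric = nadaraya_watson(x_train, y_train, x_eval, h)
    # Convex combination: lam = 0 is purely parametric, lam = 1 purely local.
    return (1.0 - lam) * parametric + lam * nonparametric

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2 * np.pi * x) + 0.5 * x + rng.normal(0.0, 0.2, 200)
yhat = mrr_predict(x, y, x, h=0.05, lam=0.7)
```

With `lam` between 0 and 1, the mixed predictor inherits some of the low variance of the global fit and some of the low bias of the local fit, which is the trade-off the abstract describes.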
3

DEVELOPMENTS IN NONPARAMETRIC REGRESSION METHODS WITH APPLICATION TO RAMAN SPECTROSCOPY ANALYSIS

Guo, Jing 01 January 2015 (has links)
Raman spectroscopy has been successfully employed in the classification of breast pathologies involving basis spectra for chemical constituents of breast tissue, resulting in high sensitivity (94%) and specificity (96%) (Haka et al., 2005). Motivated by recent developments in nonparametric regression, in this work we adapt stacking, boosting, and dynamic ensemble learning into a nonparametric regression framework with application to Raman spectroscopy analysis for breast cancer diagnosis. In Chapter 2, we apply compound estimation (Charnigo and Srinivasan, 2011) in Raman spectra analysis to classify normal, benign, and malignant breast tissue. We explore both the spectral profiles and their derivatives to differentiate the types of breast tissue. In Chapters 3-5 of this dissertation, we develop a novel paradigm for incorporating ensemble learning classification methodology into a nonparametric regression framework. Specifically, in Chapter 3 we set up a modified stacking framework and combine different classifiers to make better predictions in nonparametric regression settings. In Chapter 4 we develop a method that incorporates a modified AdaBoost algorithm in nonparametric regression settings to improve classification accuracy. In Chapter 5 we propose a dynamic ensemble integration based on multiple meta-learning strategies for nonparametric-regression-based classification. In Chapter 6, we revisit the Raman spectroscopy data from Chapter 2 and make improvements based on the methods developed in Chapters 3 and 4. Finally, we summarize the major findings and contributions of this work and identify opportunities for future research and their public health implications.
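Plain stacking, the starting point that the dissertation modifies, combines base classifiers through a meta-learner trained on their cross-validated predictions. A generic sketch using scikit-learn on synthetic data follows; the base learners, meta-learner, and dataset here are arbitrary illustrations, not the modified framework or the Raman data of the dissertation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a spectral classification problem.
X, y = make_classification(n_samples=400, n_features=20,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base classifiers feed out-of-fold predictions to a logistic meta-learner.
stack = StackingClassifier(
    estimators=[("knn", KNeighborsClassifier()),
                ("rf", RandomForestClassifier(random_state=0))],
    final_estimator=LogisticRegression(),
    cv=5)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The `cv=5` argument is what keeps the meta-learner honest: it is trained on predictions for held-out folds rather than on in-sample fits of the base classifiers.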
4

Gradient-Based Sensitivity Analysis with Kernels

Wycoff, Nathan Benjamin 20 August 2021 (has links)
Emulation of computer experiments via surrogate models can be difficult when the number of input parameters determining the simulation grows beyond a few dozen. In this dissertation, we explore dimension reduction in the context of computer experiments. The active subspace method is a linear dimension reduction technique which uses the gradients of a function to determine important input directions. Unfortunately, we cannot expect to always have access to the gradients of our black-box functions. We thus begin by developing an estimator for the active subspace of a function, using kernel methods to indirectly estimate the gradient. We then demonstrate how to deploy the learned input directions to improve the predictive performance of local regression models by "undoing" the active subspace. Finally, we develop notions of sensitivities which are local to certain parts of the input space, which we then use to develop a Bayesian optimization algorithm that can exploit locally important directions. / Doctor of Philosophy / Increasingly, scientists and engineers developing new understanding or products rely on computers to simulate complex phenomena. Sometimes, these computer programs are so detailed that the amount of time they take to run becomes a serious issue. Surrogate modeling is the problem of trying to predict a computer experiment's result without having to actually run it, on the basis of having observed the behavior of similar simulations. Typically, computer experiments have different settings which induce different behavior. When there are many different settings to tweak, typical surrogate modeling approaches can struggle. In this dissertation, we develop a technique for deciding which input settings, or even which combinations of input settings, we should focus our attention on when trying to predict the output of the computer experiment.
We then deploy this technique both to the prediction of computer experiment outputs and to finding which input settings yield a particular desired result.
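The core active-subspace computation can be sketched compactly: average the outer products of gradient samples and take the leading eigenvectors. The sketch below uses an analytic gradient of a toy ridge function with one truly active direction; the dissertation's contribution is precisely that it does *not* assume gradients are available, estimating them indirectly with kernel methods instead.

```python
import numpy as np

def active_subspace(grads, k):
    # C = E[grad f grad f^T], estimated by averaging sampled gradients;
    # its leading eigenvectors span the (estimated) active subspace.
    C = grads.T @ grads / grads.shape[0]
    eigval, eigvec = np.linalg.eigh(C)          # ascending order
    order = np.argsort(eigval)[::-1]            # sort descending
    return eigval[order], eigvec[:, order[:k]]

rng = np.random.default_rng(1)
d = 10
a = rng.normal(size=d)                          # f(x) = sin(a @ x): one active direction
X = rng.uniform(-1.0, 1.0, size=(500, d))
grads = np.cos(X @ a)[:, None] * a[None, :]     # analytic gradient rows
eigval, W = active_subspace(grads, k=1)
```

Since every gradient sample is a scalar multiple of `a`, the leading eigenvector recovers `a` up to sign and normalization, and the remaining eigenvalues are numerically zero, which is the sharp eigenvalue gap the method looks for.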
5

Methods for Quantitatively Describing Tree Crown Profiles of Loblolly Pine (Pinus taeda L.)

Doruska, Paul F. 17 July 1998 (has links)
Physiological process models, productivity studies, and wildlife abundance studies all require accurate representations of tree crowns. In the past, geometric shapes, or flexible mathematical equations approximating geometric shapes, were used to represent crown profiles. Here, the crown profile of loblolly pine (Pinus taeda L.) was described using single-regressor, nonparametric regression analysis in an effort to improve crown representations. The resulting profiles were compared to more traditional representations. Nonparametric regression may be applicable when an underlying parametric model cannot be identified. The modeler does not specify a functional form; rather, a data-driven technique is used to determine the shape of the curve, and the modeler determines the amount of local curvature to be depicted. A class of local-polynomial estimators, which contains the popular kernel estimator as a special case, was investigated. Kernel regression fits closely to the interior data points but often exhibits bias problems at the boundaries of the data, a feature less pronounced in local linear or local quadratic regression. When using nonparametric regression, decisions must be made regarding polynomial order and bandwidth. Such decisions depend on the presence of local curvature, the desired degree of smoothing, and, for bandwidth in particular, the minimization of some global error criterion. In the present study, a penalized PRESS criterion (PRESS*) was selected as the global error criterion. When individual-tree crown profile data are available, nonparametric regression appears capable of capturing more of the tree-to-tree variation in crown shape than multiple linear regression and other published functional forms. Thus, modelers should consider nonparametric regression when describing crown profiles, as well as in any regression situation where traditional techniques perform unsatisfactorily or fail. / Ph. D.
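The local linear estimator mentioned above, which avoids the boundary bias of the plain kernel estimator, can be sketched directly: at each evaluation point, fit a kernel-weighted straight line and take its intercept. This is a generic single-regressor sketch with a hand-picked Gaussian bandwidth, not the dissertation's crown-profile model or its PRESS*-selected bandwidth.

```python
import numpy as np

def local_linear(x_train, y_train, x_eval, h):
    # Local linear regression: at each x0, solve a weighted least-squares
    # problem on the centered design and keep the intercept.
    fits = np.empty(len(x_eval))
    for i, x0 in enumerate(x_eval):
        w = np.exp(-0.5 * ((x_train - x0) / h) ** 2)   # Gaussian kernel weights
        X = np.column_stack([np.ones_like(x_train), x_train - x0])
        XtW = X.T * w
        beta = np.linalg.solve(XtW @ X, XtW @ y_train)  # (X'WX)^-1 X'Wy
        fits[i] = beta[0]                               # fitted value at x0
    return fits

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 1.0, 100))
y = 2.0 + 3.0 * x                                       # exactly linear, no noise
fit = local_linear(x, y, x, h=0.1)
```

A useful property visible in this example: because the local design contains an intercept and slope, local linear regression reproduces any linear mean function exactly, at the boundary as well as in the interior, which is why it sidesteps the kernel estimator's boundary bias.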
6

Bandwidth Selection Concerns for Jump Point Discontinuity Preservation in the Regression Setting Using M-smoothers and the Extension to Hypothesis Testing

Burt, David Allan 31 March 2000 (has links)
Most traditional parametric and nonparametric regression methods operate under the assumption that the true function is continuous over the design space. For methods such as ordinary least squares polynomial regression and local polynomial regression the functional estimates are constrained to be continuous. Fitting a function that is not continuous with a continuous estimate will have practical scientific implications as well as important model misspecification effects. Scientifically, breaks in the continuity of the underlying mean function may correspond to specific physical phenomena that will be hidden from the researcher by a continuous regression estimate. Statistically, misspecifying a mean function as continuous when it is not will result in an increased bias in the estimate. One recently developed nonparametric regression technique that does not constrain the fit to be continuous is the jump preserving M-smooth procedure of Chu, Glad, Godtliebsen & Marron (1998), 'Edge-preserving smoothers for image processing', Journal of the American Statistical Association 93(442), 526-541. Chu et al.'s (1998) M-smoother is defined in such a way that the noise about the mean function is smoothed out while jumps in the mean function are preserved. Before the jump preserving M-smoother can be used in practice the choice of the bandwidth parameters must be addressed. The jump preserving M-smoother requires two bandwidth parameters h and g. These two parameters determine the amount of noise that is smoothed out as well as the size of the jumps which are preserved. If these parameters are chosen haphazardly the resulting fit could exhibit worse bias properties than traditional regression methods which assume a continuous mean function. Currently there are no automatic bandwidth selection procedures available for the jump preserving M-smoother of Chu et al. (1998).
One of the main objectives of this dissertation is to develop an automatic, data-driven bandwidth selection procedure for Chu et al.'s (1998) M-smoother. We actually present two bandwidth selection procedures. The first is a crude rule-of-thumb method and the second is a more sophisticated direct plug-in method. Our bandwidth selection procedures are modeled after the methods of Chu et al. (1998) with two significant modifications which make the methods robust to possible jump points. Another objective of this dissertation is to provide a nonparametric hypothesis test, based on Chu et al.'s (1998) M-smoother, to test for a break in the continuity of an underlying regression mean function. Our proposed hypothesis test is nonparametric in the sense that the mean function away from the jump point(s) is not required to follow a specific parametric model. In addition, the test does not require the user to specify the number, position, or size of the jump points in the alternative hypothesis, as do many current methods. Thus the null and alternative hypotheses for our test are: H0: The mean function is continuous (i.e. no jump points) vs. HA: The mean function is not continuous (i.e. there is at least one jump point). Our testing procedure takes the form of a critical bandwidth hypothesis test. The test statistic is essentially the largest bandwidth that allows Chu et al.'s (1998) M-smoother to satisfy the null hypothesis. The significance of the test is then calculated via a bootstrap method. This test is currently in the experimental stage of its development. In this dissertation we outline the steps required to calculate the test as well as assess the power based on a small simulation study. Future work, such as a faster calculation algorithm, is required before the testing procedure will be practical for the general user. / Ph. D.
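The two-bandwidth idea can be sketched as a fixed-point iteration: the design-space bandwidth `h` controls which observations are local, while the response-space bandwidth `g` downweights observations whose responses sit on the other side of a jump. This is a simplified, bilateral-filter-style sketch of the jump-preserving principle, not Chu et al.'s (1998) exact estimator, and the bandwidths here are picked by hand rather than by the selection procedures the dissertation develops.

```python
import numpy as np

def m_smooth(x, y, x_eval, h, g, n_iter=20):
    # Jump-preserving local averaging: weights combine a kernel in the
    # design space (bandwidth h) with a kernel in the response space
    # (bandwidth g) centered at the current fitted value.
    fits = np.empty(len(x_eval))
    for j, x0 in enumerate(x_eval):
        kx = np.exp(-0.5 * ((x - x0) / h) ** 2)     # locality in x
        theta = np.average(y, weights=kx)            # start: plain local mean
        for _ in range(n_iter):
            ky = np.exp(-0.5 * ((y - theta) / g) ** 2)  # downweight far responses
            theta = np.average(y, weights=kx * ky)
        fits[j] = theta
    return fits

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 300)
y = np.where(x < 0.5, 0.0, 2.0) + rng.normal(0.0, 0.1, 300)  # one jump at x = 0.5
fit = m_smooth(x, y, x, h=0.05, g=0.5)
```

The bandwidth trade-off the abstract warns about is visible here: if `g` is much larger than the jump size, points across the jump keep substantial weight and the break gets blurred, while a too-small `g` preserves noise spikes as spurious jumps.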
7

Variational Estimators in Statistical Multiscale Analysis

Li, Housen 17 February 2016 (has links)
No description available.
8

Testy dobré shody při rušivých parametrech / Goodness of fit tests with nuisance parameters

Baňasová, Barbora January 2015 (has links)
This thesis deals with goodness of fit tests in a nonparametric model in the presence of unknown parameters of the probability distribution. The first part is devoted to the theoretical basis. We compare two methodologies for the construction of test statistics, one applying the empirical characteristic function and the other the empirical distribution function. We use kernel estimates of regression functions and a parametric bootstrap method to approximate the critical values of the tests. In the second part of the thesis, the work is complemented with a simulation study for different choices of weighting functions and parameters. Finally, we illustrate the use and the comparison of goodness of fit tests on an example with a real data set.
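The parametric bootstrap approach to critical values can be sketched for the simplest case: a Kolmogorov-Smirnov-type statistic with the nuisance parameters (here, a normal mean and standard deviation) re-estimated in every bootstrap sample. This is a generic illustration of the principle, not the thesis's specific statistics or weighting functions, and it assumes SciPy is available.

```python
import numpy as np
from scipy import stats

def gof_pvalue(data, n_boot=500, seed=0):
    # KS statistic with estimated (nuisance) mean and sd; because the
    # parameters are estimated, the usual KS null distribution is wrong,
    # so we approximate it by a parametric bootstrap.
    rng = np.random.default_rng(seed)

    def ks_stat(x):
        mu, sd = x.mean(), x.std(ddof=1)
        return stats.kstest(x, "norm", args=(mu, sd)).statistic

    t_obs = ks_stat(data)
    mu, sd = data.mean(), data.std(ddof=1)
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        # Resample from the *fitted* null model, re-estimating parameters
        # inside ks_stat so the bootstrap mimics the estimation step.
        t_boot[b] = ks_stat(rng.normal(mu, sd, size=len(data)))
    return float((t_boot >= t_obs).mean())

rng = np.random.default_rng(4)
p_norm = gof_pvalue(rng.normal(0.0, 1.0, 200))      # null is true
p_exp = gof_pvalue(rng.exponential(1.0, 200))       # null is false
```

Re-estimating the parameters within each bootstrap replicate is the essential point: it reproduces the effect of the nuisance-parameter estimation on the null distribution of the statistic.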
9

Régression isotonique itérée / Iterative isotonic regression

Jégou, Nicolas 23 November 2012 (has links)
This thesis falls within the scope of univariate nonparametric regression. Assuming the regression function is of bounded variation, Jordan's decomposition ensures that it can be written as the sum of an increasing function and a decreasing function. We propose and analyse a novel estimator which combines isotonic regression, devoted to the estimation of monotone functions, with the backfitting algorithm used for the estimation of additive models. More precisely, our method consists of iterating isotonic regression within the backfitting algorithm, so that at each iteration we obtain an estimator of the regression function that is the sum of an increasing part and a decreasing part. The first chapter provides an overview of the references related to isotonic regression and additive models. The next chapter is devoted to the theoretical study of iterative isotonic regression. As a first step, we show that, for a fixed sample size, increasing the number of iterations leads to interpolation of the data. Moreover, we identify the limits of the individual terms of the sum by making a connection with the general isotonicity of projection onto convex cones and deriving an equivalent algorithm based on iterative bias reduction. Finally, we establish the consistency of the estimator. The third chapter is devoted to the practical study of the estimator. As increasing the number of iterations leads to overfitting, it is not desirable to iterate the procedure until convergence. We examine stopping criteria based on adaptations of criteria usually employed with linear smoothing methods (AIC, BIC, ...) as well as criteria assuming prior knowledge of the number of modes of the regression function. The method exhibits interesting behavior when the regression function has breakpoints, so we apply the algorithm to CGH-array data, where breakpoint detection is of crucial interest. Finally, an application to the estimation of unimodal functions and to mode detection is proposed.
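The backfitting-plus-isotonic-regression loop described above can be sketched with scikit-learn's isotonic regression as the building block: alternately fit an increasing component to the residuals of the decreasing one and vice versa. This is an illustrative sketch on synthetic unimodal data; the small, fixed iteration count stands in for the data-driven stopping rules the thesis studies, since iterating to convergence would interpolate the data.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def iterative_isotonic(x, y, n_iter=3):
    # Backfit f = u + d with u nondecreasing and d nonincreasing.
    inc = IsotonicRegression(increasing=True)
    dec = IsotonicRegression(increasing=False)
    u = np.zeros_like(y)
    d = np.zeros_like(y)
    for _ in range(n_iter):
        u = inc.fit_transform(x, y - d)   # isotonic fit to partial residuals
        d = dec.fit_transform(x, y - u)   # antitonic fit to the rest
    return u + d, u, d

rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(np.pi * x) + rng.normal(0.0, 0.1, 200)  # unimodal: one rise, one fall
fit, u, d = iterative_isotonic(x, y, n_iter=3)
```

Each component remains monotone by construction at every iteration, so the fitted sum is exactly the increasing-plus-decreasing decomposition that the bounded-variation assumption guarantees exists.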
10

Multiscale Change-point Segmentation: Beyond Step Functions

Guo, Qinghai 03 February 2017 (has links)
No description available.
