1

Time varying-coefficient models

Ambler, Gareth January 1996
No description available.
2

Model Robust Regression Based on Generalized Estimating Equations

Clark, Seth K. 04 April 2002
One form of model robust regression (MRR) predicts mean response as a convex combination of a parametric and a nonparametric prediction. MRR is a semiparametric method by which an incompletely or an incorrectly specified parametric model can be improved through adding an appropriate amount of a nonparametric fit. The combined predictor can have less bias than the parametric model estimate alone and less variance than the nonparametric estimate alone. Additionally, as shown in previous work for uncorrelated data with linear mean function, MRR can converge faster than the nonparametric predictor alone. We extend the MRR technique to the problem of predicting mean response for clustered non-normal data. We combine a nonparametric method based on local estimation with a global, parametric generalized estimating equations (GEE) estimate through a mixing parameter on both the mean scale and the linear predictor scale. As a special case, when data are uncorrelated, this amounts to mixing a local likelihood estimate with predictions from a global generalized linear model. Cross-validation bandwidth and optimal mixing parameter selectors are developed. The global fits and the optimal and data-driven local and mixed fits are studied under no/some/substantial model misspecification via simulation. The methods are then illustrated through application to data from a longitudinal study. / Ph. D.
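
The convex-combination idea at the heart of MRR is easy to sketch for the uncorrelated, single-predictor special case. Below is a minimal illustration in Python, assuming a Gaussian-kernel local mean for the nonparametric piece and a simple linear model for the parametric piece; the GEE machinery, cross-validation bandwidth selection, and the data-driven mixing-parameter selector developed in the dissertation are omitted, and all names are illustrative.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, h):
    """Local (nonparametric) mean estimate with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

def mrr_predict(x_train, y_train, x_eval, h, lam):
    """Model robust regression sketch: convex combination of a global
    parametric fit and a local nonparametric fit (mixing on the mean
    scale, uncorrelated-data special case)."""
    # Global parametric piece: simple linear model via least squares.
    X = np.column_stack([np.ones_like(x_train), x_train])
    beta = np.linalg.lstsq(X, y_train, rcond=None)[0]
    parametric = beta[0] + beta[1] * x_eval
    # Local nonparametric piece.
    nonparametric = nadaraya_watson(x_train, y_train, x_eval, h)
    # lam = 0 recovers the parametric fit, lam = 1 the nonparametric one.
    return (1 - lam) * parametric + lam * nonparametric

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(2 * np.pi * x) + 0.5 * x + rng.normal(0, 0.2, 100)  # misspecified if modeled linearly
y_hat = mrr_predict(x, y, x, h=0.05, lam=0.7)
```
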
3

Bandwidth Selection Concerns for Jump Point Discontinuity Preservation in the Regression Setting Using M-smoothers and the Extension to Hypothesis Testing

Burt, David Allan 31 March 2000
Most traditional parametric and nonparametric regression methods operate under the assumption that the true function is continuous over the design space. For methods such as ordinary least squares polynomial regression and local polynomial regression, the functional estimates are constrained to be continuous. Fitting a function that is not continuous with a continuous estimate will have practical scientific implications as well as important model misspecification effects. Scientifically, breaks in the continuity of the underlying mean function may correspond to specific physical phenomena that will be hidden from the researcher by a continuous regression estimate. Statistically, misspecifying a mean function as continuous when it is not will result in an increased bias in the estimate. One recently developed nonparametric regression technique that does not constrain the fit to be continuous is the jump-preserving M-smooth procedure of Chu, Glad, Godtliebsen & Marron (1998), 'Edge-preserving smoothers for image processing', Journal of the American Statistical Association 93(442), 526-541. Chu et al.'s (1998) M-smoother is defined in such a way that the noise about the mean function is smoothed out while jumps in the mean function are preserved. Before the jump-preserving M-smoother can be used in practice, the choice of the bandwidth parameters must be addressed. The jump-preserving M-smoother requires two bandwidth parameters h and g. These two parameters determine the amount of noise that is smoothed out as well as the size of the jumps which are preserved. If these parameters are chosen haphazardly, the resulting fit could exhibit worse bias properties than traditional regression methods which assume a continuous mean function. Currently there are no automatic bandwidth selection procedures available for the jump-preserving M-smoother of Chu et al. (1998). One of the main objectives of this dissertation is to develop an automatic data-driven bandwidth selection procedure for Chu et al.'s (1998) M-smoother. We actually present two bandwidth selection procedures. The first is a crude rule-of-thumb method and the second is a more sophisticated direct plug-in method. Our bandwidth selection procedures are modeled after the methods of Chu et al. (1998) with two significant modifications which make the methods robust to possible jump points. Another objective of this dissertation is to provide a nonparametric hypothesis test, based on Chu et al.'s (1998) M-smoother, to test for a break in the continuity of an underlying regression mean function. Our proposed hypothesis test is nonparametric in the sense that the mean function away from the jump point(s) is not required to follow a specific parametric model. In addition, the test does not require the user to specify the number, position, or size of the jump points in the alternative hypothesis as do many current methods. Thus the null and alternative hypotheses for our test are: H0: The mean function is continuous (i.e. no jump points) vs. HA: The mean function is not continuous (i.e. there is at least one jump point). Our testing procedure takes the form of a critical bandwidth hypothesis test. The test statistic is essentially the largest bandwidth that allows Chu et al.'s (1998) M-smoother to satisfy the null hypothesis. The significance of the test is then calculated via a bootstrap method. This test is currently in the experimental stage of its development.
In this dissertation we outline the steps required to calculate the test and assess its power in a small simulation study. Future work, such as a faster calculation algorithm, is required before the testing procedure is practical for the general user. / Ph. D.
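
For orientation, here is a minimal sketch of the double-kernel idea behind a jump-preserving M-smoother, assuming the local estimate maximizes a Gaussian-weighted objective in which h controls spatial smoothing and g controls how large a residual (and hence a jump) is downweighted. The fixed-point update shown is one standard way to compute such an estimate; the dissertation's actual estimator and its bandwidth selectors are not reproduced here.

```python
import numpy as np

def m_smooth(x, y, x0, h, g, n_iter=50):
    """Sketch in the spirit of Chu et al. (1998): at each point x0,
    maximize sum_i K((x_i - x0)/h) * exp(-(y_i - theta)^2 / (2 g^2))
    over theta via a fixed-point (iteratively reweighted mean) update.
    Observations on the far side of a jump get near-zero residual
    weight, so the jump is preserved rather than blurred."""
    kx = np.exp(-0.5 * ((x - x0) / h) ** 2)          # spatial kernel weights
    theta = np.sum(kx * y) / np.sum(kx)              # start from the local mean
    for _ in range(n_iter):
        ky = np.exp(-0.5 * ((y - theta) / g) ** 2)   # residual (robustness) weights
        w = kx * ky
        theta = np.sum(w * y) / np.sum(w)            # reweighted local mean
    return theta

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
y = np.where(x < 0.5, 0.0, 2.0) + rng.normal(0, 0.3, 200)  # mean with one jump
fit = np.array([m_smooth(x, y, x0, h=0.05, g=0.5) for x0 in x])
```
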
4

Gradient-Based Sensitivity Analysis with Kernels

Wycoff, Nathan Benjamin 20 August 2021
Emulation of computer experiments via surrogate models can be difficult when the number of input parameters determining the simulation grows beyond a few dozen. In this dissertation, we explore dimension reduction in the context of computer experiments. The active subspace method is a linear dimension reduction technique which uses the gradients of a function to determine important input directions. Unfortunately, we cannot expect to always have access to the gradients of our black-box functions. We thus begin by developing an estimator for the active subspace of a function using kernel methods to indirectly estimate the gradient. We then demonstrate how to deploy the learned input directions to improve the predictive performance of local regression models by "undoing" the active subspace. Finally, we develop notions of sensitivities which are local to certain parts of the input space, and use them to develop a Bayesian optimization algorithm that can exploit locally important directions. / Doctor of Philosophy / Increasingly, scientists and engineers developing new understanding or products rely on computers to simulate complex phenomena. Sometimes, these computer programs are so detailed that the amount of time they take to run becomes a serious issue. Surrogate modeling is the problem of trying to predict a computer experiment's result without having to actually run it, on the basis of having observed the behavior of similar simulations. Typically, computer experiments have different settings which induce different behavior. When there are many different settings to tweak, typical surrogate modeling approaches can struggle. In this dissertation, we develop a technique for deciding which input settings, or even which combinations of input settings, we should focus our attention on when trying to predict the output of the computer experiment. We then deploy this technique both to the prediction of computer experiment outputs and to finding which of the input settings yields a particular desired result.
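
The active subspace construction itself is compact: collect gradient samples, form their empirical second-moment matrix, and keep the leading eigenvectors. A minimal sketch follows, using finite differences as a stand-in for the kernel-based gradient estimates developed in the dissertation; the toy function and dimensions are illustrative.

```python
import numpy as np

def active_subspace(grads, k):
    """Estimate an active subspace from gradient samples: form
    C = E[grad f grad f^T] and take the top-k eigenvectors."""
    C = grads.T @ grads / grads.shape[0]
    eigvals, eigvecs = np.linalg.eigh(C)     # ascending eigenvalue order
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order[:k]]

# Toy black box: f depends on its 10 inputs only through one direction a.
rng = np.random.default_rng(2)
d, n, eps = 10, 500, 1e-5
a = rng.normal(size=d); a /= np.linalg.norm(a)
f = lambda x: np.sin(x @ a)
xs = rng.normal(size=(n, d))
# Central finite differences stand in for kernel-based gradient estimation.
grads = np.array([[(f(x + eps * np.eye(d)[j]) - f(x - eps * np.eye(d)[j])) / (2 * eps)
                   for j in range(d)] for x in xs])
eigvals, W = active_subspace(grads, k=1)
# W[:, 0] should align (up to sign) with the true direction a.
```
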
5

Variational Estimators in Statistical Multiscale Analysis

Li, Housen 17 February 2016
No description available.
6

Testy dobré shody při rušivých parametrech / Goodness of fit tests with nuisance parameters

Baňasová, Barbora January 2015
This thesis deals with goodness of fit tests in a nonparametric model in the presence of unknown parameters of the probability distribution. The first part is devoted to the theoretical basis. We compare two methodologies for the construction of test statistics, one applying the empirical characteristic function and one the empirical distribution function. We use kernel estimates of regression functions and the parametric bootstrap method to approximate the critical values of the tests. In the second part of the thesis, the work is complemented with a simulation study for different choices of weighting functions and parameters. Finally, we illustrate and compare the goodness of fit tests on an example with a real data set.
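
As a concrete illustration of the parametric bootstrap with nuisance parameters, the sketch below tests normality with the mean and variance estimated from the data, using a Cramér-von Mises EDF statistic. The thesis also treats statistics based on the empirical characteristic function and kernel regression estimates, which are not reproduced here; names and choices below are illustrative.

```python
import numpy as np
from scipy.stats import norm

def cvm_stat(x, cdf):
    """Cramer-von Mises statistic against a fitted CDF."""
    x = np.sort(x)
    n = len(x)
    u = cdf(x)
    return 1 / (12 * n) + np.sum((u - (2 * np.arange(1, n + 1) - 1) / (2 * n)) ** 2)

def gof_pvalue(x, n_boot=500, rng=None):
    """Parametric bootstrap p-value for a normality test: the nuisance
    parameters (mean, variance) are re-estimated in every bootstrap
    sample, exactly as they were for the observed data."""
    if rng is None:
        rng = np.random.default_rng(0)
    mu, sigma = x.mean(), x.std(ddof=1)
    t_obs = cvm_stat(x, lambda v: norm.cdf(v, mu, sigma))
    t_boot = []
    for _ in range(n_boot):
        xb = rng.normal(mu, sigma, size=len(x))
        mub, sb = xb.mean(), xb.std(ddof=1)
        t_boot.append(cvm_stat(xb, lambda v: norm.cdf(v, mub, sb)))
    return np.mean(np.array(t_boot) >= t_obs)
```
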
7

Régression isotonique itérée / Iterative isotonic regression

Jégou, Nicolas 23 November 2012
This work lies within the framework of univariate nonparametric regression. Assuming the regression function is of bounded variation, Jordan's decomposition ensures that it can be written as the sum of an increasing function and a decreasing function. We propose and analyse a novel estimator which combines the isotonic regression techniques used for estimating monotone functions with the backfitting algorithm devoted to the estimation of additive models. More precisely, our method consists of iterating the isotonic regression within the backfitting algorithm, so that at each iteration the estimator of the regression function is the sum of an increasing part and a decreasing part. The first chapter provides an overview of the literature on isotonic regression and additive models. The next chapter is devoted to the theoretical study of iterative isotonic regression. As a first step, we show that, for a fixed sample size, increasing the number of iterations leads to interpolation of the data. We manage to identify the limits of the individual terms of the sum by making a connection with the isotonicity of projection onto convex cones and deriving another equivalent algorithm based on iterative bias reduction. Finally, we establish the consistency of the estimator. The third chapter is devoted to the practical study of the estimator. As increasing the number of iterations leads to overfitting, it is not desirable to iterate the procedure until convergence. We examine stopping criteria based on adaptations of criteria usually employed in the context of linear smoothing methods (AIC, BIC, ...) as well as criteria assuming a priori knowledge of the number of modes of the regression function. The method shows interesting behavior when the regression function has breakpoints, and we apply the algorithm to CGH-array data, where breakpoint detection is of crucial interest. Finally, an application to the estimation of unimodal functions and to mode detection is proposed.
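
The backfitting loop described above is short enough to sketch directly. The version below uses scikit-learn's isotonic_regression and assumes the responses are already ordered by the design points; the stopping criteria studied in the thesis are replaced by a fixed, small iteration count.

```python
import numpy as np
from sklearn.isotonic import isotonic_regression

def iterative_isotonic(y, n_iter=3):
    """Sketch of iterative isotonic regression: backfit an increasing
    part u and a decreasing part d so that u + d approximates y
    (y assumed ordered by the design points). Iterating to convergence
    interpolates the data, so n_iter acts as the stopping/regularization
    parameter discussed above."""
    u = np.zeros_like(y, dtype=float)  # increasing component
    d = np.zeros_like(y, dtype=float)  # decreasing component
    for _ in range(n_iter):
        u = isotonic_regression(y - d, increasing=True)
        d = isotonic_regression(y - u, increasing=False)
    return u + d, u, d

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 200)
y = np.sin(3 * np.pi * x) + rng.normal(0, 0.2, 200)  # bounded-variation target
fit, u, d = iterative_isotonic(y, n_iter=3)
```
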
8

Multiscale Change-point Segmentation: Beyond Step Functions

Guo, Qinghai 03 February 2017
No description available.
9

Multiscale Total Variation Estimators for Regression and Inverse Problems

Álamo, Miguel del 24 May 2019
No description available.
10

Análise da série do índice de Depósito Interfinanceiro: modelagem da volatilidade e apreçamento de suas opções. / Analysis of Brazilian Interbank Deposit Index series: volatility modeling and option pricing

Mauad, Roberto Baltieri 05 December 2013
Many models currently used for pricing interest rate derivatives make excessively restrictive assumptions about the volatility of the underlying asset's series. The Black-Scholes and Vasicek models, for instance, treat the variance of the series as constant over time and across maturities, an assumption that may not be the most appropriate in all cases. In this context, kernel regressions stand out among the alternative volatility-modeling techniques that have been studied. We discuss nonparametric modeling using this technique and subsequent option pricing in a Gaussian HJM model. We analyze different possible specifications for the nonparametric estimation of the volatility function through Monte Carlo simulations for the pricing of options on zero coupon bonds, and conduct an empirical study using the proposed methodology for the pricing of options on the Interbank Deposit Index (IDI) in the Brazilian market. One of our main results is the good fit of the proposed methodology in the pricing of options on zero coupon bonds.
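
As a rough illustration of the kernel-regression building block, the sketch below uses a Nadaraya-Watson estimate of a volatility function from squared increments; the Gaussian HJM pricing layer and the Monte Carlo option-pricing study are beyond this sketch, and all names are illustrative.

```python
import numpy as np

def kernel_volatility(t_obs, r_obs, t_eval, h):
    """Nadaraya-Watson estimate of a volatility function: smooth squared
    (zero-mean) increments against time/maturity with a Gaussian kernel
    and take the square root. A stand-in for the nonparametric volatility
    specifications compared in the dissertation."""
    w = np.exp(-0.5 * ((t_eval[:, None] - t_obs[None, :]) / h) ** 2)
    var_hat = (w @ (r_obs ** 2)) / w.sum(axis=1)
    return np.sqrt(var_hat)

# Toy use: recover a time-varying volatility from simulated increments.
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 500)
true_sigma = 0.1 + 0.2 * t                 # volatility rising with maturity
r = true_sigma * rng.normal(size=t.size)   # zero-mean increments
sigma_hat = kernel_volatility(t, r, t, h=0.05)
```
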
