1

A Graphical Analysis of Simultaneously Choosing the Bandwidth and Mixing Parameter for Semiparametric Regression Techniques

Rivers, Derick L. 31 July 2009 (has links)
There has been extensive research done in the area of semiparametric regression. These techniques deliver substantial improvements over previously developed methods, such as Ordinary Least Squares and Kernel Regression. Two of these hybrid techniques, Model Robust Regression 1 (MRR1) and Model Robust Regression 2 (MRR2), require the choice of an appropriate bandwidth for smoothing and a mixing parameter that allows a portion of a nonparametric fit to be used in fitting a model that may be misspecified by other regression methods. The current method of choosing the bandwidth and mixing parameter does not guarantee the optimal choices in either case. The immediate objective of the current work is to address this process of choosing the optimal bandwidth and mixing parameter and to examine the behavior of these estimates using 3D plots. The 3D plots allow us to examine how the semiparametric techniques MRR1 and MRR2 behave for the optimal (AVEMSE) selection process when compared to data-driven selectors such as PRESS* and PRESS**. It was found that the structure of MRR2 behaved consistently under all conditions. MRR2 displayed a wider range of "acceptable" values for the choice of bandwidth, as opposed to a much more limited choice when using MRR1. These results provide general support for earlier findings by Mays et al. (2000).
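As a rough sketch of the mixing structure described in this abstract (assumed forms based on the commonly cited MRR definitions, not the author's code): MRR1 takes a convex combination of an OLS fit and a kernel fit of the raw data, while MRR2 adds a portion of a kernel smooth of the OLS residuals back onto the parametric fit, so the bandwidth h and the mixing parameter lambda must be chosen jointly, for example by scanning a grid of (h, lambda) pairs against a criterion such as PRESS*.

```python
import numpy as np

def nw_smooth(x, y, h, grid):
    """Nadaraya-Watson smoother with a Gaussian kernel (illustrative choice)."""
    K = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (K @ y) / K.sum(axis=1)

def mrr_fits(x, y, h, lam):
    """Sketch of MRR1/MRR2-style fits at the observed x values (assumed forms)."""
    X = np.column_stack([np.ones_like(x), x])         # simple linear parametric part
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    ols = X @ beta                                    # parametric (OLS) fit
    ker = nw_smooth(x, y, h, x)                       # kernel fit of the raw data
    res = nw_smooth(x, y - ols, h, x)                 # kernel fit of the OLS residuals
    mrr1 = lam * ker + (1.0 - lam) * ols              # MRR1: convex mix of the two fits
    mrr2 = ols + lam * res                            # MRR2: add a portion of the residual smooth
    return mrr1, mrr2
```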
2

Choosing a Kernel for Cross-Validation

Savchuk, Olga 14 January 2010 (has links)
The statistical properties of cross-validation bandwidths can be improved by choosing an appropriate kernel, different from the kernels traditionally used for cross-validation. In light of this idea, we developed two new methods of bandwidth selection, termed Indirect cross-validation and Robust one-sided cross-validation. The kernels used in the Indirect cross-validation method improve the relative bandwidth convergence rate to n^{-1/4}, which is substantially better than the n^{-1/10} rate of the least squares cross-validation method. The robust kernels used in the Robust one-sided cross-validation method eliminate the bandwidth bias in the case of regression functions with discontinuous derivatives.
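For reference, a minimal sketch of the least-squares cross-validation baseline that the abstract compares against; indirect cross-validation itself swaps in a different selection kernel and is not reproduced here. The Gaussian kernel, the simulated sample, and the grid search are illustrative assumptions.

```python
import numpy as np

def gauss(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def lscv(h, x):
    """Least-squares cross-validation score for a Gaussian-kernel density estimate."""
    n = len(x)
    d = (x[:, None] - x[None, :]) / h
    # Integral of the squared KDE, using the closed form for the Gaussian convolution K*K.
    int_f2 = (np.exp(-0.25 * d**2) / (2.0 * np.sqrt(np.pi))).sum() / (n**2 * h)
    # Leave-one-out density at each data point: drop the diagonal (self) terms.
    loo = (gauss(d).sum(axis=1) - gauss(0.0)) / ((n - 1) * h)
    return int_f2 - 2.0 * loo.mean()

rng = np.random.default_rng(0)
x = rng.normal(size=200)
hs = np.linspace(0.05, 1.0, 60)
h_lscv = hs[np.argmin([lscv(h, x) for h in hs])]      # data-driven bandwidth choice
```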
3

Adaptiva metoder för systemidentifiering med inriktning mot direkt viktoptimering / Adaptive Bandwidth Selection for Nonlinear System Identification with Focus on Direct Weight Optimization

Gillberg, Tony January 2010 (has links)
Direct Weight Optimization (DWO) is a nonparametric system identification method. In DWO, the value of a function at a given point is estimated by a weighted sum of measured values, where the weights are obtained as the solution to a convex optimization problem. DWO has a design parameter that must be chosen or estimated a priori, and there are many ways to estimate it. The main focus of this thesis is to estimate this parameter locally. The advantage of a local estimate is that it adapts if the system changes behavior from slowly varying to rapidly varying; estimation methods of this type are usually called adaptive methods. A number of adaptive estimation methods exist, and some of them have already been adapted to DWO, but no evaluation studies have been done. The goal of this thesis is therefore to find out how best to estimate the design parameter in DWO locally, and whether DWO is a good base for an adaptive method. It turns out that DWO may be too sensitive to local changes in the design parameter to be a good base for an adaptive method. However, one of the adaptive estimation methods stands out by performing much better than the others when it perhaps should not; why this method works so well is a good subject for further research.
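A hedged sketch of the weighted-sum structure described above: the estimate at a point is a weighted sum of measurements, with the weights obtained from a convex program. The criterion below is an assumed worst-case-MSE-style bound, with a curvature bound L and noise variance sigma2 standing in for the design parameter; the thesis's exact formulation may differ.

```python
import numpy as np
from scipy.optimize import minimize

def dwo_estimate(x, y, x0, L=1.0, sigma2=0.1):
    """DWO-style estimate at x0: a weighted sum of measurements, with weights
    minimizing an assumed upper bound on the mean squared error."""
    d = x - x0
    def bound(w):
        # Squared bias bound (under an assumed |f''| <= L) plus the variance term.
        return (0.5 * L * np.sum(np.abs(w) * d**2)) ** 2 + sigma2 * np.sum(w**2)
    cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0},   # reproduce constants exactly
            {"type": "eq", "fun": lambda w: np.sum(w * d)}]     # reproduce linear trends exactly
    w0 = np.full(len(x), 1.0 / len(x))
    w = minimize(bound, w0, method="SLSQP", constraints=cons).x
    return np.sum(w * y)
```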
4

Mean preservation in censored regression using preliminary nonparametric smoothing

Heuchenne, Cédric 18 August 2005 (has links)
In this thesis, we consider the problem of estimating the regression function in location-scale regression models. This model assumes that the random vector (X,Y) satisfies Y = m(X) + s(X)e, where m(.) is an unknown location function (e.g. conditional mean, median, truncated mean, ...), s(.) is an unknown scale function, and e is independent of X. The response Y is subject to random right censoring, and the covariate X is completely observed. In the first part of the thesis, we assume that m(x) = E(Y|X=x) follows a polynomial model. A new estimation procedure for the unknown regression parameters is proposed, which extends the classical least squares procedure to censored data. The proposed method is inspired by the method of Buckley and James (1979), but, unlike the latter, is a non-iterative procedure thanks to a preliminary nonparametric estimation step. The asymptotic normality of the estimators is established. Simulations are carried out for both methods and show that the proposed estimators usually have smaller variance and smaller mean squared error than the Buckley-James estimators. In the second part, we suppose that m(.) = E(Y|.) belongs to some parametric class of regression functions. A new estimation procedure for the true, unknown vector of parameters is proposed, which extends the classical least squares procedure for nonlinear regression to the case where the response is subject to censoring. The proposed technique uses new 'synthetic' data points that are constructed by using a nonparametric relation between Y and X. The consistency and asymptotic normality of the proposed estimator are established, and the estimator is compared via simulations with an estimator proposed by Stute (1999). In the third part, we study the nonparametric estimation of the regression function m(.). It is well known that the completely nonparametric estimator of the conditional distribution F(.|x) of Y given X=x suffers from inconsistency problems in the right tail (Beran, 1981), and hence the location function m(x) cannot be estimated consistently in a completely nonparametric way whenever m(x) involves the right tail of F(.|x) (e.g. the conditional mean). We propose two alternative estimators of m(x) that do not share this inconsistency problem. The idea is to make use of the assumed location-scale model in order to improve the estimation of F(.|x), especially in the right tail. We obtain the asymptotic properties of the two proposed estimators of m(x). Simulations show that the proposed estimators outperform the completely nonparametric estimator in many cases.
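A minimal simulation of the data structure assumed in this abstract: a location-scale model with random right censoring, where only the censored response and the censoring indicator are observed. The specific m, s, error, and censoring distributions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(0.0, 1.0, n)
m = lambda t: 1.0 + 2.0 * t                  # location function (here: conditional mean)
s = lambda t: 0.5 + 0.5 * t                  # scale function
eps = rng.normal(size=n)                     # error term, independent of X
y = m(x) + s(x) * eps                        # latent (uncensored) response
c = rng.exponential(scale=4.0, size=n)       # right-censoring times
z = np.minimum(y, c)                         # observed response, possibly censored
delta = (y <= c).astype(int)                 # censoring indicator: 1 = event observed
```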
5

Data-driven estimation for Aalen's additive risk model

Boruvka, Audrey 02 August 2007 (has links)
The proportional hazards model developed by Cox (1972) is by far the most widely used method for regression analysis of censored survival data. Application of the Cox model to more general event history data has become possible through extensions using counting process theory (e.g., Andersen and Borgan (1985), Therneau and Grambsch (2000)). With its development based entirely on counting processes, Aalen’s additive risk model offers a flexible, nonparametric alternative. Ordinary least squares, weighted least squares and ridge regression have been proposed in the literature as estimation schemes for Aalen’s model (Aalen (1989), Huffer and McKeague (1991), Aalen et al. (2004)). This thesis develops data-driven parameter selection criteria for the weighted least squares and ridge estimators. Using simulated survival data, these new methods are evaluated against existing approaches. A survey of the literature on the additive risk model and a demonstration of its application to real data sets are also provided. / Thesis (Master, Mathematics & Statistics) -- Queen's University, 2007-07-18
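For concreteness, a sketch of the ordinary least squares estimator of the cumulative regression functions B(t) in Aalen's additive risk model, assuming time-fixed covariates and untied event times; the weighted least squares and ridge variants whose tuning the thesis addresses are not shown.

```python
import numpy as np

def aalen_ols(times, events, X):
    """OLS estimator of the cumulative regression functions in Aalen's additive
    risk model (sketch: time-fixed covariates, intercept = baseline term)."""
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])
    event_times = np.sort(np.unique(times[events == 1]))
    B = np.zeros((len(event_times), p + 1))
    cum = np.zeros(p + 1)
    for k, t in enumerate(event_times):
        at_risk = times >= t                                 # risk set just before t
        Yk = Xd * at_risk[:, None]                           # design matrix restricted to the risk set
        dN = ((times == t) & (events == 1)).astype(float)    # counting-process increments at t
        cum += np.linalg.pinv(Yk) @ dN                       # least-squares increment of B(t)
        B[k] = cum
    return event_times, B
```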
6

Bootstrap bandwidth selection in kernel hazard rate estimation / S. Jansen van Vuuren

Van Vuuren, Stefan Jansen January 2011 (has links)
The purpose of this study is to thoroughly discuss kernel hazard function estimation, both in the complete sample case as well as in the presence of random right censoring. Most of the focus is on the very important task of automatic bandwidth selection. Two existing selectors, least-squares cross-validation as described by Patil (1993a) and Patil (1993b), as well as the bootstrap bandwidth selector of Gonzalez-Manteiga, Cao and Marron (1996), will be discussed. The bandwidth selector of Hall and Robinson (2009), which uses bootstrap aggregation (or 'bagging'), will be extended to and evaluated in the setting of kernel hazard rate estimation. We will also make a simple proposal for a bootstrap bandwidth selector. The performance of these bandwidth selectors will be compared empirically in a simulation study. The findings and conclusions of this study are reported. / Thesis (M.Sc. (Statistics))--North-West University, Potchefstroom Campus, 2011.
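A minimal sketch of the kernel hazard rate estimator that the bandwidth selectors above are tuning: the Nelson-Aalen increments are smoothed with a kernel of bandwidth h. A Gaussian kernel and untied event times are assumed, and the cross-validation and bootstrap selectors themselves are not reproduced.

```python
import numpy as np

def kernel_hazard(t_grid, times, delta, h):
    """Kernel hazard rate estimate for right-censored data: smooth the
    Nelson-Aalen increments delta_(i) / (risk-set size at T_(i)) with bandwidth h."""
    order = np.argsort(times)
    t, d = times[order], delta[order]
    n = len(t)
    at_risk = n - np.arange(n)                 # number still at risk at each ordered time
    dLambda = d / at_risk                      # Nelson-Aalen jump sizes (0 at censored times)
    K = np.exp(-0.5 * ((t_grid[:, None] - t[None, :]) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return K @ dLambda
```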
7

The Generalized Splitting method for Combinatorial Counting and Static Rare-Event Probability Estimation

Zdravko Botev Unknown Date (has links)
This thesis is divided into two parts. In the first part we describe a new Monte Carlo algorithm for the consistent and unbiased estimation of multidimensional integrals and the efficient sampling from multidimensional densities. The algorithm is inspired by the classical splitting method and can be applied to general static simulation models. We provide examples from rare-event probability estimation, counting, optimization, and sampling, demonstrating that the proposed method can outperform existing Markov chain sampling methods in terms of convergence speed and accuracy. In the second part we present a new adaptive kernel density estimator based on linear diffusion processes. The proposed estimator builds on existing ideas for adaptive smoothing by incorporating information from a pilot density estimate. In addition, we propose a new plug-in bandwidth selection method that is free from the arbitrary normal reference rules used by existing methods. We present simulation examples in which the proposed approach outperforms existing methods in terms of accuracy and reliability.
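As a hedged illustration of the pilot-density idea that the proposed diffusion estimator builds on, here is the classical Abramson-type adaptive kernel density estimator, which rescales a global bandwidth by the inverse square root of a pilot estimate. This is the existing approach the abstract refers to, not the thesis's diffusion-based estimator or its plug-in bandwidth rule.

```python
import numpy as np

def kde(x_grid, data, h):
    """Fixed-bandwidth Gaussian kernel density estimate."""
    u = (x_grid[:, None] - data[None, :]) / h
    return np.mean(np.exp(-0.5 * u**2) / (h * np.sqrt(2.0 * np.pi)), axis=1)

def adaptive_kde(x_grid, data, h0):
    """Pilot-based adaptive KDE (Abramson square-root law): bandwidths shrink
    where the pilot density is high and grow where it is low."""
    pilot = kde(data, data, h0)                       # pilot density at the data points
    g = np.exp(np.mean(np.log(pilot)))                # geometric-mean normalization
    h_i = h0 * np.sqrt(g / pilot)                     # local bandwidths
    u = (x_grid[:, None] - data[None, :]) / h_i[None, :]
    return np.mean(np.exp(-0.5 * u**2) / (h_i[None, :] * np.sqrt(2.0 * np.pi)), axis=1)
```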
8

Aspects théoriques et pratiques dans l'estimation non paramétrique de la densité conditionnelle pour des données fonctionnelles / Theoretical and practical aspects in non parametric estimation of the conditional density with functional data

Madani, Fethi 11 May 2012 (has links)
In this thesis, we consider the problem of nonparametric estimation of the conditional density when the response variable is real and the regressor takes values in a functional space. In the first part, we use the double-kernel method and focus on the choice of the smoothing parameters. We construct a data-driven method that selects the bandwidths automatically and optimally. As the main result, we establish the asymptotic optimality of this selection method when the observations are independent and identically distributed (i.i.d.). Our selection rule is based on classical cross-validation ideas and covers both global and local bandwidth choices; its performance is illustrated by simulation results on finite samples, in which we compare the two types of choice (local and global). In the second part, in the same topological context, we adopt a functional version of the local linear method to estimate the conditional density. Under general conditions, we establish the almost-complete convergence (with rates) of the proposed estimator in both the i.i.d. and the α-mixing cases. As an application, we use the conditional density estimator to estimate the conditional mode and derive asymptotic properties of the resulting estimator. We also establish the quadratic error of the estimator by giving its exact asymptotic expansion (the leading bias and variance terms). Finally, the applicability of our theoretical results is verified and validated on (1) simulated data and (2) real data.
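A minimal sketch of a double-kernel (Nadaraya-Watson-type) conditional density estimator for a functional regressor, assuming discretized curves, an L2 semi-metric, and Gaussian kernels; the thesis's contribution is the data-driven (cross-validation) choice of the two bandwidths h and g, which is not reproduced here.

```python
import numpy as np

def cond_density(y_grid, x0_curve, X_curves, Y, h, g):
    """Double-kernel estimate of f(y | x0) for a functional regressor:
    kernel weights in the x-direction via a semi-metric, a second kernel in y."""
    # Semi-metric: L2 distance between the discretized curves and the target curve x0
    d = np.sqrt(np.mean((X_curves - x0_curve[None, :]) ** 2, axis=1))
    w = np.exp(-0.5 * (d / h) ** 2)                       # weights from the functional kernel
    Hy = np.exp(-0.5 * ((y_grid[:, None] - Y[None, :]) / g) ** 2) / (g * np.sqrt(2.0 * np.pi))
    return (Hy @ w) / np.sum(w)                           # weighted average of y-kernels
```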
