1 |
Wavelets and adaptive filters / Suhasini, Subba Rao Tata. January 2001
No description available.
|
2 |
Robust Parametric Functional Component Estimation Using a Divergence Family / Silver, Justin. 16 September 2013
The classical parametric estimation approach, maximum likelihood, while providing maximally efficient estimators at the correct model, lacks robustness. As a modification of maximum likelihood, Huber (1964) introduced M-estimators, which are very general but often ad hoc. Basu et al. (1998) developed a family of density-based divergences, many of which exhibit robustness. It turns out that maximum likelihood is a special case of this general class of divergence functions, which are
indexed by a parameter alpha. Basu noted that only values of alpha in the [0,1] range were of interest -- with alpha = 0 giving the maximum likelihood solution and alpha = 1 the L2E solution (Scott, 2001). As alpha increases, there is a clear tradeoff between increasing robustness and decreasing efficiency. This thesis develops a family of robust location and scale estimators by applying Basu's alpha-divergence function to a multivariate partial density component model (Scott, 2004). The usefulness of alpha values greater than 1 will be explored, and the new estimator will be applied to simulated cases and applications in parametric density estimation and regression.
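For reference, the alpha-indexed family referred to here is the density power divergence of Basu et al. (1998). A sketch of its standard form, quoted from the general literature rather than from the thesis itself, is:

```latex
% Density power divergence between the data-generating density g and the
% model density f_theta, indexed by alpha > 0 (Basu et al., 1998)
d_\alpha(g, f_\theta) \;=\; \int \Big\{ f_\theta^{\,1+\alpha}(x)
    - \Big(1 + \tfrac{1}{\alpha}\Big)\, g(x)\, f_\theta^{\,\alpha}(x)
    + \tfrac{1}{\alpha}\, g^{\,1+\alpha}(x) \Big\}\, dx .
```

As alpha tends to 0 this reduces to the Kullback-Leibler divergence, so minimizing it recovers maximum likelihood, while alpha = 1 gives (up to a term free of theta) the integrated squared error criterion behind L2E; intermediate values trade efficiency for robustness, which is the tradeoff the thesis exploits.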
|
3 |
Selecting tuning parameters in minimum distance estimators / Warwick, Jane. January 2002
Many minimum distance estimators have the potential to provide parameter estimates which are both robust and efficient and yet, despite these highly desirable theoretical properties, they are rarely used in practice. This is because the performance of these estimators is rarely guaranteed per se but is instead obtained by placing a suitable value on some tuning parameter. Hence there is a risk involved in implementing these methods: if the value chosen for the tuning parameter is inappropriate for the data to which the method is applied, the resulting estimators may not have the desired theoretical properties and could even perform less well than one of the simpler, more widely used alternatives. There are currently no data-based methods available for deciding what value one should place on these tuning parameters, hence the primary aim of this research is to develop an objective way of selecting values for the tuning parameters in minimum distance estimators so that the full potential of these estimators might be realised. This new method was initially developed to optimise the performance of the density power divergence estimator, which was proposed by Basu, Harris, Hjort and Jones [3]. The results were very promising, so the method was then applied to two other minimum distance estimators and the results compared.
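As a concrete illustration of the kind of data-based tuning the thesis aims at, the sketch below fits a normal model by minimizing the empirical density power divergence for several candidate values of alpha and compares them through a crude bootstrap estimate of mean squared error around a robust pilot fit. This is a hypothetical toy procedure under an assumed normal location-scale model, not the selection method developed in the thesis.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def dpd_objective(params, x, alpha):
    """Empirical density power divergence criterion for a normal location-scale
    model (terms not depending on the parameters are dropped); alpha -> 0
    corresponds to maximum likelihood and alpha = 1 to the L2E criterion."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                        # keep sigma positive
    f = norm.pdf(x, loc=mu, scale=sigma)
    if alpha == 0:                                   # maximum likelihood limit
        return -np.mean(np.log(f))
    # closed form of the integral of f_theta^(1 + alpha) for a normal density
    integral = (2 * np.pi) ** (-alpha / 2) * sigma ** (-alpha) / np.sqrt(1 + alpha)
    return integral - (1 + 1 / alpha) * np.mean(f ** alpha)

def fit_dpd(x, alpha):
    start = np.array([np.median(x), np.log(np.std(x))])
    res = minimize(dpd_objective, start, args=(x, alpha), method="Nelder-Mead")
    mu, log_sigma = res.x
    return np.array([mu, np.exp(log_sigma)])

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 95), rng.normal(8, 1, 5)])   # 5% outliers

pilot = fit_dpd(x, alpha=1.0)                        # robust pilot fit (L2E)
for alpha in (0.0, 0.1, 0.25, 0.5, 1.0):
    boots = np.array([fit_dpd(rng.choice(x, x.size, replace=True), alpha)
                      for _ in range(50)])
    mse = np.mean(np.sum((boots - pilot) ** 2, axis=1))
    est = fit_dpd(x, alpha)
    print(f"alpha={alpha:<4}  mu_hat={est[0]:6.3f}  sigma_hat={est[1]:6.3f}  "
          f"bootstrap MSE vs pilot={mse:6.3f}")
```

The point of the exercise is that the maximum likelihood fit (alpha = 0) is dragged towards the outliers while larger alpha values are not, and that a data-based criterion is needed to decide how large an alpha is worth its loss in efficiency.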
|
4 |
Parametric estimation for randomly censored autocorrelated data. / Sithole, Moses M. January 1997
This thesis is mainly concerned with the estimation of parameters in autoregressive models with censored data. For convenience, attention is restricted to the first-order stationary autoregressive (AR(1)) model in which the response random variables are subject to right-censoring. In their present form, currently available methods of estimation in regression analysis with censored autocorrelated data, which include the MLE, are applicable only if the errors of the AR component of the model are Gaussian. Use of these methods in AR processes with non-Gaussian errors requires, essentially, rederivations of the estimators. Hence, in this thesis, we propose new estimators which are robust in the sense that they can be applied with minor or no modifications to AR models with non-Gaussian errors. We propose three estimators, two of which require the form of the error distribution to be specified. The third is a distribution-free estimator: as the name suggests, it is free from distributional assumptions in the sense that the error distribution is estimated from the observed data. Hence, it can be used in a wide variety of applications.

In the first part of the thesis, we present a summary of the various currently available estimators for the linear regression model with censored independent and identically distributed (i.i.d.) data. In our review of these estimators, we note that the linear regression model with censored i.i.d. data has been studied quite extensively. Yet, the use of autoregressive models with censored data has received very little attention. Hence, the remainder of the thesis focuses on the estimation of parameters for censored autocorrelated data. First, as part of the study, we review currently available estimators in regression with censored autocorrelated data. Then we present descriptions of the new estimators for censored autocorrelated data. With the view that extensions to the AR(p) model, p > 1, and to left-censored data can be easily achieved, all the estimators, both currently available and new, are discussed in the context of the AR(1) model. Next, we establish some asymptotic results for the estimators in which specification of the form of the error distribution is necessary. This is followed by a simulation study based on Monte Carlo experiments in which we evaluate and compare the performances of the new and currently available estimators among themselves and with the least-squares estimator for the uncensored case. The performances of the asymptotic variance estimators of the parameter estimators are also evaluated.

In summary, we establish that for each of the two new estimators for which the distribution of the errors is assumed known, under suitable conditions on the moments of the error distribution function, if the estimator is consistent, then it is also asymptotically normally distributed. For one of these estimators, if the errors are Gaussian and alternate observations are censored, then the estimator is consistent; hence, for this special case, the estimator is consistent and asymptotically normal. The simulation results suggest that this estimator is comparable with the distribution-free estimator and a currently available pseudolikelihood (PL) estimator. All three estimators perform worse than the least-squares estimator for the uncensored case. The MLE and another currently available PL estimator perform comparably not only with the least-squares estimator for the uncensored case but also with the abovementioned group of three estimators, which includes the distribution-free estimator. The other new estimator for which the form of the error distribution is assumed known compares favourably with the least-squares estimator for the uncensored case, and better than the rest of the estimators, when the true value of the autoregression parameter is 0.2. When the true value of the parameter is 0.5, this estimator performs comparably with the rest of the estimators, and worse when the true value of the parameter is 0.8. The simulation results for the asymptotic variance estimators suggest that, for each estimator and for a fixed value of the true autoregression parameter, if the error distribution is fixed and the censoring rate is constant, the asymptotic formulas lead to values which are asymptotically insensitive to the censoring pattern. Also, the estimated asymptotic variances decrease as the sample size increases, and their behaviour with respect to changes in the true value of the autoregression parameter is consistent with the behaviour of the asymptotic variance of the least-squares estimator for the uncensored case.

Some suggestions for possible extensions conclude the thesis.
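To make the data setting concrete, the following small simulation sketch generates a right-censored AR(1) series and computes a naive least-squares benchmark from the fully observed consecutive pairs. The parameter values, censoring level and the naive benchmark are illustrative assumptions; they are not the estimators proposed in the thesis.

```python
import numpy as np

def simulate_censored_ar1(n, rho, censor_level, rng):
    """Simulate a stationary AR(1) series X_t = rho * X_{t-1} + e_t with standard
    normal errors, then right-censor at a fixed level C: the observed data are
    Y_t = min(X_t, C) and the censoring indicators d_t = 1{X_t <= C}."""
    x = np.empty(n)
    x[0] = rng.normal(scale=1.0 / np.sqrt(1.0 - rho ** 2))   # stationary start
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
    y = np.minimum(x, censor_level)
    delta = (x <= censor_level).astype(int)
    return y, delta

rng = np.random.default_rng(1)
y, delta = simulate_censored_ar1(n=500, rho=0.5, censor_level=1.0, rng=rng)
print("censoring rate:", 1 - delta.mean())

# Naive benchmark: least squares using only consecutive pairs in which neither
# observation is censored. It discards the information in censored values and
# is generally biased; it is shown only to make the estimation problem concrete.
keep = (delta[1:] == 1) & (delta[:-1] == 1)
rho_naive = np.sum(y[1:][keep] * y[:-1][keep]) / np.sum(y[:-1][keep] ** 2)
print("naive AR(1) estimate from fully observed pairs:", rho_naive)
```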
|
5 |
Análise da correlação entre o Ibovespa e o ativo petr4 : estimação via modelos Garch e modelos aditivos / Nunes, Fábio Magalhães. January 2009
A estimação e previsão da volatilidade de ativos são de suma importância para os mercados financeiros. Temas como risco e incerteza na teoria econômica incentivaram a procura por métodos capazes de modelar a variância condicional que evolui ao longo do tempo. O objetivo central desta dissertação foi modelar via modelos ARCH – GARCH e modelos aditivos o índice do IBOVESPA e o ativo PETR4 para analisar a existência de correlação entre as volatilidades estimadas. A estimação da volatilidade dos ativos no método paramétrico foi realizada via modelos EGARCH; já para o método não paramétrico, utilizou-se os modelos aditivos com 5 defasagens. / Volatility estimation and forecasting are very important for financial markets. Themes like risk and uncertainty in modern economic theory have encouraged the search for methods capable of modeling a conditional variance that evolves over time. The main objective of this dissertation was to model the IBOVESPA index and the PETR4 asset via ARCH-GARCH models and additive models, in order to analyze whether the estimated volatilities are correlated. In the parametric approach, volatility was estimated with EGARCH models; in the nonparametric approach, additive models with 5 lags were used.
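A minimal sketch of the parametric leg of such an analysis, assuming the third-party Python package arch is available. The simulated placeholder returns, column names and model orders are illustrative stand-ins for the IBOVESPA and PETR4 return series used in the dissertation.

```python
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(42)
returns = pd.DataFrame(
    rng.standard_t(df=5, size=(1500, 2)),            # heavy-tailed placeholder returns (%)
    columns=["IBOVESPA", "PETR4"],
)

vols = {}
for name in returns.columns:
    # EGARCH(1,1) with an asymmetry (leverage) term; the fitted result exposes
    # the estimated conditional volatility path sigma_t.
    fit = arch_model(returns[name], vol="EGARCH", p=1, o=1, q=1).fit(disp="off")
    vols[name] = fit.conditional_volatility

print(pd.DataFrame(vols).corr())                      # correlation between the volatility paths
```

The dissertation's nonparametric leg uses additive models with 5 lags instead of EGARCH; the correlation of interest is then computed between the two estimated volatility series, as in the last line above.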
|
6 |
A study of selected methods of nonparametric regression estimation / Chkrebtii, Oksana. 2008
Thesis (M.Sc.) - Carleton University, 2008. / Includes bibliographical references (p. 114-117). Also available in electronic format on the Internet.
|
7 |
Estimations pour les modèles de Markov cachés et approximations particulaires : Application à la cartographie et à la localisation simultanées. / Inference in hidden Markov models and particle approximations - application to the simultaneous localization and mapping problem / Le Corff, Sylvain. 28 September 2012
Dans cette thèse, nous nous intéressons à l'estimation de paramètres dans les chaînes de Markov cachées. Nous considérons tout d'abord le problème de l'estimation en ligne (sans sauvegarde des observations) au sens du maximum de vraisemblance. Nous proposons une nouvelle méthode basée sur l'algorithme Expectation Maximization appelée Block Online Expectation Maximization (BOEM). Cet algorithme est défini pour des chaînes de Markov cachées à espace d'état et espace d'observations généraux. Dans le cas d'espaces d'états généraux, l'algorithme BOEM requiert l'introduction de méthodes de Monte Carlo séquentielles pour approcher des espérances sous des lois de lissage. La convergence de l'algorithme nécessite alors un contrôle de la norme Lp de l'erreur d'approximation Monte Carlo explicite en le nombre d'observations et de particules. Une seconde partie de cette thèse se consacre à l'obtention de tels contrôles pour plusieurs méthodes de Monte Carlo séquentielles. Nous étudions enfin des applications de l'algorithme BOEM à des problèmes de cartographie et de localisation simultanées. La dernière partie de cette thèse est relative à l'estimation non paramétrique dans les chaînes de Markov cachées. Le problème considéré est abordé dans un cadre précis. Nous supposons que (Xk) est une marche aléatoire dont la loi des incréments est connue à un facteur d'échelle a près. Nous supposons que, pour tout k, Yk est une observation de f(Xk) dans un bruit additif gaussien, où f est une fonction que nous cherchons à estimer. Nous établissons l'identifiabilité du modèle statistique et nous proposons une estimation de f et de a à partir de la vraisemblance par paires des observations. / This document is dedicated to inference problems in hidden Markov models. The first part is devoted to an online maximum likelihood estimation procedure which does not store the observations. We propose a new Expectation Maximization based method called the Block Online Expectation Maximization (BOEM) algorithm. This algorithm solves the online estimation problem for general hidden Markov models. In complex situations, it requires the introduction of Sequential Monte Carlo methods to approximate several expectations under the fixed interval smoothing distributions. The convergence of the algorithm is shown under the assumption that the Lp mean error due to the Monte Carlo approximation can be controlled explicitly in the number of observations and in the number of particles. Therefore, a second part of the document establishes such controls for several Sequential Monte Carlo algorithms. This BOEM algorithm is then used to solve the simultaneous localization and mapping problem in different frameworks. Finally, the last part of this thesis is dedicated to nonparametric estimation in hidden Markov models. It is assumed that the Markov chain (Xk) is a random walk lying in a compact set with increment distribution known up to a scaling factor a. At each time step k, Yk is a noisy observation of f(Xk), where f is an unknown function. We establish the identifiability of the statistical model and we propose estimators of f and a based on the pairwise likelihood of the observations.
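The Sequential Monte Carlo approximations mentioned in the abstract can be illustrated with a basic bootstrap particle filter on a toy Gaussian hidden Markov model. This sketch, with illustrative constants, only conveys the flavour of the particle approximations; it is not the BOEM algorithm nor the SLAM application.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 500                                    # time steps, particles

# Toy model: X_t = 0.9 X_{t-1} + N(0,1), Y_t = X_t + N(0,1).
x = np.zeros(T)
x[0] = rng.normal()
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.normal()
y = x + rng.normal(size=T)

# Bootstrap particle filter: propagate, weight by the observation density,
# then resample; the weighted particles approximate E[X_t | Y_1, ..., Y_t].
particles = rng.normal(size=N)                     # draws from the X_0 prior
filter_means = np.empty(T)
for t in range(T):
    if t > 0:
        particles = 0.9 * particles + rng.normal(size=N)
    logw = -0.5 * (y[t] - particles) ** 2          # Gaussian observation log-weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filter_means[t] = np.sum(w * particles)
    particles = particles[rng.choice(N, size=N, p=w)]   # multinomial resampling

print("RMSE of filtering means vs. true states:", np.sqrt(np.mean((filter_means - x) ** 2)))
```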
|
8 |
Statistical inference for inequality measures based on semi-parametric estimators / Kpanzou, Tchilabalo Abozou. December 2011
Thesis (PhD)--Stellenbosch University, 2011. / ENGLISH ABSTRACT: Measures of inequality, also used as measures of concentration or diversity, are very popular in economics
and especially in measuring the inequality in income or wealth within a population and between
populations. However, they have applications in many other fields, e.g. in ecology, linguistics, sociology,
demography, epidemiology and information science.
A large number of measures have been proposed to measure inequality. Examples include the Gini
index, the generalized entropy, the Atkinson and the quintile share ratio measures. Inequality measures
are inherently dependent on the tails of the population (underlying distribution) and therefore their
estimators are typically sensitive to data from these tails (nonrobust). For example, income distributions
often exhibit a long tail to the right, leading to the frequent occurrence of large values in samples. Since
the usual estimators are based on the empirical distribution function, they are usually nonrobust to such
large values. Furthermore, heavy-tailed distributions often occur in real life data sets; remedial action
therefore needs to be taken in such cases.
The remedial action can be either a trimming of the extreme data or a modification of the (traditional)
estimator to make it more robust to extreme observations. In this thesis we follow the second option,
modifying the traditional empirical distribution function as estimator to make it more robust. Using results
from extreme value theory, we develop more reliable distribution estimators in a semi-parametric
setting. These new estimators of the distribution then form the basis for more robust estimators of the
measures of inequality. These estimators are developed for the four most popular classes of measures,
viz. Gini, generalized entropy, Atkinson and quintile share ratio. Properties of such estimators
are studied especially via simulation. Using limiting distribution theory and the bootstrap methodology,
approximate confidence intervals were derived. Through the various simulation studies, the proposed
estimators are compared to the standard ones in terms of mean squared error, relative impact of contamination,
confidence interval length and coverage probability. In these studies the semi-parametric
methods show a clear improvement over the standard ones. The theoretical properties of the quintile
share ratio have not been studied much. Consequently, we also derive its influence function as well as
the limiting normal distribution of its nonparametric estimator. These results have not previously been
published.
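For reference, the quintile share ratio studied here is commonly defined as follows, with q_p denoting the p-th quantile of the income distribution F (this is the standard definition from the inequality literature, stated for convenience rather than quoted from the thesis):

```latex
% Income share of the richest 20% relative to that of the poorest 20%
\mathrm{QSR} \;=\; \frac{\mathbb{E}\!\left[ X \,\mathbf{1}\{X > q_{0.8}\} \right]}
                        {\mathbb{E}\!\left[ X \,\mathbf{1}\{X \le q_{0.2}\} \right]},
\qquad q_p = F^{-1}(p).
```

Its nonparametric estimator replaces the expectations and quantiles by their sample counterparts, which is exactly where the sensitivity to large right-tail observations enters.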
In order to illustrate the methods developed, we apply them to a number of real life data sets. Using
such data sets, we show how the methods can be used in practice for inference. In order to choose
between the candidate parametric distributions, use is made of a measure of sample representativeness
from the literature. These illustrations show that the proposed methods can be used to reach
satisfactory conclusions in real life problems. / AFRIKAANSE OPSOMMING: Maatstawwe van ongelykheid, wat ook gebruik word as maatstawwe van konsentrasie of diversiteit,
is baie populêr in ekonomie en veral vir die kwantifisering van ongelykheid in inkomste of welvaart
binne ’n populasie en tussen populasies. Hulle het egter ook toepassings in baie ander dissiplines,
byvoorbeeld ekologie, linguistiek, sosiologie, demografie, epidemiologie en inligtingskunde.
Daar bestaan reeds verskeie maatstawwe vir die meet van ongelykheid. Voorbeelde sluit in die Gini
indeks, die veralgemeende entropie maatstaf, die Atkinson maatstaf en die kwintiel aandeel verhouding.
Maatstawwe van ongelykheid is inherent afhanklik van die sterte van die populasie (onderliggende
verdeling) en beramers daarvoor is tipies dus sensitief vir data uit sodanige sterte (nierobuust). Inkomste
verdelings het byvoorbeeld dikwels lang regtersterte, wat kan lei tot die voorkoms van groot
waardes in steekproewe. Die tradisionele beramers is gebaseer op die empiriese verdelingsfunksie, en
hulle is gewoonlik dus nierobuust teenoor sodanige groot waardes nie. Aangesien swaarstert verdelings
dikwels voorkom in werklike data, moet regstellings gemaak word in sulke gevalle.
Hierdie regstellings kan bestaan uit of die afknip van ekstreme data of die aanpassing van tradisionele
beramers om hulle meer robuust te maak teen ekstreme waardes. In hierdie tesis word die
tweede opsie gevolg deurdat die tradisionele empiriese verdelingsfunksie as beramer aangepas word
om dit meer robuust te maak. Deur gebruik te maak van resultate van ekstreemwaardeteorie, word
meer betroubare beramers vir verdelings ontwikkel in ’n semi-parametriese opset. Hierdie nuwe beramers
van die verdeling vorm dan die basis vir meer robuuste beramers van maatstawwe van ongelykheid.
Hierdie beramers word ontwikkel vir die vier mees populêre klasse van maatstawwe, naamlik
Gini, veralgemeende entropie, Atkinson en kwintiel aandeel verhouding. Eienskappe van hierdie
beramers word bestudeer, veral met behulp van simulasie studies. Benaderde vertrouensintervalle
word ontwikkel deur gebruik te maak van limietverdelingsteorie en die skoenlus metodologie. Die
voorgestelde beramers word vergelyk met tradisionele beramers deur middel van verskeie simulasie
studies. Die vergelyking word gedoen in terme van gemiddelde kwadraat fout, relatiewe impak van
kontaminasie, vertrouensinterval lengte en oordekkingswaarskynlikheid. In hierdie studies toon die
semi-parametriese metodes ’n duidelike verbetering teenoor die tradisionele metodes. Die kwintiel
aandeel verhouding se teoretiese eienskappe het nog nie veel aandag in die literatuur geniet nie.
Gevolglik lei ons die invloedfunksie asook die asimptotiese verdeling van die nie-parametriese beramer
daarvoor af.
Ten einde die metodes wat ontwikkel is te illustreer, word dit toegepas op ’n aantal werklike datastelle.
Hierdie toepassings toon hoe die metodes gebruik kan word vir inferensie in die praktyk. ’n Metode
in die literatuur vir steekproefverteenwoordiging word voorgestel en gebruik om ’n keuse tussen die
kandidaat parametriese verdelings te maak. Hierdie voorbeelde toon dat die voorgestelde metodes
met vrug gebruik kan word om bevredigende gevolgtrekkings in die praktyk te maak.
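To make the quantities in the English abstract concrete, the sketch below computes the empirical Gini index and quintile share ratio for a simulated heavy-tailed sample, together with a crude semi-parametric variant in which the upper tail is refitted with a generalized Pareto distribution. The threshold choice and the tail replacement are illustrative simplifications, not the estimators developed in the thesis.

```python
import numpy as np
from scipy.stats import genpareto

def gini(x):
    """Empirical Gini index based on the ordered sample."""
    x = np.sort(x)
    n = x.size
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

def quintile_share_ratio(x):
    """Total income of the top 20% divided by that of the bottom 20%."""
    q20, q80 = np.quantile(x, [0.2, 0.8])
    return x[x > q80].sum() / x[x <= q20].sum()

rng = np.random.default_rng(0)
incomes = rng.pareto(3.0, size=2000) + 1.0          # heavy right tail

# Semi-parametric treatment of the tail: fit a generalized Pareto distribution
# to the excesses over a high threshold and replace the observed tail order
# statistics by fitted quantiles.
u = np.quantile(incomes, 0.90)
excesses = incomes[incomes > u] - u
xi, _, sigma = genpareto.fit(excesses, floc=0.0)
k = excesses.size
fitted_tail = u + genpareto.ppf((np.arange(1, k + 1) - 0.5) / k, xi, scale=sigma)
smoothed = np.sort(incomes).copy()
smoothed[-k:] = np.sort(fitted_tail)

for name, sample in [("empirical", incomes), ("semi-parametric tail", smoothed)]:
    print(f"{name:>20}: Gini = {gini(sample):.3f}, QSR = {quintile_share_ratio(sample):.3f}")
```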
|