About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Its metadata is collected from universities around the world; university, consortium, or country archives that wish to be added can find details on the NDLTD website.

Econometric computing with HC and HAC covariance matrix estimators

Zeileis, Achim, January 2004
Data described by econometric models typically contain autocorrelation and/or heteroskedasticity of unknown form, and for inference in such models it is essential to use covariance matrix estimators that can consistently estimate the covariance of the model parameters. Hence, suitable heteroskedasticity-consistent (HC) and heteroskedasticity- and autocorrelation-consistent (HAC) estimators have received attention in the econometric literature over the last 20 years. To apply these estimators in practice, an implementation is needed that translates the conceptual properties of the underlying theoretical frameworks into computational tools. This paper describes such an implementation in the package sandwich for the R system for statistical computing, and shows how the suggested functions provide reusable components that build on existing functionality and can be integrated easily into new inferential procedures or applications. The toolbox contained in sandwich is flexible and comprehensive, including dedicated functions for the most important HC and HAC estimators from the econometric literature. Several real-world data sets illustrate how the functionality can be integrated into applications. / Series: Research Report Series / Department of Statistics and Mathematics
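The core computation this abstract describes, the HC "sandwich" covariance for OLS, can be sketched in a few lines of NumPy. This is a hand-rolled illustration on simulated data, not the sandwich package's actual R interface; variable names are for exposition only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
# Simulated heteroskedastic data: the error variance grows with |x|
y = 1.0 + 2.0 * x + rng.normal(size=n) * (0.5 + np.abs(x))

X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)   # OLS coefficients
u = y - X @ beta                           # residuals

# Sandwich: bread * meat * bread, with the HC0 meat X' diag(u^2) X
bread = np.linalg.inv(X.T @ X)
meat = X.T @ (X * u[:, None] ** 2)
vcov_hc0 = bread @ meat @ bread
se_hc0 = np.sqrt(np.diag(vcov_hc0))

# Classical standard errors assuming homoskedasticity, for comparison
sigma2 = u @ u / (n - X.shape[1])
se_ols = np.sqrt(np.diag(sigma2 * bread))
print(se_ols, se_hc0)
```

With heteroskedasticity of this form, the HC0 and classical standard errors visibly disagree, which is exactly the situation the estimators above are designed for.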

Object-oriented Computation of Sandwich Estimators

Zeileis, Achim, January 2006
Sandwich covariance matrix estimators are a popular tool in applied regression modeling for performing inference that is robust to certain types of model misspecification. In the R system for statistical computing, suitable implementations are available only for certain model-fitting functions (in particular lm()), but not for other standard regression functions such as glm(), nls(), or survreg(). Therefore, conceptual tools and their translation into computational tools in the package sandwich are discussed, enabling the computation of sandwich estimators in general parametric models. Object orientation can be achieved by providing a few extractor functions (most importantly, for the empirical estimating functions) from which various types of sandwich estimators can be computed. / Series: Research Report Series / Department of Statistics and Mathematics
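The extractor-function idea can be illustrated outside R as well. The sketch below (NumPy, simulated data, hypothetical names, not the package's API) fits a logistic regression and then assembles the sandwich from the two extracted pieces: the "bread" (inverse information) and the "meat" (outer product of the empirical estimating functions).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.0 * x))))

# Fit a logistic regression by Newton-Raphson
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    H = X.T @ (X * (p * (1 - p))[:, None])   # observed information
    beta += np.linalg.solve(H, X.T @ (y - p))

# "Extractor" step: the empirical estimating functions, one row per case
p = 1 / (1 + np.exp(-X @ beta))
scores = X * (y - p)[:, None]

bread = np.linalg.inv(X.T @ (X * (p * (1 - p))[:, None]))
meat = scores.T @ scores
vcov_sandwich = bread @ meat @ bread
print(np.sqrt(np.diag(vcov_sandwich)))
```

The point of the object-oriented design is that only the two extractors depend on the model class; the final three lines are identical for any parametric model.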

"Modelos de risco de crédito de clientes: Uma aplicação a dados reais" / Customer Scoring Models: An application to Real Data

Pereira, Gustavo Henrique de Araujo, 23 August 2004
Customer scoring models are used to measure the credit risk of a financial institution's customers. In this work, we present three strategies that can be used to develop these models. We discuss the advantages of each strategy, as well as the models and the statistical theory associated with them. We describe some performance measures commonly used to compare credit risk models. We then fit models for each strategy using real data from a financial institution and compare the strategies' performance on this data set using those measures. We also develop a simulation study to compare the strategies under controlled conditions.
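The abstract refers to performance measures commonly used for credit risk models without naming them; one standard choice for scorecards is the Kolmogorov-Smirnov (KS) statistic, the maximum separation between the score distributions of good and bad customers. A minimal sketch on simulated (hypothetical) scores:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical scores: defaulting customers ("bads") score lower on average
score_good = rng.normal(1.0, 1.0, size=1000)
score_bad = rng.normal(0.0, 1.0, size=300)

def ks_statistic(good, bad):
    """Maximum distance between the empirical CDFs of the two score groups."""
    grid = np.sort(np.concatenate([good, bad]))
    cdf_good = np.searchsorted(np.sort(good), grid, side="right") / len(good)
    cdf_bad = np.searchsorted(np.sort(bad), grid, side="right") / len(bad)
    return float(np.max(np.abs(cdf_good - cdf_bad)))

print(round(ks_statistic(score_good, score_bad), 3))
```

A KS of 0 means the scorecard does not separate the groups at all; higher values indicate better discrimination, which is how competing modeling strategies can be ranked on a holdout sample.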

Information Matrices in Estimating Function Approach: Tests for Model Misspecification and Model Selection

Zhou, Qian, January 2009
Estimating functions have been widely used for parameter estimation in various statistical problems. Regular estimating functions produce parameter estimators with desirable properties, such as consistency and asymptotic normality. In quasi-likelihood inference, an important example of estimating-function methodology, correct specification of the first two moments of the underlying distribution leads to information unbiasedness: the two forms of the information matrix, the negative sensitivity matrix (the negative expectation of the first-order derivative of an estimating function) and the variability matrix (the variance of an estimating function), are equal. In other words, the analogue of the Fisher information is equivalent to the Godambe information. Consequently, information unbiasedness implies that the model-based and sandwich covariance matrix estimators are equivalent. By comparing the model-based and sandwich variance estimators, we propose information ratio (IR) statistics for testing misspecification of the variance/covariance structure under a correctly specified mean structure, in the context of linear regression models, generalized linear regression models, and generalized estimating equations. Asymptotic properties of the IR statistics are discussed. In addition, through intensive simulation studies, we show that the IR statistics are powerful in various applications: testing for heteroscedasticity in linear regression models, testing for overdispersion in count data, and testing for a misspecified variance function and/or a misspecified working correlation structure. Moreover, the IR statistics appear more powerful than the classical information matrix test proposed by White (1982). In the literature, model selection criteria have been discussed intensively, but almost all of them target choosing the optimal mean structure.
In this thesis, two model selection procedures are proposed for selecting the optimal variance/covariance structure among a collection of candidate structures. One is based on a sequence of IR tests over all competing variance/covariance structures. The other is based on an "information discrepancy criterion" (IDC), which measures the discrepancy between the negative sensitivity matrix and the variability matrix. This IDC characterizes the relative efficiency loss incurred by using a given candidate variance/covariance structure instead of the true but unknown structure. Through simulation studies and analyses of two data sets, it is shown that both proposed model selection methods detect the true/optimal variance/covariance structure at a high rate. In particular, since the IDC magnifies the differences among competing structures, it is highly sensitive in detecting the most appropriate variance/covariance structure.
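The idea behind comparing the two information matrices can be illustrated numerically. The NumPy sketch below is one plausible reading of the construction, not the thesis's exact test statistic: for OLS it forms the model-based matrix (estimated sigma^2 times the sensitivity) and the variability matrix (score variance), and summarizes their "ratio", which is close to 1 only when the variance model is correct.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

def info_ratio(y):
    """Compare model-based and sandwich covariance pieces for OLS (illustrative)."""
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    u = y - X @ beta
    model_based = (u @ u / (n - 2)) * (X.T @ X)   # sigma^2 times sensitivity
    variability = X.T @ (X * u[:, None] ** 2)     # variance of the score
    # Under a correctly specified variance structure the two matrices
    # estimate the same quantity, so their "ratio" is near the identity.
    R = np.linalg.solve(model_based, variability)
    return float(np.trace(R)) / 2                 # about 1 if information-unbiased

y_hom = X @ np.array([1.0, 2.0]) + rng.normal(size=n)                  # constant variance
y_het = X @ np.array([1.0, 2.0]) + rng.normal(size=n) * (0.3 + x**2)   # heteroskedastic
print(info_ratio(y_hom), info_ratio(y_het))
```

The homoskedastic fit yields a value near 1, while the heteroskedastic fit pushes it well above 1, which is the kind of departure an IR-type test is designed to detect.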

Modeling Recurrent Gap Times Through Conditional GEE

Liu, Hai Yan, 16 August 2018
We present a theoretical approach to the statistical analysis of how the length of the gap time between consecutive recurrent events depends on a set of explanatory random variables, in the presence of right censoring. The dependence is expressed through regression-like and overdispersion parameters, estimated via estimating functions and equations. The mean and variance of each gap time length, conditioned on the observed history of prior events and other covariates, are known functions of parameters and covariates, and enter the estimating functions. Under certain conditions on the censoring, we construct normalized estimating functions that are asymptotically unbiased and involve only observed data. We then use modern mathematical techniques to prove the existence, consistency, and asymptotic normality of a sequence of estimators of the parameters. Simulations support our theoretical results.
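A minimal sketch of the estimating-equation machinery described above, on simulated uncensored gap times with a log-linear conditional mean and variance function V(mu) = mu^2. This is a toy version for illustration, not the thesis's censored-data equations.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
mu_true = np.exp(0.2 + 0.5 * x)
t = rng.gamma(shape=2.0, scale=mu_true / 2.0)   # gap times with mean mu_true

# Solve the quasi-score equation  sum_i x_i (t_i - mu_i) / mu_i = 0
# (log link, variance function V(mu) = mu^2) by Fisher scoring; the
# expected information under this variance function is simply X'X.
beta = np.zeros(2)
for _ in range(50):
    mu = np.exp(X @ beta)
    beta += np.linalg.solve(X.T @ X, X.T @ ((t - mu) / mu))

# Moment estimator of the overdispersion parameter
mu = np.exp(X @ beta)
phi = float(np.sum(((t - mu) / mu) ** 2) / (n - 2))
print(beta, phi)
```

The estimating equation needs only the first two conditional moments, not the full gamma likelihood, which is the appeal of this approach for recurrent-event data.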

[en] GAMMA-GAMMA STATE SPACE MODELS: APPLICATION OF THE RAINFALL SERIES / [pt] MODELOS DE ESPAÇO DE ESTADOS GAMA-GAMA: APLICAÇÃO A UMA SÉRIE DE CHUVA

Saez Carrillo, Katia Lorena, 17 October 2003
[en] This thesis studies a state space model for positive data in which the observed process is conditionally independent given a latent gamma Markov process; conditional on the latent process, the observations have a gamma distribution. The model allows covariates to enter through both the latent process and the observed process. The resulting model is log-linear, the regression parameters are estimated through Kalman estimating functions, and the dispersion parameters are estimated via adjusted Pearson estimators. Several simulation studies are carried out, together with an application to the rainfall series of Fortaleza, Ceará, in which stylized facts of the series (trend, seasonality, cycles) are incorporated along with the effects of explanatory variables (sea level temperature, atmospheric pressure, sunspots).

Range-based parameter estimation in diffusion models

Henkel, Hartmuth, 4 October 2010
We study the behavior of the maximum, the minimum, and the terminal value of time-homogeneous one-dimensional diffusions on finite time intervals. First, we prove an existence result for the joint density by means of Malliavin calculus. Moreover, we derive expansions of the joint moments of the triplet (H,L,X) at time Delta with respect to Delta, where X stands for the underlying diffusion and H and L denote its running maximum and running minimum, respectively. A first approach, relying entirely on elementary estimates such as Doob's inequality and the Cauchy-Schwarz inequality, yields an expansion up to order 2 in the square root of the time variable Delta. A more sophisticated approach uses partial differential equation techniques to determine an expansion of the one-sided exit probability for pinned diffusions. Since an expansion of the transition density of diffusions is known, one obtains a complete expansion of the joint probability of (H,X) with respect to Delta. The distributional properties developed here enable us to establish a theory of martingale estimating functions constructed from range-based data in a parameterized diffusion model. A small-Delta-optimality approach using the approximated moments simplifies the relatively complicated estimation procedure, and we obtain asymptotic optimality results as Delta tends to 0. For estimating the drift coefficient, the range-based approach is not superior to the method relying on equidistant observations of the diffusion alone. The gain in efficiency when estimating the diffusion coefficient, however, is enormous: incorporating the maxima and minima into the analysis substantially lowers the asymptotic variance of the parameter estimator in this scenario.
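A concrete, classical instance of the efficiency gain from range-based data is Parkinson's (1980) range estimator of the diffusion coefficient of a driftless Brownian motion, sketched here on simulated paths; it exploits the identity E[(H - L)^2] = 4 ln(2) sigma^2 per unit of time.

```python
import numpy as np

rng = np.random.default_rng(5)
sigma = 0.8
n_days, steps = 250, 1000
dt = 1.0 / steps

# Simulate driftless Brownian paths; record the high, low, and close of each
increments = sigma * np.sqrt(dt) * rng.normal(size=(n_days, steps))
paths = np.cumsum(increments, axis=1)
high = np.maximum(paths.max(axis=1), 0.0)   # the range includes the start at 0
low = np.minimum(paths.min(axis=1), 0.0)
close = paths[:, -1]

# Parkinson (1980): E[(H - L)^2] = 4 ln(2) sigma^2 per unit of time,
# so the mean squared range yields an estimator of the diffusion coefficient
sigma2_range = float(np.mean((high - low) ** 2) / (4 * np.log(2)))
# Estimator using only the closing values, for comparison
sigma2_close = float(np.mean(close ** 2))
print(np.sqrt(sigma2_range), np.sqrt(sigma2_close))
```

Both estimators recover sigma, but the range-based one has markedly smaller variance for the same number of observation intervals, which mirrors the efficiency gain for the diffusion coefficient reported in the abstract.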
