
Variance parameter estimation methods with re-use of data

This dissertation studies three classes of estimators for the asymptotic variance parameter of a stationary stochastic process. All estimators are based on the concept of data "re-use" and all transform the output process into functions of an approximate Brownian motion process.
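For concreteness, the standard construction behind such transformations (sketched here under the usual scaling; the dissertation's exact notation may differ) starts from observations $X_1, \ldots, X_n$ with running means $\bar{X}_k = k^{-1} \sum_{i=1}^{k} X_i$ and forms the standardized time series

\[
T_n(t) = \frac{\lfloor nt \rfloor \left( \bar{X}_n - \bar{X}_{\lfloor nt \rfloor} \right)}{\sigma \sqrt{n}}, \qquad t \in [0, 1],
\]

which converges weakly to a Brownian bridge $B(t) = W(t) - t\,W(1)$, a function of a standard Brownian motion $W$; here $\sigma^2 = \lim_{n \to \infty} n \operatorname{Var}(\bar{X}_n)$ is the variance parameter being estimated.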

The first class of estimators consists of folded standardized time series area and Cramér-von Mises
(CvM) estimators. Detailed expressions are obtained for their expectation at folding levels 0 and 1; those expressions explain the puzzling increase in small-sample bias as the folding level increases. In addition, we use batching and linear combinations of estimators from different levels to produce estimators with significantly smaller variance. Finally, we obtain very accurate approximations of the limiting distributions of batched folded estimators. These approximations are used to compute confidence intervals for the mean and variance parameter of the underlying stochastic process.
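As a rough illustration of how a batched folded area estimator could be computed (a minimal numpy sketch, not the dissertation's exact procedure: the constant weight sqrt(12), the index-halving discretization of the folding map B(t) -> B(t/2) - B(1 - t/2), and the absence of any small-sample bias correction are all assumptions made here):

import numpy as np

def sts_path(y):
    # Scaled standardized time series of one batch y_1, ..., y_m:
    # s[k] = k * (ybar_m - ybar_k) / sqrt(m), with s[0] = s[m] = 0.
    m = len(y)
    cum = np.cumsum(y)
    k = np.arange(1, m + 1)
    ybar_k = cum / k
    s = k * (cum[-1] / m - ybar_k) / np.sqrt(m)
    return np.concatenate(([0.0], s))

def folded_path(s, level):
    # Apply the folding map B(t) -> B(t/2) - B(1 - t/2) `level` times,
    # discretized here by index halving (a crude approximation).
    for _ in range(level):
        m = len(s) - 1
        k = np.arange(m + 1)
        s = s[k // 2] - s[m - k // 2]
    return s

def batched_folded_area(x, batch_size, level=0, weight=np.sqrt(12.0)):
    # Average the weighted-area estimator over nonoverlapping batches.
    # With the constant weight sqrt(12), each batch term is asymptotically
    # unbiased for the variance parameter sigma^2 = lim n * Var(xbar_n).
    b = len(x) // batch_size
    ests = []
    for i in range(b):
        s = folded_path(sts_path(x[i * batch_size:(i + 1) * batch_size]), level)
        ests.append((weight * s[1:]).mean() ** 2)
    return np.mean(ests)

Averaging over batches, and linearly combining estimators from different folding levels, is what drives the variance reduction described above; the combination weights used in the dissertation are not reproduced in this sketch.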

The second class --- folded overlapping area estimators --- consists of estimators computed by averaging folded
versions of the standardized time series over overlapping batches. We establish the limiting distributions of the proposed estimators as the sample size tends to infinity and derive statistical properties such as their bias and variance. Further, we obtain approximate confidence intervals for the mean and variance parameter of the process by approximating the theoretical distributions of the proposed estimators. In addition, we develop algorithms that compute these estimators with only order-of-sample-size work.
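A naive version of such an overlapping-batch average (reusing sts_path and folded_path from the sketch above; the weight sqrt(12) is again an assumption, and the order-of-sample-size algorithms developed in the dissertation are not reproduced here) might look like:

def overlapping_folded_area(x, m, level=1, weight=np.sqrt(12.0)):
    # Average the folded weighted-area estimator over all n - m + 1
    # overlapping batches of size m. This direct form costs O(n * m) work,
    # whereas the dissertation's algorithms achieve O(n).
    n = len(x)
    ests = []
    for i in range(n - m + 1):
        s = folded_path(sts_path(x[i:i + m]), level)
        ests.append((weight * s[1:]).mean() ** 2)
    return np.mean(ests)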

The third class --- reflected area and CvM estimators --- consists of estimators computed from reflections of the original sample path. We obtain the expected value and variance of each individual estimator. We show that it is possible to form linear combinations of reflected estimators whose variance is smaller than that of each constituent estimator, often at no cost in bias. We solve a quadratic optimization problem to find the linear combination that minimizes the variance of the combined estimator.
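For the variance-minimization step, one familiar special case has a closed form: if the only constraint is that the weights sum to one and the covariance matrix of the individual estimators is known (or estimated), the minimizing weights are w = C^{-1} 1 / (1' C^{-1} 1). The dissertation's quadratic program may impose additional conditions; the following sketch (numpy as above) covers only this basic case.

def min_variance_weights(cov):
    # Weights w minimizing w' cov w subject to sum(w) = 1, where cov is the
    # covariance matrix of the constituent estimators.
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()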

For all three classes of estimators, we provide Monte Carlo examples showing that the estimators perform in practice as well as the theory predicts.

Identifier: oai:union.ndltd.org:GATECH/oai:smartech.gatech.edu:1853/26490
Date: 25 August 2008
Creators: Meterelliyoz Kuyzu, Melike
Publisher: Georgia Institute of Technology
Source Sets: Georgia Tech Electronic Thesis and Dissertation Archive
Detected Language: English
Type: Dissertation
