This dissertation studies three classes of estimators for the asymptotic variance parameter of a stationary stochastic process. All estimators are based on the concept of data "re-use" and all transform the output process into functions of an approximate Brownian motion process.
The first class of estimators consists of folded standardized time series area and Cramér-von Mises
(CvM) estimators. We obtain detailed expressions for their expectations at folding levels 0 and 1; those expressions explain the puzzling increase in small-sample bias as the folding level increases. In addition, we use batching and linear combinations of estimators from different folding levels to produce estimators with significantly smaller variance. Finally, we obtain very accurate approximations of the limiting distributions of the batched folded estimators, which we use to compute confidence intervals for the mean and variance parameter of the underlying stochastic process.
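For concreteness, the following is a minimal sketch (not taken from the dissertation) of the unfolded, level-0 weighted area and CvM estimators computed from a single batch, using the classical constant weights f ≡ √12 and g ≡ 6 under which each estimator is asymptotically unbiased for the variance parameter; the folding operation and the level-1 expressions studied in the dissertation are not reproduced here.

```python
import numpy as np

def scaled_sts(batch):
    """sigma * T(k/m) for k = 1..m, where T is Schruben's standardized time
    series: T(t) = floor(mt) * (Ybar_m - Ybar_floor(mt)) / (sigma * sqrt(m))."""
    m = len(batch)
    S = np.cumsum(batch)                       # partial sums S_1, ..., S_m
    k = np.arange(1, m + 1)
    return (k / m * S[-1] - S) / np.sqrt(m)    # = k * (Ybar_m - Ybar_k) / sqrt(m)

def area_estimator(batch, f=lambda t: np.full_like(t, np.sqrt(12.0))):
    """Level-0 weighted area estimator: [ (1/m) * sum_k f(k/m) * sigma*T(k/m) ]^2."""
    t = np.arange(1, len(batch) + 1) / len(batch)
    return np.mean(f(t) * scaled_sts(batch)) ** 2

def cvm_estimator(batch, g=lambda t: np.full_like(t, 6.0)):
    """Level-0 weighted CvM estimator: (1/m) * sum_k g(k/m) * (sigma*T(k/m))^2."""
    t = np.arange(1, len(batch) + 1) / len(batch)
    return np.mean(g(t) * scaled_sts(batch) ** 2)
```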
The second class, folded overlapping area estimators, is computed by averaging folded
versions of the standardized time series corresponding to overlapping batches. We establish the limiting distributions of the proposed estimators as the sample size tends to infinity. We obtain statistical properties of these estimators such as bias and variance. Further, we find approximate confidence intervals for the mean and variance parameter of the process by approximating the theoretical distributions of the proposed estimators. In addition, we develop algorithms to compute these estimators with only order-of-sample-size work.
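To illustrate why order-of-sample-size work is plausible, consider the simplest special case: the unfolded, constant-weight area estimator averaged over all overlapping batches. With two prefix-sum arrays, every batch's area statistic costs O(1), so the whole estimator costs O(n). This is only a sketch under those assumptions; it is not the dissertation's algorithm for the folded overlapping estimators.

```python
import numpy as np

def overlapping_area_estimator(y, m):
    """Average of the constant-weight (f = sqrt(12)) level-0 area estimators over
    all n - m + 1 overlapping batches of size m, in O(n) time via prefix sums."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    P = np.concatenate(([0.0], np.cumsum(y)))        # P[j] = Y_1 + ... + Y_j
    Q = np.concatenate(([0.0], np.cumsum(P[1:])))    # Q[j] = P[1] + ... + P[j]
    i = np.arange(n - m + 1)                         # batch start offsets
    S_m = P[i + m] - P[i]                            # total of batch i
    sum_Sk = Q[i + m] - Q[i] - m * P[i]              # sum of the batch's partial sums
    A = 12.0 / m**3 * ((m + 1) / 2.0 * S_m - sum_Sk) ** 2
    return A.mean()
```

Each overlapping batch's statistic depends only on P and Q, which is what makes the sliding-batch computation linear rather than quadratic in the sample size.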
The third class, reflected area and CvM estimators, is computed from reflections of the original sample path. We obtain the expected values and variances of the individual estimators. We show that it is possible to form linear combinations of reflected estimators whose variance is smaller than that of any constituent estimator, often at no cost in bias. We solve a quadratic optimization problem to find the linear combination that minimizes the variance of the combined estimator.
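The variance-minimization step can be sketched in closed form under an illustrative assumption (mine, not necessarily the dissertation's setup): given an estimate Sigma of the covariance matrix of the constituent reflected estimators, choose weights summing to one.

```python
import numpy as np

def min_variance_weights(Sigma):
    """Solve min_w w' Sigma w subject to 1'w = 1; this quadratic program has the
    closed-form solution w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    Sigma = np.asarray(Sigma, dtype=float)
    ones = np.ones(Sigma.shape[0])
    x = np.linalg.solve(Sigma, ones)
    return x / (ones @ x)

# Hypothetical usage: combine three constituent estimates with estimated covariance Sigma.
Sigma = np.array([[2.0, 0.8, 0.5],
                  [0.8, 1.5, 0.6],
                  [0.5, 0.6, 1.8]])
estimates = np.array([1.02, 0.97, 1.05])
w = min_variance_weights(Sigma)
combined = w @ estimates   # variance w' Sigma w is no larger than any constituent's variance
```

Because the weights sum to one, the combination retains the common expectation of its constituents whenever they are (approximately) unbiased for the same quantity, which matches the "often at no cost in bias" observation.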
For all three classes of estimators, we provide Monte Carlo examples showing that the estimators
perform in practice as well as the theory predicts.
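As a hedged illustration of the kind of Monte Carlo check meant here (the process, parameter values, and estimator choice are illustrative, not the dissertation's experimental design), one can simulate a stationary AR(1) process, whose variance parameter is known in closed form, and compare it with the nonoverlapping-batch average of the constant-weight level-0 area estimator.

```python
import numpy as np

def area_estimator(batch):
    """Constant-weight (f = sqrt(12)) level-0 area estimator, as sketched earlier."""
    m = len(batch)
    S = np.cumsum(batch)
    k = np.arange(1, m + 1)
    sts = (k / m * S[-1] - S) / np.sqrt(m)       # sigma * T(k/m)
    return 12.0 * np.mean(sts) ** 2

rng = np.random.default_rng(0)
phi, n, m, reps = 0.7, 8192, 512, 100
sigma2_true = 1.0 / (1.0 - phi) ** 2             # AR(1) variance parameter = 1/(1-phi)^2

estimates = []
for _ in range(reps):
    eps = rng.standard_normal(n)
    y = np.empty(n)
    y[0] = eps[0] / np.sqrt(1.0 - phi**2)        # start in stationarity
    for t in range(1, n):
        y[t] = phi * y[t - 1] + eps[t]
    batches = y.reshape(n // m, m)               # nonoverlapping batches of size m
    estimates.append(np.mean([area_estimator(b) for b in batches]))

print(f"empirical mean {np.mean(estimates):.3f} vs true sigma^2 {sigma2_true:.3f}")
```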
Identifier | oai:union.ndltd.org:GATECH/oai:smartech.gatech.edu:1853/26490
Date | 25 August 2008 |
Creators | Meterelliyoz Kuyzu, Melike |
Publisher | Georgia Institute of Technology |
Source Sets | Georgia Tech Electronic Thesis and Dissertation Archive |
Detected Language | English |
Type | Dissertation |