
Empirical Bayes methods in time series analysis

In the case of repetitive experiments of a similar type, where the parameters vary randomly from experiment to experiment, the empirical Bayes method often leads to estimators with smaller mean squared errors than the classical estimators.

Suppose there is an unobservable random variable θ, where θ ~ G(θ); G is usually called the prior distribution. The Bayes estimator of θ cannot, in general, be obtained unless G(θ) is known. In the empirical Bayes method G(θ) is not assumed known; instead, the sequence of estimates from past experiments is used to estimate θ.
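The idea above can be sketched in a simple parametric setting. The code below is not the dissertation's method; it is a minimal illustration assuming a normal-normal model, where each past experiment yields an estimate x_i ~ N(θ_i, σ²) with θ_i drawn from an unknown N(μ, τ²) prior, and the prior's moments are estimated from the past estimates themselves:

```python
import statistics

def eb_estimate(past, current, sigma2):
    """Parametric empirical Bayes (normal-normal) shrinkage.

    past    : estimates from previous experiments, x_i ~ N(theta_i, sigma2),
              where each theta_i is drawn from an unknown N(mu, tau2) prior
    current : the estimate from the current experiment
    sigma2  : (assumed known) sampling variance of each estimate
    """
    mu_hat = statistics.fmean(past)                       # estimate prior mean
    tau2_hat = max(statistics.pvariance(past) - sigma2, 0.0)  # prior variance
    w = tau2_hat / (tau2_hat + sigma2)                    # shrinkage weight
    # Shrink the current estimate toward the estimated prior mean.
    return mu_hat + w * (current - mu_hat)
```

When the past estimates are nearly identical the estimated prior variance is zero and the current estimate is shrunk fully to their mean; when the sampling variance is negligible no shrinkage occurs.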

This dissertation develops empirical Bayes estimates of parameters in various time series settings: the autoregressive model, the moving average model, the mixed autoregressive-moving average model, regression with time series errors, regression with unobservable variables, serial correlation, multiple time series, and the spectral density function. In each case, empirical Bayes estimators are obtained using the asymptotic distributions of the usual estimators.

By Monte Carlo simulation, the empirical Bayes estimator of the first-order autoregressive parameter, ρ, was shown to have smaller mean squared error than the conditional maximum likelihood estimator with 11 past experiences.

Doctor of Philosophy
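A toy Monte Carlo along these lines can be sketched as follows. This is not the dissertation's simulation design: the uniform prior on ρ, the sample sizes, the number of experiments, and the normal-normal shrinkage form (using the plug-in asymptotic variance (1 − ρ̂²)/n of the conditional MLE) are all illustrative assumptions:

```python
import random
import statistics

def ar1_cmle(x):
    """Conditional maximum likelihood estimate of rho for an AR(1) series."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

def simulate_ar1(rho, n, rng):
    """Generate an AR(1) series x_t = rho * x_{t-1} + e_t with N(0,1) errors."""
    x = [rng.gauss(0, 1)]
    for _ in range(n - 1):
        x.append(rho * x[-1] + rng.gauss(0, 1))
    return x

rng = random.Random(1982)
n = 50                                     # length of each series
true_rhos, cmle_hats = [], []
for _ in range(200):                       # 200 repeated "experiences"
    rho = rng.uniform(0.2, 0.6)            # rho varies randomly across experiments
    true_rhos.append(rho)
    cmle_hats.append(ar1_cmle(simulate_ar1(rho, n, rng)))

# Empirical Bayes: shrink each CMLE toward the grand mean of past estimates,
# treating (1 - rho_hat^2)/n as the (asymptotic) sampling variance of the CMLE.
mu_hat = statistics.fmean(cmle_hats)
var_hat = statistics.pvariance(cmle_hats)
eb_hats = []
for r in cmle_hats:
    s2 = (1 - r * r) / n                   # plug-in asymptotic variance
    tau2 = max(var_hat - s2, 0.0)          # estimated prior variance
    eb_hats.append(mu_hat + tau2 / (tau2 + s2) * (r - mu_hat))

def mse(est):
    return statistics.fmean((e - t) ** 2 for e, t in zip(est, true_rhos))

print("CMLE MSE:", mse(cmle_hats), " EB MSE:", mse(eb_hats))
```

With the parameter variation across experiments comparable in size to the sampling error of each estimate, the shrinkage typically cuts the mean squared error noticeably, which is the qualitative effect the abstract reports.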

Identifier: oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/88723
Date: January 1982
Creators: Khoshgoftaar, Taghi M.
Contributors: Statistics, Krutchkoff, Richard G., Hinkelmann, Klaus, Foutz, Robert V., Myers, Raymond, Mann, Jerry E.
Publisher: Virginia Polytechnic Institute and State University
Source Sets: Virginia Tech Theses and Dissertation
Language: en_US
Detected Language: English
Type: Dissertation, Text
Format: vi, 100, [1] leaves, application/pdf
Rights: In Copyright, http://rightsstatements.org/vocab/InC/1.0/
Relation: OCLC# 8908468
