This thesis, which consists of four chapters, focuses on forecasting in a data-rich environment and the computational issues that arise in it.

Chapter 1, "An embarrassment of riches: Forecasting using large panels", explores the idea of combining forecasts from various indicator models using Bayesian model averaging (BMA) and compares the predictive performance of BMA with that of factor models. A combination of the two methods is also implemented, along with a benchmark, a simple autoregressive model. The forecast comparison is conducted in a pseudo out-of-sample framework on three distinct datasets measured at different frequencies: monthly and quarterly US datasets consisting of more than 140 predictors, and a quarterly Swedish dataset with 77 possible predictors. The results show that none of the considered methods is uniformly superior and that no method consistently outperforms or underperforms a simple autoregressive process.

Chapter 2, "Forecast combination using predictive measures", proposes using the out-of-sample predictive likelihood as the basis for BMA and forecast combination. In addition to its intuitive appeal, the predictive likelihood removes the need to specify proper priors for the parameters of each model. We show that forecast weights based on the predictive likelihood have desirable asymptotic properties, and that they have better small-sample properties than weights based on the traditional in-sample marginal likelihood when uninformative priors are used. Calculating the weights for the combined forecast requires setting aside a number of observations, a hold-out sample, and its size involves a trade-off: the number of observations available for estimation is reduced, which might have a detrimental effect, but as the hold-out sample grows, the predictive measure becomes more stable, which should improve performance.
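The BMA combination described above can be sketched as a posterior-probability-weighted average of the individual model forecasts. This is a minimal illustration, not the thesis's implementation: it assumes the log marginal likelihood of each model has already been computed (by whatever approximation is in use) and that models carry equal prior probability.

```python
import numpy as np

def bma_combine(forecasts, log_marginal_liks):
    """Combine per-model point forecasts by Bayesian model averaging.

    forecasts: one point forecast per indicator model.
    log_marginal_liks: log marginal likelihood per model; with equal model
    priors, posterior model probabilities are proportional to their exp.
    Returns the combined forecast and the weight vector.
    """
    logw = np.asarray(log_marginal_liks, dtype=float)
    logw -= logw.max()            # stabilise the exponentiation
    w = np.exp(logw)
    w /= w.sum()                  # posterior model probabilities
    return float(np.dot(w, forecasts)), w
```

With two models of equal marginal likelihood, `bma_combine([1.0, 2.0], [0.0, 0.0])` returns the simple average 1.5 with weights (0.5, 0.5); as one marginal likelihood dominates, the combination collapses toward that model's forecast.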
When the true model is in the model set, the predictive likelihood selects it asymptotically, but convergence to the true model is slower than for the marginal likelihood. It is this slower convergence, coupled with protection against overfitting, that explains why the predictive likelihood performs better when the true model is not in the model set.

In Chapter 3, "Forecasting GDP with factor models and Bayesian forecast combination", the predictive likelihood approach developed in the previous chapter is applied to forecasting GDP growth. The analysis is performed on quarterly economic datasets from six countries: Canada, Germany, Great Britain, Italy, Japan and the United States. The forecast combination technique, based on both in-sample and out-of-sample weights, is compared with forecasts based on factor models. The traditional point forecast analysis is extended by considering confidence intervals. The results indicate that forecast combinations based on predictive likelihood weights outperform both the factor models and forecast combinations based on the traditional in-sample weights. In contrast to common findings, the predictive likelihood does improve upon an autoregressive process for longer horizons. The largest improvement over the in-sample weights occurs for small hold-out sample sizes, which provide protection against structural breaks at the end of the sample period.

The potential benefits of model averaging as a tool for extracting the relevant information from a large set of predictor variables come at the cost of considerable computational complexity. To avoid evaluating all the models, several approaches have been developed to simulate from the posterior distributions. Markov chain Monte Carlo methods can be used to draw directly from the model posterior distribution.
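The hold-out weighting scheme can be illustrated as follows. This is a simplified sketch under stated assumptions, not the thesis's exact procedure: each model is a linear regression fitted by least squares on the first T − m observations, and its weight is proportional to a plug-in Gaussian predictive likelihood of the last m hold-out observations.

```python
import numpy as np
from math import log, pi

def pred_lik_weights(y, X_list, m):
    """Combination weights from the out-of-sample predictive likelihood.

    y: target series of length T; X_list: one (T x k) regressor matrix per
    model; m: hold-out sample size. Each model is fit on the first T - m
    observations, then scored by the Gaussian predictive likelihood of the
    m hold-out observations using plug-in parameter estimates.
    """
    y = np.asarray(y, dtype=float)
    n = len(y) - m                                  # estimation sample size
    log_pl = []
    for X in X_list:
        X = np.asarray(X, dtype=float)
        beta, *_ = np.linalg.lstsq(X[:n], y[:n], rcond=None)
        resid = y[:n] - X[:n] @ beta
        sigma2 = resid @ resid / n                  # plug-in error variance
        e = y[n:] - X[n:] @ beta                    # hold-out forecast errors
        log_pl.append(-0.5 * m * log(2 * pi * sigma2)
                      - 0.5 * (e @ e) / sigma2)
    log_pl = np.array(log_pl)
    w = np.exp(log_pl - log_pl.max())               # stabilised exponentiation
    return w / w.sum()
```

The trade-off discussed above shows up directly here: a larger `m` makes `log_pl` a more stable ranking of the models, but leaves fewer observations `n` for estimating each `beta`.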
It is desirable that the chain moves well through the model space and draws from regions of high probability. Several computationally efficient sampling schemes, updating either one variable at a time or in blocks, have been proposed to speed up convergence. There is a trade-off between local moves, which use the current parameter values to propose plausible new values, and more global transitions, which potentially allow faster exploration of the distribution of interest but may be much harder to implement efficiently. Local model moves enable fast updating schemes, in which the new, slightly modified, model need not be completely re-estimated to obtain an updated solution.

The fourth and final chapter, "Computational efficiency in Bayesian model and variable selection", investigates the possibility of increasing computational efficiency by using alternative algorithms to obtain estimates of model parameters, as well as keeping track of their numerical accuracy. Various samplers that explore the model space are also presented and compared based on the output of the Markov chain. / Diss. Stockholm : Handelshögskolan, 2006
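A local-move sampler of the kind described above can be sketched with a random-walk Metropolis chain over variable-inclusion vectors, where each proposal flips a single indicator in or out. This is an illustrative sketch, not the samplers studied in the thesis: the model-scoring function `score` is assumed to return a log model score such as a (possibly approximated) log marginal likelihood.

```python
import numpy as np

def local_move_sampler(score, p, n_iter=1000, seed=0):
    """Metropolis sampler over models coded as 0/1 inclusion vectors.

    score(gamma): log model score (e.g. log marginal likelihood) for an
    inclusion vector gamma of length p. Each step proposes a local move,
    flipping one variable in or out, and accepts with the Metropolis
    probability, so the chain concentrates on high-probability models.
    Returns a dict of visit counts per model.
    """
    rng = np.random.default_rng(seed)
    gamma = np.zeros(p, dtype=int)
    s = score(gamma)
    visits = {}
    for _ in range(n_iter):
        j = rng.integers(p)
        proposal = gamma.copy()
        proposal[j] ^= 1                          # flip one inclusion indicator
        s_prop = score(proposal)
        if np.log(rng.random()) < s_prop - s:     # symmetric proposal kernel
            gamma, s = proposal, s_prop
        key = tuple(gamma)
        visits[key] = visits.get(key, 0) + 1
    return visits
```

Because only one indicator changes per step, this is exactly the setting where the fast updating schemes mentioned above apply: the modified model differs from the current one by a single column, so its score can often be updated incrementally instead of re-estimated from scratch.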
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:hhs-490 |
Date | January 2006 |
Creators | Eklund, Jana |
Publisher | Handelshögskolan i Stockholm, Ekonomisk Statistik (ES), Stockholm : Economic Research Institute, Stockholm School of Economics [Ekonomiska forskningsinstitutet vid Handelshögskolan i Stockholm] (EFI) |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Doctoral thesis, comprehensive summary, info:eu-repo/semantics/doctoralThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |