
Improved iterative schemes for REML estimation of variance parameters in linear mixed models.

Residual maximum likelihood (REML) estimation is a popular method for estimating variance parameters in linear mixed models, and it typically requires an iterative scheme. The aim of this thesis is to review several popular iterative schemes and to develop an improved iterative strategy that works for a wide class of models. The average information (AI) algorithm is computationally convenient and efficient when the starting values are in the neighbourhood of the REML solution; however, when reasonable starting values are not available, it can fail to converge. The expectation-maximisation (EM) algorithm and the parameter-expanded EM (PXEM) algorithm are good alternatives in these situations, but they can be very slow to converge. The formulation of these algorithms for a general linear mixed model is presented, along with their convergence properties. A series of hybrid algorithms is presented: EM or PXEM iterations are used initially to obtain variance parameter estimates in the neighbourhood of the REML solution, and AI iterations are then used to ensure rapid convergence. Composite local EM/AI and local PXEM/AI schemes are also developed; the local EM and local PXEM algorithms update only the random effect variance parameters, with the estimates of the residual error variance parameters held fixed. Techniques for determining when to use EM-type iterations and when to switch to AI iterations are investigated, and methods for obtaining starting values for the iterative schemes are presented. The performance of these schemes is investigated for several different linear mixed models, using a number of data sets, both published and simulated. The performance of the basic algorithms is compared with that of the various hybrid algorithms, using both uninformed and informed starting values, and the theoretical and empirical convergence rates of the basic algorithms are calculated and compared. A direct comparison of the AI and PXEM algorithms shows that the PXEM algorithm, although an improvement over the EM algorithm, still falls well short of the AI algorithm in speed of convergence. However, when the starting values are too far from the REML solution, the AI algorithm can be unstable; instability is most likely to arise in models with more complex variance structures. The hybrid schemes use EM-type iterations to move close enough to the REML solution for the AI algorithm to converge successfully. They are shown to be as robust to the choice of starting values as the EM and PXEM algorithms, while converging as rapidly as the AI algorithm.

Thesis (Ph.D.) - University of Adelaide, School of Agriculture, Food and Wine, 2008
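The hybrid strategy described above can be illustrated with a minimal sketch (Python/NumPy) for the simplest case of a model y = Xb + Zu + e with two variance parameters, a random effect variance and a residual variance. This is an illustration only, not the thesis's implementation: the function names, the fixed number of initial EM iterations, and the two-component variance structure are assumptions made for the example, whereas the thesis develops refined switching rules, local EM/PXEM variants, and general variance structures.

# Sketch of a hybrid EM/AI REML scheme for y = X b + Z u + e,
# with u ~ N(0, s2_u I) and e ~ N(0, s2_e I).
# The derivative matrices are G_1 = Z Z' and G_2 = I, and theta = (s2_u, s2_e).
import numpy as np

def reml_pieces(y, X, Z, theta):
    """Return P, Py, the derivative matrices, the REML score and the AI matrix."""
    s2_u, s2_e = theta
    n = len(y)
    G = [Z @ Z.T, np.eye(n)]                        # dV/dtheta_i for each component
    V = s2_u * G[0] + s2_e * G[1]
    Vinv = np.linalg.inv(V)
    XtVinvX = X.T @ Vinv @ X
    P = Vinv - Vinv @ X @ np.linalg.solve(XtVinvX, X.T @ Vinv)
    Py = P @ y
    # REML score: s_i = -0.5 * (tr(P G_i) - y'P G_i P y)
    score = np.array([-0.5 * (np.trace(P @ Gi) - Py @ Gi @ Py) for Gi in G])
    # Average information: AI_ij = 0.5 * y'P G_i P G_j P y
    AI = 0.5 * np.array([[Py @ Gi @ P @ Gj @ Py for Gj in G] for Gi in G])
    return P, Py, G, score, AI

def em_update(y, X, Z, theta):
    """One EM-type step: theta_i += theta_i^2 / q_i * (y'P G_i P y - tr(P G_i)).
    q_i is taken as the number of random-effect levels, and n for the residual
    (a common convention; the fixed point is the REML solution either way)."""
    P, Py, G, _, _ = reml_pieces(y, X, Z, theta)
    q = np.array([Z.shape[1], len(y)])
    return np.array([th + th**2 / qi * (Py @ Gi @ Py - np.trace(P @ Gi))
                     for th, qi, Gi in zip(theta, q, G)])

def ai_update(y, X, Z, theta):
    """One average information (Newton-like) step: theta += AI^{-1} * score."""
    _, _, _, score, AI = reml_pieces(y, X, Z, theta)
    return theta + np.linalg.solve(AI, score)

def hybrid_reml(y, X, Z, theta0, n_em=5, tol=1e-8, max_iter=100):
    """Run a few robust EM iterations first, then switch to fast AI iterations.
    A fixed EM count is a simplification of the switching rules studied in the thesis."""
    theta = np.asarray(theta0, dtype=float)
    for it in range(max_iter):
        step = em_update if it < n_em else ai_update
        new = np.maximum(step(y, X, Z, theta), 1e-10)   # crude guard against negative estimates
        if np.max(np.abs(new - theta)) < tol:
            return new
        theta = new
    return theta

The sketch forms V and P explicitly for clarity; practical REML software instead works with the mixed model equations and sparse matrix methods, but the EM-then-AI control flow is the same idea.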

Identifier: oai:union.ndltd.org:ADTP/264562
Date: January 2008
Creators: Knight, Emma
Source Sets: Australasian Digital Theses Program
Detected Language: English
