
Convergence of Adaptive Markov Chain Monte Carlo Algorithms

In this thesis, we study the ergodicity of adaptive Markov chain Monte Carlo (MCMC) methods through two conditions, Diminishing Adaptation and Containment, which together imply ergodicity; we explain the advantages of adaptive MCMC, and apply the theoretical results to several applications.
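For reference, the two conditions can be stated in the standard form (notation as in the Roberts–Rosenthal framework, which this abstract follows; the thesis's exact statements may differ in detail). Here $P_\gamma$ is the transition kernel with adaptation parameter $\gamma$, $\Gamma_n$ is the parameter in use at time $n$, and $M_\epsilon(x,\gamma) = \inf\{n \ge 1 : \|P_\gamma^n(x,\cdot) - \pi(\cdot)\|_{TV} \le \epsilon\}$ is the $\epsilon$-convergence time:

```latex
% Diminishing Adaptation:
\lim_{n\to\infty}\ \sup_{x\in\mathcal{X}}
  \bigl\| P_{\Gamma_{n+1}}(x,\cdot) - P_{\Gamma_n}(x,\cdot) \bigr\|_{TV} = 0
  \quad \text{in probability.}

% Containment: for every } \epsilon > 0,
\{\, M_\epsilon(X_n, \Gamma_n) \,\}_{n=0}^{\infty}
  \ \text{is bounded in probability.}
```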
First, we show several facts: (1) Diminishing Adaptation alone does not guarantee ergodicity; (2) Containment is not necessary for ergodicity; (3) under an additional condition, Containment is necessary for ergodicity. Since Diminishing Adaptation is relatively easy to check while Containment is abstract, we focus on sufficient conditions for Containment. To study Containment, we consider quantitative bounds on the total variation distance between the samplers and the target. From earlier results, such quantitative bounds are connected with nested drift conditions for polynomial rates of convergence. Assuming that all samplers simultaneously satisfy nested polynomial drift conditions, we find that the adaptive MCMC algorithm is ergodic either when the number of nested drift conditions is at least two, or when there is a single drift condition of a specific form. For adaptive MCMC algorithms with Markovian adaptation, simultaneous polynomial ergodicity implies ergodicity without these restrictions. We also discuss some recent results related to this topic.
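A polynomial drift condition of the kind referred to above is standardly written as follows (a generic form for orientation, not necessarily the exact inequality used in the thesis): for a function $V \ge 1$, constants $c, b > 0$, an exponent $0 < \alpha < 1$, and a small set $C$,

```latex
P_\gamma V(x) \;\le\; V(x) - c\,V^{\alpha}(x) + b\,\mathbf{1}_C(x),
\qquad x \in \mathcal{X}.
```

Nested drift conditions chain several such inequalities through a sequence of functions $V_1, \dots, V_k$, yielding polynomial rates of convergence.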
Second, we consider the ergodicity of certain adaptive MCMC algorithms for multidimensional target distributions, in particular adaptive Metropolis and adaptive Metropolis-within-Gibbs algorithms. We derive various sufficient conditions ensuring Containment, and connect the convergence rates of the algorithms with the tail properties of the corresponding target distributions. We also present a Summable Adaptive Condition under which ergodicity is easier to prove.
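The adaptive Metropolis algorithm mentioned above can be sketched in the usual Haario-style form, where the proposal covariance is adapted using the empirical covariance of the chain's history; the scaling constant, regularization `eps`, and update recursion below are standard illustrative choices, not the thesis's exact specification:

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_iter, eps=1e-6, seed=0):
    """Haario-style adaptive Metropolis sketch: the Gaussian proposal
    covariance tracks the empirical covariance of past samples."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    sd = 2.38**2 / d                      # classical dimension-dependent scaling
    x = np.asarray(x0, dtype=float)
    chain = np.empty((n_iter, d))
    mean = x.copy()
    cov = np.eye(d)
    for n in range(n_iter):
        # Small fixed component keeps the proposal covariance nondegenerate.
        prop_cov = sd * cov + eps * np.eye(d)
        y = rng.multivariate_normal(x, prop_cov)
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x = y
        chain[n] = x
        # Recursive updates of the empirical mean and covariance.
        delta = x - mean
        mean += delta / (n + 2)
        cov += (np.outer(delta, x - mean) - cov) / (n + 2)
    return chain
```

Because the adaptation step sizes decay like $1/n$, the kernel changes vanish over time, which is how Diminishing Adaptation is typically arranged in this family of algorithms.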
Finally, we propose a simple adaptive Metropolis-within-Gibbs algorithm that attempts to identify directions along which the Metropolis algorithm can move flexibly. The algorithm avoids wasted moves in wrong directions by drawing proposals from the full-dimensional adaptive Metropolis algorithm. We prove its ergodicity, and test it on a Gaussian Needle example and a real-life case-cohort study with competing risks. For the cohort study, we describe an extended version of the competing risks regression model, define censoring variables for the competing risks, and then apply the algorithm to estimate the coefficients from the posterior distribution.
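The basic adaptive Metropolis-within-Gibbs idea underlying such algorithms can be sketched as follows: each coordinate keeps its own log proposal scale, nudged after every batch toward the usual 0.44 acceptance rate for one-dimensional updates, with an adaptation step that shrinks to zero so Diminishing Adaptation holds. The batch size and adaptation schedule here are illustrative, not the thesis's proposed algorithm:

```python
import numpy as np

def adaptive_mwg(log_target, x0, n_batches, batch_size=50, seed=0):
    """Adaptive Metropolis-within-Gibbs sketch: per-coordinate log
    proposal scales ls[i] are adjusted after each batch toward a 0.44
    acceptance rate; the adjustment size decays, so adaptation diminishes."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = len(x)
    ls = np.zeros(d)                      # log proposal std per coordinate
    chain = []
    for b in range(1, n_batches + 1):
        accepted = np.zeros(d)
        for _ in range(batch_size):
            for i in range(d):            # one Metropolis step per coordinate
                y = x.copy()
                y[i] += np.exp(ls[i]) * rng.normal()
                if np.log(rng.uniform()) < log_target(y) - log_target(x):
                    x = y
                    accepted[i] += 1
            chain.append(x.copy())
        # Diminishing adaptation: step size shrinks like 1/sqrt(b).
        delta = min(0.1, b ** -0.5)
        ls += np.where(accepted / batch_size > 0.44, delta, -delta)
    return np.array(chain), np.exp(ls)
```

The thesis's variant additionally mixes in proposals from the full-dimensional adaptive Metropolis algorithm to pick useful directions; that component is omitted from this minimal sketch.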

Identifer: oai:union.ndltd.org:TORONTO/oai:tspace.library.utoronto.ca:1807/24673
Date: 04 August 2010
Creators: Bai, Yan
Contributors: Rosenthal, Jeffrey S.
Source Sets: University of Toronto
Language: en_ca
Detected Language: English
Type: Thesis
