
Algorithm-tailored error bound conditions and the linear convergence rate of ADMM

In the literature, error bound conditions have been widely used to study the linear convergence rates of various first-order algorithms, and most existing work focuses on sufficient conditions that guarantee these error bounds, usually by imposing additional assumptions on the model under discussion. In this thesis, we focus on the alternating direction method of multipliers (ADMM) and show that the known error bound conditions used to study ADMM's linear convergence can be further weakened if the error bound is examined over the specific iterative sequence generated by ADMM. A so-called partial error bound condition, which is tailored to ADMM's iterative scheme and weaker than the error bound conditions known in the literature, is thus proposed to derive the linear convergence of ADMM. We further show that this partial error bound condition theoretically justifies the difference observed when the two primal variables are updated in different orders in implementing ADMM, a phenomenon that had been reported empirically in the literature but for which no theory was previously known.
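For context, a minimal sketch of the generic two-block setting in which ADMM is usually stated; the symbols $f$, $g$, $A$, $B$, $b$ and the penalty parameter $\beta$ below are the standard generic notation of the ADMM literature, not taken from the thesis itself:

\begin{align*}
&\min_{x,\,y}\; f(x) + g(y) \quad \text{s.t.}\quad Ax + By = b,\\[2pt]
&x^{k+1} = \operatorname*{arg\,min}_{x}\; \mathcal{L}_{\beta}(x, y^{k}, \lambda^{k}),\\
&y^{k+1} = \operatorname*{arg\,min}_{y}\; \mathcal{L}_{\beta}(x^{k+1}, y, \lambda^{k}),\\
&\lambda^{k+1} = \lambda^{k} - \beta\,(Ax^{k+1} + By^{k+1} - b),
\end{align*}

where $\mathcal{L}_{\beta}(x,y,\lambda) = f(x) + g(y) - \lambda^{\top}(Ax + By - b) + \tfrac{\beta}{2}\,\|Ax + By - b\|^{2}$ is the augmented Lagrangian. The "different orders" mentioned above refer to solving the $y$-subproblem before the $x$-subproblem (or vice versa) within each iteration, which in general produces different iterative sequences.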

Identifier: oai:union.ndltd.org:hkbu.edu.hk/oai:repository.hkbu.edu.hk:etd_oa-1474
Date: 30 October 2017
Creators: Zeng, Shangzhi
Publisher: HKBU Institutional Repository
Source Sets: Hong Kong Baptist University
Language: English
Detected Language: English
Type: text
Format: application/pdf
Source: Open Access Theses and Dissertations
