
Adaptive Curvature for Stochastic Optimization

Abstract: This thesis presents a family of adaptive curvature methods for gradient-based stochastic optimization. In particular, a general algorithmic framework is introduced along with a practical implementation that yields an efficient, adaptive curvature gradient descent algorithm. To this end, a theoretical and practical link between curvature matrix estimation and shrinkage methods for covariance matrices is established. The use of shrinkage improves the estimation accuracy of the curvature matrix when data samples are scarce. The thesis also introduces several insights that result in data- and computation-efficient update equations. Empirical results suggest that the proposed method compares favorably with existing second-order techniques based on the Fisher information matrix or the Gauss-Newton matrix, as well as with adaptive stochastic gradient descent methods, on both supervised and reinforcement learning tasks.
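The shrinkage idea the abstract links to curvature estimation can be illustrated with a minimal sketch: a Ledoit-Wolf-style convex combination of an empirical curvature estimate with a scaled-identity target. The function name `shrunk_curvature`, the fixed shrinkage intensity `alpha`, and the use of per-sample gradients to form a Fisher-like second-moment matrix are illustrative assumptions here, not the thesis's actual estimator.

```python
# Sketch: shrinkage-regularized curvature estimation from few gradient
# samples. The blending rule (1 - alpha) * C_sample + alpha * mu * I is the
# standard Ledoit-Wolf-style convex combination toward a scaled identity;
# the fixed alpha and the Fisher-like second-moment construction below are
# assumptions for illustration, not the method proposed in the thesis.
import numpy as np

def shrunk_curvature(grads: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """Estimate a curvature matrix from per-sample gradients with shrinkage.

    grads: (n_samples, n_params) array of per-sample gradients.
    alpha: shrinkage intensity in [0, 1]; fixed here, though a data-driven
           choice (e.g., Ledoit-Wolf) is the usual practice.
    """
    n, d = grads.shape
    # Empirical second-moment (Fisher-like) curvature estimate.
    c_sample = grads.T @ grads / n
    # Shrinkage target: scaled identity matching the average eigenvalue.
    mu = np.trace(c_sample) / d
    # Pull noisy entries toward the target; this keeps the estimate
    # well-conditioned when n is small relative to d.
    return (1.0 - alpha) * c_sample + alpha * mu * np.eye(d)

# Usage: with only 8 samples in 32 dimensions, the raw estimate is
# rank-deficient, but the shrunk estimate remains invertible.
rng = np.random.default_rng(0)
g = rng.normal(size=(8, 32))
print(np.linalg.cond(shrunk_curvature(g)))
```

When data samples are scarce, the sample curvature matrix is singular or badly conditioned; the convex combination above trades a small bias toward the identity target for a large reduction in estimation variance, which is the practical benefit the abstract attributes to shrinkage.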

Identifier: oai:union.ndltd.org:asu.edu/item:53675
Date: January 2019
Contributors: Barron, Trevor Paul (Author), Ben Amor, Heni (Advisor), He, Jingrui (Committee member), Levihn, Martin (Committee member), Arizona State University (Publisher)
Source Sets: Arizona State University
Language: English
Detected Language: English
Type: Masters Thesis (Computer Science)
Format: 64 pages
Rights: http://rightsstatements.org/vocab/InC/1.0/
