
Automatic step-size adaptation in incremental supervised learning

The performance and stability of many iterative algorithms, such as stochastic gradient descent, depend largely on a fixed, scalar step-size parameter, and using a single fixed value can limit performance on many problems. We study several existing step-size adaptation algorithms on nonstationary, supervised learning problems with simulated and real-world data. We find that the existing algorithms are effective only when a meta-parameter is tuned separately for each problem. We introduce a new algorithm, Autostep, which combines several new techniques with an existing algorithm, and demonstrate that it can effectively adapt a vector step-size parameter on all of our training and test problems without tuning its meta-parameter across them. Autostep is the first step-size adaptation algorithm that can be used on widely different problems with the same setting of all of its parameters.
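The abstract describes Autostep only at a high level. As a rough illustration of what per-feature (vector) step-size adaptation of this kind looks like, here is a minimal sketch for incremental linear regression in the spirit of Autostep; the exact update rules, parameter names (alpha_init, mu, tau), and defaults below are illustrative assumptions, not the thesis's specification.

    import numpy as np

    class Autostep:
        """Sketch of per-feature (vector) step-size adaptation for
        incremental linear regression, in the spirit of Autostep.
        Update rules and meta-parameter names are illustrative."""

        def __init__(self, n_features, alpha_init=0.1, mu=0.01, tau=1e4):
            self.w = np.zeros(n_features)                  # weights
            self.alpha = np.full(n_features, alpha_init)   # per-feature step sizes
            self.h = np.zeros(n_features)                  # trace of recent updates
            self.v = np.zeros(n_features)                  # meta-gradient normalizer
            self.mu, self.tau = mu, tau

        def update(self, x, y):
            delta = y - self.w @ x            # prediction error
            grad = delta * x * self.h         # per-feature meta-gradient signal

            # Track the magnitude of the meta gradient so that the meta
            # step size mu does not need per-problem tuning.
            self.v = np.maximum(
                np.abs(grad),
                self.v + (self.alpha * x * x / self.tau) * (np.abs(grad) - self.v),
            )
            nz = self.v != 0
            self.alpha[nz] *= np.exp(self.mu * grad[nz] / self.v[nz])

            # Rescale so the combined step cannot overshoot on this example.
            m = max(np.dot(self.alpha, x * x), 1.0)
            self.alpha /= m

            # LMS-style weight update with the adapted vector step size,
            # then decay the update trace.
            self.w += self.alpha * delta * x
            self.h = self.h * np.maximum(1.0 - self.alpha * x * x, 0.0) \
                     + self.alpha * delta * x
            return delta

    # Usage: track a slowly drifting linear target with one fixed setting.
    rng = np.random.default_rng(0)
    learner = Autostep(n_features=10)
    w_true = rng.normal(size=10)
    for t in range(5000):
        x = rng.normal(size=10)
        w_true += 0.001 * rng.normal(size=10)              # nonstationarity
        learner.update(x, w_true @ x + 0.1 * rng.normal())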

Identifier: oai:union.ndltd.org:LACETR/oai:collectionscanada.gc.ca:AEU.10048/1550
Date: 11 1900
Creators: Mahmood, Ashique
Contributors: Sutton, Richard S. (Computing Science), Shah, Sirish L. (Chemical and Materials Engineering), Schuurmans, Dale (Computing Science)
Source Sets: Library and Archives Canada ETDs Repository / Centre d'archives des thèses électroniques de Bibliothèque et Archives Canada
Language: en_US
Detected Language: English
Type: Thesis
Format: 4528454 bytes, application/pdf
