
Convex regression and its extensions to learning a Bregman divergence and difference of convex functions

Nonparametric convex regression has been extensively studied over the last two decades. It has been shown that any Lipschitz convex function can be approximated to arbitrary accuracy by a maximum of affine functions. Using this framework, in this thesis we generalize convex regression to learning an arbitrary Bregman divergence and to learning a difference of convex functions. We provide approximation guarantees and sample complexity bounds for both extensions. Furthermore, we provide algorithms to solve the resulting optimization problems based on a 2-block alternating direction method of multipliers (ADMM). For this algorithm, we provide convergence guarantees with iteration complexity of O(n√d/ε) for a dataset X ∈ ℝ^{n×d} and arbitrary positive ε. Finally, we present experiments on UCI datasets for both Bregman divergence learning and difference-of-convex-functions learning, demonstrating state-of-the-art performance on regression and classification tasks.
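To make the max-of-affine framework concrete, the sketch below fits the classical least-squares convex regression estimator: it finds fitted values and subgradients at the sample points subject to convexity constraints, and predicts at new points as the maximum of the resulting affine pieces. This is an illustrative sketch only, using cvxpy and hypothetical synthetic data; the variable names (theta, g, predict) are ours, and it is not the thesis's 2-block ADMM algorithm, which is designed to scale beyond what a generic QP solver handles.

```python
import numpy as np
import cvxpy as cp

# Hypothetical synthetic data: noisy samples of a convex target f(x) = ||x||^2.
rng = np.random.default_rng(0)
n, d = 30, 2
X = rng.normal(size=(n, d))
y = np.sum(X**2, axis=1) + 0.1 * rng.normal(size=n)

theta = cp.Variable(n)       # fitted function values at the sample points
g = cp.Variable((n, d))      # subgradients at the sample points

# Convexity constraints: every affine minorant at x_i must lie below theta_j.
constraints = [
    theta[j] >= theta[i] + g[i] @ (X[j] - X[i])
    for i in range(n) for j in range(n) if i != j
]

# Least-squares convex regression QP.
prob = cp.Problem(cp.Minimize(cp.sum_squares(y - theta)), constraints)
prob.solve()

def predict(x_new):
    # The estimator is the pointwise maximum of the fitted affine pieces:
    # max_i [ theta_i + g_i^T (x_new - x_i) ].
    return np.max(theta.value + g.value @ x_new - np.sum(g.value * X, axis=1))

print(predict(np.array([0.5, -0.5])))  # should be near 0.5 for this target
```

Because the QP has O(n^2) constraints, a generic solver becomes expensive as n grows, which is one motivation for the ADMM-based approach described in the abstract.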

Identifier: oai:union.ndltd.org:bu.edu/oai:open.bu.edu:2144/43719
Date: 26 January 2022
Creators: Siahkamari, Ali
Contributors: Kulis, Brian; Saligrama, Venkatesh
Source Sets: Boston University
Language: en_US
Detected Language: English
Type: Thesis/Dissertation
