
Sparsity and robustness in modern statistical estimation

Thesis: Ph.D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from the student-submitted PDF version of the thesis.
Includes bibliographical references (pages 219-230).

Two principles at the forefront of modern machine learning and statistics are sparse modeling and robustness. Sparse modeling enables the construction of simpler statistical models, with examples including the Lasso and matrix completion. At the same time, statistical models need to be robust, performing well even when data are noisy, in order to support reliable decisions. While sparsity and robustness are often closely related, the exact relationship and the resulting trade-offs are not always transparent. For example, convex penalties like the Lasso are often motivated by sparsity considerations, yet the success of these methods is also driven by their robustness. In this thesis, we develop new statistical methods for sparse and robust modeling and clarify the relationship between these two principles.

The first portion of the thesis focuses on a new methodological approach to the classical multivariate statistical problem of factor analysis: finding a low-dimensional description of the covariance structure among a set of random variables. Here we propose and analyze a practically tractable family of estimators for this problem. Our approach allows us to exploit bilinearity and eigenvalue structure and thereby show that convex heuristics obtain optimal estimators in many instances.

In the latter portion of the thesis, we develop a unified perspective on the various penalty methods employed throughout statistical learning. In doing so, we provide a precise characterization of the relationship between robust optimization and the more traditional penalization approach. Further, we show how the threads of optimization under uncertainty and sparse modeling come together in the trimmed Lasso, a penalization approach to the best subset selection problem. We also situate the trimmed Lasso within the broader penalty methods literature by characterizing its relationship to the usual separable penalty approaches; as a result, we show that this estimation scheme leads to a richer class of models.

by Martin Steven Copenhaver.
Ph.D.
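For context, the factor analysis problem named in the abstract admits a standard formulation (the notation below is a conventional sketch, not taken from the thesis itself): approximate a covariance matrix by a low-rank part plus a diagonal part,

\[
\Sigma \approx \Lambda \Lambda^\top + \Phi,
\]

where \Sigma \in \mathbb{R}^{p \times p} is the covariance matrix, \Lambda \in \mathbb{R}^{p \times r} collects the factor loadings with r \ll p, and \Phi \succeq 0 is diagonal. The low-rank term \Lambda \Lambda^\top is what provides the "low-dimensional description of covariance structure."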
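The stated relationship between robust optimization and penalization can be illustrated by a well-known identity for linear regression (a sketch under standard assumptions; the thesis establishes a precise and more general characterization): if each column \Delta_j of a perturbation matrix \Delta is bounded in Euclidean norm by \lambda, then

\[
\min_{\beta} \; \max_{\|\Delta_j\|_2 \le \lambda, \; j = 1, \ldots, p} \; \|y - (X + \Delta)\beta\|_2
\;=\;
\min_{\beta} \; \|y - X\beta\|_2 + \lambda \|\beta\|_1.
\]

In this sense the Lasso penalty arises as exact protection against feature-wise perturbations of the design matrix X, not only as a sparsity heuristic.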
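The trimmed Lasso can likewise be stated concisely (again a conventional sketch rather than the thesis's own notation): for a target sparsity level k, the penalty sums the p - k smallest-magnitude coefficients,

\[
T_k(\beta) \;=\; \sum_{i = k+1}^{p} |\beta_{(i)}|,
\qquad |\beta_{(1)}| \ge |\beta_{(2)}| \ge \cdots \ge |\beta_{(p)}|,
\]

so that T_k(\beta) = 0 exactly when \beta has at most k nonzero entries. In a problem such as \min_{\beta} \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda T_k(\beta), taking \lambda sufficiently large recovers best subset selection, which is how the threads of penalization and exact sparsity meet.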

Identifier: oai:union.ndltd.org:MIT/oai:dspace.mit.edu:1721.1/119279
Date: January 2018
Creators: Copenhaver, Martin Steven
Contributors: Dimitris J. Bertsimas; Massachusetts Institute of Technology, Operations Research Center
Publisher: Massachusetts Institute of Technology
Source Sets: M.I.T. Theses and Dissertations
Language: English
Detected Language: English
Type: Thesis
Format: 230 pages, application/pdf
Rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582