Deep Networks Through the Lens of Low-Dimensional Structure: Towards Mathematical and Computational Principles for Nonlinear Data

Across scientific and engineering disciplines, the algorithmic pipeline for processing and understanding data increasingly revolves around deep learning, a data-driven approach to learning features for tasks that uses high-capacity, compositionally-structured models, large datasets, and scalable gradient-based optimization. At the same time, modern deep learning models are resource-inefficient, requiring up to trillions of trainable parameters to succeed on tasks, and their predictions are notoriously susceptible to perceptually indistinguishable changes to the input, limiting their use in applications where reliability and safety are critical.

Fortunately, data in scientific and engineering applications are not generic, but structured---they possess low-dimensional nonlinear structure that enables statistical learning in spite of their inherent high dimensionality---and studying the interactions between deep learning models, training algorithms, and structured data represents a promising approach to understanding practical issues such as resource efficiency, robustness, and invariance in deep learning. To begin to realize this program, it is necessary to have mathematical model problems that capture both the nonlinear structures of data in deep learning applications and the features of practical deep learning pipelines; it is equally necessary to ask how to translate mathematical insights into practical progress on the aforementioned issues.

We address these considerations in this thesis. First, we pose and study the multiple manifold problem, a binary classification task modeled on applications in computer vision, in which a deep fully-connected neural network is trained to separate two low-dimensional submanifolds of the unit sphere. We provide an analysis of the one-dimensional case, proving for a rather general family of configurations that when the network depth is large relative to certain geometric and statistical properties of the data, the network width grows as a sufficiently large polynomial in the depth, and the number of samples from the manifolds is polynomial in the depth, then randomly-initialized gradient descent rapidly learns to classify the two manifolds perfectly with high probability.
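To make the setting concrete, the following minimal sketch (in PyTorch) instantiates a one-dimensional version of the problem: two disjoint curves on the unit sphere in R^n, and a randomly-initialized deep fully-connected ReLU network trained by full-batch gradient descent to separate them. All constants here (ambient dimension, depth, width, offsets, learning rate) are illustrative assumptions rather than the configurations the analysis covers.

    # Sketch: one-dimensional multiple manifold problem. All constants are
    # illustrative assumptions, not the thesis's analyzed configuration.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    n, depth, width = 8, 10, 256  # ambient dimension, network depth and width

    def curve_on_sphere(t, offset):
        # A circle (a one-dimensional manifold) embedded in R^n and radially
        # projected onto the unit sphere; `offset` separates the two classes.
        x = torch.zeros(len(t), n)
        x[:, 0] = torch.cos(t)
        x[:, 1] = torch.sin(t)
        x[:, 2] = offset
        return x / x.norm(dim=1, keepdim=True)

    t = torch.linspace(0.0, 2.0 * torch.pi, 200)
    X = torch.cat([curve_on_sphere(t, 0.5), curve_on_sphere(t, -0.5)])
    y = torch.cat([torch.ones(len(t)), -torch.ones(len(t))])  # labels in {+1, -1}

    layers = [nn.Linear(n, width), nn.ReLU()]
    for _ in range(depth - 2):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers += [nn.Linear(width, 1)]
    net = nn.Sequential(*layers)  # default (random) initialization

    opt = torch.optim.SGD(net.parameters(), lr=1e-3)
    for step in range(2000):  # full batch, so plain gradient descent
        opt.zero_grad()
        loss = nn.functional.softplus(-y * net(X).squeeze()).mean()  # logistic loss
        loss.backward()
        opt.step()

    acc = (net(X).squeeze().sign() == y).float().mean()
    print(f"training accuracy: {acc.item():.3f}")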

Our analysis demonstrates concrete benefits of depth and width in the context of a practically-motivated model problem: the depth acts as a fitting resource, with larger depths corresponding to smoother networks that can more readily separate the class manifolds, and the width acts as a statistical resource, enabling concentration of the randomly-initialized network and its gradients. Next, we turn our attention to the design of specific network architectures for achieving invariance to nuisance transformations in vision systems. Existing approaches to invariance scale exponentially with the dimension of the family of transformations, making them unable to cope with natural variabilities in visual data such as changes in pose and perspective.

We identify a common limitation of these approaches---they rely on sampling to traverse the high-dimensional space of transformations---and propose a new computational primitive for building invariant networks based instead on optimization, which in many scenarios provides a provably more efficient method for high-dimensional exploration than sampling. We provide empirical and theoretical corroboration of the efficiency gains and soundness of our proposed method, and demonstrate its utility in constructing an efficient invariant network for a simple hierarchical object detection task when combined with unrolled optimization. Together, the results in this thesis establish the first end-to-end theoretical guarantees for training deep neural networks on data with nonlinear low-dimensional structure, and provide a methodology for translating these insights into the design of practical neural network architectures with efficiency and invariance benefits.
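As a concrete illustration of the sampling-versus-optimization tradeoff behind the proposed primitive, the following sketch (in PyTorch) recovers an unknown cyclic shift of a smooth one-dimensional signal in two ways: by evaluating a registration objective on a dense grid of candidate shifts, and by gradient descent on the shift parameter. The signal, objective, and hyperparameters are illustrative assumptions, not the thesis's construction, where the inner optimization is instead unrolled into network layers.

    # Toy contrast between sampling and optimization for handling a nuisance
    # transformation: recovering an unknown cyclic shift of a smooth 1-D
    # signal. Objective, signals, and step sizes are illustrative assumptions.
    import torch

    def shift(x, s):
        # Cyclically shift a 1-D signal by a real-valued amount s, with linear
        # interpolation so the result is differentiable in s almost everywhere.
        m = x.shape[0]
        idx = (torch.arange(m, dtype=torch.float64) - s) % m
        lo = idx.floor().long() % m
        frac = idx - idx.floor()
        return (1 - frac) * x[lo] + frac * x[(lo + 1) % m]

    m = 64
    u = torch.arange(m, dtype=torch.float64)
    template = torch.sin(2 * torch.pi * u / m) + 0.5 * torch.cos(4 * torch.pi * u / m)
    observed = shift(template, 17.3)  # signal under an unknown transformation

    def objective(s):  # registration error when undoing a candidate shift s
        return ((shift(observed, -s) - template) ** 2).sum()

    # Sampling: traverse the transformation space on a dense grid.
    grid = torch.linspace(0.0, float(m), 641, dtype=torch.float64)
    s_grid = grid[torch.argmin(torch.stack([objective(g) for g in grid]))]

    # Optimization: descend the same objective from a coarse initial guess.
    s = torch.tensor(16.0, dtype=torch.float64, requires_grad=True)
    opt = torch.optim.SGD([s], lr=0.05)
    for _ in range(200):
        opt.zero_grad()
        objective(s).backward()
        opt.step()

    print(f"grid estimate: {s_grid.item():.2f}, optimized estimate: {s.item():.2f}")

For a d-parameter family of transformations, the grid in the sampling branch needs a number of evaluations exponential in d, whereas the descent loop refines a coarse estimate at a cost that need not scale this way; this is the efficiency gain that the optimization-based primitive is designed to exploit.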

Identifier oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/p2n1-kp52
Date January 2022
Creators Buchanan, Sam
Source Sets Columbia University
Language English
Detected Language English
Type Theses
