Understanding Deep Neural Networks and other Nonparametric Methods in Machine Learning

Yixi Xu (6668192), 02 August 2019
It is a central problem in both statistics and computer science to understand the theoretical foundations of machine learning, especially deep learning. During the past decade, deep learning has achieved remarkable success in solving many complex artificial intelligence tasks. The aim of this dissertation is to understand deep neural networks (DNNs) and other nonparametric methods in machine learning. In particular, three machine learning models are studied: weight-normalized DNNs, sparse DNNs, and the compositional nonparametric model.

The first chapter presents a general framework for norm-based capacity control of L_{p,q} weight-normalized DNNs. We establish an upper bound on the Rademacher complexity of this family. In particular, with L_{1,∞} normalization, we obtain width-independent capacity control that depends on the depth only through a square-root term. Furthermore, if the activation functions are anti-symmetric, the bound on the Rademacher complexity is independent of both the width and the depth up to a log factor. In addition, we study weight-normalized deep neural networks with rectified linear units (ReLU) in terms of their functional characterization and approximation properties. In particular, for an L_{1,∞} weight-normalized ReLU network, the approximation error can be controlled by the L_1 norm of the output layer.

In the second chapter, we study L_{1,∞} weight normalization for deep neural networks with bias neurons to achieve a sparse architecture. We establish generalization error bounds for both regression and classification under L_{1,∞} weight normalization, and show that the upper bounds are independent of the network width and depend on the network depth k only through a k^{1/2} term. These results provide theoretical justification for using such weight normalization to reduce the generalization error. We also develop an easily implemented gradient projection descent algorithm to obtain a sparse neural network in practice, and we perform various experiments to validate our theory and demonstrate the effectiveness of the resulting approach.

In the third chapter, we propose a compositional nonparametric method in which a model is expressed as a labeled binary tree of 2k+1 nodes, where each node is either a summation, a multiplication, or the application of one of q basis functions to one of m_1 covariates. We show that O(k log(m_1 q) + log(k!)) samples are sufficient to recover a labeled binary tree from a given dataset, while Ω(k log(m_1 q) − log(k!)) samples are necessary. We further propose a greedy algorithm for regression and validate our theoretical findings through synthetic experiments.
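The abstract does not spell out the second chapter's gradient projection descent step. As a minimal sketch of the general idea, assuming the L_{1,∞} constraint is enforced by projecting each row of a layer's weight matrix onto an L_1 ball after every gradient update (an assumption, not the dissertation's stated procedure), a standard L_1-ball projection (Duchi et al., 2008) could look like this; the soft-thresholding it performs is also what produces sparse weights:

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto the L1 ball of the given radius
    (Duchi et al., 2008). Returns v unchanged if it is already inside."""
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]          # magnitudes, sorted descending
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * j > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def project_l1_inf(W, radius=1.0):
    """Project each row of W onto the L1 ball, so the L_{1,inf} norm
    (max over rows of the row-wise L1 norm) is at most `radius`.
    The soft-thresholding zeroes small weights, giving sparse rows."""
    return np.stack([project_l1_ball(row, radius) for row in W])

# Illustrative projected-gradient step on one layer (hypothetical names):
# W -= learning_rate * grad_W
# W = project_l1_inf(W, radius=1.0)
```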

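For the third chapter's model class, the sketch below shows one way a labeled binary tree with summation and multiplication internal nodes and basis-function leaves could be represented and evaluated; the Node class, its field names, and the example function are illustrative assumptions, not the dissertation's code:

```python
import numpy as np

class Node:
    """A node of the labeled binary tree: internal nodes combine their two
    children by summation or multiplication; a leaf applies one of the q
    basis functions to one of the m_1 covariates."""
    def __init__(self, op, left=None, right=None, basis=None, covariate=None):
        self.op = op                    # "sum", "prod", or "leaf"
        self.left, self.right = left, right
        self.basis = basis              # callable basis function (leaf only)
        self.covariate = covariate      # covariate index (leaf only)

    def evaluate(self, x):
        """Evaluate the compositional model at covariate vector x."""
        if self.op == "leaf":
            return self.basis(x[self.covariate])
        a, b = self.left.evaluate(x), self.right.evaluate(x)
        return a + b if self.op == "sum" else a * b

# Example: f(x) = sin(x_0) * (x_1^2 + cos(x_2)), a tree with k = 2 internal
# nodes and 2k + 1 = 5 nodes in total.
f = Node("prod",
         left=Node("leaf", basis=np.sin, covariate=0),
         right=Node("sum",
                    left=Node("leaf", basis=np.square, covariate=1),
                    right=Node("leaf", basis=np.cos, covariate=2)))
print(f.evaluate(np.array([0.5, 2.0, 1.0])))
```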