1. A SYSTEMATIC STUDY OF SPARSE DEEP LEARNING WITH DIFFERENT PENALTIES
Xinlin Tao (13143465), 25 April 2023
Deep learning has been the driving force behind many successful data science achievements. However, the deep neural network (DNN) that forms the basis of deep learning is often over-parameterized, leading to training, prediction, and interpretation challenges. To address this issue, it is common practice to apply an appropriate penalty to each connection weight, limiting its magnitude. From a Bayesian perspective, this approach is equivalent to imposing a prior distribution on each connection weight. This project offers a systematic investigation into the selection of the penalty function or prior distribution. Specifically, under the general theoretical framework of posterior consistency, we prove that consistent sparse deep learning can be achieved with a variety of penalty functions or prior distributions. Examples include amenable regularization penalties (such as MCP and SCAD), spike-and-slab priors (such as the mixture Gaussian distribution and the mixture Laplace distribution), and polynomially decayed priors (such as the Student-t distribution). Our theory is supported by numerical results.
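
For concreteness, the following minimal NumPy sketch writes out standard textbook forms of the penalty and prior families named above (MCP, SCAD, a two-component Gaussian spike-and-slab, and a Student-t prior). The parameter values (lam, gamma, a, mix, sigma0, sigma1, nu, scale) are illustrative defaults, not values used in the thesis.

```python
# Minimal sketch (not from the thesis) of the penalty / negative-log-prior
# families mentioned in the abstract, applied elementwise to a weight array.
import numpy as np

def mcp_penalty(w, lam=0.1, gamma=3.0):
    """Minimax concave penalty (MCP)."""
    aw = np.abs(w)
    inner = lam * aw - aw**2 / (2.0 * gamma)        # region |w| <= gamma * lam
    outer = 0.5 * gamma * lam**2                    # constant tail
    return np.where(aw <= gamma * lam, inner, outer)

def scad_penalty(w, lam=0.1, a=3.7):
    """Smoothly clipped absolute deviation (SCAD) penalty."""
    aw = np.abs(w)
    p1 = lam * aw                                             # |w| <= lam
    p2 = (2 * a * lam * aw - aw**2 - lam**2) / (2 * (a - 1))  # lam < |w| <= a * lam
    p3 = lam**2 * (a + 1) / 2.0                               # |w| > a * lam
    return np.where(aw <= lam, p1, np.where(aw <= a * lam, p2, p3))

def mixture_gaussian_neg_log_prior(w, mix=0.1, sigma0=1e-3, sigma1=1.0):
    """Negative log density of a two-component Gaussian spike-and-slab prior."""
    spike = (1 - mix) * np.exp(-0.5 * (w / sigma0) ** 2) / (sigma0 * np.sqrt(2 * np.pi))
    slab = mix * np.exp(-0.5 * (w / sigma1) ** 2) / (sigma1 * np.sqrt(2 * np.pi))
    return -np.log(spike + slab)

def student_t_neg_log_prior(w, nu=3.0, scale=1.0):
    """Negative log density (up to an additive constant) of a Student-t prior,
    an example of a polynomially decaying prior."""
    return 0.5 * (nu + 1) * np.log1p((w / scale) ** 2 / nu)

# Each function returns an elementwise penalty; summing it over all connection
# weights and adding it to the training loss gives a penalized objective.
w = np.linspace(-2, 2, 5)
print(mcp_penalty(w), scad_penalty(w), mixture_gaussian_neg_log_prior(w), sep="\n")
```
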
2. Sparse Deep Learning and Stochastic Neural Network
Yan Sun (12425889), 13 May 2022
Deep learning has achieved state-of-the-art performance on many machine learning tasks, but the deep neural network (DNN) model still suffers from a few issues. Over-parameterized neural networks generally have a better optimization landscape, but they are computationally expensive, hard to interpret, and usually cannot correctly quantify prediction uncertainty. On the other hand, small DNN models can fall into local traps and are hard to optimize. In this dissertation, we tackle these issues from two directions: sparse deep learning and stochastic neural networks.

For sparse deep learning, we propose a Bayesian neural network (BNN) model with a mixture-of-normals prior. Theoretically, we establish posterior consistency and structure selection consistency, which ensure that the sparse DNN model can be consistently identified. We also demonstrate asymptotic normality of the prediction, which ensures that the prediction uncertainty is correctly quantified. Computationally, we propose a prior annealing approach to optimize the posterior of the BNN. The proposed methods have computational complexity similar to that of standard stochastic gradient descent for training DNNs. Experimental results show that our model performs well on high-dimensional variable selection as well as neural network pruning.
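
The abstract gives no implementation details, so the PyTorch sketch below is only a guess at the shape of the computation it describes: a mixture-of-normals negative log prior added to an ordinary SGD loop, with the prior's influence annealed in over training. The function names, the linear annealing schedule, and the hyperparameters (mix, sigma0, sigma1) are assumptions, not the dissertation's algorithm.

```python
# Schematic sketch (assumptions, not the dissertation's code) of folding a
# mixture-of-normals prior into a standard SGD training loop, with the prior
# annealed in over training.  The model and data loader are placeholders.
import math
import torch

def mixture_normal_neg_log_prior(w, mix=0.1, sigma0=1e-4, sigma1=0.1):
    # Negative log of (1 - mix) * N(0, sigma0^2) + mix * N(0, sigma1^2),
    # elementwise and up to an additive constant.
    log_spike = math.log(1 - mix) - 0.5 * (w / sigma0) ** 2 - math.log(sigma0)
    log_slab = math.log(mix) - 0.5 * (w / sigma1) ** 2 - math.log(sigma1)
    return -torch.logsumexp(torch.stack([log_spike, log_slab]), dim=0)

def train(model, loader, n_train, epochs=100, lr=1e-3):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for epoch in range(epochs):
        # Illustrative linear warm-up of the prior's weight ("annealing").
        anneal = min(1.0, epoch / (0.5 * epochs))
        for x, y in loader:
            opt.zero_grad()
            nll = torch.nn.functional.cross_entropy(model(x), y)
            prior = sum(mixture_normal_neg_log_prior(p).sum() for p in model.parameters())
            # Prior scaled by 1/n_train so minibatch gradients target a
            # per-example posterior objective; cost per step stays SGD-like.
            loss = nll + anneal * prior / n_train
            loss.backward()
            opt.step()
    return model
```
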
For stochastic neural networks, we propose the Kernel-Expanded Stochastic Neural Network, or K-StoNet model for short. We reformulate the DNN as a latent variable model and incorporate support vector regression (SVR) as the first hidden layer. The latent variable formulation breaks training into a series of convex optimization problems, and the model can be easily trained using the imputation-regularized optimization (IRO) algorithm. We provide theoretical guarantees for the convergence of the algorithm and for prediction uncertainty quantification. Experimental results show that the proposed model achieves good prediction performance and provides correct confidence regions for its predictions.
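
As a rough illustration of the alternating structure an IRO-style fit could take for a one-hidden-layer network with an SVR first layer, here is a heavily simplified Python sketch. Everything in it is assumed (the ridge output layer, the crude imputation step, the hyperparameters); in particular, a faithful I-step would draw the latent layer from its conditional distribution given the data and the current parameters, rather than using the noisy heuristic below.

```python
# Structural sketch of an imputation-regularized optimization (IRO) loop for a
# one-hidden-layer stochastic network whose first layer is support vector
# regression.  This is a simplification for illustration, not the paper's code.
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import Ridge

def iro_sketch(X, y, n_hidden=5, n_iters=10, noise=0.1, step=0.1, seed=0):
    """Alternate imputation of the latent layer with convex per-layer refits
    (SVR for the first layer, ridge regression for the output layer)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    H = rng.normal(size=(n, n_hidden))                 # initial latent values
    svrs = [SVR(kernel="rbf").fit(X, H[:, j]) for j in range(n_hidden)]
    out = Ridge(alpha=1.0).fit(H, y)
    for _ in range(n_iters):
        # I-step (crude stand-in): start from the first layer's prediction,
        # nudge it toward explaining the output residual, and add noise.
        # A faithful I-step would sample H | X, y, current parameters.
        H_mean = np.column_stack([m.predict(X) for m in svrs])
        residual = y - out.predict(H_mean)
        H = H_mean + step * residual[:, None] * out.coef_[None, :]
        H += noise * rng.normal(size=H.shape)
        # RO-step: with H fixed, each layer is a separate convex problem.
        svrs = [SVR(kernel="rbf").fit(X, H[:, j]) for j in range(n_hidden)]
        out = Ridge(alpha=1.0).fit(np.column_stack([m.predict(X) for m in svrs]), y)
    return svrs, out
```
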