Deep learning has been the driving force behind many successful data science achievements. However, the deep neural network (DNN) that forms the basis of deep learning is often over-parameterized, leading to training, prediction, and interpretation challenges. To address this issue, it is common practice to apply an appropriate penalty to each connection weight, limiting its magnitude. This approach is equivalent to imposing a prior distribution on each connection weight from a Bayesian perspective. This project offers a systematic investigation into the selection of the penalty function or prior distribution. Specifically, under the general theoretical framework of posterior consistency, we prove that consistent sparse deep learning can be achieved with a variety of penalty functions or prior distributions. Examples include amenable regularization penalties (such as MCP and SCAD), spike-and-slab priors (such as the mixture Gaussian distribution and the mixture Laplace distribution), and polynomially decaying priors (such as the Student-t distribution). Our theory is supported by numerical results.
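As a rough illustration of what "applying a penalty to each connection weight" looks like in practice, the following sketch adds the MCP penalty (one of the amenable penalties named above) to a standard DNN training loss in PyTorch. It is not the thesis code: the network architecture, data, and the hyperparameters lam and gamma are arbitrary illustrative choices.

<pre>
import torch
import torch.nn as nn

def mcp_penalty(w, lam=0.01, gamma=3.0):
    """MCP on each weight: lam*|w| - w^2/(2*gamma) when |w| <= gamma*lam,
    and the constant gamma*lam^2/2 otherwise, so large weights are not over-shrunk."""
    absw = w.abs()
    inner = lam * absw - absw.pow(2) / (2.0 * gamma)
    outer = torch.full_like(absw, 0.5 * gamma * lam ** 2)
    return torch.where(absw <= gamma * lam, inner, outer).sum()

# Hypothetical small network and synthetic regression data.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(128, 20), torch.randn(128, 1)

for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    # Penalize every connection weight; from a Bayesian viewpoint this term
    # plays the role of a negative log-prior on the weights.
    loss = loss + sum(mcp_penalty(p) for p in model.parameters())
    loss.backward()
    opt.step()
</pre>

Swapping mcp_penalty for a SCAD penalty or the negative log-density of a spike-and-slab or Student-t prior changes only the penalty function in this loop, which mirrors the abstract's point that several such choices can yield consistent sparse deep learning.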
Identifier | oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/22693573 |
Date | 25 April 2023 |
Creators | Xinlin Tao (13143465) |
Source Sets | Purdue University |
Detected Language | English |
Type | Text, Thesis |
Rights | CC BY 4.0 |
Relation | https://figshare.com/articles/thesis/A_SYSTEMATIC_STUDY_OF_SPARSE_DEEP_LEARNING_WITH_DIFFERENT_PENALTIES/22693573 |