A SYSTEMATIC STUDY OF SPARSE DEEP LEARNING WITH DIFFERENT PENALTIES
Xinlin Tao (13143465), 25 April 2023
Deep learning has been the driving force behind many successful data science achievements. However, the deep neural network (DNN) that forms the basis of deep learning is often over-parameterized, leading to challenges in training, prediction, and interpretation. To address this issue, it is common practice to apply an appropriate penalty to each connection weight, limiting its magnitude; from a Bayesian perspective, this is equivalent to imposing a prior distribution on each connection weight. This project offers a systematic investigation into the selection of the penalty function or prior distribution. Specifically, under the general theoretical framework of posterior consistency, we prove that consistent sparse deep learning can be achieved with a variety of penalty functions or prior distributions. Examples include amenable regularization penalties (such as MCP and SCAD), spike-and-slab priors (such as the mixture Gaussian distribution and the mixture Laplace distribution), and polynomially decaying priors (such as the Student-t distribution). Our theory is supported by numerical results.
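To make the penalization approach concrete, below is a minimal sketch (not the thesis code) of how one of the named penalties, MCP, could be applied to every connection weight of a small DNN during training. The network architecture, the toy data, and the values of `lam` and `gamma` are illustrative assumptions; only the MCP definition itself, p(w) = λ|w| − w²/(2γ) for |w| ≤ γλ and γλ²/2 otherwise, follows the standard formulation.

```python
# Hedged sketch: penalized DNN training with an MCP penalty on each weight.
# Architecture, data, and hyperparameters (lam, gamma) are illustrative only.
import torch
import torch.nn as nn

def mcp_penalty(w: torch.Tensor, lam: float = 0.01, gamma: float = 3.0) -> torch.Tensor:
    """Minimax concave penalty (MCP), applied elementwise and summed."""
    absw = w.abs()
    inner = lam * absw - w.pow(2) / (2.0 * gamma)        # region |w| <= gamma * lam
    outer = torch.full_like(w, 0.5 * gamma * lam ** 2)   # constant tail region
    return torch.where(absw <= gamma * lam, inner, outer).sum()

# Toy over-parameterized network and synthetic regression data.
model = nn.Sequential(nn.Linear(20, 50), nn.ReLU(), nn.Linear(50, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x, y = torch.randn(128, 20), torch.randn(128, 1)

for _ in range(100):
    opt.zero_grad()
    # Penalize connection weights only (biases are left unpenalized here).
    penalty = sum(mcp_penalty(p) for n, p in model.named_parameters() if "weight" in n)
    loss = loss_fn(model(x), y) + penalty
    loss.backward()
    opt.step()
```

Because the MCP term flattens out for |w| > γλ, large weights incur a bounded penalty while small weights are shrunk toward zero, which is what makes such amenable penalties attractive for sparsifying a DNN without heavily biasing its large, informative weights.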