<p>Deep learning plays an important role in various disciplines, such as autonomous driving, information technology, manufacturing, medical studies, and financial studies. In the past decade, there have been fruitful studies on deep learning in which the training and testing data are assumed to follow the same distribution. Recent studies reveal that these dedicated models are vulnerable to adversarial attacks, i.e., the predicted label may change even if the testing input is perturbed in a way that is imperceptible to humans. However, most existing studies aim to develop computationally efficient adversarial learning algorithms without a thorough understanding of the statistical properties of these algorithms. This dissertation aims to provide a theoretical understanding of adversarial training and to identify potential improvements in this area of research. </p>
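<p>For illustration, the following is a minimal sketch of how such an imperceptible perturbation can be crafted with a one-step gradient-sign (FGSM-style) attack. It assumes PyTorch; the model, inputs x, and labels y are hypothetical placeholders rather than objects defined in this dissertation.</p>
<pre>
# Minimal FGSM-style sketch (assumed PyTorch; model, x, y are hypothetical placeholders).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    """Craft an L_inf-bounded perturbation of size eps around input x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One gradient-sign step of size eps, then clamp back to the valid input range.
    x_adv = x + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
</pre>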
<p>The first part of this dissertation focuses on the algorithmic stability of adversarial training. We reveal that the algorithmic stability of the vanilla adversarial training method is sub-optimal, and we study the effectiveness of a simple noise injection method. Noise injection improves stability and, at the same time, does not deteriorate the consistency of adversarial training.</p>
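<p>As a hedged illustration of what a noise-injected adversarial training step could look like (the dissertation's exact injection scheme may differ), the sketch below adds small Gaussian noise to the inputs before crafting the attack; pgd_attack is an assumed helper routine, not an API defined here.</p>
<pre>
# Hedged sketch of a noise-injected adversarial training step; the exact injection
# scheme in the dissertation may differ. pgd_attack is a hypothetical attack helper.
import torch
import torch.nn.functional as F

def noisy_adv_training_step(model, optimizer, x, y, eps=0.03, sigma=0.01):
    # Inject small Gaussian noise into the inputs before crafting the attack.
    x_noisy = x + sigma * torch.randn_like(x)
    x_adv = pgd_attack(model, x_noisy, y, eps=eps)   # assumed attack helper
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
</pre>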
<p>The second part of this dissertation reveals a phase transition phenomenon in adversarial training. As the attack strength increases, the training trajectory of adversarial training deviates from its natural (clean-training) counterpart. Consequently, various properties of adversarial training differ from those of clean training, and adapting the training configuration and the neural network architecture is essential for improving adversarial training.</p>
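<p>The standard min-max formulation below (written in generic notation, not necessarily the dissertation's own) makes the role of the attack strength explicit: setting the radius to zero recovers clean training, so increasing it is what drives the training trajectory away from its natural counterpart.</p>
<pre>
% Generic min-max adversarial training objective; \epsilon is the attack strength,
% and \epsilon = 0 reduces to standard (clean) training.
\min_{\theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[ \max_{\|\delta\|_{\infty} \le \epsilon} \ell\big(f_{\theta}(x+\delta),\, y\big) \Big]
</pre>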
<p>The last part of this dissertation focuses on how artificially generated data improve adversarial training. It is observed that utilizing synthetic data improves adversarial robustness, even if the data are generated from the original training data, i.e., no extra information is introduced. We develop a theory to explain the reason behind this observation and propose further adaptations to better utilize the generated data.</p>
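<p>A hedged sketch of one way to fold such synthetic data into adversarial training is given below: generated inputs are pseudo-labeled and mixed into each batch before the attack is applied. The helpers generator.sample and pgd_attack, and the mixing ratio, are illustrative assumptions rather than the dissertation's exact procedure.</p>
<pre>
# Hedged sketch of mixing synthetic data into adversarial training batches.
# generator.sample and pgd_attack are hypothetical helpers; the dissertation's
# exact generation and weighting scheme may differ.
import torch
import torch.nn.functional as F

def mixed_batch_step(model, optimizer, x_real, y_real, generator, eps=0.03, ratio=0.5):
    n_syn = int(ratio * x_real.size(0))
    x_syn = generator.sample(n_syn)                 # data generated from the training set
    y_syn = model(x_syn).argmax(dim=1).detach()     # pseudo-labels from the current model
    x = torch.cat([x_real, x_syn])
    y = torch.cat([y_real, y_syn])
    x_adv = pgd_attack(model, x, y, eps=eps)        # assumed attack helper
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
</pre>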
Identifier | oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/21585801 |
Date | 21 November 2022 |
Creators | Yue Xing (14142297) |
Source Sets | Purdue University |
Detected Language | English |
Type | Text, Thesis |
Rights | CC BY 4.0 |
Relation | https://figshare.com/articles/thesis/Statistical_Theory_for_Adversarial_Robustness_in_Machine_Learning/21585801 |