Robust Deep Learning Under Application Induced Data Distortions

Deep learning has been increasingly adopted in a multitude of settings, yet its strong performance relies on processing data during inference that is in-distribution with its training data. Input data during deployment, however, is not guaranteed to be in-distribution with the model's training data and can often be distorted, either intentionally (e.g., by an adversary) or unintentionally (e.g., by a sensor defect), leading to significant performance degradation. In this dissertation, we develop algorithms for a variety of applications to improve the performance of deep learning models in the presence of distorted data. We begin by designing feature engineering methodologies that increase classification performance in noisy environments. Here, we demonstrate the efficacy of our proposed algorithms on two target detection tasks and show that our framework outperforms a variety of state-of-the-art baselines. Next, we develop mitigation algorithms to improve the performance of deep learning in the presence of adversarial attacks and nonlinear signal distortions. In this context, we demonstrate the effectiveness of our methods on a variety of wireless communications tasks, including automatic modulation classification, power allocation in massive MIMO networks, and signal detection. Finally, we develop an uncertainty quantification framework, which produces distributional estimates, as opposed to point predictions, from deep learning models in order to characterize samples with uncertain predictions as well as samples that are out-of-distribution from the model's training data. Our uncertainty quantification framework is evaluated on a hyperspectral image target detection task as well as on a counter-unmanned aircraft systems (cUAS) model. Ultimately, our proposed algorithms improve the performance of deep learning in several environments in which the inference data has been distorted out-of-distribution from the training data.
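The abstract does not specify how the distributional estimates are produced; as a minimal sketch of the general idea, the snippet below uses Monte Carlo dropout, a standard stand-in rather than the author's actual method, to turn a point classifier into one that yields a predictive distribution whose entropy flags uncertain or out-of-distribution inputs. The `Classifier` network, the `mc_dropout_predict` helper, and all dimensions are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical classifier; the dropout layer is what makes MC-dropout sampling possible.
class Classifier(nn.Module):
    def __init__(self, in_dim: int = 128, n_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Dropout(p=0.5),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Average softmax outputs over stochastic forward passes with dropout left on."""
    model.train()  # keeps dropout layers active at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean = probs.mean(dim=0)  # distributional estimate of class probabilities
    # Predictive entropy: higher values indicate uncertain or out-of-distribution inputs.
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy

model = Classifier()
x = torch.randn(4, 128)  # stand-in batch of feature vectors
mean_probs, uncertainty = mc_dropout_predict(model, x)
```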

DOI: 10.25394/pgs.21588633.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/21588633
Date: 21 November 2022
Creators: Rajeev Sahay (10526555)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY 4.0
Relation: https://figshare.com/articles/thesis/Robust_Deep_Learning_Under_Application_Induced_Data_Distortions/21588633