Evaluation under Real-world Distribution Shifts

Recent advances in empirical and certified robustness promise reliable, deployable Deep Neural Networks (DNNs). However, most evaluations of DNN robustness test models on images drawn from the same distribution they were trained on. In real-world scenarios, DNNs may encounter dynamic environments with significant distribution shifts. This thesis investigates the interplay between empirical and certified adversarial robustness and domain generalization. As a first step, we train robust models on multiple source domains and evaluate their accuracy and robustness on an unseen target domain. Our findings reveal that: (1) both empirical and certified robustness generalize to unseen domains, and (2) the degree of generalization does not correlate strongly with the visual similarity of source and target domains, as measured by the Fréchet Inception Distance (FID). We further extend our study to a real-world medical application, where we demonstrate that adversarial augmentation significantly enhances robustness generalization while minimally affecting accuracy on clean data. This research underscores the importance of evaluating DNNs under real-world distribution shifts and highlights the potential of adversarial augmentation for improving robustness in practical applications.
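For reference, the Fréchet Inception Distance cited above compares Gaussian fits to Inception-v3 feature embeddings of the two domains (Heusel et al., 2017). With mean and covariance (\mu_s, \Sigma_s) for the source domain and (\mu_t, \Sigma_t) for the target domain:

    FID(s, t) = \lVert \mu_s - \mu_t \rVert_2^2
              + \mathrm{Tr}\!\left( \Sigma_s + \Sigma_t - 2\,(\Sigma_s \Sigma_t)^{1/2} \right)

A lower FID indicates visually more similar domains; the second finding above is that this similarity is a weak predictor of how well robustness transfers.

The abstract does not spell out the training recipe, so the following is only a minimal sketch of one common form of adversarial augmentation, PGD-based adversarial training in PyTorch; the model, optimizer, and the hyperparameters eps, alpha, and steps are illustrative assumptions, not values taken from the thesis.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # Projected Gradient Descent: search for a worst-case perturbation
        # inside an L-infinity ball of radius eps (inputs assumed in [0, 1]).
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()  # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)      # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)
        return x_adv.detach()

    def adversarial_training_step(model, optimizer, x, y):
        # One optimizer step on adversarially augmented inputs.
        model.eval()                      # freeze batch-norm stats while attacking
        x_adv = pgd_attack(model, x, y)
        model.train()
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

In the domain-generalization setup described above, each batch (x, y) would be drawn from the pooled source domains, while accuracy and robustness are evaluated on the held-out target domain. Certified robustness would be assessed separately (e.g., via randomized smoothing); this sketch covers only the empirical side.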

Identifier: oai:union.ndltd.org:kaust.edu.sa/oai:repository.kaust.edu.sa:10754/692920
Date: 07 1900
Creators: Alhamoud, Kumail
Contributors: Ghanem, Bernard; Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division; Gao, Xin; Elhoseiny, Mohamed
Source Sets: King Abdullah University of Science and Technology
Language: English
Detected Language: English
Type: Thesis
Relation: N/A