The primary focus of this doctoral dissertation is the safety and robustness of deep models. Our objective is to analyze and introduce novel methodologies for learning robust representations under diverse conditions. Deep neural networks (DNNs) have become fundamental components of recent advances across various tasks, including image recognition, semantic segmentation, and object detection. Representation learning is pivotal to the efficacy of DNNs: it involves extracting meaningful features from data, for example via convolutional neural networks (CNNs) applied to image data. In real-world applications, these features must remain robust under various adversarial conditions, which motivates robust representation learning. By acquiring robust representations, DNNs can generalize better to new data, mitigate the impact of label noise and domain shifts, and strengthen their resilience against external threats such as backdoor attacks. Consequently, this dissertation explores the implications of robust representation learning in three principal areas: i) Backdoor Attack, ii) Backdoor Defense, and iii) Noisy Labels.
First, we study backdoor attack creation and detection from different perspectives. A backdoor attack poses an AI safety and robustness threat in which an adversary inserts malicious behavior into a DNN by altering the training data. Second, we aim to remove the backdoor from a DNN using two types of defense techniques: i) training-time defense and ii) test-time defense. Training-time defense prevents the model from learning the backdoor during training, whereas test-time defense purifies the backdoored model after the backdoor has already been inserted. Third, we explore noisy label learning (NLL) from two perspectives: a) offline NLL and b) online continual NLL. Representation learning under noisy labels is severely degraded by memorization of the noisy labels, which leads to poor generalization. We counter this with uniform sampling and contrastive-learning-based representation learning, and we also evaluate the algorithm's efficiency in an online continual learning setup. Furthermore, we show how representations learned in one domain transfer and adapt to another, e.g., source-free domain adaptation (SFDA). We study the impact of noisy labels in the SFDA setting and propose a novel algorithm that achieves state-of-the-art (SOTA) performance.
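To make the threat model concrete, the sketch below shows a BadNets-style poisoning step, the canonical way training data is altered to implant a backdoor. This is a minimal illustration, not the specific attack studied in this dissertation; the trigger pattern, poison rate, and target label are assumed placeholder values.

```python
import numpy as np

def poison_dataset(images, labels, target_label=0, poison_rate=0.1, trigger_size=3):
    """BadNets-style poisoning sketch (illustrative values, not the
    dissertation's method): stamp a small white patch (the trigger) onto
    a random subset of training images and flip their labels to the
    attacker's target class.

    Assumes `images` has shape (N, H, W, C) with pixel values in [0, 1].
    """
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = np.random.choice(len(images), n_poison, replace=False)
    # Stamp the trigger in the bottom-right corner of each poisoned image.
    images[idx, -trigger_size:, -trigger_size:, :] = 1.0
    # Relabel the poisoned samples with the attacker-chosen target class.
    labels[idx] = target_label
    return images, labels
```

A model trained on such a poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger appears. Training-time defenses try to keep the model from memorizing this shortcut during training, while test-time defenses attempt to purify a model into which the backdoor has already been inserted.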
Identifier | oai:union.ndltd.org:ucf.edu/oai:stars.library.ucf.edu:etd2023-1111
Date | 01 January 2023
Creators | Karim, Md Nazmul |
Publisher | STARS |
Source Sets | University of Central Florida |
Language | English |
Type | text |
Format | application/pdf |
Source | Graduate Thesis and Dissertation 2023-2024 |