1. On robustness and explainability of deep learning. Le, Hieu, 06 February 2024.
There has been tremendous progress in machine learning, and specifically in deep learning, over the last few decades. However, owing to the inherent nature of deep neural networks, many questions regarding explainability and robustness remain open. In particular, since deep learning models have been shown to be brittle against malicious changes, understanding when these models fail and how to construct models that are more robust against such attacks is of high interest. This work addresses questions of explainability and robustness in deep learning across four topics. First, real-world datasets often contain noise that degrades classification performance, and adversarial noise can be deliberately crafted to alter classification results. Geometric multi-resolution analysis (GMRA) can capture and recover manifolds while preserving their geometric features. We showed that GMRA can be applied to retrieve low-dimensional representations that are more robust to noise and that simplify classification models. Second, I showed that adversarial defense in the image domain can be partially achieved without knowledge of the specific attack method by employing a preprocessing model trained on a denoising task. Next, I tackled the problem of adversarial text generation in the context of real-world applications. I devised a new method of crafting adversarial text using filtered unlabeled data, which is usually more abundant than labeled data; experimental results showed that the new method produces more natural and relevant adversarial texts than current state-of-the-art methods. Lastly, I presented my work on referring expression generation, aiming at a more explainable natural language model. The proposed method decomposes referring expression generation into two subtasks, and experimental results showed that the generated expressions are more comprehensible to human readers. I hope that the approaches proposed here can further our understanding of the explainability and robustness of deep learning models.
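
As a rough illustration of the denoising-based defense described in this abstract, the sketch below prepends a small denoising autoencoder to a frozen image classifier so that inputs are purified before classification, and trains the denoiser only on a clean-vs-noisy reconstruction objective, with no knowledge of any particular attack. This is a minimal PyTorch sketch; the module names, architecture, and hyperparameters are illustrative assumptions, not the thesis's actual setup.

# Minimal sketch (assumed names and hyperparameters), not the thesis's exact architecture.
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Small convolutional denoising autoencoder for 3-channel images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )
    def forward(self, x):
        # Predict a residual and subtract it, a common denoising formulation.
        return x - self.net(x)

class DefendedClassifier(nn.Module):
    """Wraps a pretrained classifier with the denoiser as a preprocessing step."""
    def __init__(self, denoiser: nn.Module, classifier: nn.Module):
        super().__init__()
        self.denoiser = denoiser
        self.classifier = classifier
        for p in self.classifier.parameters():  # the classifier stays frozen
            p.requires_grad = False
    def forward(self, x):
        return self.classifier(self.denoiser(x))

def train_denoiser(denoiser, loader, epochs=10, sigma=0.1, device="cpu"):
    """Train on a denoising objective only; labels and attack details are not needed."""
    opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    denoiser.to(device).train()
    for _ in range(epochs):
        for clean, _ in loader:  # loader is assumed to yield (image, label) batches
            clean = clean.to(device)
            noisy = clean + sigma * torch.randn_like(clean)
            opt.zero_grad()
            loss = loss_fn(denoiser(noisy), clean)
            loss.backward()
            opt.step()
    return denoiser

Because the denoiser is trained only to map corrupted images back to clean ones, this kind of preprocessing is attack-agnostic, which matches the framing in the abstract.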
2. Improving The Robustness of Artificial Neural Networks via Bayesian Approaches. Jun Zhuang (16456041), 30 August 2023.
Artificial neural networks (ANNs) have achieved extraordinary performance in various domains in recent years. However, some studies reveal that ANNs may be vulnerable in three respects: label scarcity, perturbations, and open-set emerging classes. Noisy-label learning and self-supervised learning approaches address label scarcity, but most of this work cannot handle perturbations. Adversarial training methods, topological denoising methods, and mechanism design methods aim to mitigate the negative effects of perturbations. However, adversarial training can barely produce a robust model under extensive label scarcity, topological denoising methods are not efficient on dynamic data structures, and mechanism design methods often depend on heuristic exploration. Detection-based methods are devoted to identifying novel or anomalous instances for downstream tasks; nonetheless, such instances may belong to open-set emerging classes. To address these challenges, we tackle the robustness of ANNs from two directions. First, we propose a series of Bayesian label transition models to improve the robustness of Graph Neural Networks (GNNs) under label scarcity and perturbations in the graph domain. Second, we propose a new non-exhaustive learning model, NE-GM-GAN, to handle both open-set problems and class-imbalance issues in network intrusion datasets. Extensive experiments on several datasets demonstrate that our proposed models effectively improve the robustness of ANNs.
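
To make the label transition idea concrete, the sketch below shows the standard forward-correction use of a class transition matrix T, where T[i, j] approximates P(observed label j | true label i): the model's predictions over true classes are mapped through T before being compared against possibly noisy observed labels. The thesis's Bayesian label transition models for GNNs infer such transitions rather than fixing them; the matrix, class count, and noise rate here are illustrative assumptions, not results from the work.

# Generic forward-correction sketch (assumed values), not the thesis's Bayesian model.
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, observed_labels, T):
    """logits: (N, C) model outputs; observed_labels: (N,); T: (C, C) row-stochastic."""
    p_true = F.softmax(logits, dim=1)   # model's belief over true classes
    p_observed = p_true @ T             # belief over observed (possibly noisy) labels
    return F.nll_loss(torch.log(p_observed + 1e-12), observed_labels)

# Usage sketch with a uniform-noise transition matrix: 10 classes, 20% flip rate.
C, noise = 10, 0.2
T = torch.full((C, C), noise / (C - 1))
T.fill_diagonal_(1.0 - noise)
logits = torch.randn(8, C)
labels = torch.randint(0, C, (8,))
loss = forward_corrected_loss(logits, labels, T)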