<p>In this dissertation, we study three important problems in modern deep learning: adversarial robustness, visualization, and partially monotonic function modeling. In the first part, we study the trade-off between robustness and standard accuracy in deep neural network (DNN) classifiers. We introduce sensible adversarial learning and demonstrate a synergistic effect between the pursuits of natural accuracy and robustness. Specifically, we define a sensible adversary, which is useful for learning a robust model while maintaining high natural accuracy. We theoretically establish that the Bayes classifier is the most robust multi-class classifier with the 0-1 loss under sensible adversarial learning. We propose a novel and efficient algorithm that trains a robust model using implicit loss truncation. Our experiments demonstrate that our method is effective in promoting robustness against various attacks while keeping natural accuracy high.</p>
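<p>As a rough illustration of the kind of training loop the first part describes, the sketch below shows a PGD-style adversarial training step in which the adversarial loss is used only on examples the model already classifies correctly, falling back to the natural loss otherwise. This particular truncation rule, along with the names <code>model</code>, <code>eps</code>, <code>alpha</code>, and the assumption that inputs lie in [0, 1], is an illustrative stand-in and not the dissertation's exact sensible-adversary algorithm.</p>
<pre><code>
# Minimal sketch only: PGD adversarial training with a simple loss-truncation
# heuristic. The truncation rule and hyperparameters are assumptions for
# illustration, not the dissertation's algorithm.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-infinity PGD; returns a perturbed copy of x (assumes x in [0, 1])."""
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0.0, 1.0)
    return x_adv.detach()

def truncated_adversarial_step(model, optimizer, x, y):
    """One training step: use the adversarial loss only where the natural example
    is classified correctly; otherwise fall back to the natural loss
    (a simple stand-in for implicit loss truncation)."""
    model.eval()
    x_adv = pgd_attack(model, x, y)
    model.train()
    with torch.no_grad():
        correct = model(x).argmax(dim=1).eq(y)   # mask of correctly classified points
    loss_nat = F.cross_entropy(model(x), y, reduction="none")
    loss_adv = F.cross_entropy(model(x_adv), y, reduction="none")
    loss = torch.where(correct, loss_adv, loss_nat).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
</code></pre>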
<p>In the second part, we study nonlinear dimensional reduction under the manifold assumption, often called manifold learning. Despite recent advances in manifold learning, current state-of-the-art techniques preserve either the local or the global structure of the data, but not both. Moreover, they are transductive: the dimensional reduction results cannot be generalized to unseen data. We propose iGLoMAP, a novel inductive manifold learning method for dimensional reduction and high-dimensional data visualization. iGLoMAP preserves both local and global structure information within the same algorithm by preserving geodesic distances between data points. We establish the consistency property of our geodesic distance estimators. iGLoMAP can provide the lower-dimensional embedding for an unseen, novel point without any additional optimization. We successfully apply iGLoMAP to both simulated and real-data settings, where it performs competitively against state-of-the-art methods.</p>
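<p>To make the inductive, geodesic-preserving idea concrete, the sketch below builds a k-nearest-neighbor graph, approximates geodesic distances by shortest paths, and fits a small MLP encoder so that embedding distances match those estimates; the trained encoder can then embed unseen points directly. This is a generic stand-in under assumed hyperparameters (k, network width, loss form), not the iGLoMAP algorithm itself.</p>
<pre><code>
# Minimal sketch only: a generic inductive, geodesic-distance-preserving embedding
# (not iGLoMAP itself). Assumes the kNN graph is connected so shortest paths are finite.
import numpy as np
import torch
import torch.nn as nn
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def geodesic_distances(X, k=10):
    """Approximate geodesic distances via shortest paths on a kNN distance graph."""
    graph = kneighbors_graph(X, n_neighbors=k, mode="distance")
    return shortest_path(graph, method="D", directed=False)

def train_encoder(X, D, dim=2, epochs=200, batch=256, lr=1e-3):
    """Fit an MLP encoder so that embedding distances match the geodesic estimates."""
    X_t = torch.tensor(X, dtype=torch.float32)
    D_t = torch.tensor(D, dtype=torch.float32)
    enc = nn.Sequential(nn.Linear(X.shape[1], 128), nn.ReLU(),
                        nn.Linear(128, 128), nn.ReLU(),
                        nn.Linear(128, dim))
    opt = torch.optim.Adam(enc.parameters(), lr=lr)
    n = X_t.shape[0]
    for _ in range(epochs):
        idx = torch.randint(0, n, (batch,))
        z = enc(X_t[idx])
        d_emb = torch.cdist(z, z)                        # pairwise embedding distances
        loss = ((d_emb - D_t[idx][:, idx]) ** 2).mean()  # stress-style matching loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return enc  # enc(x_new) embeds an unseen point with no further optimization
</code></pre>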
<p>In the third part, we study partially monotonic DNNs. We model such a function using the fundamental theorem for line integrals, where the gradient is parametrized by a DNN. To ensure the validity of the model formulation, we develop a symmetric penalty for gradient modeling. Unlike existing methods, our method allows partially monotonic modeling for general DNN architectures and monotonic constraints on multiple variables. We empirically show the necessity of the symmetric penalty on a simulated dataset.</p>
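<p>The sketch below illustrates the line-integral construction in a hedged form: a DNN parametrizes a vector field whose components on the monotone coordinates are forced nonnegative, the function value is recovered by numerically integrating the field along a straight path from the origin, and a penalty discourages asymmetry in the field's Jacobian so that it behaves like a true gradient. The architecture, quadrature rule, base point, and penalty form are assumptions for illustration, not the dissertation's exact formulation.</p>
<pre><code>
# Minimal sketch only: f(x) = f(0) + integral_0^1 g(t*x) . x dt, with a
# DNN-parameterized field g, nonnegative components on the monotone coordinates,
# and a penalty on Jacobian asymmetry. All design choices here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartiallyMonotone(nn.Module):
    def __init__(self, dim, monotone_idx, hidden=64, n_quad=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))
        self.bias = nn.Parameter(torch.zeros(1))      # plays the role of f(0)
        mask = torch.zeros(dim, dtype=torch.bool)
        mask[monotone_idx] = True
        self.register_buffer("mono_mask", mask)
        self.n_quad = n_quad

    def grad_field(self, x):
        """Parametrized gradient field; softplus keeps monotone coordinates nonnegative."""
        g = self.net(x)
        return torch.where(self.mono_mask, F.softplus(g), g)

    def forward(self, x):
        """Approximate the line integral from 0 to x with the midpoint rule."""
        t = (torch.arange(self.n_quad, device=x.device) + 0.5) / self.n_quad
        pts = t[:, None, None] * x[None, :, :]        # (n_quad, batch, dim)
        g = self.grad_field(pts.reshape(-1, x.shape[1])).reshape(pts.shape)
        return self.bias + (g * x[None, :, :]).sum(-1).mean(0)

def symmetry_penalty(model, x):
    """Penalize asymmetry of the Jacobian of the field at a single point x of shape (dim,),
    encouraging the field to be (approximately) a valid gradient."""
    J = torch.autograd.functional.jacobian(model.grad_field, x, create_graph=True)
    return ((J - J.T) ** 2).sum()
</code></pre>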
Identifier | oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/20152400
Date | 01 July 2022
Creators | Jungeum Kim (12957389)
Source Sets | Purdue University
Detected Language | English
Type | Text, Thesis
Rights | CC BY 4.0
Relation | https://figshare.com/articles/thesis/Bridging_the_gap_between_human_and_computer_vision_in_machine_learning_adversarial_and_manifold_learning_for_high-dimensional_data/20152400