Robustness of Neural Networks for Discrete Input: An Adversarial Perspective

In the past few years, evaluation on adversarial examples has become a standard procedure for measuring the robustness of deep learning models. The literature on adversarial examples for neural networks has largely focused on image data, which are represented as points in continuous space. However, a vast proportion of machine learning models operate on discrete input and thus demand similar rigor in understanding their vulnerabilities and robustness. We study the robustness of neural network architectures for textual and graph inputs through the lens of adversarial input perturbations. We cover methods for both attack and defense, focusing on 1) addressing the optimization challenges of creating adversarial perturbations for discrete data; 2) evaluating and contrasting white-box and black-box adversarial examples; and 3) proposing efficient methods to make models robust against adversarial attacks.
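The abstract does not detail the optimization method, but one common white-box formulation for discrete inputs scores every single-token substitution with a first-order Taylor approximation of the loss, then greedily applies the best flip. The sketch below assumes a one-hot input matrix and the loss gradient with respect to it are already computed; the names `best_flip`, `onehot`, and `grad` are illustrative, not taken from the thesis.

```python
import numpy as np

def best_flip(onehot, grad):
    """First-order estimate of the loss change from swapping the token
    at each position for every vocabulary item:
        delta_L(p, v) ~= grad[p, v] - grad[p, current_token[p]]
    Returns the (position, new_token, estimated_gain) that maximizes it.

    onehot: (positions, vocab) one-hot input matrix
    grad:   (positions, vocab) gradient of the loss w.r.t. onehot
    """
    cur = onehot.argmax(axis=1)                       # current token id at each position
    delta = grad - grad[np.arange(len(cur)), cur][:, None]
    p, v = np.unravel_index(delta.argmax(), delta.shape)
    return int(p), int(v), float(delta[p, v])

# Tiny illustrative example: 3 positions, vocabulary of 3 tokens.
onehot = np.eye(3)[[0, 1, 2]]
grad = np.array([[0., 1., 2.],
                 [0., 5., 0.],
                 [0., 0., 1.]])
print(best_flip(onehot, grad))  # flipping position 0 to token 2 gains ~2.0
```

A full attack would apply the chosen flip, recompute the gradient, and repeat (or run beam search over flip sequences); the black-box setting replaces the gradient with query-based scoring of candidate substitutions.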

Identifier: oai:union.ndltd.org:uoregon.edu/oai:scholarsbank.uoregon.edu:1794/24535
Date: 30 April 2019
Creators: Ebrahimi, Javid
Contributors: Lowd, Daniel
Publisher: University of Oregon
Source Sets: University of Oregon
Language: en_US
Detected Language: English
Type: Electronic Thesis or Dissertation
Rights: All Rights Reserved.
