Defending Natural Language Processing models against adversarial attacks is challenging because of the discrete nature of text data. However, given the breadth of Natural Language Processing applications, it is important to make text processing models more robust and secure. This thesis develops techniques that help text processing models such as BERT combat adversarial samples containing misspellings. The resulting models are more robust than off-the-shelf spelling checkers.
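To illustrate the attack setting the abstract describes, the sketch below perturbs an input sentence with character-swap misspellings and queries a BERT sentiment classifier on the clean and perturbed versions. This is a minimal sketch, not the thesis's method: the character-swap typo model, the perturbation rate, and the publicly available textattack/bert-base-uncased-SST-2 checkpoint are all illustrative assumptions not taken from the record.

```python
# Minimal sketch (not the thesis's defense): character-level misspellings
# as adversarial perturbations against a BERT text classifier.
# Assumes the HuggingFace `transformers` library and the public
# `textattack/bert-base-uncased-SST-2` checkpoint.
import random

from transformers import pipeline


def misspell(word: str, rng: random.Random) -> str:
    """Swap one random adjacent character pair -- a simple typo model."""
    if len(word) < 4:
        return word
    i = rng.randrange(1, len(word) - 2)  # keep first and last characters intact
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


def perturb(sentence: str, rate: float = 0.3, seed: int = 0) -> str:
    """Misspell roughly `rate` of the words in a sentence."""
    rng = random.Random(seed)
    return " ".join(
        misspell(w, rng) if rng.random() < rate else w
        for w in sentence.split()
    )


if __name__ == "__main__":
    clf = pipeline(
        "text-classification",
        model="textattack/bert-base-uncased-SST-2",  # assumed checkpoint
    )
    clean = "The film was an absolute delight from start to finish."
    noisy = perturb(clean)
    print(clean, "->", clf(clean))  # high-confidence positive on clean text
    print(noisy, "->", clf(noisy))  # misspellings can degrade or flip the label
```

Because each misspelled word typically breaks into unfamiliar subword tokens, even a small perturbation rate can shift the classifier's prediction, which is the vulnerability the thesis's defenses target.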
Identifier | oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/14195849
Date | 06 April 2021
Creators | Nivedita Nighojkar (8063438)
Source Sets | Purdue University |
Detected Language | English |
Type | Text, Thesis |
Rights | CC BY 4.0 |
Relation | https://figshare.com/articles/thesis/DEFENDING_BERT_AGAINST_MISSPELLINGS/14195849 |