
DEFENDING BERT AGAINST MISSPELLINGS

Defending natural language processing (NLP) models against adversarial attacks is challenging because text is discrete, so small input perturbations cannot be smoothed away as they can in continuous domains. Given the variety of NLP applications, however, it is important to make text processing models more robust and secure. This thesis develops techniques that help text processing models such as BERT combat adversarial samples containing misspellings. The resulting models are more robust than off-the-shelf spelling checkers.
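To illustrate the kind of attack the abstract describes (the thesis itself does not specify its attack procedure, so this is a generic sketch): a character-level misspelling attack perturbs words by swapping adjacent interior characters, producing text that humans read easily but that can flip a model's prediction. The function names and perturbation rate below are illustrative assumptions, not the author's method.

```python
import random

def misspell(word, rng):
    # Swap two adjacent interior characters; leave short words alone,
    # since very short words have no interior pair to swap.
    if len(word) < 4:
        return word
    i = rng.randrange(1, len(word) - 2)
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def perturb_sentence(sentence, rate=0.3, seed=0):
    # Misspell a random subset of words to create an adversarial-style sample.
    rng = random.Random(seed)
    return " ".join(
        misspell(w, rng) if rng.random() < rate else w
        for w in sentence.split()
    )

print(perturb_sentence("the model should remain robust to noisy input"))
```

A defense of the kind the thesis targets would need to map such perturbed inputs back to (or classify them the same as) the clean sentence, which a plain dictionary-based spell checker often fails to do reliably.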

DOI: 10.25394/pgs.14195849.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/14195849
Date: 06 April 2021
Creators: Nivedita Nighojkar (8063438)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY 4.0
Relation: https://figshare.com/articles/thesis/DEFENDING_BERT_AGAINST_MISSPELLINGS/14195849