
Využití adverzálních příkladů pro zpracování přirozeného jazyka / Using Adversarial Examples in Natural Language Processing

Machine learning has received a lot of attention in recent years. One of the studied areas is the use of adversarial examples. These are artificially constructed examples which exhibit two main features: they resemble the real training data, yet they deceive an already trained model. Adversarial examples have been comprehensively investigated in the context of deep convolutional neural networks that process images. Nevertheless, their properties have rarely been examined in connection with networks that process natural language. This thesis evaluates the effect of using adversarial examples during the training of recurrent neural networks. More specifically, the main focus is on recurrent networks whose text input takes the form of a sequence of word or character embeddings that have not been pretrained in advance. The effects of adversarial training are studied by evaluation on multiple NLP datasets with varied characteristics.
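To make the idea concrete, below is a minimal illustrative sketch (not the thesis's exact method) of adversarial training applied to non-pretrained embeddings feeding a recurrent classifier, using an FGSM-style perturbation of the embedding vectors. The model class `TextRNN`, the helper `adversarial_training_step`, the `epsilon` scale, and the gradient normalization are all assumptions made for the example.

```python
# Illustrative sketch of adversarial training on embeddings (PyTorch).
# Assumptions: a GRU-based classifier, FGSM-style perturbation, epsilon scale.
import torch
import torch.nn as nn


class TextRNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden=256, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)  # trained from scratch
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward_from_embeddings(self, emb):
        # emb: (batch, seq_len, emb_dim); classify from the final hidden state.
        _, h = self.rnn(emb)
        return self.fc(h[-1])


def adversarial_training_step(model, tokens, labels, optimizer, epsilon=1.0):
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()

    emb = model.embed(tokens)                                # (batch, seq, dim)
    clean_loss = criterion(model.forward_from_embeddings(emb), labels)

    # Gradient of the clean loss w.r.t. the embeddings gives the direction
    # in which a small perturbation hurts the model the most.
    grad, = torch.autograd.grad(clean_loss, emb, retain_graph=True)
    perturbation = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)

    # Adversarial example: the same sentence with perturbed embedding vectors.
    adv_emb = emb + perturbation.detach()
    adv_loss = criterion(model.forward_from_embeddings(adv_emb), labels)

    # Train on both the clean and the adversarial loss.
    (clean_loss + adv_loss).backward()
    optimizer.step()
    return clean_loss.item(), adv_loss.item()
```

Because the embeddings are learned jointly with the network rather than pretrained, the perturbation is applied directly in the continuous embedding space; the gradient-based direction and the single `epsilon` hyperparameter here are one common choice, not necessarily the configuration evaluated in the thesis.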

Identifier: oai:union.ndltd.org:nusl.cz/oai:invenio.nusl.cz:365176
Date: January 2017
Creators: Bělohlávek, Petr
Contributors: Žabokrtský, Zdeněk; Libovický, Jindřich
Source Sets: Czech ETDs
Language: English
Detected Language: English
Type: info:eu-repo/semantics/masterThesis
Rights: info:eu-repo/semantics/restrictedAccess