
Vytváření matoucích vzorů ve strojovém učení / Creating Adversarial Examples in Machine Learning

This thesis examines adversarial examples in machine learning, specifically in the image classification domain. State-of-the-art deep learning models are able to recognize patterns better than humans. However, we can significantly reduce the model's accuracy by adding imperceptible, yet intentionally harmful noise. This work investigates various methods of creating adversarial images as well as techniques that aim to defend deep learning models against these malicious inputs. We choose one of the contemporary defenses and design an attack that utilizes evolutionary algorithms to deceive it. Our experiments show an interesting difference between adversarial images created by evolution and images created with the knowledge of gradients. Last but not least, we test the transferability of our created samples between various deep learning models.
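For context, the gradient-based adversarial images the abstract contrasts with the evolutionary ones are typically produced along the lines of the Fast Gradient Sign Method (FGSM). The sketch below is an illustrative PyTorch implementation of that generic technique, not necessarily the exact attack used in the thesis; the function name and epsilon value are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Perturb `image` so that `model` is more likely to misclassify it.

    Fast Gradient Sign Method: take the sign of the loss gradient with
    respect to the input pixels and step by `epsilon` in that direction.
    The perturbation is small enough to be nearly imperceptible, yet it
    can change the predicted class.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep pixels in valid range
```

An evolutionary attack, by contrast, searches for such perturbations without access to gradients, which is what makes the comparison of the two kinds of adversarial images interesting.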

Identifier: oai:union.ndltd.org:nusl.cz/oai:invenio.nusl.cz:438031
Date: January 2021
Creators: Kumová, Věra
Contributors: Pilát, Martin; Neruda, Roman
Source Sets: Czech ETDs
Language: English
Detected Language: English
Type: info:eu-repo/semantics/masterThesis
Rights: info:eu-repo/semantics/restrictedAccess
