1

USING RANDOMNESS TO DEFEND AGAINST ADVERSARIAL EXAMPLES IN COMPUTER VISION

Huangyi Ge (14187059), 29 November 2022
Computer vision applications such as image classification and object detection often suffer from adversarial examples: adding a small amount of carefully crafted noise to input images can trick a model into misclassification. Over the years, many defense mechanisms have been proposed, and different researchers have made seemingly contradictory claims about their effectiveness. This dissertation first presents an analysis of possible adversarial models and proposes an evaluation framework for comparing defenses under more powerful and realistic adversary strategies. It then proposes two randomness-based defense mechanisms, Random Spiking (RS) and MoNet, to improve the robustness of image classifiers. Random Spiking generalizes dropout and introduces random noise into the training process in a controlled manner. MoNet combines secret randomness with Floyd-Steinberg dithering: input images are first processed using Floyd-Steinberg dithering to reduce their color depth, and the pixels are then encrypted using the AES block cipher under a secret, random key. Evaluations under the proposed framework suggest that RS and MoNet deliver better protection against adversarial examples than many existing schemes. Notably, MoNet significantly improves resilience against the transferability of adversarial examples, at the cost of a small drop in prediction accuracy. Furthermore, MoNet is extended to object detection networks and combined with model ensemble strategies (Affirmative and weighted boxes fusion, WBF) and Test Time Augmentation (TTA); this combined strategy is called 3Mix. Evaluations found that 3Mix can significantly improve the mean average precision (mAP) on both benign inputs and adversarial examples. In addition, 3Mix is a lightweight approach to mitigating adversarial examples without training new models.
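The MoNet preprocessing described in this abstract (dithering followed by keyed encryption of the pixels) can be illustrated with a short sketch. This is a minimal illustration under assumptions, not the dissertation's implementation: the palette size, the AES mode, and how the encrypted bytes are reshaped for the classifier are guesses made for illustration only.

```python
# Minimal sketch of a MoNet-style preprocessing step: Floyd-Steinberg dithering to
# reduce color depth, then AES encryption of the pixel bytes under a secret random
# key. Palette size, AES mode and the reshaping of encrypted bytes are assumptions.
import numpy as np
from PIL import Image                      # Pillow >= 9.1 for the enum constants
from Crypto.Cipher import AES              # pycryptodome
from Crypto.Random import get_random_bytes

SECRET_KEY = get_random_bytes(16)          # secret, random key kept by the defender

def monet_preprocess(img: Image.Image, colors: int = 16) -> np.ndarray:
    # 1) Reduce color depth with Floyd-Steinberg dithering.
    dithered = img.convert("P", palette=Image.Palette.ADAPTIVE, colors=colors,
                           dither=Image.Dither.FLOYDSTEINBERG)
    pixels = np.asarray(dithered, dtype=np.uint8)

    # 2) Encrypt the pixel bytes with AES under the secret key.
    flat = pixels.tobytes()
    pad = (-len(flat)) % 16                        # AES works on 16-byte blocks
    cipher = AES.new(SECRET_KEY, AES.MODE_ECB)     # mode chosen for illustration
    enc = cipher.encrypt(flat + b"\x00" * pad)

    # 3) Reshape back to the image grid; a classifier would be trained on such inputs.
    enc_pixels = np.frombuffer(enc[:len(flat)], dtype=np.uint8)
    return enc_pixels.reshape(pixels.shape).astype(np.float32) / 255.0

# example = monet_preprocess(Image.open("cat.png").convert("RGB"))  # placeholder path
```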
2

Robustness of a neural network used for image classification : The effect of applying distortions on adversarial examples

Östberg, Rasmus January 2018
Powerful classifiers such as neural networks have long been used to recognise images; these images might depict objects such as animals, people or plain text. Distortions affect a neural network's ability to recognise images; an image might be distorted or changed due to effects related to the camera. Camera-related distortions, and how they affect accuracy, have been explored previously. Recently, it has been shown that images can be intentionally made harder to recognise, an effect that lasts even after they have been photographed. Such images are known as adversarial examples. The purpose of this thesis is to evaluate how well a neural network can recognise adversarial examples that are also distorted. To evaluate the network, the adversarial examples are distorted in different ways and then fed to the neural network. Different kinds of distortions (rotation, blur, contrast and skew) were used to distort the examples, and for each type and strength of distortion the network's ability to classify was measured. It is shown that all distortions influenced the neural network's ability to recognise images. It is concluded that the type and strength of a distortion are important factors when classifying distorted adversarial examples, but also that some distortions, rotation and skew, retain their characteristic influence on the accuracy even when combined with other distortions.
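The evaluation procedure described above (apply a distortion of a given type and strength to each adversarial example, then measure classification accuracy) can be sketched as follows. The strength ranges and the `classify` callback are placeholders for illustration, not the thesis's actual setup.

```python
# Sketch of the distortion sweep: apply rotation, blur, contrast or skew at
# increasing strengths to adversarial images and record accuracy. The strength
# ranges and the `classify` callback are illustrative assumptions.
from PIL import Image, ImageFilter, ImageEnhance      # Pillow >= 9.1

def rotate(img, s):   return img.rotate(s)                                   # degrees
def blur(img, s):     return img.filter(ImageFilter.GaussianBlur(radius=s))  # pixels
def contrast(img, s): return ImageEnhance.Contrast(img).enhance(s)           # factor
def skew(img, s):     # horizontal shear by factor s
    return img.transform(img.size, Image.Transform.AFFINE, (1, s, 0, 0, 1, 0))

DISTORTIONS = {
    "rotation": (rotate,   [0, 5, 10, 20, 45]),
    "blur":     (blur,     [0, 0.5, 1.0, 2.0]),
    "contrast": (contrast, [1.0, 0.8, 0.6, 0.4]),
    "skew":     (skew,     [0, 0.1, 0.2, 0.4]),
}

def accuracy_under_distortion(adv_images, labels, classify):
    """adv_images: list of PIL images; classify: img -> predicted label (placeholder)."""
    results = {}
    for name, (fn, strengths) in DISTORTIONS.items():
        for s in strengths:
            correct = sum(classify(fn(img, s)) == y for img, y in zip(adv_images, labels))
            results[(name, s)] = correct / len(adv_images)
    return results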
3

Detecting Adversarial Examples by Measuring their Stress Response

January 2019
Machine learning (ML) and deep neural networks (DNNs) have achieved great success in a variety of application domains. However, despite significant effort to make these networks robust, they remain vulnerable to adversarial attacks, in which input that is perceptually indistinguishable from natural data is erroneously classified with high prediction confidence. Work on defending against adversarial examples can be broadly classified as correcting or detecting: the former aims to negate the effects of the attack and correctly classify the input, while the latter aims to detect and reject the input as adversarial. In this work, a new approach for detecting adversarial examples is proposed. The approach takes advantage of the robustness of natural images to noise: as noise is added to a natural image, the prediction probability of its true class drops, but the drop is neither sudden nor precipitous. The same does not appear to hold for adversarial examples. In other words, the stress response profile of natural images appears to differ from that of adversarial examples, so adversarial examples can be detected by their stress response profile. An evaluation of this approach is performed on the MNIST, CIFAR-10 and ImageNet datasets. Experimental data shows that the approach is effective at detecting some adversarial examples on small-scale images with simple content, with little sacrifice in benign accuracy. / Masters Thesis, Computer Science, 2019
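The detection idea (track how the model's confidence in its original prediction degrades as increasing noise is added) can be sketched as below. The use of Gaussian noise, the noise levels and the mean-of-profile threshold are assumptions made for illustration; the thesis's exact stress response measure may differ.

```python
# Sketch of a stress-response detector: add noise of increasing strength, record
# the probability assigned to the originally predicted class, and flag inputs whose
# confidence collapses. Noise type, levels and the threshold rule are assumptions.
import numpy as np

NOISE_LEVELS = [0.0, 0.02, 0.05, 0.1, 0.2]   # std of Gaussian noise (assumed)

def stress_profile(x, predict_probs, rng=np.random.default_rng(0)):
    """x: image array in [0, 1]; predict_probs: x -> class-probability vector."""
    base_class = int(np.argmax(predict_probs(x)))
    profile = []
    for std in NOISE_LEVELS:
        noisy = np.clip(x + rng.normal(0.0, std, size=x.shape), 0.0, 1.0)
        profile.append(float(predict_probs(noisy)[base_class]))
    return np.array(profile)

def looks_adversarial(x, predict_probs, threshold=0.5):
    # Natural images tend to keep a high probability for the predicted class as
    # noise grows; adversarial examples tend to lose it quickly. The profile is
    # summarized here by its mean (the threshold is an assumption).
    return stress_profile(x, predict_probs).mean() < threshold
```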
4

A Different Approach to Attacking and Defending Deep Neural Networks

Fourati, Fares 06 1900
Adversarial examples are among the most widespread attacks in adversarial machine learning. In this work, we define new targeted and non-targeted attacks that are computationally less expensive than standard adversarial attacks. Besides practical purposes in some scenarios, these attacks can improve our understanding of the robustness of machine learning models. Moreover, we introduce a new training scheme to improve the performance of pre-trained neural networks and defend against our attacks. We examine the differences between our method, standard training, and standard adversarial training on pre-trained models. We find that our method protects the networks better against our attacks. Furthermore, unlike usual adversarial training, which reduces standard accuracy when applied to previously trained networks, our method maintains and sometimes even improves standard accuracy.
5

Methods for Increasing Robustness of Deep Convolutional Neural Networks

Uličný, Matej January 2015
Recent discoveries have uncovered flaws in machine learning algorithms such as deep neural networks. Deep neural networks appear vulnerable to small amounts of non-random noise created by exploiting the input-to-output mapping of the network; applying this noise to an input image drastically decreases classification performance. Such an image is referred to as an adversarial example. The purpose of this thesis is to examine how known regularization/robustness methods perform on adversarial examples. The robustness methods dropout, low-pass filtering, denoising autoencoders, adversarial training and committees have been implemented, combined and tested. For the well-known benchmark, the MNIST (Mixed National Institute of Standards and Technology) dataset, the best combination of robustness methods has been found. Based on the results of the experiments, an ensemble of models trained on adversarial examples is considered the best approach for MNIST. The harmfulness of the adversarial noise and some robustness experiments are demonstrated on the CIFAR-10 (Canadian Institute for Advanced Research) dataset as well. Apart from the robustness tests, the thesis describes experiments on human classification performance on noisy images and compares it with the performance of the deep neural network.
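The best-performing approach reported above, an ensemble (committee) of models trained on adversarial examples, combines two of the listed methods. A minimal PyTorch sketch follows; the FGSM budget, the half-clean/half-adversarial batches and the softmax-averaging committee are illustrative choices, not the thesis's exact configuration.

```python
# Sketch of adversarial training (FGSM) plus a committee of models, the combination
# the abstract reports as strongest on MNIST. Epsilon, the clean/adversarial mix and
# the number of committee members are illustrative assumptions.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    """Fast Gradient Sign Method perturbation of a batch (eps is an assumption)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.1):
    """One optimization step on a 50/50 mix of clean and FGSM examples."""
    model.train()
    x_adv = fgsm(model, x, y, eps)
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

def committee_predict(models, x):
    """Average the softmax outputs of several adversarially trained models."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(x), dim=1) for m in models]).mean(dim=0)
    return probs.argmax(dim=1)
```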
6

Matoucí vzory ve strojovém učení / Adversarial Examples in Machine Learning

Kocián, Matěj January 2018
Deep neural networks have recently been achieving high accuracy on many important tasks, most notably image classification. However, these models are not robust to slightly perturbed inputs known as adversarial examples, which can severely decrease accuracy and thus endanger systems in which such machine learning models are employed. We present a review of the adversarial examples literature. We then propose new defenses against adversarial examples: a network combining RBF units with convolution, which we evaluate on MNIST and which achieves better accuracy than an adversarially trained CNN, and input space discretization, which we evaluate on MNIST and ImageNet with promising results. Finally, we explore a way of generating adversarial perturbations without access to the input to be perturbed.
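Both proposed defenses are simple to sketch: an RBF unit scores an input by its distance to a learned center, and input space discretization snaps pixel values to a small set of levels before classification. The sketch below assumes Gaussian RBF units and uniform quantization; the thesis's exact formulations may differ.

```python
# Sketch of the two defenses named in the abstract: a Gaussian RBF layer that can
# follow convolutional features, and uniform input-space discretization. The RBF
# form and the number of quantization levels are assumptions for illustration.
import torch
import torch.nn as nn

class RBFLayer(nn.Module):
    """out_j = exp(-||x - c_j||^2 / (2 * sigma_j^2)) for learned centers c_j."""
    def __init__(self, in_features: int, num_units: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_units, in_features))
        self.log_sigma = nn.Parameter(torch.zeros(num_units))

    def forward(self, x):                        # x: (batch, in_features)
        dist2 = torch.cdist(x, self.centers) ** 2
        return torch.exp(-dist2 / (2 * torch.exp(self.log_sigma) ** 2 + 1e-8))

def discretize(x: torch.Tensor, levels: int = 8) -> torch.Tensor:
    """Quantize pixel values in [0, 1] to `levels` evenly spaced values."""
    return torch.round(x * (levels - 1)) / (levels - 1)
```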
7

Tvorba nepřátelských vzorů hlubokými generativními modely / Adversarial examples design by deep generative models

Čermák, Vojtěch January 2021
In the thesis, we explore the prospects of creating adversarial examples using various generative models. We design two algorithms to create unrestricted adversarial examples by perturbing the vectors of latent representation and exploiting the target classifier's decision boundary properties. The first algorithm uses linear interpolation combined with bisection to extract candidate samples near the decision boundary of the targeted classifier. The second algorithm applies the idea behind the FGSM algorithm on vectors of latent representation and uses additional information from gradients to obtain better candidate samples. In an empirical study on MNIST, SVHN and CIFAR10 datasets, we show that the candidate samples contain adversarial examples, samples that look like some class to humans but are classified as a different class by machines. Additionally, we show that standard defence techniques are vulnerable to our attacks.
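The first algorithm described above (interpolate between two latent vectors and bisect toward the classifier's decision boundary) can be sketched as follows. The generator `G`, classifier `f` and the stopping tolerance are placeholders; the thesis's actual models and stopping criteria may differ.

```python
# Sketch of the boundary-search idea: linearly interpolate between two latent
# vectors whose generated images receive different labels, then bisect until the
# interpolated sample sits near the classifier's decision boundary.
# G (generator), f (hard-label classifier) and tol are illustrative placeholders.
def boundary_candidate(z0, z1, G, f, tol=1e-3, max_iter=60):
    """z0, z1: latent vectors with f(G(z0)) != f(G(z1)); returns a latent vector
    whose generated image lies close to the decision boundary of f."""
    y0 = f(G(z0))
    lo, hi = 0.0, 1.0                      # interpolation coefficients
    for _ in range(max_iter):
        mid = (lo + hi) / 2.0
        z_mid = (1.0 - mid) * z0 + mid * z1
        if f(G(z_mid)) == y0:
            lo = mid                       # still on z0's side of the boundary
        else:
            hi = mid                       # crossed the boundary
        if hi - lo < tol:
            break
    return (1.0 - hi) * z0 + hi * z1       # candidate just past the boundary
```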
8

Vytváření matoucích vzorů ve strojovém učení / Creating Adversarial Examples in Machine Learning

Kumová, Věra January 2021
This thesis examines adversarial examples in machine learning, specifically in the image classification domain. State-of-the-art deep learning models are able to recognize patterns better than humans. However, we can significantly reduce the model's accuracy by adding imperceptible, yet intentionally harmful noise. This work investigates various methods of creating adversarial images as well as techniques that aim to defend deep learning models against these malicious inputs. We choose one of the contemporary defenses and design an attack that utilizes evolutionary algorithms to deceive it. Our experiments show an interesting difference between adversarial images created by evolution and images created with the knowledge of gradients. Last but not least, we test the transferability of our created samples between various deep learning models.
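A gradient-free, evolutionary attack of the kind mentioned above can be sketched as a simple (1+λ) evolution strategy over pixel perturbations. The population size, mutation scale and fitness definition below are assumptions for illustration, not the thesis's algorithm.

```python
# Sketch of a black-box evolutionary attack: evolve a bounded perturbation that
# lowers the model's confidence in the true class using only model queries (no
# gradients). Population size, mutation scale and budget are illustrative.
import numpy as np

def evolve_adversarial(x, y_true, predict_probs, eps=0.05, pop=20,
                       sigma=0.01, iters=200, rng=np.random.default_rng(0)):
    """x: image in [0, 1]; predict_probs: image -> class-probability vector."""
    best = np.zeros_like(x)                       # current best perturbation
    best_fit = predict_probs(np.clip(x, 0, 1))[y_true]
    for _ in range(iters):
        # Mutate the current best perturbation to get a small population.
        candidates = [np.clip(best + sigma * rng.standard_normal(x.shape), -eps, eps)
                      for _ in range(pop)]
        # Fitness: lower probability of the true class is better.
        fits = [predict_probs(np.clip(x + d, 0, 1))[y_true] for d in candidates]
        i = int(np.argmin(fits))
        if fits[i] < best_fit:
            best, best_fit = candidates[i], fits[i]
        if np.argmax(predict_probs(np.clip(x + best, 0, 1))) != y_true:
            break                                  # misclassification achieved
    return np.clip(x + best, 0, 1)
```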
9

Attacking Computer Vision Models Using Occlusion Analysis to Create Physically Robust Adversarial Images

Loh, Jacobsen 01 June 2020
Self-driving cars rely on their sense of sight to function effectively in chaotic and uncontrolled environments. Thanks to recent developments in computer vision, specifically convolutional neural networks, autonomous vehicles have developed the ability to see at or above human-level capabilities, which in turn has allowed for rapid advances in self-driving cars. Unfortunately, much like humans being confused by simple optical illusions, convolutional neural networks are susceptible to simple adversarial inputs. As there is no overlap between the optical illusions that fool humans and the adversarial examples that threaten convolutional neural networks, little is understood as to why these adversarial examples dupe such advanced models and what effective mitigation techniques might exist to resolve these issues. This thesis focuses on these adversarial images. By extending existing work, this thesis is able to offer a unique perspective on adversarial examples. Furthermore, these extensions are used to develop a novel attack that can generate physically robust adversarial examples. These physically robust instances provide a unique challenge as they transcend both individual models and the digital domain, thereby posing a significant threat to the efficacy of convolutional neural networks and their dependent applications.
10

Towards Real-World Adversarial Examples in AI-Driven Cybersecurity

Liu, Hao January 2022
No description available.
