Powerful classifiers such as neural networks have long been used to recognise images; these images might depict objects such as animals, people or plain text. Distorted images affect a neural network's ability to recognise them; images may be distorted or changed, for example by distortions related to the camera. Camera-related distortions, and how they affect accuracy, have previously been explored. Recently, it has been shown that images can be intentionally made harder to recognise, an effect that lasts even after they have been photographed. Such images are known as adversarial examples. The purpose of this thesis is to evaluate how well a neural network can recognise adversarial examples that are also distorted. To evaluate the network, the adversarial examples are distorted in different ways and then fed to the neural network. Four kinds of distortion (rotation, blur, contrast and skew) were used to distort the examples. For each type and strength of distortion, the network's classification accuracy was measured. It is shown that all distortions influenced the neural network's ability to recognise images. It is concluded that the type and strength of a distortion are important factors when classifying distorted adversarial examples, and also that some distortions, rotation and skew, retain their characteristic influence on the accuracy even when combined with other distortions.
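The evaluation pipeline the abstract describes, applying each distortion at a chosen strength before classification, can be sketched in NumPy. This is a minimal illustration, not the thesis's actual implementation: the function names and the specific formulations (contrast scaling around the mean, a box blur, nearest-neighbour affine warps for rotation and skew) are assumptions chosen for brevity.

```python
import numpy as np

def adjust_contrast(img, factor):
    """Scale pixel values around the image mean; factor < 1 lowers contrast.
    (Assumed formulation; img is a float array in [0, 1].)"""
    mean = img.mean()
    return np.clip(mean + factor * (img - mean), 0.0, 1.0)

def box_blur(img, k=3):
    """Naive k x k box blur standing in for a generic blur distortion."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def affine_warp(img, matrix):
    """Nearest-neighbour inverse warp about the image centre.
    Rotation and skew are both special cases of this transform."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    coords = np.stack([(ys - cy).ravel(), (xs - cx).ravel()])
    src = np.linalg.inv(matrix) @ coords       # map output pixels back to source
    sy = np.rint(src[0] + cy).astype(int)
    sx = np.rint(src[1] + cx).astype(int)
    valid = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
    out = np.zeros_like(img)
    out.reshape(-1)[valid] = img[sy[valid], sx[valid]]
    return out

def rotation(theta):
    """2x2 rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def skew(shear):
    """2x2 shear matrix; shear controls the skew strength."""
    return np.array([[1.0, shear], [0.0, 1.0]])
```

In the experiment described above, one would sweep the strength parameter of each distortion (angle, blur radius, contrast factor, shear), apply it to every adversarial example, and record the classifier's accuracy at each setting.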
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:hig-26118 |
Date | January 2018 |
Creators | Östberg, Rasmus |
Publisher | Högskolan i Gävle, Datavetenskap |
Source Sets | DiVA Archive at Upsalla University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |