Self-driving cars rely on their sense of sight to function effectively in chaotic and uncontrolled environments. Thanks to recent developments in computer vision, specifically convolutional neural networks, autonomous vehicles can now perceive their surroundings at or above human-level accuracy, which in turn has enabled rapid advances in self-driving cars. Unfortunately, much as humans are confused by simple optical illusions, convolutional neural networks are susceptible to simple adversarial inputs. Because the optical illusions that fool humans do not overlap with the adversarial examples that threaten convolutional neural networks, little is understood about why these adversarial examples dupe such advanced models or what effective mitigation techniques might resolve the problem.
This thesis focuses on these adversarial images. By extending existing work, it offers a unique perspective on adversarial examples. These extensions are then used to develop a novel attack that generates physically robust adversarial examples. Such physically robust instances present a unique challenge: they transcend both individual models and the digital domain, posing a significant threat to the efficacy of convolutional neural networks and the applications that depend on them.
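For readers unfamiliar with how adversarial examples are typically constructed, the sketch below shows the fast gradient sign method (FGSM), a standard technique from the literature; it is illustrative only and is not the novel attack developed in this thesis. The names `model`, `image`, `label`, and `epsilon` are hypothetical placeholders for any differentiable classifier, an input tensor, its true class index, and the perturbation budget.

```python
# Minimal, generic FGSM sketch (assumes PyTorch); not the thesis's attack.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a perturbed copy of `image` that the classifier is more
    likely to mislabel, with the perturbation bounded by `epsilon`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip to valid pixels.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A physically robust attack, as pursued in this thesis, must additionally survive printing, viewing angle, lighting, and camera noise rather than perturbing a single digital image.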
Identifier | oai:union.ndltd.org:CALPOLY/oai:digitalcommons.calpoly.edu:theses-3649 |
Date | 01 June 2020 |
Creators | Loh, Jacobsen |
Publisher | DigitalCommons@CalPoly |
Source Sets | California Polytechnic State University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Master's Theses |