Security assurance in a computer system can be viewed as distinguishing between self and non-self. Artificial Immune Systems (AIS) are a class of machine learning (ML) techniques inspired by the behavior of innate biological immune systems, which have evolved to accurately distinguish self-behavior from non-self-behavior. This work aims to leverage AIS-based ML techniques to identify behavioral traits in high-level hardware descriptions, including unsafe or undesirable behaviors, whether such behavior arises from human error during development or from intentional, malicious circuit modifications known as hardware Trojans, without the need for a golden reference model. We explore the use of Negative Selection and Clonal Selection Algorithms, which have historically been applied to malware detection in software binaries, to detect potentially unsafe or malicious behavior in hardware. We present a software tool that analyzes Trojan-inserted benchmarks, extracts their control and data-flow graphs (CDFGs), and uses these graphs to train an AIS behavior model against which new hardware descriptions may be tested.
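To illustrate the negative selection idea referenced in the abstract, the following is a minimal sketch, not the thesis tool itself: it assumes CDFG-derived features have been reduced to fixed-length binary vectors (the feature length, match threshold, and helper names are assumptions for illustration). Random detectors that match any "self" (trusted) sample are discarded; a new sample matched by a surviving detector is flagged as non-self.

```python
import random

# Assumed representation: each hardware description's CDFG is summarized as a
# fixed-length binary feature vector (lengths and thresholds are illustrative).
FEATURE_LEN = 16
MATCH_THRESHOLD = 12  # a detector "matches" a sample if at least this many bits agree


def similarity(a, b):
    """Count agreeing bit positions between two feature vectors."""
    return sum(x == y for x, y in zip(a, b))


def generate_detectors(self_set, num_detectors):
    """Negative selection: keep only random detectors that match no self sample."""
    detectors = []
    while len(detectors) < num_detectors:
        candidate = [random.randint(0, 1) for _ in range(FEATURE_LEN)]
        if all(similarity(candidate, s) < MATCH_THRESHOLD for s in self_set):
            detectors.append(candidate)
    return detectors


def is_anomalous(sample, detectors):
    """A sample matched by any detector is classified as non-self (suspect)."""
    return any(similarity(sample, d) >= MATCH_THRESHOLD for d in detectors)


# Example usage with synthetic data standing in for extracted CDFG features.
self_set = [[random.randint(0, 1) for _ in range(FEATURE_LEN)] for _ in range(50)]
detectors = generate_detectors(self_set, num_detectors=100)
new_sample = [random.randint(0, 1) for _ in range(FEATURE_LEN)]
print("suspect" if is_anomalous(new_sample, detectors) else "benign")
```

The key design choice, consistent with the approach described above, is that only trusted ("self") behavior is needed to train the detector set; anomalous behavior such as a Trojan trigger is caught because it falls outside the self region, so no golden reference of the malicious behavior is required.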
Identifier | oai:union.ndltd.org:USF/oai:scholarcommons.usf.edu:etd-9189
Date | 20 February 2019
Creators | Zareen, Farhath
Publisher | Scholar Commons
Source Sets | University of South Florida
Detected Language | English
Type | text
Format | application/pdf
Source | Graduate Theses and Dissertations