Robust Anomaly Detection in Critical Infrastructure

Abdelaty, Maged Fathy Youssef, 14 September 2022
Critical Infrastructures (CIs) such as water treatment plants, power grids and telecommunication networks are essential to the daily activities and well-being of our society. Disruption of such CIs would have catastrophic consequences for public safety and the national economy. Hence, these infrastructures have become major targets in the upsurge of cyberattacks. Defending against such attacks often depends on an arsenal of cyber-defence tools, including Machine Learning (ML)-based Anomaly Detection Systems (ADSs). These detection systems use ML models to learn the profile of the normal behaviour of a CI and classify deviations that go well beyond the normality profile as anomalies. However, ML methods are vulnerable to both adversarial and non-adversarial input perturbations. Adversarial perturbations are imperceptible noises added to the input data by an attacker to evade the classification mechanism. Non-adversarial perturbations arise from the natural evolution of normal behaviour, such as changes in usage patterns, or from noisy data produced by normally degrading devices; both generate a high rate of false positives.

We first study the vulnerability of ML-based ADSs to non-adversarial perturbations, which cause a high rate of false alarms. To address this problem, we propose an ADS called DAICS, based on a wide and deep learning model that is both adaptive to evolving normality and robust to the noisy data that normally emerges from the system. DAICS adapts the pre-trained model to new normality using a small number of data samples and a few gradient updates, driven by operator feedback on false alarms (sketched below). DAICS was evaluated on two datasets collected from real-world Industrial Control System (ICS) testbeds. The results show that the adaptation process is fast and that DAICS is more robust than state-of-the-art approaches.

We further investigate the problem of false-positive alarms in ADSs. To address it, we propose an extension of DAICS called the SiFA framework. SiFA collects a buffer of historical false alarms and suppresses every new alarm that is similar to them (sketched below). The framework is evaluated using a dataset collected from a real-world ICS testbed; the results show that SiFA can decrease the false alarm rate of DAICS by more than 80%.

We also investigate the vulnerability of ML-based network ADSs to adversarial perturbations. In this setting, attackers may use their knowledge of the anomaly detection logic to generate malicious traffic that remains undetected. One way to address this issue is adversarial training, in which the training set is augmented with adversarially perturbed samples. This thesis presents an adversarial training approach called GADoT that leverages a Generative Adversarial Network (GAN) to generate adversarial samples for training (sketched below). GADoT is validated in the scenario of an ADS detecting Distributed Denial of Service (DDoS) attacks, which have been growing in volume and complexity. For a practical evaluation, DDoS network traffic was perturbed to generate two datasets while fully preserving the semantics of the attack. The results show that adversaries can exploit their domain expertise to craft adversarial attacks without requiring knowledge of the underlying detection model, and that adversarial training using GADoT renders ML models more robust to adversarial perturbations.

However, the evaluation of adversarial robustness is often susceptible to errors, leading to robustness overestimation. We investigate this problem in network ADSs and propose an adversarial attack called UPAS to evaluate their robustness. UPAS perturbs the inter-arrival time between packets by injecting a random time delay before packets from the attacker (sketched below). The attack is validated by perturbing malicious network traffic in a multi-attack dataset and is used to evaluate the robustness of two robust ADSs, one based on a denoising autoencoder and one on an adversarially trained ML model. The results demonstrate that the robustness of both ADSs is overestimated and that a standardised evaluation of robustness is needed.
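To make the DAICS adaptation step concrete, the following is a minimal PyTorch sketch of fine-tuning a pre-trained detector with a few gradient updates on samples the operator has flagged as false alarms. The function name, the MSE objective on a normality score, and the SGD optimiser are illustrative assumptions; the thesis's wide-and-deep architecture and exact update rule are not reproduced here.

```python
# Illustrative sketch only: adapting a pre-trained anomaly detector with a few
# gradient updates on operator-confirmed false alarms. Names and the loss are
# assumptions, not the thesis's actual design.
import torch
import torch.nn as nn

def adapt_on_false_alarms(model: nn.Module,
                          false_alarm_batch: torch.Tensor,
                          normal_targets: torch.Tensor,
                          steps: int = 5,
                          lr: float = 1e-3) -> nn.Module:
    """Fine-tune the detector so that samples the operator marked as false
    alarms are scored as normal again, using only a few gradient updates."""
    optimiser = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # assume the detector regresses a normality score
    model.train()
    for _ in range(steps):
        optimiser.zero_grad()
        scores = model(false_alarm_batch)
        loss = loss_fn(scores, normal_targets)
        loss.backward()
        optimiser.step()
    return model
```

Keeping `steps` small mirrors the fast, few-update adaptation the abstract describes and limits the risk of forgetting the previously learned normality profile.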
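The alarm-suppression idea behind SiFA can be sketched as a bounded buffer of false-alarm feature vectors plus a similarity test on each new alarm. The class and method names, the Euclidean distance, and the threshold value are assumptions for illustration; the thesis may use a different similarity measure.

```python
# Illustrative sketch only: suppress new alarms that resemble past false alarms.
import numpy as np

class FalseAlarmFilter:
    def __init__(self, threshold: float = 0.1, max_buffer: int = 1000):
        self.buffer = []            # feature vectors of confirmed false alarms
        self.threshold = threshold  # similarity cut-off (assumed Euclidean)
        self.max_buffer = max_buffer

    def record_false_alarm(self, features: np.ndarray) -> None:
        """Called when the operator marks an alarm as false."""
        self.buffer.append(features)
        self.buffer = self.buffer[-self.max_buffer:]  # keep most recent only

    def should_suppress(self, features: np.ndarray) -> bool:
        """Suppress a new alarm if it is close to any buffered false alarm."""
        if not self.buffer:
            return False
        distances = np.linalg.norm(np.stack(self.buffer) - features, axis=1)
        return bool(distances.min() < self.threshold)
```

Bounding the buffer keeps the per-alarm distance check cheap and lets the filter track the most recent feedback from the operator.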
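GADoT's adversarial training can be sketched as augmenting each training batch with adversarial samples drawn from a GAN generator, still labelled as malicious so the detector learns to catch them. Here `generator`, `detector`, the latent dimension, and the binary cross-entropy objective are hypothetical placeholders, not the thesis's actual GAN design.

```python
# Illustrative sketch only: one adversarial-training step in the spirit of
# GADoT. `generator` is a hypothetical pre-trained GAN generator producing
# perturbed malicious samples with the same feature layout as `real_x`.
import torch
import torch.nn as nn

def adversarial_training_step(detector: nn.Module,
                              generator: nn.Module,
                              real_x: torch.Tensor,
                              real_y: torch.Tensor,
                              optimiser: torch.optim.Optimizer,
                              latent_dim: int = 64) -> float:
    """Train the detector on a batch augmented with GAN-generated samples."""
    n_fake = real_x.size(0) // 2
    with torch.no_grad():
        noise = torch.randn(n_fake, latent_dim)
        fake_x = generator(noise)                        # perturbed malicious samples
    fake_y = torch.ones(n_fake, dtype=real_y.dtype)      # still labelled malicious

    x = torch.cat([real_x, fake_x])
    y = torch.cat([real_y, fake_y]).float()

    detector.train()
    optimiser.zero_grad()
    logits = detector(x).squeeze(-1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, y)
    loss.backward()
    optimiser.step()
    return loss.item()
```

Labelling the generated samples as malicious is what distinguishes this from ordinary data augmentation: the detector is explicitly pushed to classify perturbed attack traffic correctly.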
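Finally, the UPAS perturbation can be sketched as adding a random delay before each attacker packet and propagating the shift so timestamps stay monotonically increasing within the flow. The uniform delay distribution and the `max_delay` bound are assumptions; the abstract does not specify them.

```python
# Illustrative sketch only: perturb packet inter-arrival times by injecting a
# random delay before attacker packets, as UPAS is described in the abstract.
import numpy as np

def perturb_timestamps(timestamps: np.ndarray,
                       is_attacker: np.ndarray,
                       max_delay: float = 0.5,
                       seed: int = 0) -> np.ndarray:
    """Inject a random delay before each attacker packet; each injected delay
    also shifts every later packet, so ordering is preserved."""
    rng = np.random.default_rng(seed)
    delays = np.where(is_attacker,
                      rng.uniform(0.0, max_delay, timestamps.size),
                      0.0)
    return timestamps + np.cumsum(delays)
```

Because only timing is changed and packet payloads are untouched, a perturbation of this kind preserves the semantics of the attack while altering the inter-arrival features a detector relies on.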
