Adversarial Attacks Against Network Intrusion Detection Systems

The explosive growth of computer networks over the past few decades has significantly enhanced communication capabilities. However, this expansion has also attracted malicious attackers seeking to compromise and disable these networks for personal gain. Network Intrusion Detection Systems (NIDS) were developed to detect threats and alert users to potential attacks. As the types and methods of attacks have grown exponentially, NIDS have struggled to keep pace. A paradigm shift occurred when NIDS began using Machine Learning (ML) to differentiate between anomalous and normal traffic, alleviating the challenge of tracking and defending against new attacks. However, the adoption of ML-based anomaly detection in NIDS has opened a new avenue of exploitation rooted in an inherent weakness of machine learning models: their susceptibility to adversarial attacks.

In this work, we explore the application of adversarial attacks from the image domain to bypass NIDS. We evaluate both white-box and black-box adversarial attacks against nine popular ML-based NIDS models. Specifically, we investigate Projected Gradient Descent (PGD) attacks on two ML models, transfer attacks using adversarial examples generated by PGD, the score-based Zeroth-Order Optimization (ZOO) attack, and two boundary-based attacks, namely the Boundary and HopSkipJump attacks. Through comprehensive experiments on the NSL-KDD dataset, we find that logistic regression and multilayer perceptron models are highly vulnerable to all studied attacks. Decision trees, random forests, and XGBoost are moderately vulnerable to transfer attacks and PGD-assisted transfer attacks, with attack success rates (ASR) of roughly 60-70%, but highly susceptible to targeted HopSkipJump or Boundary attacks, with ASRs approaching 100%. Moreover, SVM-linear is highly vulnerable to both transfer attacks and targeted HopSkipJump or Boundary attacks, with ASRs near 100%, whereas SVM-rbf is highly vulnerable to transfer attacks (77% ASR) but only moderately vulnerable to targeted HopSkipJump or Boundary attacks (52% ASR). Finally, both KNN and Label Spreading exhibit robustness against transfer-based attacks (below 30% ASR) but are highly vulnerable to targeted HopSkipJump or Boundary attacks, reaching 100% ASR at the cost of large perturbations. Our findings may provide insights for designing future NIDS that are robust against potential adversarial attacks.
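To make the attack pipeline concrete, the sketch below illustrates the two attack families the abstract describes using the open-source Adversarial Robustness Toolbox (ART): a white-box PGD attack on a differentiable surrogate (logistic regression), a transfer of the resulting adversarial examples to a decision tree, and a direct decision-based HopSkipJump attack on that tree. This is our illustration, not code from the thesis; synthetic data stands in for the preprocessed NSL-KDD features (NSL-KDD records have 41 features), and all hyperparameters are illustrative rather than the thesis settings.

```python
# Minimal sketch of a transfer attack and a decision-based attack on
# ML-based NIDS models, assuming the Adversarial Robustness Toolbox (ART)
# and scikit-learn are installed. Synthetic data stands in for NSL-KDD.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import ProjectedGradientDescent, HopSkipJump

# Stand-in for preprocessed NSL-KDD: 2-class samples scaled to [0, 1].
X, y = make_classification(n_samples=2000, n_features=41, random_state=0)
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

surrogate = LogisticRegression(max_iter=1000).fit(X, y)   # differentiable
target = DecisionTreeClassifier(random_state=0).fit(X, y)  # non-differentiable

# White-box PGD against the surrogate (needs loss gradients).
wrapped_lr = SklearnClassifier(model=surrogate, clip_values=(0.0, 1.0))
pgd = ProjectedGradientDescent(estimator=wrapped_lr,
                               eps=0.1, eps_step=0.01, max_iter=40)
X_adv = pgd.generate(x=X)

def asr(model, X_clean, X_pert, y_true):
    """Attack success rate: share of originally correct predictions flipped."""
    correct = model.predict(X_clean) == y_true
    flipped = model.predict(X_pert) != y_true
    return (correct & flipped).sum() / correct.sum()

print(f"White-box PGD ASR on surrogate:   {asr(surrogate, X, X_adv, y):.2%}")
print(f"Transfer ASR on decision tree:    {asr(target, X, X_adv, y):.2%}")

# Decision-based HopSkipJump directly on the tree: needs only predicted
# labels, so it works even without gradients or scores. Query-hungry, so
# we attack a small subset here.
wrapped_dt = SklearnClassifier(model=target, clip_values=(0.0, 1.0))
hsj = HopSkipJump(classifier=wrapped_dt, targeted=False,
                  max_iter=10, max_eval=500, init_eval=10)
X_adv_hsj = hsj.generate(x=X[:50])
print(f"HopSkipJump ASR on decision tree: {asr(target, X[:50], X_adv_hsj, y[:50]):.2%}")
```

The contrast in required access mirrors the abstract's findings: PGD needs gradients and so only applies directly to differentiable models, transfer attacks need no access to the target at all but succeed less reliably, and HopSkipJump needs only predicted labels, which is consistent with even non-differentiable models such as decision trees and KNN falling to it given enough queries and perturbation budget.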

DOI: 10.25394/pgs.26361145.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/26361145
Date: 26 July 2024
Creators: Sanidhya Sharma (19203919)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY 4.0
Relation: https://figshare.com/articles/thesis/Adversarial_Attacks_Against_Network_Intrusion_Detection_Systems/26361145
