<p dir="ltr">In network security, the rapid growth of intrusions has spurred research into advanced artificial intelligence (AI) techniques for intrusion detection systems (IDS). However, relying on AI for IDS presents challenges, including the performance variability of different AI models and the lack of explainability of their decisions, which hinders human security analysts' comprehension of model outputs. Hence, this thesis proposes end-to-end explainable AI (XAI) frameworks tailored to enhance both the understandability and the performance of AI models in this context.</p><p><br></p><p dir="ltr">The first chapter benchmarks seven black-box AI models across one real-world and two benchmark network intrusion datasets, laying the foundation for subsequent analyses. Subsequent chapters delve into feature selection methods, recognizing their crucial role in enhancing IDS performance by extracting the features most significant for identifying anomalies in network traffic. Leveraging XAI techniques, novel feature selection methods are proposed that outperform traditional approaches.</p><p><br></p><p dir="ltr">This thesis also introduces an in-depth evaluation framework for black-box XAI-IDS, encompassing both global and local scopes. Six evaluation metrics are analyzed: descriptive accuracy, sparsity, stability, efficiency, robustness, and completeness. The analysis provides insight into the limitations and strengths of current XAI methods.</p><p><br></p><p dir="ltr">Finally, the thesis addresses the potential of ensemble learning techniques to improve AI-based network intrusion detection by proposing a two-level ensemble learning framework in which base learners and ensemble methods are trained on the input datasets to generate both evaluation metrics and new datasets for subsequent analysis. Feature selection is integrated into both levels, leveraging XAI-based and Information Gain-based techniques.</p><p><br></p><p dir="ltr">Overall, this thesis offers a comprehensive approach to enhancing network intrusion detection through the synergy of AI, XAI, and ensemble learning, providing open-source code and insights into model performance. It thereby advances interpretable AI for network security, empowering security analysts to make informed decisions when safeguarding networked systems.<br></p>
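The Information Gain criterion mentioned in the abstract ranks features by how much knowing a feature's value reduces uncertainty (entropy) about the intrusion label. The following is a minimal, self-contained sketch of that idea; the toy flow records, feature names, and values are hypothetical illustrations, not drawn from the thesis datasets.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Reduction in label entropy after partitioning on a discrete feature."""
    n = len(labels)
    conditional = 0.0
    for value in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == value]
        conditional += (len(subset) / n) * entropy(subset)
    return entropy(labels) - conditional

# Hypothetical flow records: (protocol, tcp_flag, intrusion label)
records = [
    ("tcp", "syn", 1), ("tcp", "syn", 1), ("tcp", "ack", 0),
    ("udp", "syn", 1), ("udp", "ack", 0), ("udp", "ack", 0),
]
labels = [r[2] for r in records]
scores = {
    "protocol": information_gain([r[0] for r in records], labels),
    "tcp_flag": information_gain([r[1] for r in records], labels),
}
ranked = sorted(scores, key=scores.get, reverse=True)
# In this toy data the flag perfectly predicts the label, so it ranks first.
```

In practice such scores would be computed per feature over a full intrusion dataset and the top-k features kept for training, which is the general shape of the Information Gain-based selection the abstract refers to.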
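Of the six evaluation metrics listed above, sparsity is the simplest to illustrate: an explanation is sparse when only a few features carry most of the attribution mass, which makes it easier for an analyst to act on. The threshold and attribution values below are assumptions for illustration only, not the thesis's definition.

```python
def sparsity(attributions, threshold=0.1):
    """Fraction of features whose absolute attribution, normalized by the
    largest one, falls below a threshold; higher means a sparser explanation."""
    magnitudes = [abs(a) for a in attributions]
    top = max(magnitudes)
    if top == 0:
        return 1.0  # degenerate case: no feature contributes at all
    return sum(1 for m in magnitudes if m / top < threshold) / len(magnitudes)

# A hypothetical explanation where two features dominate the attribution.
attr = [0.9, 0.85, 0.05, 0.02, 0.01, 0.0]
score = sparsity(attr)  # 4 of the 6 features are negligible -> 4/6
```

A metric of this shape lets different XAI methods (e.g., feature-attribution explainers) be compared on how concentrated their explanations are for the same IDS model.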
Identifier | oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/25838623 |
Date | 03 September 2024 |
Creators | Osvaldo Guilherme Arreche (18569509) |
Source Sets | Purdue University |
Detected Language | English |
Type | Text, Thesis |
Rights | CC BY 4.0 |
Relation | https://figshare.com/articles/thesis/EXPLAINABLE_AI_METHODS_FOR_ENHANCING_AI-BASED_NETWORK_INTRUSION_DETECTION_SYSTEMS/25838623 |