
Towards eXplainable Artificial Intelligence (XAI) in cybersecurity

A 2023 cybersecurity research study highlighted the risk that increased technology investment may not be matched by proportional investment in cybersecurity, exposing organizations to greater cyber identity compromise vulnerabilities and risk. A survey of security professionals found that they expected 240% growth in digital identities; 68% were concerned about insider threats from employee layoffs and churn; 99% expected identity compromise due to financial cutbacks, geopolitical factors, cloud adoption and hybrid work; and 74% were concerned about confidential data loss through employees, ex-employees and third-party vendors. In light of the continuing growth of this type of criminal activity, those responsible for keeping such risks under control have no alternative but to adopt ever stronger defensive measures to prevent incidents and the business losses they cause. This research project explores a real-life case study: an Artificial Intelligence (AI) information systems solution implemented in a mid-size organization facing significant cybersecurity threats. A holistic approach was taken, in which AI was complemented with key non-technical elements such as organizational structures, business processes, standard operating documentation and training, all oriented towards driving behaviours conducive to a strong cybersecurity posture for the organization. Using Design Science Research (DSR) guidelines, the process of conceptualizing, designing, planning and implementing the AI project is richly described from both a technical and an information systems perspective. In alignment with DSR, key artifacts are documented in this research, such as a model for AI implementation that can create significant value for practitioners. The research results illustrate how an iterative, data-driven approach to development and operations is essential, with explainability and interpretability taking centre stage in driving adoption and trust.
This case study highlighted how critical communication, training and cost-containment strategies can be to the success of an AI project in a mid-size organization.

Thesis / Doctor of Science (PhD)

Artificial Intelligence (AI) is now pervasive in our lives, intertwined with myriad other technology elements in the fabric of society and organizations. Instant translations, complex fraud detection and AI assistants are no longer the fodder of science fiction. However, realizing AI's benefits in an organization can be challenging. Current AI implementations differ from traditional information systems development. AI models need to be trained with large amounts of data, iteratively focusing on outcomes rather than business requirements. AI projects may require an atypical set of skills and significant financial resources, while creating risks such as bias, security, interpretability, and privacy.
The research explores a real-life case study in a mid-size organization using Generative AI to improve its cybersecurity posture. A model for successful AI implementations is proposed, including the non-technical elements that practitioners should consider when pursuing AI in their organizations.

Identifier oai:union.ndltd.org:mcmaster.ca/oai:macsphere.mcmaster.ca:11375/29695
Date January 2024
Creators Lopez, Eduardo
Contributors Archer, Norm; Sartipi, Kamran; Business Administration
Source Sets McMaster University
Language English
Detected Language English
Type Thesis
