Unsupervised Anomaly Detection and Explainability for Ladok Logs

Anomaly detection is the process of finding outliers in data. This report explores the use of unsupervised machine learning for anomaly detection, as well as the importance of explaining the decision making of the model. The project focuses on identifying anomalous behaviour in Ladok's frontend access logs, with emphasis on security issues, specifically attempted intrusions. This is done by implementing an anomaly detection model consisting of a stacked autoencoder combined with k-means clustering, and by examining the data using k-means alone. To explain the decision-making process, SHAP is used; SHAP is an explainability method that measures feature importance. The report includes an overview of the necessary theory of machine learning, anomaly detection and explainability, describes the implementation of the model, and examines how to explain the decision making of a black-box model. Further, the results are presented and the models' performance on the data is discussed. Lastly, the report concludes whether the chosen approach was appropriate and proposes how the work could be improved in future work. The study concludes that the results from this approach did not match the desired outcome, and the approach might therefore not be the most suitable.
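The k-means part of the pipeline described above can be sketched in simplified form. This is a hypothetical illustration on synthetic data, not the thesis's actual implementation: the real Ladok log features are not public, and the stacked-autoencoder stage (which would normally compress the features before clustering) is omitted. Points are scored by their distance to the nearest cluster centroid, and points above a threshold are flagged as anomalies.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical synthetic stand-in for the Ladok access-log features:
# 4-dimensional "normal" traffic plus a handful of obvious outliers.
rng = np.random.default_rng(seed=0)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 4))
outliers = rng.normal(loc=8.0, scale=1.0, size=(5, 4))
X = np.vstack([normal, outliers])

# Fit k-means on the baseline traffic. (The thesis additionally feeds
# the data through a stacked autoencoder first; that stage is omitted.)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(normal)

# Anomaly score: distance from each point to its nearest centroid.
dists = np.min(
    np.linalg.norm(X[:, None, :] - km.cluster_centers_[None, :, :], axis=2),
    axis=1,
)

# Flag points whose score exceeds a chosen threshold (value is arbitrary
# here; in practice it would be tuned, e.g. from a score percentile).
threshold = 6.0
flags = dists > threshold
```

In the thesis, SHAP would then be applied on top of such a score to attribute each flagged point to its input features; that explainability step is not shown here.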

Identifier: oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:umu-213774
Date: January 2023
Creators: Edholm, Mimmi
Publisher: Umeå universitet, Institutionen för datavetenskap
Source Sets: DiVA Archive at Upsalla University
Language: English
Detected Language: English
Type: Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format: application/pdf
Rights: info:eu-repo/semantics/openAccess
Relation: UMNAD ; 1434