Evaluating Machine Learning Intrusion Detection System Classifiers: Using a Transparent Experiment Approach

Many studies have performed experiments that showcase the potential of machine learning solutions for intrusion detection, but their experimental approaches are often non-transparent and vague, making it difficult to replicate their trained models and results. In this thesis we exemplify a more transparent experimental methodology. A survey was performed to investigate evaluation metrics. Three experiments implementing and benchmarking machine learning classifiers, using different optimization techniques, were performed to establish a frame of reference for future work, as well as to demonstrate the importance of using descriptive metrics and disclosing implementation details. We found a set of metrics that describes the models more accurately, and we propose guidelines that we would like future researchers to follow in order to make their work more comprehensible. For future work we would like to see more discussion regarding metrics, and a new dataset that generalizes better.
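The abstract's argument for descriptive metrics can be illustrated with a small sketch. This is not the thesis authors' code; it is a hypothetical example showing how, on class-imbalanced network traffic, accuracy alone can overstate a detector's quality while precision, recall, and false-positive rate expose its weaknesses.

```python
def ids_metrics(tp, fp, tn, fn):
    """Derive descriptive metrics for a binary intrusion detector
    from its confusion-matrix counts (illustrative sketch only)."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    # Precision: fraction of raised alarms that were real attacks.
    precision = tp / (tp + fp) if tp + fp else 0.0
    # Recall (detection rate): fraction of attacks that were caught.
    recall = tp / (tp + fn) if tp + fn else 0.0
    # False-positive rate: fraction of benign traffic wrongly flagged.
    fpr = fp / (fp + tn) if fp + tn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "fpr": fpr, "f1": f1}

# Hypothetical traffic sample where attacks are rare: accuracy looks
# strong (0.96) even though half the attacks are missed (recall 0.5)
# and only a quarter of the alarms are genuine (precision 0.25).
m = ids_metrics(tp=10, fp=30, tn=950, fn=10)
```

A detector reported only by its 96% accuracy here would look far better than the recall and precision figures justify, which is the kind of gap more descriptive metrics are meant to reveal.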

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:his-17192
Date January 2019
Creators Augustsson, Christian; Egeberg Jacobson, Pontus; Scherqvist, Erik
Publisher Högskolan i Skövde, Institutionen för informationsteknologi
Source Sets DiVA Archive at Upsalla University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess