Many studies have performed experiments that showcase the potential of machine learning for intrusion detection, but their experimental approaches are often opaque and vaguely described, making it difficult to replicate the trained models and reproduce the reported results. In this thesis we exemplify a more rigorous experimental methodology. We performed a survey to investigate evaluation metrics, and we ran three experiments implementing and benchmarking machine learning classifiers with different optimization techniques, both to establish a frame of reference for future work and to demonstrate the importance of using descriptive metrics and disclosing implementation details. We identified a set of metrics that describes the models more accurately, and we propose guidelines that we would like future researchers to follow in order to make their work more comprehensible. For future work we would like to see more discussion of evaluation metrics, as well as a new, more generalizable dataset.
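The abstract does not name the specific metrics the thesis recommends, but a common motivation for moving beyond plain accuracy in intrusion detection is class imbalance: attack traffic is typically rare, so a classifier that flags almost nothing can still score high accuracy. The following is a minimal sketch, assuming a scikit-learn setup and a synthetic 98/2 benign/attack split (both assumptions for illustration, not the thesis's actual experiments), showing how precision, recall, F1, and the Matthews correlation coefficient expose behavior that accuracy hides.

```python
# Hedged illustration (not the thesis's experiment): on imbalanced
# intrusion data, accuracy alone can look excellent even when the
# detector misses many attacks; recall, F1, and MCC expose this.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, matthews_corrcoef)
from sklearn.model_selection import train_test_split

# Synthetic stand-in for network traffic: ~2% "attack" (positive) class.
X, y = make_classification(n_samples=20_000, n_features=20,
                           weights=[0.98, 0.02], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print(f"accuracy : {accuracy_score(y_te, pred):.3f}")   # dominated by benign class
print(f"precision: {precision_score(y_te, pred):.3f}")  # how many alerts are real
print(f"recall   : {recall_score(y_te, pred):.3f}")     # how many attacks are caught
print(f"F1       : {f1_score(y_te, pred):.3f}")
print(f"MCC      : {matthews_corrcoef(y_te, pred):.3f}") # robust under imbalance
```

Reporting this fuller metric set, together with the exact data split and model configuration, is the kind of disclosure the thesis argues makes results replicable.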
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:his-17192 |
Date | January 2019 |
Creators | Augustsson, Christian, Egeberg Jacobson, Pontus, Scherqvist, Erik |
Publisher | Högskolan i Skövde, Institutionen för informationsteknologi |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |