In recent years, interdisciplinary research has applied machine learning to detecting and classifying neurodegenerative disorders, with the sole goal of outperforming state-of-the-art models on metrics such as accuracy, specificity, and sensitivity. Specifically, these studies have applied existing networks to "novel" data pre-processing methods or developed new convolutional neural networks. To date, no work has examined how different normalization techniques affect a deep or shallow convolutional neural network in terms of numerical stability, performance, explainability, and interpretability. This work investigates which normalization technique is most suitable for deep and shallow convolutional neural networks. We created two baselines, one shallow and one deep, and applied eight different normalization techniques to these model architectures. Conclusions were drawn from our analysis of numerical stability, performance (metrics), and methods of Explainable Artificial Intelligence. Our findings indicate that normalization techniques affect models differently with respect to these aspects of the analysis, especially numerical stability and explainability. Moreover, we show that future studies in this interdisciplinary field should indeed prefer one method over the others.
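As a minimal sketch of the comparison described above, the snippet below shows how a normalization layer can be swapped into otherwise identical CNN baselines so that only the normalization technique varies between runs. This assumes PyTorch; the thesis does not specify a framework, and the architecture and the handful of techniques shown here are illustrative stand-ins, not the authors' eight methods or their actual baselines.

```python
import torch
import torch.nn as nn

def make_norm(kind: str, channels: int) -> nn.Module:
    """Return a normalization layer for a conv block with `channels` feature maps."""
    if kind == "batch":
        return nn.BatchNorm2d(channels)
    if kind == "instance":
        return nn.InstanceNorm2d(channels, affine=True)
    if kind == "group":
        return nn.GroupNorm(num_groups=8, num_channels=channels)
    if kind == "layer":
        # Layer normalization over all channels == group norm with one group.
        return nn.GroupNorm(num_groups=1, num_channels=channels)
    return nn.Identity()  # un-normalized baseline

class ShallowCNN(nn.Module):
    """A hypothetical shallow baseline whose normalization layer is a parameter."""
    def __init__(self, norm: str = "batch", num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            make_norm(norm, 16),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            make_norm(norm, 32),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Train the same architecture once per normalization technique, then compare
# metrics, numerical stability, and explanations across the resulting models.
for norm in ["batch", "instance", "group", "layer", "none"]:
    model = ShallowCNN(norm=norm)
    logits = model(torch.randn(4, 1, 64, 64))  # smoke test: (batch, num_classes)
```

Keeping the architecture fixed and varying only `make_norm` isolates the normalization technique as the sole experimental variable, which is the design the abstract implies.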
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:hh-45521
Date | January 2021 |
Creators | Pllashniku, Edlir, Stanikzai, Zolal |
Publisher | Högskolan i Halmstad, Akademin för informationsteknologi |
Source Sets | DiVA Archive at Uppsala University
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |