The design and adjustment of convolutional neural network architectures is an opaque and largely trial-and-error-driven process. The main reason for this is the lack of proper design paradigms beyond general conventions, together with a lack of actionable insights into trained models that can be propagated back into design decisions. For the task-specific design of deep learning solutions to become more efficient and goal-oriented, novel design strategies need to be developed that are founded on an understanding of convolutional neural network models.
This work develops tools for the analysis of the inference process in trained neural network models.
With these tools, characteristics of convolutional neural network models are identified that can be linked to inefficiencies in predictive and computational performance. Building on these insights, this work presents methods for diagnosing such design faults before and during training with little computational overhead.
These findings are empirically tested and demonstrated on architectures with sequential and multi-pathway structures, covering all the common types of convolutional neural network architectures used for classification.
Furthermore, this work proposes simple optimization strategies that allow for goal-oriented and informed adjustment of the neural architecture, opening the door to a less trial-and-error-driven design process.
Identifier | oai:union.ndltd.org:uni-osnabrueck.de/oai:osnadocs.ub.uni-osnabrueck.de:ds-202205106814 |
Date | 10 May 2022 |
Creators | Richter, Mats L. |
Contributors | Prof. Dr. Gunther Heidemann, Prof. Dr. Julius Schöning, Prof. Dr. Dimitris Pinotsis |
Source Sets | Universität Osnabrück |
Language | English |
Detected Language | English |
Type | doc-type:doctoralThesis |
Format | application/pdf, application/zip |
Rights | Attribution 3.0 Germany, http://creativecommons.org/licenses/by/3.0/de/ |