Deep Learning models, in particular Convolutional Neural Networks, have become the state of the art in several domains, such as image classification, object detection and other computer vision tasks. However, despite their overwhelming predictive performance, they are still largely considered black boxes, making it difficult to understand the reasoning behind their decisions. As such, and with the growing interest in deploying such models in real-world scenarios, the need for explainable systems has arisen. This dissertation addresses that need by proposing a novel CNN architecture composed of an explainer and a classifier. The network, trained end-to-end, constitutes an in-model explainability method that outputs not only decisions but also visual explanations of what the network focuses on to produce those decisions.
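The abstract does not detail how the explainer and classifier are combined, so the following is only a minimal PyTorch-style sketch of one plausible arrangement: an encoder-style explainer produces a spatial explanation map that gates the input before classification, and both modules are trained jointly. The module names, layer sizes, and the sparsity regularizer on the explanation are assumptions for illustration, not the dissertation's actual design.

# Hypothetical sketch: joint explainer + classifier CNN trained end-to-end.
# All architectural choices below are assumptions, not the proposed method.
import torch
import torch.nn as nn

class Explainer(nn.Module):
    """Produces a single-channel explanation map in [0, 1] with the same
    spatial size as the input image (assumed fully convolutional)."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # (B, 1, H, W) visual explanation

class Classifier(nn.Module):
    """Plain CNN classifier operating on the explanation-modulated image."""
    def __init__(self, in_channels: int = 3, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

class ExplainerClassifier(nn.Module):
    """End-to-end network: the explanation map gates the classifier's input,
    so the decision and its visual explanation are produced together."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.explainer = Explainer()
        self.classifier = Classifier(num_classes=num_classes)

    def forward(self, x: torch.Tensor):
        expl = self.explainer(x)            # visual explanation
        logits = self.classifier(x * expl)  # decision conditioned on it
        return logits, expl

if __name__ == "__main__":
    model = ExplainerClassifier(num_classes=10)
    images = torch.randn(4, 3, 224, 224)
    labels = torch.randint(0, 10, (4,))

    logits, expl = model(images)
    # Assumed joint loss: classification term plus an L1 sparsity term
    # encouraging compact explanations.
    loss = nn.functional.cross_entropy(logits, labels) + 1e-4 * expl.abs().mean()
    loss.backward()
    print(logits.shape, expl.shape, float(loss))

A single backward pass through this joint loss updates both modules at once, which is what "trained end-to-end" implies; the weight on the sparsity term is likewise an illustrative placeholder.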
Identifier | oai:union.ndltd.org:up.pt/oai:repositorio-aberto.up.pt:10216/122958 |
Date | 14 October 2019 |
Creators | Isabel Cristina Rio-Torto de Oliveira |
Contributors | Faculdade de Engenharia |
Source Sets | Universidade do Porto |
Language | English |
Detected Language | English |
Type | Dissertation |
Format | application/pdf |
Rights | openAccess, https://creativecommons.org/licenses/by/4.0/ |