
Producing Decisions and Explanations: A Joint Approach Towards Explainable CNNs

Deep learning models, in particular Convolutional Neural Networks (CNNs), have become the state of the art in domains such as image classification, object detection, and other computer vision tasks. However, despite their impressive predictive performance, they are still largely considered black boxes, making it difficult to understand the reasoning behind their decisions. With the growing interest in deploying such models in real-world scenarios, the need for explainable systems has therefore arisen. This dissertation addresses that need by proposing a novel CNN architecture composed of an explainer and a classifier. The network, trained end-to-end, constitutes an in-model explainability method that outputs not only decisions but also visual explanations of what the network focuses on to produce those decisions.
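The abstract describes the explainer-classifier architecture only at a high level. Below is a minimal PyTorch sketch of one plausible reading, in which the explainer produces a spatial heatmap that gates the classifier's input and both modules are trained jointly; the module shapes, the multiplicative gating, and the sparsity penalty are illustrative assumptions, not the dissertation's actual design.

```python
import torch
import torch.nn as nn

class ExplainerClassifier(nn.Module):
    """Joint explainer + classifier (illustrative sketch only)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Explainer: predicts a single-channel spatial map in [0, 1]
        # serving as the visual explanation.
        self.explainer = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        # Classifier: a small CNN applied to the explanation-gated input.
        self.classifier = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor):
        heatmap = self.explainer(x)      # (N, 1, H, W) visual explanation
        gated = x * heatmap              # classifier attends to explained regions
        logits = self.classifier(gated)  # (N, num_classes) decision
        return logits, heatmap


# End-to-end training step: classification loss plus a sparsity
# penalty (a hypothetical choice) to keep explanations focused.
model = ExplainerClassifier()
images = torch.randn(4, 3, 64, 64)
labels = torch.randint(0, 10, (4,))
logits, heatmap = model(images)
loss = nn.functional.cross_entropy(logits, labels) + 0.1 * heatmap.abs().mean()
loss.backward()
```

Because the heatmap sits on the classifier's forward path, gradients from the classification loss flow back into the explainer, so the two objectives (accurate decisions and informative explanations) are optimized together rather than post hoc.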

Identifier: oai:union.ndltd.org:up.pt/oai:repositorio-aberto.up.pt:10216/122958
Date: 14 October 2019
Creators: Isabel Cristina Rio-Torto de Oliveira
Contributors: Faculdade de Engenharia
Source Sets: Universidade do Porto
Language: English
Detected Language: English
Type: Dissertation
Format: application/pdf
Rights: openAccess, https://creativecommons.org/licenses/by/4.0/
