
Developing a highly accurate, locally interpretable neural network for medical image analysis

Ventura Caballero, Rony David, January 2023
Background: Machine learning techniques such as convolutional networks have shown promise in medical image analysis, including the detection of pediatric pneumonia. However, these models often lack interpretability, which compromises their trustworthiness and acceptance in medical applications, where interpretability is crucial for trust and bias identification.

Aim: The aim is to create a locally interpretable neural network that performs comparably to black-box models while being inherently interpretable, enhancing trust in medical machine learning models.

Method: An MLP ReLU network is trained on the Guangzhou Women and Children's Medical Center pediatric chest x-ray dataset, and the Aletheia unwrapper is used for interpretability. A 5-fold cross-validation assesses the network's performance, measuring accuracy and F1 score; the averages are 0.90 and 0.91, respectively. To assess interpretability, the results are compared against a CNN aided by LIME and SHAP to generate explanations.

Results: Despite lacking convolutional layers, the MLP network classifies pneumonia images satisfactorily, and its explanations align with the areas of interest identified in previous studies. Moreover, in the comparison with a state-of-the-art network aided by LIME and SHAP explanations, the local explanations are consistent within the lung areas, while the post-hoc alternatives often highlight areas not relevant to the task.

Conclusion: The developed locally interpretable neural network demonstrates promising performance and interpretability. However, additional research and implementation work are required for it to outperform the so-called black-box models. In a medical setting, the more accurate model could be crucial even when scores are close, as it could potentially save more lives, which is the ultimate goal of healthcare.
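The evaluation loop described in the Method section can be reproduced in outline. Below is a minimal sketch, not the thesis code: scikit-learn's MLPClassifier stands in for the actual network, and the hidden-layer sizes, feature matrix X (flattened, normalized x-ray images), and binary labels y are assumptions for illustration.

import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, f1_score

def cross_validate_mlp(X, y, n_splits=5, seed=0):
    # 5-fold cross-validation reporting mean accuracy and F1, as in the Method section.
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    accs, f1s = [], []
    for train_idx, test_idx in skf.split(X, y):
        # ReLU activations keep the network piecewise linear, which is what
        # makes Aletheia-style local linear interpretation possible.
        clf = MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu",
                            max_iter=200, random_state=seed)
        clf.fit(X[train_idx], y[train_idx])
        preds = clf.predict(X[test_idx])
        accs.append(accuracy_score(y[test_idx], preds))
        f1s.append(f1_score(y[test_idx], preds))
    return float(np.mean(accs)), float(np.mean(f1s))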
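The post-hoc baseline pairs a CNN with LIME and SHAP explanations. The following is a minimal sketch of the LIME side only, using the lime package's image explainer; predict_fn is a hypothetical placeholder for the trained CNN's probability output, and the random array stands in for a preprocessed x-ray.

import numpy as np
from lime import lime_image

def predict_fn(images):
    # Hypothetical placeholder: replace with the trained CNN's predict method,
    # returning an (n, 2) array of [normal, pneumonia] probabilities.
    p = images.mean(axis=(1, 2, 3)).reshape(-1, 1)
    return np.hstack([1.0 - p, p])

explainer = lime_image.LimeImageExplainer()
image = np.random.rand(128, 128, 3)  # stand-in for a preprocessed x-ray
explanation = explainer.explain_instance(image, predict_fn,
                                         top_labels=1, num_samples=1000)
# Recover the superpixels that most support the predicted class; in the thesis
# comparison, such highlighted regions are checked against the lung areas.
img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                           positive_only=True, num_features=5)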
