Preprint, working paper. Year: 2023

Visual Interpretable and Explainable Deep Learning Models for Brain Tumor MRI and COVID-19 Chest X-ray Images

Yusuf Brima
Marcellin Atemkeng

Abstract

Deep learning shows promise for medical image analysis but lacks interpretability, which hinders its adoption in healthcare. Attribution techniques that explain model reasoning may increase trust in deep learning among clinical stakeholders. This paper evaluates attribution methods for illuminating how deep neural networks analyze medical images. Using adaptive path-based gradient integration, we attributed the predictions of recent deep convolutional neural network models on brain tumor MRI and COVID-19 chest X-ray datasets. The technique highlighted possible biomarkers, exposed model biases, and offered insight into the links between input and prediction. Our analysis demonstrates the method's ability to elucidate model reasoning on these datasets, and the resulting attributions show promise for improving deep learning transparency by revealing to domain experts the rationale behind predictions. This study advances model interpretability to increase trust in deep learning among healthcare stakeholders.
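The abstract credits an adaptive path-based gradient integration method. As a rough illustration of the family this method belongs to, below is a minimal sketch of classic integrated gradients (a straight-line integration path, not the paper's adaptive variant) in PyTorch. The ResNet-50 backbone, the file name scan.png, and the all-black baseline are placeholder assumptions for the sketch, not details taken from the paper.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

def integrated_gradients(model, x, target, baseline=None, steps=50):
    # Average the gradients of the target logit along the straight-line
    # path from `baseline` to `x`, then scale by (x - baseline).
    if baseline is None:
        baseline = torch.zeros_like(x)  # assumption: all-black image as reference
    total_grads = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        score = model(point)[0, target]  # logit of the target class
        score.backward()
        total_grads += point.grad
    return (x - baseline) * total_grads / steps  # Riemann-sum approximation

# Stand-in model and image; the paper's actual CNNs and scans are not shown here.
model = models.resnet50(weights="IMAGENET1K_V2").eval()
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
img = preprocess(Image.open("scan.png").convert("RGB")).unsqueeze(0)  # placeholder path
with torch.no_grad():
    pred = model(img).argmax(dim=1).item()
attr = integrated_gradients(model, img, target=pred)  # per-pixel attribution map
```

The returned attribution tensor has the same shape as the input image, so it can be overlaid on the scan as a heatmap to highlight the regions driving the prediction, which is the kind of visual explanation the abstract describes.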
Main file

marcel_atemkeng.pdf (29.06 MB)
Origin: files produced by the author(s)

Dates and versions

hal-04182077, version 1 (17-08-2023)

Identifiers

  • HAL Id: hal-04182077, version 1

Cite

Yusuf Brima, Marcellin Atemkeng. Visual Interpretable and Explainable Deep Learning Models for Brain Tumor MRI and COVID-19 Chest X-ray Images. 2023. ⟨hal-04182077⟩
