Visual Interpretable and Explainable Deep Learning Models for Brain Tumor MRI and COVID-19 Chest X-ray Images
Abstract
Deep learning shows promise for medical image analysis but often lacks interpretability, hindering adoption in healthcare. Attribution techniques that explain model reasoning may increase trust in deep learning among clinical stakeholders. This paper evaluates attribution methods for illuminating how deep neural networks analyze medical images. Using adaptive path-based gradient integration, we attributed predictions made by recent deep convolutional neural network models on brain tumor MRI and COVID-19 chest X-ray datasets. The technique highlighted possible biomarkers, exposed model biases, and offered insight into the links between inputs and predictions. Our analysis demonstrates the method's ability to elucidate model reasoning on these datasets. The resulting attributions show promise for improving deep learning transparency for domain experts by revealing the rationale behind predictions. This study advances model interpretability to increase trust in deep learning among healthcare stakeholders.
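For context, the adaptive path-based attribution referenced above builds on Integrated Gradients, which accumulates gradients of the target-class score along a path from a reference baseline to the input. The sketch below is a minimal PyTorch illustration of the standard straight-line formulation, not the paper's actual pipeline; the names `model`, `x`, `baseline`, and `target_class` are placeholders. Guided Integrated Gradients additionally adapts the integration path away from high-gradient directions, which is not shown here.

```python
import torch

def integrated_gradients(model, x, baseline, target_class, steps=50):
    """Riemann-sum approximation of Integrated Gradients:
    IG_i(x) = (x_i - x'_i) * integral_0^1 dF(x' + a*(x - x'))/dx_i da
    where x' is the baseline (e.g. an all-zero image).
    """
    # Interpolate between the baseline and the input along a straight line.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1, 1)
    interpolated = baseline + alphas * (x - baseline)  # shape: (steps, C, H, W)
    interpolated.requires_grad_(True)

    # Gradients of the target-class score with respect to each interpolated image.
    scores = model(interpolated)[:, target_class]
    grads = torch.autograd.grad(scores.sum(), interpolated)[0]

    # Average the gradients along the path and scale by the input difference.
    avg_grads = grads.mean(dim=0)
    return (x - baseline) * avg_grads
```

Under these assumptions, calling `integrated_gradients(model, image, torch.zeros_like(image), predicted_class)` on a (C, H, W) image classifier yields a per-pixel attribution map of the same shape as the input, which can be overlaid on the MRI slice or chest X-ray for visual inspection.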
Keywords
Attribution
Bioimaging
Brain tumor MRI
COVID-19
Deep Neural Networks
Deep Learning
Explainability
Guided Integrated Gradients
Healthcare
Integrated Gradients
Interpretability
Medical Images
Mammography
Radiology
Region-based Saliency
Saliency Analysis
X-ray
Origin: Files produced by the author(s)