Preprint / Working paper. Year: 2022

Trustworthy clinical AI solutions: a unified review of uncertainty quantification in deep learning models for medical image analysis

Abstract

Despite the quantity of high-performing solutions reported in the literature, Deep Learning (DL) models are still far from fully accepted in the clinical field. In particular, end users are reluctant to rely on the raw predictions of DL models. Uncertainty quantification methods have been proposed in the literature as a way to qualify the point decisions produced by the DL black box, and thus to increase the interpretability of the results and their acceptability to the end user. In this review, we give an overview of existing methods for quantifying the uncertainty associated with DL predictions. We focus on applications to medical image analysis, which present specific challenges due to the high dimensionality of images and their variable quality, as well as constraints imposed by real-life clinical routine. We then discuss evaluation protocols for validating the relevance of uncertainty estimates. Finally, we highlight the open challenges of uncertainty quantification in the medical field.
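As a concrete illustration of the kind of method surveyed in such a review (a minimal sketch, not taken from the paper itself), the snippet below implements Monte Carlo dropout, one widely used uncertainty quantification technique for DL models: dropout layers are kept active at inference time, and the spread across several stochastic forward passes serves as an uncertainty estimate. It assumes a PyTorch model containing dropout layers; `model`, `x`, and `n_samples` are placeholder names.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Monte Carlo dropout: run n_samples stochastic forward passes with dropout on."""
    model.eval()
    # Re-enable dropout layers only, leaving e.g. batch-norm layers in eval mode.
    for module in model.modules():
        if isinstance(module, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            module.train()

    with torch.no_grad():
        # Shape: (n_samples, batch, ...); each pass uses a different dropout mask.
        samples = torch.stack([model(x) for _ in range(n_samples)])

    mean = samples.mean(dim=0)     # averaged prediction
    variance = samples.var(dim=0)  # per-output spread, a simple uncertainty proxy
    return mean, variance
```

For a medical image segmentation network, for instance, the per-voxel variance map produced this way can be displayed alongside the prediction, which is one way the black-box output can be qualified for the end user.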
Main file
UQ_review_HAL.pdf (692.26 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03806630, version 1 (07-10-2022)

License

Attribution (CC BY)

Identifiers

  • HAL Id: hal-03806630, version 1

Cite

Benjamin Lambert, Florence Forbes, Alan Tucholka, Senan Doyle, Harmonie Dehaene, et al.. Trustworthy clinical AI solutions: a unified review of uncertainty quantification in deep learning models for medical image analysis. 2022. ⟨hal-03806630⟩
