Explainability of Image Semantic Segmentation Through SHAP Values
Abstract
Deep Neural Networks are increasingly used in high-level applications. However, their decisions are not straightforward for humans to understand, which may limit their use in critical applications. To address this issue, recent research has introduced explanation methods, typically for classification and captioning. Nevertheless, explainability methods still need to be developed for other tasks, including image segmentation, an essential component of many high-level applications. In this paper, we propose a general workflow for adapting state-of-the-art explainability methods, in particular SHAP, to image segmentation tasks.
The approach allows for the explanation of single pixels as well as image areas. We demonstrate the relevance of the approach on a critical application, oil slick pollution detection on the sea surface, and its applicability to a more standard multimedia semantic segmentation task. The conducted experiments highlight the features from which the models derive their local results and help identify general model behaviours.
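As a minimal sketch of one way such a workflow can be set up (an illustration under assumptions, not the paper's implementation): the segmentation model's dense output is reduced to a scalar score for a target pixel or region, pixels are grouped into superpixels that act as SHAP features, and KernelSHAP is applied to binary coalitions of those superpixels. The names `seg_model`, `region_mask`, and `target_class` are hypothetical placeholders.

```python
# Hypothetical sketch: `seg_model`, `region_mask`, and `target_class`
# are assumed names, not the authors' actual code.
import numpy as np
import shap
from skimage.segmentation import slic

def explain_region(seg_model, image, region_mask, target_class,
                   n_superpixels=50, n_samples=1000):
    """Attribute a segmentation model's score over `region_mask`
    (a boolean pixel mask; a single pixel is the degenerate case)
    to superpixels of `image` via KernelSHAP."""
    # Group pixels into superpixels; each one becomes a SHAP feature.
    superpixels = slic(image, n_segments=n_superpixels)
    labels = np.unique(superpixels)
    baseline = image.mean(axis=(0, 1))  # per-channel mean replaces absent superpixels

    def f(z):
        # z: (n_coalitions, n_features) binary matrix; each row says
        # which superpixels are kept (1) or masked out (0).
        out = np.empty(len(z))
        for i, coalition in enumerate(z):
            masked = image.copy()
            for j, s in enumerate(labels):
                if not coalition[j]:
                    masked[superpixels == s] = baseline
            scores = seg_model(masked)  # assumed to return (H, W, n_classes)
            # Scalarise: mean target-class score over the pixel(s) of interest.
            out[i] = scores[..., target_class][region_mask].mean()
        return out

    # Background = all superpixels absent; explained point = all present.
    explainer = shap.KernelExplainer(f, np.zeros((1, len(labels))))
    shap_values = explainer.shap_values(np.ones((1, len(labels))),
                                        nsamples=n_samples)
    return shap_values, superpixels
```

Under this setup, each SHAP value quantifies how much one superpixel contributes to the model's score for the target class over the chosen pixel or area, which is what allows the same mechanism to explain both single-pixel and region-level decisions.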