AI Explainability and Acceptance: A Case Study for Underwater Mine Hunting
Abstract
In critical operational contexts such as Mine Warfare, Automatic Target Recognition (ATR) algorithms still struggle to gain acceptance. The complexity of their decision-making hampers the understanding of their predictions, even though their performance approaches that of human experts. Much research has been conducted in the field of Explainable Artificial Intelligence (XAI) to mitigate this “black box” effect. This field of research seeks to explain the decision-making of complex networks in order to promote their acceptability. Most explanation methods applied to image classification networks produce heat maps, which highlight pixels according to their importance in the decision. In this work, we first implement several XAI methods for the automatic classification of Synthetic Aperture Sonar (SAS) images by convolutional neural networks (CNNs). All of these methods follow a post hoc approach. We study and compare the resulting heat maps. Second, we evaluate the benefits and usefulness of explainability in an operational, collaborative framework. To do so, user tests are carried out with different levels of assistance, ranging from classification by an unaided operator to classification supported by an explained ATR. These tests allow us to study whether heat maps are useful in this context. The results show that the usefulness of heat-map explanations is disputed among operators: the presence of heat maps does not improve classification quality and, on the contrary, even increases response time. Nevertheless, half of the operators see some usefulness in heat-map explanations.
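For illustration only, the sketch below shows one common way a post hoc pixel-importance heat map can be produced for a CNN classifier, in the Grad-CAM style. The abstract does not name the exact XAI methods or the SAS classifier used in the study, so the model (a torchvision ResNet-18 with random weights), the hooked layer, and the random input tensor are all assumptions standing in for the real pipeline.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Stand-in classifier: the paper's SAS CNN is not public, so a plain
# ResNet-18 with random weights and a random "image" are used here.
model = resnet18(weights=None).eval()
store = {}

def hook(module, inputs, output):
    # Keep the feature maps and register a hook to catch their gradient.
    store["activations"] = output
    output.register_hook(lambda grad: store.update(gradients=grad))

model.layer4.register_forward_hook(hook)   # last convolutional block

x = torch.randn(1, 3, 224, 224)            # placeholder for an SAS image tile
logits = model(x)
class_idx = logits.argmax(dim=1).item()    # explain the predicted class
model.zero_grad()
logits[0, class_idx].backward()

# Weight each feature map by its average gradient, combine, and rectify.
weights = store["gradients"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * store["activations"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to [0, 1]
print(cam.shape)  # (1, 1, 224, 224) heat map of pixel importance

In a user study such as the one described above, a map of this kind would be overlaid on the sonar image shown to the operator alongside the ATR label.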