A Deep SAS ATR Explainability Framework Assessment
Abstract
In critical operational contexts such as Mine Warfare, Automatic Target Recognition (ATR) algorithms still struggle to gain acceptance. Despite performance close to that of human experts, their decision-making complexity hinders the understanding of their predictions. Explainable Artificial Intelligence (XAI) is a field of research that attempts to explain the decision-making of complex networks in order to promote their acceptability. For image classification networks such as ATR, the explanations often take the form of heat maps, which highlight pixels according to their importance in the decision. In this paper, we evaluate the benefits of XAI, in the form of heat maps, for collaboration with operators during Synthetic Aperture Sonar (SAS) image classification. We carry out a series of operator trials with several levels of explanation in order to compare classification performance. We study the probability of correct classification, the probability of false alarm, and the time taken by the operators; these characteristics are essential in an operational context and must be optimized. We also study the operators' opinions of and preferences regarding the presence of explanations, to take the human aspect into account, which is essential for collaboration. The results show that the usefulness of heat maps as explanations is disputed among operators: their presence does not improve the quality of the classifications and, on the contrary, even increases response time. In terms of opinion, however, half of the operators see some usefulness in heat maps.