Explainable AI Methods for Underwater Mine Warfare
Abstract
Artificial Intelligence (AI) has brought new algorithms that outperform conventional methods. However, the internal behavior of the decision-making process carried out by neural networks needs to be understood in detail. This concern has led to the development of eXplainable Artificial Intelligence (XAI), which is especially important in areas where following AI decisions may have serious consequences, such as underwater mine hunting, where explainability is key to increasing AI acceptance. We study the application of XAI methods (backpropagation-based and perturbation-based) to the classification (mine vs. non-mine) and identification (type of mine) of an object detected by a sonar on the seabed. Although the aim of XAI is to locate the relevant features in an image, the classification and identification decisions do not involve the same cognitive process. The main aims of this paper are to verify that XAI methods designed for optical images can be applied to grayscale sonar images (in particular, we explain why backpropagation methods are not suitable for grayscale images, unlike perturbation methods) and to assess whether the explanations are neural-network dependent (two kinds of network have been tested). The features highlighted by the XAI methods for the different classes of mines are compared with each other, but also with those involved in operator decision-making. Three examples of feature extraction are finally discussed in cases of misclassification. Furthermore, the perturbation approach yields the same highlighted areas for both networks, and these areas, on which the neural networks base their classification, can be linked to the explanations given by operators.
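Since the abstract contrasts backpropagation-based and perturbation-based explanations, the sketch below illustrates what a perturbation method can look like in practice: an occlusion-style saliency map obtained by sliding a masking patch over a grayscale image and measuring the drop in the target-class score. This is a minimal illustration under stated assumptions, not the authors' implementation; the `model` callable, the `occlusion_saliency` name, and the patch/stride/fill parameters are all introduced here for the example.

```python
import numpy as np

def occlusion_saliency(model, image, target_class, patch=8, stride=4, fill=0.0):
    """Perturbation-based explanation (occlusion-style, illustrative only).

    Slides a constant-valued patch over a 2-D grayscale image and records,
    for each covered region, how much the score of `target_class` drops
    compared with the unperturbed image. Regions whose occlusion causes a
    large drop are the ones the classifier relies on most.
    """
    h, w = image.shape
    base_score = model(image)[target_class]
    heatmap = np.zeros((h, w), dtype=float)
    counts = np.zeros((h, w), dtype=float)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            perturbed = image.copy()
            perturbed[y:y + patch, x:x + patch] = fill  # mask one region
            drop = base_score - model(perturbed)[target_class]
            heatmap[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return heatmap / np.maximum(counts, 1.0)

if __name__ == "__main__":
    # Toy stand-in for a classifier: scores class 0 by the mean brightness of
    # a central 16x16 window (mimicking a bright sonar echo on the seabed).
    def toy_model(img):
        echo = img[24:40, 24:40].mean()
        return np.array([echo, 1.0 - echo])

    rng = np.random.default_rng(0)
    sonar_like = rng.random((64, 64)) * 0.2   # speckle-like background
    sonar_like[24:40, 24:40] += 0.7           # bright "object" highlight

    saliency = occlusion_saliency(toy_model, sonar_like, target_class=0)
    print("peak saliency (whole image):", saliency.max())
    print("peak saliency inside object:", saliency[24:40, 24:40].max())
```

In the setting described by the abstract, the toy classifier would be replaced by the trained network, and the resulting heatmap would be compared with the image features that operators report using in their own decision-making.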