A methodology to compare XAI explanations for natural language processing
Abstract
Recent advances in eXplainable Artificial Intelligence (XAI) have led to many different methods for improving the explainability of deep learning algorithms. Many options are now available, and existing ones may need to be adapted to new problems, so choosing the right method to generate explanations can be a struggle. The explanations considered here may be sets of important features, or factual and counterfactual explanations.
This chapter presents two protocols to compare different XAI methods. The first is designed to be applied when no end-users are available: an objective, quantitative metric is compared to a data expert's objective analysis. The second experiment is designed to take end-users' feedback into account. It uses the quantitative metric applied in the first experiment and compares it to users' preferences. The quantitative metric can then be used to evaluate explanations, making it possible to try multiple explanation methods and adjust them without the cost of a systematic end-user evaluation. The protocol can be applied to post-hoc explanation approaches as well as to self-explaining neural networks.
Knowing whether an explanation makes sense to end-users helps assess not only the explanation system but also the prediction itself. This chapter focuses on the methodology to assess such explanations, which helps speed up and reduce the cost of manual result analysis during system design. The first experiment shows that Intersection over Union (IOU) can be used to focus on instances where the compared explanations give different results. The second experiment shows that IOU alone is not representative of end-users' preferences and gives clues on how to improve the definition of this quantitative metric.
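As a rough illustration of the kind of quantitative comparison involved, the sketch below computes the standard Intersection over Union ratio between two explanations represented as sets of important tokens. The function name, the token-set representation, and the handling of empty explanations are assumptions made for illustration; the chapter's exact definition of the metric may differ.

```python
from typing import Set


def explanation_iou(tokens_a: Set[str], tokens_b: Set[str]) -> float:
    """Intersection over Union between two explanations, each given as a
    set of important tokens. Returns 1.0 when both explanations are empty."""
    if not tokens_a and not tokens_b:
        return 1.0
    intersection = tokens_a & tokens_b
    union = tokens_a | tokens_b
    return len(intersection) / len(union)


# Hypothetical example: token sets produced by two XAI methods for the same instance.
method_a_tokens = {"terrible", "boring", "plot"}
method_b_tokens = {"terrible", "plot", "acting"}
print(explanation_iou(method_a_tokens, method_b_tokens))  # 0.5
```

A low IOU flags instances where the two explanation methods disagree, which is where manual or expert analysis can then be concentrated.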