Conference paper, Year: 2022

On the Granularity of Explanations in Model Agnostic NLP Interpretability

Abstract

Current methods for black-box NLP interpretability, like LIME or SHAP, are based on altering the text to interpret by removing words and modeling the black-box response. In this paper, we outline the limitations of this approach when using complex BERT-based classifiers: word-based sampling produces texts that are out-of-distribution for the classifier and, further, gives rise to a high-dimensional search space that cannot be sufficiently explored when time or computation power is limited. Both of these challenges can be addressed by using segments as elementary building blocks for NLP interpretability. As an illustration, we show that the simple choice of sentences greatly mitigates both challenges. As a consequence, the resulting explainer attains much better fidelity on a benchmark classification task.
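The word- versus sentence-level sampling contrast the abstract describes can be sketched as a LIME-style perturb-and-fit loop. This is a minimal illustrative sketch, not the paper's exact procedure: `black_box` is a hypothetical function returning a class probability for a text (e.g., a wrapped BERT classifier), and the regex sentence splitter is an assumption.

```python
import re
import numpy as np
from sklearn.linear_model import Ridge

def perturb_and_explain(text, black_box, granularity="sentence",
                        n_samples=500, seed=0):
    # Split the input into interpretable units at the chosen granularity.
    if granularity == "sentence":
        units = re.split(r"(?<=[.!?])\s+", text)  # crude splitter (assumption)
    else:
        units = text.split()  # word level, as in vanilla LIME sampling
    rng = np.random.default_rng(seed)
    # Binary masks over units: 1 keeps a unit, 0 removes it.
    masks = rng.integers(0, 2, size=(n_samples, len(units)))
    # Render the perturbed texts and query the black box on each.
    texts = [" ".join(u for u, keep in zip(units, row) if keep)
             for row in masks]
    preds = np.array([black_box(t) for t in texts])
    # Fit a local linear surrogate; its coefficients are per-unit importances.
    surrogate = Ridge(alpha=1.0).fit(masks, preds)
    return list(zip(units, surrogate.coef_))
```

With sentence units, a document of k sentences yields a 2^k mask space that is far smaller than the word-level one, so a fixed sampling budget covers it more densely, and the perturbed texts stay closer to natural, in-distribution language.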

Dates and versions

hal-03936558, version 1 (12-01-2023)

Cite

Yves Rychener, Xavier Renard, Djamé Seddah, Pascal Frossard, Marcin Detyniecki. On the Granularity of Explanations in Model Agnostic NLP Interpretability. XKDD 2022 - ECML PKDD 2022 International Workshop on eXplainable Knowledge Discovery in Data Mining, Sep 2022, Grenoble, France. ⟨hal-03936558⟩