SLICE: Supersense-based Lightweight Interpretable Contextual Embeddings
Conference paper, 2020

Abstract

Contextualised embeddings such as BERT have become de facto state-of-the-art references in many NLP applications, thanks to their impressive performance. However, their opaqueness makes their behaviour hard to interpret. SLICE is a hybrid model that combines supersense labels with contextual embeddings. We introduce a weakly supervised method to learn interpretable embeddings from raw corpora and small lists of seed words. Our model represents both a word and its context as embeddings in the same compact space, whose dimensions correspond to interpretable supersenses. We assess the model on a supersense tagging task for French nouns. The small amount of supervision required makes it particularly well suited to low-resource scenarios. Thanks to its interpretability, we conduct linguistic analyses of the predicted supersenses in terms of input word and context representations.
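To make the abstract's core idea concrete, the sketch below illustrates, in a hypothetical and simplified form, what a compact embedding space with supersense-labelled dimensions could look like: prototype vectors are built from small seed lists, and a word occurrence is tagged by its similarity to each prototype. This is not the authors' actual SLICE model; the seed lists, the stand-in embed function, and the cosine-similarity scoring are illustrative placeholders.

```python
# Illustrative sketch only: a compact space whose dimensions each
# correspond to one supersense. NOT the authors' SLICE architecture.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical seed lists: a few unambiguous nouns per supersense.
SEEDS = {
    "Person":   ["teacher", "doctor", "child"],
    "Artifact": ["hammer", "chair", "bottle"],
    "Food":     ["bread", "cheese", "apple"],
}

DIM = 768  # typical BERT hidden size

def embed(word: str) -> np.ndarray:
    """Placeholder for a contextual embedding (e.g., a BERT vector).
    Random vectors stand in for real embeddings in this sketch."""
    return rng.standard_normal(DIM)

# One prototype vector per supersense: the mean of its seed embeddings.
labels = list(SEEDS)
P = np.stack([np.mean([embed(w) for w in ws], axis=0)
              for ws in SEEDS.values()])  # (n_supersenses, DIM)

def interpretable_embedding(vec: np.ndarray) -> np.ndarray:
    """Map a contextual vector to a compact vector whose i-th dimension
    is its cosine similarity to the i-th supersense prototype."""
    return P @ vec / (np.linalg.norm(P, axis=1) * np.linalg.norm(vec))

# Supersense tagging = argmax over the interpretable dimensions.
vec = embed("baker")  # contextual vector of a target noun occurrence
scores = interpretable_embedding(vec)
print(labels[int(np.argmax(scores))])
```

Because every dimension of the compact vector is named after a supersense, the scores themselves can be inspected directly, which is the kind of linguistic analysis the abstract refers to.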
Main file: main.pdf (238.94 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03017741, version 1 (21-11-2020)

Identifiers

  • HAL Id: hal-03017741, version 1

Cite

Cindy Aloui, Carlos Ramisch, Alexis Nasr, Lucie Barque. SLICE: Supersense-based Lightweight Interpretable Contextual Embeddings. The 28th International Conference on Computational Linguistics (COLING 2020), Dec 2020, Barcelona (online), Spain. ⟨hal-03017741⟩
