Conference Paper, Year: 2020

Enriching Language Models with Semantics

Abstract

Recent advances in language model (LM) pre-training on large-scale corpora have been shown to improve performance on various natural language processing tasks. These models achieve performance comparable to non-expert humans on the GLUE benchmark for natural language understanding (NLU). While the improvements in these contextualized representations come from (i) using more and more data, (ii) new types of lexical pre-training tasks, or (iii) increasing the model size, NLU is more than memorizing word co-occurrences. But how much world knowledge and common sense can these language models capture? How much can they infer from lexical information alone? To address these limitations, some approaches include semantic information in the training process. In this paper, we highlight existing approaches that combine different types of semantics with language models during the pre-training or fine-tuning phase.
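The paper is a survey and does not prescribe a single method. As a purely illustrative sketch, the PyTorch snippet below shows one family of approaches it covers: injecting knowledge-graph entity embeddings into an LM's contextual token states through a learned gate, in the spirit of ERNIE- or KnowBERT-style fusion. All names, dimensions, and the toy inputs here are hypothetical.

# Hypothetical sketch: gated fusion of KG entity embeddings with LM token states.
import torch
import torch.nn as nn

class SemanticFusionLayer(nn.Module):
    """Adds aligned knowledge-graph entity information to LM token states."""
    def __init__(self, hidden_dim: int, entity_dim: int):
        super().__init__()
        self.project = nn.Linear(entity_dim, hidden_dim)   # map entities into the LM space
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)  # learned per-token fusion gate

    def forward(self, token_states, entity_embeds, entity_mask):
        # token_states:  (batch, seq, hidden) from the LM encoder
        # entity_embeds: (batch, seq, entity_dim), zeros where no entity is linked
        # entity_mask:   (batch, seq, 1), 1.0 where a token aligns to a KG entity
        ent = self.project(entity_embeds) * entity_mask
        gate = torch.sigmoid(self.gate(torch.cat([token_states, ent], dim=-1)))
        return token_states + gate * ent  # tokens without entities pass through unchanged

# Toy usage with random tensors standing in for real LM and KG outputs.
batch, seq, hidden, entity_dim = 2, 16, 768, 100
fusion = SemanticFusionLayer(hidden, entity_dim)
fused = fusion(torch.randn(batch, seq, hidden),
               torch.randn(batch, seq, entity_dim),
               torch.randint(0, 2, (batch, seq, 1)).float())
print(fused.shape)  # torch.Size([2, 16, 768])

The additive gated fusion lets the model learn, per token, how much external semantic signal to mix in; such a layer would typically sit between the encoder and the task head during pre-training or fine-tuning.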
Main file: 1325_paper.pdf (77.16 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02879286, version 1 (23-06-2020)

Identifiers

  • HAL Id: hal-02879286, version 1

Cite

Tobias Mayer. Enriching Language Models with Semantics. ECAI 2020 - 24th European Conference on Artificial Intelligence, Aug 2020, Santiago de Compostela / Online, Spain. ⟨hal-02879286⟩
193 Views
483 Downloads
