
Enriching Language Models with Semantics

Abstract

Recent advances in language model (LM) pre-training on large-scale corpora have been shown to improve performance across a variety of natural language processing tasks. These models achieve performance comparable to that of non-expert humans on the GLUE benchmark for natural language understanding (NLU). While the improvements of the different contextualized representations come from (i) using ever larger amounts of data, (ii) varying the types of lexical pre-training tasks, or (iii) increasing the model size, NLU is more than memorizing word co-occurrences. But how much world knowledge and common sense can these language models capture? How much can they infer from lexical information alone? To address these limitations, some approaches incorporate semantic information into the training process. In this paper, we highlight existing approaches to combining different types of semantics with language models during the pre-training or fine-tuning phase.
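The abstract names two injection points for semantics: extra pre-training objectives or fusing external semantic signals at fine-tuning time. As a concrete illustration of the latter, the sketch below concatenates externally derived semantic features (for instance, knowledge-graph embeddings) with a pre-trained LM's sentence representation before classification. This is a minimal, hypothetical example, not the method of the paper; the model name, `semantic_dim`, and the zero-filled feature vector are placeholders.

```python
# A minimal sketch (an assumption, not the method of any specific paper
# surveyed): enrich a pre-trained LM during fine-tuning by concatenating
# external semantic features with its sentence representation.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SemanticallyEnrichedClassifier(nn.Module):
    """Hypothetical classifier over [CLS] embedding + semantic features."""

    def __init__(self, lm_name="bert-base-uncased", semantic_dim=64, num_labels=2):
        super().__init__()
        self.lm = AutoModel.from_pretrained(lm_name)
        hidden = self.lm.config.hidden_size
        # Classify over the fused (lexical + semantic) representation.
        self.classifier = nn.Linear(hidden + semantic_dim, num_labels)

    def forward(self, input_ids, attention_mask, semantic_features):
        out = self.lm(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token representation
        fused = torch.cat([cls, semantic_features], dim=-1)
        return self.classifier(fused)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SemanticallyEnrichedClassifier()
enc = tokenizer(["The cat sat on the mat."], return_tensors="pt")
# Placeholder semantic features, e.g. knowledge-graph embeddings of
# entities mentioned in the sentence (zeros here for illustration).
sem_features = torch.zeros(1, 64)
logits = model(enc["input_ids"], enc["attention_mask"], sem_features)
```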
Main file

1325_paper.pdf (77.16 KB). Origin: files produced by the author(s).

Dates and versions

hal-02879286, version 1 (23-06-2020)

Cite

Tobias Mayer. Enriching Language Models with Semantics. ECAI 2020 - 24th European Conference on Artificial Intelligence, Aug 2020, Santiago de Compostela / Online, Spain. ⟨hal-02879286⟩