Enriching Language Models with Semantics
Abstract
Recent advances in language model (LM) pre-training on large-scale corpora have been shown to improve a variety of natural language processing tasks. Such models achieve performance comparable to that of non-expert humans on the GLUE benchmark for natural language understanding (NLU). While the improvements of the different contextualized representations come from (i) the use of ever more data, (ii) changes in the types of lexical pre-training tasks, or (iii) increases in model size, NLU is more than memorizing word co-occurrences. But how much world knowledge and common sense can these language models capture? How much can they infer from lexical information alone? To address these limitations, some approaches incorporate semantic information into the training process. In this paper, we highlight existing approaches that combine different types of semantics with language models during the pre-training or fine-tuning phase.