Novel-WD: Exploring acquisition of Novel World Knowledge in LLMs Using Prefix-Tuning
Preprint / Working paper, Year: 2023

Abstract

Teaching new information to pre-trained large language models (PLMs) is a crucial but challenging task. Model adaptation techniques, such as fine-tuning and parameter-efficient training, are often prone to catastrophic forgetting, and most existing benchmarks focus on task adaptation rather than on acquiring new information. This work studies and quantifies how PLMs may learn and remember new world knowledge facts that do not occur in their pre-training corpus, which only contains world knowledge up to a certain date. To that end, we first propose NOVEL-WD, a new dataset consisting of sentences containing novel facts extracted from recent Wikidata updates, along with two evaluation tasks in the form of causal language modeling and multiple-choice questions (MCQ). We make this dataset freely available to the community, and beyond the dataset itself, we release a procedure for building up-to-date versions of similar datasets in the future. In the second part, we explore the use of prefix-tuning for novel information learning, and analyze how much information can be stored within a given prefix. We show that a single fact can reliably be encoded within a single prefix, and that the capacity of the prefix increases with its length and with the base model size.
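To make the prefix-tuning setup concrete, here is a minimal sketch (not the authors' released code) of encoding one novel fact into a learned prefix with the Hugging Face PEFT library. The base model, prefix length, learning rate, step count, and example fact are all illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch: encode a single novel fact into a prefix via prefix-tuning.
# Assumptions: model choice and hyperparameters are illustrative, not the paper's.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PrefixTuningConfig, get_peft_model, TaskType

model_name = "bigscience/bloom-560m"  # assumption: any causal LM could be used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Prefix-tuning freezes the base model and learns only `num_virtual_tokens`
# key/value vectors that are prepended to every attention layer.
config = PrefixTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=10)
model = get_peft_model(model, config)

# One novel fact, phrased as a sentence (in the spirit of NOVEL-WD data).
fact = ("The 2023 Nobel Prize in Physics was awarded to Agostini, "
        "Krausz and L'Huillier.")
batch = tokenizer(fact, return_tensors="pt")

# Optimize only the trainable prefix parameters; the base model stays frozen.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
model.train()
for _ in range(100):  # a few steps usually suffice for a single fact
    out = model(**batch, labels=batch["input_ids"])
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Under this setup, the paper's causal language modeling evaluation would then amount to checking whether the prefix-tuned model assigns a higher likelihood to (or greedily decodes) the correct continuation of the fact than the frozen base model does.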
Main file: Prefix_tuning_information_compression.pdf (254 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04269919, version 1 (03-11-2023)

Identifiers

  • HAL Id: hal-04269919, version 1

Cite

Maxime Méloux, Christophe Cerisara. Novel-WD: Exploring acquisition of Novel World Knowledge in LLMs Using Prefix-Tuning. 2023. ⟨hal-04269919⟩
