Conference paper, 2024

The Curious Decline of Linguistic Diversity: Training Language Models on Synthetic Text

Abstract

This study investigates the consequences of training language models on synthetic data generated by their predecessors, an increasingly prevalent practice given the prominence of powerful generative models. Diverging from the usual emphasis on performance metrics, we focus on the impact of this training methodology on linguistic diversity, especially when conducted recursively over time. To assess this, we adapt and develop a set of novel metrics targeting lexical, syntactic, and semantic diversity, applying them in recursive finetuning experiments across various natural language generation tasks in English. Our findings reveal a consistent decrease in the diversity of the model outputs through successive iterations, a decline that is particularly pronounced for tasks demanding high levels of creativity. This trend underscores the potential risks of training language models on synthetic text, particularly concerning the preservation of linguistic richness. Our study highlights the need for careful consideration of the long-term effects of such training approaches on the linguistic capabilities of language models.
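This record does not reproduce the paper's metric definitions. As a rough illustration of the kind of lexical diversity measure the abstract refers to, a distinct-n score over model outputs could look like the sketch below; the function and example data are hypothetical and not taken from the paper.

```python
from collections import Counter

def distinct_n(texts, n=2):
    """Ratio of unique n-grams to total n-grams across a set of generated
    texts -- a common proxy for lexical diversity (not the paper's exact metric)."""
    ngrams = Counter()
    total = 0
    for text in texts:
        tokens = text.split()
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i:i + n])] += 1
            total += 1
    return len(ngrams) / total if total else 0.0

# Hypothetical outputs from an initial model and a later recursive generation.
generation_0 = ["the cat sat on the mat", "a dog ran across the yard"]
generation_3 = ["the cat sat on the mat", "the cat sat on the rug"]
print(distinct_n(generation_0, n=2))  # higher score -> more lexically diverse
print(distinct_n(generation_3, n=2))  # lower score  -> diversity has declined
```

Tracking such a score across successive finetuning iterations is one simple way to observe the kind of diversity decline the abstract describes; the paper additionally considers syntactic and semantic diversity.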
Main file
2311.09807v2.pdf (606.96 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04593399, version 1 (29-05-2024)

Identifiers

HAL Id: hal-04593399

Cite

Yanzhu Guo, Guokan Shang, Michalis Vazirgiannis, Chloé Clavel. The Curious Decline of Linguistic Diversity: Training Language Models on Synthetic Text. NAACL 2024 Findings - Annual Conference of the North American Chapter of the Association for Computational Linguistics, Jun 2024, Mexico City, Mexico. ⟨hal-04593399⟩