Preprint, Working Paper. Year: 2021

Learning Natural Language Generation from Scratch

Alice Martin Donati
Guillaume Quispe
Charles Ollion
Sylvain Le Corff
Florian Strub
Olivier Pietquin

Abstract

This paper introduces TRUncated ReinForcement Learning for Language (TrufLL), an original approach to train conditional language models from scratch by only using reinforcement learning (RL). As RL methods scale poorly to large action spaces, we dynamically truncate the vocabulary space using a generic language model. TrufLL thus makes it possible to train a language agent solely by interacting with its environment, without any task-specific prior knowledge; it is guided only by a task-agnostic language model. Interestingly, this approach avoids dependency on labelled datasets and inherently reduces pre-trained policy flaws such as language or exposure biases. We evaluate TrufLL on two visual question generation tasks, for which we report positive results on both performance and language metrics, which we then corroborate with a human evaluation. To our knowledge, it is the first approach that successfully learns a language generation policy (almost) from scratch.
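To make the truncation mechanism concrete, here is a minimal sketch in plain PyTorch. The abstract only states that a generic, task-agnostic language model dynamically truncates the vocabulary at each generation step; the top-k rule, the tensor names (`policy_logits`, `lm_logits`), and the toy sizes below are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: restrict an RL policy's action space to the tokens a
# task-agnostic language model deems plausible. Top-k is one possible
# truncation function (an assumption, not necessarily the paper's choice).
import torch

VOCAB_SIZE = 50_000  # toy vocabulary size (assumption)
K = 50               # size of the truncated action set (assumed hyperparameter)

def truncated_actions(lm_logits: torch.Tensor, k: int = K) -> torch.Tensor:
    """Indices of the k next tokens the generic LM deems most likely."""
    return torch.topk(lm_logits, k).indices

def policy_step(policy_logits: torch.Tensor, lm_logits: torch.Tensor) -> torch.Tensor:
    """Sample the next token from the RL policy, restricted to the
    LM-truncated action set."""
    allowed = truncated_actions(lm_logits)
    mask = torch.full_like(policy_logits, float("-inf"))
    mask[allowed] = 0.0  # keep only tokens proposed by the generic LM
    probs = torch.softmax(policy_logits + mask, dim=-1)
    return torch.multinomial(probs, num_samples=1)

# Toy usage: in practice both logit vectors would come from the policy
# network and the pretrained LM conditioned on the same generation prefix.
policy_logits = torch.randn(VOCAB_SIZE)
lm_logits = torch.randn(VOCAB_SIZE)
next_token = policy_step(policy_logits, lm_logits)
```

Because the masked softmax assigns zero probability to every token outside the LM's top-k set, the policy only ever explores actions a fluent speaker might produce, which is what makes RL from scratch tractable on so large an action space.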

Main file: trufll_arxiv.pdf (4.85 MB)

Dates and versions

hal-03348492, version 1 (19-09-2021)

Cite

Alice Martin Donati, Guillaume Quispe, Charles Ollion, Sylvain Le Corff, Florian Strub, et al.. Learning Natural Language Generation from Scratch. 2021. ⟨hal-03348492⟩