12 shades of RDF: Impact of Syntaxes on Data Extraction with Language Models
Preprint, Working Paper. Year: 2024

12 shades of RDF: Impact of Syntaxes on Data Extraction with Language Models

Abstract

Fine-tuning generative pre-trained language models (PLMs) on a new task can be affected by the choice of representation for the inputs and outputs. This article focuses on the linearization process used to structure and represent, as output, the facts extracted from text. On a restricted relation extraction (RE) task, we challenged T5 and BART by fine-tuning them on 12 linearizations, including RDF standard syntaxes and variations thereof. Our benchmark covers the validity of the produced triples, model performance, training behaviour, and the resources needed. We show that these PLMs can learn some syntaxes more easily than others, and we identify an efficient "Turtle Light" syntax supporting quick and robust learning of the RE task.
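To make the notion of linearization concrete, below is a minimal sketch in Python, assuming rdflib (version 6 or later, which ships a JSON-LD serializer); the example triple and the http://example.org/ namespace are illustrative, not taken from the paper. It prints the same fact in four standard RDF syntaxes, each a different target string a fine-tuned model would have to generate; the paper's "Turtle Light" variant is a simplified form of Turtle not reproduced here.

# Serialize one fact in several standard RDF syntaxes with rdflib (>= 6).
# The triple below is illustrative only, not from the paper's dataset.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF

EX = Namespace("http://example.org/")  # hypothetical namespace for this example

g = Graph()
g.bind("ex", EX)
g.add((EX.Ada_Lovelace, FOAF.name, Literal("Ada Lovelace")))

# Each format yields a different textual linearization of the same triple;
# in the paper's setting, a fine-tuned PLM must output one such string.
for fmt in ("turtle", "nt", "xml", "json-ld"):
    print(f"--- {fmt} ---")
    print(g.serialize(format=fmt))

Running this shows, for instance, that N-Triples spells out full IRIs while Turtle uses the bound ex: prefix; differences of this kind between target strings are what the benchmark measures.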
No file deposited

Dates and versions

hal-04548076, version 1 (16-04-2024)

Licence

Attribution (CC BY)

Identifiers

  • HAL Id: hal-04548076, version 1

Cite

Célian Ringwald, Fabien Gandon, Catherine Faron, Franck Michel, Hanna Abi Akl. 12 shades of RDF: Impact of Syntaxes on Data Extraction with Language Models. 2024. ⟨hal-04548076⟩