Conference paper. Year: 2024

12 shades of RDF: Impact of Syntaxes on Data Extraction with Language Models

Abstract

Fine-tuning generative pre-trained language models (PLMs) on a new task can be affected by the choice of representation for the inputs and outputs. This article focuses on the linearization process used to structure and represent, as output, the facts extracted from a text. On a restricted relation extraction (RE) task, we challenged T5 and BART by fine-tuning them on 12 linearizations, including RDF standard syntaxes and variations thereof. Our benchmark covers the validity of the produced triples, model performance, training behaviour, and the resources needed. We show that these PLMs learn some syntaxes more easily than others, and we identify a promising "Turtle Light" syntax supporting quick and robust learning of the RE task.
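To make the linearization choice concrete, here is a minimal sketch of how a single extracted fact can be serialized in two of the standard RDF syntaxes covered by the benchmark, Turtle and N-Triples, using the Python rdflib library. The example triple and the example.org namespace are illustrative assumptions, and the paper's "Turtle Light" variant is not reproduced here, since its exact specification is only given in the full text.

# Minimal sketch (assumed setup): one extracted fact serialized in two
# standard RDF syntaxes that a seq2seq model could be trained to emit.
# The triple below is an illustrative assumption, not taken from the paper.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)
g.add((EX.Douglas_Adams, EX.birthPlace, EX.Cambridge))

# Turtle: compact, prefix-based serialization.
print(g.serialize(format="turtle"))
# N-Triples: verbose, one full-IRI triple per line.
print(g.serialize(format="nt"))

The contrast between the two outputs suggests one plausible reason a lightweight Turtle-style syntax can be learned quickly: a more compact serialization means fewer output tokens per fact for the model to generate exactly right.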
Main file: ESWC_CameraReady-38.pdf (370.12 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04581124, version 1 (21-05-2024)

Identifiers

  • HAL Id: hal-04581124, version 1

Cite

Célian Ringwald, Fabien Gandon, Catherine Faron, Franck Michel, Hanna Abi Akl. 12 shades of RDF: Impact of Syntaxes on Data Extraction with Language Models. ESWC 2024 Extended Semantic Web Conference, May 2024, Hersonissos, Greece. ⟨hal-04581124⟩
