Conference paper, 2024

Exploring Semantics in Pretrained Language Model Attention

Abstract

Abstract Meaning Representations (AMRs) encode the semantics of sentences in the form of graphs. Vertices represent instances of concepts, and labeled edges represent semantic relations between those instances. Language models (LMs) operate by computing the edge weights of per-layer complete graphs whose vertices are the words of a sentence or a whole paragraph. In this work, we investigate the ability of the attention heads of two LMs, RoBERTa and GPT2, to detect the semantic relations encoded in an AMR. This is an attempt to show the semantic capabilities of these models without fine-tuning. To do so, we apply both unsupervised and supervised learning techniques.
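
As a minimal sketch of the kind of signal studied here, the snippet below extracts per-layer, per-head attention matrices from a pretrained RoBERTa model via the Hugging Face transformers library. The model checkpoint, the toy sentence, and the variable names are illustrative assumptions, not the authors' actual pipeline; the point is only that each head yields a weighted complete graph over the tokens, which can then be compared against the labeled edges of an AMR.

```python
# Minimal sketch (assumed setup, not the paper's code): extract per-head
# attention weights from a pretrained RoBERTa model without fine-tuning.
import torch
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaModel.from_pretrained("roberta-base", output_attentions=True)
model.eval()

sentence = "The boy wants to go."  # illustrative example sentence
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
attentions = torch.stack(outputs.attentions).squeeze(1)  # (layers, heads, seq, seq)
num_layers, num_heads, seq_len, _ = attentions.shape
print(f"{num_layers} layers x {num_heads} heads, sequence length {seq_len}")

# Each attentions[l, h] is the weighted adjacency matrix of a complete
# directed graph over the tokens: these are the edge weights that can be
# fed to unsupervised or supervised methods to probe for AMR relations.
```

The same extraction applies to GPT2 (e.g. GPT2Model with output_attentions=True); aligning subword tokens with AMR concepts is a separate step not shown in this sketch.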

Dates and versions

hal-04634835 , version 1 (04-07-2024)

Identifiers

  • HAL Id: hal-04634835, version 1

Cite

Frédéric Charpentier, Jairo Cugliari, Adrien Guille. Exploring Semantics in Pretrained Language Model Attention. 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024), Jun 2024, Mexico City, Mexico. pp.326-333. ⟨hal-04634835⟩