Conference Paper · Year: 2024

Probing Pretrained Language Models with Hierarchy Properties

Abstract

Since Pretrained Language Models (PLMs) are the cornerstone of the most recent Information Retrieval models, the way they encode semantic knowledge is particularly important. However, little attention has been paid to the ability of PLMs to capture hierarchical semantic knowledge. Traditionally, evaluating such knowledge encoded in PLMs relies on their performance on task-dependent proxy tasks, such as hypernymy detection. Unfortunately, this approach potentially overlooks other implicit and complex taxonomic relations. In this work, we propose a task-agnostic evaluation method to assess the extent to which PLMs capture complex taxonomic relations, such as ancestors and siblings. This evaluation, based on intrinsic properties capturing these relations, shows that the lexico-semantic knowledge implicitly encoded in PLMs does not always capture hierarchical relations. We further demonstrate that the proposed properties can be injected into PLMs to improve their understanding of hierarchy. Through evaluations on taxonomy reconstruction, hypernym discovery, and reading comprehension tasks, we show that knowledge about hierarchy is moderately, but not systematically, transferable across tasks.
No file deposited

Dates and versions

hal-04750070, version 1 (23-10-2024)

Identifiers

HAL Id: hal-04750070
DOI: 10.1007/978-3-031-56060-6_9

Cite

Jesús Lovón-Melgarejo, Jose G. Moreno, Romaric Besançon, Olivier Ferret, Lynda Tamine. Probing Pretrained Language Models with Hierarchy Properties. 46th European Conference on Information Retrieval, ECIR 2024, Mar 2024, Glasgow, United Kingdom. pp.126-142, ⟨10.1007/978-3-031-56060-6_9⟩. ⟨hal-04750070⟩