Probing Pretrained Language Models with Hierarchy Properties
Abstract
Since Pretrained Language Models (PLMs) are the cornerstone of the most recent Information Retrieval models, the way they encode semantic knowledge is particularly important. However, little attention has been given to studying PLMs' ability to capture hierarchical semantic knowledge. Traditionally, evaluating such knowledge encoded in PLMs relies on their performance on task-dependent proxy evaluations, such as hypernymy detection. Unfortunately, this approach potentially overlooks other implicit and complex taxonomic relations. In this work, we propose a task-agnostic evaluation method to assess the extent to which PLMs capture complex taxonomic relations, such as ancestors and siblings. This evaluation, based on intrinsic properties capturing these relations, shows that the lexico-semantic knowledge implicitly encoded in PLMs does not always capture hierarchical relations. We further demonstrate that the proposed properties can be injected into PLMs to improve their understanding of hierarchy. Through evaluations on taxonomy reconstruction, hypernym discovery, and reading comprehension tasks, we show that knowledge about hierarchy is moderately but not systematically transferable across tasks.