Benchmarking LLM-based Ontology Conceptualization: A Proposal
Abstract
This study presents a benchmark proposal designed to enhance knowledge engineering tasks through the use of large language models (LLMs). As LLMs become increasingly pivotal in knowledge extraction and modeling, it is crucial to evaluate and improve their performance. Building on prior work on reverse-generating competency questions (CQs) from existing ontologies, we introduce a benchmark focused on specific knowledge modeling tasks, including ontology documentation, ontology generation, and query generation. In addition, we propose a baseline evaluation framework that applies techniques such as semantic comparison, ontology evaluation criteria, and structural comparison, using both existing ground-truth datasets and newly proposed ontologies with corresponding CQs and documentation. This evaluation aims to provide a deeper understanding of LLM capabilities and to contribute to their optimization in knowledge engineering applications.
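To make one of the proposed techniques concrete, the following is a minimal sketch of semantic comparison between LLM-generated CQs and ground-truth CQs using embedding similarity. The model name, function, and example questions are illustrative assumptions, not details from the paper itself.

```python
# Hypothetical sketch: score each generated competency question (CQ)
# by its closest semantic match in the ground-truth CQ set.
# Model choice ("all-MiniLM-L6-v2") is an assumption for illustration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def cq_semantic_scores(generated_cqs, ground_truth_cqs):
    """Return, for each generated CQ, its best cosine similarity
    against the ground-truth CQs."""
    gen_emb = model.encode(generated_cqs, convert_to_tensor=True)
    gt_emb = model.encode(ground_truth_cqs, convert_to_tensor=True)
    sims = util.cos_sim(gen_emb, gt_emb)   # shape: (n_generated, n_ground_truth)
    return sims.max(dim=1).values.tolist()  # best match per generated CQ

scores = cq_semantic_scores(
    ["What pizzas have a spicy topping?"],
    ["Which pizzas include spicy toppings?"],
)
print(scores)  # values near 1.0 indicate close semantic matches
```

Structural comparison of generated ontologies (e.g., matching classes and properties against a reference OWL file) would complement such scores, since semantic similarity of CQs alone does not guarantee a well-formed model.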