Knowledge Graph Construction Using Large Language Models
Abstract
We explore the construction of knowledge graphs (KGs) using Large Language Models (LLMs), focusing on the application of GPT-4 to extract and structure information from scientific articles. Traditional methods such as Named Entity Recognition and Relation Extraction face limitations, including reliance on predefined entity types and the need for supervised training data. Recent advances in LLMs, particularly their few-shot learning capabilities, offer a promising way to address these challenges. We propose a plug-and-play approach that requires neither pretraining nor fine-tuning, aiming for broad applicability across KG construction scenarios. Our methodology chunks the input text, prompts the model for structured outputs, aggregates the results across chunks, and applies post-processing to ensure data integrity. This preliminary work shows promising results in capturing and structuring semantic information from scientific papers, but challenges remain in optimizing chunking, resolving entity and relationship variations, and improving conceptual integration.
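To make the pipeline concrete, the following is a minimal sketch of the chunk, extract, aggregate, and post-process loop described above. It is illustrative rather than the paper's exact implementation: the `llm` callable stands in for any GPT-4-style completion API, and the prompt wording, chunk sizes, and function names (`chunk_text`, `extract_triples`, `build_graph`) are assumptions introduced here.

```python
import json
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

PROMPT = (
    "Extract knowledge-graph triples from the text below. "
    "Respond with a JSON list of [subject, relation, object] arrays.\n\nText:\n"
)

def chunk_text(text: str, max_chars: int = 4000, overlap: int = 200) -> List[str]:
    """Split the input into overlapping chunks so relations spanning
    chunk boundaries are less likely to be lost."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks

def extract_triples(chunk: str, llm: Callable[[str], str]) -> List[Triple]:
    """Ask the LLM for structured output and parse it defensively."""
    raw = llm(PROMPT + chunk)
    try:
        return [tuple(t) for t in json.loads(raw) if len(t) == 3]
    except (json.JSONDecodeError, TypeError):
        return []  # skip chunks whose output is not valid JSON

def build_graph(text: str, llm: Callable[[str], str]) -> List[Triple]:
    """Chunk -> extract -> aggregate -> post-process."""
    triples: List[Triple] = []
    for chunk in chunk_text(text):
        triples.extend(extract_triples(chunk, llm))
    # Post-processing: case-fold and deduplicate so the same entity or
    # relation surfacing in multiple chunks yields a single edge.
    seen, graph = set(), []
    for s, r, o in triples:
        key = (s.lower().strip(), r.lower().strip(), o.lower().strip())
        if key not in seen:
            seen.add(key)
            graph.append((s, r, o))
    return graph
```

The deduplication step above is only the simplest form of the post-processing the abstract refers to; fuller entity and relation resolution (e.g., merging surface-form variants of the same concept) is among the open challenges noted.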