Breaking boundaries in citation parsing: a comparative study of generative LLMs and traditional out-of-the-box citation parsers
Abstract
The task of citation string parsing has been the focus of many efforts. Traditional tools explicitly designed to parse bibliographic information, such as Bilbo, Grobid, and ParsCit, have long been established in the academic landscape. Recently, with the emergence of general conversational Large Language Models (LLMs) such as OpenAI’s ChatGPT and Meta’s Llama, an interesting question arises: can such language models, originally developed for natural language understanding (NLU), be employed to efficiently process bibliographies, and how does their performance on this task compare to that of dedicated bibliographic parsing tools? In this article, we propose an experiment to measure the ability of LLMs to analyse citation strings in different citation styles, using a synthetic dataset covering 12 citation styles. We evaluate the output of two generative LLMs, ChatGPT 3.5 and Llama 2 7B, and two out-of-the-box citation parsers, CERMINE and Neural ParsCit. The results show that the LLMs tend to outperform the dedicated citation parsers across all citation styles and labels.
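To make the task concrete, the sketch below shows how a conversational LLM can be prompted to segment a single citation string into labeled bibliographic fields. It is a minimal illustration assuming the OpenAI Python SDK (openai >= 1.0); the prompt wording, the field schema, and the helper name `parse_citation` are illustrative assumptions, not the exact setup evaluated in the study.

```python
import json
from openai import OpenAI  # assumes the openai>=1.0 Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative field schema; the label set used in the study may differ.
FIELDS = ["author", "title", "journal", "year", "volume", "pages", "doi"]

def parse_citation(citation: str) -> dict:
    """Ask the model to segment one citation string into labeled fields."""
    prompt = (
        "Parse the following bibliographic reference into a JSON object "
        f"with exactly these keys: {FIELDS}. Use null for missing fields. "
        "Return only the JSON object.\n\n"
        f"Reference: {citation}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output eases evaluation
    )
    # Assumes the model returns bare JSON; a robust pipeline would
    # validate and retry on malformed output.
    return json.loads(resp.choices[0].message.content)

print(parse_citation(
    "Doe, J. (2020). A study of things. Journal of Examples, 12(3), 45-67."
))
```

The returned dictionary can then be compared field by field against gold-standard labels, which is the kind of evaluation the experiment performs for each citation style.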