Conference paper, Year: 2024

Retrieve, Generate, Evaluate: A Case Study for Medical Paraphrases Generation with Small Language Models

Abstract

The recent surge in the accessibility of large language models (LLMs) to the general population can lead to untrackable use of such models for medical recommendations. Language generation via LLMs has two key problems: first, LLMs are prone to hallucination and therefore require scientific and factual grounding for any medical purpose; second, they pose a tremendous challenge to computational resources due to their gigantic model size. In this work, we introduce pRAGe, a pipeline for Retrieval Augmented Generation and evaluation of medical paraphrase generation using Small Language Models (SLMs). We study the effectiveness of SLMs and the impact of an external knowledge base on medical paraphrase generation in French.
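The abstract describes a retrieve-then-generate architecture: passages are fetched from an external knowledge base, and a small language model produces a paraphrase grounded in that retrieved context. The sketch below illustrates this general flow; the mini knowledge base, the token-overlap retriever, and the generate_paraphrase stub are assumptions made for illustration only and are not the components actually used in pRAGe.

```python
# Minimal sketch of a retrieve-then-generate loop for medical paraphrasing.
# The knowledge base, retriever, and generate_paraphrase() are illustrative
# placeholders, not the components of the pRAGe pipeline.

from collections import Counter

# Hypothetical mini knowledge base of French medical definitions.
KNOWLEDGE_BASE = [
    "L'hypertension artérielle est une pression trop élevée du sang dans les artères.",
    "La dyspnée est une difficulté à respirer, une sensation d'essoufflement.",
    "L'anémie est un manque de globules rouges ou d'hémoglobine dans le sang.",
]

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by token overlap with the query (stand-in for a real retriever)."""
    query_tokens = Counter(query.lower().split())
    scored = [
        (sum((Counter(doc.lower().split()) & query_tokens).values()), doc)
        for doc in documents
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:k]]

def generate_paraphrase(term: str, context: list[str]) -> str:
    """Placeholder for the SLM call: builds a grounded prompt for the model."""
    prompt = (
        "Contexte : " + " ".join(context) + "\n"
        f"Réécris le terme médical « {term} » en langage simple."
    )
    # A real pipeline would pass `prompt` to a small language model here.
    return prompt  # returned for inspection in this sketch

if __name__ == "__main__":
    term = "dyspnée"
    context = retrieve(term, KNOWLEDGE_BASE, k=1)
    print(generate_paraphrase(term, context))
```

In an actual system, the token-overlap retriever would be replaced by a dense or sparse retriever over a medical knowledge base, and the stub would call a small language model; the generated paraphrases would then be scored by the evaluation stage of the pipeline.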
No file deposited

Dates and versions

hal-04701647, version 1 (19-09-2024)

Identifiers

  • HAL Id: hal-04701647, version 1

Cite

Ioana Buhnila, Aman Sinha, Mathieu Constant. Retrieve, Generate, Evaluate: A Case Study for Medical Paraphrases Generation with Small Language Models. Proceedings of the 1st Workshop on Towards Knowledgeable Language Models (KnowLLM 2024) @ ACL 2024 (Association for Computational Linguistics), Aug 2024, Bangkok, Thailand. pp. 189-203. ⟨hal-04701647⟩