Conference paper - Year: 2023

A Survey of Evaluation Methods of Generated Medical Textual Reports

Abstract

Medical Report Generation (MRG) is a sub-task of Natural Language Generation (NLG) that aims to present information from various sources in textual form and to synthesize the salient information, with the goal of reducing the time domain experts spend writing medical reports and of providing supporting information for decision-making. Given the specificity of the medical domain, the evaluation of automatically generated medical reports is of paramount importance to the validity of these systems. In this paper, we therefore focus on the evaluation of automatically generated medical reports from the perspective of both automatic and human evaluation. We present evaluation methods for general NLG and how they have been applied to domain-specific medical tasks. The study shows that MRG evaluation methods are very diverse and that further work is needed to build shared evaluation methods. The state of the art also emphasizes that such an evaluation must be task-specific and include human assessments, requiring the participation of experts in the field.
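For context, the automatic side of such an evaluation typically relies on generic NLG overlap metrics such as ROUGE. The sketch below is a minimal, illustrative ROUGE-1-style F1 computation on a hypothetical pair of reports; it is not the method proposed in the paper, and real evaluations would use established metric implementations and proper tokenization.

```python
# Illustrative sketch only: a minimal unigram-overlap (ROUGE-1-style) F1 score,
# one example of the generic NLG metrics discussed in the survey. The reports
# below are hypothetical and the naive whitespace tokenization is a simplification.
from collections import Counter


def rouge1_f1(reference: str, candidate: str) -> float:
    """ROUGE-1-style F1 between a reference report and a generated report."""
    ref_tokens = reference.lower().split()
    cand_tokens = candidate.lower().split()
    if not ref_tokens or not cand_tokens:
        return 0.0
    # Clipped unigram overlap: minimum count of each token across both texts.
    overlap = sum((Counter(ref_tokens) & Counter(cand_tokens)).values())
    precision = overlap / len(cand_tokens)
    recall = overlap / len(ref_tokens)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Hypothetical example reports (not taken from the paper):
reference = "No acute cardiopulmonary abnormality. Heart size is normal."
generated = "Heart size is normal. No acute abnormality is seen."
print(f"ROUGE-1 F1: {rouge1_f1(reference, generated):.3f}")
```

As the survey stresses, such surface-overlap scores alone are insufficient for medical reports and need to be complemented by task-specific, expert human assessment.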

Dates and versions

hal-04257234, version 1 (24-10-2023)

Identifiers

Cite

Yongxin Zhou, Fabien Ringeval, François Portet. A Survey of Evaluation Methods of Generated Medical Textual Reports. The 5th Clinical Natural Language Processing Workshop, Jul 2023, Toronto, Canada. pp.447-459, ⟨10.18653/v1/2023.clinicalnlp-1.48⟩. ⟨hal-04257234⟩