Preprint / Working Paper. Year: 2024

Neura: a specialized large language model solution in neurology

Authors: Nathan Torcida, Timothée Carette, Bertil Delsaut, Hugo Kermorvant, Sofia Maldonado Slootjes, Alexis Robin, Sofiène Hadidane, Vito Tota, Salim El Hadwe, Nicolas Massager, Stanislas Lagarde, Romain Carron

Abstract

The ability of large language models (LLMs) in natural language processing holds promise for diverse applications, yet their deployment in fields such as neurology faces domain-specific challenges. Hence, we introduce Neura: a scalable, explainable solution for specializing LLMs. In a blinded evaluation on a select set of five complex clinical cases, compared with a cohort of 13 neurologists, Neura achieved normalized scores of 86.17% overall, 85% for differential diagnoses, and 88.24% for final diagnoses (versus 55.11%, 46.15%, and 70.93% for the neurologists), with rapid response times of 28.8 and 19 seconds (versus 9 minutes 37.2 seconds and 8 minutes 51 seconds for the neurologists), while consistently providing relevant, accurately cited information. These findings support the emerging role of LLM-driven applications in articulating human-acquired and integrated data with a vast corpus of knowledge, augmenting human experiential reasoning for clinical and research purposes.

Dates and versions

hal-04517036, version 1 (22-03-2024)

Identifiers

Cite

Sami Barrit, Nathan Torcida, Aurélien Mazeraud, Sébastien Boulogne, Jeanne Benoit, et al. Neura: a specialized large language model solution in neurology. 2024. ⟨hal-04517036⟩
