Conference paper. Year: 2023

Pretrained Language Models v. Court Ruling Predictions

Abstract

NLP systems are increasingly used in the legal domain, both by legal institutions and by industry. As a result, there is a pressing need to characterize their strengths and weaknesses and to understand their inner workings. This article presents a case study on the task of judicial decision prediction, using a small dataset from French Courts of Appeal. Specifically, our dataset of around 1,000 decisions concerns the habitual place of residence of children of divorced parents. The task consists of predicting, from the facts and reasons of the documents, whether the court rules that the children should live with their mother or their father. Instead of feeding the whole document to a classifier, we carefully construct the dataset to ensure that the classifier's input contains no 'spoilers' (court rulings often mention the final decision throughout the document). Our results are mostly negative: even classifiers based on French pretrained language models (FlauBERT, JuriBERT) do not classify the decisions with reasonable accuracy. However, they can extract the decision when it is part of the input. In light of these results, we argue that automatically constructed legal NLP datasets come with a strong caveat.
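For illustration, a minimal sketch of the kind of setup described in the abstract: fine-tuning a French pretrained encoder as a binary classifier over the facts-and-reasons text, using Hugging Face Transformers. This is not the authors' exact pipeline; the checkpoint name, field names, and hyperparameters below are assumptions chosen for the example.

```python
# Hypothetical sketch: binary ruling prediction (residence with mother vs. father)
# with a French pretrained encoder. Not the paper's exact configuration.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumed checkpoint; a JuriBERT checkpoint would be used analogously.
MODEL_NAME = "flaubert/flaubert_base_cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def build_dataset(texts, labels):
    """texts: facts-and-reasons sections with spoilers removed;
    labels: 0 = residence with mother, 1 = residence with father."""
    ds = Dataset.from_dict({"text": texts, "label": labels})
    return ds.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
        batched=True,
    )

def train_and_evaluate(train_ds, eval_ds):
    args = TrainingArguments(
        output_dir="ruling_prediction",
        num_train_epochs=3,
        per_device_train_batch_size=8,
    )
    trainer = Trainer(model=model, args=args,
                      train_dataset=train_ds, eval_dataset=eval_ds)
    trainer.train()
    return trainer.evaluate()
```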
Main file
36_Article_legal_decision.pdf (148.88 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04337720, version 1 (12-12-2023)

Identifiers

Cite

Olivia Vaudaux, Caroline Bazzoli, Maximin Coavoux, Géraldine Vial, Étienne Vergès. Pretrained Language Models v. Court Ruling Predictions. Natural Legal Language Processing Workshop 2023, Dec 2023, Singapore, Singapore. pp.38-43, ⟨10.18653/v1/2023.nllp-1.5⟩. ⟨hal-04337720⟩