Conference paper, Year: 2021

BERTweetFR : Domain Adaptation of Pre-Trained Language Models for French Tweets

Abstract

We introduce BERTweetFR, the first large-scale pre-trained language model for French tweets. Our model is initialized from the general-domain French language model CamemBERT (Martin et al., 2020), which follows the base architecture of BERT. Experiments show that BERTweetFR outperforms all previous general-domain French language models on two downstream Twitter NLP tasks: offensiveness identification and named entity recognition. The dataset used for the offensiveness identification task was newly created and annotated by our team, filling the gap left by the absence of such datasets in French. We make our model publicly available in the transformers library with the aim of promoting future research in analytic tasks for French tweets.
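Since the abstract states the model is distributed through the Hugging Face transformers library, it can be loaded like any other masked language model. The sketch below is a minimal illustration; the model identifier "Yanzhu/bertweetfr-base" is an assumption for the example and should be verified against the paper or the Hugging Face Hub.

```python
# Minimal sketch of loading BERTweetFR via Hugging Face transformers.
# The model identifier below is an assumption for illustration; verify
# the exact name on the Hugging Face Hub.
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "Yanzhu/bertweetfr-base"  # assumed Hub identifier

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Encode a French tweet and run a forward pass.
inputs = tokenizer("Quelle belle journée à Paris !", return_tensors="pt")
outputs = model(**inputs)

# Logits over the vocabulary for each token position:
print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)
```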


Main file
2021.wnut-1.49.pdf (127.98 KB)
Origin: files produced by the author(s)

Dates and versions

hal-04447453, version 1 (08-02-2024)



Cite

Yanzhu Guo, Virgile Rennard, Christos Xypolopoulos, Michalis Vazirgiannis. BERTweetFR : Domain Adaptation of Pre-Trained Language Models for French Tweets. Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021), Nov 2021, Online, Dominican Republic. pp. 445-450, ⟨10.18653/v1/2021.wnut-1.49⟩. ⟨hal-04447453⟩