SimSCL: A Simple Fully-Supervised Contrastive Learning Framework for Text Representation
Abstract
Over the last few years, deep supervised learning models have been shown to achieve state-of-the-art results on Natural Language Processing tasks. Most of these models are trained by minimizing the commonly used cross-entropy loss, which may suffer from several shortcomings such as sub-optimal generalization and unstable fine-tuning. Inspired by recent work on self-supervised contrastive representation learning, we present SimSCL, a framework for the binary text classification task that relies on two simple concepts: (i) sampling positive and negative examples for a given anchor by treating sentences belonging to the same class as the anchor as positives and sentences belonging to a different class as negatives, and (ii) using a novel fully-supervised contrastive loss that enforces more compact clustering by leveraging label information more effectively. Experimental results show that our framework outperforms the standard cross-entropy loss on several benchmark datasets. Further experiments on Moroccan and Algerian dialects demonstrate that our framework also works well for under-resourced languages.
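To illustrate the two ideas described above, the following PyTorch sketch implements a standard batch-wise supervised contrastive loss in the spirit of Khosla et al. (2020): every in-batch example sharing the anchor's label is a positive, every other example is a negative. This is only a minimal, hypothetical sketch consistent with the abstract, not the authors' actual SimSCL loss; the function name `supervised_contrastive_loss` and the `temperature` hyperparameter are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Batch-wise supervised contrastive loss (illustrative sketch).

    For each anchor, in-batch examples sharing its label are positives;
    all other examples are negatives.
    """
    z = F.normalize(embeddings, dim=1)        # unit-length sentence embeddings
    sim = z @ z.T / temperature               # scaled pairwise similarities
    n = z.size(0)

    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    # positives: same label as the anchor, excluding the anchor itself
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # softmax denominator runs over every non-anchor example
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # average log-probability of the positives, per anchor that has positives
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    sum_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    mean_log_prob_pos = sum_pos[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()


# Toy usage with random embeddings and binary labels.
embeddings = torch.randn(4, 128)
labels = torch.tensor([0, 0, 1, 1])
loss = supervised_contrastive_loss(embeddings, labels)
```

In a binary classification setting such as the one targeted here, this objective pulls same-class sentence embeddings together and pushes the two classes apart, which is the "more compact clustering" effect the abstract refers to.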