Narcissist! Do you need so much attention?
Abstract
After the rise of Word2vec came the BERT era, with large architectures that handle polysemy by taking contextual information into account, leading to great performance improvements on classic NLP tasks. BERT systems are considered universal: they can be fine-tuned to address any task efficiently. However, these systems are large and costly to deploy, not trivial to fine-tune, and may not suit some corpora, e.g. domain-specific and small ones. For instance, we consider the DEFT 2018 corpus of tweets and show that CamemBERT is not appropriate for this corpus and task. Following Occam's razor, we thus designed MiniBERT, a tiny BERT architecture that includes a simplified self-attention mechanism and requires neither pre-training nor external data. We show that this easily trainable and deployable system obtains encouraging results on DEFT while remaining interpretable.