mALBERT: Is a Compact Multilingual BERT Model Still Worth It?
Abstract
Within the current trend of Pretrained Language Models (PLMs), more and more criticisms are emerging about their ethical and ecological impact. Considering these critical remarks, in this article we propose to focus on smaller models, such as compact models like ALBERT, which are more ecologically virtuous than full-size PLMs. However, PLMs have enabled major breakthroughs in Natural Language Processing tasks, such as Spoken and Natural Language Understanding, classification, and Question–Answering. PLMs also have the advantage of being multilingual, yet, as far as we know, no multilingual version of compact ALBERT models exists. Considering these facts, we propose the free release of the first version of a multilingual compact ALBERT model, pre-trained on Wikipedia data, which complies with the ethical aspects of such a language model. We also evaluate the model against classical multilingual PLMs on standard NLP tasks. Finally, this paper proposes a rare study of the impact of subword tokenization on language performance.