Multilingual fake news detection: a study on various models and training scenarios
Abstract
Amidst the surge in global online news consumption, tackling the escalating challenge of fake news requires a multilingual approach. While extensive research has explored fake news detection from various perspectives, a notable gap persists: the majority of studies concentrate on English. This highlights the need for more research on other languages, especially given the scarcity of non-English fake news datasets, particularly in low-resource settings. Focusing on mBERT, XLM-RoBERTa, and LASER embeddings, this study addresses three key questions. Firstly, it evaluates the efficacy of several multilingual models across languages, highlighting the robust performance of mBERT and XLM-RoBERTa. Secondly, it examines the impact of multilingual and cross-lingual training data, demonstrating the effectiveness of multilingual training, including its potential in zero-shot and transfer learning scenarios. Thirdly, it compares multilingual models with translation-based strategies, showing that the former perform better for multilingual fake news detection. Leveraging two datasets encompassing news in English, Spanish, French, Portuguese, Italian, Hindi, Indonesian, Swahili, and Vietnamese, our research underscores the effectiveness of multilingual approaches, offering valuable insights for future work to combat the global problem of fake news more effectively.