Conference Papers, Year: 2022

The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset

Hugo Laurençon, Lucile Saulnier, Thomas Wang, Teven Le Scao, Leandro von Werra, Huu Nguyen, Jörg Frohberg, Mario Šaško, Quentin Lhoest, Gérard Dupont, Loubna Ben Allal, Giada Pistilli, Olivier Nguyen, Pierre Colombo, Tristan Thrush, Sebastian Nagel, Manuel Romero Muñoz, Vu Minh Chien, Manan Dey (SAP), Long Phan, Hieu Tran, Ian Yu, Suhas Pai, Violette Lepercq, Suzana Ilić, Margaret Mitchell, Sasha Luccioni, Yacine Jernite

Abstract

As language models grow ever larger, the need for large-scale high-quality text datasets has never been more pressing, especially in multilingual settings. The BigScience workshop, a 1-year international and multidisciplinary initiative, was formed with the goal of researching and training large language models as a values-driven undertaking, putting issues of ethics, harm, and governance in the foreground. This paper documents the data creation and curation efforts undertaken by BigScience to assemble the Responsible Open-science Open-collaboration Text Sources (ROOTS) corpus, a 1.6TB dataset spanning 59 languages that was used to train the 176-billion-parameter BigScience Large Open-science Open-access Multilingual (BLOOM) language model. We further release a large initial subset of the corpus and analyses thereof, and hope to empower large-scale monolingual and multilingual modeling projects with both the data and the processing tools, as well as stimulate research around this large multilingual corpus.
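The released portion of the corpus is distributed as per-language, per-source datasets on the Hugging Face Hub. Below is a minimal sketch of how one might inspect a single component with the `datasets` library; the organization and subset identifiers shown are assumptions for illustration, not names confirmed by this record.

    # Minimal sketch: stream a few documents from one ROOTS component.
    # "bigscience-data/roots_en_wikipedia" is an assumed identifier used for
    # illustration; consult the BigScience release for the actual dataset ids.
    from datasets import load_dataset

    ds = load_dataset(
        "bigscience-data/roots_en_wikipedia",  # hypothetical subset id
        split="train",
        streaming=True,  # iterate without downloading the whole subset
    )

    for example in ds.take(3):
        # ROOTS components are assumed here to expose a "text" field
        print(example["text"][:200])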
Main file: the_bigscience_roots_corpus_a_.pdf (2.17 MB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-03823922, version 1 (21-10-2022)

Identifiers

  • HAL Id: hal-03823922, version 1

Cite

Hugo Laurençon, Lucile Saulnier, Thomas Wang, Christopher Akiki, Albert Villanova del Moral, et al. The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset. Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, Nov 2022, New Orleans, United States. ⟨hal-03823922⟩

Collections

CENTRALESUPELEC