Conference paper · Year: 2022

DSL for parallelizing Machine Learning algorithms on multicore architecture

Nel Gerbault Nanvou Tsopgny
  • Role: Author
  • PersonId: 1150913
Thomas Messi Nguélé
  • Role: Author
  • PersonId: 8299
  • IdHAL: messi

Abstract

Machine Learning algorithms must run on large amounts of data in order to produce powerful classification, regression, and clustering models. The larger the data these algorithms have to process, the longer they take to run. Programmers of machine-learning applications can take advantage of the rise of multi/many-core architectures to reduce this long runtime. However, these programmers may find it hard to write efficient parallel programs for such architectures, because they are used to implementing these algorithms sequentially; writing low-level, platform-specific parallel code is therefore difficult for them. Several DSLs have already been proposed for parallelizing Machine Learning algorithms, but most of them are embedded in high-level languages such as Python (Qjam) or Scala (OptiML). For such an embedded DSL to produce code with good performance (execution time and speedup), the host language must itself have the intrinsic characteristics that make this performance possible. In this paper, we propose FastML, a Domain Specific Language embedded in the C language. The idea of FastML is to offer the programmer learning primitives (such as gradient descent) that are already parallelized according to the Map-Reduce model; to implement a given Machine Learning algorithm, the programmer only has to call these primitives with the appropriate parameters. Initial experiments carried out on a machine with 8 cores and 8 GB of RAM show that FastML gives promising speedups compared to the OptiML DSL and the Scikit-learn platform. For example, with k-means, FastML achieves a 4x speedup (with 7 cores), compared to 1x for Scikit-learn and 0.70x for OptiML.
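The paper's PDF is linked below rather than inlined on this page, so the snippet that follows is only a minimal sketch, in C (FastML's host language), of the Map-Reduce idea the abstract describes: one gradient-descent step for linear regression, parallelized with OpenMP. The function name parallel_gradient_step, its signature, and the use of OpenMP here are illustrative assumptions, not FastML's actual API.

```c
/*
 * Minimal sketch (NOT FastML's actual API) of a Map-Reduce-parallelized
 * gradient-descent primitive, as the abstract describes.
 * Compile with: gcc -O2 -fopenmp sketch.c
 */
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical primitive: each thread "maps" over a slice of the rows
 * to compute a partial gradient, then the partials are "reduced"
 * (summed) before a single weight update. */
static void parallel_gradient_step(const double *X, const double *y,
                                   double *w, long n, long d, double lr)
{
    double *grad = calloc((size_t)d, sizeof *grad);   /* shared gradient */

    #pragma omp parallel
    {
        double *local = calloc((size_t)d, sizeof *local); /* map phase */

        #pragma omp for
        for (long i = 0; i < n; i++) {
            double pred = 0.0;
            for (long j = 0; j < d; j++)
                pred += w[j] * X[i * d + j];
            double err = pred - y[i];                 /* residual */
            for (long j = 0; j < d; j++)
                local[j] += err * X[i * d + j];
        }

        #pragma omp critical                          /* reduce phase */
        for (long j = 0; j < d; j++)
            grad[j] += local[j];

        free(local);
    }

    for (long j = 0; j < d; j++)                      /* update */
        w[j] -= lr * grad[j] / (double)n;
    free(grad);
}

int main(void)
{
    /* Toy data: y = 2x with a single feature; w should approach 2. */
    double X[4] = {1, 2, 3, 4}, y[4] = {2, 4, 6, 8}, w[1] = {0};
    for (int step = 0; step < 100; step++)
        parallel_gradient_step(X, y, w, 4, 1, 0.05);
    printf("learned weight: %f\n", w[0]);
    return 0;
}
```

The split mirrors the pattern the abstract attributes to FastML's primitives: the data-parallel map over rows is where the multicore speedup comes from, while the reduce and the weight update stay sequential and cheap.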
Main file
DSL_pour_la_parallelisation_des_algo_M_L.pdf (169.15 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03727658, version 1 (19-07-2022)

Identifiers

  • HAL Id: hal-03727658, version 1

Cite

Nel Gerbault Nanvou Tsopgny, Thomas Messi Nguélé, Etienne Kouakam. DSL for parallelizing Machine Learning algorithms on multicore architecture. CARI 2022, Jul 2022, Yaounde, Cameroon. ⟨hal-03727658⟩

Collections

CARI2022
85 Views
145 Downloads
