Conference Paper · Year: 2021

Characterizing Distributed Machine Learning and Deep Learning Workloads

Abstract

Nowadays, machine learning (ML) is widely used in many application domains to analyze datasets and build decision-making systems. With the rapid growth of data, ML users have switched to distributed machine learning (DML) platforms for faster execution and support for large-scale training datasets. However, DML platforms introduce complex execution environments that can overwhelm uninitiated users. To guide the tuning of DML platforms toward good performance, it is crucial to characterize DML workloads. In this work, we focus on popular DML and distributed deep learning (DDL) workloads built on Apache Spark. We characterize the impact on performance of several platform parameters related to distributed execution, such as parallelization, data shuffling, and scheduling. Based on our analysis, we derive key takeaways on DML/DDL workload patterns, as well as on the unexpected behavior of workloads based on ensemble learning methods.
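The platform parameters studied in the paper map onto Spark configuration knobs. The snippet below is a minimal, hypothetical sketch (not taken from the paper): it shows where such parameters are set in PySpark and runs one ensemble workload of the kind the abstract mentions, a random forest from Spark MLlib. All parameter values and the dataset path are illustrative placeholders.

    # Minimal sketch, assuming PySpark is installed; values are placeholders,
    # not the paper's actual experimental configuration.
    from pyspark.sql import SparkSession
    from pyspark.ml.classification import RandomForestClassifier

    spark = (
        SparkSession.builder
        .appName("dml-workload-characterization-sketch")
        # Parallelization: default number of partitions for RDD operations.
        .config("spark.default.parallelism", "64")
        # Data shuffling: partitions used by wide transformations (joins, groupBy).
        .config("spark.sql.shuffle.partitions", "64")
        # Scheduling: FIFO (Spark's default) vs. FAIR sharing across concurrent jobs.
        .config("spark.scheduler.mode", "FAIR")
        .getOrCreate()
    )

    # Ensemble workload: train a distributed random forest with Spark MLlib.
    # The dataset path is hypothetical; the libsvm reader yields the
    # "label"/"features" columns that MLlib estimators expect.
    df = spark.read.format("libsvm").load("data/sample_libsvm_data.txt")
    model = RandomForestClassifier(numTrees=100, maxDepth=5).fit(df)

Varying knobs like these (partition counts, shuffle behavior, scheduler mode) across runs is one plausible way to observe the performance effects the paper characterizes.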

Dates and versions

hal-03344132, version 1 (15-09-2021)

Identifiers

  • HAL Id: hal-03344132, version 1

Cite

Yasmine Djebrouni, Isabelly Rocha, Sara Bouchenak, Lydia Y. Chen, Pascal Felber, et al.. Characterizing Distributed Machine Learning and Deep Learning Workloads. Conférence francophone d'informatique en Parallélisme, Architecture et Système (ComPAS'2021), Jul 2021, Lyon, France. ⟨hal-03344132⟩
