Conference paper, Year: 2023

DStore: A Lightweight Scalable Learning Model Repository with Fine-Grained Tensor-Level Access

Meghana Madhyastha
Robert Underwood
Randal Burns
Bogdan Nicolae

Abstract

The ability to share and reuse deep learning (DL) models is a key driver that facilitates the rapid adoption of artificial intelligence (AI) in both industrial and scientific applications. However, state-of-the-art approaches to store and access DL models efficiently at scale lag behind. Most often, DL models are serialized using various formats (e.g., HDF5, SavedModel) and stored as files on POSIX file systems. While simple and portable, such an approach exhibits high serialization and I/O overheads, especially under concurrency. Additionally, the emergence of advanced AI techniques (transfer learning, sensitivity analysis, explainability, etc.) introduces the need for fine-grained access to tensors, to facilitate the extraction and reuse of individual tensors or subsets of tensors. Such patterns are underserved by state-of-the-art approaches: requiring tensors to be read in bulk incurs suboptimal performance, scales poorly, and/or overutilizes network bandwidth. In this paper we propose a lightweight, distributed, RDMA-enabled learning model repository that addresses these challenges. Specifically, we introduce several ideas: a compact architecture-graph representation with stable hashing and client-side metadata caching, scalable load balancing across multiple providers, RDMA-optimized data staging, and direct access to raw tensor data. We evaluate our proposal in extensive experiments that involve different access patterns using learning models of diverse shapes and sizes. Our evaluations show a significant improvement (between 2× and 30×) over a variety of state-of-the-art model storage approaches while scaling to half the Cooley cluster at the Argonne Leadership Computing Facility.
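To make the ideas named in the abstract concrete, the following minimal Python sketch illustrates stable hashing of tensor identifiers, client-side metadata caching, and fine-grained per-tensor reads. It is an illustrative toy only, written against an in-memory stand-in for a repository: the class and method names (ToyTensorRepository, fetch_tensor, etc.) are hypothetical and do not represent DStore's actual RDMA-based API, which is described in the paper itself.

import hashlib
import numpy as np

def stable_tensor_key(model_name: str, tensor_name: str) -> str:
    # Deterministic key for a tensor, independent of process or insertion order,
    # so clients and providers agree on placement without coordination.
    digest = hashlib.sha256(f"{model_name}/{tensor_name}".encode("utf-8"))
    return digest.hexdigest()

class ToyTensorRepository:
    # In-memory stand-in for a distributed tensor store (hypothetical API).
    def __init__(self):
        self._data = {}      # key -> raw tensor bytes
        self._metadata = {}  # key -> (shape, dtype string)

    def put_model(self, model_name: str, tensors: dict):
        # Store each tensor individually instead of one serialized model blob.
        for tensor_name, array in tensors.items():
            key = stable_tensor_key(model_name, tensor_name)
            self._data[key] = array.tobytes()
            self._metadata[key] = (array.shape, array.dtype.str)

    def get_metadata(self, key: str):
        return self._metadata[key]

    def get_tensor(self, key: str) -> np.ndarray:
        shape, dtype = self._metadata[key]
        return np.frombuffer(self._data[key], dtype=dtype).reshape(shape)

class Client:
    # Caches tensor metadata locally so repeated fine-grained reads
    # skip the metadata lookup and only transfer raw tensor data.
    def __init__(self, repo: ToyTensorRepository):
        self._repo = repo
        self._meta_cache = {}

    def fetch_tensor(self, model_name: str, tensor_name: str) -> np.ndarray:
        key = stable_tensor_key(model_name, tensor_name)
        if key not in self._meta_cache:
            self._meta_cache[key] = self._repo.get_metadata(key)
        return self._repo.get_tensor(key)

if __name__ == "__main__":
    repo = ToyTensorRepository()
    repo.put_model("toy_model", {
        "conv1.weight": np.random.rand(8, 3, 3, 3).astype(np.float32),
        "fc.weight": np.random.rand(10, 8).astype(np.float32),
    })
    client = Client(repo)
    # Transfer-learning style access: pull a single layer, not the whole model.
    w = client.fetch_tensor("toy_model", "conv1.weight")
    print(w.shape)

The point of the sketch is the access pattern: a consumer that needs only a few layers (e.g., for transfer learning or sensitivity analysis) retrieves exactly those tensors, rather than deserializing an entire HDF5 or SavedModel file.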
Main file
DStore__A_Lightweight_Scalable_Learning_Model_Repository_with_Fine_Grained_Tensor_Level_Access.pdf (1.06 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04119926, version 1 (07-06-2023)

License

Attribution (CC BY)

Identifiers

Cite

Meghana Madhyastha, Robert Underwood, Randal Burns, Bogdan Nicolae. DStore: A Lightweight Scalable Learning Model Repository with Fine-Grained Tensor-Level Access. ICS'23: The 2023 International Conference on Supercomputing, ACM; IEEE, Jun 2023, Orlando, United States. ⟨10.1145/3577193.3593730⟩. ⟨hal-04119926⟩
