Make Inference Faster: Efficient GPU Memory Management for Butterfly Sparse Matrix Multiplication
Preprint / Working paper, Year: 2024


Abstract

This paper is the first to assess the state of existing sparse matrix multiplication algorithms on GPUs for the butterfly structure, a promising form of sparsity. This is achieved through a comprehensive benchmark that can easily be extended with new implementations. The goal is to provide a simple tool for users to select the optimal implementation for their setting. Using this benchmark, we find that existing implementations spend up to 50% of their total runtime on memory rewriting operations. We show that these memory operations can be optimized by introducing a new CUDA kernel that minimizes the transfers between the different levels of GPU memory, achieving a median speed-up factor of ×1.4 while also reducing energy consumption (median factor of ×0.85). We also demonstrate the broader significance of our results by showing how the new kernel can speed up the inference of neural networks.
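The paper's own kernel is only described in the PDF below. Purely as an illustrative sketch of the memory-management idea (serving repeated reads from on-chip shared memory instead of global memory when multiplying by one butterfly factor), the following CUDA fragment stages one input column in shared memory so that the two input values needed per output row are both read on-chip. All names, the column-major layout, and the i ^ stride pairing of rows are assumptions made for this sketch, not the authors' implementation.

    // Illustrative sketch, not the paper's kernel: apply one butterfly
    // factor to one column of a dense input. Row i of the factor has
    // exactly two nonzeros, at columns i and i ^ stride (stride a power
    // of two, n a power of two, stride < n).
    // x and y are n x batch matrices stored column-major, so the loads
    // below are coalesced; requires n * sizeof(float) bytes of shared
    // memory (i.e. n up to ~12K on a 48 KB-shared-memory GPU).
    __global__ void butterfly_factor_mm(
        const float* __restrict__ w0,  // weight on column i          (n values)
        const float* __restrict__ w1,  // weight on column i ^ stride (n values)
        const float* __restrict__ x,   // input,  n x batch, column-major
        float* __restrict__ y,         // output, n x batch, column-major
        int n, int stride)
    {
        extern __shared__ float xs[];  // one input column staged on-chip
        int col = blockIdx.x;          // one thread block per batch column

        // Cooperative, coalesced load of column `col` into shared memory:
        // each input value is read from global memory exactly once.
        for (int i = threadIdx.x; i < n; i += blockDim.x)
            xs[i] = x[col * n + i];
        __syncthreads();

        // Each output row needs two input values; both come from shared
        // memory, halving the global-memory read traffic.
        for (int i = threadIdx.x; i < n; i += blockDim.x)
            y[col * n + i] = w0[i] * xs[i] + w1[i] * xs[i ^ stride];
    }

Under these assumptions, a launch such as butterfly_factor_mm<<<batch, 256, n * sizeof(float)>>>(w0, w1, x, y, n, stride) processes one batch column per block; a full butterfly product would chain one such call per factor.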
Main file
neurips_2024.pdf (909.45 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04584450 , version 1 (23-05-2024)
hal-04584450 , version 2 (23-05-2024)
hal-04584450 , version 3 (08-10-2024)
hal-04584450 , version 4 (03-11-2024)

Identifiers

  • HAL Id: hal-04584450, version 1

Cite

Antoine Gonon, Léon Zheng, Pascal Carrivain, Quoc-Tung Le. Make Inference Faster: Efficient GPU Memory Management for Butterfly Sparse Matrix Multiplication. 2024. ⟨hal-04584450v1⟩
226 Views
133 Downloads
