Conference paper, 2022

Distributed and Parallel Sparse Computing for Very Large Graph Neural Networks

Abstract

Deep learning (DL) requires high-performance processing of big data. Graph Neural Networks, a challenging topic in DL built on linear algebra methods, need algorithmic solutions to efficiently distribute and process graph data on modern distributed and parallel machines, which increasingly feature mixed-precision arithmetic and various types of tensor/matrix accelerators. Choosing compression techniques for the graph's sparse data structures is one of the key elements. Our first objective is to design and implement a reusable parallel numerical library for processing very large graph neural networks. Our design strategy follows a component-based approach and targets maximum code reuse across various parallel contexts while allowing for performance optimization. The solution could later be integrated into a DL framework such as MindSpore.
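
The abstract names compression of the graph's sparse data structures as a key element of the approach. As a purely illustrative sketch (not code from the paper or its library; all names and shapes are hypothetical), the snippet below shows how a graph adjacency matrix can be stored in compressed sparse row (CSR) form with SciPy and used in the sparse-dense product that underlies GNN feature aggregation.

    # Illustrative sketch only: CSR compression of a graph adjacency matrix
    # and the sparse-dense product at the core of GNN aggregation.
    # This is NOT the paper's library; names and shapes are hypothetical.
    import numpy as np
    from scipy.sparse import csr_matrix

    # Edge list of a small directed graph (source, destination)
    edges = np.array([[0, 1], [0, 2], [1, 2], [2, 3]])
    n_nodes = 4

    # Build the adjacency matrix in CSR format: only the nonzero entries
    # and their column indices are stored, not the full dense matrix.
    data = np.ones(len(edges))
    A = csr_matrix((data, (edges[:, 0], edges[:, 1])), shape=(n_nodes, n_nodes))

    # Dense node feature matrix, e.g. 8 features per node
    H = np.random.rand(n_nodes, 8)

    # One GNN-style aggregation step: each node sums its neighbours' features.
    # The sparse-dense product avoids materializing the dense adjacency matrix.
    H_agg = A.dot(H)
    print(H_agg.shape)  # (4, 8)

At the scale targeted by the paper, such a matrix would additionally be partitioned across machines, which is where the distributed and parallel design questions arise.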
No file deposited

Dates and versions

hal-03988360 , version 1 (14-02-2023)

Identifiers

Cite

Quentin Petit, Chong Li, Nahid Emad. Distributed and Parallel Sparse Computing for Very Large Graph Neural Networks. 2022 IEEE International Conference on Big Data (Big Data), Dec 2022, Osaka, Japan. pp.6796-6798, ⟨10.1109/BigData55660.2022.10020457⟩. ⟨hal-03988360⟩