Distributed and Parallel Sparse Computing for Very Large Graph Neural Networks
Abstract
Deep learning (DL) requires high-performance processing of very large data sets. Graph Neural Networks (GNNs), a challenging topic in DL that relies on linear algebra methods, require algorithmic solutions to efficiently partition and process graph data on modern distributed and parallel machines, which combine mixed-precision arithmetic with various types of tensor/matrix accelerators. Choosing compression techniques for the graph's sparse data structures is one of the key elements.

Our first objective is to design and implement a reusable parallel numerical library for solving very large graph neural networks. Our design strategy draws on a component-based approach and targets maximum code reuse across parallel contexts while allowing for performance optimization. The solution could later be integrated into a DL framework such as MindSpore.
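The following is a minimal illustrative sketch (not part of the proposal itself) of the kind of sparse kernel involved: it assumes the common view of a GNN layer as a sparse-dense product, where the adjacency structure is stored in a compressed sparse format (here CSR) and node features are aggregated with an SpMM. The toy graph, feature sizes, and use of SciPy/NumPy are assumptions made for illustration only.

```python
import numpy as np
import scipy.sparse as sp

# Toy graph: 4 nodes, 5 directed edges, given as COO (row, col) pairs.
rows = np.array([0, 0, 1, 2, 3])
cols = np.array([1, 2, 3, 3, 0])
vals = np.ones(5, dtype=np.float32)

# Compress to CSR: only the non-zeros are stored (indptr, indices, data)
# instead of a dense 4x4 adjacency matrix.
adj = sp.coo_matrix((vals, (rows, cols)), shape=(4, 4)).tocsr()

# Node features (4 nodes, 3 features) and a layer weight matrix (3 -> 2).
x = np.random.rand(4, 3).astype(np.float32)
w = np.random.rand(3, 2).astype(np.float32)

# One GNN propagation step: sparse aggregation (SpMM) followed by a dense GEMM.
h = adj @ x @ w   # shape (4, 2)
print(h)
```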