Preprint / Working Paper, Year: 2022

PARTIME: Scalable and Parallel Processing Over Time with Deep Neural Networks

Abstract

In this paper, we present PARTIME, a software library written in Python and based on PyTorch, designed specifically to speed up neural networks whenever data is continuously streamed over time, for both learning and inference. Existing libraries are designed to exploit data-level parallelism, assuming that samples are batched, a condition that is not naturally met in applications based on streamed data. In contrast, PARTIME starts processing each data sample as soon as it becomes available from the stream. PARTIME wraps the code implementing a feed-forward multi-layer network and distributes the layer-wise processing among multiple devices, such as Graphics Processing Units (GPUs). Thanks to its pipeline-based computational scheme, PARTIME allows the devices to perform computations in parallel. At inference time, this yields scaling capabilities that are theoretically linear in the number of devices. During the learning stage, PARTIME can leverage the non-i.i.d. nature of the streamed data, whose samples evolve smoothly over time, for efficient gradient computations. Experiments empirically compare PARTIME with classic non-parallel neural computations in online learning, distributing operations over up to 8 NVIDIA GPUs, and show significant speedups that are almost linear in the number of devices, mitigating the impact of the data transfer overhead.
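To make the pipeline-over-time idea concrete, below is a minimal PyTorch sketch of layer-wise pipelining over a stream. This is an illustration of the general scheme under simplifying assumptions (one linear layer per stage, inference only, a hypothetical `step` driver function), not the actual PARTIME API: each stage is pinned to its own device, and at every stream step each stage consumes the activation its predecessor produced at the previous step, so all devices can compute concurrently and the output lags by the number of stages.

```python
# Minimal sketch of layer-wise pipelining over a stream (NOT the PARTIME API).
# Each stage (here, one layer) lives on its own device; at every stream step
# each stage processes the activation its predecessor produced at the PREVIOUS
# step, so all devices work in parallel and outputs lag by num_stages steps.
import torch
import torch.nn as nn

num_stages = torch.cuda.device_count() or 1
devices = [torch.device(f"cuda:{i}") if torch.cuda.is_available()
           else torch.device("cpu") for i in range(num_stages)]

# Hypothetical per-stage modules, one per device, for illustration only.
stages = [nn.Linear(64, 64).to(d) for d in devices]

# Pipeline registers: the activation "in flight" between consecutive stages.
buffers = [None] * num_stages

def step(x_t):
    """Advance the pipeline by one stream step with new input x_t ([1, 64])."""
    global buffers
    new_buffers = [None] * num_stages
    # In a real pipeline these per-stage computations run concurrently on
    # their own devices (CUDA kernels launch asynchronously); we just loop.
    for i in range(num_stages):
        inp = x_t if i == 0 else buffers[i - 1]
        if inp is not None:
            new_buffers[i] = stages[i](inp.to(devices[i]))
    buffers = new_buffers
    return buffers[-1]  # last stage's output (None until the pipeline fills)

# Feed a stream of samples one at a time, as soon as each arrives.
with torch.no_grad():
    for t in range(10):
        y = step(torch.randn(1, 64))
        if y is not None:
            print(t, y.shape)
```

Note the structural consequence the abstract alludes to: during learning, the activation reaching stage `i` at time `t` was produced from an earlier input, so gradients are computed on slightly delayed signals; the paper argues this is acceptable precisely because streamed samples evolve smoothly over time.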

Dates and versions

hal-03874858, version 1 (28-11-2022)

Identifiers

Cite

Enrico Meloni, Lapo Faggi, Simone Marullo, Alessandro Betti, Matteo Tiezzi, et al. PARTIME: Scalable and Parallel Processing Over Time with Deep Neural Networks. 2022. ⟨hal-03874858⟩