Journal article in Future Generation Computer Systems, 2024

Efficient Distributed Continual Learning for Steering Experiments in Real-Time

Abstract

Deep learning has emerged as a powerful method for extracting valuable information from large volumes of data. However, when new training data arrives continuously (i.e., is not fully available from the beginning), incremental training suffers from catastrophic forgetting (i.e., new patterns are reinforced at the expense of previously acquired knowledge). Training from scratch each time new training data becomes available would result in extremely long training times and massive data accumulation. Rehearsal-based continual learning has shown promise for addressing the catastrophic forgetting challenge, but research to date has not addressed performance and scalability. To fill this gap, we propose an approach based on a distributed rehearsal buffer that efficiently complements data-parallel training on multiple GPUs to achieve high accuracy, short runtime, and scalability. It leverages a set of buffers (local to each GPU) and uses several asynchronous techniques for updating these local buffers in an embarrassingly parallel fashion, all while handling the communication overheads necessary to augment input minibatches using unbiased, global sampling. We further propose a generalization of rehearsal buffers to support both classification and generative learning tasks, as well as more advanced rehearsal strategies (notably Dark Experience Replay, leveraging knowledge distillation). We illustrate this approach with a real-life HPC streaming application from the domain of ptychographic image reconstruction. We run extensive experiments on up to 128 GPUs of the ThetaGPU supercomputer to compare our approach with baselines representative of training-from-scratch (the upper bound in terms of accuracy) and incremental training (the lower bound). Results show that rehearsal-based continual learning achieves a top-5 validation accuracy close to the upper bound, while simultaneously exhibiting a runtime close to the lower bound.
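
The core mechanism described above, a bounded rehearsal buffer whose contents are mixed into incoming minibatches, can be illustrated with a short sketch. The following is a minimal, single-process Python/PyTorch approximation: the class name `RehearsalBuffer`, the reservoir-sampling update, and the helper `augmented_minibatch` are illustrative assumptions, not the paper's API. The actual system keeps one such buffer per GPU and updates it asynchronously, with unbiased global sampling across GPUs.

```python
import random
import torch

class RehearsalBuffer:
    """Fixed-capacity buffer of past (input, label) pairs.

    Uses reservoir sampling so every sample seen so far has equal
    probability of being retained (one way to realize the unbiased
    sampling the abstract refers to). This sketch is single-process
    and synchronous; the paper's buffers are local to each GPU and
    updated asynchronously.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.x, self.y = [], []
        self.seen = 0  # total number of stream samples observed

    def update(self, batch_x, batch_y):
        # Algorithm R: the item with stream index t replaces a
        # random slot with probability capacity / (t + 1).
        for xi, yi in zip(batch_x, batch_y):
            if len(self.x) < self.capacity:
                self.x.append(xi.clone())
                self.y.append(yi.clone())
            else:
                j = random.randint(0, self.seen)  # uniform over [0, seen]
                if j < self.capacity:
                    self.x[j], self.y[j] = xi.clone(), yi.clone()
            self.seen += 1

    def sample(self, k):
        idx = random.sample(range(len(self.x)), min(k, len(self.x)))
        return (torch.stack([self.x[i] for i in idx]),
                torch.stack([self.y[i] for i in idx]))

def augmented_minibatch(buf, batch_x, batch_y, k):
    # Concatenate k rehearsal samples onto the incoming minibatch,
    # mirroring the minibatch-augmentation step in the abstract.
    if not buf.x:
        return batch_x, batch_y
    rx, ry = buf.sample(k)
    return torch.cat([batch_x, rx]), torch.cat([batch_y, ry])
```

In the distributed setting, `sample` would draw from the buffers of all GPUs rather than only the local one, and `update` would run off the critical path of training, which is presumably how the approach keeps its runtime close to that of plain incremental training.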
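The abstract also mentions Dark Experience Replay (DER), a rehearsal strategy that stores the network's logits alongside buffer samples and distills them during replay. A minimal sketch of that objective, assuming the MSE-on-logits formulation of DER from the literature (the function name `der_loss` and the weight `alpha` are illustrative):

```python
import torch.nn.functional as F

def der_loss(model, batch_x, batch_y, buf_x, buf_logits, alpha=0.5):
    """Dark Experience Replay objective (sketch).

    Standard task loss on the incoming minibatch, plus an MSE term
    that distills the logits recorded when the rehearsal samples
    were first seen. The buffer therefore stores (input, logits)
    pairs; alpha (illustrative) weights the distillation term.
    """
    task_loss = F.cross_entropy(model(batch_x), batch_y)
    distill_loss = F.mse_loss(model(buf_x), buf_logits)
    return task_loss + alpha * distill_loss
```

Because DER replays (input, logits) pairs rather than (input, label) pairs, the same buffer machinery is not tied to classification labels, which is one way rehearsal buffers generalize to other learning tasks.
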
Main file: paper.pdf (2.18 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04664176, version 1 (29-07-2024)
hal-04664176, version 2 (23-08-2024)


Identifiers

HAL Id: hal-04664176
DOI: 10.1016/j.future.2024.07.016

Cite

Thomas Bouvier, Bogdan Nicolae, Alexandru Costan, Tekin Bicer, Ian Foster, et al. Efficient Distributed Continual Learning for Steering Experiments in Real-Time. Future Generation Computer Systems, 2024, pp. 1-19. ⟨10.1016/j.future.2024.07.016⟩. ⟨hal-04664176v2⟩