Conference paper, Year: 2023

High Throughput Training of Deep Surrogates from Large Ensemble Runs

Lucas Meyer
Marc Schouler
Robert Alexander Caulk
Alejandro Ribés
Bruno Raffin

Abstract

Recent years have seen a surge in deep learning approaches to accelerate numerical solvers, which provide faithful but computationally intensive simulations of the physical world. These deep surrogates are generally trained in a supervised manner from limited amounts of data slowly generated by the same solver they intend to accelerate. We propose an open-source framework that enables the online training of these models from a large ensemble run of simulations. It leverages multiple levels of parallelism to generate rich datasets. The framework avoids I/O bottlenecks and storage issues by streaming the generated data directly to the training processes. A training reservoir mitigates the inherent bias of streaming while maximizing GPU throughput. Experiments on training a fully connected network as a surrogate for the heat equation show that the proposed approach enables training on 8 TB of data in 2 hours, with accuracy improved by 47% and batch throughput multiplied by 13 compared to a traditional offline procedure.
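This page carries only the abstract, not the framework's code. As a purely illustrative sketch of the "training reservoir" idea mentioned above (all names, such as TrainingReservoir, are hypothetical and not the paper's API), a fixed-capacity buffer that admits streamed samples with random eviction and serves random batches to the trainer might look like this:

```python
import random


class TrainingReservoir:
    """Fixed-capacity buffer fed by samples streamed from running simulations.

    New samples are always admitted; once the buffer is full, an incoming
    sample overwrites a uniformly random entry. Training batches are drawn
    at random from the buffer, which decorrelates consecutive batches from
    the order in which the simulations produced the data.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.samples = []

    def put(self, sample) -> None:
        """Insert a streamed sample, evicting a random one when full."""
        if len(self.samples) < self.capacity:
            self.samples.append(sample)
        else:
            self.samples[random.randrange(self.capacity)] = sample

    def batch(self, batch_size: int):
        """Draw a random batch for one training step."""
        return random.sample(self.samples, min(batch_size, len(self.samples)))


# Toy usage: a fake stream standing in for data arriving from an ensemble run.
def fake_stream(n_samples: int):
    for _ in range(n_samples):
        x = random.random()
        yield (x, x * x)  # (input, target) pair


reservoir = TrainingReservoir(capacity=1_000)
for step, sample in enumerate(fake_stream(5_000)):
    reservoir.put(sample)
    if step % 100 == 0 and reservoir.samples:
        minibatch = reservoir.batch(32)
        # A real training step would consume `minibatch` here.
```

In this sketch the buffer never blocks the producers, so the trainer can keep drawing batches while simulations continue to stream data; the random eviction and random sampling are one simple way to reduce the temporal bias the abstract refers to.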
Main file
main.pdf (1.12 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04213978, version 1 (28-09-2023)

License

Attribution

Identifiers

Cite

Lucas Meyer, Marc Schouler, Robert Alexander Caulk, Alejandro Ribés, Bruno Raffin. High Throughput Training of Deep Surrogates from Large Ensemble Runs. SC 2023 - The International Conference for High Performance Computing, Networking, Storage, and Analysis, Nov 2023, Denver, CO, United States. pp.1-14, ⟨10.1145/3581784.3607083⟩. ⟨hal-04213978⟩