Conference paper, 2017

Training Many Neural Networks in Parallel via Back-Propagation

Abstract

This paper presents two parallel implementations of the Back-propagation algorithm, a widely used approach for training Artificial Neural Networks (ANNs). These implementations increase the number of ANNs trained simultaneously by exploiting, respectively, the massive thread-level parallelism of GPUs and the multi-core architecture of modern CPUs. Computational experiments are carried out with time series taken from the product demand of a Mexican brewery company, where the goal is to optimize product delivery; we also consider time series from the M3-competition benchmark. The results show the benefits of training several ANNs in parallel compared to other forecasting methods used in the competition. Indeed, training several ANNs in parallel yields a better fit of the network weights and makes it possible to train many ANNs for different time series in a short time.
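For illustration only, the sketch below shows one way to train several independently initialised ANNs in parallel on a multi-core CPU, with a minimal back-propagation loop per process. This is not the authors' implementation: the network sizes, learning rate, synthetic data, and the use of Python's multiprocessing are assumptions made for the example.

```python
# Hypothetical sketch: train several small ANNs in parallel, one per process.
# All hyperparameters and the synthetic data are illustrative assumptions.
import numpy as np
from multiprocessing import Pool

def train_one_ann(seed, n_in=8, n_hidden=16, epochs=200, lr=0.05):
    rng = np.random.default_rng(seed)
    # Synthetic regression data standing in for a demand time series.
    X = rng.standard_normal((256, n_in))
    y = np.sin(X.sum(axis=1, keepdims=True))
    # One hidden layer (tanh) and a linear output layer.
    W1 = rng.standard_normal((n_in, n_hidden)) * 0.1
    W2 = rng.standard_normal((n_hidden, 1)) * 0.1
    for _ in range(epochs):
        h = np.tanh(X @ W1)              # forward pass
        out = h @ W2
        err = out - y                    # error back-propagated below
        grad_W2 = h.T @ err / len(X)
        grad_h = (err @ W2.T) * (1.0 - h ** 2)
        grad_W1 = X.T @ grad_h / len(X)
        W1 -= lr * grad_W1               # gradient-descent weight updates
        W2 -= lr * grad_W2
    return float(np.mean(err ** 2))      # final training MSE of this ANN

if __name__ == "__main__":
    # Each worker process trains one ANN with a different random seed.
    with Pool() as pool:
        losses = pool.map(train_one_ann, range(8))
    print(losses)
```

On a GPU, the same idea would instead map one network (or one neuron) to a thread block, but that variant is not shown here.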
Main file: Paper.pdf (1.33 MB). Origin: files produced by the author(s).

Dates and versions

hal-02115612, version 1 (30-04-2019)

Identifiers

Cite

Javier A Cruz-López, Vincent Boyer, Didier El Baz. Training Many Neural Networks in Parallel via Back-Propagation. IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW 2017), May 2017, Orlando, United States. pp.501-509, ⟨10.1109/IPDPSW.2017.72⟩. ⟨hal-02115612⟩