Conference paper, Year: 2022

Fast Learning Architecture for Neural Networks

Abstract

This paper proposes a solution for minimizing the training time of a fully connected neural network. It presents a processing architecture in which the computations applied to the examples of the training set are strongly parallelized and anticipated, starting even before the parameter adaptations for the previous examples are completed. This strategy leads to a delayed adaptation, and the impact of this delay on learning performance is analysed through a simple, reproducible textbook case study. It is shown that a reduction of the adaptation step size can compensate for the errors due to the delayed adaptation. Finally, the gain in processing time of the learning phase is analysed as a function of the network parameters chosen in this study.
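To illustrate the delayed-adaptation idea described in the abstract, below is a minimal sketch in Python. It trains a single linear neuron with an LMS-style update in which the gradient for each example is computed from parameters that are DELAY steps stale, mimicking updates that land before earlier adaptations have finished. The names DELAY and STEP, the snapshot queue, and the single-neuron simplification are illustrative assumptions; the paper's actual parallelized architecture for a fully connected network is not reproduced here.

    import numpy as np

    # Hypothetical illustration of delayed adaptation (not the paper's
    # architecture): gradients for example t are evaluated at the
    # parameters as they were DELAY examples earlier.

    rng = np.random.default_rng(0)
    w_true = np.array([1.5, -2.0])
    X = rng.normal(size=(2000, 2))
    y = X @ w_true + 0.01 * rng.normal(size=2000)

    DELAY = 4        # examples processed before an update takes effect
    STEP = 0.01      # reduced step size to compensate for the delay
    w = np.zeros(2)
    stale = [w.copy() for _ in range(DELAY)]  # queue of past snapshots

    for x, target in zip(X, y):
        w_old = stale.pop(0)              # parameters DELAY steps ago
        grad = (w_old @ x - target) * x   # LMS gradient at stale parameters
        w = w - STEP * grad               # delayed adaptation
        stale.append(w.copy())

    print("estimated weights:", w)

Under these assumptions, increasing DELAY while keeping STEP fixed degrades convergence, and shrinking STEP restores stability, which is the compensation effect the abstract refers to.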
Main file: 0001611.pdf (462.47 KB)
Origin: Explicit agreement for this deposit

Dates and versions

hal-03956819, version 1 (03-04-2023)

Identifiers

HAL Id: hal-03956819
DOI: 10.23919/EUSIPCO55093.2022.9909812

Cite

Ming Jun Zhang, Samuel Garcia, Michel Terre. Fast Learning Architecture for Neural Networks. 2022 30th European Signal Processing Conference (EUSIPCO), Aug 2022, Belgrade, Serbia. pp.1611-1615, ⟨10.23919/EUSIPCO55093.2022.9909812⟩. ⟨hal-03956819⟩