Conference Paper, Year: 2022

Fast Learning Architecture for Neural Networks

Abstract

This paper proposes a solution to minimize the learning time of a fully connected neural network. It presents a processing architecture in which the computations applied to the examples of the training set are strongly parallelized and anticipated, starting even before the parameter adaptation triggered by the previous examples is complete. This strategy results in a delayed adaptation, and the impact of this delay on learning performance is analysed through a simple, reproducible case study. It is shown that a reduction of the adaptation step size can compensate for the errors caused by the delayed adaptation. Finally, the gain in processing time for the learning phase is analysed as a function of the network parameters chosen in this study.
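Since only the abstract is available on this page, the following is a minimal, hypothetical sketch (not the authors' implementation) of the delayed-adaptation idea it describes: a gradient computed from stale parameters is applied several steps late, and a reduced step size compensates for that staleness. The least-squares toy problem, the function name `delayed_sgd`, and the particular delay and step-size values are all illustrative assumptions.

```python
import numpy as np

# Toy problem: noisy linear regression, solved by per-example SGD.
rng = np.random.default_rng(0)
n_features, n_samples = 8, 2000
w_true = rng.normal(size=n_features)
X = rng.normal(size=(n_samples, n_features))
y = X @ w_true + 0.01 * rng.normal(size=n_samples)

def delayed_sgd(step_size, delay):
    """SGD where the gradient applied at step t was computed from the
    parameters as they were `delay` steps earlier, mimicking an
    architecture that starts processing new examples before the
    previous adaptation has completed."""
    w = np.zeros(n_features)
    pending = []  # gradients computed but not yet applied
    for t in range(n_samples):
        x_t, y_t = X[t], y[t]
        # The gradient is evaluated on the current weights, but its
        # application to w is postponed by `delay` steps; by then the
        # weights have moved, so the update is based on stale parameters.
        grad = (x_t @ w - y_t) * x_t
        pending.append(grad)
        if len(pending) > delay:
            w -= step_size * pending.pop(0)
    return np.linalg.norm(w - w_true)

# Delayed adaptation with the nominal step size converges more poorly;
# a reduced step size compensates for the staleness of the gradients.
print("no delay, mu=0.05 :", delayed_sgd(0.05, delay=0))
print("delay=8,  mu=0.05 :", delayed_sgd(0.05, delay=8))
print("delay=8,  mu=0.01 :", delayed_sgd(0.01, delay=8))
```

The trade-off this sketch illustrates is the one the abstract analyses: the delay buys parallelism across examples, and the smaller step size bounds the error introduced by adapting on outdated parameters.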
Main file: 0001611.pdf (462.47 KB)
Origin: Explicit agreement for this submission

Dates and versions

hal-03956819, version 1 (03-04-2023)

Identifiers

HAL Id: hal-03956819
DOI: 10.23919/EUSIPCO55093.2022.9909812

Cite

Ming Jun Zhang, Samuel Garcia, Michel Terre. Fast Learning Architecture for Neural Networks. 2022 30th European Signal Processing Conference (EUSIPCO), Aug 2022, Belgrade, Serbia. pp.1611-1615, ⟨10.23919/EUSIPCO55093.2022.9909812⟩. ⟨hal-03956819⟩