Fast Learning Architecture for Neural Networks
Abstract
This paper proposes a solution for minimizing the learning time of a fully connected neural network. It presents a processing architecture in which the computations applied to the examples of the training set are strongly parallelized and anticipated, starting even before the parameter adaptation for the previous examples has completed. This strategy ultimately leads to a delayed adaptation, and the impact of this delay on learning performance is analysed through a simple, reproducible textbook case study. It is shown that a reduction of the adaptation step size can compensate for the errors due to the delayed adaptation. Finally, the gain in processing time for the learning phase is analysed as a function of the network parameters chosen in this study.
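For concreteness, the following is a minimal sketch (in Python with NumPy) of the delayed-adaptation mechanism the abstract describes: gradients are computed on the current, possibly stale parameters while the updates from the preceding examples are still in flight, so each update lands a fixed number of steps late. The toy regression task, the delay value, and the step sizes are illustrative assumptions, not the paper's actual architecture or experimental setup.

```python
# Sketch of delayed-adaptation SGD: each gradient is applied `delay` steps
# after it was computed, mimicking pipelined, anticipated example processing.
# All specifics here (task, delay, step sizes) are hypothetical choices.
from collections import deque

import numpy as np

rng = np.random.default_rng(0)

# Toy task: recover w_true with a single linear neuron and squared loss.
w_true = np.array([2.0, -3.0])
X = rng.normal(size=(500, 2))
y = X @ w_true + 0.01 * rng.normal(size=500)

def delayed_sgd(step, delay, n_epochs=5):
    """Online SGD in which each gradient is applied `delay` steps late."""
    w = np.zeros(2)
    pending = deque()  # gradients computed but not yet applied
    for _ in range(n_epochs):
        for x, t in zip(X, y):
            g = (x @ w - t) * x       # gradient evaluated at the current (stale) w
            pending.append(g)
            if len(pending) > delay:  # apply the oldest in-flight update
                w -= step * pending.popleft()
    while pending:                    # flush the remaining updates
        w -= step * pending.popleft()
    return w

# Compare the final error without delay, with delay at the same step size,
# and with a reduced step size compensating for the delay.
for step, delay in [(0.1, 0), (0.1, 8), (0.02, 8)]:
    w = delayed_sgd(step, delay)
    print(f"step={step:<5} delay={delay}:  error={np.linalg.norm(w - w_true):.4f}")
```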
Domains
Engineering Sciences [physics]