Conference paper, Year: 2015

Resource-efficient incremental learning in very high dimensions

Abstract

We propose a three-layer neural architecture for incremental multi-class learning that remains resource-efficient even when the number of input dimensions is very high (≥ 1000). This so-called projection-prediction (PROPRE) architecture is strongly inspired by biological information processing in that it uses a prototype-based, topologically organized hidden layer trained with the SOM learning rule, controlled by a global, task-related error signal. Furthermore, SOM learning adapts only the weights of localized neural sub-populations whose prototypes are similar to the input, which explicitly avoids the catastrophic forgetting observed in MLPs when new input statistics are presented to the architecture. As the readout layer uses simple linear regression, the approach essentially applies locally linear models to "receptive fields" (RFs) defined by SOM prototypes, while RF shape is implicitly defined by adjacent prototypes (which avoids storing covariance matrices, a cost that becomes prohibitive for high input dimensionality). Both RF centers and shapes are jointly adapted w.r.t. input statistics and the classification task. Tests on the MNIST dataset show that the algorithm compares favorably to the state-of-the-art LWPR algorithm at vastly reduced resource requirements.
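
The abstract only outlines the mechanism, so here is a minimal, hypothetical Python/NumPy sketch of the idea as described: a grid of SOM prototypes whose updates are restricted to the neighbourhood of the best-matching unit and gated by a task-related error signal, followed by a simple linear readout. The class name PropreSketch, the grid size, the learning rates, and the use of an online delta rule in place of batch linear regression are all illustrative assumptions, not the authors' implementation.

    import numpy as np


    class PropreSketch:
        def __init__(self, input_dim, n_classes, grid=(10, 10),
                     sigma=1.5, lr_som=0.1, lr_readout=0.01, seed=0):
            rng = np.random.default_rng(seed)
            self.n_units = grid[0] * grid[1]
            # One prototype (receptive-field centre) per hidden unit.
            self.W = rng.normal(0.0, 0.1, size=(self.n_units, input_dim))
            # Linear readout from hidden activations to class scores.
            self.R = np.zeros((n_classes, self.n_units))
            # Grid coordinates, used by the SOM neighbourhood function.
            gx, gy = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij")
            self.coords = np.stack([gx.ravel(), gy.ravel()], axis=1).astype(float)
            self.sigma, self.lr_som, self.lr_readout = sigma, lr_som, lr_readout

        def hidden(self, x):
            # Activation of each prototype decreases with distance to the input.
            d2 = np.sum((self.W - x) ** 2, axis=1)
            return np.exp(-d2 / (2.0 * d2.mean() + 1e-9))

        def predict(self, x):
            return self.R @ self.hidden(x)

        def train_step(self, x, y_onehot):
            h = self.hidden(x)
            scores = self.R @ h
            err = y_onehot - scores                    # task-related error signal
            # Readout: online delta rule standing in for simple linear regression.
            self.R += self.lr_readout * np.outer(err, h)
            # Prototype update only when the prediction is wrong (error gating),
            # and only for units near the best-matching unit on the grid, so that
            # prototypes elsewhere, learned from earlier data, stay untouched.
            if np.argmax(scores) != np.argmax(y_onehot):
                bmu = np.argmin(np.sum((self.W - x) ** 2, axis=1))
                grid_d2 = np.sum((self.coords - self.coords[bmu]) ** 2, axis=1)
                nb = np.exp(-grid_d2 / (2.0 * self.sigma ** 2))
                self.W += self.lr_som * nb[:, None] * (x - self.W)


    # Toy usage on random data (a stand-in for e.g. flattened MNIST digits).
    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        model = PropreSketch(input_dim=784, n_classes=10)
        for _ in range(100):
            x = rng.random(784)
            y = np.eye(10)[rng.integers(10)]
            model.train_step(x, y)
        print(model.predict(rng.random(784)).shape)    # -> (10,)

The neighbourhood restriction is what the abstract credits with avoiding catastrophic forgetting: prototypes far from the best-matching unit are left untouched when new input statistics arrive, and the error gate keeps the SOM from drifting once the task is already solved.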
Main file

gepperth2015ressource.pdf (151.16 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01251015, version 1 (05-01-2016)

Identifiers

  • HAL Id: hal-01251015, version 1

Cite

Alexander Gepperth, Mathieu Lefort, Thomas Hecht, Ursula Körner. Resource-efficient incremental learning in very high dimensions. European Symposium on Artificial Neural Networks (ESANN), Apr 2015, Bruges, Belgium. ⟨hal-01251015⟩
255 Views
99 Downloads
