Preprint / working paper. Year: 2019

Feature and structural learning of memory sequences with recurrent and gated spiking neural networks using free-energy: application to speech perception and production II

Abstract

We present a framework based on iterative free-energy optimization with spiking neural networks for modeling the fronto-striatal system (PFC-BG) for the generation and recall of audio memory sequences. In line with neuroimaging studies of the PFC, we propose a novel coding strategy that uses a gain-modulation mechanism to represent abstract sequences solely by the rank and location of the items within them. Based on this mechanism, we show that we can construct a repertoire of neurons sensitive to the temporal structure of sequences, from which any novel sequence can be represented. Free-energy optimization is then used to explore and retrieve the missing indices of the items in the correct order for executive control and compositionality. We show that gain modulation makes the network robust to variability and able to capture long-term dependencies, since it implements a gated recurrent neural network. This model, called Inferno Gate, is an extension of the neural architecture INFERNO, standing for Iterative Free-Energy Optimization of Recurrent Neural Networks, with Gating or Gain-modulation. In experiments on an audio database of ten thousand MFCC vectors, Inferno Gate efficiently encodes and retrieves chunks of fifty items in length. We then discuss the potential of our network to model features of working memory in the PFC-BG loop for structural learning, goal-directed behavior and hierarchical reinforcement learning.
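As an illustration of the rank-and-item coding idea described above, the following minimal sketch shows how a gain-modulation (multiplicative) mechanism can bind an item's identity to its ordinal position, so that a whole sequence is represented by a population code over (item, rank) conjunctions and items can be read back rank by rank. This is an assumption-laden, rate-based toy in Python, not the paper's spiking implementation: the dimensions, the random tuning vectors, and the simple matching-based `recall` (a stand-in for the paper's free-energy-driven search over missing indices) are all illustrative choices, not taken from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy dimensions (not taken from the paper).
n_items = 20      # size of the item repertoire (e.g., quantized MFCC indices)
n_ranks = 50      # maximum sequence length (ordinal positions)
n_neurons = 200   # structural neurons tuned to (item, rank) conjunctions

# Each structural neuron has an item-tuning vector and a rank-tuning vector.
item_tuning = rng.standard_normal((n_neurons, n_items))
rank_tuning = rng.standard_normal((n_neurons, n_ranks))

def gain_modulated_response(item_idx, rank_idx):
    """Multiplicative (gain-modulation) combination: the rank signal scales
    the gain of the item-driven response, yielding a conjunctive code."""
    item_drive = item_tuning[:, item_idx]
    rank_gain = rank_tuning[:, rank_idx]
    return item_drive * rank_gain

def encode_sequence(sequence):
    """Represent a whole sequence as the population response accumulated
    over its (item, rank) pairs."""
    return sum(gain_modulated_response(item, rank)
               for rank, item in enumerate(sequence))

def recall(code, rank_idx, candidates):
    """Retrieve the item whose gain-modulated pattern at the given rank best
    matches the stored code (a crude stand-in for free-energy minimization)."""
    scores = [code @ gain_modulated_response(item, rank_idx)
              for item in candidates]
    return candidates[int(np.argmax(scores))]

# Usage: encode a short sequence and read it back rank by rank.
sequence = [3, 7, 1, 12, 5]
code = encode_sequence(sequence)
decoded = [recall(code, r, list(range(n_items))) for r in range(len(sequence))]
print(sequence, decoded)
```

Because the rank signal enters multiplicatively rather than additively, the same item produces distinct population patterns at different positions, which is what lets the structural (rank) code be reused across arbitrary novel sequences.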
Main file
NN_inferno2_190226.pdf (3.28 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02140049, version 1 (26-05-2019)

Identifiers

  • HAL Id: hal-02140049, version 1

Cite

Alexandre Pitti, Mathias Quoy, Catherine Lavandier, Sofiane Boucenna. Feature and structural learning of memory sequences with recurrent and gated spiking neural networks using free-energy: application to speech perception and production II. 2019. ⟨hal-02140049⟩
334 Views
132 Downloads
