Conference paper, 2016

Learning to be attractive: probabilistic computation with dynamic attractor networks

Abstract

In the context of sensory or higher-level cognitive processing, we present a recurrent neural network model, similar to the popular dynamic neural field (DNF) model, for performing approximate probabilistic computations. The model is biologically plausible, avoids impractical schemes such as log-encoding and noise assumptions, and is well suited to operating in stacked hierarchies. Through Lyapunov analysis, we make it very plausible that the model computes the maximum a posteriori (MAP) estimate for a given input, which may be corrupted by noise. Key points of the model are its ability to learn the required posterior distributions and represent them in its lateral weights, the interpretation of stable neural activities as MAP estimates, and the interpretation of response latency as the probability associated with those estimates. We demonstrate in simple experiments that learning posterior distributions is feasible and yields correct MAP estimates. Furthermore, a pre-activation of field sites can modify attractor states when the data model is ambiguous, effectively providing an approximate implementation of Bayesian inference.
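The attractor behavior described in the abstract can be illustrated with a standard Amari-style dynamic neural field simulation. The sketch below is a minimal, hypothetical illustration, not the authors' implementation: the paper's model learns its lateral weights to encode posterior distributions, whereas here a fixed difference-of-Gaussians kernel plus global inhibition stands in for them, and all parameter values are illustrative assumptions. It shows how an ambiguous two-peak input is resolved into a single stable activity peak (the MAP-like estimate), and how a pre-activation of field sites biases which attractor is selected.

```python
import numpy as np

# Minimal Amari-style dynamic neural field: an illustrative sketch only.
# The paper's model *learns* its lateral weights; a fixed kernel stands in
# here. All parameter values below are assumptions, not from the paper.

N = 100                               # number of field sites
x = np.arange(N, dtype=float)
tau, h, dt = 10.0, -2.0, 1.0          # time constant, resting level, Euler step

def bump(center, sigma, amp=1.0):
    """Gaussian activity profile over the field."""
    return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)

# Lateral kernel: short-range excitation, longer-range inhibition.
dists = np.abs(np.subtract.outer(x, x))
W = np.exp(-dists**2 / (2 * 3.0**2)) - 0.5 * np.exp(-dists**2 / (2 * 10.0**2))

s = bump(30, 4, amp=3.0) + bump(70, 4, amp=3.0)  # ambiguous two-peak input
pre = bump(70, 4, amp=1.0)                       # pre-activation favoring site 70

u = np.full(N, h)                     # membrane potentials at resting level
for _ in range(1000):                 # Euler integration of the field dynamics
    r = 1.0 / (1.0 + np.exp(-u))      # sigmoid firing rates
    lateral = W @ r - 0.3 * r.sum()   # DoG interaction + global inhibition
    u += (dt / tau) * (-u + h + s + pre + lateral)

print("stable peak (MAP-like estimate) at site:", int(np.argmax(u)))
```

With the pre-activation term included, the single surviving activity peak settles at the biased site, mirroring the abstract's point that pre-activation can modify attractor states under ambiguous input; the number of integration steps until the peak stabilizes plays the role of the latency that the paper interprets as a confidence measure.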
Main file: root.pdf (1 MB)
Origin: files produced by the author(s)

Dates and versions

hal-01418141, version 1 (16-12-2016)

Identifiers

  • HAL Id: hal-01418141, version 1

Cite

Alexander Gepperth, Mathieu Lefort. Learning to be attractive: probabilistic computation with dynamic attractor networks. International Conference on Development and Learning (ICDL), 2016, Cergy-Pontoise, France. ⟨hal-01418141⟩
97 views
209 downloads
