Journal article in Network: Computation in Neural Systems, 1994

Nonlinear neurons in the low-noise limit: a factorial code maximizes information transfer

Abstract

We investigate the consequences of maximizing information transfer in a simple neural network (one input layer, one output layer), focusing on the case of nonlinear transfer functions. We assume that both receptive fields (synaptic efficacies) and transfer functions can be adapted to the environment. The main result is that, for bounded and invertible transfer functions, in the case of a vanishing additive output noise and no input noise, maximization of information (Linsker's infomax principle) leads to a factorial code, hence to the same solution as required by the redundancy reduction principle of Barlow or, in signal-processing language, by Independent Component Analysis (ICA). We also show that this result holds for linear and, more generally, unbounded transfer functions, provided optimization is performed under an additive constraint, that is, one that can be written as a sum of terms, each specific to one output neuron. Finally, we study the effect of a nonzero input noise. We find that, to first order in the input noise, assumed to be small compared to the (already small) output noise, the above results remain valid, provided the output noise is uncorrelated from one neuron to another.
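
The chain of reasoning behind the main result can be summarized in a short derivation. The sketch below uses illustrative notation (outputs y_i = f_i(h_i), with h_i a weighted sum of the inputs, and N output neurons); it is an outline of the low-noise infomax argument under the stated assumptions, not a reproduction of the paper's derivation.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Low-noise infomax sketch: why maximizing I(X;Y) forces a factorial code.
% Assumed setup (illustrative): y_i = f_i(h_i) + nu_i with h_i = \sum_j w_{ij} x_j,
% f_i bounded and invertible, and vanishing additive output noise nu_i.
With vanishing additive output noise, the conditional entropy $H(Y \mid X)$
reduces to the noise entropy, which depends neither on the weights nor on the
transfer functions, so
\begin{equation}
  I(X;Y) = H(Y) - H(Y \mid X) \simeq H(Y) + \mathrm{const}.
\end{equation}
The joint output entropy splits into marginal and redundancy terms,
\begin{equation}
  H(Y) = \sum_{i=1}^{N} H(y_i) - I(y_1,\dots,y_N),
  \qquad I(y_1,\dots,y_N) \ge 0,
\end{equation}
where $I(y_1,\dots,y_N)$ is the mutual information among the outputs.
Each marginal entropy $H(y_i)$ is bounded above (the output range is bounded)
and is saturated when $y_i$ is uniform on that range, which the network can
achieve by matching $f_i$ to the cumulative distribution of its summed input
$h_i$. The maximum of $I(X;Y)$ therefore also requires the redundancy term to
vanish,
\begin{equation}
  I(y_1,\dots,y_N) = 0
  \iff
  p(y_1,\dots,y_N) = \prod_{i=1}^{N} p(y_i),
\end{equation}
i.e.\ a factorial code.
\end{document}

Each step mirrors a clause of the abstract: the constant noise-entropy term needs the output noise to vanish, the saturated marginal terms need bounded invertible transfer functions adapted to the environment, and the vanishing redundancy term is exactly Barlow's redundancy-reduction criterion, i.e. the ICA condition.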
No file deposited

Dates and versions

hal-00143844, version 1 (27-04-2007)

Identifiers

HAL Id: hal-00143844
DOI: 10.1088/0954-898X/5/4/008

Cite

Jean-Pierre Nadal, Nestor Parga. Nonlinear neurons in the low-noise limit: a factorial code maximizes information transfer. Network: Computation in Neural Systems, 1994, 5 (4), pp. 565-581. ⟨10.1088/0954-898X/5/4/008⟩. ⟨hal-00143844⟩