Beyond $\ell_1$ sparse coding in V1
Abstract
Growing evidence indicates that only a sparse subset of a pool of sensory neurons is active for the encoding of visual stimuli at any instant in time. Traditionally, to replicate such biological sparsity, generative models have used the $\ell_1$ norm as a penalty because its convexity makes it amenable to fast and simple algorithmic solvers. In this work, we use biological vision as a test-bed and show that the soft-thresholding operation associated with the $\ell_1$ norm performs substantially worse than operators derived from functions suited to approximating $\ell_p$ with $0 \le p < 1$, including recently proposed continuous exact relaxations. We show that $\ell_1$ sparsity requires a larger pool of neurons, i.e. a higher degree of overcompleteness, to maintain the same reconstruction error as the other methods considered. More specifically, at the same sparsity level, the thresholding algorithm using the $\ell_1$ penalty requires a dictionary with ten times more units than the proposed approach, which uses a non-convex continuous relaxation of the $\ell_0$ pseudo-norm, to reconstruct the external stimulus equally well. At a fixed sparsity level, both $\ell_0$- and $\ell_1$-based regularization develop units with receptive field (RF) shapes similar to biological neurons in V1 (and a subset of neurons in V2), but $\ell_0$-based regularization yields approximately five times better reconstruction of the stimulus. Our results, in conjunction with recent metabolic findings, indicate that for V1 to operate efficiently it should follow a coding regime whose regularization is closer to the $\ell_0$ pseudo-norm than to the $\ell_1$ norm, and they suggest a similar mode of operation for the sensory cortex in general.
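For illustration only, the minimal Python/NumPy sketch below (not taken from the paper's code) contrasts the soft-thresholding operator induced by an $\ell_1$ penalty with the hard-thresholding operator induced by the $\ell_0$ pseudo-norm, as they would appear in an ISTA-style sparse-coding update; the weight `lam` is a hypothetical sparsity parameter.

```python
import numpy as np

def soft_threshold(a, lam):
    """Proximal operator of lam * ||a||_1: shrinks all coefficients toward zero."""
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def hard_threshold(a, lam):
    """Proximal operator of lam * ||a||_0: zeroes small coefficients, keeps large ones unshrunk."""
    return np.where(np.abs(a) > np.sqrt(2.0 * lam), a, 0.0)

# Toy coefficient vector (hypothetical values, for illustration only).
a = np.array([-1.5, -0.2, 0.05, 0.8, 2.0])
print(soft_threshold(a, 0.5))  # small entries vanish, large ones are biased toward zero
print(hard_threshold(a, 0.5))  # small entries vanish, large ones are kept exactly
```

The contrast highlights why the choice of penalty matters: the $\ell_1$ operator both selects and shrinks active coefficients, whereas $\ell_0$-like operators select without shrinking, which is one intuition for the reconstruction gap reported above.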