Spike timing-based unsupervised learning of orientation, disparity, and motion representations in a spiking neural network
Abstract
Neuromorphic vision sensors present unique advantages over their frame-based counterparts. However, unsupervised learning of efficient visual representations from their asynchronous output remains a challenge, requiring a rethinking of traditional image and video processing methods. Here we present a network of leaky integrate-and-fire neurons that learns representations similar to those of simple and complex cells in the primary visual cortex of mammals from the input of two event-based vision sensors. Through the combination of spike timing-dependent plasticity and homeostatic mechanisms, the network learns visual feature detectors for orientation, disparity, and motion in a fully unsupervised fashion. We validate our approach on a mobile robotic platform.
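To make the learning mechanism concrete, the following is a minimal sketch of a single leaky integrate-and-fire neuron trained with pair-based spike timing-dependent plasticity. All constants, variable names, and the trace-based STDP formulation are illustrative assumptions, not the authors' implementation; the homeostatic mechanisms mentioned in the abstract (e.g., adaptive thresholds) are omitted for brevity.

```python
import numpy as np

# Hypothetical parameter values, not taken from the paper.
DT = 1e-3          # simulation time step (s)
TAU_M = 20e-3      # membrane time constant (s)
V_THRESH = 1.0     # firing threshold
V_RESET = 0.0      # reset potential after a spike
TAU_PLUS = 20e-3   # STDP potentiation time constant (s)
TAU_MINUS = 20e-3  # STDP depression time constant (s)
A_PLUS = 0.01      # potentiation learning rate
A_MINUS = 0.012    # depression learning rate (slightly larger for stability)

rng = np.random.default_rng(0)
n_inputs = 100
weights = rng.uniform(0.0, 0.5, n_inputs)

v = V_RESET                      # membrane potential
pre_trace = np.zeros(n_inputs)   # presynaptic eligibility traces
post_trace = 0.0                 # postsynaptic eligibility trace

def step(input_spikes):
    """Advance the neuron one time step given a binary input spike vector."""
    global v, post_trace, pre_trace, weights
    # Leaky integration of weighted input events.
    v += DT / TAU_M * (-v) + weights @ input_spikes
    # Decay eligibility traces, then bump the presynaptic ones for new spikes.
    pre_trace *= np.exp(-DT / TAU_PLUS)
    post_trace *= np.exp(-DT / TAU_MINUS)
    pre_trace += input_spikes
    # Depression: a presynaptic spike arriving after a postsynaptic one.
    weights -= A_MINUS * post_trace * input_spikes
    fired = v >= V_THRESH
    if fired:
        v = V_RESET
        post_trace += 1.0
        # Potentiation: a postsynaptic spike following recent presynaptic spikes.
        weights += A_PLUS * pre_trace
    np.clip(weights, 0.0, 1.0, out=weights)
    return fired

# Drive the neuron with random Poisson-like input events (stand-in for sensor output).
for _ in range(1000):
    step((rng.random(n_inputs) < 0.02).astype(float))
print("mean weight after learning:", weights.mean())
```

In a setup of this kind, correlated event streams from the sensors cause weights onto co-active inputs to potentiate while uncorrelated ones depress, which is the basic route by which selectivity to features such as orientation can emerge without labels.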