Unsupervised Learning of Spatio-Temporal Receptive Fields from an Event-Based Vision Sensor
Abstract
Neuromorphic vision sensors exhibit several advantages over conventional frame-based cameras, including low latency, high dynamic range, and low data rates. However, how efficient visual representations can be learned from the output of such sensors in an unsupervised fashion is still an open problem. Here we present a spiking neural network that learns spatio-temporal receptive fields in an unsupervised way from the output of a neuromorphic event-based vision sensor. Learning relies on the combination of spike-timing-dependent plasticity with different synaptic delays, homeostatic regulation of synaptic weights and firing thresholds, and fast inhibition among neurons to decorrelate their responses. Our network develops biologically plausible spatio-temporal receptive fields when trained on real-world input and is suited for implementation on neuromorphic hardware.
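The abstract names three interacting mechanisms: STDP applied across multiple synaptic delay lines, homeostatic regulation of weights and firing thresholds, and fast lateral inhibition. The sketch below is a minimal illustration, assuming a pair-based STDP rule, a small fixed set of delays, and simple rate-based threshold adaptation; all parameter names and values are illustrative assumptions, not taken from the paper, and the lateral inhibition stage is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_OUT = 128, 16                  # input pixels / output neurons (illustrative sizes)
DELAYS_MS = np.array([1, 5, 10, 20])   # assumed set of synaptic delays (ms)
TAU_STDP = 20.0                        # STDP time constant (ms), assumed
A_PLUS, A_MINUS = 0.01, 0.012          # potentiation / depression amplitudes, assumed
W_TOTAL = 10.0                         # target total incoming weight per neuron, assumed
TARGET_RATE = 5.0                      # target firing rate (Hz) for threshold homeostasis, assumed
ETA_THETA = 0.1                        # threshold adaptation rate, assumed

# One weight matrix per delay line: shape (n_delays, N_OUT, N_IN)
w = rng.uniform(0.0, 0.5, size=(len(DELAYS_MS), N_OUT, N_IN))
theta = np.ones(N_OUT)                 # adaptive firing thresholds


def stdp_update(w_d, pre_times, post_times):
    """Pair-based STDP for one delay line.

    pre_times[i]  : last spike time (ms) of input i, shifted by the line's delay
    post_times[j] : last spike time (ms) of output neuron j
    """
    dt = post_times[:, None] - pre_times[None, :]          # post minus (delayed) pre
    ltp = A_PLUS * np.exp(-dt / TAU_STDP) * (dt > 0)       # pre before post -> potentiate
    ltd = A_MINUS * np.exp(dt / TAU_STDP) * (dt < 0)       # post before pre -> depress
    w_d += ltp - ltd
    np.clip(w_d, 0.0, 1.0, out=w_d)                        # keep weights bounded
    return w_d


def homeostasis(w, theta, rates):
    """Homeostatic regulation: rescale incoming weights and adapt firing thresholds."""
    total = w.sum(axis=(0, 2), keepdims=True) + 1e-9       # total input weight per output neuron
    w *= W_TOTAL / total                                    # keep total incoming weight constant
    theta += ETA_THETA * (rates - TARGET_RATE) / TARGET_RATE  # push firing rates toward target
    return w, theta


# Toy update with random spike times and rates, just to show the call pattern
pre = rng.uniform(0, 100, N_IN)
post = rng.uniform(0, 100, N_OUT)
rates = rng.uniform(0, 10, N_OUT)
for d, delay in enumerate(DELAYS_MS):
    w[d] = stdp_update(w[d], pre + delay, post)
w, theta = homeostasis(w, theta, rates)
```

Because each delay line has its own weight matrix, the same pre-synaptic event can potentiate one delay line and depress another, which is what lets the learned receptive fields become selective in time as well as in space.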