OxRAM+OTS optimization for Binarized Neural Network hardware implementation
Abstract
Low-power memristive devices embedded in GPU or CPU logic cores are a very promising non-von Neumann approach to significantly improve the speed and power consumption of Deep Learning accelerators, easing their deployment on embedded systems. Among the various non-ideal emerging neuromorphic memory devices, hardware implementation of synaptic weights using RRAM memories in 1T1R architectures promises high performance for low-precision Binarized Neural Networks (BNN). Taking advantage of the RRAM capabilities and substantially improving density thanks to the OTS selector, this work proposes to replace the standard 1T1R architecture with a denser 1S1R crossbar system, where a HfO2-based OxRAM is co-integrated with a Ge-Se-Sb-N-based OTS. In this context, an extensive experimental study is performed to optimize the 1S1R stack and programming conditions for an extended Read Window Margin and improved endurance. Focusing on the standard MNIST image recognition task, we perform offline training simulations to define the constraints imposed on the devices during training. A very promising Bit Error Rate of ~10⁻⁴ is demonstrated, together with 10⁴ error-free programming cycles of the 1S1R stack, fulfilling the requirements of the targeted application. Based on this simulation and experimental study, BNN figures of merit (system footprint, number of weight updates, accuracy, and tolerance to errors) are optimized by engineering the number of learnable parameters of the system. Altogether, an inherent BNN resilience to 1S1R parasitic bit errors is demonstrated.
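To illustrate the kind of error-tolerance evaluation described above, the following minimal sketch injects random bit flips into the stored binary weights of a toy fully-connected BNN and measures how many predictions change. The network shape, the random "trained" weights, the input data, and the swept BER values are illustrative assumptions, not the authors' actual MNIST simulation setup; only the ~10⁻⁴ BER figure comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(w):
    """Map real-valued weights to {-1, +1} (sign binarization)."""
    return np.where(w >= 0, 1.0, -1.0)

def inject_bit_errors(w_bin, ber, rng):
    """Flip each stored binary weight with probability `ber`,
    emulating 1S1R parasitic bit errors."""
    flips = rng.random(w_bin.shape) < ber
    return np.where(flips, -w_bin, w_bin)

def bnn_forward(x, weights):
    """Forward pass of a toy fully-connected BNN with sign activations."""
    a = x
    for i, w in enumerate(weights):
        a = a @ w
        if i < len(weights) - 1:  # keep real-valued logits at the output layer
            a = np.where(a >= 0, 1.0, -1.0)
    return a

# Toy "offline-trained" binarized weights (random here; trained on MNIST in practice).
shapes = [(784, 256), (256, 10)]
weights = [binarize(rng.standard_normal(s)) for s in shapes]

# Toy binarized inputs standing in for MNIST images.
x = np.where(rng.standard_normal((1000, 784)) >= 0, 1.0, -1.0)
clean_pred = bnn_forward(x, weights).argmax(axis=1)

for ber in (1e-4, 1e-3, 1e-2):
    noisy = [inject_bit_errors(w, ber, rng) for w in weights]
    noisy_pred = bnn_forward(x, noisy).argmax(axis=1)
    mismatch = np.mean(noisy_pred != clean_pred)
    print(f"BER={ber:.0e}: {100 * mismatch:.2f}% of predictions change")
```

Sweeping the BER in this way gives a rough sense of the margin between the demonstrated ~10⁻⁴ error rate and the point where accuracy starts to degrade; the actual study relies on trained MNIST models rather than this random-weight stand-in.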
Origin: Files produced by the author(s)