Non-Sum-Separable Energy System Considerations for Equilibrium Propagation
Abstract
Implementing Stochastic Gradient Descent (SGD)-based algorithms in analog neural networks, particularly those built with programmable resistors, introduces significant challenges, most notably in computing and applying weight updates during the backward pass. Hardware-friendly deep learning algorithms are therefore essential for unlocking the full capabilities of neuromorphic computing architectures. Equilibrium Propagation (EqProp) offers one such approach: it trains analog neural networks by estimating error gradients from the circuit's own equilibrium states, without requiring a separate computational circuit for backpropagation. While the EqProp gradient estimation method has been widely used, existing analyses often assume a sum-separable energy function, which can lead to incomplete or inaccurate expressions of a network's true learning dynamics. In this talk, we will discuss the EqProp algorithm and introduce a gradient formulation based on conventional electrical power that aims to represent the energy function of analog neural circuits, viewed as energy-based models, more accurately.
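For context, a minimal sketch of the standard two-phase EqProp gradient estimate (Scellier & Bengio, 2017), under the usual assumptions rather than the talk's specific formulation: given an energy function E(θ, s) over parameters θ and state s, a cost C, and a nudged total energy F = E + βC, the network is relaxed to a free equilibrium s⁰ (β = 0) and a nudged equilibrium s^β (β > 0), and the loss gradient is approximated by the finite difference

\[
\frac{\partial \mathcal{L}}{\partial \theta}
\;\approx\;
\frac{1}{\beta}
\left(
\frac{\partial E}{\partial \theta}\big(\theta, s^{\beta}\big)
-
\frac{\partial E}{\partial \theta}\big(\theta, s^{0}\big)
\right).
\]

Both phases run the same physical dynamics, which is what makes the scheme attractive for analog hardware; the symbols θ, s, β, E, and C are illustrative notation, not taken from the talk itself.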