Towards an efficient computation of masks for multichannel speech enhancement
Abstract
Most recent advances in speech enhancement (SE) have been enabled by complex deep neural network (DNN) architectures. Although the results are convincing, such architectures are not yet usable in small wearable devices like hearing aids. In this paper, we propose a DNN-based SE method that exploits spatial information to simplify the requirements on the DNN architecture. We show that DNN inference is the most time- and energy-consuming step, and we simplify the architecture of a convolutional recurrent neural network by removing its recurrent layer. This achieves performance comparable to the initial architecture while reducing processing time and energy consumption by a factor of 4.4.
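As an illustration of the simplification described above, the sketch below shows a generic CRN-style mask estimator in PyTorch whose recurrent bottleneck can be disabled. This is not the authors' exact model: the number of layers, channel counts, kernel sizes, and the choice of a GRU are placeholder assumptions made for the example.

```python
# Minimal sketch (assumed layer sizes, not the paper's architecture):
# a CRN-style time-frequency mask estimator whose recurrent bottleneck
# can be switched off, mirroring the simplification in the abstract.
import torch
import torch.nn as nn


class CRNMaskEstimator(nn.Module):
    def __init__(self, n_freq: int = 257, use_recurrent: bool = True):
        super().__init__()
        # Convolutional encoder over (batch, 1, time, freq) features
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 3), stride=(1, 2)), nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=(1, 3), stride=(1, 2)), nn.ELU(),
        )
        # Frequency bins left after two stride-2 convolutions with kernel 3
        f1 = (n_freq - 3) // 2 + 1
        f2 = (f1 - 3) // 2 + 1
        self.use_recurrent = use_recurrent
        if use_recurrent:
            # Recurrent bottleneck: the costly part that the simplified
            # (convolution-only) variant removes.
            self.rnn = nn.GRU(input_size=32 * f2, hidden_size=32 * f2,
                              batch_first=True)
        # Convolutional decoder back to a time-frequency mask in [0, 1]
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=(1, 3), stride=(1, 2),
                               output_padding=(0, 1)), nn.ELU(),
            nn.ConvTranspose2d(16, 1, kernel_size=(1, 3), stride=(1, 2)),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, 1, time, n_freq) log-magnitude spectrogram
        z = self.encoder(x)
        if self.use_recurrent:
            b, c, t, f = z.shape
            seq = z.permute(0, 2, 1, 3).reshape(b, t, c * f)
            seq, _ = self.rnn(seq)
            z = seq.reshape(b, t, c, f).permute(0, 2, 1, 3)
        return self.decoder(z)


# Example: the simplified variant without the recurrent layer
mask = CRNMaskEstimator(use_recurrent=False)(torch.randn(1, 1, 100, 257))
```

Removing the recurrent layer eliminates the sequential, per-frame state update, which is why the simplified variant can be substantially cheaper in time and energy while relying on spatial information to preserve enhancement quality.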