3D sound source localization in reverberant rooms using deep learning and microphone arrays: Simulation and experiments
Abstract
Acoustic source localization is a well-studied topic in array signal processing, one that can benefit from the emergence of data inference tools. We present our recent developments on the use of a deep neural network, BeamLearning [2, 3], fed with raw multichannel audio for 3D sound source localization in reverberant environments. Data-driven approaches avoid the simplifying assumptions that most traditional localization methods rely on. However, for an efficient training process, supervised machine learning algorithms require large, precisely labeled datasets. There is therefore a critical need to generate large amounts of 3D audio data recorded by microphone arrays in various environments. When the dataset is simulated, either with numerical models or with 3D soundfield synthesis, the physical validity of the data is also critical. We therefore use an efficient GPU-based tensor computation of synthetic room impulse responses, built on an image source model with fractional delays. We also present the use of physical 3D soundfield synthesis [1] for the learning process on microphone arrays, and we discuss the advantages of this reproducible and semi-automated process, which accommodates arbitrary array geometries. Finally, we analyze the localization performance of the BeamLearning approach trained on this dataset, which achieves a 3D localization accuracy of up to 5 degrees in a reverberant room.
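To make the room impulse response computation concrete, the sketch below shows one possible vectorized image-source model with windowed-sinc fractional delays, written with PyTorch tensors so it runs on GPU or CPU. This is a minimal illustration under our own assumptions (a shoebox room, a single uniform wall reflection coefficient, and illustrative function and parameter names), not the actual BeamLearning implementation.

```python
import math
import torch

def image_source_rir(room, src, mic, beta, fs=16000, c=343.0,
                     max_order=10, rir_len=8192, kernel_half=32,
                     device="cpu"):
    """Shoebox room impulse response via the image-source model.

    Fractional delays are realized with windowed-sinc interpolation, so
    arrival times are not quantized to the sampling grid; this is one way
    to preserve the physical validity of simulated array datasets.
    """
    room = torch.tensor(room, dtype=torch.float32, device=device)
    src = torch.tensor(src, dtype=torch.float32, device=device)
    mic = torch.tensor(mic, dtype=torch.float32, device=device)

    # Per-dimension image indices (m, q): image coordinate (1-2q)*s + 2*m*L,
    # with |m-q| + |m| wall reflections along that dimension (Allen-Berkley).
    m = torch.arange(-max_order, max_order + 1, device=device)
    q = torch.tensor([0, 1], device=device)
    mm, qq = torch.meshgrid(m, q, indexing="ij")
    mm, qq = mm.reshape(-1), qq.reshape(-1)
    coords = [(1 - 2 * qq) * src[d] + 2 * mm * room[d] for d in range(3)]
    refl = torch.abs(mm - qq) + torch.abs(mm)

    # Cartesian product of the per-dimension images
    n = len(mm)
    ix, iy, iz = torch.meshgrid(torch.arange(n, device=device),
                                torch.arange(n, device=device),
                                torch.arange(n, device=device),
                                indexing="ij")
    ix, iy, iz = ix.reshape(-1), iy.reshape(-1), iz.reshape(-1)
    pos = torch.stack([coords[0][ix], coords[1][iy], coords[2][iz]], dim=1)
    n_refl = (refl[ix] + refl[iy] + refl[iz]).float()

    dist = torch.linalg.norm(pos - mic, dim=1).clamp(min=1e-3)
    amp = beta ** n_refl / (4.0 * math.pi * dist)  # uniform reflection coeff.
    delay = fs * dist / c                          # fractional delay (samples)

    # One windowed-sinc fractional-delay kernel per image source
    n0 = torch.floor(delay).long()
    k = torch.arange(-kernel_half, kernel_half + 1, device=device)
    arg = k[None, :] - (delay - n0.float())[:, None]        # (N_img, K)
    win = 0.5 * (1.0 + torch.cos(math.pi * arg / kernel_half))
    win = win * (arg.abs() <= kernel_half)
    kernel = torch.sinc(arg) * win * amp[:, None]

    # Scatter-add each kernel into the RIR around its integer delay
    idx = n0[:, None] + k[None, :]
    valid = (idx >= 0) & (idx < rir_len)
    rir = torch.zeros(rir_len, device=device)
    rir.scatter_add_(0, idx[valid], kernel[valid])
    return rir

# Example: 5 m x 4 m x 3 m room, one source/receiver pair
rir = image_source_rir(room=[5.0, 4.0, 3.0], src=[1.2, 2.1, 1.5],
                       mic=[3.6, 1.4, 1.3], beta=0.85)
```

Rounding each arrival time to the nearest sample instead would bias the inter-microphone time differences that a localization network learns from, which is why the fractional-delay formulation matters for training data.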