Binaural Sound Source Localization Using a Hybrid Time and Frequency Domain Model
Abstract
This paper introduces a new approach to sound source localization that exploits head-related transfer function (HRTF) characteristics, enabling precise full-sphere localization from raw data. While previous research focused primarily on extensive microphone arrays in the frontal plane, such arrangements often suffer from limited accuracy and robustness when the number of microphones is small. Our model combines the time and frequency domains for sound source localization within a Deep Learning (DL) approach. The proposed model surpasses the current state of the art: it achieves an average angular error of 0.24° and an average Euclidean distance of 0.01 m, whereas the known state of the art yields an average angular error of 19.07° and an average Euclidean distance of 1.08 m. This level of accuracy is of paramount importance for a wide range of applications, including robotics, virtual reality, and aiding individuals with cochlear implants (CI).
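For concreteness, the following is a minimal sketch of how the two reported evaluation metrics (angular error and Euclidean distance) are commonly computed; the paper does not specify its evaluation code, so the function names and the Cartesian vector representation here are assumptions, not the authors' implementation.

```python
import numpy as np

def angular_error_deg(pred, true):
    """Angle in degrees between predicted and ground-truth direction
    vectors of shape (..., 3). Assumed metric definition, not the
    authors' published code."""
    pred = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    true = true / np.linalg.norm(true, axis=-1, keepdims=True)
    cos = np.clip(np.sum(pred * true, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def euclidean_distance_m(pred, true):
    """Straight-line distance in meters between predicted and
    ground-truth source positions in Cartesian coordinates."""
    return np.linalg.norm(pred - true, axis=-1)

# Example: a prediction that is 0.24 degrees off in azimuth
# for a source at 1 m range.
true_pos = np.array([1.0, 0.0, 0.0])
theta = np.radians(0.24)
pred_pos = np.array([np.cos(theta), np.sin(theta), 0.0])
print(angular_error_deg(pred_pos, true_pos))     # ~0.24 degrees
print(euclidean_distance_m(pred_pos, true_pos))  # ~0.004 m
```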