Auditory Cortex-Inspired Spectral Attention Modulation for Binaural Sound Localization in HRTF Mismatch
Abstract
In applications such as noise cancellation and virtual reality, precise sound source localization is crucial. Existing data-driven binaural systems offer high performance in adverse conditions such as noise and reverberation, but they face limitations in real-time operation and suffer performance degradation under HRTF mismatch. Our work introduces a compact Vision Transformer tailored to address these issues, with a primary focus on horizontal speech localization. Inspired by the auditory cortex, our model uniquely incorporates spectral attention mechanisms operating on encoded speech representations. This architecture enhances generalization on the azimuth plane under mismatched HRTFs. Our empirical results show a marked improvement over conventional DNN-, CNN-, and Transformer-based models in both noisy and noise-free environments. Significantly, the proposed model maintains high accuracy in localizing adjacent azimuths, making it well suited to real-world applications.
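The abstract names the architectural idea without implementation detail; as one possible illustration, the PyTorch sketch below shows how a spectral attention gate could re-weight frequency bins of a binaural time-frequency input before a ViT-style encoder. The class name `SpectralAttention`, the tensor layout, and the squeeze-and-excitation-style gating are assumptions made for illustration only, not the authors' implementation.

```python
# Illustrative sketch only: the abstract does not specify the layer design,
# so the shapes, names, and gating scheme below are assumptions.
import torch
import torch.nn as nn


class SpectralAttention(nn.Module):
    """Hypothetical spectral attention: re-weights frequency bins of a
    binaural time-frequency representation before a ViT-style encoder."""

    def __init__(self, n_freq_bins: int, reduction: int = 4):
        super().__init__()
        hidden = max(n_freq_bins // reduction, 1)
        self.gate = nn.Sequential(
            nn.Linear(n_freq_bins, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, n_freq_bins),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels=2 ears, freq, time)
        pooled = x.mean(dim=(1, 3))           # average over ears and time -> (batch, freq)
        weights = self.gate(pooled)           # per-frequency weights in (0, 1)
        return x * weights[:, None, :, None]  # broadcast over ears and time


if __name__ == "__main__":
    spec = torch.randn(8, 2, 128, 64)         # toy binaural spectrogram batch
    out = SpectralAttention(n_freq_bins=128)(spec)
    print(out.shape)                          # torch.Size([8, 2, 128, 64])
```

The gated output would then be patchified and fed to the compact Vision Transformer described in the paper; that downstream stage is not sketched here.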