IDENTIFICATION OF BEST-MATCHING HRTFs FROM BINAURAL SELFIES AND MACHINE LEARNING
Abstract
Augmented reality applications embed synthetic sound events into the listener's real environment. The accuracy of the spatial processing applied to the virtual sound objects is essential to the overall quality of experience. It requires means for automatically identifying the acoustic properties of the environment, including the listener's head-related transfer functions (HRTFs). The long-term aim of this study is the automatic selection of the best-matching HRTF set within a database, given binaural selfies, i.e., signals recorded in arbitrary environments by a listener equipped with in-ear microphones. The approach builds upon prior machine learning methods capable of end-to-end estimation of the direction of incidence of a sound source from binaural signals. The features extracted by such a model are then exploited by an additional model to estimate the best-matching HRTF set among those available in existing databases. The listener's mobility during the recording is an asset: it allows evidence about these features to accumulate, increasing the confidence of the HRTF-set likelihood estimate. As a proof of concept, the method is first applied to synthesized as well as real binaural selfies of listeners whose HRTFs belong to the database, to verify that these sets are indeed selected as best-matching.
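To make the evidence-accumulation idea concrete, the following minimal Python sketch illustrates one plausible reading of that step: per-frame posteriors over candidate HRTF sets are summed in the log domain across the frames gathered while the listener moves, and the set with the highest accumulated evidence is selected. The function `hrtf_posteriors`, the database size `N_SETS`, and the frame count `N_FRAMES` are hypothetical placeholders, not the authors' implementation.

```python
# Hedged sketch of best-matching HRTF-set selection by log-evidence
# accumulation over binaural-selfie frames. All names and shapes are
# illustrative assumptions, not the paper's actual pipeline.
import numpy as np

rng = np.random.default_rng(0)

N_SETS = 50      # candidate HRTF sets in the database (assumed size)
N_FRAMES = 200   # frames recorded while the listener moves around

def hrtf_posteriors(frame: np.ndarray) -> np.ndarray:
    """Placeholder for the learned front end: returns p(set | frame).

    In the study this would be derived from features of an end-to-end
    DOA-estimation network; here it is random, for illustration only.
    """
    logits = rng.normal(size=N_SETS)
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()

# Under a frame-independence assumption, the joint log-likelihood of
# each candidate set is the sum of per-frame log-posteriors (up to a
# prior), so mobility during recording sharpens the estimate.
log_evidence = np.zeros(N_SETS)
for _ in range(N_FRAMES):
    frame = rng.normal(size=(2, 1024))  # one stereo in-ear recording frame
    log_evidence += np.log(hrtf_posteriors(frame) + 1e-12)

best_set = int(np.argmax(log_evidence))
print(f"Best-matching HRTF set index: {best_set}")
```

With a real front end, confidence in the selected set would grow with the number and diversity of frames, which is why the listener's mobility is presented as an asset.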
Origin | Publisher files allowed on an open archive
---|---
License | Copyright (All rights reserved)