Conference paper, 2022

A Proposal-based Paradigm for Self-supervised Sound Source Localization in Videos

Abstract

Humans can easily recognize where and how a sound is produced by watching a scene and listening to the corresponding audio cues. To achieve such cross-modal perception in machines, existing methods only use maps generated by interpolation operations to localize the sound source. As semantic object-level localization is more attractive for potential practical applications, we argue that these existing map-based approaches only provide a coarse-grained and indirect description of the sound source. In this paper, we advocate a novel proposal-based paradigm that can directly perform semantic object-level localization, without any manual annotations. We incorporate the global response map as an unsupervised spatial constraint to weight the proposals according to how well they cover the estimated global shape of the sound source. As a result, our proposal-based sound source localization can be cast as a simpler Multiple Instance Learning (MIL) problem by filtering out those instances that correspond to large sound-unrelated regions. Our method achieves state-of-the-art (SOTA) performance compared to several baselines on multiple datasets.
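To illustrate the proposal-weighting idea described in the abstract, the minimal sketch below scores each box proposal by how well it covers a given response map and keeps the top-scoring boxes as positive instances, in the spirit of the MIL formulation. This is a hypothetical example, not the authors' implementation: the function `weight_proposals`, its coverage and tightness terms, and the `keep_ratio` parameter are assumptions made purely for illustration.

```python
# Illustrative sketch only: NOT the paper's released code. It assumes region
# proposals are axis-aligned boxes and the "global response map" is a 2D
# localization map with higher values where the sound source is estimated.
import numpy as np

def weight_proposals(response_map, proposals, keep_ratio=0.5):
    """Score each box proposal by how well it covers the response map,
    then keep the top fraction as positive instances for MIL-style training.

    response_map: (H, W) array, higher values = more likely sound source.
    proposals:    list of (x1, y1, x2, y2) boxes in pixel coordinates.
    """
    total_mass = response_map.sum() + 1e-8
    scores = []
    for (x1, y1, x2, y2) in proposals:
        inside = response_map[y1:y2, x1:x2].sum()
        coverage = inside / total_mass                        # fraction of the map the box covers
        tightness = inside / ((y2 - y1) * (x2 - x1) + 1e-8)   # penalize overly large boxes
        scores.append(coverage * tightness)
    scores = np.asarray(scores)

    # MIL-style filtering: discard proposals covering mostly sound-unrelated regions.
    order = np.argsort(-scores)
    keep = order[: max(1, int(len(proposals) * keep_ratio))]
    return scores, keep

# Toy usage: a synthetic map with one active region and three candidate boxes.
rmap = np.zeros((64, 64))
rmap[20:40, 20:40] = 1.0
boxes = [(18, 18, 42, 42), (0, 0, 16, 16), (10, 10, 60, 60)]
scores, kept = weight_proposals(rmap, boxes)
print(scores, kept)  # the tight box around the active region scores highest
```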
Main file: Xuan-CVPR-2022.pdf (3.39 MB). Origin: files produced by the author(s).

Dates and versions

hal-03626420 , version 1 (31-03-2022)

Identifiers

HAL Id: hal-03626420
DOI: 10.1109/CVPR52688.2022.00110

Cite

Hanyu Xuan, Zhiliang Wu, Jian Yang, Yan Yan, Xavier Alameda-Pineda. A Proposal-based Paradigm for Self-supervised Sound Source Localization in Videos. CVPR 2022 - IEEE/CVF Conference on Computer Vision and Pattern Recognition, Jun 2022, New Orleans, United States. pp.1-10, ⟨10.1109/CVPR52688.2022.00110⟩. ⟨hal-03626420⟩