Conference paper, Year: 2025

Audio-JEPA: Joint-Embedding Predictive Architecture for Audio Representation Learning

Abstract

Building on the Joint-Embedding Predictive Architecture (JEPA) paradigm, a recent self-supervised learning framework that predicts latent representations of masked regions in high-level feature spaces, we propose Audio-JEPA (Audio Joint-Embedding Predictive Architecture), tailored specifically to audio data. Audio-JEPA uses a simple Vision Transformer backbone to predict latent representations of masked spectrogram patches rather than reconstructing raw audio. We pre-train on unlabeled AudioSet clips (10 s, 32 kHz) with random patch masking of mel-spectrograms, and evaluate on the X-ARES suite covering speech, music, and environmental sound tasks. Although our implementation is a straightforward adaptation of the original model to audio, it achieves performance comparable to wav2vec 2.0 and data2vec while using less than one-fifth of their training data and no hyper-parameter tuning. All code and pretrained checkpoints will be released on GitHub.
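
As a rough illustration of the training objective summarized above, the sketch below shows JEPA-style pre-training on mel-spectrogram patches in PyTorch: a context encoder sees the visible patches, a small predictor estimates the latent representations that a frozen target encoder produces for the masked patches, and the loss is computed in latent space rather than on raw audio. All module sizes, the patch size, the mask ratio, and the spectrogram dimensions are illustrative assumptions, not the configuration used in the paper.

```python
# Illustrative sketch only: predict latent representations of masked
# mel-spectrogram patches from the visible ones. All sizes (patch size,
# embedding dim, depth, mask ratio) and the frozen target encoder are
# assumptions for illustration, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

PATCH, DIM = 16, 256
N_PATCHES = (128 // PATCH) * (992 // PATCH)   # 128 mel bins x 992 frames -> 8 x 62 patches

class TinyViT(nn.Module):
    """Small Transformer encoder standing in for the Vision Transformer backbone."""
    def __init__(self, dim=DIM, depth=4):
        super().__init__()
        self.embed = nn.Linear(PATCH * PATCH, dim)
        self.pos = nn.Parameter(torch.zeros(1, N_PATCHES, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, patches, idx=None):
        pos = self.pos if idx is None else self.pos[:, idx]
        return self.blocks(self.embed(patches) + pos)

class Predictor(nn.Module):
    """Predicts target-encoder latents at the masked positions from context latents."""
    def __init__(self, dim=DIM, depth=2):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, N_PATCHES, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, ctx, masked_idx):
        queries = (self.mask_token + self.pos[:, masked_idx]).expand(ctx.size(0), -1, -1)
        out = self.blocks(torch.cat([ctx, queries], dim=1))
        return out[:, -queries.size(1):]          # predictions for the masked patches only

def patchify(mel):
    """(B, 128, 992) mel-spectrogram -> (B, N_PATCHES, PATCH*PATCH) flattened patches."""
    B, n_mels, n_frames = mel.shape
    x = mel.reshape(B, n_mels // PATCH, PATCH, n_frames // PATCH, PATCH)
    return x.permute(0, 1, 3, 2, 4).reshape(B, -1, PATCH * PATCH)

context_enc, target_enc, predictor = TinyViT(), TinyViT(), Predictor()
target_enc.load_state_dict(context_enc.state_dict())   # in practice: EMA of the context encoder
for p in target_enc.parameters():
    p.requires_grad_(False)

def jepa_step(mel, mask_ratio=0.75):
    patches = patchify(mel)
    perm = torch.randperm(patches.size(1))               # random patch masking
    n_mask = int(mask_ratio * patches.size(1))
    masked_idx, visible_idx = perm[:n_mask], perm[n_mask:]

    with torch.no_grad():                                 # targets are latents, not raw audio
        targets = target_enc(patches)[:, masked_idx]
    ctx = context_enc(patches[:, visible_idx], visible_idx)   # encode visible patches only
    preds = predictor(ctx, masked_idx)
    return F.mse_loss(preds, targets)

mel = torch.randn(2, 128, 992)   # two clips as 128-bin mel-spectrograms (e.g. ~10 s at 32 kHz)
print(jepa_step(mel).item())
```

In the actual model the target encoder would typically be an exponential-moving-average copy of the context encoder; it is simply frozen here for brevity. The defining property, as stated in the abstract, is that the regression targets are latent patch representations rather than reconstructed spectrogram or waveform values.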

Main file

Audio-JEPA.pdf (671.42 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-05128180, version 1 (24-06-2025)

Cite

Ludovic Tuncay, Etienne Labbé, Emmanouil Benetos, Thomas Pellegrini. Audio-JEPA: Joint-Embedding Predictive Architecture for Audio Representation Learning. ICME 2025, Jun 2025, Nantes, France. ⟨hal-05128180⟩
2923 Views
902 Downloads
