Conference paper, 2024

Challenge on Sound Scene Synthesis: Evaluating Text-to-Audio Generation

Junwon Lee
Laurie M Heller
Keunwoo Choi
Brian McFee
Keisuke Imoto
Yuki Okamoto

Abstract

Despite significant advancements in neural text-to-audio generation, challenges persist in controllability and evaluation. This paper addresses these issues through the Sound Scene Synthesis challenge held as part of the Detection and Classification of Acoustic Scenes and Events (DCASE) 2024 challenge. We present an evaluation protocol that combines an objective metric, the Fréchet Audio Distance (FAD), with perceptual assessments, using a structured prompt format to enable diverse captions and effective evaluation. Our analysis reveals varying performance across sound categories and model architectures: larger models generally excel, but innovative lightweight approaches also show promise. The strong correlation between objective metrics and human ratings validates our evaluation approach. We discuss outcomes in terms of audio quality, controllability, and architectural considerations for text-to-audio synthesizers, providing directions for future research.
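
For reference, the Fréchet Audio Distance mentioned in the abstract is typically computed by fitting multivariate Gaussians to embeddings of the reference and generated audio and taking the Fréchet distance between them; the formulation below is the standard one, with the embedding model and other challenge-specific details left unspecified since they are not stated here.

\[
\mathrm{FAD} = \lVert \mu_r - \mu_g \rVert_2^2 + \operatorname{tr}\!\left(\Sigma_r + \Sigma_g - 2\left(\Sigma_r \Sigma_g\right)^{1/2}\right)
\]

Here $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ are the mean and covariance of the embedding distributions of the reference and generated audio sets, respectively; lower values indicate that the generated audio is distributed more closely to the reference audio.
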
Main file
DCASE_Sound_Scene_Synthesis_challenge____Neurips.pdf (2.2 MB)
Origin: files produced by the author(s)

Dates and versions

hal-04794208, version 1 (20-11-2024)

Identifiers

  • HAL Id: hal-04794208, version 1

Cite

Junwon Lee, Modan Tailleur, Laurie M Heller, Keunwoo Choi, Mathieu Lagrange, et al. Challenge on Sound Scene Synthesis: Evaluating Text-to-Audio Generation. AudioImagination Workshop @ NeurIPS 2024, 2024, Vancouver (BC), Canada. ⟨hal-04794208⟩