Conference Papers, Year: 2024

Multimodal Integration in Audio-Visual Speech Recognition: How Far Are We From Human-Level Robustness?

Abstract

This paper introduces a novel evaluation framework, inspired by methods from human psychophysics, to systematically assess the robustness of multimodal integration in audio-visual speech recognition (AVSR) models relative to human abilities. We present preliminary results on AV-HuBERT [Shi et al., 2022a,b] suggesting that multimodal integration in state-of-the-art (SOTA) AVSR models remains mediocre when compared to human performance, and we discuss avenues for improvement.
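For illustration only, here is a minimal Python sketch (not the authors' code) of what a psychophysics-style robustness evaluation could look like: recognition accuracy is measured as a function of acoustic signal-to-noise ratio (SNR), in audio-only versus audio-visual conditions, yielding psychometric-style curves that could be compared against human data. The `transcribe` function, the positional word-accuracy metric, and the dataset format are hypothetical placeholders; a real evaluation would plug in a model such as AV-HuBERT and a standard word error rate.

```python
from typing import Optional
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`.

    Assumes `noise` is at least as long as `speech`.
    """
    noise = noise[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12  # avoid division by zero
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

def transcribe(audio: np.ndarray, video: Optional[np.ndarray]) -> str:
    """Hypothetical AVSR stand-in; replace with a real model call
    (e.g., AV-HuBERT inference). `video=None` means audio-only."""
    raise NotImplementedError

def word_accuracy(ref: str, hyp: str) -> float:
    """Crude positional word-match score; a real study would use WER."""
    ref_w, hyp_w = ref.split(), hyp.split()
    hits = sum(r == h for r, h in zip(ref_w, hyp_w))
    return hits / max(len(ref_w), 1)

def psychometric_curve(dataset, snr_grid=(-15, -10, -5, 0, 5, 10), use_video=True):
    """Mean accuracy per SNR level: one point of the psychometric curve each.

    `dataset` yields (audio, video, noise, reference_text) tuples.
    """
    curve = {}
    for snr in snr_grid:
        scores = []
        for audio, video, noise, text in dataset:
            noisy = mix_at_snr(audio, noise, snr)
            hyp = transcribe(noisy, video if use_video else None)
            scores.append(word_accuracy(text, hyp))
        curve[snr] = float(np.mean(scores))
    return curve
```

Under these assumptions, the multimodal integration gain at each SNR would be the audio-visual accuracy minus the audio-only accuracy, the kind of quantity that can be set side by side with human psychophysical curves.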
Origin: Files produced by the author(s)

Dates and versions

hal-04801000, version 1 (24-11-2024)

Identifiers

  • HAL Id: hal-04801000, version 1

Cite

Marianne Schweitzer, Anna Montagnini, Abdellah Fourtassi, Thomas Schatz. Multimodal Integration in Audio-Visual Speech Recognition: How Far Are We From Human-Level Robustness?. NeurIPS 2024 Workshop on Behavioral Machine Learning, Dec 2024, Vancouver (BC), Canada. ⟨hal-04801000⟩