Conference Papers, Year: 2024

Run-Time Adaptation of Neural Beamforming for Robust Speech Dereverberation and Denoising

Abstract

This paper describes speech enhancement for real-time automatic speech recognition (ASR) in real environments. A standard approach to this task is neural beamforming, which can work efficiently in an online manner. It estimates the masks of clean dry speech from a noisy echoic mixture spectrogram with a deep neural network (DNN) and then computes an enhancement filter used for beamforming. The performance of such a supervised approach, however, degrades drastically under mismatched conditions. This calls for run-time adaptation of the DNN. Although the ground-truth speech spectrogram required for adaptation is not available at run time, blind dereverberation and separation methods such as weighted prediction error (WPE) and fast multichannel nonnegative matrix factorization (FastMNMF) can be used to generate pseudo ground-truth data from a mixture. Based on this idea, a prior work proposed a dual-process system based on a cascade of WPE and minimum variance distortionless response (MVDR) beamforming, asynchronously fine-tuned by block-online FastMNMF. To integrate the dereverberation capability into neural beamforming and make it fine-tunable at run time, we propose to use weighted power minimization distortionless response (WPD) beamforming, a unified version of WPE and minimum power distortionless response (MPDR) beamforming, whose joint dereverberation and denoising filter is estimated using a DNN. We evaluated the impact of run-time adaptation under various conditions with different numbers of speakers, reverberation times, and signal-to-noise ratios (SNRs).
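For readers unfamiliar with WPD, the following is a minimal NumPy sketch of how a mask-driven WPD filter can be computed from a multichannel mixture spectrogram, following the standard WPD formulation (a power-weighted spatio-temporal covariance inverted against a zero-padded steering vector). It is an illustrative reconstruction, not the authors' implementation; the function name, the mask-based power estimate, and the eigenvector-based steering vector are assumptions.

```python
# Minimal sketch of mask-driven WPD (weighted power minimization distortionless
# response) beamforming. Illustrative only; hyperparameters and helper choices
# (mask-based power, eigenvector steering vector) are assumptions.
import numpy as np


def wpd_beamform(X, mask, delay=3, taps=5, eps=1e-8):
    """Joint dereverberation and denoising with a WPD convolutional beamformer.

    X    : complex mixture STFT, shape (F, T, M)  [freq, frames, mics]
    mask : DNN-estimated speech mask, shape (F, T), values in [0, 1]
    Returns the enhanced single-channel STFT, shape (F, T).
    """
    F, T, M = X.shape
    D = M * (taps + 1)                  # dimension of the stacked filter
    Y = np.zeros((F, T), dtype=complex)

    for f in range(F):
        Xf = X[f]                                            # (T, M)

        # Stack the current frame with delayed past frames: x_bar_t.
        Xbar = np.zeros((T, D), dtype=complex)
        Xbar[:, :M] = Xf
        for k in range(taps):
            d = delay + k
            Xbar[d:, M * (k + 1): M * (k + 2)] = Xf[: T - d]

        # Time-varying target power from the mask (reference channel 0).
        lam = np.maximum(mask[f] * np.abs(Xf[:, 0]) ** 2, eps)   # (T,)

        # Power-weighted spatio-temporal covariance: R = sum_t x_bar x_bar^H / lam.
        R = (Xbar.T * (1.0 / lam)) @ Xbar.conj() / T             # (D, D)

        # Steering vector: principal eigenvector of the masked speech
        # spatial covariance (an assumed, common choice).
        Phi_s = (Xf.T * mask[f]) @ Xf.conj() / T                 # (M, M)
        _, eigvecs = np.linalg.eigh(Phi_s)
        v = eigvecs[:, -1]
        v_bar = np.concatenate([v, np.zeros(D - M, dtype=complex)])

        # WPD filter: w = R^{-1} v_bar / (v_bar^H R^{-1} v_bar).
        Rinv_v = np.linalg.solve(R + eps * np.eye(D), v_bar)
        w = Rinv_v / (v_bar.conj() @ Rinv_v)

        # Enhanced output: y_t = w^H x_bar_t.
        Y[f] = Xbar @ w.conj()
    return Y
```

In the system described above, the mask (and hence the power estimate) would come from the DNN that is fine-tuned at run time using pseudo ground-truth signals produced by WPE and block-online FastMNMF.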
Main file
apsipa2024_fujita.pdf (2.45 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04736454, version 1 (14-10-2024)

Identifiers

  • HAL Id: hal-04736454, version 1

Cite

Yoto Fujita, Aditya Arie Nugraha, Diego Di Carlo, Yoshiaki Bando, Mathieu Fontaine, et al. Run-Time Adaptation of Neural Beamforming for Robust Speech Dereverberation and Denoising. 2024 APSIPA: Asia-Pacific Signal and Information Processing Association, Dec 2024, Macau, China. ⟨hal-04736454⟩