Conference Papers, Year: 2024

Invariance-based layer regularization for sound event detection

Abstract

Experimental and theoretical evidence suggests that invariance constraints can improve the performance and generalization capabilities of a classification model. While invariance-based regularization has become part of the standard tool belt of machine learning practitioners, it is usually applied near the decision layers or at the end of the feature-extracting layers of a deep classification network. The optimal placement of invariance constraints inside a deep classifier, however, remains an open question. In particular, it would be beneficial to link it to the structural properties of the network (e.g. its architecture) or to its dynamical properties (e.g. the effectively used volume of its latent spaces). The purpose of this article is to initiate an investigation into these aspects. We use the experimental framework of the DCASE 2023 Task 4A challenge, which considers the training of a sound event classifier in a semi-supervised manner. We show that the optimal placement of invariance constraints improves the performance of the standard baseline for this task.
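For illustration only, the sketch below shows one common way to attach an invariance (consistency) penalty to an arbitrary intermediate layer of a PyTorch classifier: the chosen layer's activations are captured with a forward hook, and the distance between the activations of an input and of an augmented copy is added to the training loss. The class name, the layer name, the augmentation, and the mean-squared penalty are assumptions made for this example; they are not taken from the paper, which should be consulted for the actual formulation used in the challenge system.

```python
import torch
import torch.nn as nn


class LayerInvarianceRegularizer:
    """Invariance (consistency) penalty attached to one chosen layer of a model."""

    def __init__(self, model: nn.Module, layer_name: str, weight: float = 1.0):
        self.weight = weight
        self._features = {}
        # Hook the chosen intermediate layer to capture its activations.
        dict(model.named_modules())[layer_name].register_forward_hook(self._hook)

    def _hook(self, module, inputs, output):
        self._features["latest"] = output

    def penalty(self, model, x, x_augmented):
        # Activations of the chosen layer for the clean input ...
        model(x)
        clean = self._features["latest"]
        # ... and for an augmented copy (e.g. added noise or a time shift).
        model(x_augmented)
        augmented = self._features["latest"]
        # Penalize deviations: the layer is encouraged to be invariant to the augmentation.
        return self.weight * torch.mean((clean - augmented) ** 2)


# Hypothetical usage: "cnn.block3" stands in for whichever layer the constraint is placed on.
# reg = LayerInvarianceRegularizer(model, layer_name="cnn.block3", weight=0.5)
# loss = supervised_loss + reg.penalty(model, x, x_augmented)
```

Choosing which layer name to pass to such a regularizer is precisely the placement question the paper investigates; the sketch only shows the mechanics of imposing the constraint once a layer has been chosen.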
Main file: eusipco.pdf (349.11 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04645968, version 1 (15-07-2024)

Identifiers

  • HAL Id: hal-04645968, version 1

Cite

David Perera, Slim Essid, Gaël Richard. Invariance-based layer regularization for sound event detection. European Signal Processing Conference (EUSIPCO), Aug 2024, Lyon, France. ⟨hal-04645968⟩