Conference paper, 2023

Perceptual–Neural–Physical Sound Matching

Abstract

Sound matching algorithms seek to approximate a target waveform by parametric audio synthesis. Deep neural networks have achieved promising results in matching sustained harmonic tones. However, the task is more challenging when targets are nonstationary and inharmonic, e.g., percussion. We attribute this problem to the inadequacy of the loss function. On the one hand, mean square error in the parametric domain, known as "P-loss", is simple and fast but fails to accommodate the differing perceptual significance of each parameter. On the other hand, mean square error in the spectrotemporal domain, known as "spectral loss", is perceptually motivated and underpins differentiable digital signal processing (DDSP). Yet, spectral loss is a poor predictor of pitch intervals, and its gradient may be computationally expensive; hence slow convergence. To resolve this conundrum, we present Perceptual-Neural-Physical loss (PNP). PNP is the optimal quadratic approximation of spectral loss while being as fast as P-loss during training. We instantiate PNP with physical modeling synthesis as the decoder and the joint time-frequency scattering transform (JTFS) as the spectral representation. We demonstrate its potential by matching synthetic drum sounds and comparing PNP against other loss functions.
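To make the contrast between the three losses concrete, here is a minimal numerical sketch, not the authors' code: a toy one-parameter-pair "physical" decoder (a decaying sine standing in for a drum mode) and a log-magnitude spectrum standing in for JTFS. The names synth, spectral_map, and the quadratic form M = JᵀJ evaluated at the target parameters are illustrative assumptions inferred from the abstract, not the paper's exact formulation.

    # Hypothetical sketch contrasting P-loss, spectral loss, and a PNP-style
    # quadratic surrogate on a toy synthesizer. Assumptions are marked inline.
    import numpy as np

    SR, N = 16000, 4096
    t = np.arange(N) / SR

    def synth(theta):
        # Toy "physical" decoder (assumed): exponentially decaying sine.
        # theta = (frequency in Hz, decay rate in 1/s).
        f0, alpha = theta
        return np.exp(-alpha * t) * np.sin(2 * np.pi * f0 * t)

    def spectral_map(x):
        # Stand-in for JTFS (assumed): log-magnitude spectrum.
        return np.log1p(np.abs(np.fft.rfft(x)))

    def p_loss(theta_pred, theta_target):
        # "P-loss": mean square error in the parameter domain.
        return np.mean((np.asarray(theta_pred) - np.asarray(theta_target)) ** 2)

    def spectral_loss(theta_pred, theta_target):
        # "Spectral loss": mean square error between spectral representations.
        diff = spectral_map(synth(theta_pred)) - spectral_map(synth(theta_target))
        return np.mean(diff ** 2)

    def jacobian(theta, eps=1e-4):
        # Finite-difference Jacobian of (spectral_map o synth) at theta.
        theta = np.asarray(theta, dtype=float)
        base = spectral_map(synth(theta))
        cols = []
        for i in range(theta.size):
            d = np.zeros_like(theta)
            d[i] = eps
            cols.append((spectral_map(synth(theta + d)) - base) / eps)
        return np.stack(cols, axis=1)  # shape: (n_features, n_params)

    def pnp_loss(theta_pred, theta_target):
        # PNP-style surrogate: quadratic form d^T M d with M = J^T J
        # evaluated at the target parameters. M can be precomputed once per
        # training example, so no spectral gradient is needed per step.
        J = jacobian(theta_target)
        M = J.T @ J / J.shape[0]  # normalization matches the mean above
        d = np.asarray(theta_pred, dtype=float) - np.asarray(theta_target, dtype=float)
        return float(d @ M @ d)

    theta_star = (440.0, 8.0)                    # hypothetical target
    theta_hat = (442.0, 12.0)                    # a nearby prediction
    print(p_loss(theta_hat, theta_star))         # weighs Hz and 1/s errors alike
    print(spectral_loss(theta_hat, theta_star))  # perceptual, costly to differentiate
    print(pnp_loss(theta_hat, theta_star))       # cheap quadratic approximation

For small parameter errors, the quadratic form agrees with spectral loss up to second order, which is the sense in which the abstract calls PNP an "optimal quadratic approximation"; at training time it costs no more than P-loss, since only the precomputed matrix M enters the gradient.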
Main file: pnp_icassp_han.pdf (275.45 KB). Origin: files produced by the author(s).

Dates and versions

hal-04027307, version 1 (13-03-2023)

Cite

Han Han, Vincent Lostanlen, Mathieu Lagrange. Perceptual–Neural–Physical Sound Matching. Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Jun 2023, Rhodes Island, Greece. pp. 1-5, ⟨10.1109/ICASSP49357.2023.10095391⟩. ⟨hal-04027307⟩