Conference paper - Year: 2019

Subjective evaluation of spatial distortions induced by a sound source separation process

Abstract

The fields of video games, simulation and virtual reality now tend to develop increasingly high-performance, realistic and immersive technologies. Efforts are made, in terms of sound devices and sound processing, to synthesize realistic sound scenes in a 3-D environment. One of the greatest challenges is the ability to analyze a 3-D audio stream corresponding to a complex sound scene into its basic components (i.e. individual sound sources), to modify the scene (e.g. to change the locations of sound sources) and to resynthesize a modified 3-D audio stream. This situation is referred to as "spatial remix".

The spatial remix problem is still an open field. Ongoing work relies on source separation algorithms to analyze a sound scene, but these techniques are not perfect and can damage the reconstructed source signals. The resulting degradations are referred to as "separation artefacts" and include transient alteration of the target source and rejections of other sources into the target source. Objective and subjective evaluations of separation artefacts have been conducted [1], but these studies usually consider the separated source signals alone, i.e. with each source listened to separately. This differs from the spatial remix problem, where all sources are listened to simultaneously. In that case, one may wonder whether the separation artefacts can affect the spatial image of the synthesized 3-D sound scene. Given the perceptual mechanisms involved in spatial hearing, hypotheses can be made about the kinds of spatial distortions that could occur in this context. Since transients are important cues for precisely localizing sound sources, their alteration may result in localization blur or source widening. On the other hand, when separated sources are spatialized and played simultaneously, rejections of one source into another may also produce unwanted effects such as an impression of moving sources or the emergence of "phantom" sources.

This paper presents a new methodology to perceptually evaluate the spatial distortions that can occur in a spatial remix context. It consists in carrying out a localization test on complex scenes composed of three synthetic musical instruments played over a set of loudspeakers. In order to eliminate possible issues related to the spatial audio rendering device, we consider a simple case: only three spatial positions, each corresponding to a single loudspeaker. The spatial remix is then restricted to a simple permutation of the source locations.

The test is run through a virtual interface using a head-mounted display. The subject is placed in a simple visual virtual environment and is asked to surround, with a remote, the areas where each instrument is perceived. This experimental setup allows the subject to report precisely both the position and the size of each instrument, and a single instrument can also be reported at multiple locations. Perceived source positions are approximated as ellipses, from which center positions and dimensions can easily be deduced. In order to quantify spatial distortions, the localization task is performed on both clean and degraded versions of the same musical extract, and localization performance in both cases is then compared, taking the clean-sources case as the reference.
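The ellipse approximation of the subjects' reports lends itself to a simple geometric treatment. As a minimal illustrative sketch (not taken from the paper), the Python snippet below assumes that each report is captured as a set of azimuth/elevation samples along the drawn contour, and derives a perceived center position and ellipse dimensions from the centroid and principal axes of that point cloud; the function name and the sample contour are hypothetical.

    import numpy as np

    def ellipse_from_report(points):
        """Approximate a drawn contour (N x 2 array of azimuth/elevation samples,
        in degrees) by an ellipse: centroid plus principal-axis lengths."""
        points = np.asarray(points, dtype=float)
        center = points.mean(axis=0)            # perceived source position (az, el)
        cov = np.cov(points, rowvar=False)      # spread of the drawn contour
        eigvals, _ = np.linalg.eigh(cov)        # variances along the principal axes
        axes = 2.0 * np.sqrt(eigvals)           # 1-sigma ellipse axis lengths
        return center, axes

    # Hypothetical contour drawn around a source perceived near azimuth 30 deg, elevation 0 deg
    contour = [[28, -3], [33, 0], [31, 4], [27, 1], [29, -2]]
    center, axes = ellipse_from_report(contour)
    print("perceived position (az, el):", center, "- ellipse axes (deg):", axes)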
In this paper, the methodology is applied to assess the quality of the non-negative matrix factorization (NMF) source separation algorithm developed by Leglaive et al. [2], which performs separation on convolutive mixtures. Our study reveals that the source separation process leads to perceptible degradations of the spatial image. Three main kinds of spatial distortions have been characterized. First, in the majority of degraded cases, "phantom" sources have been observed; this artefact mainly concerns percussive sources. The results also show a significant increase in the perceived width of the degraded sources. Finally, azimuth and elevation localization errors are significantly higher for scenes composed of separated sources.
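The abstract does not detail the statistical analysis behind these comparisons. As a hedged sketch of how the clean-versus-degraded comparison could be quantified, the snippet below applies a paired Wilcoxon signed-rank test to hypothetical per-trial azimuth errors; the error values and the choice of test are assumptions made for illustration, not results from the study.

    import numpy as np
    from scipy import stats

    # Hypothetical absolute azimuth errors (degrees), one value per subject/scene pair,
    # for the clean reference scenes and the corresponding separated (degraded) scenes.
    err_clean = np.array([2.1, 3.4, 1.8, 2.9, 4.0, 2.5, 3.1, 2.2])
    err_degraded = np.array([5.6, 4.9, 6.2, 3.8, 7.1, 5.0, 4.4, 6.5])

    # Paired, non-parametric test: are errors larger when sources have been separated?
    statistic, p_value = stats.wilcoxon(err_degraded, err_clean, alternative="greater")
    print(f"Wilcoxon W = {statistic:.1f}, p = {p_value:.4f}")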
References

[1] V. Emiya, E. Vincent, N. Harlander and V. Hohmann, "Subjective and Objective Quality Assessment of Audio Source Separation," IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 7, pp. 2046-2057, Sept. 2011.

[2] S. Leglaive, R. Badeau and G. Richard, "Separating time-frequency sources from time-domain convolutive mixtures using non-negative matrix factorization," 2017 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), New Paltz, NY, 2017, pp. 264-268.

Dates and versions

hal-02275177, version 1 (30-08-2019)

Identifiers

HAL Id: hal-02275177
DOI: 10.25836/sasp.2019.15

Cite

Simon Fargeot, Olivier Derrien, Gaetan Parseihian, Mitsuko Aramaki, Richard Kronland-Martinet. Subjective evaluation of spatial distortions induced by a sound source separation process. EAA Spatial Audio Signal Processing Symposium, Sep 2019, Paris, France. pp. 67-72, ⟨10.25836/sasp.2019.15⟩. ⟨hal-02275177⟩