Applications of sound field analysis and synthesis in a 3D audio context
Abstract
The recent development of massive arrays of microphones or loudspeakers has stimulated numerous studies on sound field analysis and synthesis, together with the development of 3D audio applications that offer a refined auditory experience. Advanced 3D audio techniques such as Higher Order Ambisonics (HOA) or Wave Field Synthesis (WFS) rely on large arrays of loudspeakers distributed along the room boundaries. The radiation properties of a sound source may be simulated by digitally controlled spherical loudspeaker arrays (LSA). On the recording side, spherical microphone arrays (SMA) are used to capture a soundscape or a musical ensemble performance with high spatial resolution. In room acoustics, high-resolution sound field characterization can be achieved by measuring directional room impulse responses (DRIRs) with combined microphone and loudspeaker arrays. The measured DRIRs may then be exploited in convolution-based reverberators for the auralization of room acoustics with a faithful rendering of the room's spatial attributes, or for 3D audio-mixing applications. In the latter context, the sound engineer will typically want to fine-tune the perceptual attributes of the original DRIRs so that they better fit the aesthetics of the mix. Such parametric control first requires the development of an analysis-synthesis framework that operates on a space-time-frequency representation of the DRIRs. The theoretical and perceptual properties of these spatialization techniques are presented and illustrated in various contexts ranging from music performance, post-production and broadcast to virtual reality applications. Meanwhile, the ever-growing expansion of mobile devices calls for the deployment of broadcast solutions that can deliver 3D audio content and allow for personalized binaural rendering over headphones on the end-user side.
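To make the convolution-based auralization step concrete, the following is a minimal sketch (not the author's implementation) of how a measured DRIR could be applied to an anechoic signal, assuming Python with NumPy/SciPy, a DRIR stored as one impulse response per spatial channel (here a hypothetical 4-channel first-order Ambisonics format), and placeholder random signals in place of real measurements.

```python
import numpy as np
from scipy.signal import fftconvolve

def auralize(dry, drir, gain=1.0):
    """Convolve a mono (anechoic) signal with each channel of a
    directional room impulse response (DRIR) to obtain a multichannel
    reverberant signal in the same spatial format as the DRIR
    (e.g. first-order Ambisonics channels W, Y, Z, X).

    dry  : (n_samples,)          mono source signal
    drir : (n_channels, n_taps)  measured DRIR, one IR per channel
    """
    n_out = dry.shape[0] + drir.shape[1] - 1
    wet = np.empty((drir.shape[0], n_out))
    for ch, ir in enumerate(drir):
        # FFT-based convolution of the dry signal with this channel's IR
        wet[ch] = fftconvolve(dry, ir, mode="full")
    return gain * wet

# Hypothetical usage: 4-channel DRIR at 48 kHz with placeholder data.
# The resulting Ambisonic signal would then be decoded to loudspeakers
# or to binaural for headphone playback.
fs = 48000
dry = np.random.randn(fs)            # placeholder anechoic signal (1 s)
drir = np.random.randn(4, fs // 2)   # placeholder measured DRIR (0.5 s)
wet = auralize(dry, drir)
print(wet.shape)                     # (4, 71999)
```

In such a pipeline, the parametric control mentioned above would act on a space-time-frequency representation of `drir` before this convolution stage, rather than on the convolution itself.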