Comparing Representations for Audio Synthesis Using Generative Adversarial Networks
Abstract—In this paper, we compare different audio signal
representations, including the raw audio waveform and a variety
of time-frequency representations, for the task of audio synthesis
with Generative Adversarial Networks (GANs). We conduct the
experiments on a subset of the NSynth dataset. The architecture
follows the benchmark Progressive Growing Wasserstein GAN.
We perform experiments both in a fully unconditional setting and by
conditioning the network on the pitch information. We
quantitatively evaluate the generated material utilizing standard
metrics for assessing generative models, and compare training
and sampling times. We show that the complex-valued Short-Time Fourier
Transform, as well as its magnitude and Instantaneous Frequency, achieve
the best results and yield fast generation and inversion times. The code for
feature extraction, training, and evaluation of the model is available online.
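As a rough illustration of the magnitude and Instantaneous Frequency features mentioned above, the sketch below computes both from a mono waveform. It is only a minimal example: the function name and the n_fft/hop_length values are illustrative assumptions, not the settings or feature-extraction code used in the paper.

```python
import numpy as np
import librosa

def stft_mag_if(audio, n_fft=1024, hop_length=256):
    """Return log-magnitude and instantaneous frequency (IF) of the STFT.

    IF is taken as the frame-to-frame difference of the unwrapped phase.
    Parameter values here are illustrative, not those of the paper.
    """
    spec = librosa.stft(audio, n_fft=n_fft, hop_length=hop_length)
    log_mag = np.log(np.abs(spec) + 1e-6)
    phase = np.unwrap(np.angle(spec), axis=1)          # unwrap along time frames
    inst_freq = np.diff(phase, axis=1, prepend=phase[:, :1])
    return log_mag, inst_freq

# Example usage on a random signal standing in for a one-second NSynth note
audio = np.random.randn(16000).astype(np.float32)
log_mag, inst_freq = stft_mag_if(audio)
print(log_mag.shape, inst_freq.shape)
```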