Agnostic latent diversity enhancement in generative modeling
Abstract
Generative modeling methods can generate images from textual or visual inputs. However, the diversity of the generated images remains a major challenge for existing approaches. In this work, we address this issue head-on by demonstrating that: (a) the diversity of a generated batch of images is intrinsically linked to the diversity of the latent variables that produce it; (b) the geometry of the latent space can be leveraged to establish an effective metric for quantifying this diversity; and (c) exploiting this insight yields significantly greater diversity in image generation than traditional random independent sampling. This improvement holds consistently across a variety of generative models, including Generative Adversarial Networks (GANs) and latent diffusion models. To facilitate further research and application in this field, we also release a comprehensive package that enables easy reproduction of our experiments, and we integrate our contributions into a widely recognized tool for generative image modeling so that our improvements are accessible to the broader community.