Diverse Diffusion: Enhancing Image Diversity in Text-to-Image Generation - Archive ouverte HAL
Preprint / Working paper, Year: 2023

Diverse Diffusion: Enhancing Image Diversity in Text-to-Image Generation

Mariia Zameshina
Olivier Teytaud
Laurent Najman

Abstract

Latent diffusion models excel at producing high-quality images from text. Yet, concerns have been raised about the lack of diversity in the generated imagery. To tackle this, we introduce Diverse Diffusion, a method for boosting image diversity beyond gender and ethnicity, extending into richer dimensions such as color diversity. Diverse Diffusion is a general unsupervised technique that can be applied to existing text-to-image models. Our approach focuses on finding vectors in the Stable Diffusion latent space that are distant from each other. We sample latent vectors repeatedly until we obtain a set that satisfies both the desired pairwise distance requirement and the required batch size. To evaluate the effectiveness of our diversity method, we conduct experiments examining various characteristics, including color diversity, the LPIPS metric, and ethnicity/gender representation in images featuring humans. The results of our experiments emphasize the significance of diversity in generating realistic and varied images, offering valuable insights for improving text-to-image models. By enhancing image diversity, our approach contributes to the creation of more inclusive and representative AI-generated art.
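The abstract describes a rejection-sampling scheme over the latent space. A minimal illustrative sketch follows (not the authors' released code): candidate latents are drawn repeatedly and kept only if they are sufficiently far, in Euclidean distance, from every latent already accepted, until the desired batch size is reached. The function name, the distance threshold, and the latent shape below are assumptions chosen for a standard Stable Diffusion latent grid.

import torch

def sample_diverse_latents(batch_size=4, min_dist=181.0, shape=(4, 64, 64),
                           max_tries=1000, generator=None):
    """Rejection-sample `batch_size` latents whose pairwise L2 distances all exceed `min_dist`."""
    accepted = []
    for _ in range(max_tries):
        candidate = torch.randn(shape, generator=generator)
        # Keep the candidate only if it is far enough from all previously accepted latents.
        if all(torch.dist(candidate, z) >= min_dist for z in accepted):
            accepted.append(candidate)
            if len(accepted) == batch_size:
                return torch.stack(accepted)  # shape: (batch_size, 4, 64, 64)
    raise RuntimeError("No sufficiently distant set found; lower min_dist or raise max_tries.")

Such a batch could then be passed to a text-to-image pipeline (for instance via the `latents` argument of diffusers' StableDiffusionPipeline), so that each image is decoded from a mutually distant starting point; `min_dist` must be tuned to the dimensionality of the latent space.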
Main file
main.pdf (4.24 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04242350, version 1 (18-10-2023)

Identifiers

HAL Id: hal-04242350

Cite

Mariia Zameshina, Olivier Teytaud, Laurent Najman. Diverse Diffusion: Enhancing Image Diversity in Text-to-Image Generation. 2023. ⟨hal-04242350⟩
