Precision-Recall Divergence Optimization for Generative Modeling with GANs and Normalizing Flows
Abstract
Achieving a balance between image quality (precision) and diversity (recall) is a significant challenge in the domain of generative models. Current state-of-the-art models primarily rely on optimizing heuristics, such as the Fréchet Inception Distance. While recent developments have introduced principled methods for evaluating precision and recall, they have yet to be successfully integrated into the training of generative models. Our main contribution is a novel training method for generative models, such as Generative Adversarial Networks and Normalizing Flows, which explicitly optimizes a user-defined trade-off between precision and recall. More precisely, we show that achieving a specified precision-recall trade-off corresponds to minimizing a unique f-divergence from a family we call the PR-divergences. Conversely, any f-divergence can be written as a linear combination of PR-divergences and corresponds to a weighted precision-recall trade-off. Through comprehensive evaluations, we show that our approach improves the performance of existing state-of-the-art models like BigGAN in terms of either precision or recall when tested on datasets such as ImageNet.
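The abstract states that hitting a chosen precision-recall trade-off amounts to minimizing one specific f-divergence (a PR-divergence). As a rough, hedged illustration of what "training a GAN by minimizing an f-divergence" looks like in practice, the sketch below uses the standard variational (f-GAN-style) objective. It is not the paper's method: the PR-divergence's generator function for a given trade-off is not stated in this abstract, so the KL generator f(u) = u log u (with conjugate f*(t) = exp(t - 1)) is used as a stand-in, and the network shapes and toy data are arbitrary placeholders.

```python
# Illustrative sketch only (not the paper's method): variational f-divergence
# minimization in a GAN, f-GAN style. The paper's PR-divergence is a specific
# f-divergence whose generator depends on the chosen precision-recall trade-off;
# that generator is not given in the abstract, so the KL generator
# f(u) = u * log(u) (conjugate f*(t) = exp(t - 1)) is used as a stand-in.
import torch
import torch.nn as nn

def f_star(t):
    # Convex conjugate of the KL generator f(u) = u * log(u).
    return torch.exp(t - 1.0)

# Toy generator G and variational critic T (shapes are arbitrary placeholders).
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
T = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_t = torch.optim.Adam(T.parameters(), lr=1e-4)

def sample_real(n):
    # Stand-in for the data distribution P.
    return torch.randn(n, 2) + 2.0

for step in range(2000):
    x_real = sample_real(128)
    x_fake = G(torch.randn(128, 16))

    # Critic ascends the variational lower bound
    #   E_P[T(x)] - E_Q[f*(T(x))]  <=  D_f(P || Q).
    loss_t = -(T(x_real).mean() - f_star(T(x_fake.detach())).mean())
    opt_t.zero_grad(); loss_t.backward(); opt_t.step()

    # Generator descends the same bound, i.e. minimizes the estimated f-divergence.
    loss_g = -f_star(T(G(torch.randn(128, 16)))).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Swapping `f_star` for the conjugate of the PR-divergence generator associated with a chosen trade-off would yield the kind of objective the abstract describes.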
Domains
Machine Learning [cs.LG]
Main file
NeurIPS-2023-precision-recall-divergence-optimization-for-generative-modeling-with-gans-and-normalizing-flows-Paper-Conference (1).pdf (9.99 MB)
Origin: Files produced by the author(s)