EdiBERT: a generative model for image editing - Archive ouverte HAL
Journal article in Transactions on Machine Learning Research. Year: 2023

EdiBERT: a generative model for image editing

Abstract

Advances in computer vision are pushing the limits of image manipulation, with generative models sampling highly realistic, detailed images across various tasks. However, a specialized model is often developed and trained for each specific task, even though many image editing tasks share similarities. In denoising, inpainting, or image compositing, one always aims at generating a realistic image from a low-quality one. In this paper, we aim to take a step towards a unified approach for image editing. To do so, we propose EdiBERT, a bidirectional transformer that re-samples image patches conditionally on a given image. Using one generic objective, we show that the model resulting from a single training matches state-of-the-art GAN inversion methods on several tasks: image denoising, image completion, and image composition. We also provide several insights on the latent space of vector-quantized auto-encoders, such as locality and reconstruction capacities. The code is available at https://github.com/EdiBERT4ImageManipulation/EdiBERT.
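As a rough illustration of the editing procedure summarized in the abstract (a bidirectional transformer re-sampling vector-quantized image tokens conditioned on the input image), the sketch below is hypothetical and not taken from the official repository: the `vq_encoder`, `vq_decoder`, and `bert` modules, their signatures, and the sampling loop are all assumptions.

```python
import torch

# Hypothetical sketch of EdiBERT-style editing: encode an image into
# discrete VQ token indices, then iteratively re-sample the tokens that
# cover the edited region with a bidirectional transformer, and decode
# the result. The vq_encoder, vq_decoder, and bert modules are
# placeholders, not the actual EdiBERT API.

@torch.no_grad()
def edit_image(image, edit_mask, vq_encoder, vq_decoder, bert, n_steps=50):
    # Encode the degraded/composited image into a flat grid of token indices.
    tokens = vq_encoder(image)                  # (1, H*W) long tensor
    # Token positions falling inside the region to re-synthesize.
    positions = edit_mask.flatten().nonzero(as_tuple=True)[0]

    for _ in range(n_steps):
        # Pick one position and predict it from the full bidirectional context.
        pos = positions[torch.randint(len(positions), (1,))]
        logits = bert(tokens)                   # (1, H*W, vocab_size)
        probs = torch.softmax(logits[0, pos], dim=-1)
        tokens[0, pos] = torch.multinomial(probs, 1)

    # Decode the updated token grid back to pixel space.
    return vq_decoder(tokens)
```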
Main file: 299_edibert_a_generative_model_for.pdf (23.66 MB)
Origin: Files produced by the author(s)
Dates and versions

hal-04443258, version 1 (07-02-2024)

Cite

Thibaut Issenhuth, Ugo Tanielian, Jérémie Mary, David Picard. EdiBERT: a generative model for image editing. Transactions on Machine Learning Research Journal, 2023. ⟨hal-04443258⟩