Conference paper, Year: 2021

Informative Multimodal Unsupervised Image-to-Image Translation

Abstract

We propose a new method for multimodal image translation, called InfoMUNIT, which extends the state-of-the-art method MUNIT. Our method allows controlling the style of the generated images and improves their quality and diversity. It learns to maximize the mutual information between a subset of the style code and the distribution of the output images. Experiments show that our model can not only translate one image from the source domain to multiple images in the target domain but also explore and manipulate features of the outputs without annotation. Furthermore, it achieves superior diversity and image quality competitive with state-of-the-art methods on multiple image translation tasks.
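To make the mutual-information objective concrete, here is a minimal PyTorch sketch of an InfoGAN-style auxiliary loss: a small predictor network tries to recover a chosen subset of the style code from the translated image, which gives a variational lower bound on the mutual information between that subset and the outputs. This is not the authors' implementation; the network architecture, the subset size `n_info`, and the loss weight are illustrative assumptions.

```python
# Sketch only: InfoGAN-style auxiliary loss for a MUNIT-like translator.
# All names (StylePredictor, n_info, lambda_info) are hypothetical.
import torch
import torch.nn as nn

class StylePredictor(nn.Module):
    """Auxiliary network Q(c | x): predicts the informative style dims from an image."""
    def __init__(self, n_info=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_info),
        )

    def forward(self, x):
        return self.net(x)

def info_loss(q_net, fake_img, style_code, n_info=2):
    """L1 penalty between the sampled informative style dims and their
    reconstruction from the generated image (an MI lower bound, up to constants)."""
    pred = q_net(fake_img)
    return torch.mean(torch.abs(pred - style_code[:, :n_info]))

# Assumed usage inside a MUNIT-like training step, with generator G:
# style = torch.randn(batch, style_dim, device=device)
# fake  = G(content, style)
# loss  = loss_gan + loss_recon + lambda_info * info_loss(Q, fake, style)
```

In this setup, only the first `n_info` dimensions of the style code are tied to recoverable image features, so varying them at test time gives controllable, interpretable changes in the output while the remaining dimensions stay free.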

Dates and versions

hal-04432108, version 1 (01-02-2024)

Identifiers

Cite

Tien Tai Doan, Guillaume Ghyselinck, Blaise Hanczar. Informative Multimodal Unsupervised Image-to-Image Translation. 9th International Conference of Security, Privacy and Trust Management (SPTM 2021), Apr 2021, Copenhagen, Denmark. pp.37--51, ⟨10.5121/csit.2021.110503⟩. ⟨hal-04432108⟩