Preprint, Working Paper, Year: 2024

Towards Zero-Shot Multimodal Machine Translation

Abstract

Current multimodal machine translation (MMT) systems rely on fully supervised data (i.e. models are trained on sentences with their translations and accompanying images). However, this type of data is costly to collect, limiting the extension of MMT to other language pairs for which such data does not exist. In this work, we propose a method to bypass the need for fully supervised data to train MMT systems, using multimodal English data only. Our method, called ZeroMMT, consists of adapting a strong text-only machine translation (MT) model by training it on a mixture of two objectives: visually conditioned masked language modelling and the Kullback-Leibler divergence between the original and new MMT outputs. We evaluate on standard MMT benchmarks and on CoMMuTE, a recently released contrastive benchmark designed to evaluate how well models use images to disambiguate English sentences. We obtain disambiguation performance close to that of state-of-the-art MMT models trained additionally on fully supervised examples. To show that our method generalizes to languages with no fully supervised training data available, we extend the CoMMuTE evaluation dataset to three new languages: Arabic, Russian and Chinese. We further show that we can control the trade-off between disambiguation capabilities and translation fidelity at inference time using classifier-free guidance and without any additional data. Our code, data and trained models are publicly accessible.
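The abstract describes two training objectives (visually conditioned masked language modelling and a KL term anchoring the adapted model to the original text-only MT model) plus inference-time classifier-free guidance. The sketch below is only an illustration of how these pieces could be combined in PyTorch; it is not the authors' released implementation, and the model interfaces (`mmt_model.vmlm_forward`, `frozen_mt_model(...)`), tensor names, and the `kl_weight` and `guidance_scale` parameters are assumptions made for clarity.

```python
import torch
import torch.nn.functional as F

def zeromt_loss(mmt_model, frozen_mt_model, masked_tokens, mask_labels,
                source_tokens, image_features, kl_weight=1.0):
    # (1) Visually conditioned masked language modelling on English
    #     image-caption data: predict masked tokens given the image.
    vmlm_logits = mmt_model.vmlm_forward(masked_tokens, image_features)
    vmlm_loss = F.cross_entropy(
        vmlm_logits.view(-1, vmlm_logits.size(-1)),
        mask_labels.view(-1),
        ignore_index=-100,  # unmasked positions do not contribute
    )

    # (2) KL divergence keeping the adapted multimodal model's translation
    #     distribution close to that of the frozen text-only MT model
    #     (direction of the KL is an assumption here).
    with torch.no_grad():
        ref_logits = frozen_mt_model(source_tokens)
    mmt_logits = mmt_model(source_tokens, image_features)
    kl_loss = F.kl_div(
        F.log_softmax(mmt_logits, dim=-1),
        F.softmax(ref_logits, dim=-1),
        reduction="batchmean",
    )

    return vmlm_loss + kl_weight * kl_loss


def guided_logits(logits_with_image, logits_text_only, guidance_scale):
    # Classifier-free guidance at inference time: larger guidance_scale values
    # push the prediction towards the image-conditioned distribution (more
    # disambiguation), while values near 0 stay close to the text-only model
    # (higher translation fidelity).
    return logits_text_only + guidance_scale * (logits_with_image - logits_text_only)
```

In this reading, `guidance_scale` is the single knob mentioned in the abstract for trading off disambiguation against translation fidelity at inference time, requiring no additional training data.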

Dates and versions

hal-04736377, version 1 (14-10-2024)

Identifiers

Cite

Matthieu Futeral, Cordelia Schmid, Benoît Sagot, Rachel Bawden. Towards Zero-Shot Multimodal Machine Translation. 2024. ⟨hal-04736377⟩