OneEncoder: A Lightweight Framework for Progressive Alignment of Modalities
Preprint, Working Paper. Year: 2024

Abstract

Cross-modal alignment learning integrates information from different modalities, such as text, image, audio, and video, to build unified models. It develops shared representations and learns correlations between modalities, enabling applications such as visual question answering and audiovisual content analysis. Current techniques rely on large modality-specific encoders that must be fine-tuned or trained from scratch on vast aligned datasets (e.g., text-image, text-audio, image-audio). This approach has limitations: (i) it is very expensive, since large encoders must be trained on extensive datasets; (ii) acquiring large aligned paired datasets is challenging; and (iii) adding a new modality requires retraining the entire framework. To address these issues, we propose OneEncoder, a lightweight framework that progressively represents and aligns four modalities (image, text, audio, video). We first train a lightweight Universal Projection module (UP) to align the image and text modalities. We then freeze the pretrained UP and progressively align each future modality to those already aligned. Thanks to its lightweight design, OneEncoder operates efficiently and cost-effectively even when vast aligned datasets are unavailable. Trained on small paired datasets, it shows strong performance on tasks such as classification, querying, and visual question answering, surpassing methods that rely on large datasets and specialized encoders.
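To make the two-stage scheme concrete, here is a minimal PyTorch sketch of progressive contrastive alignment. It is not the authors' implementation: the names (UniversalProjection, clip_loss, audio_adapter) are hypothetical, and random tensors stand in for features that frozen pretrained modality encoders would supply. Stage 1 trains the small shared projection on image-text pairs with a symmetric InfoNCE loss; stage 2 freezes it and trains only a tiny adapter to align a new modality (audio) to an already-aligned one (text).

# Minimal sketch of progressive alignment; hypothetical names, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UniversalProjection(nn.Module):
    """Lightweight shared module mapping modality features to a joint space."""
    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, dim_out),
            nn.GELU(),
            nn.Linear(dim_out, dim_out),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize so dot products are cosine similarities.
        return F.normalize(self.net(x), dim=-1)

def clip_loss(za: torch.Tensor, zb: torch.Tensor, temperature: float = 0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings."""
    logits = za @ zb.t() / temperature
    targets = torch.arange(za.size(0), device=za.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

dim_feat, dim_joint = 512, 256
up = UniversalProjection(dim_feat, dim_joint)

# Stage 1: align image and text. Frozen pretrained encoders (not shown)
# would supply the 512-d features; random tensors stand in for them here.
opt = torch.optim.AdamW(up.parameters(), lr=1e-4)
img_feat, txt_feat = torch.randn(32, dim_feat), torch.randn(32, dim_feat)
opt.zero_grad()
clip_loss(up(img_feat), up(txt_feat)).backward()
opt.step()

# Stage 2: freeze the pretrained UP and align a new modality (audio) to an
# already-aligned one (text) by training only a small modality adapter.
for p in up.parameters():
    p.requires_grad_(False)

audio_adapter = nn.Linear(dim_feat, dim_feat)  # hypothetical lightweight adapter
opt2 = torch.optim.AdamW(audio_adapter.parameters(), lr=1e-4)
aud_feat = torch.randn(32, dim_feat)
opt2.zero_grad()
clip_loss(up(audio_adapter(aud_feat)), up(txt_feat)).backward()
opt2.step()

The point of the staging is that only the small shared projection (stage 1) or an even smaller adapter (stage 2) is ever updated, which is what keeps alignment cheap and lets new modalities join without retraining the whole framework.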

Dates and versions

hal-04787322, version 1 (17-11-2024)

Identifiers

HAL Id: hal-04787322

Cite

Bilal Faye, Hanane Azzag, Mustapha Lebbah. OneEncoder: A Lightweight Framework for Progressive Alignment of Modalities. 2024. ⟨hal-04787322⟩