Conference paper, Year: 2024

Less is more: Summarizing Patch Tokens for efficient Multi-Label Class-Incremental Learning

Abstract

Prompt tuning has emerged as an effective rehearsal-free technique for class-incremental learning (CIL) that learns a tiny set of task-specific parameters (or prompts) to instruct a pre-trained transformer to learn on a sequence of tasks. Albeit effective, prompt tuning methods do not lend themselves well to the multi-label class-incremental learning (MLCIL) scenario (where an image contains multiple foreground classes) due to the ambiguity in selecting the correct prompt(s) corresponding to different foreground objects belonging to multiple tasks. To circumvent this issue, we propose to eliminate the prompt selection mechanism by maintaining task-specific pathways, which allow us to learn representations that do not interact with those of other tasks. Since independent pathways in truly incremental scenarios would result in an explosion of computation due to the quadratically complex multi-head self-attention (MSA) operation in prompt tuning, we propose to reduce the original patch token embeddings into summarized tokens. Prompt tuning is then applied to these fewer summarized tokens to compute the final representation. Our proposed method, Multi-Label class incremental learning via summarising pAtch tokeN Embeddings (MULTI-LANE), enables learning disentangled task-specific representations in MLCIL while ensuring fast inference. We conduct experiments on common benchmarks and demonstrate that MULTI-LANE achieves a new state of the art in MLCIL. Additionally, we show that MULTI-LANE is also competitive in the CIL setting. Source code is available at https://github.com/tdemin16/multi-lane
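The sketch below is a rough illustration of the idea summarized in the abstract, not the authors' implementation (the actual code is in the linked repository). The names TokenSummarizer, TaskPathway, num_summaries, and num_prompts are illustrative assumptions. It shows how N patch tokens from a frozen backbone could be pooled into K summary tokens via learned queries, so that prompt-tuned self-attention is quadratic only in K plus the number of prompts rather than in N.

```python
# Hypothetical sketch of token summarization + prompt tuning on the reduced sequence.
# All module and parameter names are illustrative, not the paper's API.
import torch
import torch.nn as nn


class TokenSummarizer(nn.Module):
    """Cross-attention pooling: K learned queries attend over N patch tokens."""

    def __init__(self, dim: int, num_summaries: int = 16, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_summaries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (B, N, dim) -> summaries: (B, K, dim)
        B = patch_tokens.size(0)
        q = self.queries.unsqueeze(0).expand(B, -1, -1)
        summaries, _ = self.attn(q, patch_tokens, patch_tokens)
        return summaries


class TaskPathway(nn.Module):
    """One task-specific pathway: task prompts + self-attention over summaries only."""

    def __init__(self, dim: int, num_prompts: int = 8, num_heads: int = 8):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, summaries: torch.Tensor) -> torch.Tensor:
        B = summaries.size(0)
        p = self.prompts.unsqueeze(0).expand(B, -1, -1)
        x = torch.cat([p, summaries], dim=1)      # (B, P + K, dim): short sequence
        out, _ = self.attn(x, x, x)               # quadratic only in P + K, not N
        return out[:, : p.size(1)].mean(dim=1)    # pooled task-specific representation


if __name__ == "__main__":
    B, N, dim = 4, 196, 768                       # e.g. ViT-B/16 patch tokens
    patch_tokens = torch.randn(B, N, dim)         # stand-in for frozen backbone output
    summarizer = TokenSummarizer(dim, num_summaries=16)
    pathway = TaskPathway(dim, num_prompts=8)
    feats = pathway(summarizer(patch_tokens))
    print(feats.shape)                            # torch.Size([4, 768])
```

Under these assumed sizes, the prompt-tuned attention runs over 24 tokens instead of 196, which is where the computational savings of summarization would come from, and each task can keep its own pathway without prompt selection.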

Dates and versions

hal-04794149, version 1 (20-11-2024)

Identifiers

Cite

Thomas de Min, Massimiliano Mancini, Stéphane Lathuilière, Subhankar Roy, Elisa Ricci. Less is more: Summarizing Patch Tokens for efficient Multi-Label Class-Incremental Learning. CoLLAs 2024: 3rd Conference on Lifelong Learning Agents, Jul 2024, Pisa, Italy. ⟨hal-04794149⟩