Conference paper, 2022

Multimodal Personality Recognition using Cross-Attention Transformer and Behaviour Encoding

Abstract

Personality computing and affective computing have recently gained interest in many research areas. Datasets for these tasks generally contain multiple modalities such as video, audio, language and bio-signals. In this paper, we propose a flexible model for the task that exploits all available data. Because the task involves complex relations, and to avoid a large model dedicated to video processing, we propose the use of behaviour encoding, which boosts performance with minimal change to the model. Cross-attention using transformers has become popular in recent times and is utilised here for the fusion of the different modalities. Since long-term relations may exist, breaking the input into chunks is undesirable, so the proposed model processes the entire input together. Our experiments show the importance of each of these contributions.
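To illustrate the kind of fusion the abstract describes, below is a minimal sketch of cross-attention between two modality streams, written in PyTorch. The module, dimensions and variable names are illustrative assumptions, not the authors' implementation; it only shows queries drawn from one modality attending over another, applied to a full-length (unchunked) sequence.

```python
# Hypothetical sketch of cross-attention fusion between two modalities.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=128, num_heads=4):
        super().__init__()
        # Queries come from one modality; keys and values from the other.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feats, context_feats):
        # query_feats:   (batch, T_q, dim), e.g. video/behaviour features
        # context_feats: (batch, T_c, dim), e.g. audio or language features
        fused, _ = self.attn(query_feats, context_feats, context_feats)
        return self.norm(query_feats + fused)  # residual connection

# Example: fuse an entire video sequence with audio features in one pass.
video = torch.randn(2, 300, 128)   # assumed pre-extracted features
audio = torch.randn(2, 500, 128)
fusion = CrossAttentionFusion()
out = fusion(video, audio)         # shape: (2, 300, 128)
```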
Main file: 2021_VISAPP.pdf (458.61 KB)
Origin: Files produced by the author(s)
License: Public Domain

Dates and versions

hal-03519184, version 1 (10-01-2022)


Identifiers

HAL Id: hal-03519184
DOI: 10.5220/0010841400003124

Cite

Tanay Agrawal, Dhruv Agarwal, Michal Balazia, Neelabh Sinha, Francois F Bremond. Multimodal Personality Recognition using Cross-Attention Transformer and Behaviour Encoding. VISAPP '22: International Conference on Computer Vision Theory and Applications, IAPR, Feb 2022, virtual, United States. pp.501-508, ⟨10.5220/0010841400003124⟩. ⟨hal-03519184⟩