Conference paper, 2021

Multimodal Neural Network for Sentiment Analysis in Embedded Systems

Abstract

Multimodal neural networks for sentiment analysis use video, text, and audio. Processing these three modalities tends to produce computationally expensive models. In the embedded context, all resources, and computational resources in particular, are restricted. In this paper, we design models that address these two conflicting constraints. We focus our work on reducing the number of model input features and the size of the different neural network architectures. The main contribution of this paper is the design of a specific 3D Residual Network instead of a basic 3D convolution. Our experiments are conducted on the well-known MOSI dataset (Multimodal Corpus of Sentiment Intensity). The objective is to achieve results similar to the state of the art. Our best multimodal approach achieves an F1 score of 80%, with the number of parameters reduced by a factor of 2.2 and the memory load reduced by a factor of 13.8, compared to the state of the art. We designed five models: one per modality (i.e., video, audio, and text) and one per fusion technique. The two high-level multimodal fusion techniques presented in this paper are based on evidence theory and on a neural network approach.
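The abstract names a dedicated 3D Residual Network as the main contribution but gives no architectural details on this page. Purely as an illustration of the general technique, here is a minimal PyTorch sketch of a 3D residual block of the kind the abstract contrasts with a basic 3D convolution; the class name, channel count, and kernel sizes are assumptions, not taken from the paper.

```python
# Minimal sketch (assumption, not the paper's exact architecture): a 3D
# residual block. Channel count and kernel sizes are illustrative only.
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The block learns a residual on top of an identity shortcut, which
        # eases training of deeper video models than plain stacked 3D convs.
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)

# Example input: batch of 1 clip, 8 feature channels, 16 frames of 32x32.
x = torch.randn(1, 8, 16, 32, 32)
y = ResidualBlock3D(8)(x)  # output has the same shape as x
```

The abstract likewise names evidence theory as one of the two high-level fusion techniques without describing the mass assignment. The sketch below shows Dempster's rule of combination over a two-class frame ({neg, pos} plus ignorance), a standard way to fuse per-modality beliefs; the dempster_combine helper and the example mass values are illustrative assumptions, not the paper's method.

```python
# Hypothetical illustration of high-level fusion via evidence theory:
# Dempster's rule of combination. Helper name and masses are assumptions.
def dempster_combine(m1, m2):
    """Combine two mass functions given as dicts keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    # Normalise the remaining mass by the non-conflicting total.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

NEG, POS = frozenset({"neg"}), frozenset({"pos"})
BOTH = NEG | POS  # total ignorance

video_mass = {NEG: 0.2, POS: 0.6, BOTH: 0.2}  # illustrative per-modality beliefs
text_mass = {NEG: 0.1, POS: 0.7, BOTH: 0.2}
print(dempster_combine(video_mass, text_mass))  # pos 0.85, neg 0.10, ignorance 0.05
```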

Dates and versions

hal-03445482, version 1 (24-11-2021)

License

Attribution - NonCommercial - NoDerivatives (CC BY-NC-ND)

Identifiers

HAL Id: hal-03445482
DOI: 10.5220/0010224703870398

Cite

Quentin Portes, José Mendes Carvalho, Julien Pinquier, Frédéric Lerasle. Multimodal Neural Network for Sentiment Analysis in Embedded Systems. 16th International Conference on Computer Vision Theory and Applications (VISAPP 2021), Feb 2021, Online, France. pp.387-398, ⟨10.5220/0010224703870398⟩. ⟨hal-03445482⟩