Multi-Position Human Activity Recognition using a Multi-Modal Deep Convolutional Neural Network - Archive ouverte HAL
Conference paper, Year: 2023

Multi-Position Human Activity Recognition using a Multi-Modal Deep Convolutional Neural Network

Abstract

Human Activity Recognition (HAR) is a challenging task due to the complexity of human motions and the variability of datasets. The wide adoption of wearable devices and the incorporation of high-grade motion sensors and biosensors into these devices have increased the amount of data available for sensor-based HAR. In this paper, we propose a multi-modal deep convolutional neural network (DCNN) capable of recognizing different activities using accelerometer data from several body positions. We compare the performance of our proposed system with that of existing DCNN architectures. Our experiments on two public HAR datasets demonstrate that our approach surpasses both single-position and simple multi-position DCNN models. This study provides valuable insights for the development of efficient edge-AI systems for activity recognition on resource-constrained embedded devices.
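
Illustrative sketch (not from the paper): the abstract describes a multi-modal DCNN that fuses accelerometer streams from several body positions. The short PyTorch example below shows one common way such a multi-branch model can be organised: one 1-D convolutional branch per body position, with the per-position features concatenated before a shared classifier. The class name, layer sizes, window length, and the numbers of positions and classes are assumptions chosen for illustration and do not reflect the architecture reported in the paper.

    import torch
    import torch.nn as nn

    class MultiPositionDCNN(nn.Module):
        """Hypothetical multi-branch DCNN: one 1-D conv branch per body
        position, fused before a shared classifier. All hyperparameters
        are illustrative assumptions, not values from the paper."""

        def __init__(self, num_positions=3, channels=3, window=128, num_classes=6):
            super().__init__()
            # One convolutional feature extractor per body-position stream.
            self.branches = nn.ModuleList([
                nn.Sequential(
                    nn.Conv1d(channels, 32, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.MaxPool1d(2),
                    nn.Conv1d(32, 64, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),  # -> (batch, 64, 1) per position
                )
                for _ in range(num_positions)
            ])
            # Fuse the per-position embeddings, then classify the activity.
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * num_positions, 128),
                nn.ReLU(),
                nn.Linear(128, num_classes),
            )

        def forward(self, streams):
            # streams: list of tensors, one per position, each (batch, channels, window)
            feats = [branch(x) for branch, x in zip(self.branches, streams)]
            return self.classifier(torch.cat(feats, dim=1))

    if __name__ == "__main__":
        model = MultiPositionDCNN()
        # Batch of 8 windows of tri-axial accelerometer data from 3 body positions.
        fake = [torch.randn(8, 3, 128) for _ in range(3)]
        print(model(fake).shape)  # torch.Size([8, 6])
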
File under embargo until Saturday, 6 September 2025.

Dates and versions

hal-04177414, version 1 (04-08-2023)

Identifiers

Cite

Aimé Cedric Muhoza, Emmanuel Bergeret, Corinne Brdys, Francis Gary. Multi-Position Human Activity Recognition using a Multi-Modal Deep Convolutional Neural Network. 8th International Conference on Smart and Sustainable Technologies (SpliTech), Jun 2023, Split, Croatia. ⟨10.23919/SpliTech58164.2023.10193600⟩. ⟨hal-04177414⟩
31 Views
3 Downloads
