A Multi-Componential Analysis of Emotions during Complex Learning with an Intelligent Multi-agent System
Abstract
This paper evaluates the synchronization of three emotional measurement methods (automatic facial expression recognition, self-report, and electrodermal activity) and their agreement regarding learners' emotions. Data were collected from 67 undergraduates enrolled at a North American university who learned about a complex science topic while interacting with MetaTutor, a multi-agent computerized learning environment. Videos of learners' facial expressions, captured with a webcam, were analyzed using automatic facial recognition software (FaceReader 5.0). Learners' physiological arousal was recorded with Affectiva's Q-Sensor 2.0 electrodermal activity bracelet. Learners self-reported their experience of 19 different emotional states on five occasions during the learning session; these reports were used as markers to synchronize the FaceReader and Q-Sensor data. We found high agreement between the facial and self-report data (75.6%) but low agreement between each of these and the Q-Sensor data, suggesting that a tightly coupled relationship does not always exist between emotional response components.
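To make the agreement metric concrete, the following is a minimal Python sketch, assuming categorical emotion labels keyed by timestamp (seconds into the session). The function name, example data, and values are illustrative assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch (not the authors' code): align two emotion-label
# streams on shared time points and compute simple percent agreement,
# analogous to the facial-expression vs. self-report comparison above.
from typing import Dict


def percent_agreement(stream_a: Dict[float, str], stream_b: Dict[float, str]) -> float:
    """Percentage of shared time points where both streams report the same label."""
    shared = sorted(set(stream_a) & set(stream_b))
    if not shared:
        raise ValueError("No overlapping time points to compare.")
    matches = sum(stream_a[t] == stream_b[t] for t in shared)
    return 100.0 * matches / len(shared)


# Illustrative labels keyed by seconds into the session.
facial = {300.0: "neutral", 900.0: "frustration", 1500.0: "neutral"}
self_report = {300.0: "neutral", 900.0: "boredom", 1500.0: "neutral"}
print(f"Agreement: {percent_agreement(facial, self_report):.1f}%")  # Agreement: 66.7%
```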