Assessing Learning with MetaTutor, a Multi-Agent Hypermedia Learning Environment
Abstract
In this paper we discuss the ways in which assessments of learning were evaluated, designed, and implemented in MetaTutor (a multi-agent, hypermedia learning environment about several human body systems; Azevedo et al., 2012, 2013). We also share the lessons we have learned from assessing learning with MetaTutor across three studies conducted at three different universities (N = 336). Techniques and considerations for assessing learning are shared, including the pedagogical and psychometric properties of items. Results revealed significant pretest-to-posttest differences in learning across all studies and showed that changes to the content and assessments of learning led to lower scores, particularly on the pretest, where scores had previously been high with limited variance. The methods and results of this paper can be used to motivate other researchers who use computer-based learning environments (CBLEs) to examine and improve the psychometric and pedagogical features of their learning assessments.