Handwritten math exams with multiple assessors: researching the added value of semi-automated assessment with atomic feedback
Abstract
Digital exams often fail to assess all required mathematical skills. It is therefore advised that large-scale exams still include some handwritten open-answer questions. However, assessing those handwritten questions with multiple assessors is a daunting task in terms of grading reliability and feedback. This paper presents a grading approach using semi-automated assessment with atomic feedback. Exam designers preset atomic feedback items with partial grades; assessors then need only tick the items relevant to a student's answer, which even allows 'blind grading', where the underlying grades are not shown to the assessors. The approach may lead to a smoother and more reliable correction process in which students receive feedback rather than grades alone. The experiment took place during a large-scale math exam organized by the Flemish Exam Commission, and this paper includes preliminary results of assessors' and students' impressions.
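To make the grading mechanism concrete, the following is a minimal sketch, not the authors' implementation: it assumes atomic feedback items carry preset partial grades, that assessors tick items by their text only (blind grading), and that a grade is computed by summing the ticked items. All item texts, grade values, and function names are hypothetical.

```python
# Minimal sketch (assumed model, not the authors' system): atomic feedback
# items with hidden partial grades, ticked by an assessor, summed into a grade.
from dataclasses import dataclass

@dataclass(frozen=True)
class FeedbackItem:
    description: str      # atomic feedback shown to assessor and student
    partial_grade: float  # preset by the exam designer, hidden during blind grading

# Hypothetical items for one open-answer question (illustrative values only).
ITEMS = [
    FeedbackItem("Correctly applies the product rule", 1.0),
    FeedbackItem("Simplifies the derivative correctly", 0.5),
    FeedbackItem("Sign error in the final step", -0.5),
]

def assessor_view(items):
    """Blind grading: the assessor sees only the feedback texts, not the grades."""
    return [item.description for item in items]

def grade(items, ticked_indices, max_grade=1.5):
    """Sum the partial grades of the ticked items, clamped to [0, max_grade]."""
    total = sum(items[i].partial_grade for i in ticked_indices)
    return max(0.0, min(max_grade, total))

if __name__ == "__main__":
    print(assessor_view(ITEMS))                  # what the assessor ticks from
    print(grade(ITEMS, ticked_indices=[0, 2]))   # 1.0 - 0.5 = 0.5
```

In such a setup the ticked item texts double as the feedback returned to the student, while the partial grades stay with the exam designer, which is the separation the abstract describes.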