Fairness of MOOC Completion Predictions Across Demographics and Contextual Variables
Abstract
While machine learning (ML) has been extensively used in Massive Open Online Courses (MOOCs) to predict whether learners are at risk of dropping out or failing, very little work has investigated the bias or possible unfairness of the predictions generated by these models. This is nonetheless important, because MOOCs typically engage very diverse audiences worldwide, and it is unclear whether existing ML models generate fair predictions for all learners. In this paper, we explore the fairness of ML models designed to predict course completion in a MOOC offered mostly in Europe and Africa. To do so, we leverage and compare ABROCA and MADD, two fairness metrics that were proposed specifically for education. Our results show that some ML models are more likely to generate unfair predictions than others. Even in the fairest models, we found biases in their predictions related to how learners enrolled, as well as to their country, gender, age, and job status. These biases are particularly detrimental to African learners, a key finding given that they are an understudied population in AI fairness analyses in education.