Learning to identify and settle dilemmas through contextual user preferences
Abstract
Artificial Intelligence systems have a significant impact on human lives. Machine Ethics seeks to align these systems with human values by integrating “ethical considerations”. However, most approaches consider a single objective and thus cannot accommodate diverse, contextual human preferences. Multi-Objective Reinforcement Learning algorithms account for multiple preferences, but these preferences are often neither intelligible nor contextual (e.g., weighted preferences). Our novel approach identifies dilemmas, presents them to users, and learns to settle them based on intelligible, contextualized preferences over actions. By exposing dilemmas and triggering interactions, we aim to maximize understandability and opportunities for user-system co-construction, thus empowering users. The block-based architecture relies on simple mechanisms that can be updated and improved independently. Validation on a Smart Grid use case shows that our algorithm finds actions for various trade-offs and quickly learns to settle dilemmas, reducing the cognitive load on users.
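To make the notion of a dilemma concrete, here is a minimal sketch in Python, assuming a common multi-objective reading that the abstract does not fix: a dilemma is a situation where several actions remain Pareto-optimal across the objectives, so the system cannot decide alone and must defer to the user's contextualized preference. All function names, objectives, and interest values below are hypothetical illustrations, not the paper's actual algorithm.

```python
from typing import Dict, List, Sequence

def pareto_dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if interest vector `a` dominates `b`: at least as good on
    every objective and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(interests: Dict[str, Sequence[float]]) -> List[str]:
    """Actions whose multi-objective interests no other action dominates."""
    return [
        name for name, vec in interests.items()
        if not any(pareto_dominates(other, vec)
                   for o, other in interests.items() if o != name)
    ]

def is_dilemma(interests: Dict[str, Sequence[float]]) -> bool:
    """A dilemma remains when several non-comparable trade-offs survive."""
    return len(pareto_front(interests)) > 1

# Hypothetical per-action interests over three objectives
# (e.g., comfort, ecology, equity in a Smart Grid setting).
interests = {
    "consume":   (0.9, 0.2, 0.5),
    "store":     (0.4, 0.8, 0.5),
    "give_away": (0.3, 0.7, 0.9),
}

if is_dilemma(interests):
    # Present the remaining trade-offs to the user; their contextualized
    # choice can then be remembered so similar dilemmas are settled
    # automatically, reducing cognitive load over time.
    print("Dilemma between:", pareto_front(interests))
```

Under this reading, storing the user's choice per context is what lets the system settle recurring dilemmas without repeatedly querying the user, which is the source of the reduced cognitive load claimed above.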