Conference paper, 2023

Trust in Automation: Analysis and Model of Operator Trust in Decision Aid AI Over Time

Abstract

Understanding how human trust in AI evolves over time is essential to identify the limits of each party and provide solutions for optimal collaboration. With this goal in mind, we examine the factors that directly or indirectly influence trust, whether they come from the human, the AI, or the environment. We then summarize methods for measuring trust, both subjective and objective, to show which ones are best suited for longitudinal studies. Next, we focus on the main driving force behind the evolution of trust: feedback. We justify how learning feedback can be transposed to trust and which types of feedback can be applied to shape the evolution of trust over time. After reviewing the factors that influence trust and how to measure it, we propose an application example on a maritime surveillance tool with an AI-based decision aid.
Main file

paper8.pdf (897.89 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04328490, version 1 (07-12-2023)

Identifiers

  • HAL Id: hal-04328490, version 1

Cite

Vincent Fer, Daniel Lafond, Gilles Coppin, Mathias Bollaert, Olivier Grisvard, et al.. Trust in Automation: Analysis and Model of Operator Trust in Decision Aid AI Over Time. Conference on Artificial Intelligence for Defense, DGA Maîtrise de l'Information, Nov 2023, Rennes, France. ⟨hal-04328490⟩
118 views
176 downloads
