Influence of expertise on human and machine visual attention in a medical image classification task
Abstract
In many different domains, experts are able to solve complex tasks after glancing very briefly at an image (e.g. radiologists, pilots...). However, the perceptual mechanisms underlying expert performance are still largely unknown. Recently, several machine learning algorithms have been shown to outperform human experts in specific tasks such as skin cancer classification. Yet, like humans, these algorithms often behave as black boxes, and their information-processing pipeline remains opaque. This lack of transparency and interpretability is highly problematic in applications involving human lives, such as healthcare.
In this work, we directly compare human visual attention to machine visual attention on the same visual task. We designed a medical diagnosis task involving the detection of lesions in 250 small-bowel endoscopic images. We collected eye movements from 22 novices and gastroenterologists with varying degrees of expertise while they classified these images according to their pathological status. In parallel, we trained a deep learning algorithm on the exact same task. We show that the post-hoc artificial attention maps (i.e. the image regions the algorithm relies on most to reach a decision) are significantly closer to the attention maps of human experts than to those of human novices. Interestingly, this holds for pathological images, but not for non-pathological ones. By understanding the similarities between the visual decision-making processes of human and machine experts, we hope to inform both the training of new doctors and the architecture of new algorithms.
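As a rough illustration of the kind of comparison described above, the sketch below builds a human attention map from gaze fixations and scores its agreement with a machine saliency map. The abstract does not specify which post-hoc attention method or similarity metric was used; Grad-CAM-style maps, Gaussian-blurred fixation maps, and Pearson correlation are assumptions here, and all function names are illustrative.

```python
import numpy as np

def fixation_map(fixations, shape, sigma=30.0):
    """Build a human attention map by accumulating a Gaussian
    centred on each (x, y) gaze fixation (assumed representation)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    amap = np.zeros(shape, dtype=np.float64)
    for fx, fy in fixations:
        amap += np.exp(-((xs - fx) ** 2 + (ys - fy) ** 2) / (2 * sigma ** 2))
    return amap / (amap.max() + 1e-12)

def attention_similarity(machine_map, human_map):
    """Pearson correlation between a machine saliency map (e.g. a
    Grad-CAM map resized to the image) and a human fixation map;
    one of several possible agreement metrics."""
    m = (machine_map - machine_map.mean()) / (machine_map.std() + 1e-12)
    h = (human_map - human_map.mean()) / (human_map.std() + 1e-12)
    return float((m * h).mean())

# Toy usage with synthetic data (no real gaze or model outputs)
rng = np.random.default_rng(0)
human = fixation_map([(120, 80), (130, 90), (200, 150)], shape=(240, 320))
machine = rng.random((240, 320))  # stand-in for a resized saliency map
print(attention_similarity(machine, human))
```

In this framing, the claim in the abstract would correspond to the similarity score being reliably higher when the human map comes from expert gaze than from novice gaze, at least on pathological images.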