A probabilistic framework for mutation testing in deep neural networks
Abstract
Context: Mutation Testing (MT) is an important tool in traditional Software Engineering (SE) white-box
testing. It aims to artificially inject faults into a system to evaluate a test suite's capability to detect them,
under the assumption that the test suite's defect-finding capability will translate to real faults. While MT has long
been used in SE, it has only recently started gaining the attention of the Deep Learning (DL) community, with
researchers adapting it to improve the testability of DL models and the trustworthiness of DL systems.
Objective: Although several MT techniques have been proposed, most of them neglect the stochasticity inherent
to DL that results from the training phase. Even the latest DL-specific MT approaches, which tackle MT
through a statistical lens, can give inconsistent results. Indeed, because their test statistic is computed on a fixed
set of sampled training instances, the kill decision can change from one sampled set to another, whereas it should
be consistent for any set of instances.
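The following minimal sketch (not the paper's method; the accuracy distributions, sample size, and the two-sample t-test are illustrative assumptions) shows how a kill decision based on a fixed sample of trained instances can flip between samples:

```python
# Illustrative sketch: a sampling-based "mutant killed" decision can vary across runs.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_instances = 5          # trained instances sampled per decision (assumption)
n_repetitions = 20       # how many times we repeat the sampling

# Hypothetical accuracy distributions of the original and mutated model;
# their overlap reflects the stochasticity of DNN training.
original = lambda: rng.normal(loc=0.90, scale=0.02, size=n_instances)
mutant   = lambda: rng.normal(loc=0.88, scale=0.02, size=n_instances)

decisions = []
for _ in range(n_repetitions):
    p_value = ttest_ind(original(), mutant()).pvalue
    decisions.append(p_value < 0.05)   # "killed" if the difference is significant

print(f"mutant flagged as killed in {sum(decisions)}/{n_repetitions} repetitions")
# With overlapping distributions and few instances, the verdict varies across
# repetitions, which is the inconsistency a probabilistic formulation aims to avoid.
```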
Methods: In this work, we propose a Probabilistic Mutation Testing (PMT) approach that alleviates this
inconsistency problem and allows for a more consistent decision on whether a mutant is killed or not.
Results: We evaluate PMT on three models and eight mutation operators used in previously proposed MT
methods and show that it enables a more consistent and informed decision on mutations. We also
analyze the trade-off between the approximation error and the cost of our method, showing that a relatively
small error can be achieved at a manageable cost.
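As a generic back-of-the-envelope illustration (an assumption about Monte Carlo estimation over repeated trainings, not the paper's exact analysis), if a probability is estimated from N independently trained instances, its worst-case standard error shrinks as roughly 1/(2*sqrt(N)) while the training cost grows linearly in N:

```python
# Worst-case standard error of a Bernoulli proportion estimated from N trainings.
for n in (5, 10, 20, 50, 100):
    worst_case_std_err = 0.5 / n ** 0.5
    print(f"N={n:3d} trainings  ->  approximation error <= ~{worst_case_std_err:.3f}")
```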
Conclusion: Our results show the limitations of current MT practices for DNNs and the need to rethink them.
We believe PMT is a first step in that direction, as it removes the inconsistency across test executions that
affects previous methods due to the stochasticity of DNN training.