Assurance Cases to face the complexity of ML-based systems verification
Abstract
The verification and validation of AI-based systems raise new issues that are not easily addressed by existing practices and standards. We argue that this gap is in fact an opportunity to introduce new practices and to establish a clearer, more formal link between the engineering activities and artefacts, the expected properties of the system, and the verification and validation evidence.
In this paper, we therefore describe and illustrate an approach that integrates (i) the definition and modelling of an AI-based system engineering workflow, (ii) the identification of trustworthiness properties, and (iii) the argumentation demonstrating that these properties are satisfied. The approach is centred on Assurance Cases, a semi-formal representation of the argumentation supporting the claim of system trustworthiness. In addition, we present supporting tools for this formalism that enable the automatic production of Verification and Validation plans for specific properties of AI-based systems.