Towards Engineering Processes to Guide the Development of Trustworthy ML Systems
Abstract
Engineering reliable Machine Learning (ML)-based safety-critical systems, such as autonomous vehicles, requires a comprehensive understanding of the intricate interplay between several disciplines, including, among others, ML algorithms, systems engineering, and safety engineering. This complexity arises from the dynamic nature of ML models, the uncertainty of real-world data, and the potential for adversarial attacks. To address this challenge, the Confiance.ai research program proposes an end-to-end method to guide the development of trustworthy ML systems. The method comprises engineering processes and a set of associated tools that provide model-based guidelines covering the entire ML systems engineering lifecycle. The proposal is the result of a collaboration among multidisciplinary experts focused on the trustworthiness of ML systems. The method is illustrated with an example of an ML model robustness evaluation process.
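To give a concrete flavor of what a robustness evaluation step might involve, the following is a minimal sketch, not the Confiance.ai process itself: it estimates, for a toy classifier on synthetic data, the fraction of inputs whose prediction stays stable under bounded random L-infinity perturbations. The model, data, and epsilon budgets are all illustrative assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained ML model: a fixed linear classifier
# (10 input features, 3 classes). Purely illustrative.
W = rng.normal(size=(10, 3))

def predict(x: np.ndarray) -> np.ndarray:
    """Return the predicted class for each row of x."""
    return np.argmax(x @ W, axis=1)

# Synthetic evaluation set standing in for operational data.
X = rng.normal(size=(500, 10))
y = predict(X)  # reference labels taken from the clean predictions

def empirical_robustness(X, y, epsilon: float, n_trials: int = 20) -> float:
    """Fraction of inputs whose prediction remains unchanged under
    random L-infinity perturbations of magnitude at most epsilon."""
    stable = np.ones(len(X), dtype=bool)
    for _ in range(n_trials):
        noise = rng.uniform(-epsilon, epsilon, size=X.shape)
        stable &= predict(X + noise) == y
    return stable.mean()

for eps in (0.01, 0.1, 0.5):
    print(f"epsilon={eps}: robust fraction = {empirical_robustness(X, y, eps):.3f}")
```

Random perturbations only give an optimistic upper bound on robustness; a certification- or attack-based evaluation (e.g., gradient-based adversarial search) would typically complement such sampling in a full process.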