Ensuring the Reliability of AI Systems through Methodological Processes
Abstract
To gain a competitive advantage in industry through the effective deployment of AI, traditional engineering disciplines must be extended to encompass AI-specific considerations. This makes it possible to assess and mitigate the risks associated with AI technologies and, in turn, to leverage their potential to enhance system autonomy. Maintaining a high level of trust among stakeholders, such as regulatory bodies, customers, and end-users, is also crucial. This paper presents findings from the confiance.ai research program, which focuses on developing methodological processes to guide the engineering of reliable AI systems. These processes are the result of collaborative efforts between multidisciplinary experts concerned with the trustworthiness of AI systems. Examples of these processes include trustworthiness risk analysis and data trustworthiness assessment.