Trustworthy ML Assessment Methodology
Abstract
An ML-based system is a software system that incorporates machine learning. The adoption of an
ML-based system depends on its ability to deliver the expected service in a secure manner (i.e.,
adherence to specifications), to meet user expectations (i.e., fitness for purpose), and to ensure
uninterrupted service delivery. Thus, trustworthiness is closely related to dependability. It is
therefore imperative that ML-based critical systems are validated, accurate, accountable,
explainable, resilient, secure and compliant with regulations and standards. Most academic
research on machine learning has focused on the models' algorithmic properties. However, it is not
sufficient to rely on advances in algorithmic research alone to develop trustworthy AI products.
Building such products also requires engineering the entire ML lifecycle, which includes data
preparation, algorithm design, development and deployment, as well as operation, monitoring and
management. Accordingly, the trustworthiness of such a system should
be systematically established and evaluated throughout its lifecycle. Traditional methods for
testing and validating algorithms are inadequate due to the multi-dimensional nature of
trustworthiness, which includes a range of factors such as accountability, accuracy, controllability,
correctness, data quality, reliability, resilience, robustness, security, safety, transparency,
fairness and privacy. Assessing ML-based systems must therefore identify and address quality
requirements, including socio-technical system risks and process considerations. In this talk, we
highlight how trustworthiness characterisation and assessment are positioned within the ML
engineering process. We then focus on six key trustworthiness attributes, namely robustness,
effectiveness, dependability, usability, human agency (including explainability/interpretability), and
human oversight, and provide references illustrated with some indicators.
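As a minimal illustration of what such an indicator might look like, the Python sketch below scores robustness as the fraction of clean accuracy retained when inputs receive small Gaussian perturbations. The function name, noise model and toy classifier are assumptions made for illustration only; they do not reproduce the indicators referenced in the talk.

import numpy as np

# Hypothetical robustness indicator: share of clean accuracy retained
# when inputs are perturbed with Gaussian noise of standard deviation sigma.
def perturbation_robustness(predict, X, y, sigma=0.05, n_trials=10, seed=0):
    rng = np.random.default_rng(seed)
    clean_acc = np.mean(predict(X) == y)
    if clean_acc == 0:
        return 0.0
    noisy_accs = [
        np.mean(predict(X + rng.normal(scale=sigma, size=X.shape)) == y)
        for _ in range(n_trials)
    ]
    return float(np.mean(noisy_accs) / clean_acc)

# Toy usage with synthetic data and a simple linear decision rule.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
predict = lambda data: (data[:, 0] + data[:, 1] > 0).astype(int)
print(f"robustness indicator: {perturbation_robustness(predict, X, y):.3f}")

A value close to 1 would indicate that the toy model's accuracy is largely preserved under this particular perturbation; in practice, an indicator of this kind would only be one element among the broader set of attributes and lifecycle evidence discussed above.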