Conference paper, Year: 2024

On assessing ML model robustness: A methodological framework

Afef Awadid
  • Role: Author

Abstract

Due to their uncertainty and vulnerability to adversarial attacks, machine learning (ML) models can lead to severe consequences, including the loss of human life, when embedded in safety-critical systems such as autonomous vehicles. Therefore, it is crucial to assess the robustness of such models before integrating them into these systems. ML model robustness refers to the ability of an ML model to maintain its level of performance under any circumstances. Against this background, the Confiance.ai research program proposes a methodological framework for assessing the robustness of ML models. The framework encompasses methodological processes (guidelines) captured in Capella models, along with a set of supporting tools. This paper aims to provide an overview of this framework and its application in an industrial setting.
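The framework itself is captured in Capella models and supporting tools rather than code, but the abstract's definition of robustness, maintaining a level of performance under changing circumstances, can be illustrated concretely. The following minimal sketch (hypothetical; the dataset, model, and Gaussian-noise perturbation are this editor's assumptions, not the paper's method) measures how a classifier's accuracy degrades as input perturbations grow:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative robustness check: accuracy retention under Gaussian
# input noise of increasing magnitude (a common, simple perturbation
# model; the paper's framework is more general).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
clean_acc = model.score(X_test, y_test)

rng = np.random.default_rng(0)
for sigma in (0.0, 0.5, 1.0, 2.0):
    noisy = X_test + rng.normal(0.0, sigma, X_test.shape)
    acc = model.score(noisy, y_test)
    # Retention close to 1.0 indicates the model keeps its performance
    # under this perturbation level.
    print(f"sigma={sigma:.1f}  accuracy={acc:.3f}  retention={acc / clean_acc:.3f}")
```

A fuller assessment along these lines would also cover adversarial (worst-case) perturbations and distribution shift, which is the kind of scope a methodological framework such as the one described here is meant to organize.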
No file deposited

Dates and versions

hal-04682746, version 1 (30-08-2024)

Identifiers

  • HAL Id: hal-04682746, version 1

Cite

Afef Awadid, Boris Robert. On assessing ML model robustness: A methodological framework. Symposium on Scaling AI Assessments Tools, Ecosystems and Business Models (SAIA), Sep 2024, Cologne, Germany. ⟨hal-04682746⟩
56 Views
0 Downloads
