Conference paper, 2022

CAISAR: A platform for Characterizing Artificial Intelligence Safety and Robustness

Abstract

We present CAISAR, an open-source platform under active development for characterizing the robustness and safety of AI systems. CAISAR provides a unified entry point for defining verification problems using WhyML, the mature and expressive language of the Why3 verification platform. Moreover, CAISAR orchestrates and composes state-of-the-art machine learning verification tools which, individually, cannot efficiently handle all problems but, collectively, can cover a growing number of properties. Our aim is to assist, on the one hand, the V&V process, by reducing the burden of choosing the methodology tailored to a given verification problem, and, on the other hand, tool developers, by factorizing useful features (visualization, report generation, property description) in one platform. CAISAR will soon be available at https://git.frama-c.com/pub/caisar.
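To illustrate the kind of specification the abstract refers to, the following is a minimal, purely illustrative WhyML sketch of a local-robustness goal over a single scalar input. The theory name and the symbols score, x0, eps, and delta are hypothetical placeholders; score stands in for a network output, and CAISAR's actual neural-network primitives and WhyML extensions are not shown here.

  theory LocalRobustness
    use real.Real

    (* Hypothetical scalar classifier output; stands in for a network. *)
    function score real : real

    constant x0    : real          (* reference input *)
    constant eps   : real = 0.01   (* perturbation radius *)
    constant delta : real = 0.1    (* allowed output deviation *)

    (* Every input within eps of x0 yields a score within delta of score x0. *)
    goal Robust:
      forall x: real.
        x0 - eps <= x /\ x <= x0 + eps ->
        score x0 - delta <= score x /\ score x <= score x0 + delta
  end

A goal of this shape would then be discharged by whichever back-end verification tool the platform selects, which is the orchestration role the abstract describes.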
Main file: main.pdf (329.37 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03687211, version 1 (03-06-2022)
hal-03687211, version 2 (21-06-2022)

Identifiers

  • HAL Id: hal-03687211, version 2

Cite

Julien Girard-Satabin, Michele Alberti, François Bobot, Zakaria Chihani, Augustin Lemesle. CAISAR: A platform for Characterizing Artificial Intelligence Safety and Robustness. AISafety, Jul 2022, Vienna, Austria. ⟨hal-03687211v2⟩