Conference paper, Year: 2023

Protecting ownership rights of ML models using watermarking in the light of adversarial attacks

Katarzyna Kapusta
Lucas Mattioli
Boussad Addad
Mohammed Lansari

Abstract

In this paper, we present and analyze two novel and seemingly distant research trends in Machine Learning: ML watermarking and adversarial patches. First, we show how ML watermarking uses specially crafted inputs to provide a proof of model ownership. Second, we demonstrate how an attacker can craft adversarial samples that trigger abnormal behavior in a model and thus mount an ambiguity attack on ML watermarking. Finally, we describe three countermeasures that can be applied to prevent ambiguity attacks. We illustrate our work with the example of a binary classification model for welding inspection.
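The abstract condenses a trigger-set (backdoor-style) watermarking scheme: the owner trains the model so that a secret set of trigger inputs receives predetermined labels, and later proves ownership by showing the model agrees with those labels far more often than chance. The following is a minimal sketch of such a verification check; the function name `verify_watermark`, the callable-model interface, and the 0.9 threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def verify_watermark(predict, trigger_inputs, trigger_labels, threshold=0.9):
    """Trigger-set ownership check (illustrative sketch, not the paper's code).

    `predict` is assumed to be any callable mapping a batch of inputs to
    class labels. Ownership is claimed if the model reproduces the secret
    trigger labels above `threshold` accuracy, which would be far above
    chance for a model that was never watermarked.
    """
    preds = np.asarray(predict(trigger_inputs))
    accuracy = float(np.mean(preds == np.asarray(trigger_labels)))
    return accuracy >= threshold, accuracy

# Toy usage: a stand-in for a binary weld-inspection classifier into which a
# watermark was (hypothetically) embedded, so it answers 1 on the triggers.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    triggers = rng.normal(size=(20, 8))   # secret trigger inputs
    labels = np.ones(20, dtype=int)       # owner-chosen trigger labels

    def watermarked_model(x):
        # dummy model that always reproduces the owner's trigger labels
        return np.ones(len(x), dtype=int)

    ok, acc = verify_watermark(watermarked_model, triggers, labels)
    print(f"ownership verified: {ok} (trigger accuracy {acc:.2f})")
```

The ambiguity attack analyzed in the paper exploits exactly this kind of check: an attacker who can craft adversarial samples that the model consistently mislabels can present them as a fake trigger set and pass the same threshold test, which motivates the countermeasures the paper describes.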
Main file
Protecting_ownership_rights_of_ML_models_using_watermarking_in_the_light_of_adversarial_attacks.pdf (1.65 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04264033, version 1 (30-10-2023)

License

Copyright (All rights reserved)

Identifiers

  • HAL Id: hal-04264033, version 1

Cite

Katarzyna Kapusta, Lucas Mattioli, Boussad Addad, Mohammed Lansari. Protecting ownership rights of ML models using watermarking in the light of adversarial attacks. Workshop AITA AI Trustworthiness Assessment - AAAI Spring Symposium, Mar 2023, Palo Alto (California), United States. ⟨hal-04264033⟩