Preprint, Working Paper. Year: 2023

Sequential Informed Federated Unlearning: Efficient and Provable Client Unlearning in Federated Optimization

Abstract

The aim of Machine Unlearning (MU) is to provide theoretical guarantees on the removal of the contribution of a given data point from a training procedure. Federated Unlearning (FU) extends MU to the removal of a given client's contribution from a federated training routine. Current FU approaches are generally not scalable and do not come with a sound theoretical quantification of the effectiveness of unlearning. In this work, we present Informed Federated Unlearning (IFU), a novel, efficient, and quantifiable FU approach. Upon an unlearning request from a given client, IFU identifies the optimal FL iteration from which training has to be re-initialized, with unlearning guarantees obtained through a randomized perturbation mechanism. The theory of IFU is also extended to account for sequential unlearning requests. Experimental results on different tasks and datasets show that IFU leads to more efficient unlearning procedures than basic retraining and state-of-the-art FU approaches.
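The abstract describes two ingredients: rolling training back to a suitably chosen FL round and masking the client's residual influence with a randomized perturbation. Since the paper itself is not reproduced on this page, the snippet below is only a minimal sketch of that general idea, not the authors' algorithm; the function names, the per-round contribution norms, the budget, and the noise scale are all illustrative assumptions.

```python
import numpy as np


def select_unlearning_checkpoint(client_norms, budget):
    """Pick the latest round whose accumulated contribution from the target
    client stays below `budget`; training restarts from that round's model.

    client_norms: per-round norms bounding the target client's contribution
        to each global update (assumed to be tracked by the server).
    """
    cumulative = 0.0
    chosen = 0
    for t, norm in enumerate(client_norms):
        cumulative += norm
        if cumulative > budget:
            break
        chosen = t + 1
    return chosen


def perturb_model(params, noise_std, seed=None):
    """Add isotropic Gaussian noise to the rolled-back model parameters to
    statistically mask the remaining influence of the unlearned client."""
    rng = np.random.default_rng(seed)
    return params + rng.normal(0.0, noise_std, size=params.shape)


# Toy usage: 10 FL rounds, 5-dimensional model, contribution norms per round.
rng = np.random.default_rng(0)
checkpoints = [rng.normal(size=5) for _ in range(11)]    # global model after each round
client_norms = rng.uniform(0.05, 0.2, size=10).tolist()  # assumed per-round impact bounds

t_star = select_unlearning_checkpoint(client_norms, budget=0.5)
restart_model = perturb_model(checkpoints[t_star], noise_std=0.1, seed=1)
print(f"re-initialize federated training from round {t_star}")
# Federated training would then resume from `restart_model`
# with the unlearned client excluded.
```

Compared with full retraining from scratch, restarting from an intermediate checkpoint reuses the rounds that were negligibly affected by the departing client, which is where the efficiency gain claimed in the abstract comes from.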
Main file: Sequential_Informed_Federated_Unlearning.pdf (395.93 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03910848, version 1 (22-12-2022)

Identifiers

  • HAL Id: hal-03910848, version 1

Cite

Yann Fraboni, Richard Vidal, Laetitia Kameni, Marco Lorenzi. Sequential Informed Federated Unlearning: Efficient and Provable Client Unlearning in Federated Optimization. 2023. ⟨hal-03910848⟩
91 Views
165 Downloads
