Preprint / working paper, 2024

Marginal and training-conditional guarantees in one-shot federated conformal prediction

Abstract

We study conformal prediction in the one-shot federated learning setting. The main goal is to compute marginally and training-conditionally valid prediction sets, at the server level, in only one round of communication between the agents and the server. Using the quantile-of-quantiles family of estimators and split conformal prediction, we introduce a collection of computationally efficient and distribution-free algorithms that satisfy the aforementioned requirements. Our approaches build on theoretical results on order statistics and on an analysis of the Beta-Beta distribution. We also prove upper bounds on the coverage of all proposed algorithms when the nonconformity scores are almost surely distinct. For algorithms with training-conditional guarantees, these bounds are of the same order of magnitude as those of the centralized case. Remarkably, this implies that the one-shot federated learning setting entails no significant loss compared to the centralized case. Our experiments confirm that our algorithms return prediction sets with coverage and length similar to those obtained in a centralized setting.
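To make the mechanism concrete, here is a minimal Python sketch of the quantile-of-quantiles idea described above: each agent sends a single order statistic of its local nonconformity scores, and the server returns an order statistic of these values as the prediction-set threshold, so only one round of communication is needed. The index choices l and k below are naive placeholders of our own; in the paper these indices are calibrated through the Beta-Beta distribution to obtain the stated marginal or training-conditional guarantees.

import numpy as np

rng = np.random.default_rng(0)
m, n, alpha = 20, 100, 0.1   # agents, calibration scores per agent, miscoverage level

# Split conformal: each agent holds n nonconformity scores computed on its
# own held-out calibration set with a pre-trained model.
agent_scores = rng.normal(size=(m, n))

def one_shot_threshold(scores, l, k):
    # Agent side: each agent transmits only its l-th smallest score (one scalar).
    local_qs = np.sort(scores, axis=1)[:, l - 1]
    # Server side: the threshold is the k-th smallest of the m received
    # values, i.e. a quantile of quantiles.
    return np.sort(local_qs)[k - 1]

# Placeholder indices (conservative, for illustration only); the paper's
# (l, k) are chosen via the Beta-Beta distribution to match 1 - alpha.
l = int(np.ceil((1 - alpha) * (n + 1)))
k = int(np.ceil((1 - alpha) * (m + 1)))
threshold = one_shot_threshold(agent_scores, l, k)

# A test point is covered whenever its nonconformity score falls below the
# threshold; the prediction set is {y : score(x, y) <= threshold}.
test_scores = rng.normal(size=100_000)
print(f"empirical coverage: {np.mean(test_scores <= threshold):.3f}")

Note that the single scalar sent per agent is what makes the protocol one-shot: no raw data or model updates ever leave the agents, and with the conservative placeholder indices above the empirical coverage overshoots 1 - alpha.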
Main file: main.pdf (6.39 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04579882, version 1 (18-05-2024)

Cite

Pierre Humbert, Batiste Le Bars, Aurélien Bellet, Sylvain Arlot. Marginal and training-conditional guarantees in one-shot federated conformal prediction. 2024. ⟨hal-04579882⟩