Privacy in Machine Learning
Abstract
Privacy considerations arise as soon as data is collected on individuals, on groups of individuals, on legal entities, etc. More specifically, we look at the setting where one processes data D through a mechanism M, which can be anything from data publication to basic statistics computation, decision rule learning, or complex machine learning tasks, and wants the result M(D) to be made public. The natural question, from a privacy standpoint, is whether the mechanism M can be "reverted" in order to learn sensitive information about D. For instance, if M is the identity function, the publication of M(D) leaks full information about D; even though the notion of privacy has not been rigorously defined yet, we can intuitively qualify such a mechanism as "non-private".
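To make this setup concrete, here is a minimal Python sketch (not taken from the lecture): an identity mechanism, which discloses D entirely, contrasted with the standard Laplace mechanism releasing a noisy mean. The dataset, the choice of epsilon, and the function names are illustrative assumptions.

```python
import numpy as np

# Toy dataset D: one sensitive value per individual, each in [0, 1].
D = np.array([0.1, 0.4, 0.35, 0.8, 0.95])

def identity_mechanism(data):
    """Publishing the data itself: M(D) = D leaks full information."""
    return data

def noisy_mean(data, epsilon=1.0):
    """Release the mean with Laplace noise calibrated to the mean's
    sensitivity (1/n for values in [0, 1]), as in the standard
    Laplace mechanism from differential privacy."""
    sensitivity = 1.0 / len(data)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return data.mean() + noise

print(identity_mechanism(D))  # full disclosure of D: "non-private"
print(noisy_mean(D))          # a perturbed statistic about D
```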
This manuscript is a transcription of Prof. Rachel Cummings' lecture titled Privacy in Machine Learning, given at the 2022 Spring School of Theoretical Computer Science at the CIRM, Marseille, France. Any error in this document is likely due to the transcription and should not be attributed to Prof. Cummings.
Domains
Logic in Computer Science [cs.LO]

Origin: Files produced by the author(s)