Assessment of data augmentation, dropout with L2 regularization, and differential privacy against membership inference attacks
Abstract
Machine learning (ML) has revolutionized various industries, but concerns about privacy
and security have emerged as significant challenges. Membership inference attacks (MIAs)
pose a serious threat by attempting to determine whether a specific data record was used
to train an ML model. In this study, we evaluate three defense strategies against MIAs: data
augmentation (DA), dropout with L2 regularization, and differential privacy (DP). Through
experiments, we assess the effectiveness of these techniques in mitigating the success of
MIAs while maintaining acceptable model accuracy. Our findings demonstrate that DA not
only improves model accuracy but also enhances privacy protection. The dropout and L2
regularization approach effectively reduces the impact of MIAs without compromising accuracy.
However, adopting DP introduces a trade-off: it mitigates MIAs but degrades
model accuracy. Our DA defense strategy, for instance, shows promising results, with privacy
improvements of 12.97%, 15.82%, and 10.28% for the MNIST, CIFAR-10, and CIFAR-100
datasets, respectively. These insights contribute to the growing field of privacy protection in
ML and highlight the significance of safeguarding sensitive data. Further research is needed
to advance privacy-preserving techniques and address the evolving landscape of ML security.
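To make the dropout-with-L2-regularization defense concrete, the sketch below shows one way such a model could be assembled in Keras. This is a minimal illustration under assumed settings: the architecture, the 0.5 dropout rate, and the 1e-4 L2 coefficient are hypothetical choices for CIFAR-10-sized inputs, not the configurations evaluated in this study.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_regularized_model(num_classes: int = 10) -> tf.keras.Model:
    """CNN with dropout and an L2 weight penalty on every learned layer.

    Both regularizers discourage overfitting, the behavior MIAs exploit:
    an overfit model is noticeably more confident on its training members
    than on unseen records.
    """
    return tf.keras.Sequential([
        layers.Input(shape=(32, 32, 3)),   # CIFAR-10-sized input (assumption)
        layers.Conv2D(32, 3, activation="relu",
                      kernel_regularizer=regularizers.l2(1e-4)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu",
                      kernel_regularizer=regularizers.l2(1e-4)),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.5),               # illustrative dropout rate
        layers.Dense(128, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-4)),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_regularized_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The design intent is that both mechanisms shrink the gap between confidence on training members and on unseen records, which is the signal membership inference relies on; this is why such a defense can reduce attack success without a large accuracy cost.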