The Influence of Training Parameters on Neural Networks' Vulnerability to Membership Inference Attacks
Abstract
With Machine Learning (ML) models being increasingly applied in sensitive domains, the related privacy concerns are rising. Neural networks (NNs) are vulnerable to so-called membership inference attacks (MIAs), which aim to determine whether a particular data sample was used for training the model. The factors that render NNs prone to this privacy attack are not yet fully understood. However, previous work suggests that the setup of the models and the training process might affect a model's vulnerability to MIAs. To investigate these factors in more detail, we set out to experimentally evaluate the influence of the training choices in NNs on the models' vulnerability. Our analyses highlight that the batch size, the activation function, and the application and placement of batch normalization and dropout have the highest impact on the success of MIAs. Additionally, we applied statistical analyses to the experiment results and found a highly positive correlation between a model's ability to resist MIAs and its generalization capacity. We also defined a metric to measure the difference in the distributions of loss values between member and non-member data samples and observed that models scoring higher values on that metric were consistently more exposed to the attack. The latter observation was further confirmed by manually generating predictions for member and non-member samples producing loss values within specific distributions and launching MIAs on them.
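To make the connection between loss distributions and attack success concrete, the following is a minimal, hypothetical sketch of a loss-threshold MIA (a standard attack variant, not necessarily the exact attack used in the paper): training members tend to have lower loss than non-members, so an attacker guesses "member" whenever a sample's loss falls below a threshold. The loss distributions below are synthetic and chosen only for illustration.

```python
import random

# Synthetic loss values (NOT from the paper): members of the training
# set get low losses, non-members get noticeably higher ones.
random.seed(0)
member_losses = [random.gammavariate(2.0, 0.05) for _ in range(1000)]     # fitted samples: low loss
nonmember_losses = [random.gammavariate(2.0, 0.25) for _ in range(1000)]  # unseen samples: higher loss

losses = member_losses + nonmember_losses
is_member = [True] * 1000 + [False] * 1000

def attack_accuracy(threshold):
    """Fraction of samples whose membership is guessed correctly when
    the attacker predicts 'member' iff loss < threshold."""
    hits = sum((loss < threshold) == member
               for loss, member in zip(losses, is_member))
    return hits / len(losses)

# Sweep candidate thresholds and keep the one with the best accuracy.
candidates = [i / 100 for i in range(1, 101)]
best = max(candidates, key=attack_accuracy)
print(f"best threshold {best:.2f}, attack accuracy {attack_accuracy(best):.3f}")
```

The larger the gap between the two loss distributions (as captured by a metric like the one the paper defines), the further the best attack accuracy climbs above the 0.5 random-guessing baseline.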
- Citation
- BibTeX
Bouanani, O. & Boenisch, F. (2022). The Influence of Training Parameters on Neural Networks' Vulnerability to Membership Inference Attacks. In: Demmler, D., Krupka, D. & Federrath, H. (Eds.), INFORMATIK 2022. Gesellschaft für Informatik, Bonn. (pp. 1227-1246). DOI: 10.18420/inf2022_106
@inproceedings{mci/Bouanani2022,
  author    = {Bouanani, Oussama and Boenisch, Franziska},
  title     = {The Influence of Training Parameters on Neural Networks' Vulnerability to Membership Inference Attacks},
  booktitle = {INFORMATIK 2022},
  year      = {2022},
  editor    = {Demmler, Daniel and Krupka, Daniel and Federrath, Hannes},
  pages     = {1227-1246},
  doi       = {10.18420/inf2022_106},
  publisher = {Gesellschaft für Informatik, Bonn}
}
File | Size | Format
---|---|---
trustai_01.pdf | 204.9 KB | PDF
If no full text (PDF) is linked here, it may, for various reasons (e.g. licensing or copyright), only be available in another digital library. In that case, try accessing it via the linked DOI: 10.18420/inf2022_106
More Info
DOI: 10.18420/inf2022_106
ISBN: 978-3-88579-720-3
ISSN: 1617-5468
Date: 2022
Language: en