Label-Only Model Inversion Attacks via Boundary Repulsion

Title: Label-Only Model Inversion Attacks via Boundary Repulsion
Publication Type: Conference Paper
Year of Publication: 2022
Authors: Kahla, Mostafa; Chen, Si; Just, Hoang Anh; Jia, Ruoxi
Conference Name: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Date Published: June
Keywords: accountability, Adversarial attack and defense, Black Box Attacks, composability, Computer architecture, Ethics, face recognition, fairness, Metrics, Neural networks, Predictive models, privacy and ethics in vision, pubcrawl, Resiliency, Semantics, Training data, transparency
Abstract: Recent studies show that state-of-the-art deep neural networks are vulnerable to model inversion attacks, in which access to a model is abused to reconstruct private training data of any given target class. Existing attacks rely on having access to either the complete target model (white-box) or the model's soft labels (black-box). However, no prior work has been done in the harder but more practical scenario, in which the attacker only has access to the model's predicted label, without a confidence measure. In this paper, we introduce an algorithm, Boundary-Repelling Model Inversion (BREP-MI), to invert private training data using only the target model's predicted labels. The key idea of our algorithm is to evaluate the model's predicted labels over a sphere and then estimate the direction to reach the target class's centroid. Using the example of face recognition, we show that the images reconstructed by BREP-MI successfully reproduce the semantics of the private training data for various datasets and target model architectures. We compare BREP-MI with the state-of-the-art white-box and black-box model inversion attacks, and the results show that despite assuming less knowledge about the target model, BREP-MI outperforms the black-box attack and achieves comparable results to the white-box attack. Our code is available online at https://github.com/m-kahla/Label-Only-Model-Inversion-Attacks-via-Boundary-Repulsion
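The abstract's key idea (query hard labels over a sphere around the current point, then move away from the directions that exit the target class) can be illustrated with a minimal sketch. This is not the authors' implementation: `predict_label` is a hypothetical label-only oracle, and the radius-doubling rule and step sizes are assumptions made for the illustration.

```python
import numpy as np

def brep_mi_step(z, target_label, predict_label, radius=2.0,
                 n_points=32, step_size=0.5):
    """One boundary-repulsion update (sketch of the idea in the abstract).

    `predict_label` stands in for the black-box target model and returns
    only the predicted class, with no confidence score.
    """
    dim = z.shape[0]
    # Sample unit directions roughly uniformly on the sphere around z.
    dirs = np.random.randn(n_points, dim)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # Query the label-only oracle at each sphere point.
    labels = np.array([predict_label(z + radius * d) for d in dirs])
    misses = dirs[labels != target_label]
    if len(misses) == 0:
        # The whole sphere sits inside the target class: grow the sphere
        # instead of moving, so the search keeps making progress.
        return z, radius * 2.0
    # Move opposite the mean direction of the misclassified points, i.e.,
    # get repelled by the decision boundary toward the class interior.
    repel = -misses.mean(axis=0)
    repel /= np.linalg.norm(repel) + 1e-12
    return z + step_size * repel, radius
```

In the paper's face-recognition setting, `z` would be the latent code of a GAN trained on public data, decoded to an image before each label query; a driver loop would repeat `brep_mi_step` until the sphere stops growing or a query budget is exhausted.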
DOI: 10.1109/CVPR52688.2022.01462
Citation Key: kahla_label-only_2022