Biblio

Filters: Keyword is privacy and ethics in vision
2023-03-31
Kahla, Mostafa, Chen, Si, Just, Hoang Anh, Jia, Ruoxi.  2022.  Label-Only Model Inversion Attacks via Boundary Repulsion. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :15025–15033.
Recent studies show that state-of-the-art deep neural networks are vulnerable to model inversion attacks, in which access to a model is abused to reconstruct private training data of any given target class. Existing attacks rely on having access to either the complete target model (white-box) or the model's soft labels (black-box). However, no prior work addresses the harder but more practical scenario in which the attacker has access only to the model's predicted label, without a confidence measure. In this paper, we introduce an algorithm, Boundary-Repelling Model Inversion (BREP-MI), to invert private training data using only the target model's predicted labels. The key idea of our algorithm is to evaluate the model's predicted labels over a sphere and then estimate the direction to reach the target class's centroid. Using the example of face recognition, we show that the images reconstructed by BREP-MI successfully reproduce the semantics of the private training data for various datasets and target model architectures. We compare BREP-MI with the state-of-the-art white-box and black-box model inversion attacks, and the results show that despite assuming less knowledge about the target model, BREP-MI outperforms the black-box attack and achieves comparable results to the white-box attack. Our code is available online: https://github.com/m-kahla/Label-Only-Model-Inversion-Attacks-via-Boundary-Repulsion
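The label-only update can be pictured with a short sketch. This is a minimal illustration of the sphere-sampling idea described in the abstract, assuming a hypothetical query_label oracle that returns only the model's top-1 label; the function name, parameters, and update rule are illustrative and are not taken from the authors' released code.

```python
import numpy as np

# Hedged sketch of one sphere-sampling step for label-only inversion.
# `query_label` is an assumed black-box oracle returning only the top-1 label;
# all names and constants here are illustrative, not the paper's implementation.

def brep_mi_step(query_label, z, target, radius, n_points=32, step_size=0.1):
    """Estimate an update direction for latent z toward the target class."""
    rng = np.random.default_rng()
    dirs = rng.standard_normal((n_points, z.shape[0]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit vectors on the sphere
    labels = np.array([query_label(z + radius * d) for d in dirs])
    misses = dirs[labels != target]        # directions that exit the target region
    if misses.size == 0:
        return z, radius * 2.0             # whole sphere inside the class: grow it
    step = -misses.mean(axis=0)            # repel from the decision boundary
    step /= np.linalg.norm(step) + 1e-12
    return z + step_size * step, radius
```

Each step queries labels at points on a sphere around the current latent; directions whose labels differ from the target mark the boundary, and the update moves away from their average, deeper into the target class.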
2023-01-06
Golatkar, Aditya, Achille, Alessandro, Wang, Yu-Xiang, Roth, Aaron, Kearns, Michael, Soatto, Stefano.  2022.  Mixed Differential Privacy in Computer Vision. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :8366–8376.
We introduce AdaMix, an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data. While pre-training language models on large public datasets has enabled strong differential privacy (DP) guarantees with minor loss of accuracy, a similar practice yields punishing trade-offs in vision tasks: a few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset. AdaMix incorporates few-shot training, or cross-modal zero-shot learning, on public data prior to private fine-tuning to improve the trade-off. AdaMix reduces the error increase over the non-private upper bound from 167–311% for the baseline, on average across 6 datasets, to 68–92%, depending on the privacy level selected by the user. AdaMix tackles a trade-off that arises in visual classification, whereby the most privacy-sensitive data, corresponding to isolated points in representation space, are also critical for high classification accuracy. In addition, AdaMix comes with strong theoretical privacy guarantees and convergence analysis.
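As context for the trade-off the abstract describes, here is a minimal sketch of the generic mixed public/private recipe that AdaMix refines: train normally on public data first, then fine-tune on private data with DP-SGD (per-example gradient clipping plus Gaussian noise). The function name and hyperparameters (clip_norm, noise_mult, lr) are illustrative assumptions; the sketch omits AdaMix's adaptive components and its privacy accounting.

```python
import torch
import torch.nn as nn

# Hedged sketch of DP-SGD fine-tuning on private data, run after an ordinary
# (non-private) warm-start on public data. This is the generic pattern the
# paper builds on, not AdaMix itself; all hyperparameters are illustrative.

def dp_sgd_finetune(model, private_batches, lr=0.05, clip_norm=1.0, noise_mult=1.1):
    loss_fn = nn.CrossEntropyLoss()
    params = [p for p in model.parameters() if p.requires_grad]
    for xb, yb in private_batches:
        clipped = [torch.zeros_like(p) for p in params]
        for x, y in zip(xb, yb):                      # per-example gradients
            model.zero_grad()
            loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
            norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in params))
            scale = (clip_norm / (norm + 1e-12)).clamp(max=1.0)
            for acc, p in zip(clipped, params):
                acc += p.grad * scale                 # clip, then accumulate
        with torch.no_grad():
            for acc, p in zip(clipped, params):
                noise = torch.randn_like(acc) * noise_mult * clip_norm
                p -= lr * (acc + noise) / len(xb)     # noisy averaged step
```

A real run would pair this loop with a privacy accountant, which translates noise_mult and the number of steps into an (ε, δ) guarantee for the private data.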