When Deep Learning Meets Steganography: Protecting Inference Privacy in the Dark

Title: When Deep Learning Meets Steganography: Protecting Inference Privacy in the Dark
Publication Type: Conference Paper
Year of Publication: 2022
Authors: Liu, Qin; Yang, Jiamin; Jiang, Hongbo; Wu, Jie; Peng, Tao; Wang, Tian; Wang, Guojun
Conference Name: IEEE INFOCOM 2022 - IEEE Conference on Computer Communications
Date Published: May
Keywords: adversarial attacks, cloud computing, composability, data privacy, Deep Learning, edge computing, Image edge detection, inference privacy, Metrics, Perturbation methods, privacy, pubcrawl, steganography, steganography detection, Weapons
Abstract: While cloud-based deep learning enables high-accuracy inference, it poses potential privacy risks when sensitive data are exposed to untrusted servers. In this paper, we explore the feasibility of using steganography to preserve inference privacy. Specifically, we devise GHOST and GHOST+, two private inference solutions that employ steganography to make sensitive images invisible during the inference phase. Motivated by the fact that deep neural networks (DNNs) are inherently vulnerable to adversarial attacks, our main idea is to turn this vulnerability into a weapon for data privacy, causing the DNN to misclassify a stego image into the class of the sensitive image hidden within it. The main difference between the two solutions is that GHOST retrains the DNN into a poisoned network that learns the hidden features of sensitive images, whereas GHOST+ leverages a generative adversarial network (GAN) to produce adversarial perturbations without altering the DNN. For enhanced privacy and a better computation-communication trade-off, both solutions adopt an edge-cloud collaborative framework. Compared with previous solutions, this is the first work to successfully integrate steganography with the nature of DNNs to achieve private inference while ensuring high accuracy. Extensive experiments validate that steganography is highly effective for accuracy-aware privacy protection in deep learning.
Notes: ISSN 2641-9874
DOI: 10.1109/INFOCOM48880.2022.9796975
Citation Key: liu_when_2022
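
To make the embedding idea in the abstract concrete, here is a minimal sketch of least-significant-bit (LSB) image steganography, a generic way to hide one image inside another. The paper does not specify which embedding scheme GHOST uses, so the `embed_lsb`/`extract_lsb` helpers and the two-bit payload below are illustrative assumptions, not the authors' method.

```python
# Minimal LSB image-steganography sketch (illustrative assumption only;
# the paper does not disclose GHOST's actual embedding scheme).
import numpy as np

def embed_lsb(cover: np.ndarray, secret: np.ndarray, bits: int = 2) -> np.ndarray:
    """Hide the top `bits` bits of `secret` in the low `bits` bits of `cover`.

    Both arrays are uint8 images of identical shape. The stego image stays
    visually close to the cover while carrying a coarse copy of the secret.
    """
    mask = (0xFF << bits) & 0xFF      # keep the cover's high bits (0b11111100 for bits=2)
    high = secret >> (8 - bits)       # take the secret's most significant bits
    return (cover & mask) | high

def extract_lsb(stego: np.ndarray, bits: int = 2) -> np.ndarray:
    """Recover a coarse approximation of the hidden image from the stego image."""
    low = stego & ((1 << bits) - 1)   # pull out the hidden bits
    return (low << (8 - bits)).astype(np.uint8)

# Usage: a random cover and secret of the same shape stand in for real images.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
secret = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
stego = embed_lsb(cover, secret)
approx = extract_lsb(stego)
```

In the paper's setting, the stego image (not the sensitive image) is what leaves the client: GHOST retrains the DNN so that the hidden features drive classification, while GHOST+ instead uses GAN-generated adversarial perturbations to steer an unmodified DNN toward the hidden image's class.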