TensorClog: An Imperceptible Poisoning Attack on Deep Neural Network Applications

Title: TensorClog: An Imperceptible Poisoning Attack on Deep Neural Network Applications
Publication Type: Miscellaneous
Year of Publication: 2019
Authors: Shen, J., Zhu, X., Ma, D.
Keywords: abusive data collection, adversarial attack, AI Poisoning, CIFAR-10 dataset results, converged training loss, data converges, data privacy, data results, Deep Learning, deep neural network applications, deep neural networks, different limited information attack scenarios, feature extraction, Human Behavior, imperceptible poisoning attack, Internet, Internet application providers, learning (artificial intelligence), lower inference accuracy, neural nets, Neural networks, Perturbation methods, poisoning attack, privacy, privacy protection purpose, pubcrawl, real-world application, resilience, Resiliency, Scalability, security of data, TensorClog poisoning technique, test error, Training, user data, user privacy violations
Abstract

Internet application providers now have more incentive than ever to collect user data, which greatly increases the risk of user privacy violations due to the emergence of deep neural networks. In this paper, we propose TensorClog, a poisoning attack technique designed to protect privacy against deep neural networks. TensorClog has three properties, each serving a privacy-protection purpose: 1) training on TensorClog-poisoned data results in lower inference accuracy, reducing the incentive for abusive data collection; 2) training on TensorClog-poisoned data converges to a larger loss, which prevents the neural network from learning private information; and 3) TensorClog regularizes the perturbation to retain high structural similarity, so that the poisoning does not affect the actual content of the data. Applying our TensorClog poisoning technique to the CIFAR-10 dataset increases the converged training loss by 300% and the test error by 272%. The poisoned data remains perceptually faithful to the original, with a high SSIM index of 0.9905. Further experiments, including different limited-information attack scenarios and a real-world application transferred from pre-trained ImageNet models, evaluate TensorClog's effectiveness in more complex situations.
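The abstract describes TensorClog as crafting an imperceptible perturbation that makes training converge to a larger loss. Below is a minimal sketch of one way such a "gradient-clogging" update could look in PyTorch, assuming a white-box surrogate model; the function name tensorclog_step, the step size, and the L-infinity budget eps are all illustrative assumptions, not the authors' implementation (the paper regularizes SSIM rather than clamping the perturbation).

```python
# Illustrative sketch only, not the paper's implementation.
# Assumptions: a white-box surrogate `model`, and an L-inf budget `eps`
# standing in for the paper's SSIM regularizer.
import torch
import torch.nn.functional as F

def tensorclog_step(model, images, labels, delta, step_size=1e-2, eps=8 / 255):
    """One update of the perturbation `delta`: differentiate through the
    training gradient and push its norm toward zero, so that training on
    the poisoned batch makes little progress."""
    poisoned = (images + delta).clamp(0.0, 1.0)
    loss = F.cross_entropy(model(poisoned), labels)
    # Training gradient w.r.t. the model parameters, kept in the autograd
    # graph (create_graph=True) so its norm can be differentiated w.r.t. delta.
    grads = torch.autograd.grad(loss, list(model.parameters()), create_graph=True)
    grad_norm = torch.stack([g.pow(2).sum() for g in grads]).sum()
    (delta_grad,) = torch.autograd.grad(grad_norm, delta)
    with torch.no_grad():
        delta -= step_size * delta_grad.sign()  # descend on the gradient norm
        delta.clamp_(-eps, eps)                 # crude imperceptibility bound
    return delta

# Usage: iterate the step starting from a zero perturbation.
# delta = torch.zeros_like(images, requires_grad=True)
# for _ in range(100):
#     delta = tensorclog_step(model, images, labels, delta)
```

Per the abstract, the actual method constrains the perturbation via structural similarity (SSIM) to the clean image rather than a hard clamp; that regularization is what keeps the reported SSIM at 0.9905.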

URL: https://ieeexplore.ieee.org/document/8668758
DOI: 10.1109/ACCESS.2019.2905915
Citation Key: shen_tensorclog_2019