Biblio

Shen, J., Zhu, X., Ma, D.  2019.  TensorClog: An Imperceptible Poisoning Attack on Deep Neural Network Applications. IEEE Access. 7:41498–41506.

Internet application providers now have more incentive than ever to collect user data, which greatly increases the risk of user privacy violations due to the emergence of deep neural networks. In this paper, we propose TensorClog, a poisoning attack technique designed for privacy protection against deep neural networks. TensorClog has three properties, each serving a privacy protection purpose: 1) training on TensorClog-poisoned data results in lower inference accuracy, reducing the incentive for abusive data collection; 2) training on TensorClog-poisoned data converges to a larger loss, which prevents the neural network from learning private information; and 3) TensorClog regularizes the perturbation to maintain high structural similarity, so that the poisoning does not affect the actual content of the data. Applying our TensorClog poisoning technique to the CIFAR-10 dataset increases both the converged training loss and the test error, by 300% and 272% respectively, while preserving human perception of the data with a high SSIM index of 0.9905. Further experiments, including limited-information attack scenarios and a real-world application transferred from pre-trained ImageNet models, evaluate TensorClog's effectiveness in more complex situations.
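The abstract does not state the exact optimization TensorClog solves, so the sketch below is only a rough illustration of the general flavor of data-poisoning perturbations it describes: nudging each image so that a surrogate model's training loss rises, while keeping the change imperceptibly small. It is not the authors' TensorClog algorithm (which, per the abstract, constrains the perturbation with an SSIM-based regularizer rather than the hard per-pixel budget used here); the function name, surrogate-model assumption, and all hyperparameters are hypothetical.

import torch
import torch.nn.functional as F

def clog_like_perturbation(model, images, labels,
                           epsilon=4 / 255, steps=10, step_size=1 / 255):
    """Illustrative loss-raising perturbation under an L-infinity budget.

    `model` is a surrogate classifier assumed to be available to the
    defender; `images` are in [0, 1]. Returns poisoned copies of `images`.
    """
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        # Gradient of the training loss w.r.t. the perturbation.
        loss = F.cross_entropy(model(images + delta), labels)
        loss.backward()
        with torch.no_grad():
            # Ascend the loss, then project back into the imperceptibility budget.
            delta += step_size * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
        delta.grad.zero_()
    return (images + delta).clamp(0.0, 1.0).detach()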