Adversarial Examples Against Deep Neural Network Based Steganalysis
Title | Adversarial Examples Against Deep Neural Network Based Steganalysis |
Publication Type | Conference Paper |
Year of Publication | 2018 |
Authors | Zhang, Yiwei, Zhang, Weiming, Chen, Kejiang, Liu, Jiayang, Liu, Yujia, Yu, Nenghai |
Conference Name | Proceedings of the 6th ACM Workshop on Information Hiding and Multimedia Security |
Date Published | June 2018 |
Publisher | ACM |
Conference Location | New York, NY, USA |
ISBN Number | 978-1-4503-5625-1 |
Keywords | adversarial examples, Artificial neural networks, Collaboration, comparability, cyber physical systems, Deep Neural Network, Human Behavior, Metrics, policy-based governance, pubcrawl, Resiliency, Scalability, science of security, security, steganalysis, steganography, steganography detection |
Abstract | Deep neural network based steganalysis has developed rapidly in recent years, posing a challenge to the security of steganography; at present, however, no steganographic method can effectively resist neural-network-based steganalysis. In this paper, we propose a new strategy that constructs enhanced covers against neural networks using the technique of adversarial examples. The enhanced covers and their corresponding stegos are most likely to be judged as covers by the networks. In addition, we evaluate steganographic performance with both deep-neural-network-based steganalysis and high-dimensional feature classifiers, and propose a new comprehensive security criterion. We also make a tradeoff between the two analysis systems to improve comprehensive security. The effectiveness of the proposed scheme is verified by experiments on BOSSbase using the WOW steganography algorithm, popular rich-model steganalyzers, and three state-of-the-art neural networks. |
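The "enhanced cover" strategy in the abstract is in the spirit of gradient-sign adversarial perturbation: nudge the cover so a steganalyzer's "stego" score drops before embedding. The sketch below is not the authors' method; it is a minimal illustrative toy, assuming a hypothetical linear steganalyzer (`steganalyzer_score`) standing in for a trained network, and an FGSM-style single step.

```python
import random

def steganalyzer_score(x, w):
    # Toy linear steganalyzer: a higher score suggests "stego".
    # (Stand-in for a trained deep network; an assumption of this sketch.)
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def enhance_cover(cover, w, eps=0.5):
    # FGSM-like step: for a linear model, the gradient of the score with
    # respect to the input is w, so stepping opposite sign(w) lowers the
    # "stego" score, making the image more cover-like to this classifier.
    return [xi - eps * sign(wi) for xi, wi in zip(cover, w)]

random.seed(0)
w = [random.gauss(0, 1) for _ in range(16)]      # toy classifier weights
cover = [random.gauss(0, 1) for _ in range(16)]  # toy cover "image"

enhanced = enhance_cover(cover, w)
print(steganalyzer_score(enhanced, w) < steganalyzer_score(cover, w))  # True
```

In the paper's setting the perturbed (enhanced) cover would then be embedded with a scheme such as WOW, with the goal that both the enhanced cover and its stego are classified as "cover" by the target networks.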
URL | https://dl.acm.org/doi/10.1145/3206004.3206012 |
DOI | 10.1145/3206004.3206012 |
Citation Key | zhang_adversarial_2018 |