Adversarial Examples Against Deep Neural Network Based Steganalysis

Title: Adversarial Examples Against Deep Neural Network Based Steganalysis
Publication Type: Conference Paper
Year of Publication: 2018
Authors: Zhang, Yiwei; Zhang, Weiming; Chen, Kejiang; Liu, Jiayang; Liu, Yujia; Yu, Nenghai
Conference Name: Proceedings of the 6th ACM Workshop on Information Hiding and Multimedia Security
Date Published: June 2018
Publisher: ACM
Conference Location: New York, NY, USA
ISBN Number: 978-1-4503-5625-1
Keywords: adversarial examples, Artificial neural networks, Collaboration, comparability, cyber physical systems, Deep Neural Network, Human Behavior, Metrics, policy-based governance, pubcrawl, Resiliency, Scalability, science of security, security, steganalysis, steganography, steganography detection
Abstract

Deep neural network based steganalysis has developed rapidly in recent years, posing a challenge to the security of steganography. However, no existing steganographic method can effectively resist neural network steganalyzers. In this paper, we propose a new strategy that constructs enhanced covers against neural networks using the technique of adversarial examples. The enhanced covers and their corresponding stegos are most likely to be judged as covers by the networks. In addition, we evaluate the performance of steganography with both deep neural network based steganalysis and high-dimensional feature classifiers, and propose a new comprehensive security criterion. We also make a tradeoff between the two analysis systems to improve the comprehensive security. The effectiveness of the proposed scheme is verified by experiments on BOSSbase using the WOW steganography algorithm, popular rich-model steganalyzers, and three state-of-the-art neural networks.
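The abstract's core idea, perturbing a cover so a classifier leans toward the "cover" decision, follows the sign-gradient recipe of adversarial examples. The sketch below is a minimal, hypothetical illustration of that recipe on a toy linear scorer standing in for a steganalyzer; it is not the paper's actual algorithm (which operates on images and deep CNNs), and all names and weights here are invented for illustration.

```python
import numpy as np

# Toy stand-in for a steganalyzer: a linear scorer whose positive output
# means "stego" and negative output means "cover". (Hypothetical weights;
# real steganalyzers in the paper are deep CNNs.)
rng = np.random.default_rng(0)
w = rng.normal(size=64)       # classifier weights
cover = rng.normal(size=64)   # stand-in for a cover image's features

def score(x):
    """Positive -> judged stego, negative -> judged cover."""
    return float(w @ x)

# FGSM-style enhancement: step the cover against the sign of the gradient
# of the "stego" score, so the classifier leans harder toward "cover".
# For a linear scorer, the gradient with respect to x is simply w.
eps = 0.5
enhanced = cover - eps * np.sign(w)

# The enhanced cover scores strictly lower (more "cover-like"),
# since the step subtracts eps * sum(|w|) from the score.
assert score(enhanced) < score(cover)
```

In the paper's setting the same kind of perturbation is applied before embedding, so that both the enhanced cover and the stego derived from it fall on the "cover" side of the network's decision boundary.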

URL: https://dl.acm.org/doi/10.1145/3206004.3206012
DOI: 10.1145/3206004.3206012
Citation Key: zhang_adversarial_2018