Evading Deepfake-Image Detectors with White- and Black-Box Attacks

Title: Evading Deepfake-Image Detectors with White- and Black-Box Attacks
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Carlini, N., Farid, H.
Conference Name: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Date Published: June 2020
Keywords: attack case studies, AUC, black-box attack, composability, deepfake-image detectors, disinformation campaigns, Forensics, fraudulent social media profiles, Generators, image area, image classification, Image forensics, image generators, image representation, image sensors, image-forensic classifiers, learning (artificial intelligence), Metrics, neural nets, Neural Network, Optimization, Perturbation methods, popular forensic approach, pubcrawl, resilience, Resiliency, Robustness, security, security of data, significant vulnerabilities, social networking (online), state-of-the-art classifier, synthesizer, synthetic content, synthetically-generated content, target classifier, Training, Twitter, white box, White Box Security
Abstract

It is now possible to synthesize highly realistic images of people who do not exist. Such content has, for example, been implicated in the creation of fraudulent social-media profiles responsible for disinformation campaigns. Significant efforts are, therefore, being deployed to detect synthetically-generated content. One popular forensic approach trains a neural network to distinguish real from synthetic content. We show that such forensic classifiers are vulnerable to a range of attacks that reduce the classifier to near-0% accuracy. We develop five attack case studies on a state-of-the-art classifier that achieves an area under the ROC curve (AUC) of 0.95 on almost all existing image generators, when trained on only one generator. With full access to the classifier, we can flip the lowest bit of each pixel in an image to reduce the classifier's AUC to 0.0005; perturb 1% of the image area to reduce the classifier's AUC to 0.08; or add a single noise pattern in the synthesizer's latent space to reduce the classifier's AUC to 0.17. We also develop a black-box attack that, with no access to the target classifier, reduces the AUC to 0.22. These attacks reveal significant vulnerabilities of certain image-forensic classifiers.
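
To make the white-box setting concrete, below is a minimal, hypothetical sketch (not the authors' code) of an attack in the spirit of the lowest-bit attack described in the abstract: a single signed-gradient step of one 8-bit quantization level (1/255) taken against the classifier's "synthetic" score. The TinyForensicClassifier stand-in, function names, and image sizes are assumptions for illustration only; the paper attacks a real forensic classifier trained to separate real images from generator outputs.

    # Minimal sketch of a white-box "lowest-bit" style attack: nudge each pixel
    # by one 8-bit quantization level (1/255) in the direction that lowers the
    # classifier's "synthetic" score, analogous to an FGSM step with eps = 1/255.
    # The classifier here is a hypothetical stand-in, not the paper's model.
    import torch
    import torch.nn as nn

    class TinyForensicClassifier(nn.Module):
        """Hypothetical stand-in for a real-vs-synthetic image classifier."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(16, 1),  # logit > 0 means "synthetic"
            )

        def forward(self, x):
            return self.net(x)

    def lowest_bit_attack(model, image):
        """Perturb each pixel by at most 1/255 to push the logit toward 'real'."""
        image = image.clone().requires_grad_(True)
        logit = model(image)
        # Gradient of the "synthetic" score with respect to the input pixels.
        logit.sum().backward()
        # Step one quantization level against the gradient, then clamp to [0, 1].
        adv = (image - (1.0 / 255.0) * image.grad.sign()).clamp(0.0, 1.0)
        return adv.detach()

    if __name__ == "__main__":
        model = TinyForensicClassifier().eval()
        fake_image = torch.rand(1, 3, 224, 224)  # placeholder "synthetic" image
        adv_image = lowest_bit_attack(model, fake_image)
        print("max per-pixel change:", (adv_image - fake_image).abs().max().item())

In this sketch the perturbation budget is a single quantization level per pixel, which is what makes the attack visually imperceptible while still moving the classifier's score; the paper's other case studies (1% patch perturbation, latent-space noise, and the black-box attack) follow the same idea under different access and budget constraints.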

DOI: 10.1109/CVPRW50498.2020.00337
Citation Key: carlini_evading_2020