Title | Detecting GAN-Generated Imagery Using Saturation Cues |
Publication Type | Conference Paper |
Year of Publication | 2019 |
Authors | McCloskey, S., Albright, M. |
Conference Name | 2019 IEEE International Conference on Image Processing (ICIP) |
Date Published | September 2019
Keywords | camera imagery, Cameras, convolutional neural nets, DeepFake, Forensics, GAN-generated imagery, GANs, generating network, generative adversarial networks, Generators, Image forensics, online disinformation campaigns, saturation cues, social media, social networking (online), Support vector machines, synthetic imagery, Training
Abstract | Image forensics is an increasingly relevant problem, as it can potentially address online disinformation campaigns and mitigate problematic aspects of social media. Of particular interest, given their recent successes, is the detection of imagery produced by Generative Adversarial Networks (GANs), e.g., 'deepfakes'. Leveraging large training sets and extensive computing resources, recent GANs can be trained to generate synthetic imagery which is (in some ways) indistinguishable from real imagery. We analyze the structure of the generating network of a popular GAN implementation [1], and show that the network's treatment of exposure is markedly different from that of a real camera. We further show that this cue can be used to distinguish GAN-generated imagery from camera imagery, including effective discrimination between GAN imagery and the real camera images used to train the GAN. |
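The saturation cue described in the abstract can be illustrated with a minimal feature extractor: counting the fractions of under-exposed and saturated pixels per channel, which could then feed a classifier such as the SVM mentioned in the keywords. This is a hedged sketch only; the function name, thresholds, and feature layout are illustrative assumptions, not the paper's published method.

```python
import numpy as np

def saturation_features(image, low=2, high=253):
    """Per-channel fractions of under- and over-exposed pixels.

    `image` is an HxWxC uint8 array. The thresholds `low` and `high`
    are illustrative choices, not values taken from the paper.
    Returns a flat feature vector of length 2*C.
    """
    feats = []
    for c in range(image.shape[2]):
        channel = image[..., c]
        n = channel.size
        feats.append((channel <= low).sum() / n)   # under-exposed fraction
        feats.append((channel >= high).sum() / n)  # saturated fraction
    return np.array(feats)
```

Such a vector could be computed for both camera and GAN-generated images and passed to an off-the-shelf classifier for the discrimination task the abstract describes.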
DOI | 10.1109/ICIP.2019.8803661 |
Citation Key | mccloskey_detecting_2019 |