Detecting GAN-Generated Imagery Using Saturation Cues

Title: Detecting GAN-Generated Imagery Using Saturation Cues
Publication Type: Conference Paper
Year of Publication: 2019
Authors: McCloskey, S., Albright, M.
Conference Name: 2019 IEEE International Conference on Image Processing (ICIP)
Date Published: September 2019
Keywords: camera imagery, Cameras, convolutional neural nets, DeepFake, Forensics, Gallium nitride, GAN-generated imagery, GANs, generating network, generative adversarial networks, Generators, Human Behavior, human factors, Image forensics, Metrics, online disinformation campaigns, pubcrawl, resilience, Resiliency, saturation cues, Scalability, social media, social networking (online), Support vector machines, synthetic imagery, Training
Abstract: Image forensics is an increasingly relevant problem, as it can potentially address online disinformation campaigns and mitigate problematic aspects of social media. Of particular interest, given its recent successes, is the detection of imagery produced by Generative Adversarial Networks (GANs), e.g. "deepfakes". Leveraging large training sets and extensive computing resources, recent GANs can be trained to generate synthetic imagery which is (in some ways) indistinguishable from real imagery. We analyze the structure of the generating network of a popular GAN implementation [1], and show that the network's treatment of exposure is markedly different from that of a real camera. We further show that this cue can be used to distinguish GAN-generated imagery from camera imagery, including effective discrimination between GAN imagery and the real camera images used to train the GAN.
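The abstract does not specify the exact features the authors extract, but the underlying intuition is that real cameras clip over- and under-exposed pixels to the extremes of the intensity range, whereas a GAN generator's final squashing nonlinearity rarely emits exactly saturated values. A minimal illustrative sketch of such a saturation-statistics feature (the function name, thresholds, and synthetic test images below are hypothetical, not taken from the paper):

```python
import numpy as np

def saturation_features(image, low=0, high=255):
    """Per-channel fraction of pixels clipped to the intensity extremes.

    Illustrative only: a camera image with blown highlights or crushed
    shadows has many pixels at exactly `low` or `high`, while typical
    GAN output under-populates those bins.
    """
    img = np.asarray(image)
    n_pixels = img.shape[0] * img.shape[1]
    under = (img == low).reshape(n_pixels, -1).mean(axis=0)   # per channel
    over = (img == high).reshape(n_pixels, -1).mean(axis=0)   # per channel
    return np.concatenate([under, over])

rng = np.random.default_rng(0)
# Synthetic "camera" image: bright scene whose highlights clip at 255.
camera = np.clip(rng.normal(180, 60, (64, 64, 3)), 0, 255).astype(np.uint8)
# Synthetic "GAN" image: values avoid the exact extremes of the range.
gan = rng.integers(5, 251, (64, 64, 3), dtype=np.uint8)

print(saturation_features(camera).sum() > saturation_features(gan).sum())
```

In a full pipeline such features would feed a classifier (the keywords mention support vector machines); the sketch stops at feature extraction.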
DOI: 10.1109/ICIP.2019.8803661
Citation Key: mccloskey_detecting_2019