Title | A Study on Combating Emerging Threat of Deepfake Weaponization |
Publication Type | Conference Paper |
Year of Publication | 2020 |
Authors | Katarya, R., Lal, A. |
Conference Name | 2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC) |
Keywords | audio-visual systems, audio, autoencoders, believable synthetic content, combating emerging threat, Deep Learning, DeepFake, deepfake detection, deepfake technology, deepfake weaponization, fake content, Fake image, Fake Video, Forgery, generative adversarial networks, genuine content, Human Behavior, human computer interaction, human factors, Information integrity, learning (artificial intelligence), low-tech doctored images, machine learning, Medical services, Multimedia systems, neural nets, Resiliency, Scalability, security of data, signature forgery, spatial steganalysis, temporal steganalysis, Videos, Weapons |
Abstract | A breakthrough in the emerging use of machine learning and deep learning is the concept of autoencoders and GANs (Generative Adversarial Networks), architectures that can generate believable synthetic content called deepfakes. The threat arises when these doctored images, videos, and audio clips blur the line between fake and genuine content and are used as weapons to cause damage to an unprecedented degree. This paper presents a survey of the underlying technology of deepfakes and the methods proposed for their detection. Based on a detailed study of the proposed detection models, this paper identifies SSTNet as the best model to date, which uses spatial, temporal, and steganalysis features for detection. The threat posed by document and signature forgery, which has yet to be explored by researchers, is also highlighted. The paper concludes with a discussion of research directions in this field and the development of more robust techniques to deal with the increasing threats surrounding deepfake technology. |
DOI | 10.1109/I-SMAC49090.2020.9243588 |
Citation Key | katarya_study_2020 |