Biblio

Filters: Keyword is Deepfakes
2023-08-24
Aliman, Nadisha-Marie, Kester, Leon.  2022.  VR, Deepfakes and Epistemic Security. 2022 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR). :93–98.
In recent years, technological advancements in the AI and VR fields have increasingly often been paired with considerations on ethics and safety aimed at mitigating unintentional design failures. However, cybersecurity-oriented AI and VR safety research has emphasized the need to additionally appraise instantiations of intentional malice exhibited by unethical actors at pre- and post-deployment stages. On top of that, in view of ongoing malicious deepfake developments that can represent a threat to the epistemic security of a society, security-aware AI and VR design strategies require an epistemically-sensitive stance. In this vein, this paper provides a theoretical basis for two novel AIVR safety research directions: 1) VR as immersive testbed for a VR-deepfake-aided epistemic security training and 2) AI as catalyst within a deepfake-aided so-called cyborgnetic creativity augmentation facilitating an epistemically-sensitive threat modelling. For illustration, we focus our use case on deepfake text, an underestimated deepfake modality. In the main, the two proposed transdisciplinary lines of research exemplify how AIVR safety measures to defend against unethical actors could naturally converge toward AIVR ethics whilst counteracting epistemic security threats.
ISSN: 2771-7453
2023-08-17
Ali, Atif, Jadoon, Yasir Khan, Farid, Zulqarnain, Ahmad, Munir, Abidi, Naseem, Alzoubi, Haitham M., Alzoubi, Ali A..  2022.  The Threat of Deep Fake Technology to Trusted Identity Management. 2022 International Conference on Cyber Resilience (ICCR). :1–5.
With the rapid development of artificial intelligence, deepfake technology based on deep learning is receiving more and more attention from society and industry. While enriching people's cultural and entertainment life, deepfake technology has also caused many social problems, in particular potential risks to trusted network identity management. As deepfake technology continues to advance, the security threats and trust crisis it causes will become more serious, and adequate measures to curb the risk of its abuse are urgently needed. The article first introduces the principles and characteristics of deepfake technology and then analyzes in depth the severe challenges it poses to trusted network identity management. Finally, it examines regulatory and technical responses and puts forward targeted countermeasures.
2023-06-29
Sahib, Ihsan, AlAsady, Tawfiq Abd Alkhaliq.  2022.  Deep fake Image Detection based on Modified minimized Xception Net and DenseNet. 2022 5th International Conference on Engineering Technology and its Applications (IICETA). :355–360.

This paper deals with the problem of image forgery detection and the harm forgeries cause. Fake images can lead to social problems, for example misleading public opinion about political or religious figures, defaming celebrities and private individuals, or misleading a court when presented as evidence. This work proposes a deep learning approach based on a deep CNN (Convolutional Neural Network) architecture to detect fake images. The network is based on a modified Xception net, a CNN built on depthwise separable convolution layers. After the feature maps are extracted, pooling layers are densely connected to the Xception output to increase the number of feature maps, inspired by the DenseNet architecture. In addition, the work uses the YCbCr color system for input images, which gave a better accuracy, 99.93%, than RGB, HSV, Lab, or other color systems.

ISSN: 2831-753X
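
For readers who want the flavor of this approach, here is a minimal sketch, assuming PyTorch and OpenCV, of the two ingredients the abstract names: a YCbCr input transform and the depthwise separable convolution block that Xception is built from. Layer sizes are illustrative, not the paper's modified, minimized architecture.

```python
# Sketch of a YCbCr input transform plus the Xception building block.
# Sizes are illustrative only, not the paper's exact network.
import cv2
import torch
import torch.nn as nn

def load_ycbcr(path: str) -> torch.Tensor:
    """Read an image and convert it to the YCbCr color system."""
    bgr = cv2.imread(path)                           # OpenCV loads as BGR
    ycbcr = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    return torch.from_numpy(ycbcr).permute(2, 0, 1).float() / 255.0

class SeparableConv(nn.Module):
    """Depthwise separable convolution: per-channel spatial filtering
    followed by a 1x1 pointwise convolution, as used in Xception."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```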

Bide, Pramod, Varun, Patil, Gaurav, Shah, Samveg, Patil, Sakshi.  2022.  Fakequipo: Deep Fake Detection. 2022 IEEE 3rd Global Conference for Advancement in Technology (GCAT). :1–5.

Deep learning has a variety of applications in different fields such as computer vision, automated self-driving cars, natural language processing, and many more. One such deep learning architecture changed the fundamentals of data manipulation: the inception of the Generative Adversarial Network (GAN) in the computer vision domain drastically changed how we view and manipulate data. This manipulation of data using GANs has, however, found application in various kinds of malicious activity, such as creating fake images, swapped videos, and forged documents. These generative models have become so efficient at manipulating data, especially image data, that they create real-life problems: the manipulation of images and videos by GAN architectures is done in such a way that humans cannot differentiate between real and fake images or videos. Numerous studies have been conducted in the field of deepfake detection. In this paper, we present a structured survey explaining the advantages and gaps of existing work in the domain of deepfake detection.
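
To make the adversarial setup concrete, a minimal GAN training-step sketch in PyTorch follows; the generator learns to fool a discriminator trained to separate real from generated images. Network sizes and hyperparameters are illustrative, not those of any surveyed work.

```python
# Minimal GAN sketch: generator G and discriminator D play a minimax game.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real: torch.Tensor):
    batch = real.size(0)
    fake = G(torch.randn(batch, latent_dim))
    # Discriminator: push real toward 1, generated toward 0.
    loss_d = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: make the discriminator output 1 on its fakes.
    loss_g = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```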

2021-04-08
Verdoliva, L..  2020.  Media Forensics and DeepFakes: An Overview. IEEE Journal of Selected Topics in Signal Processing. 14:910–932.
With the rapid progress in recent years, techniques that generate and manipulate multimedia content can now provide a very advanced level of realism. The boundary between real and synthetic media has become very thin. On the one hand, this opens the door to a series of exciting applications in different fields such as creative arts, advertising, film production, and video games. On the other hand, it poses enormous security threats. Software packages freely available on the web allow any individual, without special skills, to create very realistic fake images and videos. These can be used to manipulate public opinion during elections, commit fraud, discredit or blackmail people. Therefore, there is an urgent need for automated tools capable of detecting false multimedia content and avoiding the spread of dangerous false information. This review paper aims to present an analysis of the methods for visual media integrity verification, that is, the detection of manipulated images and videos. Special emphasis will be placed on the emerging phenomenon of deepfakes, fake media created through deep learning tools, and on modern data-driven forensic methods to fight them. The analysis will help highlight the limits of current forensic tools, the most relevant issues, the upcoming challenges, and suggest future directions for research.
2021-01-15
Brockschmidt, J., Shang, J., Wu, J..  2019.  On the Generality of Facial Forgery Detection. 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems Workshops (MASSW). :43–47.
A variety of architectures have been designed or repurposed for the task of facial forgery detection. While many of these designs have seen great success, they largely fail to address challenges these models may face in practice. A major challenge is posed by generality, wherein models must be prepared to perform in a variety of domains. In this paper, we investigate the ability of state-of-the-art facial forgery detection architectures to generalize. We first propose two criteria for generality: reliably detecting multiple spoofing techniques and reliably detecting unseen spoofing techniques. We then devise experiments which measure how a given architecture performs against these criteria. Our analysis focuses on two state-of-the-art facial forgery detection architectures, MesoNet and XceptionNet, both being convolutional neural networks (CNNs). Our experiments use samples from six state-of-the-art facial forgery techniques: Deepfakes, Face2Face, FaceSwap, GANnotation, ICface, and X2Face. We find MesoNet and XceptionNet show potential to generalize to multiple spoofing techniques but with a slight trade-off in accuracy, and largely fail against unseen techniques. We loosely extrapolate these results to similar CNN architectures and emphasize the need for better architectures to meet the challenges of generality.
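The unseen-technique criterion amounts to a leave-one-technique-out protocol; a sketch follows, with model training and AUC scoring passed in as stand-in callables, since the paper's own training code is not reproduced here.

```python
# Sketch of the unseen-technique generality criterion: train on all but
# one forgery technique, then test on the held-out one. `fit` and `auc`
# are hypothetical stand-ins for the paper's models and scoring.
from typing import Callable, Dict, Tuple

TECHNIQUES = ["Deepfakes", "Face2Face", "FaceSwap",
              "GANnotation", "ICface", "X2Face"]

def unseen_technique_eval(
    datasets: Dict[str, Tuple[list, list]],    # name -> (train, test) splits
    fit: Callable[[list], object],             # e.g. trains MesoNet/XceptionNet
    auc: Callable[[object, list], float],      # AUC of a model on a test split
) -> Dict[str, float]:
    scores = {}
    for held_out in TECHNIQUES:
        train = [s for t in TECHNIQUES if t != held_out
                 for s in datasets[t][0]]
        model = fit(train)
        scores[held_out] = auc(model, datasets[held_out][1])
    return scores
```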
Matern, F., Riess, C., Stamminger, M..  2019.  Exploiting Visual Artifacts to Expose Deepfakes and Face Manipulations. 2019 IEEE Winter Applications of Computer Vision Workshops (WACVW). :83–92.
High quality face editing in videos is a growing concern and spreads distrust in video content. However, upon closer examination, many face editing algorithms exhibit artifacts that resemble classical computer vision issues stemming from face tracking and editing. As a consequence, we ask how difficult it is to expose artificial faces produced by current generators. To this end, we review current facial editing methods and several characteristic artifacts from their processing pipelines. We also show that relatively simple visual artifacts can already be quite effective in exposing such manipulations, including Deepfakes and Face2Face. Since the methods are based on visual features, they are easily explained even to non-technical experts. The methods are easy to implement and can be rapidly adjusted to new manipulation types with little data available. Despite their simplicity, the methods achieve AUC values of up to 0.866.
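The general recipe can be sketched as: compute a simple hand-crafted visual feature per face, score it with a shallow classifier, and report AUC. The feature below, a color mismatch between the two eyes, is an illustrative cue of the kind the paper exploits; the random placeholder data stands in for real labeled faces.

```python
# Sketch: one simple visual-artifact feature fed to a shallow classifier.
# The feature and the random placeholder data are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def eye_color_mismatch(eye_left: np.ndarray, eye_right: np.ndarray) -> float:
    """Illustrative cue: face-swap pipelines can render the two eyes with
    mismatched colors, so measure their mean channel-wise distance."""
    return float(np.abs(eye_left.mean(axis=(0, 1)) -
                        eye_right.mean(axis=(0, 1))).sum())

# X: one scalar feature per image; y: 1 = manipulated, 0 = real (placeholders)
X = np.array([[eye_color_mismatch(np.random.rand(8, 8, 3),
                                  np.random.rand(8, 8, 3))]
              for _ in range(100)])
y = np.random.randint(0, 2, 100)
clf = LogisticRegression().fit(X, y)
print("AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```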
Đorđević, M., Milivojević, M., Gavrovska, A..  2019.  DeepFake Video Analysis using SIFT Features. 2019 27th Telecommunications Forum (TELFOR). :1–4.
Recent advances in changing faces using DeepFake algorithms, which replace the face of one person with the face of another, truly represent what artificial intelligence and deep learning are capable of. Deepfakes in still images or video clips represent forgeries and tampered visual information. They are becoming increasingly convincing and in some cases difficult to notice. In this paper we analyze deepfakes using SIFT (Scale-Invariant Feature Transform) features. The experimental results show that SIFT keypoints can be valuable in deepfake analysis.
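A minimal sketch of the SIFT keypoint extraction underlying such an analysis, using OpenCV; the file path and the keypoint-count summary are illustrative, not the paper's exact pipeline.

```python
# Sketch: extract SIFT keypoints from a video frame with OpenCV.
import cv2

sift = cv2.SIFT_create()

def sift_keypoint_count(path: str) -> int:
    """Count SIFT keypoints in a grayscale frame; an unstable or unusually
    low count across frames can hint at tampering (illustrative heuristic)."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return len(keypoints)

print(sift_keypoint_count("frame_0001.png"))   # hypothetical frame path
```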
Kumar, A., Bhavsar, A., Verma, R..  2020.  Detecting Deepfakes with Metric Learning. 2020 8th International Workshop on Biometrics and Forensics (IWBF). :1–6.

With the arrival of several face-swapping applications such as FaceApp, SnapChat, MixBooth, FaceBlender and many more, the authenticity of digital media content is hanging by a very loose thread. On social media platforms, videos are widely circulated, often at a high compression factor. In this work, we analyze several deep learning approaches to deepfake classification in high-compression scenarios and demonstrate that a proposed approach based on metric learning can be very effective for such classification. Using fewer frames per video to assess its realism, the metric learning approach with a triplet network architecture proves fruitful: it learns to enlarge the feature-space distance between the embedding clusters of real and fake videos. We validated our approach on two datasets to analyze its behavior in different environments. We achieved a state-of-the-art AUC score of 99.2% on the Celeb-DF dataset and an accuracy of 90.71% on a highly compressed Neural Texture dataset. Our approach is especially helpful on social media platforms, where data compression is inevitable.
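A minimal sketch of the triplet-loss idea in PyTorch: embeddings of same-class samples are pulled together while real and fake embeddings are pushed apart. The backbone, sizes, and placeholder batches are illustrative, not the paper's network.

```python
# Sketch of metric learning with a triplet loss for real-vs-fake videos.
import torch
import torch.nn as nn

# Stand-in embedding backbone; the paper would use a deep CNN here.
embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
triplet = nn.TripletMarginLoss(margin=1.0)
opt = torch.optim.Adam(embed.parameters(), lr=1e-4)

def train_step(anchor, positive, negative):
    """anchor/positive come from the same class (e.g. real frames), negative
    from the other class (fake); the loss enlarges the real-fake distance."""
    loss = triplet(embed(anchor), embed(positive), embed(negative))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

frames = lambda: torch.randn(8, 3, 64, 64)      # placeholder frame batches
print(train_step(frames(), frames(), frames()))
```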

Gandhi, A., Jain, S..  2020.  Adversarial Perturbations Fool Deepfake Detectors. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
This work uses adversarial perturbations to enhance deepfake images and fool common deepfake detectors. We created adversarial perturbations using the Fast Gradient Sign Method and the Carlini and Wagner L2 norm attack in both blackbox and whitebox settings. Detectors achieved over 95% accuracy on unperturbed deepfakes, but less than 27% accuracy on perturbed deepfakes. We also explore two improvements to deepfake detectors: (i) Lipschitz regularization, and (ii) Deep Image Prior (DIP). Lipschitz regularization constrains the gradient of the detector with respect to the input in order to increase robustness to input perturbations. The DIP defense removes perturbations using generative convolutional neural networks in an unsupervised manner. Regularization improved the detection of perturbed deepfakes on average, including a 10% accuracy boost in the blackbox case. The DIP defense achieved 95% accuracy on perturbed deepfakes that fooled the original detector while retaining 98% accuracy in other cases on a 100-image subsample.
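
A minimal whitebox FGSM sketch in PyTorch, one of the two attacks the abstract names; the stand-in detector, the target label convention, and epsilon are illustrative, not the paper's models.

```python
# Sketch of a targeted whitebox FGSM attack on a deepfake detector: step
# along the sign of the input gradient so a fake is scored as "real".
import torch
import torch.nn as nn

detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))  # stand-in
loss_fn = nn.CrossEntropyLoss()

def fgsm(image: torch.Tensor, target_label: int, eps: float = 0.01):
    """Minimize the loss toward the target class (targeted FGSM)."""
    image = image.clone().requires_grad_(True)
    loss = loss_fn(detector(image), torch.tensor([target_label]))
    loss.backward()
    # Descend toward the target class and keep pixel values valid.
    adv = image - eps * image.grad.sign()
    return adv.clamp(0, 1).detach()

fake = torch.rand(1, 3, 64, 64)
adv = fgsm(fake, target_label=0)    # 0 = "real" in this sketch's convention
```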