Biblio

Filters: Keyword is VGG
Korshunov, P., Marcel, S. 2019. Vulnerability assessment and detection of Deepfake videos. 2019 International Conference on Biometrics (ICB). :1–6.
It is becoming increasingly easy to automatically replace the face of one person in a video with the face of another using a pre-trained generative adversarial network (GAN). Recent public scandals, e.g., the faces of celebrities being swapped onto pornographic videos, call for automated ways to detect these Deepfake videos. To help develop such methods, this paper presents the first publicly available set of Deepfake videos, generated from videos of the VidTIMIT database. We used open-source GAN-based software to create the Deepfakes, and we emphasize that training and blending parameters can significantly affect the quality of the resulting videos. To demonstrate this impact, we generated videos of low and high visual quality (320 videos each) using differently tuned parameter sets. We show that state-of-the-art face recognition systems based on the VGG and Facenet neural networks are vulnerable to Deepfake videos, with false acceptance rates of 85.62% and 95.00% respectively on the high-quality versions, which means methods for detecting Deepfake videos are necessary. Among several baseline approaches, we found the best-performing method, based on visual quality metrics often used in the presentation attack detection domain, to reach an 8.97% equal error rate on high-quality Deepfakes. Our experiments demonstrate that GAN-generated Deepfake videos are challenging for both face recognition systems and existing detection methods, and further development of face-swapping technology will make them even more so.
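
As a rough illustration of the two metrics the abstract reports (false acceptance rate and equal error rate), the sketch below computes both from arrays of match scores. The score distributions and the 0.5 threshold are hypothetical stand-ins, not values from the paper; real scores would come from a face recognition system such as VGG or Facenet.

import numpy as np

def far_at_threshold(impostor_scores, threshold):
    """Fraction of impostor (e.g., Deepfake) attempts accepted at a threshold."""
    return np.mean(impostor_scores >= threshold)

def equal_error_rate(genuine_scores, impostor_scores):
    """Sweep thresholds to find where false acceptance meets false rejection."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best = (1.0, 0.0)
    for t in thresholds:
        far = np.mean(impostor_scores >= t)   # impostors accepted
        frr = np.mean(genuine_scores < t)     # genuine users rejected
        if abs(far - frr) < abs(best[0] - best[1]):
            best = (far, frr)
    return (best[0] + best[1]) / 2.0

# Hypothetical similarity scores: genuine pairs vs. Deepfake impostor pairs.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)
deepfake = rng.normal(0.7, 0.1, 1000)  # heavy overlap -> high FAR, as the paper reports
print(f"FAR at 0.5: {far_at_threshold(deepfake, 0.5):.2%}")
print(f"EER: {equal_error_rate(genuine, deepfake):.2%}")

The paper's headline numbers follow this logic: a high FAR means the recognizer accepts Deepfakes as genuine, while the 8.97% EER measures how well a separate detector splits the two populations.
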
Khandelwal, S., Rana, S., Pandey, K., Kaushik, P. 2018. Analysis of Hyperparameter Tuning in Neural Style Transfer. 2018 Fifth International Conference on Parallel, Distributed and Grid Computing (PDGC). :36–41.

Most of the notable artworks of all time were drawn by hand by great artists. Now, with advances in image processing and vast computational power, highly sophisticated synthesized artworks are being produced. Since the mid-1990s, computer graphics engineers have devised algorithms to produce digital paintings, but the results were not visually appealing. Recently, neural networks have been applied to this task with unprecedented results. One such algorithm is neural style transfer, which imparts the pattern of one image to another, producing marvellous pieces of art. This paper focuses on the roles of the various parameters involved in the neural style transfer algorithm. It also presents an extensive analysis of how these parameters influence the output in terms of time, performance, and the quality of the style-transferred image. A concrete comparison is drawn on the basis of different time and performance metrics, and optimal values for the discussed parameters are suggested.
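
For context on which hyperparameters such a study tunes, here is a minimal sketch of the Gatys-style loss that neural style transfer optimizes. The feature maps are random stand-ins (in practice they come from VGG layers), and the weights alpha and beta are illustrative defaults, not the optima the paper suggests.

import torch

def gram_matrix(feat):
    """Channel-wise feature correlations; captures style, discards layout."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def nst_loss(gen_feats, content_feats, style_feats, alpha=1.0, beta=1e3):
    """Weighted sum of content and style terms; the alpha/beta ratio is the
    key hyperparameter controlling stylization strength."""
    content = torch.mean((gen_feats[-1] - content_feats[-1]) ** 2)
    style = sum(torch.mean((gram_matrix(g) - gram_matrix(s)) ** 2)
                for g, s in zip(gen_feats, style_feats))
    return alpha * content + beta * style

# Stand-in feature maps (batch, channels, height, width), e.g. from VGG layers.
gen = [torch.randn(1, 64, 32, 32, requires_grad=True) for _ in range(3)]
content = [torch.randn(1, 64, 32, 32) for _ in range(3)]
style = [torch.randn(1, 64, 32, 32) for _ in range(3)]
loss = nst_loss(gen, content, style)
loss.backward()  # gradients flow back toward the generated image

Besides the content/style weights, tunable parameters typically include the choice of VGG layers, the optimizer, the learning rate, and the number of iterations, each trading off runtime against output quality.
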

Selim, A., Elgharib, M., Doyle, L. 2016. Painting Style Transfer for Head Portraits Using Convolutional Neural Networks. ACM Trans. Graph. 35:129:1–129:18.

Head portraits are popular in traditional painting. Automating portrait painting is challenging because the human visual system is sensitive to the slightest irregularities in human faces, and applying generic painting techniques often deforms facial structures. On the other hand, existing portrait painting techniques are mainly designed for the graphite style and/or are based on image analogies, requiring both an example painting and its original unpainted version; this limits their domain of applicability. We present a new technique for transferring the painting style from one head portrait onto another. Unlike previous work, our technique requires only the example painting and is not restricted to a specific style. We impose novel spatial constraints by locally transferring the color distributions of the example painting, which better captures the painting texture and maintains the integrity of facial structures. We generate a solution through Convolutional Neural Networks, and we present an extension to video in which motion is exploited to reduce temporal inconsistencies and the shower-door effect. Our approach transfers the painting style while maintaining the identity of the input photograph, and it significantly reduces facial deformations over the state of the art.
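
The core operation the abstract names, transferring the color distributions of the example painting, can be illustrated in its simplest global form as a Reinhard-style mean/variance match per channel. This is only a sketch of the underlying idea: the paper applies such transfers locally, under CNN-derived spatial constraints, which this snippet does not attempt.

import numpy as np

def match_color_stats(target, example):
    """Shift and scale each color channel of `target` so its mean and standard
    deviation match `example` (a global color-distribution transfer). The
    paper's method does this locally, per facial region."""
    out = np.empty_like(target, dtype=np.float64)
    for ch in range(target.shape[-1]):
        t = target[..., ch].astype(np.float64)
        e = example[..., ch].astype(np.float64)
        out[..., ch] = (t - t.mean()) / (t.std() + 1e-8) * e.std() + e.mean()
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical 8-bit RGB inputs: a photograph and an example painting.
photo = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
painting = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
stylized = match_color_stats(photo, painting)
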