Biblio

Filters: Keyword is Regularization
2021-01-15
Zhu, K., Wu, B., Wang, B..  2020.  Deepfake Detection with Clustering-based Embedding Regularization. 2020 IEEE Fifth International Conference on Data Science in Cyberspace (DSC). :257–264.

In recent months, AI-synthesized face-swapping videos, referred to as deepfakes, have become an emerging problem. Fake videos are becoming increasingly difficult to distinguish from real ones, which poses a series of challenges to social security. Some scholars are devoted to improving the detection accuracy of deepfake videos, and to support this research, several datasets for deepfake detection have been created. Companies such as Google and Facebook have also spent huge sums of money to produce datasets for deepfake video detection and have held deepfake detection competitions. The continuous advancement of video-tampering technology and the improvement of video quality have further complicated deepfake detection: some methods achieve good results on existing datasets, while results on some high-quality datasets fall short of expectations. In this paper, we propose a new method with clustering-based embedding regularization for deepfake detection. We use open-source algorithms to generate videos that simulate the distinctive artifacts found in deepfake videos. To improve the local smoothness of the representation space, we integrate a clustering-based embedding regularization term into the classification objective, so that the resulting model learns to resist adversarial examples. We evaluate our method on three recent deepfake datasets, and experimental results demonstrate its effectiveness.
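The abstract does not spell out the exact form of the regularization term, but a center-loss-style penalty that pulls each embedding toward a learned class centroid is one plausible reading of "clustering-based embedding regularization." The PyTorch sketch below illustrates that reading; the class `ClusterEmbeddingRegularizer`, the weight `lam`, and the centroid parameterization are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClusterEmbeddingRegularizer(nn.Module):
    """Center-loss-style penalty: pulls each embedding toward the learned
    centroid of its class, encouraging a locally smooth representation
    space. (A plausible stand-in; the paper's exact term may differ.)"""
    def __init__(self, num_classes: int, embed_dim: int):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(num_classes, embed_dim))

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Mean squared distance from each embedding to its class centroid.
        return F.mse_loss(embeddings, self.centroids[labels])

def total_loss(logits, embeddings, labels, regularizer, lam=0.1):
    # Standard cross-entropy classification objective ...
    ce = F.cross_entropy(logits, labels)
    # ... plus the clustering-based embedding penalty, weighted by lam.
    return ce + lam * regularizer(embeddings, labels)
```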

2020-12-11
Mikołajczyk, A., Grochowski, M..  2019.  Style transfer-based image synthesis as an efficient regularization technique in deep learning. 2019 24th International Conference on Methods and Models in Automation and Robotics (MMAR). :42–47.

Deep learning is currently the fastest-growing area in the field of machine learning, and convolutional neural networks are the main tool for image analysis and classification. Despite great achievements and promising prospects, deep neural networks and their learning algorithms still face relevant challenges. In this paper, we focus on one of the most frequently mentioned problems in machine learning: relatively poor generalization. Partial remedies include regularization techniques such as dropout, batch normalization, weight decay, transfer learning, early stopping, and data augmentation; here we focus on data augmentation. We propose a method based on neural style transfer, which generates new unlabeled images of high perceptual quality that combine the content of a base image with the appearance of another. In the proposed approach, the newly created images are assigned pseudo-labels and used as the training dataset, while the real, labeled images are divided into validation and test sets. We validated the proposed method on a challenging skin-lesion classification case study, examining four representative neural architectures. The results show the strong potential of the proposed approach.
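As a rough illustration of the pseudo-labeling pipeline the abstract describes, the sketch below builds a synthetic training set from content/style image pairs. The `stylize` callable is a placeholder for any off-the-shelf style-transfer model (e.g., Gatys-style optimization or AdaIN); the function name and data handling are assumptions, not the authors' code.

```python
import random
import torch
from torch.utils.data import TensorDataset

def build_pseudo_labeled_set(content_imgs, content_labels, style_imgs, stylize):
    """Create a synthetic training set via neural style transfer.

    `stylize(content, style) -> tensor` is assumed to be any pretrained
    style-transfer routine. Each synthetic image inherits its content
    image's label as a pseudo-label; per the paper's protocol, the real
    labeled images would be reserved for validation and testing.
    """
    synthetic, pseudo = [], []
    for img, label in zip(content_imgs, content_labels):
        style = random.choice(style_imgs)      # random appearance donor
        synthetic.append(stylize(img, style))  # keep content, swap style
        pseudo.append(label)                   # pseudo-label from content
    return TensorDataset(torch.stack(synthetic), torch.tensor(pseudo))
```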

2018-05-01
Tran, D. T., Waris, M. A., Gabbouj, M., Iosifidis, A..  2017.  Sample-Based Regularization for Support Vector Machine Classification. 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA). :1–6.

In this paper, we propose a new regularization scheme for the well-known Support Vector Machine (SVM) classifier that operates at the level of individual training samples. The proposed approach is motivated by the fact that maximum-margin classification defines the decision function as a linear combination of selected training data, so variations in training sample selection directly affect generalization performance. We show that the proposed regularization scheme is well motivated and intuitive. Experimental results show that it outperforms the standard SVM on human action recognition tasks as well as on classical recognition problems.
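The abstract does not give the formulation of the sample-based regularizer, so the sketch below only illustrates the general idea of controlling how much individual training samples influence an SVM's decision function, using scikit-learn's standard `sample_weight` mechanism with a hypothetical distance-based weighting heuristic; it is not the authors' scheme.

```python
import numpy as np
from sklearn.svm import SVC

# Toy data: two Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Hypothetical heuristic: downweight points far from their class mean,
# so outlying samples influence the learned margin less.
means = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
dist = np.linalg.norm(X - means[y], axis=1)
weights = np.exp(-dist)  # heavier weight for "typical" samples

clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y, sample_weight=weights)  # sample-level control knob
print(clf.score(X, y))
```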

2017-03-08
Liu, Weijian, Chen, Zeqi, Chen, Yunhua, Yao, Ruohe.  2015.  An ℓ1/2-BTV regularization algorithm for super-resolution. 2015 4th International Conference on Computer Science and Network Technology (ICCSNT). 01:1274–1281.

In this paper, we propose a novel regularization term for super-resolution that combines a bilateral total variation (BTV) regularizer with a sparsity prior on the image. The term is composed of a weighted least-squares minimization and the bilateral filter proposed by Elad, with an added ℓ1/2 regularizer, and is referred to as ℓ1/2-BTV. By introducing the sparsity of the ℓ1/2 regularizer into the traditional BTV regularizer, the proposed algorithm restores image details more precisely and removes image noise more effectively. Experiments were conducted on both simulated and real image sequences; the results show that the proposed algorithm generates high-resolution images of better quality, as measured by both de-noising and edge-preservation metrics, than other methods.
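Assuming the standard BTV formulation of Farsiu and Elad, with shift operators S_x^l and S_y^m and decay weight alpha, the change the abstract describes amounts to replacing the ℓ1 norm on shift differences with an ℓ1/2 term. The NumPy sketch below computes that penalty; the window size P, the weight alpha, and the periodic-shift boundary handling are illustrative assumptions, not the paper's exact weighting.

```python
import numpy as np

def l_half_btv(X: np.ndarray, P: int = 2, alpha: float = 0.7) -> float:
    """l1/2-BTV penalty: the classical bilateral total variation regularizer
    with the l1 norm on shift differences replaced by an l1/2 sparsity
    term, i.e. sum over shifts of alpha^(|l|+|m|) * sum(|X - S_x^l S_y^m X|^(1/2))."""
    penalty = 0.0
    for l in range(-P, P + 1):
        for m in range(-P, P + 1):
            if l == 0 and m == 0:
                continue  # zero shift contributes nothing
            shifted = np.roll(np.roll(X, l, axis=0), m, axis=1)  # S_x^l S_y^m X
            diff = np.abs(X - shifted)
            penalty += alpha ** (abs(l) + abs(m)) * np.sum(np.sqrt(diff))
    return penalty

# The full super-resolution objective would add a data-fidelity term, e.g.
# minimize ||D H X - Y||^2 + lam * l_half_btv(X) over the high-res image X.
```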