Bibliography
Deep learning has a variety of applications in fields such as computer vision, autonomous driving, and natural language processing. One such adversarial deep learning architecture changed the fundamentals of data manipulation: the inception of the Generative Adversarial Network (GAN) in the computer vision domain drastically changed the way we view and manipulate data. However, this capability has found application in various types of malicious activities, such as creating fake images, face-swapped videos, and forged documents. These generative models have become so efficient at manipulating data, especially image data, that they now create real-life problems for people: images and videos manipulated by GAN architectures can be indistinguishable from real ones to human observers. Numerous studies have been conducted in the field of deep fake detection. In this paper, we present a structured survey explaining the advantages and gaps of existing work in the domain of deep fake detection.
Existing network intrusion detection methods have few labeled samples available during training, and their detection accuracy is consequently low. To solve this problem, this paper designs a network intrusion detection method based on the GAN model, exploiting the adversarial idea underlying GANs. The model augments the original training set by continuously generating samples, thereby expanding the labeled sample set. To realize multi-class classification of samples, this paper transforms the previous binary classification model of the generative adversarial network into a supervised multi-class classification model. The training loss function is redefined, and the corresponding training method and parameter settings are derived. Under the same experimental conditions, several performance indicators are used to compare the detection ability of the proposed method against the original classification model and other models. The experimental results show that the proposed method is more stable and robust, achieves a more accurate detection rate, has good generalization ability, and can effectively realize network intrusion detection.
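The core mechanism can be sketched as follows. This is a minimal illustration, not the paper's code: the discriminator is extended from a binary real/fake output to a (K+1)-way classifier (K traffic classes plus one "fake" class), so generated samples both augment the small labeled set and supply the adversarial signal. The feature dimension, layer sizes, and class count below are assumptions.

```python
# Minimal sketch (assumed dimensions; not the paper's code) of a GAN whose
# discriminator is a (K+1)-way classifier: classes 0..K-1 are real traffic
# types, class K is "fake".
import torch
import torch.nn as nn

NUM_CLASSES = 5    # assumed: normal traffic + 4 attack categories
FEATURE_DIM = 41   # assumed: NSL-KDD-style flow features
NOISE_DIM = 64

G = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, FEATURE_DIM), nn.Tanh(),   # features scaled to [-1, 1]
)
D = nn.Sequential(
    nn.Linear(FEATURE_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, NUM_CLASSES + 1),          # K real classes + 1 fake class
)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
ce = nn.CrossEntropyLoss()

def train_step(real_x, real_y):
    batch = real_x.size(0)
    fake_x = G(torch.randn(batch, NOISE_DIM))
    # Discriminator: real samples keep their true class, fakes get class K.
    opt_d.zero_grad()
    fake_y = torch.full((batch,), NUM_CLASSES, dtype=torch.long)
    d_loss = ce(D(real_x), real_y) + ce(D(fake_x.detach()), fake_y)
    d_loss.backward()
    opt_d.step()
    # Generator: minimize the probability that its samples land in class K.
    opt_g.zero_grad()
    p_fake = torch.softmax(D(fake_x), dim=1)[:, NUM_CLASSES]
    g_loss = -torch.log(1.0 - p_fake + 1e-8).mean()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Stand-in usage with random data in place of a real labeled traffic set:
x = torch.rand(32, FEATURE_DIM) * 2 - 1
y = torch.randint(0, NUM_CLASSES, (32,))
print(train_step(x, y))
```

Once trained, samples drawn from G for a target class can be added to the labeled pool, which is the augmentation step the abstract describes.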
Intrusion detection systems (IDSs) are an essential cog of the network security suite that can defend the network from malicious intrusions and anomalous traffic. Many machine learning (ML)-based IDSs have been proposed in the literature for the detection of malicious network traffic. However, recent works have shown that ML models are vulnerable to adversarial perturbations, through which an adversary can cause an IDS to malfunction by introducing a small, imperceptible perturbation into the network traffic. In this paper, we propose an adversarial ML attack using generative adversarial networks (GANs) that can successfully evade an ML-based IDS. We also show that GANs can be used to inoculate the IDS and make it more robust to adversarial perturbations.
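As a rough illustration of the attack side (not the authors' implementation), the sketch below trains a perturbation generator so that a fixed, pretrained IDS labels malicious flow features as benign, with an L2 penalty keeping the perturbation small. The IDS stand-in, feature dimension, eps, and loss weight are all assumptions.

```python
# Sketch of a GAN-style evasion attack against a fixed ML-based IDS.
# The IDS stand-in, dimensions, and constants are illustrative assumptions.
import torch
import torch.nn as nn

FEATURE_DIM = 41  # assumed flow-feature dimension

# Stand-in for a pretrained ML-based IDS: logits for [benign, malicious].
ids_model = nn.Sequential(nn.Linear(FEATURE_DIM, 64), nn.ReLU(),
                          nn.Linear(64, 2))
for p in ids_model.parameters():
    p.requires_grad_(False)   # the attack treats the IDS as a fixed oracle

perturber = nn.Sequential(
    nn.Linear(FEATURE_DIM, 64), nn.ReLU(),
    nn.Linear(64, FEATURE_DIM), nn.Tanh(),   # bounded raw perturbation
)
opt = torch.optim.Adam(perturber.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def attack_step(malicious_x, eps=0.1, weight=10.0):
    adv_x = malicious_x + eps * perturber(malicious_x)
    benign = torch.zeros(malicious_x.size(0), dtype=torch.long)
    # Fool the IDS (target class: benign) while staying near the original.
    loss = ce(ids_model(adv_x), benign) + \
           weight * (adv_x - malicious_x).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The inoculation side of the abstract then amounts to adversarial training: the generated adv_x samples, relabeled as malicious, are folded back into the IDS training set.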
Federated learning is a novel distributed learning framework in which a deep learning model is trained collaboratively among thousands of participants. Only model parameters are exchanged between the server and the participants, which prevents the server from directly accessing the private training data. However, we notice that the federated learning architecture is vulnerable to an active attack from insider participants, called a poisoning attack, in which the attacker poses as a benign participant and uploads poisoned updates to the server, thereby degrading the performance of the global model. In this work, we study and evaluate a poisoning attack on a federated learning system based on generative adversarial nets (GANs). The attacker first acts as a benign participant and stealthily trains a GAN to mimic prototypical samples from the other participants' training sets, which the attacker does not possess. The attacker then uses these generated samples to craft poisoned updates and compromises the global model by uploading scaled poisoned updates to the server. In our evaluation, we show that the attacker can successfully generate samples of other benign participants using the GAN, and the compromised global model achieves more than 80% accuracy on both the poisoning task and the main task.
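A hypothetical sketch of the scaled-update step alone is shown below: the attacker trains a local copy of the global model on mislabeled GAN-generated samples, then scales the resulting parameter delta so it survives server-side averaging. The names `global_model`, `gan_samples`, `wrong_labels`, and the scale factor are assumptions, not the paper's code.

```python
# Attacker-side construction of a scaled poisoned update (illustrative only).
import copy
import torch
import torch.nn as nn

def poisoned_update(global_model, gan_samples, wrong_labels,
                    scale=10.0, lr=0.01, epochs=5):
    local = copy.deepcopy(global_model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        ce(local(gan_samples), wrong_labels).backward()   # poisoned objective
        opt.step()
    # Scale the parameter delta; the attacker uploads this as its "update".
    g_sd = global_model.state_dict()
    return {name: scale * (w - g_sd[name])
            for name, w in local.state_dict().items()}
```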
Conditional Generative Adversarial Nets (cGAN) [1] were recently proposed as a conditional learning method that feeds extra information into the network. In this paper we propose an improved conditional GAN that uses a divided z-vector (DzGAN). DzGAN reduces the amount of computation because it implements conditional learning using a one-hot vector rather than images, by dividing the range of the z-vector (e.g., from [-1, 1] into [-1, 0] and [0, 1]). In DzGAN, the discriminator is fed images labeled with a one-hot vector, and the generator is fed a divided z-vector (e.g., for the 10 classes of the MNIST dataset, the divided z-vectors are z1 through z10) with the corresponding label fed into the discriminator; conditional learning is thereby realized. In this paper we use conditional Deep Convolutional Generative Adversarial Networks (cDCGAN) [7] instead of cGAN because cDCGAN generates clearer images than cGAN. Heuristic experiments on conditional learning that compare the amount of computation demonstrate that DzGAN is superior to cDCGAN.
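A rough sketch of the divided z-vector idea, as we read the abstract: the latent range [-1, 1] is split into per-class sub-intervals, so the class label is encoded by where z is sampled rather than by an extra label input to the generator. The dimensions and the exact division scheme below are assumptions.

```python
# Sample the latent vector from a class-specific sub-interval of [-1, 1]
# (one interpretation of the DzGAN divided z-vector; dimensions assumed).
import torch

NUM_CLASSES = 10   # e.g. the 10 MNIST digit classes
NOISE_DIM = 100

def divided_z(labels, num_classes=NUM_CLASSES, noise_dim=NOISE_DIM):
    """Sample z uniformly from the sub-interval assigned to each label."""
    width = 2.0 / num_classes                   # each class owns 2/K of [-1, 1]
    low = -1.0 + labels.float() * width         # left edge of each interval
    u = torch.rand(labels.size(0), noise_dim)   # uniform in [0, 1)
    return low.unsqueeze(1) + u * width         # z lies in [low, low + width)

labels = torch.randint(0, NUM_CLASSES, (4,))
z = divided_z(labels)   # fed to a DCGAN-style generator in place of plain noise
```

Because the label is carried by the sampling interval itself, the generator input stays noise-sized, which is where the claimed reduction in computation comes from.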
For sketch-based image retrieval (SBIR), we propose a generative adversarial network trained on a large number of sketches and their corresponding real images. To imitate the human search process, we attempt to match candidate images with the imaginary image in the user's mind instead of the sketch query; i.e., not only the shape information of sketches but also their possible content information is considered in SBIR. Specifically, a conditional generative adversarial network (cGAN) is employed to enrich the content information of sketches and recover the imaginary images, and two VGG-based encoders, which work on real and imaginary images respectively, are used to constrain their perceptual consistency from the view of feature representations. During SBIR, we first generate an imaginary image from a given sketch via the cGAN, and then take the output of the learned encoder for imaginary images as the feature of the query sketch. Finally, we build an interactive SBIR system that shows encouraging performance.
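A hypothetical sketch of the retrieval pipeline described above: a trained cGAN generator maps the sketch to an imaginary image, the imaginary-image encoder embeds it, and retrieval is cosine nearest-neighbour search against precomputed gallery embeddings. All module and variable names are assumptions; the trained models themselves come from the paper's training stage.

```python
# Query-time SBIR pipeline (illustrative; module names are assumptions).
import torch
import torch.nn.functional as F

def retrieve(sketch, generator, encoder, gallery_feats, gallery_ids, k=5):
    """sketch: (1, C, H, W); gallery_feats: (N, D), one row per real image."""
    with torch.no_grad():
        imaginary = generator(sketch)                     # sketch -> imaginary image
        query = encoder(imaginary)                        # (1, D) query feature
        sims = F.cosine_similarity(query, gallery_feats)  # (N,) similarities
        top = sims.topk(k).indices
    return [gallery_ids[i] for i in top]
```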