Biblio

Filters: Keyword is Adversarial training
2023-08-18
KK, Sabari, Shrivastava, Saurabh, V, Sangeetha.  2022.  Anomaly-based Intrusion Detection using GAN for Industrial Control Systems. 2022 10th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO). :1–6.
In recent years, cyber-attacks on modern industrial control systems (ICS) have become more common, and ICS are targeted by various kinds of attackers. In 2021, 39.6% of ICS computers worldwide were attacked. Identifying anomalies in a large database system is a challenging task. Deep-learning models provide better solutions for handling huge datasets with good accuracy. On the other hand, real-time datasets are highly imbalanced in their sample proportions. In this research, a GAN-based model, a supervised learning method that generates new fake samples similar to real samples, is proposed. GAN-based adversarial training addresses the class imbalance problem in real-time datasets. Adversarial samples are combined with legitimate samples, shuffled in proper proportion, and given as input to the classifiers. The generated data samples, along with the original ones, are classified using various machine learning classifiers, and their performances are evaluated. Gradient boosting was found to classify with 98% accuracy compared to the other classifiers.
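As a rough illustration of the augmentation idea in this abstract, the sketch below trains a small GAN on minority-class (attack) samples and then draws synthetic samples to rebalance the training set before fitting a conventional classifier. The feature dimension, layer sizes, and the use of PyTorch are assumptions made for illustration, not the authors' configuration.

```python
# Minimal GAN-based oversampling sketch (assumed sizes; not the paper's model).
import torch
import torch.nn as nn

FEATURE_DIM, NOISE_DIM = 51, 32   # assumed number of ICS sensor/actuator features

G = nn.Sequential(nn.Linear(NOISE_DIM, 64), nn.ReLU(),
                  nn.Linear(64, FEATURE_DIM))
D = nn.Sequential(nn.Linear(FEATURE_DIM, 64), nn.LeakyReLU(0.2),
                  nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_minority):
    """One GAN step on a batch of real minority-class (attack) samples."""
    b = real_minority.size(0)
    # Discriminator: distinguish real minority samples from generated ones.
    fake = G(torch.randn(b, NOISE_DIM)).detach()
    d_loss = bce(D(real_minority), torch.ones(b, 1)) + bce(D(fake), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: produce samples the discriminator labels as real.
    fake = G(torch.randn(b, NOISE_DIM))
    g_loss = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# After training, synthetic minority samples are appended to the real data,
# shuffled, and passed to classifiers such as gradient boosting.
synthetic = G(torch.randn(1000, NOISE_DIM)).detach()
```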
2023-06-22
Cheng, Xin, Wang, Mei-Qi, Shi, Yu-Bo, Lin, Jun, Wang, Zhong-Feng.  2022.  Magical-Decomposition: Winning Both Adversarial Robustness and Efficiency on Hardware. 2022 International Conference on Machine Learning and Cybernetics (ICMLC). :61–66.
Model compression is one of the most preferred techniques for efficiently deploying deep neural networks (DNNs) on resource-constrained Internet of Things (IoT) platforms. However, a simply compressed model is often vulnerable to adversarial attacks, leading to a conflict between robustness and efficiency, especially for IoT devices exposed to complex real-world scenarios. We address this problem, for the first time, by developing a novel framework dubbed Magical-Decomposition to simultaneously enhance both robustness and efficiency for hardware. By leveraging a hardware-friendly model compression method called singular value decomposition, the defending algorithm can be supported by most existing DNN hardware accelerators. Going a step further, a recently developed DNN interpretation tool is used to clearly highlight how adversarial accuracy can be increased in the compressed model. Ablation studies and extensive experiments under various attacks/models/datasets consistently validate the effectiveness and scalability of the proposed framework.
ISSN: 2160-1348
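The singular value decomposition the abstract above leans on can be shown in a few lines: a dense weight matrix is replaced by two thin factors, which is what lets the compressed layer map onto ordinary matrix-multiply accelerators. The rank and layer sizes below are assumed values, not the paper's settings.

```python
# Low-rank factorization of a linear layer's weight via truncated SVD.
import torch

W = torch.randn(512, 1024)   # original weight (out_features x in_features), assumed sizes
rank = 64                    # assumed retained rank

U, S, Vh = torch.linalg.svd(W, full_matrices=False)
W1 = U[:, :rank] * S[:rank]  # (512 x rank)
W2 = Vh[:rank, :]            # (rank x 1024)

# The compressed layer performs two thin matmuls instead of one dense one;
# parameter count drops from 512*1024 to rank*(512 + 1024).
x = torch.randn(8, 1024)
approx = x @ W2.T @ W1.T
exact = x @ W.T
print((approx - exact).abs().mean())   # approximation error of the truncation
```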
2022-06-07
Gayathri, R G, Sajjanhar, Atul, Xiang, Yong, Ma, Xingjun.  2021.  Anomaly Detection for Scenario-based Insider Activities using CGAN Augmented Data. 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :718–725.
Insider threats are cyber attacks from trusted entities within an organization. An insider attack is hard to detect, as it may not leave a footprint and can potentially cause huge damage to organizations. Anomaly detection is the most common approach for insider threat detection. The lack of real-world data and the skewed class distribution in available datasets make insider threat analysis an understudied research area. In this paper, we propose a Conditional Generative Adversarial Network (CGAN) to enrich under-represented minority class samples, providing meaningful and diverse data for anomaly detection from the original malicious scenarios. Comprehensive experiments performed on a benchmark dataset demonstrate the effectiveness of using CGAN-augmented data and the capability of multi-class anomaly detection for insider activity analysis. Moreover, the method is compared with existing methods across different parameters and performance metrics.
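A hedged sketch of the conditioning idea behind CGAN augmentation: the scenario label is embedded and concatenated with the noise vector, so that after training the generator can be asked for samples of a specific under-represented class. The dimensions, class count, and architecture below are illustrative assumptions.

```python
# Label-conditioned generator sketch for minority-class augmentation (assumed sizes).
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    def __init__(self, noise_dim=32, n_classes=5, feat_dim=40):
        super().__init__()
        self.embed = nn.Embedding(n_classes, 16)       # learnable label embedding
        self.net = nn.Sequential(
            nn.Linear(noise_dim + 16, 128), nn.ReLU(),
            nn.Linear(128, feat_dim))

    def forward(self, z, labels):
        # Concatenate noise with the label embedding so sampling is class-controllable.
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

G = CondGenerator()
z = torch.randn(64, 32)
labels = torch.full((64,), 3)   # request 64 samples of (hypothetical) scenario class 3
augmented = G(z, labels)        # appended to the training set once the CGAN is trained
```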
2022-04-13
Nugraha, Beny, Kulkarni, Naina, Gopikrishnan, Akash.  2021.  Detecting Adversarial DDoS Attacks in Software-Defined Networking Using Deep Learning Techniques and Adversarial Training. 2021 IEEE International Conference on Cyber Security and Resilience (CSR). :448–454.
In recent years, Deep Learning (DL) has been utilized for cyber-attack detection mechanisms, as it offers highly accurate detection and is able to overcome the limitations of standard machine learning techniques. When applied in a Software-Defined Network (SDN) environment, a DL-based detection mechanism shows satisfactory detection performance. However, in the case of adversarial attacks, the detection performance deteriorates. Therefore, in this paper, we first outline a highly accurate flooding DDoS attack detection framework based on DL for SDN environments. Second, we investigate the performance degradation of our detection framework when tested with two adversarial traffic datasets. Finally, we evaluate three adversarial training procedures for improving the detection performance of our framework against adversarial attacks. It is shown that applying one of the adversarial training procedures can avoid detection performance degradation and thus might be used in a real-time detection system based on continual learning.
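For orientation, the snippet below shows one generic FGSM-style adversarial-training step of the kind the abstract evaluates; the paper's three specific procedures, its DL architecture, and its traffic features are not reproduced, and the epsilon and loss weighting are assumptions.

```python
# Generic adversarial-training step: craft perturbed inputs, train on clean + perturbed.
import torch
import torch.nn as nn

def adv_train_step(model, x, y, optimizer, eps=0.05):
    loss_fn = nn.CrossEntropyLoss()
    # One FGSM step on the flow features to obtain adversarial inputs.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach()
    # Optimize on a mix of clean and adversarial traffic.
    optimizer.zero_grad()
    loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```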
2021-09-07
Kumar, Nripesh, Srinath, G., Prataap, Abhishek, Nirmala, S. Jaya.  2020.  Attention-based Sequential Generative Conversational Agent. 2020 5th International Conference on Computing, Communication and Security (ICCCS). :1–6.
In this work, we examine how computers can be enabled to understand human interaction by constructing a generative conversational agent. We present an experimental approach that applies natural language processing techniques based on recurrent neural networks (RNNs) to emulate textual entailment, or human reasoning. To achieve this functionality, our experiment involves developing an integrated Long Short-Term Memory (LSTM) network enhanced with an attention mechanism. The results achieved by the model are shown as loss-versus-epoch graphs, along with a brief illustration of the model's conversational capabilities.
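The core of such an attention mechanism is a weighted sum of encoder states computed at each decoder step; the dot-product scoring and the dimensions below are assumptions rather than the authors' exact design.

```python
# Dot-product attention of a decoder state over LSTM encoder outputs (assumed shapes).
import torch
import torch.nn.functional as F

def attend(decoder_state, encoder_outputs):
    # decoder_state: (batch, hidden); encoder_outputs: (batch, src_len, hidden)
    scores = torch.bmm(encoder_outputs, decoder_state.unsqueeze(2))  # (batch, src_len, 1)
    weights = F.softmax(scores.squeeze(2), dim=1)                    # attention distribution
    context = torch.bmm(weights.unsqueeze(1), encoder_outputs)       # weighted sum of states
    return context.squeeze(1), weights

context, w = attend(torch.randn(4, 256), torch.randn(4, 10, 256))
```

The context vector is then typically concatenated with the decoder state before predicting the next token.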
2021-06-01
Zhang, Han, Song, Zhihua, Feng, Boyu, Zhou, Zhongliang, Liu, Fuxian.  2020.  Technology of Image Steganography and Steganalysis Based on Adversarial Training. 2020 16th International Conference on Computational Intelligence and Security (CIS). :77–80.
Steganography has made great progress over the past few years due to the advancement of deep convolutional neural networks (DCNNs), which has caused severe problems in the network security field: ensuring the accuracy of steganalysis is becoming increasingly difficult. In this paper, we design a two-channel generative adversarial network (TGAN), inspired by the idea of adversarial training and based on our previous work. The TGAN consists of three parts: the first, a hiding network, has two input channels and one output channel; the second, an extraction network, takes as input a hidden image embedded with the secret image; the third, a detecting network, has two input channels and one output channel. Experimental results on two independent image datasets show that the proposed TGAN performs well and has better detecting capability than other algorithms, and thus has important theoretical significance and engineering value.
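A bare-bones reading of the "two input channels, one output channel" hiding network is sketched below, assuming single-channel cover and secret images; the layer widths are illustrative and not taken from the paper.

```python
# Hiding network sketch: cover and secret stacked as two channels, stego image out.
import torch
import torch.nn as nn

hide_net = nn.Sequential(
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),    # channel 0: cover, channel 1: secret
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid()) # single-channel stego image

cover = torch.rand(1, 1, 64, 64)
secret = torch.rand(1, 1, 64, 64)
stego = hide_net(torch.cat([cover, secret], dim=1))
```

The extraction and detecting networks would be trained jointly against this one in the adversarial setup the abstract describes.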
2021-01-28
Seiler, M., Trautmann, H., Kerschke, P.  2020.  Enhancing Resilience of Deep Learning Networks By Means of Transferable Adversaries. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.

Artificial neural networks in general, and deep learning networks in particular, have established themselves as popular and powerful machine learning algorithms. While the often tremendous sizes of these networks are beneficial when solving complex tasks, the tremendous number of parameters also causes such networks to be vulnerable to malicious behavior such as adversarial perturbations. These perturbations can change a model's classification decision. Moreover, while single-step adversaries can easily be transferred from network to network, the transfer of more powerful multi-step adversaries has usually been rather difficult. In this work, we introduce a method for generating strong adversaries that can easily (and frequently) be transferred between different models. This method is then used to generate a large set of adversaries, based on which the effects of selected defense methods are experimentally assessed. Finally, we introduce a novel, simple, yet effective approach to enhance the resilience of neural networks against adversaries and benchmark it against established defense methods. In contrast to the existing methods, our proposed defense approach is much more efficient, as it only requires a single additional forward pass to achieve comparable performance results.
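To make the notion of transfer concrete, the helper below crafts adversaries on one model and measures how often they also fool a second model. Plain FGSM is used here as a stand-in; the paper's stronger multi-step transferable attack and its single-forward-pass defense are not reproduced.

```python
# Transfer-attack check: adversaries from a source model evaluated on a target model.
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    nn.CrossEntropyLoss()(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def transfer_rate(source_model, target_model, x, y):
    x_adv = fgsm(source_model, x, y)                # crafted against the source
    pred = target_model(x_adv).argmax(dim=1)        # evaluated on the target
    return (pred != y).float().mean().item()        # fraction that transfers
```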

2020-04-17
Jang, Yunseok, Zhao, Tianchen, Hong, Seunghoon, Lee, Honglak.  2019.  Adversarial Defense via Learning to Generate Diverse Attacks. 2019 IEEE/CVF International Conference on Computer Vision (ICCV). :2740–2749.

With the remarkable success of deep learning, Deep Neural Networks (DNNs) have been applied as dominant tools to various machine learning domains. Despite this success, however, it has been found that DNNs are surprisingly vulnerable to malicious attacks; adding small, perceptually indistinguishable perturbations to the data can easily degrade classification performance. Adversarial training is an effective defense strategy for training a robust classifier. In this work, we propose utilizing a generator to learn how to create adversarial examples. Unlike existing approaches that create a one-shot perturbation with a deterministic generator, we propose a recursive and stochastic generator that produces much stronger and more diverse perturbations, comprehensively revealing the vulnerability of the target classifier. Our experimental results on the MNIST and CIFAR-10 datasets show that a classifier adversarially trained with our method yields more robust performance over various white-box and black-box attacks.
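Very loosely, a stochastic perturbation generator of the kind contrasted here with deterministic one-shot generators can be sketched as below: a fresh noise code is injected on every call, so repeated draws yield diverse, bounded perturbations. The architecture, noise size, and epsilon are assumptions, and the recursive refinement of the actual method is omitted.

```python
# Noise-conditioned perturbation generator: different draws give different attacks.
import torch
import torch.nn as nn

class PerturbGen(nn.Module):
    def __init__(self, channels=3, z_dim=8, eps=8 / 255):
        super().__init__()
        self.eps, self.z_dim = eps, z_dim
        self.net = nn.Sequential(
            nn.Conv2d(channels + z_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh())

    def forward(self, x):
        z = torch.randn(x.size(0), self.z_dim, x.size(2), x.size(3))  # fresh noise per call
        delta = self.eps * self.net(torch.cat([x, z], dim=1))         # bounded perturbation
        return (x + delta).clamp(0, 1)

gen = PerturbGen()
x_adv = gen(torch.rand(2, 3, 32, 32))   # two stochastic adversarial variants
```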

Xie, Cihang, Wu, Yuxin, Maaten, Laurens van der, Yuille, Alan L., He, Kaiming.  2019.  Feature Denoising for Improving Adversarial Robustness. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :501–509.

Adversarial attacks on image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state of the art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. Our method was ranked first in the Competition on Adversarial Attacks and Defenses (CAAD) 2018: it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by 10%. Code is available at https://github.com/facebookresearch/ImageNet-Adversarial-Training.
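The denoising blocks mentioned here are in the family of non-local operations; a compact generic version is sketched below, where every spatial position is replaced by a similarity-weighted average of all positions, followed by a 1x1 convolution and a residual connection. This is a textbook non-local block, not the authors' exact configuration.

```python
# Generic non-local (non-local-means-style) feature denoising block with residual add.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalDenoise(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.out_conv = nn.Conv2d(channels, channels, 1)   # 1x1 conv after denoising

    def forward(self, x):
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)                         # (b, c, hw)
        affinity = torch.bmm(flat.transpose(1, 2), flat)   # dot-product similarities (b, hw, hw)
        weights = F.softmax(affinity, dim=-1)
        denoised = torch.bmm(flat, weights.transpose(1, 2)).view(b, c, h, w)
        return x + self.out_conv(denoised)                 # residual connection

block = NonLocalDenoise(64)
out = block(torch.randn(2, 64, 16, 16))
```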

2020-03-30
Huang, Jinjing, Cheng, Shaoyin, Lou, Songhao, Jiang, Fan.  2019.  Image steganography using texture features and GANs. 2019 International Joint Conference on Neural Networks (IJCNN). :1–8.
Steganography is the practice of hidden writing, and many deep neural networks have been proposed to conceal secret information in images, but their invisibility and security are unsatisfactory. In this paper, we present an encoder-decoder framework with an adversarial discriminator to conceal messages or images in natural images. The message is first embedded into a QR code, which significantly improves fault tolerance. Considering that the mean squared error (MSE) is not conducive to perfectly learning the invisible perturbations of cover images, we introduce a texture-based loss that helps hide information in the complex texture regions of an image, improving the invisibility of the hidden information. In addition, we design a truncated layer to cope with stego-image distortions caused by data type conversion and a moment layer to train our model with variable-sized images. Finally, our experiments demonstrate that the proposed model improves the security and visual quality of stego images.
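One plausible reading of the texture-based loss, sketched below, is to weight the per-pixel reconstruction error by a local-variance map so that embedding errors in flat regions (where changes are easy to see) cost more than errors in textured regions. This weighting is an assumption about the mechanism, not the paper's exact loss.

```python
# Texture-aware reconstruction loss: penalize changes in smooth regions more heavily.
import torch
import torch.nn.functional as F

def texture_weighted_loss(stego, cover, k=7):
    mean = F.avg_pool2d(cover, k, stride=1, padding=k // 2)
    var = F.avg_pool2d(cover ** 2, k, stride=1, padding=k // 2) - mean ** 2  # local variance
    weight = 1.0 / (1.0 + var)              # flat (low-variance) regions get higher weight
    return (weight * (stego - cover) ** 2).mean()

loss = texture_weighted_loss(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
```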
2018-12-10
Kwon, Hyun, Yoon, Hyunsoo, Choi, Daeseon.  2018.  POSTER: Zero-Day Evasion Attack Analysis on Race Between Attack and Defense. Proceedings of the 2018 on Asia Conference on Computer and Communications Security. :805–807.

Deep neural networks (DNNs) exhibit excellent performance in machine learning tasks such as image recognition, pattern recognition, speech recognition, and intrusion detection. However, adversarial examples, which are intentionally corrupted by noise, can lead to misclassification. As adversarial examples are serious threats to DNNs, both adversarial attacks and methods of defending against adversarial examples have been continuously studied. Zero-day adversarial examples are created with new test data and are unknown to the classifier; hence, they represent a more significant threat to DNNs. To the best of our knowledge, there are no analytical studies of zero-day adversarial examples in the literature that focus on attack and defense methods through experiments using several scenarios. Therefore, in this study, zero-day adversarial examples are analyzed in practice, with an emphasis on attack and defense methods, through experiments using various scenarios composed of a fixed target model and an adaptive target model. The Carlini method was used as a state-of-the-art attack, while adversarial training was used as a typical defense method. We used the MNIST dataset and analyzed the success rates of zero-day adversarial examples, average distortions, and recognition of original samples across several scenarios of fixed and adaptive target models. Experimental results demonstrate that changing the parameters of the target model in real time leads to resistance to adversarial examples in both the fixed and adaptive target models.
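The two quantities the study reports can be computed with small helpers like the ones below; the L2 measure is an assumption about how distortion is averaged, and the Carlini-Wagner attack itself is not shown.

```python
# Helpers for attack success rate and average per-sample L2 distortion.
import torch

def attack_success_rate(model, x_adv, y_true):
    # Fraction of adversarial examples the model misclassifies.
    return (model(x_adv).argmax(dim=1) != y_true).float().mean().item()

def average_distortion(x, x_adv):
    # Mean L2 norm of the perturbation, averaged over the batch.
    return (x_adv - x).flatten(1).norm(dim=1).mean().item()
```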

2018-11-19
Lal, Shamit, Garg, Vineet, Verma, Om Prakash.  2017.  Automatic Image Colorization Using Adversarial Training. Proceedings of the 9th International Conference on Signal Processing Systems. :84–88.

The paper presents a fully automatic, end-to-end trainable system to colorize grayscale images. Colorization is a highly under-constrained problem. In order to produce realistic outputs, the proposed approach takes advantage of recent advances in deep learning and generative networks. To achieve plausible colorization, the paper investigates conditional Wasserstein Generative Adversarial Networks (WGAN) [3] as a solution to this problem. Additionally, a loss function consisting of two classification loss components apart from the adversarial loss learned by the WGAN is proposed. The first classification loss provides a measure of how much the predicted colored images differ from the ground truth. The second classification loss component makes use of ground-truth semantic classification labels in order to learn meaningful intermediate features. Finally, the WGAN training procedure pushes the predictions toward the manifold of natural images. The system is validated using a user study and a semantic interpretability test, and achieves results comparable to [1] on the ImageNet dataset [10].
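Read as a loss recipe, the generator objective combines a WGAN adversarial term with two additional terms; the sketch below uses an L1 term as a stand-in for the ground-truth color comparison and a cross-entropy term for the semantic labels, with the weights and critic details left as assumptions.

```python
# Hedged sketch of a composite colorization generator loss: WGAN term + two auxiliary terms.
import torch
import torch.nn as nn

def generator_loss(critic, fake_color, real_color, sem_logits, sem_labels,
                   lam_pix=100.0, lam_sem=1.0):
    adv = -critic(fake_color).mean()                            # WGAN generator term
    pix = nn.functional.l1_loss(fake_color, real_color)         # closeness to ground-truth colors
    sem = nn.functional.cross_entropy(sem_logits, sem_labels)   # semantic-label auxiliary term
    return adv + lam_pix * pix + lam_sem * sem
```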