Biblio

Filters: Keyword is adversarial attacks
2023-02-03
Liu, Qin, Yang, Jiamin, Jiang, Hongbo, Wu, Jie, Peng, Tao, Wang, Tian, Wang, Guojun.  2022.  When Deep Learning Meets Steganography: Protecting Inference Privacy in the Dark. IEEE INFOCOM 2022 - IEEE Conference on Computer Communications. :590–599.
While cloud-based deep learning enables high-accuracy inference, it poses privacy risks when sensitive data are exposed to untrusted servers. In this paper, we explore the feasibility of steganography for preserving inference privacy. Specifically, we devise GHOST and GHOST+, two private inference solutions that employ steganography to make sensitive images invisible during the inference phase. Motivated by the fact that deep neural networks (DNNs) are inherently vulnerable to adversarial attacks, our main idea is to turn this vulnerability into a weapon for data privacy, enabling the DNN to misclassify a stego image into the class of the sensitive image hidden in it. The main difference is that GHOST retrains the DNN into a poisoned network to learn the hidden features of sensitive images, whereas GHOST+ leverages a generative adversarial network (GAN) to produce adversarial perturbations without altering the DNN. For enhanced privacy and a better computation-communication trade-off, both solutions adopt an edge-cloud collaborative framework. Compared with previous solutions, this is the first work that successfully integrates steganography with the nature of DNNs to achieve private inference while ensuring high accuracy. Extensive experiments validate that steganography has excellent ability for accuracy-aware privacy protection in deep learning.
ISSN: 2641-9874
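The mechanism GHOST and GHOST+ exploit — forcing a DNN to misclassify a stego image into the class of the image hidden inside it — is at bottom a targeted adversarial attack. A minimal sketch of that underlying idea in PyTorch, with a hypothetical `model` and target labels; this illustrates the principle only, not the authors' GAN-based GHOST+ pipeline:

```python
import torch
import torch.nn.functional as F

def targeted_fgsm(model, stego_img, hidden_class, eps=0.03):
    """One targeted FGSM step: nudge the stego image so the classifier
    outputs the class of the hidden (sensitive) image."""
    x = stego_img.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), hidden_class)  # loss w.r.t. the *target* labels
    loss.backward()
    # Minus sign: descend the loss to move *toward* the target class.
    return (x - eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```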
2022-12-20
Lin, Xuanwei, Dong, Chen, Liu, Ximeng, Zhang, Yuanyuan.  2022.  SPA: An Efficient Adversarial Attack on Spiking Neural Networks using Spike Probabilistic. 2022 22nd IEEE International Symposium on Cluster, Cloud and Internet Computing (CCGrid). :366–375.
In the coming 6G era, spiking neural networks (SNNs) can serve as powerful processing tools in various areas, such as biometric recognition, AI robotics, autonomous driving, and healthcare, owing to their strong artificial intelligence (AI) processing capabilities. However, within Cyber-Physical Systems (CPS), SNNs are surprisingly vulnerable to adversarial examples generated from benign samples with human-imperceptible noise, which can lead to serious consequences such as face recognition anomalies, loss of control in autonomous driving, and wrong medical diagnoses. Only by fully understanding the principles of adversarial attacks can we defend against them. Most existing adversarial attacks cause severe accuracy degradation in trained SNNs, but the critical issue is that they generate adversarial samples only by randomly adding, deleting, and flipping spike trains, making them easy to identify by filters, or even by the human eye. Besides, attack performance and speed can be improved further. Hence, the Spike Probabilistic Attack (SPA) is presented in this paper, aiming to generate adversarial samples with smaller perturbations, greater model accuracy degradation, and faster iteration. SPA uses Poisson coding to generate spikes as probabilities, directly converting input data into spikes for faster speed, and generates uniformly distributed perturbations for better attack performance. Moreover, an objective function is constructed for small perturbations while maintaining the attack success rate, which speeds up convergence by adjusting parameters. Both white-box and black-box settings are used to evaluate the merits of SPA. Experimental results show that under white-box attacks the model's accuracy decreases by 9.2%–31.1% more than with other attacks, and the average success rate is 74.87% in the black-box setting. The results indicate that SPA has better attack performance than other existing attacks in the white-box setting and better transferability in the black-box setting.
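The Poisson rate coding that SPA builds on can be illustrated in a few lines: each normalized input value becomes a per-timestep firing probability. A generic sketch, not the authors' implementation:

```python
import numpy as np

def poisson_encode(image, timesteps=100, rng=None):
    """Poisson-style rate coding: each pixel in [0, 1] fires at each
    timestep with probability equal to its normalized intensity."""
    rng = rng or np.random.default_rng()
    probs = np.clip(image, 0.0, 1.0)
    # spikes[t, i, j] == 1 iff pixel (i, j) fired at timestep t.
    return (rng.random((timesteps,) + probs.shape) < probs).astype(np.uint8)

spikes = poisson_encode(np.random.rand(28, 28), timesteps=50)
print(spikes.shape, spikes.mean())  # mean firing rate tracks mean intensity
```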
2022-11-18
Paudel, Bijay Raj, Itani, Aashish, Tragoudas, Spyros.  2021.  Resiliency of SNN on Black-Box Adversarial Attacks. 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA). :799–806.
Existing works indicate that Spiking Neural Networks (SNNs) are resilient to adversarial attacks, but they test against only a few attack models. This paper studies adversarial attacks on SNNs using additional attack models and shows that SNNs are not inherently robust against many few-pixel L0 black-box attacks. Additionally, a method to defend against such attacks in SNNs is presented. The SNNs and the effects of adversarial attacks are tested both in software simulation and on SpiNNaker neuromorphic hardware.
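For intuition, a few-pixel L0 black-box attack of the kind evaluated here can be approximated by random search over a handful of pixels, querying only hard labels. A hedged sketch in which `predict` is an assumed label-returning oracle:

```python
import numpy as np

def few_pixel_attack(predict, image, true_label, n_pixels=3, queries=500, rng=None):
    """Black-box L0 attack: randomly perturb a handful of pixels and keep
    any candidate that flips the (hard-label) prediction."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    for _ in range(queries):
        candidate = image.copy()
        for _ in range(n_pixels):
            i, j = rng.integers(h), rng.integers(w)
            candidate[i, j] = rng.random()        # overwrite one pixel
        if predict(candidate) != true_label:      # only output labels needed
            return candidate
    return None  # attack failed within the query budget
```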
2022-07-15
Fan, Wenqi, Derr, Tyler, Zhao, Xiangyu, Ma, Yao, Liu, Hui, Wang, Jianping, Tang, Jiliang, Li, Qing.  2021.  Attacking Black-box Recommendations via Copying Cross-domain User Profiles. 2021 IEEE 37th International Conference on Data Engineering (ICDE). :1583–1594.
Recommender systems, which aim to suggest personalized lists of items for users, have drawn a lot of attention. In fact, many of these state-of-the-art recommender systems have been built on deep neural networks (DNNs). Recent studies have shown that these deep neural networks are vulnerable to attacks, such as data poisoning, which generate fake users to promote a selected set of items. Correspondingly, effective defense strategies have been developed to detect these generated users with fake profiles. Thus, new strategies for creating more 'realistic' user profiles to promote a set of items should be investigated to further understand the vulnerability of DNN-based recommender systems. In this work, we present a novel framework, CopyAttack. It is a reinforcement-learning-based black-box attack method that harnesses real users from a source domain by copying their profiles into the target domain with the goal of promoting a subset of items. CopyAttack is constructed to both efficiently and effectively learn policy gradient networks that first select, then further refine/craft user profiles from the source domain, and ultimately copy them into the target domain. CopyAttack's goal is to maximize the hit ratio of the targeted items in the Top-k recommendation list of the users in the target domain. We conducted experiments on two real-world datasets and empirically verified the effectiveness of the proposed framework. The implementation of CopyAttack is available at https://github.com/wenqifan03/CopyAttack.
Nguyen, Phuong T., Di Sipio, Claudio, Di Rocco, Juri, Di Penta, Massimiliano, Di Ruscio, Davide.  2021.  Adversarial Attacks to API Recommender Systems: Time to Wake Up and Smell the Coffee? 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE). :253–265.
Recommender systems in software engineering provide developers with a wide range of valuable items to help them complete their tasks. Among others, API recommender systems have gained momentum in recent years as they have become more successful at suggesting API calls or code snippets. While these systems have proven to be effective in terms of prediction accuracy, less attention has been paid to such recommenders' resilience against adversarial attempts. In fact, by crafting the recommenders' learning material, e.g., data from large open-source software (OSS) repositories, hostile users may succeed in injecting malicious data, putting at risk the software clients adopting API recommender systems. In this paper, we present an empirical investigation of adversarial machine learning techniques and their possible influence on recommender systems. The evaluation performed on three state-of-the-art API recommender systems reveals a worrying outcome: none of them is immune to malicious data. This result triggers the need for effective countermeasures to protect recommender systems against hostile attacks disguised in training data.
2022-07-01
Manoj, B. R., Sadeghi, Meysam, Larsson, Erik G..  2021.  Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network. ICC 2021 - IEEE International Conference on Communications. :1–6.
Deep learning (DL) is becoming popular as a new tool for many applications in wireless communication systems. However, for many classification tasks (e.g., modulation classification), it has been shown that DL-based wireless systems are susceptible to adversarial examples: well-crafted malicious inputs to the neural network (NN) designed to cause erroneous outputs. In this paper, we extend this to regression problems and show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input multiple-output (maMIMO) network. Specifically, we extend the fast gradient sign method (FGSM), momentum iterative FGSM, and projected gradient descent adversarial attacks to the context of power allocation in a maMIMO system. We benchmark the performance of these attacks and show that, with a small perturbation of the NN input, the white-box attacks can result in infeasible solutions in up to 86% of cases. Furthermore, we investigate the performance of black-box attacks. All the evaluations conducted in this work are based on an open dataset and NN models that are publicly available.
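Extending FGSM from classification to this regression setting changes only the loss term. A sketch under the assumption of an MSE objective, with `model` standing in for the trained power-allocation network:

```python
import torch
import torch.nn.functional as F

def fgsm_regression(model, channel_input, target_powers, eps=0.01):
    """FGSM against a regression model: maximize the MSE between the
    predicted and intended power allocation."""
    x = channel_input.clone().detach().requires_grad_(True)
    loss = F.mse_loss(model(x), target_powers)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()  # ascend the regression loss
```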
2022-04-20
Olowononi, Felix O., Rawat, Danda B., Liu, Chunmei.  2021.  Resilient Machine Learning for Networked Cyber Physical Systems: A Survey for Machine Learning Security to Securing Machine Learning for CPS. IEEE Communications Surveys & Tutorials. 23:524–552.
Cyber-Physical Systems (CPS) are characterized by their ability to integrate the physical and information (cyber) worlds. Their deployment in critical infrastructure has demonstrated a potential to transform the world. However, harnessing this potential is limited by their critical nature and the far-reaching effects of cyber attacks on humans, infrastructure, and the environment. Cyber concerns in CPS arise from the process of sending information from sensors to actuators over the wireless communication medium, which widens the attack surface. Traditionally, CPS security has been investigated from the perspective of preventing intruders from gaining access to the system using cryptography and other access control techniques, and most research has therefore focused on the detection of attacks in CPS. However, in a world of increasing adversaries, it is becoming more difficult to totally prevent adversarial attacks on CPS, hence the need to focus on making CPS resilient. Resilient CPS are designed to withstand disruptions and remain functional despite the operation of adversaries. One of the dominant methodologies explored for building resilient CPS depends on machine learning (ML) algorithms. However, drawing on recent research in adversarial ML, we posit that ML algorithms for securing CPS must themselves be resilient. This article therefore comprehensively surveys the interactions between resilient CPS using ML and resilient ML when applied in CPS. The paper concludes with a number of research trends and promising future research directions. With this article, readers can gain a thorough understanding of recent advances in ML-based security and securing ML for CPS, countermeasures, and research trends in this active research area.
2022-02-09
Mygdalis, Vasileios, Tefas, Anastasios, Pitas, Ioannis.  2021.  Introducing K-Anonymity Principles to Adversarial Attacks for Privacy Protection in Image Classification Problems. 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP). :1–6.
The network output activation values for a given input can be used to produce a sorted ranking. Adversarial attacks typically generate the least amount of perturbation required to change the classifier label; in that sense, the generated adversarial perturbation affects only the first position of the sorted ranking. We argue that meaningful information about the adversarial examples, i.e., their original labels, is still encoded in the network output ranking and could potentially be extracted using rule-based reasoning. To this end, we introduce a novel adversarial attack methodology inspired by K-anonymity principles, which generates adversarial examples that are not only misclassified but whose output sorted ranking spreads uniformly along K different positions. Any additional perturbation arising from the strength of the proposed objectives is regularized by a visual-similarity-based term. Experimental results show that the proposed approach achieves the K-anonymity-inspired optimization goals with reduced perturbation.
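One way to realize this K-anonymity-style objective — spreading the output ranking over K positions — is to drive the softmax output toward a uniform distribution over K chosen classes while penalizing distortion. A hedged sketch; the authors' exact objective and visual-similarity regularizer may differ, and a plain L2 term stands in here:

```python
import torch
import torch.nn.functional as F

def k_anonymity_loss(model, x_adv, x_orig, k_classes, lam=0.1):
    """Push the softmax mass to spread uniformly over K classes while
    penalizing visual distortion with a simple L2 term."""
    log_probs = F.log_softmax(model(x_adv), dim=1)
    target = torch.zeros_like(log_probs)
    target[:, k_classes] = 1.0 / len(k_classes)   # uniform over the K classes
    anonymity = F.kl_div(log_probs, target, reduction="batchmean")
    distortion = lam * (x_adv - x_orig).pow(2).mean()
    return anonymity + distortion
```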
2022-01-31
El-Allami, Rida, Marchisio, Alberto, Shafique, Muhammad, Alouani, Ihsen.  2021.  Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters. 2021 Design, Automation & Test in Europe Conference & Exhibition (DATE). :774–779.
Deep Learning (DL) algorithms have gained popularity owing to their practical problem-solving capacity. However, they suffer from a serious integrity threat, i.e., their vulnerability to adversarial attacks. In the quest for DL trustworthiness, recent works claimed the inherent robustness of Spiking Neural Networks (SNNs) to these attacks, without considering the variability in their structural spiking parameters. This paper explores the security enhancement of SNNs through internal structural parameters. Specifically, we investigate the robustness of SNNs to adversarial attacks under different values of the neurons' firing voltage thresholds and time-window boundaries. We thoroughly study SNN security under different adversarial attacks in the strong white-box setting, with different noise budgets and variable spiking parameters. Our results show a significant impact of the structural parameters on SNN security, and promising sweet spots can be reached to design trustworthy SNNs with 85% higher robustness than a traditional non-spiking DL system. To the best of our knowledge, this is the first work to investigate the impact of structural parameters on the robustness of SNNs to adversarial attacks. The proposed contributions and the experimental framework are available online at https://github.com/rda-ela/SNN-Adversarial-Attacks for reproducible research.
Kumová, Věra, Pilát, Martin.  2021.  Beating White-Box Defenses with Black-Box Attacks. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Deep learning has achieved great results in the last decade; however, it is sensitive to so-called adversarial attacks: small perturbations of the input that cause the network to classify incorrectly. In recent years, a number of attacks and defenses against them have been described. Most of the defenses, however, focus on defending against gradient-based attacks. In this paper, we describe an evolutionary attack and show that the adversarial examples it produces have different features than those from gradient-based attacks. We also show that, because of these features, one of the state-of-the-art defenses fails to detect such attacks.
2021-12-20
Sahay, Rajeev, Brinton, Christopher G., Love, David J..  2021.  Frequency-based Automated Modulation Classification in the Presence of Adversaries. ICC 2021 - IEEE International Conference on Communications. :1–6.
Automatic modulation classification (AMC) aims to improve the efficiency of crowded radio spectrums by automatically predicting the modulation constellation of wireless RF signals. Recent work has demonstrated the ability of deep learning to achieve robust AMC performance using raw in-phase and quadrature (IQ) time samples. Yet, deep learning models are highly susceptible to adversarial interference, which causes intelligent prediction models to misclassify received samples with high confidence. Furthermore, adversarial interference is often transferable, allowing an adversary to attack multiple deep learning models with a single perturbation crafted for a particular classification network. In this work, we present a novel receiver architecture consisting of deep learning models capable of withstanding transferable adversarial interference. Specifically, we show that adversarial attacks crafted to fool models trained on time-domain features are not easily transferable to models trained on frequency-domain features. In this capacity, we demonstrate classification performance improvements greater than 30% on recurrent neural networks (RNNs) and greater than 50% on convolutional neural networks (CNNs). We further demonstrate that our frequency-feature-based classification models achieve accuracies greater than 99% in the absence of attacks.
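The defense hinges on a simple preprocessing switch: train one classifier on raw IQ time samples and another on their DFT. A sketch of the frequency-domain feature transform, assuming IQ input of shape (2, N); not necessarily the authors' exact pipeline:

```python
import numpy as np

def iq_to_frequency_features(iq):
    """Convert raw IQ time samples of shape (2, N) — in-phase and
    quadrature rows — into frequency-domain features via the DFT."""
    complex_signal = iq[0] + 1j * iq[1]          # recombine I + jQ
    spectrum = np.fft.fft(complex_signal)
    # Stack real/imag parts so the classifier input keeps shape (2, N).
    return np.stack([spectrum.real, spectrum.imag])

freq = iq_to_frequency_features(np.random.randn(2, 128))
print(freq.shape)  # (2, 128)
```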
2021-11-30
Subramanian, Vinod, Pankajakshan, Arjun, Benetos, Emmanouil, Xu, Ning, McDonald, SKoT, Sandler, Mark.  2020.  A Study on the Transferability of Adversarial Attacks in Sound Event Classification. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :301–305.
An adversarial attack is an algorithm that perturbs the input of a machine learning model in an intelligent way in order to change the model's output. An important property of adversarial attacks is transferability: it is possible to generate adversarial perturbations on one model and apply them to the input of a different model to fool its output. Our work focuses on studying the transferability of adversarial attacks in sound event classification, and we are able to demonstrate differences in transferability properties from those observed in computer vision. We show that dataset normalization techniques such as z-score normalization do not affect the transferability of adversarial attacks, and that techniques such as knowledge distillation do not increase it.
2021-10-12
Zhao, Haojun, Lin, Yun, Gao, Song, Yu, Shui.  2020.  Evaluating and Improving Adversarial Attacks on DNN-Based Modulation Recognition. GLOBECOM 2020 - 2020 IEEE Global Communications Conference. :1–5.
The discovery of adversarial examples poses a serious risk to deep neural networks (DNNs). By adding a subtle perturbation that is imperceptible to the human eye, a well-behaved DNN model can be easily fooled into completely changing the predicted categories of input samples. However, research on adversarial attacks in the field of modulation recognition has mainly focused on increasing the prediction error of the classifier, while ignoring the perceptual invisibility of the attack. Aiming at DNN-based modulation recognition, this study designs the Fitting Difference as a metric to measure perturbed waveforms and proposes a new method, the Nesterov Adam Iterative Method, to generate adversarial examples. We show that the proposed algorithm not only mounts excellent white-box attacks but can also attack a black-box model. Moreover, our method improves the perceptual invisibility of attacks to a certain degree, thereby reducing the risk of an attack being detected.
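The abstract does not spell out the Nesterov Adam Iterative Method, so as a hedged illustration of the general idea, here is a generic Nesterov-momentum iterative attack (an NI-FGSM-style look-ahead; the Adam-style adaptivity the paper's name implies is omitted):

```python
import torch
import torch.nn.functional as F

def nesterov_iterative_attack(model, x, y, eps=0.05, alpha=0.01, steps=10, mu=1.0):
    """Iterative attack with Nesterov look-ahead momentum.
    Illustrative only; the paper's NAIM adds Adam-style updates."""
    x_adv, g = x.clone().detach(), torch.zeros_like(x)
    for _ in range(steps):
        x_nes = (x_adv + alpha * mu * g).detach().requires_grad_(True)  # look ahead
        loss = F.cross_entropy(model(x_nes), y)
        grad = torch.autograd.grad(loss, x_nes)[0]
        g = mu * g + grad / grad.abs().mean().clamp_min(1e-12)  # normalized momentum
        x_adv = x_adv + alpha * g.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
    return x_adv.detach()
```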
2021-05-13
Venceslai, Valerio, Marchisio, Alberto, Alouani, Ihsen, Martina, Maurizio, Shafique, Muhammad.  2020.  NeuroAttack: Undermining Spiking Neural Networks Security through Externally Triggered Bit-Flips. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
Due to their proven efficiency, machine-learning systems are deployed in a wide range of complex real-life problems. More specifically, Spiking Neural Networks (SNNs) have emerged as a promising solution to the accuracy, resource-utilization, and energy-efficiency challenges in machine-learning systems. While these systems are going mainstream, they have inherent security and reliability issues. In this paper, we propose NeuroAttack, a cross-layer attack that threatens the integrity of SNNs by exploiting low-level reliability issues through a high-level attack. In particular, we trigger a fault-injection-based sneaky hardware backdoor through carefully crafted adversarial input noise. Our results on Deep Neural Networks (DNNs) and SNNs show a serious integrity threat to state-of-the-art machine-learning techniques.

Monakhov, Yuri, Monakhov, Mikhail, Telny, Andrey, Mazurok, Dmitry, Kuznetsova, Anna.  2020.  Improving Security of Neural Networks in the Identification Module of Decision Support Systems. 2020 Ural Symposium on Biomedical Engineering, Radioelectronics and Information Technology (USBEREIT). :571–574.
In recent years, neural networks have been applied to various tasks. Deep learning algorithms provide state-of-the-art performance in computer vision, NLP, speech recognition, speaker recognition, and many other fields. In spite of this good performance, neural networks have a significant drawback: they have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. While imperceptible to the human eye, such perturbations lead to a significant drop in classification accuracy, as demonstrated by many studies on neural network security. Considering the pros and cons of neural networks, as well as the variety of their applications, developing methods to improve the robustness of neural networks against adversarial attacks has become an urgent task. In this article, the authors propose a "minimalistic" attacker model of the decision support system's identification unit, adaptive recommendations on security enhancement, and a set of protective methods. The suggested methods allow for a significant increase in classification accuracy under adversarial attacks, as demonstrated by an experiment outlined in this article.
2021-03-29
Chauhan, R., Heydari, S. Shah.  2020.  Polymorphic Adversarial DDoS attack on IDS using GAN. 2020 International Symposium on Networks, Computers and Communications (ISNCC). :1–6.
Intrusion Detection Systems (IDS) are important tools in preventing malicious traffic from penetrating networks and systems. Recently, IDS have been rapidly enhancing their detection capabilities using machine learning algorithms. However, these algorithms are vulnerable to new, unknown types of attacks that can evade machine learning IDS. In particular, they may be vulnerable to attacks based on Generative Adversarial Networks (GANs). GANs have been widely used in domains such as image processing and natural language processing to generate adversarial data of different types, such as graphics, videos, and texts. We propose a model using a GAN to generate adversarial DDoS attacks that can change the attack profile and evade detection. Our simulation results indicate that, through continuous changes to the attack profile, defensive systems that use incremental learning will still be vulnerable to new attacks.
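The core loop such a scheme describes — a generator learning to reshape DDoS traffic features until a detector scores them as benign — can be sketched roughly as follows. The feature width, surrogate IDS, and random placeholder flows are all assumptions, and the alternating discriminator training of a full GAN is omitted for brevity:

```python
import torch
import torch.nn as nn

FEATS = 32  # assumed number of flow features

generator = nn.Sequential(      # maps (attack features + noise) -> evasive features
    nn.Linear(FEATS + 16, 64), nn.ReLU(), nn.Linear(64, FEATS))
ids_surrogate = nn.Sequential(  # stand-in for the black-box IDS
    nn.Linear(FEATS, 32), nn.ReLU(), nn.Linear(32, 1))

opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    attack_feats = torch.rand(64, FEATS)      # placeholder attack flows
    noise = torch.randn(64, 16)
    adv_feats = generator(torch.cat([attack_feats, noise], dim=1))
    # Train the generator so the surrogate scores adversarial flows as benign (0).
    loss = bce(ids_surrogate(adv_feats), torch.zeros(64, 1))
    opt.zero_grad(); loss.backward(); opt.step()
```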
2021-03-15
Babu, S. A., Ameer, P. M..  2020.  Physical Adversarial Attacks Against Deep Learning Based Channel Decoding Systems. 2020 IEEE Region 10 Symposium (TENSYMP). :1511–1514.
Deep Learning (DL), in spite of its huge success in many new fields, is extremely vulnerable to adversarial attacks. We demonstrate how an attacker applies physical white-box and black-box adversarial attacks to channel decoding systems based on DL. We show that these attacks can affect the systems and decrease their performance, and we find that they are more effective than conventional jamming attacks. Additionally, we show that classical decoding schemes are more robust than deep learning channel decoding systems in the presence of both adversarial and jamming attacks.

2021-03-09
Soni, D. K., Sharma, H., Bhushan, B., Sharma, N., Kaushik, I..  2020.  Security Issues Seclusion in Bitcoin System. 2020 IEEE 9th International Conference on Communication Systems and Network Technologies (CSNT). :223–229.
In the dawn of cryptocurrencies, the most talked-about currency is Bitcoin. Bitcoin is a widely flourishing digital currency and an exchange-traded commodity implementing a peer-to-peer payment network. No central authority exists in Bitcoin. Users in the Bitcoin network or pool need not use real names; rather, they use pseudonyms for managing and verifying transactions. Due to the use of pseudonyms, Bitcoin is perceived to provide anonymity. However, Bitcoin is one of the most transparent payment networks: all transactions are publicly visible. To ensure integrity and put a stop to double-spending, the blockchain is used, which works as a ledger for the management of Bitcoins. The blockchain can be misused to monitor the flow of bitcoins among multiple transactions. When data from external sources is combined with inferences drawn from the blockchain, a user's identity and profile may be revealed, and the user's activity may be traced to the extent of defrauding that user. Along with the popularity of Bitcoin, the number of adversarial attacks has also gained pace. All these activities are meant to exploit anonymity and privacy in Bitcoin, and they result in the loss of bitcoins and unlawful profit for attackers. In this paper, we present an analysis of major attacks such as malicious attacks, greater-than-52% attacks, and block withholding attacks. This paper also aims to present analyses of, and improvements to, Bitcoin's anonymity and privacy.

2021-03-04
Kalin, J., Ciolino, M., Noever, D., Dozier, G..  2020.  Black Box to White Box: Discover Model Characteristics Based on Strategic Probing. 2020 Third International Conference on Artificial Intelligence for Industries (AI4I). :60–63.
In machine learning, white-box adversarial attacks rely on knowing underlying knowledge about the model's attributes. This work focuses on discovering two distinct pieces of model information: the underlying architecture and the primary training dataset. With the process in this paper, a structured set of input probes and the model's outputs become the training data for a deep classifier. Two subdomains in machine learning are explored: image-based classifiers and text transformers with GPT-2. For image classification, the focus is on exploring commonly deployed architectures and datasets available in popular public libraries. Using a single transformer architecture with multiple levels of parameters, text generation is explored by fine-tuning on different datasets. Each dataset explored in the image and text domains is distinguishable from the others. Diversity in text transformer outputs implies further research is needed to successfully classify architecture attribution in the text domain.

2020-11-04
Apruzzese, G., Colajanni, M., Ferretti, L., Marchetti, M..  2019.  Addressing Adversarial Attacks Against Security Systems Based on Machine Learning. 2019 11th International Conference on Cyber Conflict (CyCon). 900:1–18.
Machine-learning solutions are successfully adopted in multiple contexts, but the application of these techniques to the cyber security domain is complex and still immature. Among the many open issues affecting security systems based on machine learning, we concentrate on adversarial attacks that aim to affect the detection and prediction capabilities of machine-learning models. We consider realistic types of poisoning and evasion attacks targeting security solutions devoted to malware, spam, and network intrusion detection. We explore the possible damage that an attacker can cause to a cyber detector and present some existing and original defensive techniques in the context of intrusion detection systems. This paper contains several performance evaluations based on extensive experiments using large traffic datasets. The results highlight that modern adversarial attacks are highly effective against machine-learning classifiers for cyber detection, and that existing solutions require improvement in several directions. The paper paves the way for more robust machine-learning-based techniques that can be integrated into cyber security platforms.

2020-10-29
Vi, Bao Ngoc, Noi Nguyen, Huu, Nguyen, Ngoc Tran, Truong Tran, Cao.  2019.  Adversarial Examples Against Image-based Malware Classification Systems. 2019 11th International Conference on Knowledge and Systems Engineering (KSE). :1–5.
Malicious software, known as malware, has become a seriously urgent threat to computer security, so automatic malware classification techniques have received increasing attention. In recent years, deep learning (DL) techniques for computer vision have been successfully applied to malware classification by visualizing malware files and then using DL to classify the visualized images. Although DL-based classification systems have been proven to be much more accurate than conventional ones, these systems have been shown to be vulnerable to adversarial attacks. However, there has been little research into the danger of adversarial attacks on visualized image-based malware classification systems. This paper proposes a gradient-based adversarial attack method against image-based malware classification systems that introduces perturbations in the resource section of PE files. Experimental results on the Malimg dataset show that, with small interference, the proposed method can achieve a high attack success rate against convolutional neural network malware classifiers.
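The attack's defining constraint — perturbing only bytes from the PE resource section — corresponds to masking the gradient outside that region of the visualized image. A hedged sketch with an assumed region mask:

```python
import torch
import torch.nn.functional as F

def masked_fgsm(model, malware_img, label, region_mask, eps=0.05):
    """FGSM restricted to a byte region (e.g., the PE resource section):
    region_mask is 1 where perturbation is allowed, 0 elsewhere."""
    x = malware_img.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign() * region_mask  # touch only allowed bytes
    return x_adv.clamp(0.0, 1.0).detach()
```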

Choi, Seok-Hwan, Shin, Jin-Myeong, Liu, Peng, Choi, Yoon-Ho.  2019.  Robustness Analysis of CNN-based Malware Family Classification Methods Against Various Adversarial Attacks. 2019 IEEE Conference on Communications and Network Security (CNS). :1–6.
As malware family classification methods, image-based classification methods have attracted much attention. In particular, Convolutional Neural Network (CNN)-based malware family classification methods have been studied due to their fast classification speed and high classification accuracy. However, previous studies of CNN-based classification methods focused only on improving the classification accuracy of malware families; they did not consider cases where the accuracy of CNN-based malware classification methods can decrease under adversarial attacks. In this paper, we analyze the robustness of various CNN-based malware family classification models under adversarial attacks. While adding imperceptible non-random perturbations to the input image, we measured how the accuracy of the CNN-based malware family classification model is affected. We also show the influence of three significant visualization parameters (i.e., the size of the input image, the dimension of the input image, and the conversion color of a special character) on the accuracy variation under adversarial attacks. Evaluation results using the Microsoft malware dataset show that the accuracy of a CNN-based malware family classification method, over 98% without attacks, can be decreased to less than 7%.

2020-10-05
Zhou, Xingyu, Li, Yi, Barreto, Carlos A., Li, Jiani, Volgyesi, Peter, Neema, Himanshu, Koutsoukos, Xenofon.  2019.  Evaluating Resilience of Grid Load Predictions under Stealthy Adversarial Attacks. 2019 Resilience Week (RWS). 1:206–212.
Recent advances in machine learning enable wider applications of prediction models in cyber-physical systems. Smart grids are increasingly using distributed sensor settings for distributed sensor fusion and information processing. Load forecasting systems use these sensors to predict future loads for dynamic pricing of power and grid maintenance. However, these inference predictors are highly complex and thus vulnerable to adversarial attacks. Moreover, the adversarial attacks are synthetic norm-bounded modifications to a limited number of sensors that can greatly affect the accuracy of the overall predictor. It can be much cheaper and more effective to incorporate elements of security and resilience at the earliest stages of design. In this paper, we demonstrate how to analyze the security and resilience of learning-based prediction models in power distribution networks by utilizing a domain-specific deep-learning and testing framework. This framework is developed using DeepForge and enables rapid design and analysis of attack scenarios against distributed smart meters in a power distribution network. It runs the attack simulations in the cloud backend. In addition to the predictor model, we have integrated an anomaly detector to detect adversarial attacks targeting the predictor. We formulate the stealthy adversarial attacks as an optimization problem to maximize prediction loss while minimizing the required perturbations. Under the worst-case setting, where the attacker has full knowledge of both the predictor and the detector, an iterative attack method has been developed to solve for the adversarial perturbation. We demonstrate the framework's capabilities using a GridLAB-D based power distribution network model and show how stealthy adversarial attacks can affect smart grid prediction systems even with partial control of the network.
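The stealthy-attack formulation — maximizing prediction loss under a perturbation budget on a limited set of sensors — can be sketched as projected gradient ascent with a sensor mask. This is a generic illustration, not the authors' DeepForge implementation, and detector evasion is omitted:

```python
import torch
import torch.nn.functional as F

def stealthy_sensor_attack(predictor, readings, true_load, sensor_mask,
                           eps=0.05, alpha=0.01, steps=20):
    """Projected gradient ascent on forecasting loss, restricted to the
    compromised sensors marked in sensor_mask and norm-bounded by eps."""
    x_adv = readings.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.mse_loss(predictor(x_adv), true_load)  # loss to maximize
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign() * sensor_mask
            x_adv = readings + (x_adv - readings).clamp(-eps, eps)  # stay stealthy
    return x_adv.detach()
```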
2020-09-04
Wu, Yi, Liu, Jian, Chen, Yingying, Cheng, Jerry.  2019.  Semi-black-box Attacks Against Speech Recognition Systems Using Adversarial Samples. 2019 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN). :1–5.
As automatic speech recognition (ASR) systems have been integrated into a diverse set of devices around us in recent years, their security vulnerabilities have become an increasing concern for the public. Existing studies have demonstrated that deep neural networks (DNNs), acting as the computation core of ASR systems, are vulnerable to deliberately designed adversarial attacks. Based on the gradient descent algorithm, existing studies have successfully generated adversarial samples that can disturb ASR systems and produce adversary-expected transcript texts. Most of this research simulated white-box attacks, which require knowledge of all the components in the targeted ASR system. In this work, we propose the first semi-black-box attack against the ASR system Kaldi. Requiring only partial information from Kaldi and none from the DNN, we can embed malicious commands into a single audio clip based on a gradient-independent genetic algorithm. The crafted audio clip is recognized as the embedded malicious commands by Kaldi while remaining unnoticeable to humans. Experiments show that our attack can achieve a high attack success rate with unnoticeable perturbations on three types of audio clips (pop music, pure music, and human commands) without needing the underlying DNN model's parameters and architecture.
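A gradient-independent genetic algorithm in this spirit evolves audio perturbations using only black-box output scores. A toy sketch where `score` is an assumed oracle returning the target transcript's confidence; the paper's actual operators may differ:

```python
import numpy as np

def genetic_audio_attack(score, audio, pop=20, gens=100, sigma=0.002, rng=None):
    """Evolve additive audio perturbations with a simple GA, querying only
    the black-box confidence score of the target transcript."""
    rng = rng or np.random.default_rng()
    population = rng.normal(0, sigma, (pop, audio.size))
    for _ in range(gens):
        fitness = np.array([score(audio + p) for p in population])
        elite = population[np.argsort(fitness)[-pop // 2:]]      # keep the best half
        parents = elite[rng.integers(len(elite), size=(pop, 2))]
        children = parents.mean(axis=1)                          # averaging crossover
        population = children + rng.normal(0, sigma / 4, children.shape)  # mutation
    best = np.argmax([score(audio + p) for p in population])
    return audio + population[best]
```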
2020-08-28
Gopinath, Divya, S. Pasareanu, Corina, Wang, Kaiyuan, Zhang, Mengshi, Khurshid, Sarfraz.  2019.  Symbolic Execution for Attribution and Attack Synthesis in Neural Networks. 2019 IEEE/ACM 41st International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). :282–283.
This paper introduces DeepCheck, a new approach for validating Deep Neural Networks (DNNs) based on core ideas from program analysis, specifically symbolic execution. DeepCheck implements techniques for lightweight symbolic analysis of DNNs and applies them in the context of image classification to address two challenging problems: 1) identification of important pixels (for attribution and adversarial generation); and 2) creation of adversarial attacks. Experimental results using the MNIST dataset show that DeepCheck's lightweight symbolic analysis provides a valuable tool for DNN validation.