Biblio

Filters: Keyword is Adversarial Machine Learning
2023-06-22
Seetharaman, Sanjay, Malaviya, Shubham, Vasu, Rosni, Shukla, Manish, Lodha, Sachin.  2022.  Influence Based Defense Against Data Poisoning Attacks in Online Learning. 2022 14th International Conference on COMmunication Systems & NETworkS (COMSNETS). :1–6.
Data poisoning is a type of adversarial attack on training data in which an attacker manipulates a fraction of the data to degrade the performance of a machine learning model. There are several known defensive mechanisms for handling offline attacks; however, defensive measures for online learning, where data points arrive sequentially, have not garnered similar interest. In this work, we propose a defense mechanism to minimize the degradation caused by poisoned training data on a learner's model in an online setup. Our proposed method utilizes the influence function, a classic technique in robust statistics. Further, we supplement it with existing data sanitization methods for filtering out some of the poisoned data points. We study the effectiveness of our defense mechanism on multiple datasets and across multiple attack strategies against an online learner.
ISSN: 2155-2509
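A minimal sketch of the influence-function idea described in this abstract, assuming an online logistic-regression learner, synthetic data, and an illustrative filtering threshold; this is a simplified illustration of the general technique, not the authors' algorithm:

```python
# Simplified sketch of influence-based filtering for an online learner.
# The threshold, model, and synthetic data stream are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, lr, lam, thresh = 5, 0.1, 1e-2, 5.0
w = np.zeros(d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, x, y):                      # gradient of regularized logistic loss at one point
    return (sigmoid(x @ w) - y) * x + lam * w

def hessian(w, X):                      # regularized Hessian on a reference batch
    p = sigmoid(X @ w)
    return (X.T * (p * (1 - p))) @ X / len(X) + lam * np.eye(d)

# Small trusted validation batch used as the reference for influence scores.
X_val = rng.normal(size=(50, d)); y_val = (X_val[:, 0] > 0).astype(float)

def influence(w, x, y):
    H_inv = np.linalg.inv(hessian(w, X_val))
    g_val = np.mean([grad(w, xv, yv) for xv, yv in zip(X_val, y_val)], axis=0)
    return float(g_val @ H_inv @ grad(w, x, y))   # estimated effect of the point on val loss

# Online stream: accept the SGD update only if the point's influence is small.
for t in range(500):
    x = rng.normal(size=d)
    y = float(x[0] > 0)
    if t % 25 == 0:                     # occasionally flip a label to mimic poisoning
        y = 1.0 - y
    if abs(influence(w, x, y)) > thresh:
        continue                        # treat as suspicious, skip the update
    w -= lr * grad(w, x, y)

print("learned weights:", np.round(w, 2))
```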
2023-01-06
Roy, Arunava, Dasgupta, Dipankar.  2022.  A Robust Framework for Adaptive Selection of Filter Ensembles to Detect Adversarial Inputs. 2022 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :59–67.
Existing defense strategies against adversarial attacks (AAs) on AI/ML are primarily focused on examining the input data streams using a wide variety of filtering techniques. For instance, input filters are used to remove noisy, misleading, and out-of-class inputs along with a variety of attacks on learning systems. However, a single filter may not be able to detect all types of AAs. To address this issue, in the current work, we propose a robust, transferable, distribution-independent, and cross-domain supported framework for selecting Adaptive Filter Ensembles (AFEs) to minimize the impact of data poisoning on learning systems. The optimal filter ensembles are determined through a Multi-Objective Bi-Level Programming Problem (MOBLPP) that provides a subset of diverse filter sequences, each exhibiting fair detection accuracy. The proposed framework of AFE is trained to model the pristine data distribution to identify the corrupted inputs and converges to the optimal AFE without vanishing gradients and mode collapses irrespective of input data distributions. We present preliminary experiments showing that the proposed defense outperforms existing defenses in terms of robustness and accuracy.
Erbil, Pinar, Gursoy, M. Emre.  2022.  Detection and Mitigation of Targeted Data Poisoning Attacks in Federated Learning. 2022 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :1–8.
Federated learning (FL) has emerged as a promising paradigm for distributed training of machine learning models. In FL, several participants train a global model collaboratively by only sharing model parameter updates while keeping their training data local. However, FL was recently shown to be vulnerable to data poisoning attacks, in which malicious participants send parameter updates derived from poisoned training data. In this paper, we focus on defending against targeted data poisoning attacks, where the attacker’s goal is to make the model misbehave for a small subset of classes while the rest of the model is relatively unaffected. To defend against such attacks, we first propose a method called MAPPS for separating malicious updates from benign ones. Using MAPPS, we propose three methods for attack detection: MAPPS + X-Means, MAPPS + VAT, and their Ensemble. Then, we propose an attack mitigation approach in which a "clean" model (i.e., a model that is not negatively impacted by an attack) can be trained despite the existence of a poisoning attempt. We empirically evaluate all of our methods using popular image classification datasets. Results show that we can achieve true positive rates greater than 95% while incurring a false positive rate of less than 2%. Furthermore, the clean models that are trained using our proposed methods have accuracy comparable to models trained in an attack-free scenario.
Jagadeesha, Nishchal.  2022.  Facial Privacy Preservation using FGSM and Universal Perturbation attacks. 2022 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COM-IT-CON). 1:46–52.
Research done in Facial Privacy so far has entrenched the scope of gleaning race, age, and gender from a human’s facial image that are classifiable and compliant biometric attributes. Noticeable distortions, morphing, and face-swapping are some of the techniques that have been researched to restore consumers’ privacy. By fooling face recognition models, these techniques cater superficially to the needs of user privacy; however, the presence of visible manipulations negatively affects the aesthetic of the image. The objective of this work is to highlight common adversarial techniques that can be used to introduce granular pixel distortions using white-box and black-box perturbation algorithms that ensure the privacy of users’ sensitive or personal data in face images, fooling AI facial recognition models while maintaining the aesthetics and visual integrity of the image.
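As a reference point for the white-box technique named in this entry, a minimal FGSM sketch might look like the following; the toy classifier, random "face" tensor, and epsilon are illustrative assumptions, not the paper's setup:

```python
# Minimal FGSM sketch: add an epsilon-bounded perturbation in the direction
# of the sign of the loss gradient. Toy model and random image are stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, label, eps=8 / 255):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), label)
    loss.backward()
    # One signed gradient step, clipped back to the valid pixel range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 32, 32)            # placeholder for a face image
y = torch.tensor([3])                   # placeholder identity label
x_adv = fgsm(x, y)
print("max pixel change:", (x_adv - x).abs().max().item())
```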
2022-11-18
Tian, Pu, Hatcher, William Grant, Liao, Weixian, Yu, Wei, Blasch, Erik.  2021.  FALIoTSE: Towards Federated Adversarial Learning for IoT Search Engine Resiliency. 2021 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :290–297.
To improve efficiency and resource usage in data retrieval, an Internet of Things (IoT) search engine organizes a vast amount of scattered data and responds to client queries with processed results. Machine learning provides a deep understanding of complex patterns and enables enhanced feedback to users through well-trained models. Nonetheless, machine learning models are prone to adversarial attacks via the injection of elaborate perturbations, resulting in subverted outputs. Particularly, adversarial attacks on time-series data demand urgent attention, as sensors in IoT systems are collecting an increasing volume of sequential data. This paper investigates adversarial attacks on time-series analysis in an IoT search engine (IoTSE) system. Specifically, we consider the Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) as our base model, implemented in a simulated federated learning scheme. We propose the Federated Adversarial Learning for IoT Search Engine (FALIoTSE) that exploits the shared parameters of the federated model as the target for adversarial example generation and resiliency. Using a real-world smart parking garage dataset, the impact of an attack on FALIoTSE is demonstrated under various levels of perturbation. The experiments show that the training error increases significantly with noise added to the gradients.
2022-09-20
Afzal-Houshmand, Sam, Homayoun, Sajad, Giannetsos, Thanassis.  2021.  A Perfect Match: Deep Learning Towards Enhanced Data Trustworthiness in Crowd-Sensing Systems. 2021 IEEE International Mediterranean Conference on Communications and Networking (MeditCom). :258–264.
The advent of IoT edge devices has enabled the collection of rich datasets, as part of Mobile Crowd Sensing (MCS), which has emerged as a key enabler for a wide gamut of safety-critical applications ranging from traffic control and environmental monitoring to assistive healthcare. Despite the clear advantages that such an unprecedented quantity of data brings forth, it is also subject to inherent data trustworthiness challenges due to factors such as malevolent input and faulty sensors. Compounding this issue, there has been a plethora of proposed solutions, based on the use of traditional machine learning algorithms, towards assessing and sifting faulty data without any assumption on the trustworthiness of their source. However, there are still a number of open issues: how to cope with the presence of strong, colluding adversaries while at the same time efficiently managing this high influx of incoming user data. In this work, we meet these challenges by proposing the hybrid use of Deep Learning schemes (i.e., LSTMs) and conventional Machine Learning classifiers (i.e., One-Class Classifiers) for detecting and filtering out false data points. We provide a prototype implementation coupled with a detailed performance evaluation under various (attack) scenarios, employing both real and synthetic datasets. Our results showcase how the proposed solution outperforms various existing resilient aggregation and outlier detection schemes.
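A rough sketch of the detection pattern this entry describes, with a plain autoregressive forecaster standing in for the LSTM and a one-class classifier applied to the residuals; the data, parameters, and injected spike are illustrative assumptions:

```python
# Sketch: forecast each sensor reading, then let a one-class classifier decide
# whether the residual looks trustworthy. Synthetic data only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 60, 600)) + 0.05 * rng.normal(size=600)

def lagged(x, k=5):                     # build (lag window, next value) pairs
    X = np.stack([x[i:len(x) - k + i] for i in range(k)], axis=1)
    return X, x[k:]

X, y = lagged(series)
forecaster = LinearRegression().fit(X[:400], y[:400])          # "LSTM" stand-in
residuals = (y[:400] - forecaster.predict(X[:400])).reshape(-1, 1)
detector = OneClassSVM(nu=0.05, gamma="scale").fit(residuals)  # model of honest noise

# Simulate adversarial false data in the held-out portion and flag it.
y_test = y[400:].copy()
y_test[50:60] += 2.0                    # injected false data points
res_test = (y_test - forecaster.predict(X[400:])).reshape(-1, 1)
flags = detector.predict(res_test)      # -1 => filtered out as untrustworthy
print("flagged points:", int((flags == -1).sum()))
```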
Wood, Adrian, Johnstone, Michael N..  2021.  Detection of Induced False Negatives in Malware Samples. 2021 18th International Conference on Privacy, Security and Trust (PST). :1–6.
Malware detection is an important area of cyber security. Computer systems rely on malware detection applications to prevent malware attacks from succeeding. Malware detection is not a straightforward task, as new variants of malware are generated at an increasing rate. Machine learning (ML) has been utilised to generate predictive classification models to identify new malware variants which conventional malware detection methods may not detect. Machine learning has, however, been found to be vulnerable to different types of adversarial attacks, in which an attacker is able to negatively affect the classification ability of the ML model. Several defensive measures to prevent adversarial poisoning attacks have been developed, but they often rely on the use of a trusted clean dataset to help identify and remove adversarial examples from the training dataset. The defence in this paper does not require a trusted clean dataset, but instead, identifies intentional false negatives (zero day malware classified as benign) at the testing stage by examining the activation weights of the ML model. The defence was able to identify 94.07% of the successful targeted poisoning attacks.
2022-07-15
Nguyen, Phuong T., Di Sipio, Claudio, Di Rocco, Juri, Di Penta, Massimiliano, Di Ruscio, Davide.  2021.  Adversarial Attacks to API Recommender Systems: Time to Wake Up and Smell the Coffee? 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE). :253–265.
Recommender systems in software engineering provide developers with a wide range of valuable items to help them complete their tasks. Among others, API recommender systems have gained momentum in recent years as they have become more successful at suggesting API calls or code snippets. While these systems have proven to be effective in terms of prediction accuracy, less attention has been paid to such recommenders’ resilience against adversarial attempts. In fact, by crafting the recommenders’ learning material, e.g., data from large open-source software (OSS) repositories, hostile users may succeed in injecting malicious data, putting at risk the software clients adopting API recommender systems. In this paper, we present an empirical investigation of adversarial machine learning techniques and their possible influence on recommender systems. The evaluation performed on three state-of-the-art API recommender systems reveals a worrying outcome: none of them is immune to malicious data. The obtained result underscores the need for effective countermeasures to protect recommender systems against hostile attacks disguised in training data.
2022-04-19
Hemmati, Mojtaba, Hadavi, Mohammad Ali.  2021.  Using Deep Reinforcement Learning to Evade Web Application Firewalls. 2021 18th International ISC Conference on Information Security and Cryptology (ISCISC). :35–41.
Web application firewalls (WAFs) are the last line of defense in protecting web applications from application layer security threats like SQL injection and cross-site scripting. Currently, most WAF evasion techniques are still developed manually. In this work, we propose a solution that automatically scans WAFs to find payloads through which they can be bypassed. Our solution uncovers rule defects, which can be further used for rule tuning in rule-based WAFs. It can also enrich the datasets used to retrain machine learning-based WAFs. To this purpose, we provide a framework based on reinforcement learning, with an environment compatible with the OpenAI Gym toolkit, employed for training agents to carry out WAF evasion tasks. The framework acts as an adversary and exploits a set of mutation operators to mutate the malicious payload syntactically without affecting the original semantics. We use Q-learning and proximal policy optimization algorithms with deep neural networks. Our solution is successful in evading both signature-based and machine learning-based WAFs.
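A toy, self-contained sketch of the reinforcement-learning loop described in this entry, with tabular Q-learning and a dummy signature check standing in for the paper's Gym environment, deep agents, and real WAF; the mutation operators and reward are assumptions:

```python
# Toy sketch of the RL evasion loop: an agent picks semantics-preserving
# mutations of a SQL-injection payload and is rewarded when a (dummy) WAF
# no longer blocks it. Tabular Q-learning stands in for the paper's agents.
import random

random.seed(0)
MUTATIONS = [
    lambda p: p.replace(" ", "/**/"),            # inline comments instead of spaces
    lambda p: p.replace("OR", "oR"),             # case toggling
    lambda p: p.replace("=", " LIKE "),          # equivalent operator
]

def waf_blocks(payload):                          # dummy signature-based WAF
    return "' OR 1=1" in payload or "UNION SELECT" in payload.upper()

def episode(Q, eps=0.2, alpha=0.5, gamma=0.9, max_steps=5):
    payload, state = "' OR 1=1 --", 0
    for _ in range(max_steps):
        a = (random.randrange(len(MUTATIONS)) if random.random() < eps
             else max(range(len(MUTATIONS)), key=lambda i: Q[state][i]))
        payload = MUTATIONS[a](payload)
        reward = 0.0 if waf_blocks(payload) else 1.0
        nxt = min(state + 1, max_steps - 1)
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt
        if reward:                                # bypass found
            return payload
    return None

Q = [[0.0] * len(MUTATIONS) for _ in range(5)]
bypass = None
for _ in range(200):
    bypass = episode(Q) or bypass
print("example evading payload:", bypass)
```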
2022-04-12
Venkatesan, Sridhar, Sikka, Harshvardhan, Izmailov, Rauf, Chadha, Ritu, Oprea, Alina, de Lucia, Michael J..  2021.  Poisoning Attacks and Data Sanitization Mitigations for Machine Learning Models in Network Intrusion Detection Systems. MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM). :874–879.
Among many application domains of machine learning in real-world settings, cyber security can benefit from more automated techniques to combat sophisticated adversaries. Modern network intrusion detection systems leverage machine learning models on network logs to proactively detect cyber attacks. However, the risk of adversarial attacks against machine learning used in these cyber settings is not fully explored. In this paper, we investigate poisoning attacks at training time against machine learning models in constrained cyber environments such as network intrusion detection; we also explore mitigations of such attacks based on training data sanitization. We consider the setting of poisoning availability attacks, in which an attacker can insert a set of poisoned samples at training time with the goal of degrading the accuracy of the deployed model. We design a white-box, realizable poisoning attack that reduced the original model accuracy from 95% to less than 50% by generating mislabeled samples in close vicinity of a selected subset of training points. We also propose a novel Nested Training method as a defense against these attacks. Our defense includes a diversified ensemble of classifiers, each trained on a different subset of the training set. We use the disagreement of the classifiers' predictions as a data sanitization method, and show that an ensemble of 10 SVM classifiers is resilient to a large fraction of poisoning samples, up to 30% of the training data.
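The ensemble-disagreement idea can be sketched along the following lines; the synthetic data, subset scheme, and simple majority-vote rule are illustrative assumptions rather than the authors' exact Nested Training procedure:

```python
# Sketch of ensemble-disagreement sanitization: train SVMs on disjoint subsets
# and flag training points whose labels the ensemble disagrees with as
# potentially poisoned. Synthetic data, label-flip poisoning for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Poison 10% of the training set by flipping labels.
poison_idx = rng.choice(len(y), size=len(y) // 10, replace=False)
y_poisoned = y.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

# Train a diversified ensemble, each member on a different subset.
n_models = 10
splits = np.array_split(rng.permutation(len(y)), n_models)
ensemble = [LinearSVC(dual=False).fit(X[idx], y_poisoned[idx]) for idx in splits]

# Sanitize: a point is suspicious if the ensemble majority disagrees with its label.
votes = np.mean([clf.predict(X) for clf in ensemble], axis=0)
suspicious = (np.round(votes) != y_poisoned)
recall = suspicious[poison_idx].mean()
print(f"flagged {suspicious.sum()} points, poison recall {recall:.2f}")
```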
2022-03-22
Bai, Zhihao, Wang, Ke, Zhu, Hang, Cao, Yinzhi, Jin, Xin.  2021.  Runtime Recovery of Web Applications under Zero-Day ReDoS Attacks. 2021 IEEE Symposium on Security and Privacy (SP). :1575–1588.
Regular expression denial of service (ReDoS), which exploits the super-linear running time of matching regular expressions against carefully crafted inputs, is an emerging class of DoS attacks on web services. One challenging question for a victim web service is how to quickly recover its normal operation after ReDoS attacks, especially zero-day attacks exploiting previously unknown vulnerabilities. In this paper, we present RegexNet, the first payload-based, automated, reactive ReDoS recovery system for web services. RegexNet adopts a learning model, which is updated constantly in a feedback loop during runtime, to classify payloads of upcoming requests including the request contents and database query responses. If detected as a cause leading to ReDoS, RegexNet migrates those requests to a sandbox and isolates their execution for a fast, first-measure recovery. We have implemented a RegexNet prototype and integrated it with HAProxy and Node.js. Evaluation results show that RegexNet is effective in recovering the performance of web services against zero-day ReDoS attacks, responsive in reacting to attacks in under a minute, and resilient to different ReDoS attack types including adaptive ones that are designed to evade RegexNet on purpose.
2022-02-24
Muhati, Eric, Rawat, Danda B..  2021.  Adversarial Machine Learning for Inferring Augmented Cyber Agility Prediction. IEEE INFOCOM 2021 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :1–6.
Security analysts conduct continuous evaluations of cyber-defense tools to keep pace with advanced and persistent threats. Cyber agility has become a critical proactive security resource that makes it possible to measure defense adjustments and reactions to rising threats. Subsequently, machine learning has been applied to support cyber agility prediction as an essential effort to anticipate future security performance. Nevertheless, apt and treacherous actors motivated by economic incentives continue to prevail in circumventing machine learning-based protection tools. Adversarial learning, widely applied to computer security, especially intrusion detection, has emerged as a new area of concern for the recently recognized critical cyber agility prediction. The rationale is that if a sophisticated malicious actor obtains the cyber agility parameters, correct prediction cannot be guaranteed unless white-box attack failures can be demonstrated. The challenge lies in recognizing that unconstrained adversaries hold vast potential capabilities. In practice, they could have perfect knowledge, i.e., a full understanding of the defense tool in use. We address this challenge by proposing an adversarial machine learning approach that achieves accurate cyber agility forecasts through mapped nefarious influence on static defense tool metrics. Considering that an adversary would aim at influencing perilous confidence in a defense tool, we demonstrate resilient cyber agility prediction through verified attack signatures in dynamic learning windows. We then compare cyber agility prediction under negative influence with and without our proposed dynamic learning windows. Our numerical results show that the model's execution degrades without adversarial machine learning. Such a feigned measure of performance could lead to incorrect software security patching.
2022-02-22
Martin, Peter, Fan, Jian, Kim, Taejin, Vesey, Konrad, Greenwald, Lloyd.  2021.  Toward Effective Moving Target Defense Against Adversarial AI. MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM). :993–998.
Deep learning (DL) models have been shown to be vulnerable to adversarial attacks. DL model security against adversarial attacks is critical to using DL-trained models in forward deployed systems, e.g., facial recognition, document characterization, or object detection. We provide results and lessons learned applying a moving target defense (MTD) strategy against iterative, gradient-based adversarial attacks. Our strategy involves (1) training a diverse ensemble of DL models, (2) applying randomized affine transformations to inputs, and (3) randomizing output decisions. We report a primary lesson that this strategy is ineffective against a white-box adversary, which could completely circumvent output randomization using a deterministic surrogate. We reveal how our ensemble models lacked the diversity necessary for effective MTD. We also evaluate our MTD strategy against a black-box adversary employing an ensemble surrogate model. We conclude that an MTD strategy against black-box adversarial attacks crucially depends on lack of transferability between models.
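The inference-time side of such an MTD strategy can be sketched roughly as follows; the toy models, transform ranges, and tie-breaking rule are illustrative assumptions, not the authors' configuration:

```python
# Rough sketch of MTD-style randomized inference: pick a random ensemble member,
# apply a random affine transform to the input, and randomize near-tied decisions.
import random
import torch
import torch.nn as nn
import torchvision.transforms as T

torch.manual_seed(0); random.seed(0)
ensemble = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)) for _ in range(3)]
rand_affine = T.RandomAffine(degrees=10, translate=(0.1, 0.1), scale=(0.9, 1.1))

def mtd_predict(x):
    model = random.choice(ensemble)                  # randomized model selection
    logits = model(rand_affine(x))                   # randomized input transform
    top2 = logits.topk(2, dim=1)
    close = (top2.values[:, 0] - top2.values[:, 1]) < 0.1
    pick = torch.where(close & (torch.rand(len(x)) < 0.5),
                       top2.indices[:, 1], top2.indices[:, 0])
    return pick                                      # randomized output decision

x = torch.rand(4, 3, 32, 32)
print(mtd_predict(x))
```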
2022-02-09
Cinà, Antonio Emanuele, Vascon, Sebastiano, Demontis, Ambra, Biggio, Battista, Roli, Fabio, Pelillo, Marcello.  2021.  The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers? 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
One of the most concerning threats for modern AI systems is data poisoning, where the attacker injects maliciously crafted training data to corrupt the system's behavior at test time. Availability poisoning is a particularly worrisome subset of poisoning attacks where the attacker aims to cause a Denial-of-Service (DoS) attack. However, the state-of-the-art algorithms are computationally expensive because they try to solve a complex bi-level optimization problem (the "hammer"). We observed that in particular conditions, namely, where the target model is linear (the "nut"), the usage of computationally costly procedures can be avoided. We propose a counter-intuitive but efficient heuristic that allows contaminating the training set such that the target system's performance is highly compromised. We further suggest a re-parameterization trick to decrease the number of variables to be optimized. Finally, we demonstrate that, under the considered settings, our framework achieves comparable, or even better, performances in terms of the attacker's objective while being significantly more computationally efficient.
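A small sketch of the kind of cheap availability-poisoning heuristic at issue, using label-flipped copies of confidently classified points against a linear model; this is a generic illustration, not the authors' specific heuristic or re-parameterization trick:

```python
# Sketch of a cheap availability-poisoning heuristic against a linear model:
# inject label-flipped copies of points far from the boundary and watch test
# accuracy drop. Synthetic data and the 20% budget are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("clean accuracy:", round(clean.score(X_te, y_te), 3))

# Heuristic: pick confidently classified points and re-inject them with flipped labels.
margins = np.abs(clean.decision_function(X_tr))
idx = np.argsort(-margins)[: int(0.2 * len(X_tr))]     # 20% poisoning budget
X_pois = np.vstack([X_tr, X_tr[idx]])
y_pois = np.concatenate([y_tr, 1 - y_tr[idx]])

poisoned = LogisticRegression(max_iter=1000).fit(X_pois, y_pois)
print("poisoned accuracy:", round(poisoned.score(X_te, y_te), 3))
```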
2022-02-07
Catak, Evren, Catak, Ferhat Ozgur, Moldsvor, Arild.  2021.  Adversarial Machine Learning Security Problems for 6G: mmWave Beam Prediction Use-Case. 2021 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom). :1–6.
6G is the next generation of communication systems. In recent years, machine learning algorithms have been applied widely in various fields such as healthcare, transportation, and autonomous vehicles. Predictive algorithms will also be used for 6G problems. With the rapid development of deep learning techniques, it is critical to take security concerns into account when applying these algorithms. While machine learning offers significant advantages for 6G, the security of AI models is normally ignored. Due to the many applications in the real world, security is a vital part of these algorithms. This paper proposes a mitigation method, based on adversarial learning, for adversarial attacks against proposed 6G machine learning models for millimeter-wave (mmWave) beam prediction. The main idea behind adversarial attacks against machine learning models is to produce faulty results by manipulating trained deep learning models for 6G mmWave beam prediction applications. We also present the adversarial learning mitigation method’s performance for 6G security in the millimeter-wave beam prediction application under a fast gradient sign method attack. The mean square errors of the defended model under attack are very close to those of the undefended model without attack.
2021-12-20
Janapriya, N., Anuradha, K., Srilakshmi, V..  2021.  Adversarial Deep Learning Models With Multiple Adversaries. 2021 Third International Conference on Inventive Research in Computing Applications (ICIRCA). :522–525.
Adversarial machine learning algorithms deal with adversarial example generation, producing bogus data with the ability to fool any machine learning model. As the word implies, an adversary is an opponent. In order to strengthen machine learning models, this work discusses the weaknesses of machine learning models and how misclassification can be induced during the learning cycle. Existing methods, such as creating adversarial models and devising powerful ML computations, frequently ignore semantics and the overall structure of the ML pipeline. This research work develops an adversarial learning algorithm that considers a coordinated representation of all the characteristics, with Convolutional Neural Networks (CNNs) treated explicitly. The algorithm expresses minimal adjustments, applied to the data distribution over positive and negative class labels, such that the resulting data flow is misclassified by the CNN. The final results suggest a combination of game theory and evolutionary computing that performs well in securing deep learning models against exploited weaknesses, which are reproduced as attack scenarios against various adversaries.
2021-11-29
Yilmaz, Ibrahim, Siraj, Ambareen, Ulybyshev, Denis.  2020.  Improving DGA-Based Malicious Domain Classifiers for Malware Defense with Adversarial Machine Learning. 2020 IEEE 4th Conference on Information Communication Technology (CICT). :1–6.
Domain Generation Algorithms (DGAs) are used by adversaries to establish Command and Control (C&C) server communications during cyber attacks. Blacklists of known/identified C&C domains are used as one of the defense mechanisms. However, static blacklists generated by signature-based approaches can neither keep up nor detect never-seen-before malicious domain names. To address this weakness, we applied a DGA-based malicious domain classifier using the Long Short-Term Memory (LSTM) method with a novel feature engineering technique. Our model's performance shows a greater accuracy compared to a previously reported model. Additionally, we propose a new adversarial machine learning-based method to generate never-before-seen malware-related domain families. We augment the training dataset with new samples to make the training of the models more effective in detecting never-before-seen malicious domain names. To protect blacklists of malicious domain names against adversarial access and modifications, we devise secure data containers to store and transfer blacklists.
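The classifier side of such a pipeline can be sketched as a character-level LSTM; this is a bare-bones skeleton with toy domains and arbitrary hyperparameters, not the authors' model or feature engineering:

```python
# Bare-bones character-level LSTM for benign-vs-DGA domain classification.
# Toy data and hyperparameters are placeholders for the pipeline described above.
import torch
import torch.nn as nn

torch.manual_seed(0)
CHARS = "abcdefghijklmnopqrstuvwxyz0123456789-."
IDX = {c: i + 1 for i, c in enumerate(CHARS)}        # 0 is reserved for padding

def encode(domain, max_len=30):
    ids = [IDX.get(c, 0) for c in domain.lower()[:max_len]]
    return ids + [0] * (max_len - len(ids))

class DomainLSTM(nn.Module):
    def __init__(self, vocab=len(CHARS) + 1, emb=16, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb, padding_idx=0)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)

    def forward(self, x):
        _, (h, _) = self.lstm(self.emb(x))           # final hidden state
        return self.out(h[-1])

# Tiny toy dataset: a few benign names vs. random-looking DGA-style names.
benign = ["google.com", "wikipedia.org", "example.net"]
dga = ["xk2jq9v7a.com", "qwpz81lmno.net", "a9f3k1zzy.org"]
X = torch.tensor([encode(d) for d in benign + dga])
y = torch.tensor([0] * len(benign) + [1] * len(dga))

model = DomainLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(50):                                   # tiny training loop
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()
print("predictions:", model(X).argmax(dim=1).tolist())
```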
2021-11-08
Wilhjelm, Carl, Younis, Awad A..  2020.  A Threat Analysis Methodology for Security Requirements Elicitation in Machine Learning Based Systems. 2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C). :426–433.
Machine learning (ML) models are now a key component for many applications. However, machine learning based systems (MLBSs), those systems that incorporate them, have proven vulnerable to various new attacks as a result. Currently, there exists no systematic process for eliciting security requirements for MLBSs that incorporates the identification of adversarial machine learning (AML) threats with those of a traditional non-MLBS. In this research study, we explore the applicability of traditional threat modeling and existing attack libraries in addressing MLBS security in the requirements phase. Using an example MLBS, we examined the applicability of 1) DFD and STRIDE in enumerating AML threats; 2) Microsoft SDL AI/ML Bug Bar in ranking the impact of the identified threats; and 3) the Microsoft AML attack library in eliciting threat mitigations to MLBSs. Such a method has the potential to assist team members, even with only domain specific knowledge, to collaboratively mitigate MLBS threats.
2021-10-12
Niazazari, Iman, Livani, Hanif.  2020.  Attack on Grid Event Cause Analysis: An Adversarial Machine Learning Approach. 2020 IEEE Power Energy Society Innovative Smart Grid Technologies Conference (ISGT). :1–5.
With the ever-increasing reliance on data for data-driven applications in power grids, such as event cause analysis, the authenticity of data streams has become crucially important. The data can be prone to adversarial stealthy attacks aiming to manipulate the data such that residual-based bad data detectors cannot detect them, and the perception of system operators or event classifiers changes about the actual event. This paper investigates the impact of adversarial attacks on convolutional neural network-based event cause analysis frameworks. We have successfully verified the ability of adversaries to maliciously misclassify events through stealthy data manipulations. The vulnerability assessment is studied with respect to the number of compromised measurements. Furthermore, a defense mechanism to robustify the performance of the event cause analysis is proposed. The effectiveness of adversarial attacks on changing the output of the framework is studied using the data generated by real-time digital simulator (RTDS) under different scenarios such as type of attacks and level of access to data.
2021-08-31
Di Noia, Tommaso, Malitesta, Daniele, Merra, Felice Antonio.  2020.  TAaMR: Targeted Adversarial Attack against Multimedia Recommender Systems. 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :1–8.
Deep learning classifiers are hugely vulnerable to adversarial examples, and their existence has raised cybersecurity concerns in many tasks with an emphasis on malware detection, computer vision, and speech recognition. While there is a considerable effort to investigate attacks and defense strategies in these tasks, only limited work explores the influence of targeted attacks on input data (e.g., images, textual descriptions, audio) used in multimedia recommender systems (MR). In this work, we examine the consequences of applying targeted adversarial attacks against the product images of a visual-based MR. We propose a novel adversarial attack approach, called Targeted Adversarial Attack against Multimedia Recommender Systems (TAaMR), to investigate the modification of MR behavior when the images of a category of low recommended products (e.g., socks) are perturbed so that the deep neural classifier misclassifies them towards the class of more recommended products (e.g., running shoes), using slight, barely perceptible image alterations. We explore the TAaMR approach studying the effect of two targeted adversarial attacks (i.e., FGSM and PGD) against input pictures of two state-of-the-art MR (i.e., VBPR and AMR). Extensive experiments on two real-world recommender fashion datasets confirmed the effectiveness of TAaMR in terms of changes to the recommendation lists while preserving the original human judgment of the perturbed images.
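The targeted PGD step used in such attacks can be sketched as follows; the toy classifier, epsilon, step size, and step count are assumptions, and this is not the TAaMR pipeline itself:

```python
# Minimal targeted PGD sketch: iteratively nudge an image toward a chosen
# target class inside an L-infinity ball. Toy classifier and parameters are
# stand-ins for the product-image setting described above.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 100))   # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

def targeted_pgd(x, target, eps=8 / 255, alpha=2 / 255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Descend on the target-class loss, then project back into the eps-ball.
        x_adv = (x_adv - alpha * grad.sign()).detach()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv

x = torch.rand(1, 3, 64, 64)                 # e.g. a low-recommended product image
target = torch.tensor([7])                   # e.g. the class of a popular product
x_adv = targeted_pgd(x, target)
print("perturbation L-inf:", (x_adv - x).abs().max().item())
```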
2021-07-27
Biswal, Milan, Misra, Satyajayant, Tayeen, Abu S..  2020.  Black Box Attack on Machine Learning Assisted Wide Area Monitoring and Protection Systems. 2020 IEEE Power Energy Society Innovative Smart Grid Technologies Conference (ISGT). :1–5.
The applications for wide area monitoring, protection, and control systems (WAMPC) at the control center help provide resilient, efficient, and secure operation of the transmission system of the smart grid. The increased proliferation of phasor measurement units (PMUs) in this space has inspired many prudent applications to assist in the process of decision making in the control centers. Machine learning (ML) based decision support systems have become viable with the availability of abundant high-resolution wide area operational PMU data. We propose a deep neural network (DNN) based supervisory protection and event diagnosis system and demonstrate that it works with a very high degree of confidence. The system introduces a supervisory layer that processes the data streams collected from PMUs and detects disturbances in the power systems that may have gone unnoticed by the local monitoring and protection system. Then, we investigate how the insights of this ML based supervisory control can be compromised by crafting adversaries that corrupt the PMU data via minimal, coordinated manipulation of identified spatio-temporal regions in the multidimensional PMU data so that the DNN classifier makes wrong event predictions.
2021-06-30
DelVecchio, Matthew, Flowers, Bryse, Headley, William C..  2020.  Effects of Forward Error Correction on Communications Aware Evasion Attacks. 2020 IEEE 31st Annual International Symposium on Personal, Indoor and Mobile Radio Communications. :1–7.
Recent work has shown the impact of adversarial machine learning on deep neural networks (DNNs) developed for Radio Frequency Machine Learning (RFML) applications. While these attacks have been shown to be successful in disrupting the performance of an eavesdropper, they fail to fully support the primary goal of successful intended communication. To remedy this, a communications-aware attack framework was recently developed that allows for a more effective balance between the opposing goals of evasion and intended communication through the novel use of a DNN to intelligently create the adversarial communication signal. Given the near ubiquitous usage of forward error correction (FEC) coding in the majority of deployed systems to correct errors that arise, incorporating FEC in this framework is a natural extension of this prior work and will allow for improved performance in more adverse environments. This work therefore provides contributions to the framework through improved loss functions and design considerations to incorporate inherent knowledge of the usage of FEC codes within the transmitted signal. Performance analysis shows that FEC coding improves the communications aware adversarial attack even if no explicit knowledge of the coding scheme is assumed and allows for improved performance over the prior art in balancing the opposing goals of evasion and intended communications.
2021-06-24
Dang, Tran Khanh, Truong, Phat T. Tran, Tran, Pi To.  2020.  Data Poisoning Attack on Deep Neural Network and Some Defense Methods. 2020 International Conference on Advanced Computing and Applications (ACOMP). :15–22.
In recent years, Artificial Intelligence has disruptively changed information technology and software engineering with a proliferation of technologies and applications based on it. However, recent research shows that AI models in general, and Deep Learning models in particular (the greatest invention since sliced bread), are vulnerable to being hacked and can be misused for bad purposes. In this paper, we carry out a brief review of data poisoning attacks, one of the two recently emerging dangerous attack classes, and the state-of-the-art defense methods for this problem. Finally, we discuss current challenges and future developments.
2021-05-20
Maung, Maung, Pyone, April, Kiya, Hitoshi.  2020.  Encryption Inspired Adversarial Defense For Visual Classification. 2020 IEEE International Conference on Image Processing (ICIP). :1681—1685.
Conventional adversarial defenses reduce classification accuracy whether or not a model is under attack. Moreover, most image processing based defenses are defeated due to the problem of obfuscated gradients. In this paper, we propose a new adversarial defense, a defensive transform for both training and test images, inspired by perceptual image encryption methods. The proposed method utilizes block-wise pixel shuffling with a secret key. The experiments are carried out on both adaptive and non-adaptive maximum-norm bounded white-box attacks while considering obfuscated gradients. The results show that the proposed defense achieves high accuracy on clean images (91.55%) and on adversarial examples (89.66%) with a noise distance of 8/255 on the CIFAR-10 dataset. Thus, the proposed defense outperforms state-of-the-art adversarial defenses including latent adversarial training, adversarial training, and thermometer encoding.
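The core key-based, block-wise shuffling transform can be sketched roughly as below; the block size, key handling, and the reuse of one permutation per block are illustrative assumptions rather than the paper's exact scheme:

```python
# Sketch of a key-based block-wise pixel shuffling transform: within each
# block, pixel positions are permuted by a secret key; the same transform
# would be applied to both training and test images.
import numpy as np

def _apply_perm(img, perm, block):
    h, w, c = img.shape
    out = img.copy()
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = out[i:i + block, j:j + block].reshape(block * block, c)
            out[i:i + block, j:j + block] = patch[perm].reshape(block, block, c)
    return out

def blockwise_shuffle(img, key, block=4):
    perm = np.random.default_rng(key).permutation(block * block)  # keyed permutation
    return _apply_perm(img, perm, block)

def blockwise_unshuffle(img, key, block=4):
    perm = np.random.default_rng(key).permutation(block * block)
    return _apply_perm(img, np.argsort(perm), block)              # inverse permutation

# Example on a random 32x32 RGB image (CIFAR-10-sized), with a secret key.
img = np.random.default_rng(0).integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
enc = blockwise_shuffle(img, key=1234)
assert np.array_equal(blockwise_unshuffle(enc, key=1234), img)    # transform is invertible
```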
2021-03-04
Guo, H., Wang, Z., Wang, B., Li, X., Shila, D. M..  2020.  Fooling A Deep-Learning Based Gait Behavioral Biometric System. 2020 IEEE Security and Privacy Workshops (SPW). :221–227.
We leverage deep learning algorithms on various user behavioral information gathered from end-user devices to classify a subject of interest. In spite of the ability of these techniques to counter spoofing threats, they are vulnerable to adversarial learning attacks, where an attacker adds adversarial noise to the input samples to fool the classifier into false acceptance. Recently, a handful of mature techniques like Fast Gradient Sign Method (FGSM) have been proposed to aid white-box attacks, where an attacker has a complete knowledge of the machine learning model. On the contrary, we exploit a black-box attack to a behavioral biometric system based on gait patterns, by using FGSM and training a shadow model that mimics the target system. The attacker has limited knowledge on the target model and no knowledge of the real user being authenticated, but induces a false acceptance in authentication. Our goal is to understand the feasibility of a black-box attack and to what extent FGSM on shadow models would contribute to its success. Our results manifest that the performance of FGSM highly depends on the quality of the shadow model, which is in turn impacted by key factors including the number of queries allowed by the target system in order to train the shadow model. Our experimentation results have revealed strong relationships between the shadow model and FGSM performance, as well as the effect of the number of FGSM iterations used to create an attack instance. These insights also shed light on deep-learning algorithms' model shareability that can be exploited to launch a successful attack.
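The shadow-model recipe this entry describes can be sketched generically as follows; the toy target system, synthetic "gait" features, query budget, architecture, and FGSM step size are all assumptions, not the paper's actual setup:

```python
# Sketch of the black-box recipe: query the target system to label synthetic
# gait-like feature vectors, train a shadow model on the responses, then run
# FGSM on the shadow model to induce a false acceptance. Toy stand-ins only.
import torch
import torch.nn as nn

torch.manual_seed(0)

def target_system(x):                      # black box: returns accept(1)/reject(0)
    return (x.sum(dim=1) > 0).long()

# 1) Spend a limited query budget collecting (input, response) pairs.
queries = torch.randn(500, 20)             # 20-d gait feature vectors (synthetic)
labels = target_system(queries)

# 2) Train a shadow model that mimics the target's decisions.
shadow = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt, loss_fn = torch.optim.Adam(shadow.parameters(), lr=1e-2), nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss_fn(shadow(queries), labels).backward()
    opt.step()

# 3) FGSM on the shadow model, aiming for false acceptance of a rejected sample.
x = torch.randn(1, 20)
x = x - (x.sum() + 1.0) / x.numel()         # make sure the target initially rejects it
x_adv = x.clone().requires_grad_(True)
loss = loss_fn(shadow(x_adv), torch.tensor([1]))   # push toward "accept"
loss.backward()
x_adv = (x_adv - 0.5 * x_adv.grad.sign()).detach()
print("target before/after:", target_system(x).item(), target_system(x_adv).item())
```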