Biblio

Filters: Keyword is black box attack
2021-07-27
Biswal, Milan, Misra, Satyajayant, Tayeen, Abu S.  2020.  Black Box Attack on Machine Learning Assisted Wide Area Monitoring and Protection Systems. 2020 IEEE Power & Energy Society Innovative Smart Grid Technologies Conference (ISGT). :1–5.
Wide area monitoring, protection, and control (WAMPC) applications at the control center help provide resilient, efficient, and secure operation of the smart grid's transmission system. The growing proliferation of phasor measurement units (PMUs) in this space has inspired many applications that assist decision making in control centers. Machine learning (ML) based decision support systems have become viable with the availability of abundant high-resolution wide-area operational PMU data. We propose a deep neural network (DNN) based supervisory protection and event diagnosis system and demonstrate that it works with a very high degree of confidence. The system introduces a supervisory layer that processes the data streams collected from PMUs and detects disturbances in the power system that may have gone unnoticed by the local monitoring and protection system. We then investigate how the insights of this ML-based supervisory control can be compromised: we craft adversaries that identify spatio-temporal regions in the multidimensional PMU data and corrupt them via minimal coordinated manipulation, so that the DNN classifier makes wrong event predictions.
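
A minimal sketch of the attack pattern this abstract describes, under stated assumptions: a query-only (black-box) random search that perturbs a small spatio-temporal patch of a PMU measurement window until a stand-in classifier flips its event prediction. The model, window shape, and patch routine are all illustrative, not the authors' implementation.

```python
# Illustrative sketch, not the authors' code: a query-only random search
# that perturbs a small spatio-temporal patch of a PMU data window until
# a classifier changes its event prediction.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
np.random.seed(0)

N_PMUS, T = 8, 64                # hypothetical: 8 PMU channels, 64 time steps
model = nn.Sequential(           # stand-in for the trained supervisory DNN
    nn.Flatten(), nn.Linear(N_PMUS * T, 32), nn.ReLU(), nn.Linear(32, 4)
)

def predict(x):
    """Black-box query: the attacker only sees the predicted event class."""
    with torch.no_grad():
        return model(torch.tensor(x, dtype=torch.float32)[None]).argmax().item()

def patch_attack(x, pmus, t0, t1, eps=0.05, queries=500):
    """Randomly perturb only rows `pmus` over the time slice [t0, t1)."""
    y0, best = predict(x), x.copy()
    for _ in range(queries):
        cand = best.copy()
        cand[np.ix_(pmus, list(range(t0, t1)))] += eps * np.random.uniform(
            -1, 1, (len(pmus), t1 - t0))
        if predict(cand) != y0:  # minimal, localized change flips the label
            return cand
        best = cand              # keep accumulating small perturbations
    return None                  # no flip within the query budget

x = np.random.randn(N_PMUS, T)   # one multidimensional measurement window
adv = patch_attack(x, pmus=[2, 3], t0=20, t1=30)
print("prediction flipped" if adv is not None else "attack failed")
```
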
Xiao, Wenli, Jiang, Hao, Xia, Song.  2020.  A New Black Box Attack Generating Adversarial Examples Based on Reinforcement Learning. 2020 Information Communication Technologies Conference (ICTC). :141–146.
Machine learning can be misled by adversarial examples, which are formed by making small changes to the original data. Many methods exist for producing adversarial examples, but none of them simultaneously handles non-differentiable models, reduces the amount of computation, and shortens the sample generation time. In this paper, we propose a new black box attack that generates adversarial examples based on reinforcement learning. By using a deep Q-learning network, we can train the substitute model and generate adversarial examples at the same time. Experimental results show that this method needs only 7.7 ms to produce an adversarial example, addressing the problems of low efficiency, heavy computation, and inapplicability to non-differentiable models.
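
A toy sketch of the general idea, not the paper's method: an agent learns which feature of an input to perturb, rewarded by the drop in a black-box model's confidence in the original class. The environment, the stand-in scorer, and the reward shaping are all assumptions.

```python
# Toy sketch (assumptions throughout): Q-learning over "which feature to
# perturb, and in which direction" actions, rewarded by the drop in a
# black-box model's confidence in the original class.
import numpy as np

rng = np.random.default_rng(0)
D = 16                                    # hypothetical input dimension
w = rng.normal(size=D)                    # hidden weights of the black box

def confidence(x):                        # black-box query only
    return 1.0 / (1.0 + np.exp(-x @ w))

def attack(x, eps=0.5, episodes=20, steps=20, lr=0.5, gamma=0.9):
    Q = np.zeros(2 * D)                   # one Q-value per (feature, sign)
    best = x.copy()
    for _ in range(episodes):
        cur = x.copy()
        for _ in range(steps):
            # epsilon-greedy action selection
            a = int(rng.integers(2 * D)) if rng.random() < 0.2 else int(Q.argmax())
            feat, sign = a % D, (1.0 if a < D else -1.0)
            before = confidence(cur)
            cur[feat] += sign * eps
            r = before - confidence(cur)  # reward: drop in confidence
            Q[a] += lr * (r + gamma * Q.max() - Q[a])
            if confidence(cur) < confidence(best):
                best = cur.copy()
            if confidence(cur) < 0.5:     # prediction flipped: done
                return cur
    return best

x = rng.normal(size=D) + 0.5 * np.sign(w)  # start confidently in class 1
adv = attack(x)
print("confidence before:", round(confidence(x), 3))
print("confidence after :", round(confidence(adv), 3))
```
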
2021-03-01
Kuppa, A., Le-Khac, N.-A.  2020.  Black Box Attacks on Explainable Artificial Intelligence (XAI) Methods in Cyber Security. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.

The cybersecurity community is slowly leveraging Machine Learning (ML) to combat ever-evolving threats. One of the biggest drivers for successful adoption of these models is how well domain experts and users are able to understand and trust their functionality. As these black-box models are employed to make important predictions, stakeholders increasingly demand transparency and explainability. Explanations supporting the output of ML models are crucial in cyber security, where experts require far more information from the model than a simple binary output for their analysis. Recent approaches in the literature have focused on three different areas: (a) creating and improving explainability methods that help users better understand the internal workings of ML models and their outputs; (b) attacks on interpreters in the white box setting; and (c) defining the exact properties and metrics of the explanations generated by models. However, they have not covered the security properties and threat models relevant to the cybersecurity domain, nor attacks on explainable models in black box settings. In this paper, we bridge this gap by proposing a taxonomy for Explainable Artificial Intelligence (XAI) methods, covering various security properties and threat models relevant to the cyber security domain. We design a novel black box attack for analyzing the consistency, correctness, and confidence security properties of gradient-based XAI methods. We validate our proposed system on three security-relevant datasets and models, and demonstrate that the method achieves the attacker's goal of misleading both the classifier and the explanation report, or only the explainability method without affecting the classifier output. Our evaluation of the proposed approach shows promising results and can help in designing secure and robust XAI methods.
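A minimal sketch of one of the security properties being attacked here (consistency of a gradient-based explanation), under stated assumptions: the attacker only queries an oracle that returns a prediction plus a saliency explanation, and searches for a perturbation that leaves the label unchanged while shifting which features the explanation ranks highest. The model, oracle, and search loop are illustrative, not the paper's system.

```python
# Illustrative sketch (assumptions throughout): query-only search for a
# perturbation that leaves the classifier's output unchanged while changing
# which features a gradient-based explanation ranks as most important.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
np.random.seed(0)
D = 20
model = nn.Sequential(nn.Linear(D, 16), nn.Tanh(), nn.Linear(16, 2))

def query(x_np):
    """Black-box oracle: returns (predicted class, saliency explanation)."""
    x = torch.tensor(x_np, dtype=torch.float32, requires_grad=True)
    out = model(x)
    out[out.argmax()].backward()          # gradient w.r.t. the input
    return int(out.argmax()), x.grad.abs().detach().numpy()

def topk(expl, k=3):
    return set(np.argsort(expl)[-k:])     # the k most "important" features

x = np.random.randn(D)
y0, e0 = query(x)
best, best_overlap = x, len(topk(e0))
for _ in range(2000):                     # random-search attack loop
    cand = best + 0.05 * np.random.randn(D)
    y, e = query(cand)
    overlap = len(topk(e) & topk(e0))
    if y == y0 and overlap < best_overlap:  # same label, explanation drifts
        best, best_overlap = cand, overlap

print("top-3 features before:", sorted(topk(e0)))
print("top-3 features after :", sorted(topk(query(best)[1])))
print("label unchanged:", query(best)[0] == y0)
```
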

2019-01-16
Gao, J., Lanchantin, J., Soffa, M. L., Qi, Y.  2018.  Black-Box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers. 2018 IEEE Security and Privacy Workshops (SPW). :50–56.

Although various techniques have been proposed to generate adversarial samples for white-box attacks on text, little attention has been paid to a black-box attack, which is a more realistic scenario. In this paper, we present a novel algorithm, DeepWordBug, to effectively generate small text perturbations in a black-box setting that forces a deep-learning classifier to misclassify a text input. We develop novel scoring strategies to find the most important words to modify such that the deep classifier makes a wrong prediction. Simple character-level transformations are applied to the highest-ranked words in order to minimize the edit distance of the perturbation. We evaluated DeepWordBug on two real-world text datasets: Enron spam emails and IMDB movie reviews. Our experimental results indicate that DeepWordBug can reduce the classification accuracy from 99% to 40% on Enron and from 87% to 26% on IMDB. Our results strongly demonstrate that the generated adversarial sequences from a deep-learning model can similarly evade other deep models.
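
A minimal DeepWordBug-style sketch of the two stages the abstract describes, scoring words by querying the model and then applying a small character-level transformation to the top-ranked ones. The toy "spam" classifier, the deletion-based scorer, and the adjacent-character swap are illustrative assumptions, not the paper's code.

```python
# Minimal DeepWordBug-style sketch (the classifier, the scoring strategy,
# and the swap transform here are illustrative stand-ins, not the paper's).
import random

random.seed(0)
BAD = {"winner", "free", "prize", "cash"}      # toy "spam" vocabulary

def spam_score(text):                          # stand-in black-box classifier
    words = text.lower().split()
    return sum(w in BAD for w in words) / max(len(words), 1)

def word_importance(words, i):
    """Score word i by the confidence drop when it is removed."""
    without = " ".join(words[:i] + words[i + 1:])
    return spam_score(" ".join(words)) - spam_score(without)

def char_swap(word):
    """Swap two adjacent characters to keep the edit distance small."""
    if len(word) < 3:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def deepwordbug(text, k=2):
    words = text.split()
    ranked = sorted(range(len(words)), key=lambda i: -word_importance(words, i))
    for i in ranked[:k]:                       # perturb only the top-k words
        words[i] = char_swap(words[i])
    return " ".join(words)

msg = "claim your free cash prize now"
adv = deepwordbug(msg)
print(msg, "->", adv)
print("spam score:", spam_score(msg), "->", spam_score(adv))
```
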