Title | Black Box Explanation Guided Decision-Based Adversarial Attacks |
Publication Type | Conference Paper |
Year of Publication | 2019 |
Authors | Jing, Huiyun, Meng, Chengrui, He, Xin, Wei, Wei |
Conference Name | 2019 IEEE 5th International Conference on Computer and Communications (ICCC) |
Keywords | artificial intelligence, artificial intelligence security, attack efficiency, black box explanation, black box explanation guided decision-based adversarial attacks, Black Box Security, boundary attack, Cats, Computational modeling, Constraint optimization, cryptography, decision-based adversarial attacks, decision-based black-box adversarial attack, derivative-free and constraint optimization problem, imperceptible adversarial example, imperceptive adversarial perturbation, learning (artificial intelligence), Logistics, neural nets, Neural networks, optimisation, performing decision-based black-box attacks, Perturbation methods, search problems, targeted deep neural networks, telecommunication security, Training data |
Abstract | Adversarial attacks are a prominent research area in artificial intelligence security. Decision-based black-box adversarial attacks are well suited to real-world scenarios, where only the final decisions of the targeted deep neural networks are accessible. However, because no guidance is available for searching for an imperceptible adversarial perturbation, boundary attack, one of the best-performing decision-based black-box attacks, must carry out a computationally expensive search. To improve attack efficiency, we propose a novel black box explanation guided decision-based black-box adversarial attack. First, the problem of decision-based adversarial attacks is modeled as a derivative-free, constrained optimization problem. To solve this optimization problem, we propose a black box explanation guided constrained random search method that finds an imperceptible adversarial example more quickly. The insights into the targeted deep neural networks uncovered by the black box explanation are fully exploited to accelerate the computationally expensive random search. Experimental results demonstrate that our proposed attack improves attack efficiency by 64% compared with boundary attack. |
DOI | 10.1109/ICCC47050.2019.9064243 |
Citation Key | jing_black_2019 |
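To illustrate the setting the abstract describes, the sketch below shows a generic decision-based (hard-label) random-search attack in the spirit of boundary attack. This is not the authors' method: the toy threshold "model", the function names, and all parameters are assumptions for illustration; the paper's contribution is to guide the random search direction with a black box explanation, which is omitted here.

```python
import numpy as np

def toy_decision(x):
    """Toy hard-label 'model' (illustrative stand-in, not from the paper):
    returns class 1 if the mean feature value exceeds 0.5, else class 0.
    Only this final decision is visible to the attacker."""
    return int(x.mean() > 0.5)

def decision_based_attack(decide, x_orig, x_adv_start, steps=200,
                          step_size=0.05, rng=None):
    """Unguided random-search sketch of a decision-based attack.

    decide      : black-box hard-label function
    x_orig      : the original input we want to stay close to
    x_adv_start : any input already classified differently from x_orig
    Walks the adversarial point toward the original while keeping it
    adversarial, shrinking the perturbation over time.
    """
    rng = rng or np.random.default_rng(0)
    orig_label = decide(x_orig)
    x_adv = x_adv_start.copy()
    assert decide(x_adv) != orig_label, "starting point must be adversarial"
    for _ in range(steps):
        # Contraction step toward the original input.
        candidate = x_adv + step_size * (x_orig - x_adv)
        # Small random perturbation (plain random search; the paper biases
        # this direction using the black box explanation instead).
        candidate = candidate + step_size * rng.normal(scale=0.01,
                                                       size=x_adv.shape)
        candidate = np.clip(candidate, 0.0, 1.0)
        # Accept the candidate only if it is still misclassified.
        if decide(candidate) != orig_label:
            x_adv = candidate
    return x_adv

# Usage: start from an all-ones input (class 1) and move toward an
# all-zeros original (class 0); the result stays adversarial but closer.
x_orig = np.zeros(16)
x_start = np.ones(16)
x_adv = decision_based_attack(toy_decision, x_orig, x_start)
```

Rejection sampling against the hard-label oracle is what makes the search expensive; the paper's explanation-guided constraint is one way to spend fewer queries on rejected candidates.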