Biblio
Filters: Keyword is boundary attack
Improved Adversarial Attack against Black-box Machine Learning Models. 2020 Chinese Automation Congress (CAC). pp. 5907–5912.
2020. The existence of adversarial samples calls into question the security of machine learning models in practical applications, especially under black-box adversarial attacks, which are closest to real application scenarios. Efficient search for black-box attack samples helps to train more robust models. We consider the setting in which the attacker can obtain nothing except the final predicted label. For this problem, the current state-of-the-art methods are the Boundary Attack (BA) and its variants, such as the Biased Boundary Attack (BBA); however, they still require a large number of queries and consume a lot of time. In this paper, we propose a novel method to address these shortcomings. First, we improve the algorithm for generating initial adversarial samples with smaller L2 distance. Second, we combine a swarm intelligence algorithm, Particle Swarm Optimization (PSO), with the Biased Boundary Attack and propose the PSO-BBA method. Finally, we run experiments on the ImageNet dataset and compare our algorithm with the baseline. The results show that: (1) our improved initial point selection algorithm effectively reduces the number of queries; (2) compared with the most advanced methods, our PSO-BBA method improves convergence speed while maintaining attack accuracy; (3) our method performs well on both targeted and untargeted attacks.
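For intuition only, the sketch below illustrates the general idea of using PSO in a decision-based (label-only) attack: particles are candidate images, the only model access is a hypothetical label-only oracle `predict_label`, and fitness is the L2 distance to the original image, counted only for candidates that remain adversarial. All names, hyperparameters, and the adversarial starting point `x_adv_init` are illustrative assumptions; this is not the authors' PSO-BBA code and omits BBA's biased sampling and the paper's improved initialization.

```python
# Minimal sketch (assumed names/hyperparameters) of a PSO-driven,
# decision-based attack with label-only model access.
import numpy as np

def pso_decision_attack(x_orig, x_adv_init, predict_label, target_label,
                        n_particles=10, n_iters=100, w=0.7, c1=1.5, c2=1.5):
    dim = x_orig.size
    # Particles start near a known adversarial image; keep the seed itself.
    particles = np.stack([x_adv_init.ravel() + 0.01 * np.random.randn(dim)
                          for _ in range(n_particles)])
    particles[0] = x_adv_init.ravel()
    velocities = np.zeros_like(particles)

    def fitness(flat):
        # Targeted variant: only candidates classified as target_label are valid;
        # valid candidates are scored by L2 distance to the original image.
        if predict_label(flat.reshape(x_orig.shape)) != target_label:
            return np.inf
        return np.linalg.norm(flat - x_orig.ravel())

    p_best = particles.copy()
    p_best_f = np.array([fitness(p) for p in particles])
    g_best = p_best[np.argmin(p_best_f)].copy()

    for _ in range(n_iters):
        r1, r2 = np.random.rand(2)
        # Standard PSO velocity update toward personal and global bests.
        velocities = (w * velocities
                      + c1 * r1 * (p_best - particles)
                      + c2 * r2 * (g_best - particles))
        particles = particles + velocities
        f = np.array([fitness(p) for p in particles])
        improved = f < p_best_f
        p_best[improved], p_best_f[improved] = particles[improved], f[improved]
        g_best = p_best[np.argmin(p_best_f)].copy()
    return g_best.reshape(x_orig.shape)
```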
Black Box Explanation Guided Decision-Based Adversarial Attacks. 2019 IEEE 5th International Conference on Computer and Communications (ICCC). pp. 1592–1596.
2019. Adversarial attacks have become a hot research field in artificial intelligence security. Decision-based black-box adversarial attacks are much more appropriate for real-world scenarios, where only the final decisions of the targeted deep neural networks are accessible. However, since no guidance is available for finding an imperceptible adversarial perturbation, the boundary attack, one of the best performing decision-based black-box attacks, carries out a computationally expensive search. To improve attack efficiency, we propose a novel black-box-explanation-guided decision-based adversarial attack. First, the problem of decision-based adversarial attacks is modeled as a derivative-free, constrained optimization problem. To solve this optimization problem, a black-box-explanation-guided constrained random search method is proposed to find an imperceptible adversarial example more quickly. The insights into the targeted deep neural network revealed by the black-box explanation are fully used to accelerate the computationally expensive random search. Experimental results demonstrate that our proposed attack improves attack efficiency by 64% compared with the boundary attack.
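As a rough illustration (not the paper's implementation), the following sketch shows one way an explanation could guide a decision-based random search: a hypothetical saliency map from any black-box explanation method biases which pixels are perturbed, and a candidate is kept only if it is still misclassified and closer (in L2) to the original image. `predict_label`, `saliency`, and all step sizes are assumed names and values.

```python
# Minimal sketch (assumed names/values) of an explanation-guided,
# decision-based random search with label-only model access.
import numpy as np

def guided_random_search(x_orig, x_adv, predict_label, orig_label,
                         saliency, n_steps=1000, step_size=0.05):
    # Turn the (assumed non-negative) saliency map into a sampling distribution.
    probs = saliency.ravel() / saliency.ravel().sum()
    best = x_adv.ravel().copy()
    best_dist = np.linalg.norm(best - x_orig.ravel())

    for _ in range(n_steps):
        candidate = best.copy()
        # Perturb a small, saliency-weighted subset of coordinates,
        # then pull the candidate slightly toward the original image.
        idx = np.random.choice(candidate.size, size=max(1, candidate.size // 100),
                               replace=False, p=probs)
        candidate[idx] += step_size * np.random.randn(idx.size)
        candidate += step_size * (x_orig.ravel() - candidate)

        dist = np.linalg.norm(candidate - x_orig.ravel())
        # Untargeted, decision-based constraint: prediction must still differ
        # from the original label, and the L2 distance must shrink.
        if dist < best_dist and predict_label(candidate.reshape(x_orig.shape)) != orig_label:
            best, best_dist = candidate, dist
    return best.reshape(x_orig.shape)
```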