Title | SPA: An Efficient Adversarial Attack on Spiking Neural Networks using Spike Probabilistic |
Publication Type | Conference Paper |
Year of Publication | 2022 |
Authors | Lin, Xuanwei, Dong, Chen, Liu, Ximeng, Zhang, Yuanyuan |
Conference Name | 2022 22nd IEEE International Symposium on Cluster, Cloud and Internet Computing (CCGrid) |
Date Published | May |
Keywords | adversarial attacks, black-box, composability, Degradation, Linear programming, Medical diagnosis, Metrics, Neural networks, perturbation, Perturbation methods, Probabilistic logic, pubcrawl, Resiliency, security, SNNs, Spiking Neural Networks, transferability, White Box Security, white-box |
Abstract | In the coming 6G era, spiking neural networks (SNNs) can be powerful processing tools in various areas due to their strong artificial intelligence (AI) processing capabilities, such as biometric recognition, AI robotics, autonomous driving, and healthcare. However, within cyber-physical systems (CPS), SNNs are surprisingly vulnerable to adversarial examples generated from benign samples with human-imperceptible noise, which can lead to serious consequences such as face recognition anomalies, loss of autonomous driving control, and wrong medical diagnoses. Only by fully understanding the principles behind adversarial attacks and adversarial samples can we defend against them. Most existing adversarial attacks cause severe accuracy degradation in trained SNNs. Still, the critical issue is that they generate adversarial samples only by randomly adding, deleting, and flipping spike trains, making them easy to identify by filters, or even by human eyes. Besides, attack performance and speed can be improved further. Hence, the Spike Probabilistic Attack (SPA) is presented in this paper, aiming to generate adversarial samples with smaller perturbations, greater model accuracy degradation, and faster iteration. SPA uses Poisson coding to generate spikes as probabilities, directly converting input data into spikes for faster speed and generating uniformly distributed perturbations for better attack performance. Moreover, an objective function is constructed to minimize perturbations while maintaining the attack success rate, and convergence is accelerated by adjusting its parameters. Experiments are conducted in both white-box and black-box settings to evaluate the merits of SPA. Experimental results show that the model's accuracy under white-box attack decreases by 9.25%-31.15% more than under other attacks, and the average success rate is 74.87% in the black-box setting. The experimental results indicate that SPA has better attack performance than other existing attacks in the white-box setting and better transferability in the black-box setting. |
DOI | 10.1109/CCGrid54584.2022.00046 |
Citation Key | lin_spa_2022 |