Biblio

Filters: Author is Zhang, Junjian
Zhang, Junjian, Tan, Hao, Deng, Binyue, Hu, Jiacen, Zhu, Dong, Huang, Linyi, Gu, Zhaoquan.  2022.  NMI-FGSM-Tri: An Efficient and Targeted Method for Generating Adversarial Examples for Speaker Recognition. 2022 7th IEEE International Conference on Data Science in Cyberspace (DSC). 167–174.
Most existing deep neural networks (DNNs) are opaque and fragile: they can be easily deceived by carefully designed adversarial examples carrying tiny, imperceptible noise. This allows attackers to cause serious consequences in many DNN-assisted scenarios without human perception. In the field of speaker recognition, attacks on speaker recognition systems are relatively mature; most works focus on white-box attacks, which assume the internal information of the DNN is obtainable, and only a few study gray-box attacks. In this paper, we study black-box attacks on speaker recognition systems, which are applicable in the real world since no knowledge of the system's internals is required. By combining the ideas of transfer-based and query-based attacks, our proposed method, NMI-FGSM-Tri, achieves the targeted goal of misleading the system into recognizing arbitrary audio as a registered speaker. Specifically, our method combines the Nesterov accelerated gradient (NAG), an ensemble attack, and a restart trigger to generate adversarial audio that attacks black-box DNNs effectively. The experimental results show that the proposed method outperforms existing methods, and its attack success rate reaches as high as 94.8% even when only one query is allowed.
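
The abstract names three ingredients: Nesterov accelerated gradient (NAG), an ensemble attack, and a restart trigger. The sketch below illustrates only the first of these, a targeted iterative FGSM step with Nesterov momentum, as that family of attacks is commonly formulated in the literature. It is not the authors' NMI-FGSM-Tri implementation: `targeted_ni_fgsm`, `model`, `audio`, `target_id`, and the default hyperparameters are all hypothetical placeholders assumed for illustration.

```python
# Minimal sketch of a targeted iterative FGSM step with Nesterov
# momentum (the NAG ingredient the abstract names). This is NOT the
# paper's NMI-FGSM-Tri pipeline: the ensemble attack and the restart
# trigger are omitted, and all names/defaults here are hypothetical.
import torch
import torch.nn.functional as F

def targeted_ni_fgsm(model, audio, target_id, eps=0.002, steps=10, mu=1.0):
    """Perturb `audio` (waveform tensor) so that `model` (assumed to
    return logits over enrolled speakers) predicts `target_id`
    (a LongTensor of shape (N,)), under an L-infinity budget `eps`."""
    alpha = eps / steps              # per-step size
    x = audio.clone().detach()
    g = torch.zeros_like(x)          # accumulated momentum
    for _ in range(steps):
        # Nesterov look-ahead: take the gradient at x + mu*alpha*g
        x_nes = (x + mu * alpha * g).detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_nes), target_id)
        grad, = torch.autograd.grad(loss, x_nes)
        # L1-style normalization, as in momentum-based FGSM variants
        g = mu * g + grad / grad.abs().mean().clamp_min(1e-12)
        # targeted attack: step AGAINST the loss gradient
        x = x - alpha * g.sign()
        # project back into the eps-ball and the valid waveform range
        x = (audio + (x - audio).clamp(-eps, eps)).clamp(-1.0, 1.0)
    return x.detach()
```

Per the abstract, a full reproduction would additionally average gradients over an ensemble of surrogate models and use the restart trigger when iterating; those parts are left out of this sketch.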