A Practical Black-Box Attack Against Autonomous Speech Recognition Model

Title: A Practical Black-Box Attack Against Autonomous Speech Recognition Model
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Fan, Wenshu; Li, Hongwei; Jiang, Wenbo; Xu, Guowen; Lu, Rongxing
Conference Name: GLOBECOM 2020 - 2020 IEEE Global Communications Conference
Keywords: automatic speech recognition, black-box attack, composability, Conferences, Data models, differential evolution, Global communication, machine learning, machine learning algorithms, Metrics, Resiliency, security, Training
Abstract: With the wide application of machine learning (ML) technology, automatic speech recognition (ASR) has made great progress in recent years. Despite its great potential, ML-based ASR is vulnerable to various evasion attacks, which can compromise the security of applications built upon it. To date, most studies have focused on white-box attacks on ASR; almost no attention has been paid to black-box attacks in the audio domain, where attackers can only query the target model for output labels rather than probability vectors. In this paper, we propose an evasion attack against ASR in this setting, which is more feasible in realistic scenarios. Specifically, we first train a substitute model using data augmentation, which provides enough training samples while requiring only a small number of queries to the target model. Then, based on the substitute model, we apply the Differential Evolution (DE) algorithm to craft adversarial examples and mount a black-box attack against ASR models trained on the Speech Commands dataset. Extensive experiments show that our approach achieves untargeted attacks with an over 70% success rate while still maintaining the authenticity of the original audio.
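The abstract names Differential Evolution (DE) as the search procedure that crafts adversarial examples against the substitute model, but the record contains no code. As a purely illustrative sketch, the block below implements the generic DE/rand/1/bin minimizer; all names and hyperparameters (`pop_size`, `mutation`, `crossover`) are assumptions, not values from the paper. In the attack setting described above, `objective(x)` would score a candidate perturbation `x` by, e.g., the substitute model's confidence in the correct label plus a distortion penalty on the audio.

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=20, mutation=0.5,
                           crossover=0.7, generations=150, seed=0):
    """Minimal DE/rand/1/bin minimizer (illustrative, not the paper's code).

    objective: callable mapping a candidate vector to a scalar cost.
    bounds:    list of (low, high) pairs, one per dimension.
    """
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)

    # Random initial population within the bounds.
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([objective(x) for x in pop])

    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct individuals other than i.
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             size=3, replace=False)
            a, b, c = pop[idx]
            # Mutation: DE/rand/1, clipped back into the search box.
            mutant = np.clip(a + mutation * (b - c), lo, hi)
            # Binomial crossover; force at least one mutant component.
            mask = rng.random(dim) < crossover
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            # Greedy selection: keep the trial only if it improves.
            f = objective(trial)
            if f < fitness[i]:
                pop[i], fitness[i] = trial, f

    best = int(np.argmin(fitness))
    return pop[best], fitness[best]
```

A convenient property of DE here, and plausibly why the authors chose it, is that it needs only objective values, never gradients, which matches a label-only black-box setting once the substitute model supplies a differentiable-free scoring surface.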
DOI: 10.1109/GLOBECOM42002.2020.9348184
Citation Key: fan_practical_2020