A Study on the Transferability of Adversarial Attacks in Sound Event Classification

Title: A Study on the Transferability of Adversarial Attacks in Sound Event Classification
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Subramanian, Vinod; Pankajakshan, Arjun; Benetos, Emmanouil; Xu, Ning; McDonald, SKoT; Sandler, Mark
Conference Name: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Date Published: May 2020
Keywords: adversarial attacks, audio tagging, composability, Computational modeling, Computer vision, Metrics, privacy, pubcrawl, resilience, Resiliency, security, Signal processing, signal processing security, sound event classification, speech processing, Training data, transferability, Transforms
Abstract: An adversarial attack is an algorithm that perturbs the input of a machine learning model in an intelligent way in order to change the model's output. An important property of adversarial attacks is transferability: a perturbation generated on one model can be applied to the input of a different model to fool that model's output as well. Our work focuses on studying the transferability of adversarial attacks in sound event classification. We demonstrate differences in transferability properties from those observed in computer vision. We show that dataset normalization techniques such as z-score normalization do not affect the transferability of adversarial attacks, and that techniques such as knowledge distillation do not increase the transferability of attacks.
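The transferability property described in the abstract can be sketched with a toy example: an FGSM-style perturbation crafted against one model also fools a second, similar model that the attacker never touched. Everything below is an illustrative sketch with simple linear classifiers, not the paper's audio models or attack setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a source and a target classifier: two related
# linear models (illustrative only; the paper studies deep sound
# event classifiers).
w_src = rng.normal(size=20)                 # source model weights
w_tgt = w_src + 0.1 * rng.normal(size=20)   # similar but distinct target model

def predict(w, x):
    """Binary decision of a linear classifier: 1 if w.x > 0, else 0."""
    return int(w @ x > 0)

# A clean input that both models label as class 1.
x = w_src.copy()

# FGSM-style perturbation crafted on the *source* model only:
# step against the sign of the gradient of the score w.r.t. the
# input (for a linear model that gradient is just w_src).
eps = 2.0
x_adv = x - eps * np.sign(w_src)

# The perturbation flips the source model it was crafted on...
print(predict(w_src, x), predict(w_src, x_adv))  # 1 0
# ...and, because the target model is similar, it transfers.
print(predict(w_tgt, x), predict(w_tgt, x_adv))  # 1 0
```

The design point the toy example makes is the one the paper probes: transferability depends on how similar the two models' decision boundaries are, which is why the paper asks whether normalization or knowledge distillation (which pushes models toward each other) changes it.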
DOI: 10.1109/ICASSP40776.2020.9054445
Citation Key: subramanian_study_2020