Title | A Study on the Transferability of Adversarial Attacks in Sound Event Classification |
Publication Type | Conference Paper |
Year of Publication | 2020 |
Authors | Subramanian, Vinod, Pankajakshan, Arjun, Benetos, Emmanouil, Xu, Ning, McDonald, SKoT, Sandler, Mark |
Conference Name | ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) |
Date Published | May |
Keywords | adversarial attacks, audio tagging, composability, computational modeling, computer vision, metrics, privacy, resilience, security, signal processing, signal processing security, sound event classification, speech processing, training data, transferability, transforms |
Abstract | An adversarial attack is an algorithm that perturbs the input of a machine learning model in an intelligent way in order to change the model's output. An important property of adversarial attacks is transferability: perturbations generated on one model can be applied to the input of a different model to fool it as well. Our work studies the transferability of adversarial attacks in sound event classification. We demonstrate that transferability properties differ from those observed in computer vision. We show that dataset normalization techniques such as z-score normalization do not affect the transferability of adversarial attacks, and that techniques such as knowledge distillation do not increase it. |
DOI | 10.1109/ICASSP40776.2020.9054445 |
Citation Key | subramanian_study_2020 |