Adversarial Machine Learning Attack on Modulation Classification

Title: Adversarial Machine Learning Attack on Modulation Classification
Publication Type: Conference Paper
Year of Publication: 2019
Authors: Usama, Muhammad; Asim, Muhammad; Qadir, Junaid; Al-Fuqaha, Ala; Imran, Muhammad Ali
Conference Name: 2019 UK/China Emerging Technologies (UCET)
Date Published: Aug. 2019
Publisher: IEEE
ISBN Number: 978-1-7281-2797-2
Keywords: Adversarial Machine Learning, adversarial machine learning attack, adversarial ML examples, Carlini & Wagner attack, cognitive self-driving networks, deterrence, Human Behavior, learning (artificial intelligence), Mathematical model, ML models, ML-based modulation classification methods, ML-based modulation classifiers, modulation, Modulation classification, pattern classification, Perturbation methods, pubcrawl, resilience, Resiliency, Robustness, Scalability, security of data, Signal to noise ratio, Support vector machines, Task Analysis
Abstract:

Modulation classification is an important component of cognitive self-driving networks. Recently, many ML-based modulation classification methods have been proposed. We evaluated the robustness of nine ML-based modulation classifiers against the powerful Carlini & Wagner (C-W) attack and showed that current ML-based modulation classifiers provide no deterrence against adversarial ML examples. To the best of our knowledge, we are the first to report the results of applying the C-W attack to create adversarial examples against various ML models for modulation classification.
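
To make the attack described in the abstract concrete, below is a minimal sketch of an untargeted C-W-style L2 attack in PyTorch. The classifier (TinyModClassifier), the 2 x 128 I/Q input shape, and all hyperparameters (c, kappa, steps, lr) are assumptions chosen for illustration; they do not reproduce the paper's nine models, dataset, or exact attack configuration.

# Minimal, illustrative sketch of an untargeted Carlini & Wagner (C-W)
# L2-style attack on a modulation classifier. All names and shapes below
# are hypothetical stand-ins, not the paper's actual setup.
import torch
import torch.nn as nn

class TinyModClassifier(nn.Module):
    """Hypothetical stand-in for an ML-based modulation classifier."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * 128, n_classes),
        )

    def forward(self, x):          # x: (batch, 2, 128) I/Q frames
        return self.net(x)

def cw_l2_attack(model, x, y, c=1.0, kappa=0.0, steps=200, lr=0.01):
    """Untargeted C-W-style L2 attack: minimize ||delta||^2 + c * f(x + delta),
    where f > 0 as long as the true class still has the highest logit."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(x + delta)
        true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        # Best logit among the wrong classes.
        masked = logits.clone()
        masked.scatter_(1, y.unsqueeze(1), float("-inf"))
        best_other = masked.max(dim=1).values
        # f is clamped at -kappa once the sample is misclassified.
        f = torch.clamp(true_logit - best_other, min=-kappa)
        loss = (delta.flatten(1).pow(2).sum(dim=1) + c * f).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach()

model = TinyModClassifier().eval()
for p in model.parameters():       # freeze the classifier; only delta is optimized
    p.requires_grad_(False)
x = torch.randn(8, 2, 128)         # stand-in batch of I/Q frames
y = model(x).argmax(dim=1)         # labels = the model's own predictions
x_adv = cw_l2_attack(model, x, y)
print("fraction of predictions flipped:",
      (model(x_adv).argmax(dim=1) != y).float().mean().item())

Note that the full C-W attack additionally uses a change of variables to enforce box constraints on the input and a binary search over the trade-off constant c; both are omitted here for brevity.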

URL: https://ieeexplore.ieee.org/document/8881843
DOI: 10.1109/UCET.2019.8881843
Citation Key: usama_adversarial_2019