PRADA: Protecting Against DNN Model Stealing Attacks

Title: PRADA: Protecting Against DNN Model Stealing Attacks
Publication Type: Conference Paper
Year of Publication: 2019
Authors: Juuti, Mika; Szyller, Sebastian; Marchal, Samuel; Asokan, N.
Conference Name: 2019 IEEE European Symposium on Security and Privacy (EuroS&P)
Keywords: Adversarial Machine Learning, Adversary Models, API queries, application program interfaces, Business, Computational modeling, confidentiality protection, data mining, Deep Neural Network, DNN model extraction attacks, DNN model stealing attacks, Human Behavior, learning (artificial intelligence), machine learning applications, Mathematical model, Metrics, ML models, model extraction, model extraction attacks, model stealing, neural nets, Neural networks, nontargeted adversarial examples, PRADA, prediction accuracy, prediction API, Predictive models, prior model extraction attacks, pubcrawl, query processing, Resiliency, Scalability, security of data, stolen model, Training, transferable adversarial examples, well-defined prediction APIs
Abstract: Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to be only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries, and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp), and prediction accuracy (up to +46 pp) on two datasets. We provide take-aways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.
DOI: 10.1109/EuroSP.2019.00044
Citation Key: juuti_prada_2019
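
The detection idea summarized in the abstract (analyze the distribution of consecutive API queries and raise an alarm when it deviates from benign behavior) can be illustrated in code. Below is a minimal, hypothetical Python sketch, not the authors' implementation: it assumes L2 distances between a client's queries, per-class sample sets, and a Shapiro-Wilk normality test as the benign-behavior model. The class name QueryDistributionDetector and the thresholds w_min and min_samples are illustrative choices, not values from the paper.

    # Hedged sketch of PRADA-style query-distribution monitoring (not the authors' code).
    # Idea: benign clients' inter-query distances tend to follow a smooth (roughly
    # normal) distribution; extraction attacks using synthetic queries distort it.
    # Assumptions (ours): L2 distance, per-class sample sets, Shapiro-Wilk test.

    import numpy as np
    from collections import defaultdict
    from scipy.stats import shapiro

    class QueryDistributionDetector:
        def __init__(self, w_min=0.90, min_samples=20):
            self.w_min = w_min                  # alarm if Shapiro-Wilk W drops below this
            self.min_samples = min_samples      # wait for enough distances before testing
            self.samples = defaultdict(list)    # per-class stored queries
            self.distances = defaultdict(list)  # per-class minimum-distance history

        def observe(self, x, predicted_class):
            """Record one API query; return True if the stream looks like extraction."""
            x = np.asarray(x, dtype=float).ravel()
            prev = self.samples[predicted_class]
            if prev:
                # distance from the new query to its nearest same-class predecessor
                d = min(np.linalg.norm(x - p) for p in prev)
                self.distances[predicted_class].append(d)
            self.samples[predicted_class].append(x)

            all_d = [d for ds in self.distances.values() for d in ds]
            if len(all_d) < self.min_samples:
                return False
            w_stat, _ = shapiro(all_d)  # test whether distances still look ~normal
            return w_stat < self.w_min  # deviation from normality -> raise alarm

In a real deployment, w_min and min_samples would be calibrated on benign traffic, and stored queries would be bounded per client; consult the paper for the authors' actual distance metric, per-class filtering rules, and thresholds.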