Defending Against Model Stealing Attacks With Adaptive Misinformation
Title | Defending Against Model Stealing Attacks With Adaptive Misinformation |
Publication Type | Conference Paper |
Year of Publication | 2020 |
Authors | Kariyappa, S., Qureshi, M. K. |
Conference Name | 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) |
Date Published | June 2020 |
Publisher | IEEE |
ISBN Number | 978-1-7281-7168-5 |
Keywords | Adaptation models, Adaptive Misinformation, Adversary Models, attacker, attacker clone model, black-box query access, clone model, Cloning, Computational modeling, Data models, deep neural networks, Human Behavior, labeled dataset, learning (artificial intelligence), Metrics, model stealing attacks, neural nets, OOD queries, out-of-distribution inputs, Perturbation methods, Predictive models, pubcrawl, query processing, resilience, Resiliency, Scalability, security, security of data, training dataset |
Abstract | Deep Neural Networks (DNNs) are susceptible to model stealing attacks, which allow a data-limited adversary with no knowledge of the training dataset to clone the functionality of a target model using only black-box query access. Such attacks are typically carried out by querying the target model with inputs that are synthetically generated or sampled from a surrogate dataset, in order to construct a labeled dataset. The adversary can use this labeled dataset to train a clone model that achieves a classification accuracy comparable to that of the target model. We propose "Adaptive Misinformation" to defend against such model stealing attacks. We identify that all existing model stealing attacks invariably query the target model with Out-Of-Distribution (OOD) inputs. By selectively sending incorrect predictions for OOD queries, our defense substantially degrades the accuracy of the attacker's clone model (by up to 40%), while minimally impacting the accuracy (< 0.5%) for benign users. Compared to existing defenses, our defense has a significantly better security vs. accuracy trade-off and incurs minimal computational overhead. |
URL | https://ieeexplore.ieee.org/document/9157021 |
DOI | 10.1109/CVPR42600.2020.00085 |
Citation Key | kariyappa_defending_2020 |
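The defense described in the abstract can be illustrated with a minimal sketch: flag a query as OOD, and if so, serve a deliberately incorrect prediction instead of the model's true output. Note the details here are assumptions for illustration only, not the paper's actual method: a max-softmax-confidence threshold stands in for the paper's OOD detector, and a point mass on the least-likely class stands in for its misinformation function.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def adaptive_misinformation(logits: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Serve correct probabilities for in-distribution queries and
    misleading ones for suspected OOD queries.

    Assumption: low max-softmax confidence is used as a proxy OOD
    detector (the paper trains a dedicated detector instead).
    """
    probs = softmax(logits)
    served = probs.copy()
    for i, p in enumerate(probs):
        if p.max() < threshold:  # low confidence -> treat query as OOD
            # Misinformation: put all mass on the least-likely class,
            # so the attacker's clone is trained on wrong labels.
            wrong = np.zeros_like(p)
            wrong[np.argmin(p)] = 1.0
            served[i] = wrong
    return served
```

For a confident (in-distribution) query the served prediction matches the model's true top class, so benign accuracy is barely affected; for a flat, low-confidence (OOD-like) query the served top class is deliberately wrong, which is what degrades the attacker's clone.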