Preventing Neural Network Model Exfiltration in Machine Learning Hardware Accelerators

Title: Preventing Neural Network Model Exfiltration in Machine Learning Hardware Accelerators
Publication Type: Conference Paper
Year of Publication: 2018
Authors: Isakov, M., Bu, L., Cheng, H., Kinsy, M. A.
Conference Name: 2018 Asian Hardware Oriented Security and Trust Symposium (AsianHOST)
Date Published: Dec. 2018
Publisher: IEEE
ISBN Number: 978-1-5386-7471-0
Keywords: Adversary Models, Computational modeling, Context modeling, Data models, Hardware, hardware security, Human Behavior, inference, machine learning, memory probing, Metrics, model exfiltration, model theft, Neural Network, Neural networks, pubcrawl, Resiliency, Scalability, side-channels, Training
Abstract

Machine learning (ML) models are often trained, at considerable computational cost, on private datasets that are expensive to collect or highly sensitive. These models are commonly exposed through online APIs or embedded in hardware devices that are deployed in the field or handed to end users. This gives adversaries an incentive to steal the models as a proxy for acquiring the underlying datasets. While API-based model exfiltration has been studied before, the theft and protection of machine learning models on hardware devices have so far remained unexplored. In this work, we examine this important aspect of the design and deployment of ML models. We illustrate how an attacker may acquire either the model or the model architecture through memory probing, side channels, or crafted input attacks, and propose (1) power-efficient obfuscation as an alternative to encryption, and (2) timing side-channel countermeasures.
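
The paper's defenses are not detailed in this record. Purely as an illustration of the general idea behind lightweight weight obfuscation (not the authors' actual scheme), the following C sketch masks weights destined for off-chip memory with a cheap keyed keystream instead of a full block cipher, unmasking them on the fly at load time; the keystream generator, key, and buffer layout are all assumptions for the example.

/* Hypothetical sketch: lightweight keyed obfuscation of model weights.
 * Not the authors' scheme; it only illustrates masking weights stored
 * off-chip with a cheap keystream rather than full encryption. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* xorshift64 keystream generator, seeded with an assumed on-chip secret. */
static uint64_t keystream_next(uint64_t *state) {
    uint64_t x = *state;
    x ^= x << 13;
    x ^= x >> 7;
    x ^= x << 17;
    *state = x;
    return x;
}

/* Mask or unmask a weight buffer in place (XOR is its own inverse). */
static void obfuscate_weights(uint32_t *weights, size_t n, uint64_t key) {
    uint64_t state = key;
    for (size_t i = 0; i < n; i++) {
        weights[i] ^= (uint32_t)keystream_next(&state);
    }
}

int main(void) {
    /* Example weight words (bit patterns of 1.0f..4.0f). */
    uint32_t weights[4] = {0x3f800000u, 0x40000000u, 0x40400000u, 0x40800000u};
    const uint64_t key = 0x1234abcd5678ef01ULL; /* stands in for an on-chip key */

    obfuscate_weights(weights, 4, key);  /* store this masked form off-chip */
    obfuscate_weights(weights, 4, key);  /* unmask on load; restores originals */

    printf("%08x\n", weights[0]);        /* prints 3f800000 */
    return 0;
}

A memory-probing attacker who dumps the off-chip buffer sees only the masked values, while the on-the-fly XOR costs far less power than decrypting every weight fetch with a full cipher; the trade-off is a much weaker security guarantee than real encryption.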

URL: https://ieeexplore.ieee.org/document/8607161
DOI: 10.1109/AsianHOST.2018.8607161
Citation Key: isakov_preventing_2018