Exploiting Process Variations to Protect Machine Learning Inference Engine from Chip Cloning

Title: Exploiting Process Variations to Protect Machine Learning Inference Engine from Chip Cloning
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Huang, Shanshi; Peng, Xiaochen; Jiang, Hongwu; Luo, Yandong; Yu, Shimeng
Conference Name: 2021 IEEE International Symposium on Circuits and Systems (ISCAS)
Date Published: May 2021
Publisher: IEEE
ISBN Number: 978-1-7281-9201-7
Keywords: Cloning, composability, Deep Neural Network, Engines, Hardware accelerator, hardware security, in-memory computing, machine learning, Metrics, network on chip security, non-volatile memory, pubcrawl, resilience, Resiliency, Scalability, security, Testing, Throughput, Transistors
Abstract: The machine learning inference engine is of great interest to smart edge computing. The compute-in-memory (CIM) architecture has shown significant improvements in throughput and energy efficiency for hardware acceleration. Emerging nonvolatile memory (eNVM) technologies offer great potential for instant on/off operation via dynamic power gating. An inference engine is typically pre-trained in the cloud and then deployed to the field, which raises a new security concern: cloning of the weights stored on an eNVM-based CIM chip. In this paper, we propose a countermeasure to the weight-cloning attack that exploits the process variations of the periphery circuitry. In particular, we use weight fine-tuning to compensate for the analog-to-digital converter (ADC) offsets of a specific chip instance while inducing a significant accuracy drop on cloned chip instances. We evaluate the proposed scheme on a CIFAR-10 classification task using a VGG-8 network. Our results show that, with a precisely chosen transistor size in the employed SAR ADC, we maintain 88%-90% accuracy on the fine-tuned chip, while the same set of weights cloned onto other chips yields only 20%-40% accuracy on average. The weight fine-tuning can be completed within one epoch of 250 iterations, with on average only 0.02%, 0.025%, and 0.142% of cells updated per iteration for 2-bit, 4-bit, and 8-bit weight precisions, respectively.
URL: https://ieeexplore.ieee.org/document/9401659
DOI: 10.1109/ISCAS51556.2021.9401659
Citation Key: huang_exploiting_2021
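
Illustration of the idea in the abstract (a minimal sketch, not the authors' code): it assumes a single linear layer in NumPy with synthetic data, models each chip's ADC process variation as a fixed additive offset per output column, and emulates the paper's chip-specific weight fine-tuning by folding the deployment chip's offsets into a bias correction. The names forward, accuracy, chip_a, chip_b, and correction are illustrative assumptions, and the SAR-ADC quantization details are abstracted away.

import numpy as np

rng = np.random.default_rng(0)

def forward(x, w, offsets):
    # One compute-in-memory layer: column sums read out through ADCs whose
    # process variation shows up as a static per-column offset.
    return x @ w + offsets

def accuracy(x, w, offsets, y):
    return float(np.mean(np.argmax(forward(x, w, offsets), axis=1) == y))

# Synthetic 10-class stand-in for the paper's CIFAR-10 / VGG-8 setup.
n, d, c = 4000, 32, 10
x = rng.normal(size=(n, d))
w = rng.normal(size=(d, c))
y = np.argmax(x @ w, axis=1)          # labels from the ideal, offset-free chip

chip_a = 2.0 * rng.normal(size=c)     # offsets of the chip the weights are tuned for
chip_b = 2.0 * rng.normal(size=c)     # offsets of a different chip (cloning target)

# Compensation tailored to chip A, emulating the paper's chip-specific fine-tuning.
correction = -chip_a

print("ideal chip           :", accuracy(x, w, 0.0, y))
print("chip A, compensated  :", accuracy(x, w, chip_a + correction, y))
print("chip B with A's comp :", accuracy(x, w, chip_b + correction, y))

In the paper itself, the compensation is obtained by fine-tuning the stored weights for roughly one epoch of 250 iterations, updating only a small fraction of cells per iteration, rather than by an explicit bias term as in this sketch; the cloned chip degrades because its own ADC offsets differ from those the weights were tuned against.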