Biblio
Machine learning (ML) models are often trained on private datasets that are expensive to collect or highly sensitive, using large amounts of computing power. The models are commonly exposed through online APIs, or embedded in hardware devices deployed in the field or given to end users. This provides an incentive for adversaries to steal these ML models as a proxy for gathering datasets. While API-based model exfiltration has been studied before, the theft and protection of machine learning models on hardware devices have not yet been explored. In this work, we examine this important aspect of the design and deployment of ML models. We illustrate how an attacker may acquire either the model or the model architecture through memory probing, side channels, or crafted-input attacks, and propose (1) power-efficient obfuscation as an alternative to encryption, and (2) timing side-channel countermeasures.
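As a rough illustration of the obfuscation idea (a hypothetical sketch, not the scheme proposed in the paper), the snippet below masks quantized weights with a keystream derived from a per-device key: weights stored in device memory are meaningless without the key, while de-obfuscation at inference time costs one XOR per byte, far less than full decryption. The `keystream`, `obfuscate`, and `deobfuscate` helpers and the SHA-256-based stream are illustrative assumptions.

```python
# Hypothetical sketch: lightweight keyed obfuscation of stored model weights
# as a cheaper alternative to full encryption (not the paper's actual scheme).
import hashlib
import numpy as np

def keystream(key: bytes, n: int) -> np.ndarray:
    """Derive an n-byte keystream from the device key (illustrative only)."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return np.frombuffer(bytes(out[:n]), dtype=np.uint8)

def obfuscate(weights_q: np.ndarray, key: bytes) -> np.ndarray:
    """Mask quantized weights before writing them to device memory."""
    ks = keystream(key, weights_q.size).reshape(weights_q.shape)
    return weights_q ^ ks          # stored form is useless without the key

def deobfuscate(masked: np.ndarray, key: bytes) -> np.ndarray:
    """Recover weights on the fly; a single XOR per byte."""
    return obfuscate(masked, key)  # XOR masking is its own inverse

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)  # toy quantized weights
    key = b"per-device-secret"
    stored = obfuscate(w, key)
    assert np.array_equal(deobfuscate(stored, key), w)
    print("weights recovered only with the device key")
```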
The emergence of general-purpose system-on-chip (SoC) architectures has given rise to a number of significant security challenges. The current trend in SoC design is system-level integration of heterogeneous technologies consisting of a large number of processing elements such as programmable RISC cores, memory, DSPs, and accelerator function units/ASICs. These processing elements may come from different providers, and the application executable code may have varying levels of trust. Some of the pressing architectural design questions are: (1) how to implement multi-level user-defined security, and (2) how to optimally and securely share resources and data among the processing elements. In this work, we develop a secure multicore architecture, named Hermes. It represents a new architectural framework that integrates multiple processing elements (called tenants), both secure and non-secure, into the same chip design while (a) maintaining individual tenant security, (b) preventing data leakage and corruption, and (c) promoting collaboration among the tenants. The Hermes architecture is based on a programmable secure router interface and a trust-aware routing algorithm. With 17% hardware overhead, it enables the implementation of processing-element-oblivious secure multicore systems with a programmable distributed group key management scheme.
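The toy example below (an assumed illustration, not the Hermes routing algorithm itself) conveys what trust-aware routing means: a packet tagged with a security level is only forwarded through routers whose trust level meets or exceeds that of the packet. The topology, trust levels, and the `trust_aware_route` helper are hypothetical.

```python
# Hypothetical illustration of trust-aware routing on an on-chip network:
# a packet at security `level` may only traverse routers trusted at that level or above.
from collections import deque

# Assumed toy topology: node -> neighbours, and a per-node trust level (0 = untrusted).
LINKS = {
    "riscA": ["dsp", "mem"],
    "dsp":   ["riscA", "accel", "mem"],
    "mem":   ["riscA", "dsp", "riscB"],
    "accel": ["dsp", "riscB"],
    "riscB": ["mem", "accel"],
}
TRUST = {"riscA": 2, "dsp": 1, "mem": 2, "accel": 0, "riscB": 2}

def trust_aware_route(src, dst, level):
    """Breadth-first search restricted to routers trusted at `level` or above."""
    if TRUST[src] < level or TRUST[dst] < level:
        return None
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in LINKS[path[-1]]:
            if nxt not in seen and TRUST[nxt] >= level:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no route that avoids under-trusted tenants

print(trust_aware_route("riscA", "riscB", level=2))  # ['riscA', 'mem', 'riscB']
print(trust_aware_route("riscA", "riscB", level=3))  # None: no sufficiently trusted path
```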
Robust Adaptive Secure Secret Sharing (RASSS) is a protocol for reconstructing secrets and information in distributed computing systems, even in the presence of a large number of untrusted participants. Since the introduction of Shamir's original Secret Sharing scheme, there have been efforts to secure the technique against dishonest shareholders. Early on, researchers determined that the Reed-Solomon encoding property of Shamir's share distribution equation and its decoding algorithm could tolerate cheaters among up to one third of the shareholders. However, if the number of cheaters grows beyond the error-correcting capability (distance) of the Reed-Solomon codes, reconstruction of the secret is hindered: untrusted participants or cheaters can hide in the decoding procedure, or even frame honest parties. In this paper, we solve this challenge and propose a secure protocol that is no longer constrained by the limitations of Reed-Solomon codes. As long as a minimum number of honest shareholders is present, the RASSS protocol is able to identify the cheaters and retrieve the correct secret or information in a distributed system with probability close to 1 and with less than 60% hardware overhead. Furthermore, the adaptive nature of the protocol enables considerable hardware and timing resource savings and makes RASSS highly practical.
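For context, the sketch below shows the baseline that RASSS builds on (not the RASSS protocol itself): Shamir (t, n) secret sharing over a prime field, together with a brute-force consistency check that flags shares disagreeing with the reconstruction supported by the majority of t-subsets. The field size, the `identify_cheaters` helper, and the majority-vote heuristic are illustrative assumptions; this check is exponential in n, which is part of why practical cheater identification is hard once dishonest shareholders multiply.

```python
# Minimal sketch of the baseline behind RASSS: Shamir (t, n) secret sharing over GF(P)
# plus a naive consistency check that flags shares inconsistent with the majority.
import random
from itertools import combinations

P = 2**31 - 1  # a Mersenne prime; any prime larger than the secret works

def make_shares(secret, t, n):
    """Evaluate a random degree-(t-1) polynomial with constant term `secret` at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

def identify_cheaters(shares, t):
    """Flag shares that disagree with the reconstruction supported by most t-subsets."""
    votes = {}
    for subset in combinations(shares, t):
        votes.setdefault(reconstruct(list(subset)), []).append(subset)
    majority_secret = max(votes, key=lambda s: len(votes[s]))
    consistent = {sh for subset in votes[majority_secret] for sh in subset}
    return majority_secret, [sh for sh in shares if sh not in consistent]

if __name__ == "__main__":
    shares = make_shares(secret=123456789, t=3, n=7)
    shares[2] = (shares[2][0], (shares[2][1] + 42) % P)  # one dishonest shareholder
    secret, cheaters = identify_cheaters(shares, t=3)
    print(secret, "cheating share x =", [x for x, _ in cheaters])
```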