Biblio

Filters: Author is Chen, Huili
Javaheripi, Mojan, Chen, Huili, Koushanfar, Farinaz.  2020.  Unified Architectural Support for Secure and Robust Deep Learning. 2020 57th ACM/IEEE Design Automation Conference (DAC). :1–6.
Recent advances in Deep Learning (DL) have enabled a paradigm shift to include machine intelligence in a wide range of autonomous tasks. As a result, a largely unexplored surface has opened up for attacks jeopardizing the integrity of DL models and hindering the success of autonomous systems. To enable ubiquitous deployment of DL approaches across various intelligent applications, we propose to develop architectural support for hardware implementation of secure and robust DL. Towards this goal, we leverage hardware/software co-design to develop a DL execution engine that supports algorithms specifically designed to defend against various attacks. The proposed framework is enhanced with two real-time defense mechanisms, securing both DL training and execution stages. In particular, we enable model-level Trojan detection to mitigate backdoor attacks and malicious behaviors induced on the DL model during training. We further realize real-time adversarial attack detection to avert malicious behavior during execution. The proposed execution engine is equipped with a hardware-level IP protection and usage control mechanism to attest the legitimacy of the DL model mapped to the device. Our design is modular and can be tuned to task-specific demands, e.g., power, throughput, and memory bandwidth, by means of a customized hardware compiler. We further provide an accompanying API to reduce the nonrecurring engineering cost and ensure automated adaptation to various domains and applications.
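
The abstract leaves the detection algorithms unspecified. As a rough, hypothetical illustration of the kind of real-time adversarial-input check such an execution engine could host, the Python sketch below compares a model's prediction on a raw input against its prediction on a bit-depth-squeezed copy (a feature-squeezing-style consistency test, not the mechanism proposed in the paper); the toy model, bit depth, and threshold are all placeholder assumptions.

    import numpy as np

    def reduce_bit_depth(x, bits=4):
        """Squeeze an input in [0, 1] down to a coarser bit depth."""
        levels = 2 ** bits - 1
        return np.round(x * levels) / levels

    def looks_adversarial(model, x, threshold=0.5, bits=4):
        """Flag inputs whose softmax output shifts sharply under squeezing."""
        p_raw = model(x)
        p_squeezed = model(reduce_bit_depth(x, bits))
        score = float(np.abs(p_raw - p_squeezed).sum())  # L1 distance between the two distributions
        return score > threshold, score

    # Toy demonstration with a stand-in model (random linear layer + softmax).
    rng = np.random.default_rng(0)
    W = rng.normal(size=(10, 784))

    def toy_model(x):
        logits = W @ x.ravel()
        e = np.exp(logits - logits.max())
        return e / e.sum()

    flagged, score = looks_adversarial(toy_model, rng.random(784))
    print(f"flagged={flagged}, score={score:.3f}")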
Chen, Huili, Fu, Cheng, Rouhani, Bita Darvish, Zhao, Jishen, Koushanfar, Farinaz.  2019.  DeepAttest: An End-to-End Attestation Framework for Deep Neural Networks. 2019 ACM/IEEE 46th Annual International Symposium on Computer Architecture (ISCA). :487–498.
Emerging hardware architectures for Deep Neural Networks (DNNs) are being commercialized and considered as the hardware-level Intellectual Property (IP) of the device providers. However, these intelligent devices might be abused and such vulnerability has not been identified. The unregulated usage of intelligent platforms and the lack of hardware-bounded IP protection impair the commercial advantage of the device provider and prohibit reliable technology transfer. Our goal is to design a systematic methodology that provides hardware-level IP protection and usage control for DNN applications on various platforms. To address the IP concern, we present DeepAttest, the first on-device DNN attestation method that certifies the legitimacy of the DNN program mapped to the device. DeepAttest works by designing a device-specific fingerprint which is encoded in the weights of the DNN deployed on the target platform. The embedded fingerprint (FP) is later extracted with the support of the Trusted Execution Environment (TEE). The existence of the pre-defined FP is used as the attestation criterion to determine whether the queried DNN is authenticated. Our attestation framework ensures that only authorized DNN programs yield the matching FP and are allowed for inference on the target device. DeepAttest provisions the device provider with a practical solution to limit the application usage of her manufactured hardware and prevents unauthorized or tampered DNNs from execution. We take an Algorithm/Software/Hardware co-design approach to optimize DeepAttest's overhead in terms of latency and energy consumption. To facilitate the deployment, we provide a high-level API of DeepAttest that can be seamlessly integrated into existing deep learning frameworks and TEEs for hardware-level IP protection and usage control. Extensive experiments corroborate the fidelity, reliability, security, and efficiency of DeepAttest on various DNN benchmarks and TEE-supported platforms.
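
The abstract specifies what is checked (a device-specific fingerprint in the weights, extracted inside the TEE) but not how the bits are encoded. Below is a minimal sketch of the attestation decision, assuming a weight-space encoding in which the fingerprint bits are the signs of a secret random projection of the weights; the projection matrix, fingerprint length, and error tolerance are illustrative assumptions rather than DeepAttest's actual scheme, and the training-time embedding step is omitted.

    import numpy as np

    def extract_fingerprint(weights, secret_proj):
        """Recover the fingerprint bits as signs of a secret projection of the weights."""
        return (secret_proj @ weights.ravel() > 0).astype(np.uint8)

    def attest(weights, secret_proj, expected_fp, max_bit_errors=0):
        """Attestation criterion: the extracted fingerprint must match the provisioned one."""
        errors = int(np.count_nonzero(extract_fingerprint(weights, secret_proj) != expected_fp))
        return errors <= max_bit_errors

    # Toy usage: the authorized weights pass, randomly perturbed weights almost surely fail.
    rng = np.random.default_rng(1)
    weights = rng.normal(size=(128, 64))
    secret_proj = rng.normal(size=(32, weights.size))         # device-specific secret
    expected_fp = extract_fingerprint(weights, secret_proj)   # recorded at provisioning time

    print(attest(weights, secret_proj, expected_fp))                                   # True
    print(attest(weights + rng.normal(size=weights.shape), secret_proj, expected_fp))  # likely False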
Chen, Huili, Cammarota, Rosario, Valencia, Felipe, Regazzoni, Francesco.  2019.  PlaidML-HE: Acceleration of Deep Learning Kernels to Compute on Encrypted Data. 2019 IEEE 37th International Conference on Computer Design (ICCD). :333–336.

Machine Learning as a Service (MLaaS) is becoming a popular practice where Service Consumers, e.g., end-users, send their data to an ML Service and receive the prediction outputs. However, the emerging usage of MLaaS has raised severe privacy concerns about users' proprietary data. Privacy-Preserving Machine Learning (PPML) techniques aim to incorporate cryptographic primitives such as Homomorphic Encryption (HE) and Multi-Party Computation (MPC) into ML services to address privacy concerns from a technology standpoint. Existing PPML solutions have not been widely adopted in practice due to their assumed high overhead and integration difficulty within various ML front-end frameworks as well as hardware backends. In this work, we propose PlaidML-HE, the first end-to-end HE compiler for PPML inference. Leveraging the capability of Domain-Specific Languages, PlaidML-HE enables automated generation of HE kernels across diverse types of devices. We evaluate the performance of PlaidML-HE on different ML kernels and demonstrate that PlaidML-HE greatly reduces the overhead of the HE primitive compared to the existing implementations.
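
The abstract does not detail the generated kernels, but the MLaaS flow it accelerates (client encrypts, service evaluates kernels on ciphertexts, client decrypts) can be illustrated with a much simpler setup. The sketch below assumes the python-paillier (phe) library and an additively homomorphic Paillier scheme, rather than the lattice-based schemes an HE compiler would target, and evaluates a single linear kernel on encrypted inputs as a workflow illustration only.

    from phe import paillier   # pip install phe
    import numpy as np

    # Client side: generate keys and encrypt the private input vector.
    public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)
    x = np.array([0.2, -1.3, 0.7, 2.1])
    enc_x = [public_key.encrypt(float(v)) for v in x]

    # Service side: evaluate a linear kernel w.x + b directly on ciphertexts.
    # Paillier supports ciphertext addition and plaintext scaling, which is all
    # a linear (fully connected / convolution) kernel needs.
    w = np.array([0.5, 0.1, -0.4, 0.9])
    b = 0.25
    enc_out = enc_x[0] * float(w[0])
    for wi, ci in zip(w[1:], enc_x[1:]):
        enc_out = enc_out + ci * float(wi)
    enc_out = enc_out + b

    # Client side: only the key holder can read the prediction.
    print(private_key.decrypt(enc_out))      # HE result
    print(float(np.dot(w, x) + b))           # plaintext reference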