Biblio

Filters: Keyword is privacy-preserving machine learning
2020-08-07
Chen, Huili, Cammarota, Rosario, Valencia, Felipe, Regazzoni, Francesco.  2019.  PlaidML-HE: Acceleration of Deep Learning Kernels to Compute on Encrypted Data. 2019 IEEE 37th International Conference on Computer Design (ICCD). :333-336.

Machine Learning as a Service (MLaaS) is becoming a popular practice where Service Consumers, e.g., end-users, send their data to an ML service and receive the prediction outputs. However, the emerging usage of MLaaS has raised severe privacy concerns about users' proprietary data. Privacy-Preserving Machine Learning (PPML) techniques aim to incorporate cryptographic primitives such as Homomorphic Encryption (HE) and Multi-Party Computation (MPC) into ML services to address privacy concerns from a technology standpoint. Existing PPML solutions have not been widely adopted in practice due to their assumed high overhead and the difficulty of integrating them with various ML front-end frameworks and hardware backends. In this work, we propose PlaidML-HE, the first end-to-end HE compiler for PPML inference. Leveraging the capability of Domain-Specific Languages, PlaidML-HE enables automated generation of HE kernels across diverse types of devices. We evaluate the performance of PlaidML-HE on different ML kernels and demonstrate that PlaidML-HE greatly reduces the overhead of the HE primitives compared to existing implementations.
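
To make the encrypted-inference flow concrete, here is a minimal sketch of evaluating one linear kernel (w.x + b) on ciphertexts. It is not PlaidML-HE: it uses a toy Paillier cryptosystem (additively homomorphic) with deliberately small primes, integer weights, and no packing, purely to illustrate the client-encrypts / server-computes / client-decrypts pattern that PPML inference relies on. All parameters and the model values are illustrative.

# Toy Paillier-based private linear inference. Illustration only:
# tiny primes, integer plaintexts, no encoding of reals, no SIMD packing.
import math
import random

def keygen(p=1009, q=1013):                     # toy primes; real use: >= 1024-bit
    n = p * q
    n2 = n * n
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                        # valid because we fix g = n + 1
    return (n, n2), (lam, mu)

def encrypt(pub, m):
    n, n2 = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:                  # r must be a unit mod n
        r = random.randrange(1, n)
    return (1 + m * n) * pow(r, n, n2) % n2     # g^m * r^n mod n^2, g = n + 1

def decrypt(pub, priv, c):
    n, n2 = pub
    lam, mu = priv
    m = (pow(c, lam, n2) - 1) // n * mu % n     # L(c^lam mod n^2) * mu mod n
    return m if m <= n // 2 else m - n          # map large residues to negatives

# Client side: encrypt the feature vector.
pub, priv = keygen()
x = [3, -1, 4]
enc_x = [encrypt(pub, xi) for xi in x]

# Server side: evaluate w.x + b homomorphically without seeing x.
# Enc(m)^w decrypts to w*m, and ciphertext products add plaintexts.
w, b = [2, 5, -1], 7
n, n2 = pub
acc = encrypt(pub, b)
for ci, wi in zip(enc_x, w):
    acc = acc * pow(ci, wi % n, n2) % n2

# Client side: decrypt the prediction.
assert decrypt(pub, priv, acc) == sum(wi * xi for wi, xi in zip(w, x)) + b

This covers only a linear kernel; the point of a compiler like PlaidML-HE is to generate such HE kernels automatically for many operator types and hardware backends rather than writing them by hand.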

2019-03-22
Lu, Wen-jie, Sakuma, Jun.  2018.  More Practical Privacy-Preserving Machine Learning As A Service via Efficient Secure Matrix Multiplication. Proceedings of the 6th Workshop on Encrypted Computing & Applied Homomorphic Cryptography. :25-36.

An efficient secure two-party protocol for matrix multiplication enables privacy-preserving cloud-aided machine learning services such as face recognition and traffic-aware navigation. We use homomorphic encryption to construct a secure matrix multiplication protocol with small communication overhead and small client-side computation, which works particularly well when a large number of clients access the server simultaneously. The fastest secure matrix multiplication protocols have been constructed from tools such as oblivious transfer, but a potential limitation of these methods is their need for wide network bandwidth between the client and the server, e.g., about 10 Gbps. This is of particular concern when thousands of clients interact with the server concurrently: the performance of oblivious-transfer-based methods degrades significantly, since the server can allocate only a small fraction of its outgoing bandwidth to each client. With three proposed optimizations, our matrix multiplication protocol runs fast even in this highly concurrent setting. Our benchmarks show that an Amazon instance (72 CPUs, 25 Gbps outgoing bandwidth) completes 1000 concurrent secure multiplications of 128 x 128 matrices in less than 50 seconds. In addition, our method reduces the precomputation time of two privacy-preserving machine learning frameworks, SecureML (S&P'17) and MiniONN (CCS'17), by 74%-97%.
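
The concurrency argument can be made concrete with a back-of-envelope calculation using the numbers in the abstract: a server with 25 Gbps of outgoing bandwidth shared by 1000 concurrent clients, versus the roughly 10 Gbps per-client links that OT-based protocols assume.

# Why OT-based protocols degrade under high concurrency.
# Figures taken from the abstract; the arithmetic is the whole point.
server_bandwidth_gbps = 25       # server's total outgoing bandwidth
concurrent_clients = 1000        # clients served simultaneously
ot_assumed_link_gbps = 10        # per-client bandwidth OT-based methods rely on

per_client_gbps = server_bandwidth_gbps / concurrent_clients   # 0.025 Gbps
slowdown = ot_assumed_link_gbps / per_client_gbps              # ~400x shortfall
print(f"each client gets {per_client_gbps * 1000:.0f} Mbps, "
      f"about {slowdown:.0f}x less than the assumed link")

Each client's share is about 25 Mbps, roughly 400x less bandwidth than OT-based protocols are benchmarked with, which is why the HE-based protocol's low communication cost dominates in this regime.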