An efficient secure two-party protocol for matrix multiplication enables privacy-preserving, cloud-aided machine learning services such as face recognition and traffic-aware navigation. We use homomorphic encryption to construct a secure matrix multiplication protocol with small communication overhead and small client-side computation overhead, which works particularly well when a large number of clients access the server simultaneously. The fastest secure matrix multiplication protocols have been constructed using tools such as oblivious transfer, but a potential limitation of these methods is that they require a wide network bandwidth between the client and the server, e.g., 10~Gbps. This is of particular concern when thousands of clients interact with the server concurrently: in that setting the performance of oblivious-transfer-based methods degrades significantly, since the server can allocate only a small fraction of its outgoing bandwidth to each client. With three proposed optimizations, our matrix multiplication protocol runs fast even under this highly concurrent setting. Our benchmarks show that an Amazon instance (with 72 CPUs and 25 Gbps outgoing bandwidth) completes 1000 concurrent secure multiplications of $128\times 128$ matrices in less than 50 seconds. In addition, our method reduces the precomputation time of two privacy-preserving machine learning frameworks, SecureML (S&P'17) and MiniONN (CCS'17), by $74\%$ to $97\%$.
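The abstract does not spell out the protocol itself, so the following is only a rough sketch of the client-server workflow it describes (the client encrypts its input, the server computes the matrix product under encryption, and the client decrypts). The sketch uses the additively homomorphic Paillier cryptosystem for readability; the paper's actual construction presumably relies on a more efficient homomorphic encryption scheme together with the three optimizations mentioned above, and all function names and parameters below are illustrative assumptions, not the authors' API.

```python
# Minimal sketch (illustration only, not the paper's protocol): secure
# client-server matrix multiplication with the additively homomorphic
# Paillier cryptosystem. The client encrypts its matrix X entrywise; the
# server, holding a plaintext matrix W, computes Enc(W @ X) using only
# homomorphic additions and plaintext scalings; the client decrypts.
import random
from math import gcd


def keygen(p=10007, q=10009):
    # Toy primes for readability; real deployments need >= 2048-bit moduli.
    n, n2 = p * q, (p * q) ** 2
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    mu = pow(lam, -1, n)                           # valid because g = n + 1
    return (n, n2), (lam, mu)


def encrypt(pk, m):
    n, n2 = pk
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (1 + m * n) % n2 * pow(r, n, n2) % n2   # g^m * r^n mod n^2, g = n+1


def decrypt(pk, sk, c):
    (n, n2), (lam, mu) = pk, sk
    return (pow(c, lam, n2) - 1) // n * mu % n


def he_add(pk, c1, c2):
    return c1 * c2 % pk[1]                         # Enc(a) * Enc(b) = Enc(a + b)


def he_scale(pk, c, k):
    return pow(c, k, pk[1])                        # Enc(a)^k = Enc(k * a)


def client_encrypt_matrix(pk, X):
    return [[encrypt(pk, x) for x in row] for row in X]


def server_multiply(pk, W, enc_X):
    # Computes Enc(W @ X) entrywise; W never leaves the server in plaintext.
    cols = len(enc_X[0])
    out = []
    for w_row in W:
        out_row = []
        for j in range(cols):
            acc = encrypt(pk, 0)
            for k, w in enumerate(w_row):
                acc = he_add(pk, acc, he_scale(pk, enc_X[k][j], w))
            out_row.append(acc)
        out.append(out_row)
    return out


def client_decrypt_matrix(pk, sk, enc_Y):
    return [[decrypt(pk, sk, c) for c in row] for row in enc_Y]


if __name__ == "__main__":
    # Entries are small nonnegative integers so results stay below n;
    # real systems encode fixed-point values into the plaintext space.
    pk, sk = keygen()
    W = [[1, 2], [3, 4]]                  # server's private weights
    X = [[5, 6], [7, 8]]                  # client's private input
    enc_X = client_encrypt_matrix(pk, X)         # sent client -> server
    enc_Y = server_multiply(pk, W, enc_X)        # computed on the server
    print(client_decrypt_matrix(pk, sk, enc_Y))  # [[19, 22], [43, 50]]
```

Note that in this one-round flow the client only encrypts and decrypts, which matches the abstract's goal of a small client-side computation and communication cost; the heavy homomorphic work stays on the server.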