Bibliography
We propose a serverless computing mechanism for distributed computation based on polar codes. Serverless computing is an emerging cloud-based computation model that lets users run their functions on the cloud without provisioning or managing servers. Our proposed approach is a hybrid computing framework that offloads computationally expensive tasks, such as linear algebraic operations on large-scale data, to serverless platforms and performs the rest of the processing locally. We address the limitations and reliability issues of serverless platforms, such as straggling workers, using coding theory, drawing on ideas from the recent literature on coded computation. The proposed mechanism uses polar codes to provide straggler resilience in a computationally efficient manner, and we provide extensive evidence that polar codes outperform other coding methods in this setting. We design a sequential decoder specifically for polar codes over erasure channels with full-precision inputs and outputs, and we extend the proposed method to matrix multiplication where both matrices being multiplied are coded. The proposed coded computation scheme is implemented for AWS Lambda, and we present experimental results in which its performance is evaluated on optimization via gradient descent. Finally, we introduce the idea of partial polarization, which reduces the computational burden of encoding and decoding at the expense of some straggler resilience.
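To illustrate the coded-computation idea, the sketch below (not the authors' implementation) encodes the row blocks of a matrix A with the real-valued polar transform F^(⊗n), F = [[1, 0], [1, 1]], has each "worker" multiply one coded block by a vector, and recovers A x despite a straggler. The data-block placement (data_idx) and the least-squares erasure decoding are illustrative stand-ins for the paper's polarization-based index selection and sequential decoder.

    # Minimal sketch of polar-coded matrix-vector multiplication over an
    # erasure channel with full-precision (real-valued) data. Illustrative
    # only; the decoder here is a generic least-squares solve, not the
    # sequential decoder described in the abstract.
    import numpy as np

    def polar_transform(n):
        """Kronecker power F^(x)n of the polar kernel F = [[1, 0], [1, 1]]."""
        F = np.array([[1.0, 0.0], [1.0, 1.0]])
        G = np.array([[1.0]])
        for _ in range(n):
            G = np.kron(G, F)
        return G

    def encode_blocks(A_blocks, data_idx, N):
        """Place k data blocks at positions data_idx (remaining blocks are
        frozen to zero) and mix them with the polar transform into N coded blocks."""
        rows, cols = A_blocks[0].shape
        U = np.zeros((N, rows, cols))
        for i, idx in enumerate(data_idx):
            U[idx] = A_blocks[i]
        G = polar_transform(int(np.log2(N)))
        # Coded block j is a real-valued linear combination of the data blocks.
        return np.tensordot(G, U, axes=([1], [0]))

    def decode(results, received, data_idx, N):
        """Recover the k data-block products from the surviving worker results
        (erasure decoding via least squares over the received coded rows)."""
        G = polar_transform(int(np.log2(N)))
        G_sub = G[np.ix_(received, data_idx)]          # surviving rows, data columns
        Y = np.stack([results[j] for j in received])   # received coded products
        Z, *_ = np.linalg.lstsq(G_sub, Y.reshape(len(received), -1), rcond=None)
        return Z.reshape(len(data_idx), *results[received[0]].shape)

    # Toy run: N = 4 coded workers, k = 2 data blocks, one straggler.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 6))
    x = rng.standard_normal(6)
    A_blocks = np.split(A, 2)                  # two row blocks of A
    data_idx = [1, 3]                          # unfrozen indices, chosen for illustration
    coded = encode_blocks(A_blocks, data_idx, N=4)
    worker_out = {j: coded[j] @ x for j in range(4)}   # each worker multiplies its coded block
    received = [0, 1, 3]                       # worker 2 straggles (erasure)
    recovered = decode(worker_out, received, data_idx, N=4)
    assert np.allclose(np.concatenate(recovered), A @ x)

In the full scheme, each coded block would be shipped to a separate serverless function (e.g., an AWS Lambda invocation), and the decoder would wait only for a sufficient subset of responses before reconstructing the result.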
Secure computation is increasingly in demand, most notably when using public clouds. Many secure CPU architectures have been proposed, mostly focusing on single-threaded applications running on a single node. However, security for parallel and distributed computation is also needed; it requires sharing secret data among mutually trusting threads running on different compute nodes in an untrusted environment. We propose SDSM, a novel hardware approach that provides a security layer for directory-based distributed shared memory systems. Unlike previously proposed schemes, which cannot maintain reasonable performance beyond 32 cores, our approach allows secure parallel applications to scale efficiently to thousands of cores.