Biblio

Filters: Keyword is distributed learning
2023-02-03
Kumar, Abhinav, Tourani, Reza, Vij, Mona, Srikanteswara, Srikathyayani.  2022.  SCLERA: A Framework for Privacy-Preserving MLaaS at the Pervasive Edge. 2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops). :175–180.
The increasing data generation rate and the proliferation of deep learning applications have led to the development of machine learning-as-a-service (MLaaS) platforms by major cloud providers. Existing MLaaS platforms, however, fall short of protecting clients' private data. Recent distributed MLaaS architectures such as federated learning have also been shown to be vulnerable to a range of privacy attacks. Such vulnerabilities motivated the development of privacy-preserving MLaaS techniques, which often rely on complex cryptographic primitives. These approaches, however, demand abundant computing resources, undermining the low-latency requirements of emerging applications such as autonomous driving. To address these challenges, we propose SCLERA, an efficient MLaaS framework that utilizes a trusted execution environment for secure execution of clients' workloads. SCLERA features a set of optimization techniques to reduce the computational complexity of the offloaded services and achieve low-latency inference. We assessed SCLERA's efficacy using image and video analytics use cases such as scene detection. Our results show that SCLERA achieves up to a 23× speed-up compared to baseline secure model execution.
2022-08-26
Liang, Kai, Wu, Youlong.  2021.  Two-layer Coded Gradient Aggregation with Straggling Communication Links. 2020 IEEE Information Theory Workshop (ITW). :1–5.
In many distributed learning setups such as federated learning, client nodes at the edge use individually collected data to compute local gradients and send them to a central master server; the master aggregates the received gradients and broadcasts the aggregate to all clients, which then use it to update the global model. As straggling communication links can severely affect the performance of a distributed learning system, Prakash et al. proposed using helper nodes and a coding strategy to achieve resiliency against straggling client-to-helper links. In this paper, we propose two coding schemes, repetition coding (RC) and MDS coding, both of which enable the clients to update the global model with helpers alone, without the master. Moreover, we characterize the uplink and downlink communication loads, and prove the tightness of the uplink communication load. A theoretical tradeoff between uplink and downlink communication loads is established, indicating that a larger uplink communication load can reduce the downlink communication load. Compared to Prakash's schemes, which require a master connected to the helpers through noiseless links, our scheme can reduce the communication load even in the absence of a master when the number of clients and helpers is relatively large compared to the number of straggling links.
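To make the repetition idea concrete, here is a minimal Python sketch with illustrative parameters and a hypothetical helper assignment (not the paper's exact RC or MDS constructions): each client uploads its gradient to a few helpers, and as long as one assigned link per client survives, the helpers' partial sums reconstruct the full aggregate without a master.

    import numpy as np

    # Toy sketch of repetition-coded gradient aggregation over straggling
    # client-to-helper links; parameters and assignment are illustrative.
    rng = np.random.default_rng(1)
    n_clients, n_helpers, dim, repetition = 6, 3, 4, 2

    gradients = rng.normal(size=(n_clients, dim))        # local gradients
    assignment = np.array([rng.choice(n_helpers, repetition, replace=False)
                           for _ in range(n_clients)])   # each client's helper set

    links_ok = np.ones((n_clients, n_helpers), dtype=bool)
    links_ok[0, assignment[0][0]] = False                # two client-to-helper links straggle
    links_ok[3, assignment[3][1]] = False

    # Attribute each client's gradient to one surviving assigned helper so it is
    # counted exactly once in the partial sums the helpers broadcast back.
    partial = np.zeros((n_helpers, dim))
    for c in range(n_clients):
        alive = [h for h in assignment[c] if links_ok[c, h]]
        if not alive:
            raise RuntimeError("all assigned links straggled; increase repetition")
        partial[alive[0]] += gradients[c]

    global_gradient = partial.sum(axis=0)                # what every client reconstructs
    assert np.allclose(global_gradient, gradients.sum(axis=0))

An MDS-coded variant would instead encode the uploads so that any sufficiently large subset of helper messages suffices, which is where the uplink-downlink tradeoff characterized in the paper comes in.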
2021-01-11
Lyu, L.  2020.  Lightweight Crypto-Assisted Distributed Differential Privacy for Privacy-Preserving Distributed Learning. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
The emergence of distributed learning allows multiple participants to collaboratively train a global model: instead of directly sharing their private training data with the server, participants iteratively share their local model updates (parameters) with the server. However, recent attacks demonstrate that sharing local model updates is not sufficient to provide reasonable privacy guarantees, as local model updates may leak significant information about participants' local training data. To address this issue, in this paper we present an alternative approach that combines distributed differential privacy (DDP) with a three-layer encryption protocol to achieve a better privacy-utility tradeoff than existing DP-based approaches. An unbiased encoding algorithm is proposed to cope with floating-point values while largely reducing the mean squared error due to rounding. Our approach dispenses with the need for any trusted server and enables each party to add less noise while achieving the same privacy and similar utility guarantees as centralized differential privacy. Preliminary analysis and performance evaluation confirm the effectiveness of our approach, which achieves significantly higher accuracy than the local differential privacy approach and accuracy comparable to the centralized differential privacy approach.
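A minimal Python sketch of the distributed-noise idea behind DDP follows; the three-layer encryption protocol is omitted and all parameters are illustrative. Each of n parties adds Gaussian noise of variance sigma^2/n, so the decrypted sum carries the same total noise as centralized DP, and a stochastic-rounding style unbiased encoder (which may differ from the paper's exact algorithm) maps floats to integers for the crypto layer.

    import numpy as np

    # Sketch only: the encryption layer is omitted and names are hypothetical.
    rng = np.random.default_rng(0)
    n_parties, dim, sigma = 10, 5, 1.0

    local_updates = rng.normal(size=(n_parties, dim))    # stand-in local model updates

    # Each party adds noise with variance sigma^2 / n; the sum of the noise shares
    # then has variance sigma^2, the amount a trusted curator would add under
    # centralized DP, while no single update is released in the clear.
    noisy = local_updates + rng.normal(scale=sigma / np.sqrt(n_parties),
                                       size=(n_parties, dim))

    def stochastic_round(x, scale=2**16):
        # One common unbiased fixed-point encoder (stochastic rounding); the
        # paper's exact encoding algorithm may differ.
        y = x * scale
        low = np.floor(y)
        return (low + (rng.random(x.shape) < (y - low))).astype(np.int64)

    encoded = stochastic_round(noisy)                    # integers suitable for the crypto layer
    aggregate = encoded.sum(axis=0) / 2**16              # server's view after decrypting the sum
    residual = aggregate - local_updates.sum(axis=0)     # ~ N(0, sigma^2 I) plus small rounding error
    print(residual)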
2020-06-02
Aliasgari, Malihe, Simeone, Osvaldo, Kliewer, Jörg.  2019.  Distributed and Private Coded Matrix Computation with Flexible Communication Load. 2019 IEEE International Symposium on Information Theory (ISIT). :1092–1096.
Tensor operations, such as matrix multiplication, are central to large-scale machine learning applications. These operations can be carried out on a distributed computing platform with a master server at the user side and multiple workers in the cloud operating in parallel. For distributed platforms, it has recently been shown that coding over the input data matrices can reduce the computational delay, yielding a tradeoff between recovery threshold and communication load. In this work, we impose an additional security constraint on the data matrices and assume that workers can collude to eavesdrop on the content of these data matrices. Specifically, we introduce a novel class of secure codes, referred to as secure generalized PolyDot codes, that generalizes previously published non-secure versions of these codes for matrix multiplication. These codes extend the state of the art by allowing a flexible tradeoff between recovery threshold and communication load for a fixed maximum number of colluding workers.
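For context, here is a minimal Python sketch of a plain, non-secure polynomial code for distributed matrix multiplication, the family that secure generalized PolyDot codes build on; the secure construction additionally mixes random masking blocks into the encoders so that a bounded number of colluding workers learns nothing about the inputs. Block counts, evaluation points, and the decoder below are illustrative.

    import numpy as np

    # Toy sketch of a plain polynomial code (no security); block counts and
    # evaluation points are illustrative.
    def encode(A, B, m, n, xs):
        # Split A into m row blocks and B into n column blocks, then hand each
        # worker one evaluation of the encoding polynomials.
        A_blocks = np.split(A, m, axis=0)
        B_blocks = np.split(B, n, axis=1)
        shares = []
        for x in xs:
            A_tilde = sum(Ai * x**i for i, Ai in enumerate(A_blocks))
            B_tilde = sum(Bj * x**(j * m) for j, Bj in enumerate(B_blocks))
            shares.append((A_tilde, B_tilde))
        return shares

    def decode(results, xs, m, n, block_shape):
        # Any m*n worker products determine C(x) = sum_{i,j} A_i B_j x^(i + j*m),
        # a polynomial of degree m*n - 1; interpolate its block coefficients.
        deg = m * n
        V = np.vander(np.array(xs[:deg]), deg, increasing=True)
        flat = np.array([r.reshape(-1) for r in results[:deg]])
        coeffs = np.linalg.solve(V, flat)
        rows = [np.hstack([coeffs[i + j * m].reshape(block_shape) for j in range(n)])
                for i in range(m)]
        return np.vstack(rows)

    m, n, workers = 2, 2, 6
    A, B = np.random.randn(4, 3), np.random.randn(3, 4)
    xs = [1.0 + k for k in range(workers)]               # distinct evaluation points
    shares = encode(A, B, m, n, xs)
    results = [At @ Bt for At, Bt in shares]             # each worker's local product
    # Decode from the first m*n = 4 responses; the remaining workers are stragglers.
    C_hat = decode(results, xs, m, n, (A.shape[0] // m, B.shape[1] // n))
    assert np.allclose(C_hat, A @ B)

Roughly speaking, a secure variant would pad the encoding polynomials with uniformly random blocks in additional terms, raising the recovery threshold in exchange for keeping any fixed-size set of colluding workers ignorant of A and B.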