Biblio

Filters: Keyword is distributed differential privacy
2021-01-11
Lyu, L. 2020. Lightweight Crypto-Assisted Distributed Differential Privacy for Privacy-Preserving Distributed Learning. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
The emergence of distributed learning allows multiple participants to collaboratively train a global model: instead of releasing their private training data to the server, participants iteratively share their local model updates (parameters) with it. However, recent attacks demonstrate that sharing local model updates alone does not provide reasonable privacy guarantees, as those updates can leak significant information about participants' local training data. To address this issue, we present an alternative approach that combines distributed differential privacy (DDP) with a three-layer encryption protocol to achieve a better privacy-utility tradeoff than existing DP-based approaches. An unbiased encoding algorithm is proposed to cope with floating-point values while largely reducing the mean squared error caused by rounding. Our approach dispenses with the need for any trusted server and enables each party to add less noise while achieving the same privacy and similar utility guarantees as centralized differential privacy. Preliminary analysis and performance evaluation confirm the effectiveness of our approach, which achieves significantly higher accuracy than the local differential privacy approach and accuracy comparable to the centralized differential privacy approach.
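
The quantitative idea behind "each party adds less noise" can be illustrated in a few lines: if n parties each add independent Gaussian noise with variance sigma^2/n, the noise in the sum has variance sigma^2, matching a single centralized Gaussian mechanism. Below is a minimal numpy sketch of that distributed-noise aggregation; the name `ddp_aggregate` and all parameters are illustrative, and the paper's actual scheme additionally wraps the updates in a three-layer encryption protocol and an unbiased floating-point encoding, both omitted here.

```python
import numpy as np

def ddp_aggregate(local_updates, sigma_central, rng=None):
    """Sum local model updates under a distributed Gaussian mechanism.

    Each of the n parties adds Gaussian noise with standard deviation
    sigma_central / sqrt(n); the independent noise terms sum to a single
    Gaussian with standard deviation sigma_central, so the aggregate
    matches a centralized Gaussian mechanism without a trusted server.
    """
    rng = rng or np.random.default_rng()
    n = len(local_updates)
    per_party_sigma = sigma_central / np.sqrt(n)  # variance splits as sigma^2 / n
    noisy = [u + rng.normal(0.0, per_party_sigma, size=u.shape)
             for u in local_updates]
    return np.sum(noisy, axis=0)

# Example: 10 parties, each holding a 5-dimensional model update.
updates = [np.random.default_rng(i).normal(size=5) for i in range(10)]
aggregate = ddp_aggregate(updates, sigma_central=1.0)
```
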
2018-09-28
Lu, Z., Shen, H. 2017. A New Lower Bound of Privacy Budget for Distributed Differential Privacy. 2017 18th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT). :25–32.

Distributed data aggregation via summation (counting) helps us learn the insights behind raw data. However, such computation carries a high privacy risk from malicious collusion attacks: colluding adversaries infer a victim's private data from the gaps between the aggregation outputs and their own source data. Among the defenses against such collusion attacks, Distributed Differential Privacy (DDP) is notably effective at preserving privacy. Specifically, a DDP scheme guarantees global differential privacy (the presence or absence of any data curator barely impacts the aggregation outputs) by ensuring local differential privacy at each data curator. To guarantee the overall privacy of a distributed data aggregation system against malicious collusion attacks, some existing work on DDP schemes provides an estimated lower bound on the privacy budget for global differential privacy. However, two main problems remain: low data utility caused by using a large global function sensitivity, and an unknown privacy guarantee when the aggregation sensitivity of the whole system is less than the sum of the individual curators' aggregation sensitivities. To address these problems while ensuring distributed differential privacy, we provide a new lower bound on the privacy budget that works with an unconditional aggregation sensitivity of the whole distributed system. Moreover, we study the performance of our privacy bound under different scenarios of data updates. Both theoretical and experimental evaluations show that our privacy bound offers better global privacy performance than existing work.
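For concreteness, the standard distributed-Laplace construction that DDP summation schemes build on (not the paper's new bound) can be sketched as follows: Laplace noise is infinitely divisible into Gamma-distributed shares, so each curator can add one share locally and the shares sum to exactly the Laplace noise required for a global budget epsilon. The helper name and example parameters below are illustrative assumptions.

```python
import numpy as np

def curator_noise_share(n_curators, laplace_scale, rng):
    """One curator's noise share for a distributed Laplace mechanism.

    Laplace(0, b) equals the difference of two Gamma(1, b) variables,
    and Gamma is infinitely divisible, so splitting each Gamma across
    n curators (shape 1/n each) makes the *sum* of all local shares
    exactly Laplace(0, b) -- global DP from purely local noise.
    """
    shape = 1.0 / n_curators
    return rng.gamma(shape, laplace_scale) - rng.gamma(shape, laplace_scale)

rng = np.random.default_rng(0)
n, sensitivity, epsilon = 8, 1.0, 0.5
b = sensitivity / epsilon                      # Laplace scale for budget epsilon

counts = rng.integers(0, 100, size=n)          # each curator's local count
shares = [c + curator_noise_share(n, b, rng) for c in counts]
noisy_sum = sum(shares)                        # epsilon-DP aggregate count
```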

2018-02-21
Lyu, L., Law, Y. W., Jin, J., Palaniswami, M. 2017. Privacy-Preserving Aggregation of Smart Metering via Transformation and Encryption. 2017 IEEE Trustcom/BigDataSE/ICESS. :472–479.

This paper proposes a novel privacy-preserving smart metering system for aggregating distributed smart meter data. It addresses two important challenges: (i) individual users wish to publish sensitive smart metering data for specific purposes, and (ii) an untrusted aggregator aims to make queries on the aggregate data. We handle these challenges with two main techniques. First, we propose the Fourier Perturbation Algorithm (FPA) and the Wavelet Perturbation Algorithm (WPA), which combine Fourier/wavelet transformation with distributed differential privacy (DDP) to protect the released statistics, with provable sensitivity and error bounds. Second, we leverage an exponential ElGamal encryption mechanism to enable secure communication between the users and the untrusted aggregator. Standard differential privacy techniques perform poorly on time-series data, incurring Θ(n) noise to answer n queries and rendering the answers practically useless when n is large. Our proposed distributed differential privacy mechanism relies on Gaussian principles to generate distributed noise, guaranteeing differential privacy for each user with only O(1) error while remaining computationally simple and scalable. Compared with the Gaussian Perturbation Algorithm (GPA), which adds distributed Gaussian noise to the original data, the experimental results demonstrate the superiority of FPA and WPA, which add noise to the transformed coefficients instead.
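As a rough illustration of the Fourier-perturbation idea (perturb k transform coefficients rather than n raw samples, which is where the error savings come from), here is a minimal numpy sketch. The function name, noise calibration, and parameter values are illustrative assumptions, not the paper's exact FPA, which calibrates noise to the coefficients' sensitivity and generates it in a distributed fashion.

```python
import numpy as np

def fpa_release(series, k, noise_scale, rng=None):
    """Fourier-perturbation sketch: compress, perturb, reconstruct.

    Keep the first k DFT coefficients of the series, add noise there
    (perturbing k coefficients instead of n samples is what shrinks
    the error), zero the rest, and invert back to the time domain.
    """
    rng = rng or np.random.default_rng()
    coeffs = np.fft.rfft(series)
    kept = coeffs[:k] + rng.normal(0, noise_scale, k) \
                      + 1j * rng.normal(0, noise_scale, k)
    padded = np.zeros_like(coeffs)
    padded[:k] = kept
    return np.fft.irfft(padded, n=len(series))

# Example: a day of half-hourly meter readings, released with k = 8.
readings = np.abs(np.random.default_rng(1).normal(1.5, 0.5, 48))
released = fpa_release(readings, k=8, noise_scale=0.1)
```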

2018-01-23
Acar, A., Celik, Z. B., Aksu, H., Uluagac, A. S., McDaniel, P. 2017. Achieving Secure and Differentially Private Computations in Multiparty Settings. 2017 IEEE Symposium on Privacy-Aware Computing (PAC). :49–59.

Sharing and working on sensitive data in distributed settings, from healthcare to finance, is a major challenge due to security and privacy concerns. Secure multiparty computation (SMC) is a viable remedy, allowing distributed parties to perform computations while learning nothing about one another's data beyond the final result. Although SMC is instrumental in such distributed settings, it provides no guarantee against leaking information about individuals to adversaries. Differential privacy (DP) can be utilized to address this; however, achieving SMC with DP is not a trivial task either. In this paper, we propose a novel Secure Multiparty Distributed Differentially Private (SM-DDP) protocol to achieve secure and private computations in a multiparty environment. Specifically, with our protocol we simultaneously achieve SMC and DP in distributed settings, focusing on linear regression over horizontally distributed data. That is, parties do not see each other's data and, further, cannot infer information about individuals from the final constructed statistical model. Any statistical model whose function allows independent calculation of local statistics can be computed through our protocol. The protocol implements homomorphic encryption for SMC and the functional mechanism for DP to achieve the desired security and privacy guarantees. We first introduce the theoretical foundation of the SM-DDP protocol and then evaluate its efficacy and performance on two different datasets. Our results show that individual-level privacy can be achieved through the proposed protocol with distributed DP, which each party applies independently in a distributed fashion. Moreover, the SM-DDP protocol incurs minimal computational overhead, is scalable, and provides security and privacy guarantees.
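
A minimal sketch of the homomorphic secure-sum step that such a protocol relies on, using the additively homomorphic python-paillier (`phe`) library: each party perturbs its local statistic and encrypts it, and the aggregator sums ciphertexts without seeing any individual value. Plain Laplace noise stands in here for the paper's functional mechanism, key management (who holds the private key) and the regression-specific statistics are elided, and all names and parameters are illustrative assumptions.

```python
import numpy as np
from phe import paillier  # python-paillier: additively homomorphic encryption

# Key pair; in an SM-DDP-style protocol the decryption key would not be
# held by the untrusted aggregator (key management is elided here).
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

rng = np.random.default_rng(2)
epsilon, sensitivity = 1.0, 1.0

# Each party perturbs its local statistic (Laplace noise stands in for
# the functional mechanism) and encrypts the result.
local_stats = [3.2, 4.7, 2.9, 5.1]
ciphertexts = [public_key.encrypt(s + rng.laplace(0, sensitivity / epsilon))
               for s in local_stats]

# The aggregator adds ciphertexts homomorphically; only the total is
# ever decrypted, never any party's individual contribution.
encrypted_sum = sum(ciphertexts[1:], ciphertexts[0])
noisy_total = private_key.decrypt(encrypted_sum)
```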