Biblio

Found 160 results

Filters: Keyword is Differential privacy
2023-06-30
Liu, Kai, Wang, Jingjing, Hu, Yanjing.  2022.  Localized Differential Location Privacy Protection Scheme in Mobile Environment. 2022 IEEE 5th International Conference on Big Data and Artificial Intelligence (BDAI). :148–152.
When users request location services, their private information is easily exposed, and schemes that rely on a third-party server for location privacy protection place high demands on the server's trustworthiness. To address these problems, a localized differential privacy protection scheme for mobile environments is proposed. It uses a Markov chain model to generate a probability transition matrix and adds Laplace noise to construct a location obfuscation function that satisfies differential privacy; the obfuscation is performed on the client, which then constructs and uploads anonymous areas. Simulation experiments show that the scheme removes the need for a trusted third-party server and achieves high efficiency while maintaining high availability of the generated anonymous areas.
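As a quick illustration of the client-side Laplace step this abstract describes (a minimal sketch under our own naming and an assumed L1 sensitivity bound; the paper's Markov-chain transition matrix is not reproduced here):

```python
import numpy as np

def perturb_location(lat: float, lon: float, epsilon: float,
                     sensitivity: float):
    """Client-side Laplace perturbation of one coordinate pair.

    Assumes `sensitivity` is an L1 bound, summed over both axes, on
    how far two locations that must remain indistinguishable may
    differ; adding independent Laplace(sensitivity/epsilon) noise to
    each axis then yields an epsilon-DP release of the location.
    """
    scale = sensitivity / epsilon
    return (lat + np.random.laplace(0.0, scale),
            lon + np.random.laplace(0.0, scale))
```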
Gupta, Rishabh, Singh, Ashutosh Kumar.  2022.  Privacy-Preserving Cloud Data Model based on Differential Approach. 2022 Second International Conference on Power, Control and Computing Technologies (ICPC2T). :1–6.
With the variety of cloud services, the cloud service provider delivers machine learning services, which are used in many applications, including risk assessment, product recommendation, and image recognition. The cloud service provider initiates a protocol for the classification service that enables data owners to request an evaluation of their data. The owners may not entirely trust the cloud environment, since it is managed by third parties; protecting data privacy while sharing it is therefore a significant challenge. A novel privacy-preserving model is proposed, based on differential privacy and machine learning approaches. The proposed model allows multiple data owners to store, share, and utilize data in the cloud environment. Experiments are conducted on the Blood Transfusion Service Center, Phoneme, and Wilt datasets to demonstrate the proposed model's effectiveness in terms of accuracy, precision, recall, and F1-score. The results show that the proposed model achieves accuracy, precision, recall, and F1-score of up to 97.72%, 98.04%, 97.72%, and 98.80%, respectively.
Subramanian, Rishabh.  2022.  Differential Privacy Techniques for Healthcare Data. 2022 International Conference on Intelligent Data Science Technologies and Applications (IDSTA). :95–100.
This paper analyzes techniques to enable differential privacy by adding Laplace noise to healthcare data. First, since healthcare data are naturally constrained to take only integral values, we show that drawing noise only from integral values does not provide differential privacy; in contrast, rounding randomly drawn continuous values to the nearest integer does. Second, when a variable is constructed from two other variables, noise must be added to only one of them. Third, if the constructed variable is a fraction, then noise must be added to its constituent private variables, not to the fraction directly. Fourth, the accuracy of analytics following noise addition increases with the privacy budget, ϵ, and with the variance of the independent variable. Finally, the accuracy of analytics following noise addition increases disproportionately with an increase in the privacy budget when the variance of the independent variable is greater. Using actual healthcare data, we provide evidence supporting the two predictions on the accuracy of data analytics. Crucially, to enable accurate data analytics under differential privacy, we derive a relationship to extract the slope parameter of the original dataset from the slope parameter of the noisy dataset.
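A minimal sketch of the rounding observation, assuming a simple count query with sensitivity 1: because rounding is post-processing applied after the noisy draw, the ε-DP guarantee of the underlying Laplace mechanism is preserved while the integrality constraint is met.

```python
import numpy as np

def integral_laplace_release(true_count: int, epsilon: float,
                             sensitivity: float = 1.0) -> int:
    """Draw continuous Laplace noise, then round the noisy value to
    the nearest integer. Rounding is post-processing, so the release
    stays epsilon-DP while taking only integral values."""
    noisy = true_count + np.random.laplace(0.0, sensitivity / epsilon)
    return int(round(noisy))
```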
Song, Yuning, Ding, Liping, Liu, Xuehua, Du, Mo.  2022.  Differential Privacy Protection Algorithm Based on Zero Trust Architecture for Industrial Internet. 2022 IEEE 4th International Conference on Power, Intelligent Computing and Systems (ICPICS). :917–920.
The Zero Trust Architecture is an important part of the industrial Internet security protection standard. When analyzing industrial data for enterprise-level or industry-level applications, differential privacy (DP) is an important technology for protecting user privacy. However, the centralized and local DP models widely used today are only applicable to networks with fixed trust relationships and cannot cope with the dynamic security boundaries of a Zero Trust Architecture. In this paper, we design a differential privacy scheme that can be applied to a Zero Trust Architecture. It has a consistent privacy representation and the same noise mechanism in both centralized and local DP scenarios, and it can balance the strength of privacy protection against the flexibility of privacy mechanisms. We verify the algorithm experimentally, showing that with a maximum-expectation estimation method it obtains equal or even better utility than traditional methods at the same level of security.
Han, Liquan, Xie, Yushan, Fan, Di, Liu, Jinyuan.  2022.  Improved differential privacy K-means clustering algorithm for privacy budget allocation. 2022 International Conference on Computer Engineering and Artificial Intelligence (ICCEAI). :221–225.
In differential privacy clustering algorithms, the added random noise shifts the clustering centroids, which degrades the usability of the clustering results. To address this problem, we design a differential privacy K-means clustering algorithm that adaptively allocates the privacy budget according to the clustering effect: Adaptive Differential Privacy K-means (ADPK-means). The method is based on the evaluation results generated at the end of each iteration of the clustering algorithm. First, it dynamically evaluates the clustered sets at the end of each iteration by measuring the separation and tightness between them. Then, these evaluation results are introduced into the privacy budget allocation process by weighting the traditional allocation. Finally, different privacy budgets are assigned to different clusters in each iteration, adaptively adding perturbation noise to each set. Both theoretical and experimental results are analyzed, showing that the algorithm satisfies ε-differential privacy and achieves better availability of clustering results on three standard datasets.
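The abstract does not spell out the separation/tightness measure or the weighting direction, so the sketch below is only one plausible shape of such an adaptive split; `quality_scores` is a hypothetical per-cluster evaluation of our own invention.

```python
import numpy as np

def allocate_cluster_budgets(quality_scores, iteration_budget):
    """Hypothetical adaptive split of one iteration's privacy budget
    across clusters: clusters that evaluate as poorly formed (low
    separation/tightness score) receive a larger budget share, so
    their centroid updates get relatively less Laplace noise.
    Shares sum to `iteration_budget`, so sequential composition over
    the clusters spends exactly that much per iteration."""
    scores = np.asarray(quality_scores, dtype=float)
    weights = 1.0 / (scores + 1e-9)   # worse cluster -> larger share
    weights /= weights.sum()
    return weights * iteration_budget
```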
Ma, Xuebin, Yang, Ren, Zheng, Maobo.  2022.  RDP-WGAN: Image Data Privacy Protection Based on Rényi Differential Privacy. 2022 18th International Conference on Mobility, Sensing and Networking (MSN). :320–324.
In recent years, artificial intelligence technology based on image data has been widely used in various industries. Sound analysis and mining of image data can not only advance the technology field but also become a new engine driving economic development. However, privacy leakage has become an increasingly serious problem. To address privacy leakage in image data, this paper proposes the RDP-WGAN privacy protection framework, which deploys Rényi differential privacy (RDP) protection techniques in the training process of generative adversarial networks to obtain a generative model with differential privacy. This generative model is used to generate an unlimited number of synthetic datasets to complete various data analysis tasks in place of the sensitive datasets. Experimental results demonstrate that the RDP-WGAN framework provides privacy protection for sensitive image datasets while preserving the usefulness of the synthetic datasets.
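For flavor, here is a DP-SGD-style gradient sanitization step of the kind such training frameworks apply; this is a sketch under our own naming, not the paper's exact WGAN mechanism, and the RDP accounting that tracks cumulative loss is omitted.

```python
import numpy as np

def sanitize_gradient(per_sample_grads, clip_norm, noise_multiplier,
                      rng=None):
    """One DP-SGD-style step: clip each per-sample gradient to
    L2 norm <= clip_norm, sum, add Gaussian noise with standard
    deviation noise_multiplier * clip_norm, and average. An RDP
    accountant then tracks the privacy cost over many such steps."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=clipped[0].shape)
    return noisy_sum / len(clipped)
```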
Lu, Xiaotian, Piao, Chunhui, Han, Jianghe.  2022.  Differential Privacy High-dimensional Data Publishing Method Based on Bayesian Network. 2022 International Conference on Computer Engineering and Artificial Intelligence (ICCEAI). :623–627.
Ensuring high data availability while realizing privacy protection is a research hotspot in the field of privacy-preserving data publishing. In view of the unstable data availability of existing differential privacy high-dimensional data publishing methods based on Bayesian networks, this paper proposes an improved MEPrivBayes privacy-preserving data publishing method with two main improvements. First, to address the structural instability caused by the random selection of the Bayesian network's first node, this paper proposes a method for first-node selection and Bayesian network construction based on the Maximum Information Coefficient Matrix. Second, this paper proposes an elastic privacy budget allocation algorithm: after pre-setting differential privacy budget coefficients for all branch nodes and leaf nodes in the Bayesian network, the influence of branch nodes on their child nodes and the average correlation between leaf nodes and all other nodes are calculated, and a privacy budget allocation strategy is derived from these measures. An SVM multi-classifier is trained on the privacy-preserving data, with the original dataset used as input to evaluate prediction accuracy. The experimental results show that the proposed MEPrivBayes method provides higher data availability than the classical PrivBayes method, especially when the privacy budget is small (noise is large), where the availability of the data published by MEPrivBayes degrades less.
Mimoto, Tomoaki, Hashimoto, Masayuki, Yokoyama, Hiroyuki, Nakamura, Toru, Isohara, Takamasa, Kojima, Ryosuke, Hasegawa, Aki, Okuno, Yasushi.  2022.  Differential Privacy under Incalculable Sensitivity. 2022 6th International Conference on Cryptography, Security and Privacy (CSP). :27–31.
Differential privacy mechanisms have been proposed to guarantee the privacy of individuals in various types of statistical information. When constructing a probabilistic mechanism to satisfy differential privacy, it is necessary to consider the impact of an arbitrary record on its statistics, i.e., sensitivity, but there are situations where sensitivity is difficult to derive. In this paper, we first summarize the situations in which it is difficult to derive sensitivity in general, and then propose a definition equivalent to the conventional definition of differential privacy to deal with them. This definition considers neighboring datasets as in the conventional definition. Therefore, known differential privacy mechanisms can be applied. Next, as an example of the difficulty in deriving sensitivity, we focus on the t-test, a basic tool in statistical analysis, and show that a concrete differential privacy mechanism can be constructed in practice. Our proposed definition can be treated in the same way as the conventional differential privacy definition, and can be applied to cases where it is difficult to derive sensitivity.
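For reference, the conventional ε-DP definition over neighboring datasets that the paper's equivalent definition is measured against, together with the global sensitivity whose derivation is the obstacle (standard forms, not the paper's notation):

```latex
% Conventional epsilon-differential privacy and global sensitivity:
\Pr[\mathcal{M}(D) \in S] \le e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S]
\quad \text{for all measurable } S \text{ and neighboring } D, D',
\qquad
\Delta f = \max_{D \sim D'} \lVert f(D) - f(D') \rVert_{1}.
```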
Shi, Er-Mei, Liu, Jia-Xi, Ji, Yuan-Ming, Chang, Liang.  2022.  DP-BEGAN: A Generative Model of Differential Privacy Algorithm. 2022 International Conference on Computer Engineering and Artificial Intelligence (ICCEAI). :168–172.
In recent years, differential privacy has gradually become a standard definition in the field of data privacy protection. Differential privacy makes no assumptions about a privacy adversary's prior knowledge, so it is more stringent than existing privacy protection models and definitions. This property has enabled researchers to address deep learning problems constrained by privacy and security concerns, marking an important breakthrough and promoting further large-scale application. Combining differential privacy with BEGAN, we propose the DP-BEGAN framework. Differential privacy is realized by adding carefully designed noise to the gradients during GAN training, ensuring that the GAN can generate unlimited synthetic data that conforms to the statistical characteristics of the source data without disclosing privacy. We compare the framework with existing methods on public datasets. The results show that, under a given privacy budget, this method generates higher-quality privacy-protected data more efficiently, which can be used in a variety of data analysis tasks. The privacy loss is independent of the amount of synthetic data, so it can be applied to large datasets.
Shejy, Geocey, Chavan, Pallavi.  2022.  Sensitivity Support in Data Privacy Algorithms. 2022 2nd Asian Conference on Innovation in Technology (ASIANCON). :1–4.
Personal data privacy is a great concern for governments across the world, as citizens continuously generate huge amounts of data and industries use these data to improve user-centric services. There must be a reasonable balance between data privacy and data utility. Differential privacy is a promise by the data collector regarding the customer's personal privacy. Centralised Differential Privacy (CDP) performs output perturbation of users' data by applying the required privacy budget. It promises that the inclusion or exclusion of an individual's data in a dataset will not significantly change the output of a statistical query, offering an ε-differential privacy guarantee. CDP relies strongly on a trusted data collector and applies the global sensitivity of the data. Local Differential Privacy (LDP) lets users perturb their data locally, thereby guaranteeing privacy even with an untrusted data collector. Differential privacy algorithms handle parameters such as privacy budget, sensitivity, and data utility in different ways, mostly trying to maintain a trade-off between privacy and utility. This paper evaluates differential privacy algorithms with regard to the privacy support they offer according to the sensitivity of the data. Generalized application of the privacy budget is found to be ineffective compared with sensitivity-based usage of the privacy budget.
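As a concrete instance of the LDP setting described above, here is the textbook randomized-response mechanism (an illustration of local perturbation, not this paper's algorithm):

```python
import math
import random

def randomized_response(bit: bool, epsilon: float) -> bool:
    """Classic randomized response, an epsilon-LDP mechanism: the
    user reports the true bit with probability e^eps / (1 + e^eps)
    and the flipped bit otherwise, so the collector never sees the
    raw value."""
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_truth else not bit
```

Because the flip probability is public, the collector can de-bias aggregate counts, trading per-report utility for the stronger untrusted-collector model.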
2023-06-09
Zhang, Yue, Nan, Xiaoya, Zhou, Jialing, Wang, Shuai.  2022.  Design of Differential Privacy Protection Algorithms for Cyber-Physical Systems. 2022 International Conference on Intelligent Systems and Computational Intelligence (ICISCI). :29—34.
A new privacy-preserving Laplace consensus algorithm is designed in this paper to protect users' private data. The algorithm perturbs state transitions and information generation functions using exponentially decaying Laplace noise to resist attacks. The mean-square consensus and privacy protection performance are further studied. Finally, the theoretical results are verified through numerical simulations.
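An illustrative sketch of such a noisy consensus iteration, in the spirit of the abstract; the parameter names (`c`, `q`, the mixing matrix `W`) are assumptions, not the paper's notation.

```python
import numpy as np

def private_consensus(x0, W, steps, c=1.0, q=0.95, rng=None):
    """At step k each agent perturbs its state with Laplace noise of
    scale c * q**k (exponentially decaying) before mixing through a
    row-stochastic weight matrix W, so early iterates are heavily
    masked while the iteration can still converge."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        noise = rng.laplace(0.0, c * q**k, size=x.shape)
        x = W @ (x + noise)
    return x
```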
2023-05-19
Wu, Jingyi, Guo, Jinkang, Lv, Zhihan.  2022.  Deep Learning Driven Security in Digital Twins of Drone Network. ICC 2022 - IEEE International Conference on Communications. :1—6.
This study aims to explore the security issues and computational intelligence of a drone information system based on deep learning. Targeting the security of the drone system under attack, this study adopts an improved long short-term memory (LSTM) network to analyze cyber-physical system (CPS) data and predict the system's control-signal data before an attack occurs. At the same time, differential privacy frequent subgraph (DPFS) mining is introduced to keep data private, digital twins technology is used to map the drone's operating environment into physical space, and an attack prediction model for the drone digital-twin CPS is constructed based on differential-privacy-improved LSTM. Finally, the Tennessee Eastman (TE) process is used as a simulation platform to verify the model's performance. In addition, the proposed model is compared with the Bidirectional LSTM (BiLSTM) and Attention-BiLSTM models proposed by other scholars. The root mean square error (RMSE) of the proposed model is the smallest (0.20) when the number of hidden-layer nodes is 26. Comparison with actual flow values shows that the proposed algorithm is more accurate, with a better fit. Therefore, the constructed drone attack prediction model achieves higher prediction accuracy and noticeably better robustness while keeping errors bounded, providing an experimental basis for the future security and intelligent development of drone systems.
2023-05-12
Wei, Yuecen, Fu, Xingcheng, Sun, Qingyun, Peng, Hao, Wu, Jia, Wang, Jinyan, Li, Xianxian.  2022.  Heterogeneous Graph Neural Network for Privacy-Preserving Recommendation. 2022 IEEE International Conference on Data Mining (ICDM). :528–537.
With advances in deep learning, social networks are commonly modeled as heterogeneous graphs and learned with heterogeneous graph neural networks (HGNNs). Compared to homogeneous data, HGNNs absorb various aspects of information about individuals in the training stage, meaning more information is covered in the learning result, especially sensitive information. However, privacy-preserving methods for homogeneous graphs only preserve a single type of node attribute or relationship and cannot work effectively on heterogeneous graphs due to their complexity. To address this issue, we propose a novel heterogeneous graph neural network privacy-preserving method based on a differential privacy mechanism named HeteDP, which provides a double guarantee on graph features and topology. In particular, we first define a new attack scheme to reveal privacy leakage in heterogeneous graphs. Specifically, we design a two-stage pipeline framework, which includes a privacy-preserving feature encoder and a heterogeneous link reconstructor with gradient perturbation based on differential privacy, to tolerate data diversity and resist the attack. To better control the noise and promote model performance, we utilize a bi-level optimization pattern to allocate a suitable privacy budget to the above two modules. Our experiments on four public benchmarks show that the HeteDP method is equipped to resist heterogeneous graph privacy leakage with admirable model generalization.
ISSN: 2374-8486
Qin, Shuying, Fang, Chongrong, He, Jianping.  2022.  Towards Characterization of General Conditions for Correlated Differential Privacy. 2022 IEEE 19th International Conference on Mobile Ad Hoc and Smart Systems (MASS). :364–372.
Differential privacy is a widely-used metric, which provides rigorous privacy definitions and strong privacy guarantees. Much of the existing studies on differential privacy are based on datasets where the tuples are independent, and thus are not suitable for correlated data protection. In this paper, we focus on correlated differential privacy, by taking the data correlations and the prior knowledge of the initial data into account. The data correlations are modeled by Bayesian conditional probabilities, and the prior knowledge refers to the exact values of the data. We propose general correlated differential privacy conditions for the discrete and continuous random noise-adding mechanisms, respectively. In case that the conditions are inaccurate due to the insufficient prior knowledge, we introduce the tuple dependence based on rough set theory to improve the correlated differential privacy conditions. The obtained theoretical results reveal the relationship between the correlations and the privacy parameters. Moreover, the improved privacy condition helps strengthen the mechanism utility. Finally, evaluations are conducted over a micro-grid system to verify the privacy protection levels and utility guaranteed by correlated differential private mechanisms.
ISSN: 2155-6814
2023-03-31
Shrivastva, Krishna Mohan Pd, Rizvi, M.A., Singh, Shailendra.  2014.  Big Data Privacy Based on Differential Privacy a Hope for Big Data. 2014 International Conference on Computational Intelligence and Communication Networks. :776–781.
In the information age, data from diverse electronic, information, and communication technologies and processes (sensors, cloud services, personal archives, social networks, Internet activity, and enterprise systems) are growing exponentially. The most challenging issue is how to effectively manage such large and varied data; "big data" is the term coined for it. Due to its extraordinary scale, privacy and security are among the most critical challenges of big data, and at every stage of managing it there is a chance that privacy may be disclosed. Many techniques have been suggested and implemented for privacy preservation of large datasets, such as anonymization-based and encryption-based approaches, but due to the distinctive characteristics of big data (large volume, high velocity, and unstructured data) these techniques are not fully suitable. In this paper we analyze, discuss, and argue how an existing approach, "differential privacy", is suitable for big data. We first discuss differential privacy and then analyze its suitability for big data.
2023-01-20
Wu, Fazong, Wang, Xin, Yang, Ming, Zhang, Heng, Wu, Xiaoming, Yu, Jia.  2022.  Stealthy Attack Detection for Privacy-preserving Real-time Pricing in Smart Grids. 2022 13th Asian Control Conference (ASCC). :2012—2017.
Over the past decade, smart grids have been widely deployed. Real-time pricing can better address demand-side management in smart grids, but it requires managers to interact with consumers at the data level, which raises many privacy threats. Thus, we introduce differential privacy into real-time pricing for privacy protection. However, differential privacy leaves more room for an adversary to compromise the robustness of the system, which has not been well addressed in the literature. In this paper, we propose a novel active detection scheme against stealthy attacks and prove its correctness and effectiveness. Further, we conduct extensive experiments with real datasets from CER to verify the detection performance of the proposed scheme.
2023-01-06
Anastasakis, Zacharias, Psychogyios, Konstantinos, Velivassaki, Terpsi, Bourou, Stavroula, Voulkidis, Artemis, Skias, Dimitrios, Gonos, Antonis, Zahariadis, Theodore.  2022.  Enhancing Cyber Security in IoT Systems using FL-based IDS with Differential Privacy. 2022 Global Information Infrastructure and Networking Symposium (GIIS). :30—34.
Nowadays, IoT networks and devices exist in our everyday life, capturing and carrying unlimited data. However, the increasing penetration of connected systems and devices implies rising threats to cybersecurity, with IoT systems suffering from network attacks. Artificial Intelligence (AI) and Machine Learning take advantage of huge volumes of IoT network logs to enhance cybersecurity in IoT. However, these data often need to remain private. Federated Learning (FL) provides a potential solution: it enables collaborative training of an attack detection model among a set of federated nodes while preserving privacy, as data remain local and are never disclosed or processed on central servers. While FL is resilient and resolves, up to a point, data governance and ownership issues, it does not guarantee security and privacy by design. Adversaries could interfere with the communication process, expose network vulnerabilities, and manipulate the training process, thus affecting the performance of the trained model. In this paper, we present a federated learning model that can successfully detect network attacks in IoT systems. Moreover, we evaluate its performance under various settings of differential privacy as a privacy-preserving technique and various configurations of the participating nodes. We show that the proposed model protects privacy without meaningfully compromising performance: it incurs a limited impact of only ∼7% lower testing accuracy compared to the baseline while simultaneously guaranteeing security and applicability.
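A minimal sketch of how DP is commonly layered onto federated aggregation; the clipping and server-side Gaussian noise here are our assumptions, since the paper evaluates several DP settings rather than prescribing this one.

```python
import numpy as np

def dp_federated_round(client_updates, clip_norm, noise_std, rng=None):
    """One DP federated-averaging round: clip each node's model
    update to L2 norm <= clip_norm, average, and add Gaussian noise
    so that no single node's contribution dominates the new model."""
    rng = rng or np.random.default_rng()
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
               for u in client_updates]
    avg = np.mean(clipped, axis=0)
    return avg + rng.normal(0.0, noise_std, size=avg.shape)
```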
Yang, Xuefeng, Liu, Li, Zhang, Yinggang, Li, Yihao, Liu, Pan, Ai, Shili.  2022.  A Privacy-preserving Approach to Distributed Set-membership Estimation over Wireless Sensor Networks. 2022 9th International Conference on Dependable Systems and Their Applications (DSA). :974—979.
This paper focuses on systems over wireless sensor networks, specifically linear systems whose dynamics are discrete in time and time-varying, known as discrete-time linear time-varying systems (DLTVS). DLTVS are vulnerable to network attacks when sensors exchange information over the network, putting their security at risk. A DLTVS with privacy preservation is designed for this purpose. A set-membership estimator is designed by adding privacy noise obeying the Laplace distribution to the state at the initial moment. Simultaneously, the differential privacy of the system is analyzed. On this basis, the real state of the system and the form of the estimator for the desired distribution are analyzed. Finally, simulation examples are given, proving that the model with differential privacy added can obtain accurate estimates and ensure the security of the system state.
Golatkar, Aditya, Achille, Alessandro, Wang, Yu-Xiang, Roth, Aaron, Kearns, Michael, Soatto, Stefano.  2022.  Mixed Differential Privacy in Computer Vision. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :8366—8376.
We introduce AdaMix, an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data. While pre-training language models on large public datasets has enabled strong differential privacy (DP) guarantees with minor loss of accuracy, a similar practice yields punishing trade-offs in vision tasks. A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset. AdaMix incorporates few-shot training, or cross-modal zero-shot learning, on public data prior to private fine-tuning, to improve the trade-off. AdaMix reduces the error increase over the non-private upper bound from the baseline's 167–311%, on average across 6 datasets, to 68–92% depending on the desired privacy level selected by the user. AdaMix tackles a trade-off arising in visual classification, whereby the most privacy-sensitive data, corresponding to isolated points in representation space, are also critical for high classification accuracy. In addition, AdaMix comes with strong theoretical privacy guarantees and convergence analysis.
2022-09-20
Abuah, Chike, Silence, Alex, Darais, David, Near, Joseph P..  2021.  DDUO: General-Purpose Dynamic Analysis for Differential Privacy. 2021 IEEE 34th Computer Security Foundations Symposium (CSF). :1—15.
Differential privacy enables general statistical analysis of data with formal guarantees of privacy protection at the individual level. Tools that assist data analysts with utilizing differential privacy have frequently taken the form of programming languages and libraries. However, many existing programming languages designed for compositional verification of differential privacy impose significant burden on the programmer (in the form of complex type annotations). Supplementary library support for privacy analysis built on top of existing general-purpose languages has been more usable, but incapable of pervasive end-to-end enforcement of sensitivity analysis and privacy composition. We introduce DDuo, a dynamic analysis for enforcing differential privacy. DDuo is usable by non-experts: its analysis is automatic and it requires no additional type annotations. DDuo can be implemented as a library for existing programming languages; we present a reference implementation in Python which features moderate runtime overheads on realistic workloads. We include support for several data types, distance metrics and operations which are commonly used in modern machine learning programs. We also provide initial support for tracking the sensitivity of data transformations in popular Python libraries for data analysis. We formalize the novel core of the DDuo system and prove it sound for sensitivity analysis via a logical relation for metric preservation. We also illustrate DDuo's usability and flexibility through various case studies which implement state-of-the-art machine learning algorithms.
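To convey the flavor of dynamic (runtime) sensitivity tracking without type annotations, here is a toy wrapper in the same spirit; it is explicitly not the DDuo API, just a sketch of the idea.

```python
class Tracked:
    """Toy dynamic sensitivity tracker: wrap a value and propagate
    an upper bound on its sensitivity through operations at runtime,
    so the analysis is automatic and needs no type annotations."""
    def __init__(self, value, sens=1.0):
        self.value, self.sens = value, sens

    def __add__(self, other):
        if isinstance(other, Tracked):   # sensitivities compose additively
            return Tracked(self.value + other.value, self.sens + other.sens)
        return Tracked(self.value + other, self.sens)  # public shift

    def scale(self, k):
        # Scaling by a public constant scales the sensitivity bound.
        return Tracked(self.value * k, self.sens * abs(k))
```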
2022-08-26
Chen, Bo, Hawkins, Calvin, Yazdani, Kasra, Hale, Matthew.  2021.  Edge Differential Privacy for Algebraic Connectivity of Graphs. 2021 60th IEEE Conference on Decision and Control (CDC). :2764—2769.
Graphs are the dominant formalism for modeling multi-agent systems. The algebraic connectivity of a graph is particularly important because it provides the convergence rates of consensus algorithms that underlie many multi-agent control and optimization techniques. However, sharing the value of algebraic connectivity can inadvertently reveal sensitive information about the topology of a graph, such as connections in social networks. Therefore, in this work we present a method to release a graph’s algebraic connectivity under a graph-theoretic form of differential privacy, called edge differential privacy. Edge differential privacy obfuscates differences among graphs’ edge sets and thus conceals the absence or presence of sensitive connections therein. We provide privacy with bounded Laplace noise, which improves accuracy relative to conventional unbounded noise. The private algebraic connectivity values are analytically shown to provide accurate estimates of consensus convergence rates, as well as accurate bounds on the diameter of a graph and the mean distance between its nodes. Simulation results confirm the utility of private algebraic connectivity in these contexts.
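A hedged sketch of the release step: `edge_sensitivity` stands in for the bound the paper derives, and for simplicity we add ordinary Laplace noise and clamp the output (clamping is post-processing, so ε-DP is preserved), whereas the paper's bounded-Laplace construction is more refined.

```python
import numpy as np

def private_algebraic_connectivity(adj, epsilon, edge_sensitivity):
    """Release the Fiedler value (lambda_2) of a graph under edge DP.
    `edge_sensitivity` is an assumed bound on how much lambda_2 can
    change when one edge is added or removed."""
    adj = np.asarray(adj, dtype=float)
    L = np.diag(adj.sum(axis=1)) - adj       # graph Laplacian
    lambda2 = np.linalg.eigvalsh(L)[1]       # eigvalsh sorts ascending
    noisy = lambda2 + np.random.laplace(0.0, edge_sensitivity / epsilon)
    return float(np.clip(noisy, 0.0, adj.shape[0]))  # lambda_2 lies in [0, n]
```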
Liu, Tianyu, Di, Boya, Wang, Shupeng, Song, Lingyang.  2021.  A Privacy-Preserving Incentive Mechanism for Federated Cloud-Edge Learning. 2021 IEEE Global Communications Conference (GLOBECOM). :1—6.
The federated learning scheme enhances the privacy preservation through avoiding the private data uploading in cloud-edge computing. However, the attacks against the uploaded model updates still cause private data leakage which demotivates the privacy-sensitive participating edge devices. Facing this issue, we aim to design a privacy-preserving incentive mechanism for the federated cloud-edge learning (PFCEL) system such that 1) the edge devices are motivated to actively contribute to the updated model uploading, 2) a trade-off between the private data leakage and the model accuracy is achieved. We formulate the incentive design problem as a three-layer Stackelberg game, where the server-device interaction is further formulated as a contract design problem. Extensive numerical evaluations demonstrate the effectiveness of our designed mechanism in terms of privacy preservation and system utility.
Zuo, Zhiqiang, Tian, Ran, Wang, Yijing.  2021.  Bipartite Consensus for Multi-Agent Systems with Differential Privacy Constraint. 2021 40th Chinese Control Conference (CCC). :5062—5067.
This paper studies the differential privacy-preserving problem of discrete-time multi-agent systems (MASs) with antagonistic information, where the connected signed graph is structurally balanced. First, we introduce bipartite consensus definitions in the mean square and almost sure senses, respectively. Second, some criteria for mean square and almost sure bipartite consensus are derived, where the eventual value is related to the gauge matrix and the agents' initial states. Third, we design the ε-differential privacy algorithm and characterize the trade-off between differential privacy and system performance. Finally, simulations validate the effectiveness of the proposed algorithm.
Chowdhury, Sayak Ray, Zhou, Xingyu, Shroff, Ness.  2021.  Adaptive Control of Differentially Private Linear Quadratic Systems. 2021 IEEE International Symposium on Information Theory (ISIT). :485—490.
In this paper we study the problem of regret minimization in reinforcement learning (RL) under differential privacy constraints. This work is motivated by the wide range of RL applications for providing personalized service, where privacy concerns are becoming paramount. In contrast to previous works, we take the first step towards non-tabular RL settings, while providing a rigorous privacy guarantee. In particular, we consider the adaptive control of differentially private linear quadratic (LQ) systems. We develop the first private RL algorithm, Private-OFU-RL, which is able to attain a sub-linear regret while guaranteeing privacy protection. More importantly, the additional cost due to privacy is only on the order of ln(1/δ)^{1/4}/ε^{1/2} given privacy parameters ε, δ > 0. Through this process, we also provide a general procedure for adaptive control of LQ systems under changing regularizers, which not only generalizes previous non-private controls, but also serves as the basis for general private controls.
2022-07-29
Tao, Qian, Tong, Yongxin, Li, Shuyuan, Zeng, Yuxiang, Zhou, Zimu, Xu, Ke.  2021.  A Differentially Private Task Planning Framework for Spatial Crowdsourcing. 2021 22nd IEEE International Conference on Mobile Data Management (MDM). :9—18.
Spatial crowdsourcing has stimulated various new applications such as taxi calling and food delivery. A key enabler for these spatial crowdsourcing based applications is to plan routes for crowd workers to execute tasks given diverse requirements of workers and the spatial crowdsourcing platform. Despite extensive studies on task planning in spatial crowdsourcing, few have accounted for the location privacy of tasks, which may be misused by an untrustworthy platform. In this paper, we explore efficient task planning for workers while protecting the locations of tasks. Specifically, we define the Privacy-Preserving Task Planning (PPTP) problem, which aims at both total revenue maximization of the platform and differential privacy of task locations. We first apply the Laplacian mechanism to protect location privacy, and analyze its impact on the total revenue. Then we propose an effective and efficient task planning algorithm for the PPTP problem. Extensive experiments on both synthetic and real datasets validate the advantages of our algorithm in terms of total revenue and time cost.