Biblio

Filters: Keyword is Computing Theory and Privacy
2020-09-21
Ding, Hongfa, Peng, Changgen, Tian, Youliang, Xiang, Shuwen.  2019.  A Game Theoretical Analysis of Risk Adaptive Access Control for Privacy Preserving. 2019 International Conference on Networking and Network Applications (NaNA). :253–258.

More and more security and privacy issues are arising as new technologies, such as big data and cloud computing, become widely deployed. To reduce privacy breaches in access control systems operating in open, cross-domain environments, this paper proposes a game- and risk-based access control model for privacy preservation, built on Shannon information theory and game theory. After defining the notions of privacy risk and privacy-violation access, a high-level framework for game-theoretic, risk-based access control is proposed. We then present formulas for estimating the risk value of an access request and of a user, and construct and analyze the game model of the proposed access control as a multi-stage two-player game. A subgame-perfect Nash equilibrium exists at each stage of the risk-based access control, making it suitable for protecting privacy by limiting privacy-violation access requests.
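A minimal sketch of the flavor of such a mechanism, not the paper's exact model: each access request carries a Shannon self-information "privacy risk", and requests are granted only while a user's cumulative risk stays within a budget. All names and the budget rule here are illustrative assumptions.

```python
# Sketch of risk-adaptive access control: a request's risk is the summed
# Shannon self-information of the attributes it touches; access is granted
# stage by stage only while the user's cumulative risk fits the budget.
import math

def attribute_risk(p_disclosure):
    """Self-information of disclosing an attribute whose value has
    probability p_disclosure (rarer values leak more)."""
    return -math.log2(p_disclosure)

def request_risk(attribute_probs):
    return sum(attribute_risk(p) for p in attribute_probs)

def decide(user_risk_so_far, attribute_probs, budget):
    """Grant only while cumulative risk stays within budget, mimicking
    the limiting of privacy-violation access requests."""
    r = request_risk(attribute_probs)
    if user_risk_so_far + r <= budget:
        return "grant", user_risk_so_far + r
    return "deny", user_risk_so_far

state = 0.0
for req in [[0.5, 0.25], [0.01], [0.5]]:  # each request lists attribute probabilities
    decision, state = decide(state, req, budget=10.0)
    print(decision, round(state, 2))
```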

2020-04-20
Takbiri, Nazanin, Shao, Xiaozhe, Gao, Lixin, Pishro-Nik, Hossein.  2019.  Improving Privacy in Graphs Through Node Addition. 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton). :487–494.

The rapid growth of computer systems that generate graph data necessitates privacy-preserving mechanisms to protect users' identities. Since structure-based de-anonymization attacks can reveal users' identities even when the graph is anonymized by naïve ID removal, k-anonymity has recently been proposed to secure users' privacy against structure-based attacks. Most prior work ensures graph privacy by adding fake edges; however, in some applications edge addition or deletion can significantly change key properties of the graph. Motivated by this fact, we introduce a novel method that ensures privacy by adding fake nodes to the graph. First, we present a model that provides k-anonymity against one of the strongest attacks, the seed-based attack, in which the adversary knows a partial mapping between the original graph and the graph produced by the privacy-preserving mechanism. We show that even if the adversary knows the mapping of all nodes except one, the remaining node can still enjoy k-anonymity. We then turn to the privacy of graphs generated by inter-domain routing against degree attacks, in which the degree sequence of the graph is known to the adversary. To protect networks against this attack, we propose a method that adds fake nodes so that the degrees of all nodes have the same expected value.
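A toy sketch of the node-addition idea (the paper's actual construction is more careful): connect fake nodes to low-degree real nodes until every real node reaches a common target degree, blunting degree-sequence attacks. The capacity parameter is a hypothetical knob for spreading edges over several fake nodes.

```python
def add_fake_nodes(adj, fake_capacity):
    """adj: dict node -> set of neighbours. Each fake node absorbs at most
    fake_capacity edges so that no single fake node looks conspicuous."""
    target = max(len(nbrs) for nbrs in adj.values())
    fakes, current, load = [], None, 0
    for v in list(adj):                     # pad every original node to the target degree
        while len(adj[v]) < target:
            if current is None or load == fake_capacity or current in adj[v]:
                current = "fake%d" % len(fakes)
                fakes.append(current)
                adj[current] = set()
                load = 0
            adj[v].add(current)             # fake edge: real node <-> fake node
            adj[current].add(v)
            load += 1
    return fakes

g = {"a": {"b", "c"}, "b": {"a"}, "c": {"a"}}
print(add_fake_nodes(g, fake_capacity=2), {v: len(n) for v, n in g.items()})
```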

2020-01-06
Li, Xianxian, Luo, Chunfeng, Liu, Peng, Wang, Li-e.  2019.  Information Entropy Differential Privacy: A Differential Privacy Protection Data Method Based on Rough Set Theory. 2019 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :918–923.

Data have become an important asset for analysis and behavioral prediction, especially the correlations between data. Privacy protection has aroused academic and social concern given the amount of sensitive personal information involved. However, existing work assumes that records are independent of one another, which is unsuitable for correlated data: many studies either fail to achieve privacy protection or lose excessive information when applied to correlated data. Differential privacy achieves protection by injecting random noise into statistical results, but correlations between records effectively increase the background knowledge of adversaries. This paper therefore proposes an information entropy differential privacy solution for correlated data based on rough set theory. In this solution, rough set theory measures the degree of association between attributes, and information entropy quantifies the sensitivity of each attribute. Information entropy differential privacy is achieved by clustering attributes according to their correlation and adding personalized noise to each cluster, while preserving the correlations between data. Experiments show that our algorithm effectively preserves the correlations between attributes while protecting privacy.
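A simplified sketch of the pipeline described above, under assumptions: the clusters of correlated attributes are taken as given (standing in for the rough-set association step), and the per-cluster privacy budget is weighted by information entropy in one plausible way.

```python
# Sketch: weight each cluster's share of the privacy budget by the Shannon
# entropy of its columns, then add per-cluster Laplace noise to the counts.
import numpy as np

def entropy(col):
    _, counts = np.unique(col, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def entropy_dp_release(data, clusters, epsilon, sensitivity=1.0):
    """data: rows x attributes array; clusters: lists of column indices
    assumed to group strongly correlated attributes."""
    ents = np.array([sum(entropy(data[:, c]) for c in cl) for cl in clusters])
    budgets = epsilon * ents / ents.sum()   # one plausible entropy weighting
    released = {}
    for cl, eps_i in zip(clusters, budgets):
        counts = np.array([data[:, c].sum() for c in cl], dtype=float)
        noise = np.random.laplace(0.0, sensitivity / eps_i, size=len(cl))
        released[tuple(cl)] = counts + noise
    return released

X = np.random.randint(0, 2, size=(100, 4))
print(entropy_dp_release(X, clusters=[[0, 1], [2, 3]], epsilon=1.0))
```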

2019-12-16
Cerf, Sophie, Robu, Bogdan, Marchand, Nicolas, Mokhtar, Sonia Ben, Bouchenak, Sara.  2018.  A Control-Theoretic Approach for Location Privacy in Mobile Applications. 2018 IEEE Conference on Control Technology and Applications (CCTA). :1488-1493.

The prevalent use of mobile applications that exploit location information to improve their quality of service has raised privacy issues, particularly regarding the extraction of a user's points of interest (POIs). Many studies in the literature present algorithms that protect the users of such applications; however, these solutions often require a high level of expertise to be understood and tuned properly. In this paper, the first control-based approach to this problem is presented. The protection algorithm is treated as the ``physical'' plant and its parameters as control signals that guarantee privacy despite the user's mobility pattern. The remainder of the paper presents the first control formulation of a POI-related privacy measure, along with dynamic modeling and a simple yet efficient PI control strategy. Evaluation on simulated mobility records shows the relevance and efficiency of the presented approach.
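A minimal PI-control sketch in the spirit of the paper, with purely hypothetical signals and dynamics: the plant is a location-obfuscation mechanism whose noise scale is driven so that a measured privacy level tracks a setpoint.

```python
# PI loop: the controller commands the obfuscation noise scale; the plant
# maps noise to a measured privacy level (illustrative dynamics only).
def make_pi_controller(kp, ki, setpoint):
    integral = 0.0
    def step(measured, dt=1.0):
        nonlocal integral
        error = setpoint - measured
        integral += error * dt
        return kp * error + ki * integral   # commanded noise scale
    return step

def plant(noise_scale):
    """Stand-in privacy sensor: more location noise -> higher measured
    privacy, saturating at 1."""
    return min(1.0, 0.2 * max(noise_scale, 0.0))

controller = make_pi_controller(kp=0.5, ki=0.5, setpoint=0.8)
privacy = plant(0.0)
for t in range(15):
    noise = max(0.0, controller(privacy))
    privacy = plant(noise)
    print(t, round(noise, 2), round(privacy, 2))   # converges toward the 0.8 setpoint
```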

2019-11-11
Martiny, Karsten, Elenius, Daniel, Denker, Grit.  2018.  Protecting Privacy with a Declarative Policy Framework. 2018 IEEE 12th International Conference on Semantic Computing (ICSC). :227–234.

This article describes a privacy policy framework that can represent and reason about complex privacy policies. By using a Common Data Model together with a formal shareability theory, this framework enables the specification of expressive policies in a concise way without burdening the user with technical details of the underlying formalism. We also build a privacy policy decision engine that implements the framework and that has been deployed as the policy decision point in a novel enterprise privacy prototype system. Our policy decision engine supports two main uses: (1) interfacing with user interfaces for the creation, validation, and management of privacy policies; and (2) interfacing with systems that manage data requests and replies by coordinating privacy policy engine decisions and access to (encrypted) databases using various privacy enhancing technologies.
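To make the decision-engine idea concrete, here is a toy policy decision point with a hypothetical rule format, far simpler than the paper's Common Data Model and shareability theory: declarative rules state which data fields may be shared with which requester roles and purposes, evaluated first-match with a default deny.

```python
# Toy policy decision point: rules are data, the engine just interprets them.
POLICIES = [
    {"field": "location", "role": "emergency-service", "purpose": "dispatch", "effect": "permit"},
    {"field": "location", "role": "*",                 "purpose": "*",        "effect": "deny"},
]

def decide(field, role, purpose):
    """First matching rule wins; default-deny if no rule applies."""
    for rule in POLICIES:
        if (rule["field"] == field
                and rule["role"] in (role, "*")
                and rule["purpose"] in (purpose, "*")):
            return rule["effect"]
    return "deny"

print(decide("location", "emergency-service", "dispatch"))  # permit
print(decide("location", "advertiser", "marketing"))        # deny
```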

2017-10-10
Yuan, Ganzhao, Yang, Yin, Zhang, Zhenjie, Hao, Zhifeng.  2016.  Convex Optimization for Linear Query Processing Under Approximate Differential Privacy. Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :2005–2014.

Differential privacy enables organizations to collect accurate aggregates over sensitive data with strong, rigorous guarantees on individuals' privacy. Previous work has found that under differential privacy, computing multiple correlated aggregates as a batch, using an appropriate strategy, may yield higher accuracy than computing each of them independently. However, finding the best strategy that maximizes result accuracy is non-trivial, as it involves solving a complex constrained optimization program that appears to be non-convex. Hence, much past effort has been devoted to solving this non-convex optimization program, with approaches ranging from sophisticated heuristics to expensive numerical solutions; none of them, however, is guaranteed to find the optimal solution. This paper points out that under (ε, δ)-differential privacy, the optimal solution of the above constrained optimization problem can, rather surprisingly, be found by solving a simple and elegant convex optimization program. We then propose an efficient algorithm based on Newton's method, which we prove always converges to the optimal solution with a linear global convergence rate and a quadratic local convergence rate. Empirical evaluations demonstrate the accuracy and efficiency of the proposed solution.
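For flavor, here is the generic damped Newton iteration that such algorithms build on, demonstrated on a small strictly convex quadratic; the paper's actual objective over query strategies is considerably more involved.

```python
# Damped Newton's method with Armijo backtracking on a convex function.
import numpy as np

def newton(f, grad, hess, x0, tol=1e-8, max_iter=50):
    x = x0.astype(float)
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        step = np.linalg.solve(H, g)          # Newton direction
        t = 1.0
        while f(x - t * step) > f(x) - 0.25 * t * g @ step:  # backtracking line search
            t *= 0.5
        x = x - t * step
    return x

# Example: minimize f(x) = 0.5 x^T A x - b^T x, whose optimum solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
print(newton(f, lambda x: A @ x - b, lambda x: A, np.zeros(2)))  # -> [0.2, 0.4]
```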

Malluhi, Qutaibah M., Shikfa, Abdullatif, Trinh, Viet Cuong.  2016.  An Efficient Instance Hiding Scheme. Proceedings of the Seventh Symposium on Information and Communication Technology. :388–395.

Delegating computation, which applies to many practical contexts such as cloud computing and pay-TV systems, concerns the task in which a computationally weak client wants to securely compute a very complex function f on a given input with the help of a remote, computationally strong but untrusted server. The requirement is that the client's computation be much more efficient than evaluating f; ideally, it should run in constant time, or in NC0. This task has been investigated in several settings, including instance hiding, randomized encoding, fully homomorphic encryption, garbling schemes, and verifiable computation. In this work, we specifically consider the setting where only the client has an input and receives an output, also called instance hiding. Concretely, we first give a survey of delegating computation, and then propose an efficient instance-hiding scheme with passive input privacy. In our scheme, the client's computation is in NC0 and the server's is exactly that of the original function f. Regarding communication complexity, the client needs to transfer only 4|f| + |x| bits to the server, where |f| is the size of the circuit representing f and |x| is the length of the input of f.
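For intuition only, and much simpler than the paper's scheme: a toy instance-hiding protocol for a *linear* function f over integers mod q, where the pad's image under f is precomputed offline so that the client's online work is just modular additions. The modulus, weights, and protocol shape are illustrative assumptions.

```python
# Toy instance hiding for linear f: the server evaluates f on a masked
# input and learns nothing about x beyond its length.
import secrets

q = 2_147_483_647           # public prime modulus (assumption for the demo)
w = [3, 1, 4, 1, 5]         # public weights defining f(x) = <w, x> mod q

def f(x):
    return sum(wi * xi for wi, xi in zip(w, x)) % q

# Offline phase: client draws a random pad r and stores f(r).
r = [secrets.randbelow(q) for _ in w]
f_r = f(r)

# Online phase: client sends the masked instance u = x + r; server sees only u.
x = [10, 20, 30, 40, 50]
u = [(xi + ri) % q for xi, ri in zip(x, r)]
answer = f(u)                      # server-side evaluation on the hidden instance
recovered = (answer - f_r) % q     # client unmasks with a single subtraction
assert recovered == f(x)
print(recovered)
```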

Bassily, Raef, Nissim, Kobbi, Smith, Adam, Steinke, Thomas, Stemmer, Uri, Ullman, Jonathan.  2016.  Algorithmic Stability for Adaptive Data Analysis. Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing. :1046–1059.

Adaptivity is an important feature of data analysis - the choice of questions to ask about a dataset often depends on previous interactions with the same dataset. However, statistical validity is typically studied in a nonadaptive model, where all questions are specified before the dataset is drawn. Recent work by Dwork et al. (STOC, 2015) and Hardt and Ullman (FOCS, 2014) initiated a general formal study of this problem, and gave the first upper and lower bounds on the achievable generalization error for adaptive data analysis. Specifically, suppose there is an unknown distribution P and a set of n independent samples x is drawn from P. We seek an algorithm that, given x as input, accurately answers a sequence of adaptively chosen ``queries'' about the unknown distribution P. How many samples n must we draw from the distribution, as a function of the type of queries, the number of queries, and the desired level of accuracy? In this work we make two new contributions towards resolving this question: We give upper bounds on the number of samples n that are needed to answer statistical queries. The bounds improve and simplify the work of Dwork et al. (STOC, 2015), and have been applied in subsequent work by those authors (Science, 2015; NIPS, 2015). We prove the first upper bounds on the number of samples required to answer more general families of queries. These include arbitrary low-sensitivity queries and an important class of optimization queries (alternatively, risk minimization queries). As in Dwork et al., our algorithms are based on a connection with algorithmic stability in the form of differential privacy. We extend their work by giving a quantitatively optimal, more general, and simpler proof of their main theorem that the stability notion guaranteed by differential privacy implies low generalization error. We also show that weaker stability guarantees such as bounded KL divergence and total variation distance lead to correspondingly weaker generalization guarantees.
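A minimal sketch of the mechanism family analyzed here: adaptively chosen statistical queries (means of [0,1]-valued functions, whose sensitivity is 1/n) answered with Laplace noise, so that each query may depend on earlier noisy answers. The analyst and parameters are illustrative.

```python
# Adaptive statistical queries answered via the Laplace mechanism; the
# differential privacy of the answers is what limits overfitting to the sample.
import numpy as np

def answer_adaptively(data, make_next_query, k, epsilon_per_query):
    """make_next_query(history) returns a function q mapping a sample
    to [0, 1]; a mean query over n samples has sensitivity 1/n."""
    n, history = len(data), []
    for _ in range(k):
        q = make_next_query(history)
        true_mean = np.mean([q(x) for x in data])
        noisy = true_mean + np.random.laplace(0.0, 1.0 / (n * epsilon_per_query))
        history.append(noisy)
    return history

data = np.random.default_rng(1).uniform(size=1000)

def analyst(history):
    """Adaptive analyst: each threshold query depends on the last answer."""
    t = history[-1] if history else 0.5
    return lambda x: float(x <= t)

print(answer_adaptively(data, analyst, k=5, epsilon_per_query=0.1))
```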

Cummings, Rachel, Ligett, Katrina, Radhakrishnan, Jaikumar, Roth, Aaron, Wu, Zhiwei Steven.  2016.  Coordination Complexity: Small Information Coordinating Large Populations. Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science. :281–290.

We initiate the study of a quantity that we call coordination complexity. In a distributed optimization problem, the information defining a problem instance is distributed among n parties, who need to each choose an action, which jointly will form a solution to the optimization problem. The coordination complexity represents the minimal amount of information that a centralized coordinator, who has full knowledge of the problem instance, needs to broadcast in order to coordinate the n parties to play a nearly optimal solution. We show that upper bounds on the coordination complexity of a problem imply the existence of good jointly differentially private algorithms for solving that problem, which in turn are known to upper bound the price of anarchy in certain games with dynamically changing populations. We show several results. We fully characterize the coordination complexity for the problem of computing a many-to-one matching in a bipartite graph. Our upper bound in fact extends much more generally to the problem of solving a linearly separable convex program. We also give a different upper bound technique, which we use to bound the coordination complexity of coordinating a Nash equilibrium in a routing game, and of computing a stable matching.
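An illustrative sketch of the broadcast idea, not the paper's protocol: the coordinator publishes one price per shared resource, each party best-responds locally, and the broadcast length is independent of the number of parties. The price-adjustment loop is an assumption for the demo.

```python
# Coordination via a short broadcast: the coordinator computes resource
# prices offline; the prices are the only message the n parties ever receive.
def best_response(values, prices):
    """A party picks the resource maximizing its value minus the posted price."""
    return max(range(len(prices)), key=lambda j: values[j] - prices[j])

def coordinate(all_values, capacities, rounds=50, step=0.1):
    prices = [0.0] * len(capacities)
    for _ in range(rounds):                  # coordinator's offline computation
        demand = [0] * len(prices)
        for v in all_values:
            demand[best_response(v, prices)] += 1
        for j, (d, c) in enumerate(zip(demand, capacities)):
            prices[j] = max(0.0, prices[j] + step * (d - c))  # raise overdemanded prices
    return prices                            # the short broadcast

values = [[1.0, 0.6], [0.9, 0.8], [0.4, 0.7]]   # 3 parties, 2 resources
caps = [1, 2]
p = coordinate(values, caps)
print(p, [best_response(v, p) for v in values])  # capacity-respecting assignment
```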