Differential Privacy, 2014 (Part 2)
The theory of differential privacy is an active research area, and there are now differentially private algorithms for a wide range of problems. The work cited here covers big data and cyber-physical systems, as well as theoretical approaches. Citations are for articles published in 2014.
Anjum, Adeel; Anjum, Adnan, “Differentially Private K-Anonymity,” Frontiers of Information Technology (FIT), 2014 12th International Conference on, vol., no., pp. 153-158, 17-19 Dec. 2014. doi:10.1109/FIT.2014.37
Abstract: Research in privacy-preserving data publication can be broadly categorized into two classes. Syntactic privacy definitions have been under the scrutiny of the research community for many years, and a great deal of research is dedicated to developing algorithms and notions of syntactic privacy that thwart re-identification attacks. Sweeney and Samarati proposed a well-known syntactic privacy definition, coined K-anonymity, for thwarting linking attacks that use quasi-identifiers. Thanks to its conceptual simplicity, K-anonymity has been widely implemented as a practicable definition of syntactic privacy, and owing to algorithmic advances for computing K-anonymous versions of micro-data, it has attained considerable popularity. Semantic privacy definitions do not model the adversary's background knowledge but instead force the sanitization algorithms (mechanisms) to satisfy a strong semantic property by way of random processes. Though semantic privacy definitions are theoretically immune to any kind of adversarial attack, their applicability in real-life scenarios has come under criticism. To make semantic definitions more practical, the research community has focused its attention on combining the practicality of syntactic privacy with the strength of semantic approaches [7], so that we may in the near future benefit from both research tracks.
Keywords: Data models; Data privacy; Noise measurement; Partitioning algorithms; Privacy; Semantics; Syntactics; Data Privacy; Differential Privacy; K-anonymity; Semantic Privacy; Syntactic Privacy (ID#: 15-6083)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7118391&isnumber=7118353
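The hybrid direction this abstract points to can be made concrete with a small sketch: generalize a quasi-identifier into coarse buckets and suppress groups smaller than k (the syntactic side), then release the surviving group counts through the Laplace mechanism (the semantic side). This is a minimal illustration with assumed parameters, not the mechanism proposed in the paper:

import numpy as np

def k_anonymize_then_release(ages, k=5, epsilon=1.0, bucket=10):
    # Generalize the quasi-identifier (age) into coarse buckets.
    groups = {}
    for a in ages:
        g = int(a) // bucket * bucket
        groups[g] = groups.get(g, 0) + 1
    released = {}
    for g, n in groups.items():
        if n < k:
            continue  # suppression enforces k-anonymity
        # A counting query has sensitivity 1, so Laplace(1/epsilon) noise
        # gives epsilon-DP per count (disjoint buckets compose in parallel).
        released[g] = n + np.random.laplace(scale=1.0 / epsilon)
    return released

print(k_anonymize_then_release(np.random.randint(18, 90, size=500)))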
Zhigang Zhou; Hongli Zhang; Qiang Zhang; Yang Xu; Panpan Li, “Privacy-Preserving Granular Data Retrieval Indexes for Outsourced Cloud Data,” Global Communications Conference (GLOBECOM), 2014 IEEE, vol., no., pp. 601-606, 8-12 Dec. 2014. doi:10.1109/GLOCOM.2014.7036873
Abstract: Storage as a service has become an important paradigm in cloud computing for its great flexibility and economic savings. Since data owners no longer physically possess the storage of their data, it also brings many new challenges for data security and management. Several techniques have been investigated for enabling such services, including encryption and fine-grained access control. However, these techniques address only a "yes or no" question: whether a user has permission to access the corresponding data. In this paper, we investigate how to provide different granular information views for different users. Our mechanism first constructs the relationship between keywords and data files based on a Galois connection. We then build data retrieval indexes with a variable threshold, so that granular data retrieval service can be supported by adjusting the threshold for different users. Moreover, to prevent privacy disclosure, we propose a differentially private release scheme based on the proposed index technique. We prove the privacy-preserving guarantee of the proposed mechanism, and extensive experiments further demonstrate its validity.
Keywords: cloud computing; data privacy; granular computing; information retrieval; outsourcing; Galois connection; access permissions; data files; data management; data owners; data security; differentially private release scheme; granular data retrieval service; granular information; outsourced cloud data; privacy disclosure prevention; privacy-preserving granular data retrieval indexes; privacy-preserving guarantee; storage-as-a-service; variable threshold; Access control; Cloud computing; Data privacy; Indexes; Lattices; Privacy; cloud computing; data indexes; differential privacy; fuzzy retrieval; granular data retrieval (ID#: 15-6084)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7036873&isnumber=7036769
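The Galois connection at the core of this index can be sketched in a few lines: the extent operator maps a keyword set to the files containing all of those keywords, the intent operator maps a file set to their shared keywords, and a variable threshold loosens retrieval per user. A toy illustration with a hypothetical corpus, not the paper's construction:

corpus = {
    "f1": {"cloud", "privacy"},
    "f2": {"cloud", "index"},
    "f3": {"privacy", "index", "cloud"},
}

def files_of(keywords):
    # Extent operator: files containing every keyword in the set.
    return {f for f, kws in corpus.items() if keywords <= kws}

def keywords_of(files):
    # Intent operator: keywords shared by every file in the set.
    sets = [corpus[f] for f in files]
    return set.intersection(*sets) if sets else set()

def retrieve(query, threshold):
    # Granular retrieval: files matching at least `threshold` query keywords.
    return {f for f, kws in corpus.items() if len(kws & query) >= threshold}

print(files_of({"cloud", "privacy"}))                      # {'f1', 'f3'}
print(retrieve({"cloud", "privacy", "index"}, threshold=2))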
Saravanan, M.; Thoufeeq, A.M.; Akshaya, S.; Jayasre Manchari, V.L., “Exploring New Privacy Approaches in a Scalable Classification Framework,” Data Science and Advanced Analytics (DSAA), 2014 International Conference on, vol., no., pp. 209-215, Oct. 30-Nov. 1, 2014. doi:10.1109/DSAA.2014.7058075
Abstract: Recent advancements in Information and Communication Technologies (ICT) enable many organizations to collect, store and control massive amounts of various types of details about individuals from their regular transactions (credit card, mobile phone, smart meter, etc.). While using this wealth of information for personalized recommendations provides enormous opportunities for data mining (or machine learning) tasks, there is a need to address the challenge of preserving individuals' privacy while running predictive analytics on big data. Privacy Preserving Data Mining (PPDM) in these applications is particularly challenging, because it processes large volumes of complex, heterogeneous, and dynamic details about individuals. Ensuring that privacy-protected data remains useful in the intended applications, such as building accurate data mining models or enabling complex analytic tasks, is essential. Differential privacy has been tried with a few PPDM methods and is immune to attacks using auxiliary information. In this paper, we propose a distributed implementation of the C4.5 decision tree algorithm based on the MapReduce computing model and run extensive experiments on three different datasets using a Hadoop cluster. The novelty of this work is to experiment with two different privacy methods: the first uses perturbed data in the decision tree algorithm for prediction in privacy-preserving data sharing, and the second applies raw data to a privacy-preserving decision tree algorithm for private data analysis. In addition, we propose a hybrid technique combining the two methods to keep accuracy (utility) and privacy at an acceptable level. The proposed privacy approaches have two potential benefits in the context of data mining tasks: they allow service providers to outsource data mining tasks without exposing the raw data, and they allow data providers to share data access with third parties while limiting privacy risks.
Keywords: data mining; data privacy; decision trees; learning (artificial intelligence); C4.5 decision tree algorithm; Hadoop Cluster; ICT; big data; differential privacy; information and communication technologies; machine learning; map reduce computing model; personalized recommendation; privacy preserving data mining; private data analysis; scalable classification; Big data; Classification algorithms; Data privacy; Decision trees; Noise; Privacy; Scalability; Hybrid data privacy; Map Reduce Framework; Privacy Approaches; Privacy Preserving data Mining; Scalability (ID#: 15-6085)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058075&isnumber=7058031
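One common way to make a decision-tree induction step differentially private, consistent with the perturbation idea described above, is to compute split qualities from Laplace-noised class counts. A sketch under assumed names; the paper's exact MapReduce/C4.5 perturbation may differ:

import numpy as np

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def noisy_info_gain(y, x, epsilon):
    # Information gain of a binary attribute x for binary labels y,
    # computed from Laplace-noised class counts (sensitivity 1 per count).
    def noisy_counts(labels):
        c = np.bincount(labels, minlength=2).astype(float)
        return np.clip(c + np.random.laplace(scale=1.0 / epsilon, size=2), 0.5, None)
    total = noisy_counts(y)
    left, right = noisy_counts(y[x == 0]), noisy_counts(y[x == 1])
    n = total.sum()
    weighted = (left.sum() / n) * entropy(left) + (right.sum() / n) * entropy(right)
    return entropy(total) - weighted

y = np.random.randint(0, 2, size=300)   # toy labels
x = np.random.randint(0, 2, size=300)   # toy binary attribute
print(noisy_info_gain(y, x, epsilon=1.0))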
Paverd, A.; Martin, A.; Brown, I., “Privacy-Enhanced Bi-Directional Communication in the Smart Grid Using Trusted Computing,” Smart Grid Communications (SmartGridComm), 2014 IEEE International Conference on, vol., no., pp. 872-877, 3-6 Nov. 2014. doi:10.1109/SmartGridComm.2014.7007758
Abstract: Although privacy concerns in smart metering have been widely studied, relatively little attention has been given to privacy in bi-directional communication between consumers and service providers. Full bi-directional communication is necessary for incentive-based demand response (DR) protocols, such as demand bidding, in which consumers bid to reduce their energy consumption. However, this can reveal private information about consumers. Existing proposals for privacy-enhancing protocols do not support bi-directional communication. To address this challenge, we present a privacy-enhancing communication architecture that incorporates all three major information flows (network monitoring, billing and bi-directional DR) using a combination of spatial and temporal aggregation and differential privacy. The key element of our architecture is the Trustworthy Remote Entity (TRE), a node that is singularly trusted by mutually distrusting entities. The TRE differs from a trusted third party in that it uses Trusted Computing approaches and techniques to provide a technical foundation for its trustworthiness. An automated formal analysis of our communication architecture shows that it achieves its security and privacy objectives with respect to a previously-defined adversary model. This is therefore the first application of privacy-enhancing techniques to bi-directional smart grid communication between mutually distrusting agents.
Keywords: data privacy; energy consumption; incentive schemes; invoicing; power engineering computing; power system measurement; protocols; smart meters; smart power grids; trusted computing; TRE; automated formal analysis; bidirectional DR information flow; billing information flow; differential privacy; energy consumption reduction; incentive-based demand response protocol; network monitoring information flow; privacy-enhanced bidirectional smart grid communication architecture; privacy-enhancing protocol; smart metering; spatial aggregation; temporal aggregation; trusted computing; trustworthy remote entity; Bidirectional control; Computer architecture; Monitoring; Privacy; Protocols; Security; Smart grids (ID#: 15-6086)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7007758&isnumber=7007609
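The spatial-aggregation-plus-differential-privacy step in such an architecture can be illustrated with a one-function sketch: sum the readings of a neighborhood and add Laplace noise calibrated to one household's maximum possible contribution. Names and constants are illustrative, not taken from the paper:

import numpy as np

def dp_neighborhood_total(readings, epsilon, max_kwh):
    # Removing one household changes the sum by at most max_kwh, so
    # Laplace(max_kwh/epsilon) noise gives epsilon-DP for the aggregate.
    return float(np.sum(readings)) + np.random.laplace(scale=max_kwh / epsilon)

readings = np.random.uniform(0.0, 5.0, size=200)   # kWh in one interval
print(dp_neighborhood_total(readings, epsilon=0.5, max_kwh=5.0))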
Jun Yang; Yun Li, “Differentially Private Feature Selection,” Neural Networks (IJCNN), 2014 International Joint Conference on, vol., no., pp. 4182-4189, 6-11 July 2014. doi:10.1109/IJCNN.2014.6889613
Abstract: Privacy-preserving data analysis has gained significant interest across several research communities. Current research mainly focuses on privacy-preserving classification and regression. However, feature selection is also an essential component of data analysis: it can be used to reduce data dimensionality and to discover knowledge, such as inherent variables in the data. In this paper, in order to mine sensitive data efficiently, a privacy-preserving feature selection algorithm based on local learning and differential privacy is proposed and analyzed in theory. We also conduct experiments on benchmark data sets. The experimental results show that our algorithm can preserve data privacy to some extent.
Keywords: data analysis; data mining; data privacy; learning (artificial intelligence); data dimensionality reduction; differential privacy; differentially private feature selection; feature selection; knowledge discovery; local learning; privacy preserving feature selection algorithm; privacy-preserving classification; privacy-preserving data analysis; privacy-preserving regression; Accuracy; Algorithm design and analysis; Computational modeling; Data privacy; Logistics; Privacy; Vectors (ID#: 15-6087)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889613&isnumber=6889358
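A simple way to realize differentially private feature selection, in the spirit of this abstract (the paper's local-learning scoring is not reproduced here), is to perturb per-feature relevance scores and report the top k. A sketch with an assumed score sensitivity:

import numpy as np

def dp_select_features(scores, k, epsilon, sensitivity):
    # One-shot noisy top-k: split the budget across the k reported
    # features by scaling the per-score Laplace noise with k.
    scale = sensitivity * k / epsilon
    noisy = scores + np.random.laplace(scale=scale, size=scores.shape)
    return np.argsort(noisy)[::-1][:k]

scores = np.abs(np.random.randn(30))   # stand-in relevance scores
print(dp_select_features(scores, k=5, epsilon=1.0, sensitivity=0.1))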
Koufogiannis, F.; Shuo Han; Pappas, G.J., “Computation of Privacy-Preserving Prices in Smart Grids,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 2142-2147, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039715
Abstract: Demand management through pricing is a modern approach that can improve the efficiency of modern power networks. However, computing optimal prices requires access to data that individuals consider private. We present a novel approach for computing prices while providing privacy guarantees under the differential privacy framework. Differentially private prices are computed through a distributed utility maximization problem with each individual perturbing their own utility function. Privacy concerning temporal localization and monitoring of an individual's activity is enforced in the process. The proposed scheme provides formal privacy guarantees and its performance-privacy trade-off is evaluated quantitatively.
Keywords: power system control; pricing; smart power grids; computation; demand management; differential privacy framework; distributed utility maximization problem; formal privacy; modern power networks; performance-privacy trade-off; pricing; privacy-preserving prices; smart grids; temporal localization; utility function; Electricity; Monitoring; Optimization; Power demand; Pricing; Privacy; Smart grids (ID#: 15-6088)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039715&isnumber=7039338
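The idea of users perturbing their own utility functions can be sketched in a toy market: each consumer holds a private linear demand d_i(p) = a_i - p, adds Laplace noise to the slope a_i before reporting, and the price clears aggregate perturbed demand against supply. This illustrates the principle only, not the paper's distributed optimization:

import numpy as np

def dp_clearing_price(demand_slopes, supply, epsilon, slope_bound):
    # Each user perturbs their own parameter before it leaves their hands.
    noisy = demand_slopes + np.random.laplace(scale=slope_bound / epsilon,
                                              size=demand_slopes.shape)
    # Clear the market: sum(a_i - p) = supply  =>  p = (sum a_i - supply)/n.
    n = len(noisy)
    return (noisy.sum() - supply) / n

slopes = np.random.uniform(2.0, 6.0, size=100)   # private demand parameters
print(dp_clearing_price(slopes, supply=300.0, epsilon=1.0, slope_bound=6.0))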
Wentian Lu; Miklau, G.; Gupta, V., “Generating Private Synthetic Databases for Untrusted System Evaluation,” Data Engineering (ICDE), 2014 IEEE 30th International Conference on, vol., no., pp. 652-663, March 31-April 4, 2014. doi:10.1109/ICDE.2014.6816689
Abstract: Evaluating the performance of database systems is crucial when database vendors or researchers are developing new technologies. But such evaluation tasks rely heavily on actual data and query workloads that are often unavailable to researchers due to privacy restrictions. To overcome this barrier, we propose a framework for the release of a synthetic database which accurately models selected performance properties of the original database. We improve on prior work on synthetic database generation by providing a formal, rigorous guarantee of privacy. Accuracy is achieved by generating synthetic data using a carefully selected set of statistical properties of the original data which balance privacy loss with relevance to the given query workload. An important contribution of our framework is an extension of standard differential privacy to multiple tables.
Keywords: data privacy; database management systems; statistical analysis; trusted computing; balance privacy loss; database researchers; database vendors; differential privacy; privacy guarantee; privacy restrictions; private synthetic database generation; query workloads; statistical properties; synthetic data generation; untrusted system evaluation; Aggregates; Data privacy; Databases; Noise; Privacy; Sensitivity; Standards (ID#: 15-6089)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816689&isnumber=6816620
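The baseline behind differentially private synthetic data is easy to state: take a noisy histogram of the original attribute values, renormalize, and sample synthetic rows from it. The paper improves on this by choosing workload-relevant statistics and extending differential privacy to multiple tables; the sketch below is only the textbook baseline:

import numpy as np

def synthesize(rows, epsilon, domain):
    # (1) epsilon-DP histogram over the value domain (sensitivity 1).
    hist = np.array([np.sum(rows == v) for v in domain], dtype=float)
    noisy = np.clip(hist + np.random.laplace(scale=1.0 / epsilon, size=hist.shape), 0, None)
    # (2) Sample synthetic rows from the renormalized noisy histogram.
    probs = noisy / noisy.sum()
    return np.random.choice(domain, size=len(rows), p=probs)

data = np.random.choice([1, 2, 3, 4], size=1000, p=[0.5, 0.2, 0.2, 0.1])
print(np.bincount(synthesize(data, epsilon=0.5, domain=np.arange(1, 5))))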
Riboni, D.; Bettini, C., “Differentially-Private Release of Check-in Data for Venue Recommendation,” Pervasive Computing and Communications (PerCom), 2014 IEEE International Conference on, vol., no., pp. 190-198, 24-28 March 2014. doi:10.1109/PerCom.2014.6813960
Abstract: Recommender systems suggesting venues offer very useful services to people on the move and a great business opportunity for advertisers. These systems suggest venues by matching the current context of the user with the venue features, and consider the popularity of venues, based on the number of visits (“check-ins”) that they received. Check-ins may be explicitly communicated by users to geo-social networks, or implicitly derived by analysing location data collected by mobile services. In general, the visibility of explicit check-ins is limited to friends in the social network, while the visibility of implicit check-ins is limited to the service provider. Exposing check-ins to unauthorized users is a privacy threat, since recurring presence in given locations may reveal political opinions, religious beliefs, or sexual orientation, as well as absence from other locations where the user is supposed to be. Hence, on the one hand, mobile app providers host valuable information that recommender system providers would like to buy and use to improve their systems; on the other hand, we recognize serious privacy issues in releasing that information. In this paper, we solve this dilemma by providing formal privacy guarantees to users and trusted mobile providers while preserving the utility of check-in information for recommendation purposes. Our technique is based on the use of differential privacy methods integrated with a pre-filtering process, and protects against both an untrusted recommender system and its users, willing to infer the venues and sensitive locations visited by other users. Extensive experiments with a large dataset of real users' check-ins show the effectiveness of our methods.
Keywords: data privacy; mobile computing; recommender systems; social networking (online); advertisers; business opportunity; check-in data; differential privacy methods; differentially-private release; explicit check-ins; formal privacy; geo-social networks; implicit check-ins; location data analysis; mobile app providers; mobile services; political opinions; prefiltering process; religious beliefs; sexual orientation; untrusted recommender system; venue recommendation; Context; Data privacy; Mobile communication; Pervasive computing; Privacy; Recommender systems; Sensitivity (ID#: 15-6090)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6813960&isnumber=6813930
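A minimal rendering of the pre-filtering-plus-Laplace recipe described above: drop venues below a popularity threshold, then add Laplace noise to the surviving per-venue check-in counts. It assumes each user contributes at most one check-in per venue (so each count has sensitivity 1); the paper's actual pre-filtering and release are more involved:

import numpy as np

def dp_release_checkins(counts, epsilon, prefilter_min=10):
    released = {}
    for venue, c in counts.items():
        if c < prefilter_min:
            continue  # pre-filter: unpopular venues are never released
        # Sensitivity 1 under the one-check-in-per-venue assumption.
        released[venue] = max(0.0, c + np.random.laplace(scale=1.0 / epsilon))
    return released

counts = {"cafe": 120, "museum": 45, "clinic": 3}   # toy check-in counts
print(dp_release_checkins(counts, epsilon=1.0))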
Patil, A.; Singh, S., “Differential Private Random Forest,” Advances in Computing, Communications and Informatics (ICACCI), 2014 International Conference on, vol., no., pp. 2623-2630, 24-27 Sept. 2014. doi:10.1109/ICACCI.2014.6968348
Abstract: Organizations, be they private or public, often collect personal information about the individuals who are their customers or clients. This personal information is private and sensitive and has to be secured from data mining algorithms that an adversary may apply to gain access to it. In this paper we consider the problem of securing such private and sensitive information when it is used in a random forest classifier, within the framework of differential privacy. We incorporate the concept of differential privacy into the classical random forest algorithm. Experimental results show that quality functions such as information gain, the max operator and the gini index give almost equal accuracy regardless of their sensitivity to noise. Also, the accuracy of the classical random forest and the differentially private random forest is almost equal for different dataset sizes. The proposed algorithm works for datasets with categorical as well as continuous attributes.
Keywords: data mining; data privacy; learning (artificial intelligence); Gini index; data mining algorithm; differential privacy; differential private random forest; information gain; max operator; personal information; private information; sensitive information; Accuracy; Data privacy; Indexes; Noise; Privacy; Sensitivity; Vegetation (ID#: 15-6091)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6968348&isnumber=6968191
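Differentially private decision trees typically choose split attributes with the exponential mechanism, scoring candidates with a quality function such as information gain or the gini index, which is exactly the design space the abstract compares. A generic sketch (quality values and their sensitivity are assumed inputs):

import numpy as np

def exp_mech_choice(qualities, epsilon, sensitivity):
    # Exponential mechanism: pick index i with prob proportional to
    # exp(epsilon * q_i / (2 * sensitivity)); shift logits for stability.
    logits = epsilon * np.asarray(qualities, dtype=float) / (2.0 * sensitivity)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return np.random.choice(len(qualities), p=probs)

# Toy: information gain of 4 candidate attributes; for binary labels the
# gain is bounded by 1 bit, which we use as the sensitivity here.
print(exp_mech_choice([0.30, 0.12, 0.25, 0.05], epsilon=1.0, sensitivity=1.0))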
Bassily, R.; Smith, A.; Thakurta, A., “Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds,” Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on, vol., no., pp. 464-473, 18-21 Oct. 2014. doi:10.1109/FOCS.2014.56
Abstract: Convex empirical risk minimization is a basic tool in machine learning and statistics. We provide new algorithms and matching lower bounds for differentially private convex empirical risk minimization assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded. We provide a separate set of algorithms and matching lower bounds for the setting in which the loss functions are known to also be strongly convex. Our algorithms run in polynomial time, and in some cases even match the optimal nonprivate running time (as measured by oracle complexity). We give separate algorithms (and lower bounds) for (ε, 0) and (ε, δ)-differential privacy; perhaps surprisingly, the techniques used for designing optimal algorithms in the two cases are completely different. Our lower bounds apply even to very simple, smooth function families, such as linear and quadratic functions. This implies that algorithms from previous work can be used to obtain optimal error rates, under the additional assumption that the contribution of each data point to the loss function is smooth. We show that simple approaches to smoothing arbitrary loss functions (in order to apply previous techniques) do not yield optimal error rates. In particular, optimal algorithms were not previously known for problems such as training support vector machines and the high-dimensional median.
Keywords: computational complexity; convex programming; learning (artificial intelligence); minimisation; (ε, δ)-differential privacy; (ε, 0)-differential privacy; Lipschitz loss function; arbitrary loss function smoothing; machine learning; optimal nonprivate running time; oracle complexity; polynomial time; private convex empirical risk minimization; smooth function families; statistics; Algorithm design and analysis; Convex functions; Noise measurement; Optimization; Privacy; Risk management; Support vector machines (ID#: 15-6092)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6979031&isnumber=6978973
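One of the algorithmic workhorses for private convex ERM is noisy stochastic gradient descent. The sketch below follows that template with simplified constants; the paper's exact noise calibration, iteration count, and error analysis are considerably more careful:

import numpy as np

def dp_sgd(grad, theta0, n, epsilon, delta, L, T=1000, eta=0.1):
    # Noisy projected SGD for (eps, delta)-DP ERM with L-Lipschitz
    # per-example losses; the noise scale below is illustrative only.
    sigma = L * np.sqrt(T * np.log(1.0 / delta)) / (n * epsilon)
    theta = theta0.copy()
    for t in range(T):
        i = np.random.randint(n)                  # sample one example
        g = grad(theta, i) + np.random.normal(scale=sigma, size=theta.shape)
        theta -= (eta / np.sqrt(t + 1)) * g
        theta /= max(1.0, np.linalg.norm(theta))  # project to the unit ball
    return theta

# Toy usage: logistic regression on synthetic data.
X = np.random.randn(200, 5); y = np.random.choice([-1.0, 1.0], size=200)
def grad(theta, i):
    return -y[i] * X[i] / (1.0 + np.exp(y[i] * X[i].dot(theta)))
print(dp_sgd(grad, np.zeros(5), n=200, epsilon=1.0, delta=1e-5, L=np.sqrt(5)))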
Le Ny, J.; Mohammady, M., “Differentially Private MIMO Filtering for Event Streams and Spatio-Temporal Monitoring,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 2148-2153, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039716
Abstract: Many large-scale systems such as intelligent transportation systems, smart grids or smart buildings collect data about the activities of their users to optimize their operations. In a typical scenario, signals originate from many sensors capturing events involving these users, and several statistics of interest need to be continuously published in real-time. Moreover, in order to encourage user participation, privacy issues need to be taken into consideration. This paper considers the problem of providing differential privacy guarantees for such multi-input multi-output systems operating continuously. We show in particular how to construct various extensions of the zero-forcing equalization mechanism, which we previously proposed for single-input single-output systems. We also describe an application to privately monitoring and forecasting occupancy in a building equipped with a dense network of motion detection sensors, which is useful for example to control its HVAC system.
Keywords: MIMO systems; filtering theory; sensors; HVAC system; differential privacy; differentially private MIMO filtering; event streams; intelligent transportation systems; large-scale systems; motion detection sensors; single-input single-output systems; smart buildings; smart grids; spatio-temporal monitoring; zero-forcing equalization mechanism; Buildings; MIMO; Monitoring; Noise; Privacy; Sensitivity; Sensors (ID#: 15-6093)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039716&isnumber=7039338
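For intuition about private filtering of event streams, here is the simplest mechanism in that family, input perturbation: add Laplace noise to per-interval event counts (sensitivity 1 if each user contributes one event), then apply a linear filter. The paper's zero-forcing equalization mechanism shapes the noise more cleverly; this sketch only shows the baseline:

import numpy as np

def private_moving_average(events, epsilon, window=10):
    # Perturb the stream first, then filter; the linear filter is a
    # post-processing step and costs no extra privacy budget.
    noisy = events + np.random.laplace(scale=1.0 / epsilon, size=len(events))
    kernel = np.ones(window) / window
    return np.convolve(noisy, kernel, mode="same")

stream = np.random.poisson(lam=4.0, size=100)   # motion-sensor event counts
print(private_moving_average(stream, epsilon=0.5)[:10])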
Chunchun Wu; Zuying Wei; Fan Wu; Guihai Chen, “DIARY: A Differentially Private and Approximately Revenue Maximizing Auction Mechanism for Secondary Spectrum Markets,” Global Communications Conference (GLOBECOM), 2014 IEEE, vol., no., pp. 625-630, 8-12 Dec. 2014. doi:10.1109/GLOCOM.2014.7036877
Abstract: The contradiction between limited spectrum resources and the increasing demand from ever-growing wireless networks urgently needs to be resolved. Spectrum redistribution is a powerful way to mitigate spectrum scarcity. In contrast to existing truthful mechanisms for spectrum redistribution, which aim to maximize spectrum utilization and social welfare, in this paper we propose DIARY, which not only achieves approximate revenue maximization but also guarantees bid privacy via differential privacy. Extensive simulations show that DIARY has substantial competitive advantages over existing mechanisms.
Keywords: data privacy; electronic commerce; radio networks; radio spectrum management; telecommunication industry; DIARY; approximately revenue maximization auction mechanism; differential privacy; differentially private mechanism; ever-growing wireless network; limited spectrum resource; secondary spectrum market; social welfare; spectrum redistribution; spectrum scarcity; spectrum utilization maximization; Cost accounting; Information systems; Interference; Privacy; Resource management; Security; Vectors (ID#: 15-6094)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7036877&isnumber=7036769
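The standard way to combine differential privacy with approximate revenue maximization is the exponential mechanism with revenue as the quality score, in the tradition of McSherry and Talwar's auction work. The sketch below illustrates that principle with a single clearing price; it is not the DIARY mechanism itself:

import numpy as np

def dp_reserve_price(bids, prices, epsilon):
    # Revenue at price p if everyone bidding at least p buys one unit.
    revenue = np.array([p * np.sum(bids >= p) for p in prices], dtype=float)
    # Changing one bid moves revenue by at most max(prices): coarse sensitivity.
    sens = np.max(prices)
    logits = epsilon * revenue / (2.0 * sens)
    probs = np.exp(logits - logits.max()); probs /= probs.sum()
    return prices[np.random.choice(len(prices), p=probs)]

bids = np.random.uniform(0, 10, size=50)
print(dp_reserve_price(bids, prices=np.linspace(1, 9, 9), epsilon=1.0))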
Tiwari, P.K.; Chaturvedi, S., “Publishing Set Valued Data via M-Privacy,” Advances in Engineering and Technology Research (ICAETR), 2014 International Conference on, vol., no., pp. 1-6, 1-2 Aug. 2014. doi:10.1109/ICAETR.2014.7012814
Abstract: It is very important to achieve security of data in distributed databases. As the use of distributed databases increases, the security issues surrounding them become more complex. M-privacy is a very effective technique that may be used to secure distributed databases. Set-valued data provides huge opportunities for a variety of data mining tasks, but most present data publishing techniques for set-valued data rely on horizontal-division-based privacy models. The differential privacy method takes the opposite approach: it provides a stronger privacy guarantee and is independent of an adversary's background knowledge and computational capability. Set-valued data have high dimensionality, so no single existing data publishing approach for differential privacy can provide both utility and scalability. This work provides detailed information about this new threat and offers some assistance in resolving it. We first introduce the concept of m-privacy, which guarantees that the anonymized data satisfies a given privacy check against any group of up to m colluding data providers. We then present a heuristic approach that exploits the monotonicity of confidentiality constraints to efficiently check m-privacy for a cluster of records. Next, we present a data-provider-aware anonymization approach with adaptive m-privacy checking strategies to effectively guarantee high utility and m-privacy of the anonymized data. Finally, we propose secure multi-party computation protocols for set-valued data publishing with m-privacy.
Keywords: data mining; data privacy; distributed databases; adaptive m-privacy inspection strategies; anonymous data; computational capability; confidentiality constraints monotonicity; data mining tasks; data provider-aware anonymization approach; data security; distributed database security; environment information; heuristic approach; horizontal division based privacy models; privacy check; privacy guarantee; privacy method; secured multiparty calculation protocols; set-valued data publishing techniques; threat; Algorithm design and analysis; Computational modeling; Data privacy; Distributed databases; Privacy; Publishing; Taxonomy; data mining; privacy; set-valued dataset (ID#: 15-6095)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7012814&isnumber=7012782
Shuo Han; Topcu, U.; Pappas, G.J., “Differentially Private Convex Optimization with Piecewise Affine Objectives,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, vol., no., pp. 2160-2166, 15-17 Dec. 2014. doi:10.1109/CDC.2014.7039718
Abstract: Differential privacy is a recently proposed notion of privacy that provides strong privacy guarantees without any assumptions on the adversary. The paper studies the problem of computing a differentially private solution to convex optimization problems whose objective function is piecewise affine. Such problems are motivated by applications in which the affine functions that define the objective function contain sensitive user information. We propose several privacy preserving mechanisms and provide an analysis on the trade-offs between optimality and the level of privacy for these mechanisms. Numerical experiments are also presented to evaluate their performance in practice.
Keywords: data privacy; optimisation; affine functions; convex optimization problems; differentially private convex optimization; differentially private solution; piecewise affine objectives; privacy guarantees; privacy preserving mechanisms; sensitive user information; Convex functions; Data privacy; Databases; Linear programming; Optimization; Privacy; Sensitivity (ID#: 15-6096)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039718&isnumber=7039338
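One natural mechanism for this problem class is to perturb the user-supplied affine coefficients and then solve the perturbed problem. The one-dimensional sketch below minimizes a noisy max-of-affine objective over a grid; the noise calibration and the optimality/privacy trade-off analysis in the paper are more refined:

import numpy as np

def dp_min_piecewise_affine(a, b, epsilon, coeff_bound, grid):
    # Objective perturbation for f(x) = max_i (a_i * x + b_i): add Laplace
    # noise to the user-dependent coefficients (schematic calibration).
    scale = coeff_bound / epsilon
    a_n = a + np.random.laplace(scale=scale, size=a.shape)
    b_n = b + np.random.laplace(scale=scale, size=b.shape)
    vals = np.max(grid[:, None] * a_n[None, :] + b_n[None, :], axis=1)
    return grid[np.argmin(vals)]

a = np.array([-1.0, 0.5, 2.0]); b = np.array([0.0, 1.0, -2.0])
print(dp_min_piecewise_affine(a, b, epsilon=1.0, coeff_bound=2.0,
                              grid=np.linspace(-5, 5, 1001)))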
Jia Dong Zhang; Ghinita, G.; Chi Yin Chow, “Differentially Private Location Recommendations in Geosocial Networks,” Mobile Data Management (MDM), 2014 IEEE 15th International Conference on, vol. 1, no., pp. 59-68, 14-18 July 2014. doi:10.1109/MDM.2014.13
Abstract: Location-tagged social media have an increasingly important role in shaping behavior of individuals. With the help of location recommendations, users are able to learn about events, products or places of interest that are relevant to their preferences. User locations and movement patterns are available from geosocial networks such as Foursquare, mass transit logs or traffic monitoring systems. However, disclosing movement data raises serious privacy concerns, as the history of visited locations can reveal sensitive details about an individual's health status, alternative lifestyle, etc. In this paper, we investigate mechanisms to sanitize location data used in recommendations with the help of differential privacy. We also identify the main factors that must be taken into account to improve accuracy. Extensive experimental results on real-world datasets show that a careful choice of differential privacy technique leads to satisfactory location recommendation results.
Keywords: data privacy; recommender systems; social networking (online); differentially private location recommendations; geosocial networks; location data sanitization; Data privacy; History; Indexes; Markov processes; Privacy; Trajectory; Vegetation (ID#: 15-6097)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916904&isnumber=6916883
Shuo Han; Topcu, U.; Pappas, G.J., “Differentially Private Distributed Protocol for Electric Vehicle Charging,” Communication, Control, and Computing (Allerton), 2014 52nd Annual Allerton Conference on, vol., no., pp. 242-249, Sept. 30-Oct. 3, 2014. doi:10.1109/ALLERTON.2014.7028462
Abstract: In distributed electric vehicle (EV) charging, an optimization problem is solved iteratively between a central server and the charging stations by exchanging coordination signals that are publicly available to all stations. The coordination signals depend on user demand reported by charging stations and may reveal private information of the users at the stations. From the public signals, an adversary can potentially decode private user information and put user privacy at risk. This paper develops a distributed EV charging algorithm that preserves differential privacy, which is a notion of privacy recently introduced and studied in theoretical computer science. The algorithm is based on the so-called Laplace mechanism, which perturbs the public signal with Laplace noise whose magnitude is determined by the sensitivity of the public signal with respect to changes in user information. The paper derives the sensitivity and analyzes the suboptimality of the differentially private charging algorithm. In particular, we obtain a bound on suboptimality by viewing the algorithm as an implementation of stochastic gradient descent. In the end, numerical experiments are performed to investigate various aspects of the algorithm when being used in practice, including the number of iterations and tradeoffs between privacy level and suboptimality.
Keywords: electric vehicles; gradient methods; protocols; stochastic programming; Laplace mechanism; Laplace noise; central server; differential private charging algorithm; differential private distributed protocol; distributed EV charging algorithm; distributed electric vehicle charging station; optimization problem; public signal sensitivity; stochastic gradient descent; theoretical computer science; user demand; Charging stations; Data privacy; Databases; Optimization; Privacy; Sensitivity; Vehicles (ID#: 15-6098)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7028462&isnumber=7028426
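The Laplace-mechanism idea in the abstract can be sketched as an iterative price/response loop in which the server only ever broadcasts a signal derived from the noisy aggregate load. The response model, step size, and noise scale below are illustrative stand-ins, and the per-round budget accounting the paper performs is omitted:

import numpy as np

def dp_ev_charging(demands, capacity, epsilon, rounds=50, step=0.05):
    x = np.zeros_like(demands)                      # current charging rates
    for t in range(rounds):
        # Public coordination signal computed from a Laplace-perturbed sum.
        agg = x.sum() + np.random.laplace(scale=demands.max() / epsilon)
        price = max(0.0, step * (agg - capacity))
        # Each station's truncated best response to the broadcast price.
        x = np.clip(demands - price, 0.0, demands)
    return x

demands = np.random.uniform(1.0, 3.0, size=20)      # kW requested per station
print(dp_ev_charging(demands, capacity=30.0, epsilon=1.0).round(2))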
Jianwei Chen; Huadong Ma, “Privacy-Preserving Aggregation for Participatory Sensing with Efficient Group Management,” Global Communications Conference (GLOBECOM), 2014 IEEE, vol., no., pp. 2757-2762, 8-12 Dec. 2014. doi:10.1109/GLOCOM.2014.7037225
Abstract: Participatory sensing applications can learn aggregate statistics over personal data to produce useful knowledge about the world. Since personal data may be privacy-sensitive, the aggregator should gain only the desired statistics without learning anything else about the personal data. To guarantee differential privacy of personal data under an untrusted aggregator, existing approaches encrypt the noisy personal data and allow the aggregator to obtain a noisy sum. However, these approaches suffer from high computation overhead, or lack efficient group management to support dynamic joins and leaves, or cannot handle node failures. In this paper, we propose a novel privacy-preserving aggregation scheme to address these issues in participatory sensing applications. We first design an efficient group management protocol to deal with participants' dynamic joins and leaves; specifically, when a participant joins or leaves, only three participants need to update their encryption keys. Moreover, we leverage a future-ciphertext buffering mechanism to deal with node failures, which, combined with the group management protocol, keeps communication overhead low. The analysis indicates that our scheme achieves the desired properties, and the performance evaluation demonstrates its efficiency in terms of communication and computation overhead.
Keywords: cryptographic protocols; data privacy; ciphertext buffering mechanism; group management protocol; noisy personal data; participatory sensing; personal data privacy; privacy-preserving aggregation scheme; untrusted aggregator; Aggregates; Fault tolerance; Fault tolerant systems; Noise; Noise measurement; Privacy; Sensors (ID#: 15-6099)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7037225&isnumber=7036769
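A minimal stand-in for this kind of untrusted-aggregator scheme: give participants keys that sum to zero, so individual reports are masked but the keys cancel in the sum the aggregator computes. The distributed-noise step is schematic (the exact noise shares needed for DP in the distributed setting are subtler), and the paper's three-key update on joins and leaves is not reproduced:

import numpy as np

def setup_keys(n, rng):
    # Pairwise-cancelling keys: sum(k) = 0, so only the SUM of masked
    # reports is recoverable by the aggregator.
    k = rng.integers(-10**6, 10**6, size=n).astype(float)
    k[-1] = -k[:-1].sum()
    return k

rng = np.random.default_rng(0)
n = 10
keys = setup_keys(n, rng)
data = rng.uniform(0, 5, size=n)                    # private sensor values
noisy = data + rng.laplace(scale=1.0, size=n) / n   # schematic noise shares
reports = noisy + keys                              # masked reports
print(reports.sum(), "vs true sum", data.sum())     # keys cancel in the sum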
Jongho Won; Ma, C.Y.T.; Yau, D.K.Y.; Rao, N.S.V., “Proactive Fault-Tolerant Aggregation Protocol for Privacy-Assured Smart Metering,” INFOCOM, 2014 Proceedings IEEE, vol., no., pp. 2804-2812, April 27-May 2, 2014. doi:10.1109/INFOCOM.2014.6848230
Abstract: Smart meters are integral to demand response in emerging smart grids, by reporting the electricity consumption of users to serve application needs. But reporting real-time usage information for individual households raises privacy concerns. Existing techniques to guarantee differential privacy (DP) of smart meter users either are not fault tolerant or achieve (possibly partial) fault tolerance at high communication overheads. In this paper, we propose a fault-tolerant protocol for smart metering that can handle general communication failures while ensuring DP with significantly improved efficiency and lower errors compared with the state of the art. Our protocol handles fail-stop faults proactively by using a novel design of future ciphertexts, and distributes trust among the smart meters by sharing secret keys among them. We prove the DP properties of our protocol and analyze its advantages in fault tolerance, accuracy, and communication efficiency relative to competing techniques. We illustrate our analysis by simulations driven by real-world traces of electricity consumption.
Keywords: fault tolerance; smart meters; ciphertexts; communication efficiency; electricity consumption; fail-stop faults; privacy-assured smart metering; proactive fault-tolerant aggregation protocol; secret key sharing; Bandwidth; Fault tolerance; Fault tolerant systems; Noise; Privacy; Protocols; Smart meters (ID#: 15-6100)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848230&isnumber=6847911
Pritha, P.V.G.R.; Suresh, N., “Implementation of Hummingbird 1s Cryptographic Algorithm for Low Cost RFID Tags Using LabVIEW,” Information Communication and Embedded Systems (ICICES), 2014 International Conference on, vol., no., pp. 1-4, 27-28 Feb. 2014. doi:10.1109/ICICES.2014.7034182
Abstract: Hummingbird is a novel ultra-lightweight cryptographic encryption scheme used in RFID applications for privacy-preserving identification and mutual authentication protocols, motivated by the well-known Enigma machine. Hummingbird has a precise response time, and its small block size reduces power consumption requirements. The algorithm is shown to resist common attacks such as linear and differential cryptanalysis. The properties of privacy-preserving identification and mutual authentication are investigated together in this algorithm, which is implemented using LabVIEW software.
Keywords: cryptographic protocols; data privacy; radiofrequency identification; virtual instrumentation; Hummingbird 1s cryptographic algorithm; LabVIEW software; RFID tags; differential cryptanalysis; enigma machine; linear cryptanalysis; mutual authentication protocols; privacy-preserving identification; ultra-light weight cryptographic encryption scheme; Algorithm design and analysis; Authentication; Ciphers; Encryption; Radiofrequency identification; Software; lightweight cryptography scheme; mutual authentication protocols; privacy-preserving identification (ID#: 15-6101)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7034182&isnumber=7033740
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.