Biblio
Preserving the friendship relations between individuals when publishing social-network data is a challenging problem. To alleviate it, the uncertain-graph approach has recently been proposed. Its main idea is to convert an original graph into an uncertain form in which each relationship between individuals carries an associated probability. However, existing uncertain-graph methods lack rigorous privacy guarantees and rely on assumptions about the adversary's knowledge. In this paper we first introduce a general model for constructing uncertain graphs. We then propose an algorithm under this model based on differential privacy and analyze its privacy properties. Our algorithm provides rigorous privacy guarantees and withstands background-knowledge attacks. Experiments show that the proposed algorithm satisfies differential privacy and is feasible in practice. Finally, we compare our algorithm with the (k, ε)-obfuscation algorithm in terms of data utility and find that the importance of nodes in the network is preserved to a similar degree by both algorithms.
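The abstract does not give the construction; as a concrete illustration, the following is a minimal sketch of one way an uncertain graph could be produced under edge-level differential privacy, by perturbing the adjacency matrix with the Laplace mechanism and clamping the noisy values to [0, 1] to serve as edge probabilities. The function name and the clamping step are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def uncertain_graph(adj, epsilon):
    """Turn a 0/1 adjacency matrix into an uncertain graph.

    Each potential edge receives a probability obtained by adding
    Laplace noise to its 0/1 indicator (sensitivity 1 under
    edge-level differential privacy) and clamping to [0, 1].
    """
    scale = 1.0 / epsilon                       # Laplace scale b = sensitivity / epsilon
    noisy = adj + np.random.laplace(0.0, scale, size=adj.shape)
    probs = np.clip(noisy, 0.0, 1.0)            # valid edge probabilities
    probs = np.triu(probs, 1)                   # keep the upper triangle only...
    return probs + probs.T                      # ...and mirror it: undirected graph

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    adj = (rng.random((5, 5)) < 0.3).astype(float)
    adj = np.triu(adj, 1); adj = adj + adj.T    # symmetric, no self-loops
    print(uncertain_graph(adj, epsilon=1.0))
```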
To enhance privacy protection and improve data availability, a differential-privacy data protection method, ICMD-DP, is proposed. ICMD-DP applies differential privacy to the results of ICMD (an insensitive clustering method for mixed data). Combining clustering with differential privacy shifts the unit of query sensitivity from single records to groups of records, and at the same time reduces both information loss and the risk of information disclosure. In addition, to maintain differential privacy for mixed data, ICMD-DP uses different methods to compute the distances and centroids of categorical and numerical attributes. Finally, experiments are given to illustrate the utility of the method.
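ICMD itself is not specified in the abstract; the sketch below only illustrates the group-level idea it describes: once records are clustered, a released centroid is a group statistic whose numeric sensitivity shrinks with cluster size, so less Laplace noise is needed than for per-record release. The clustering step, the mixed-data distance, and a proper randomization of categorical modes are omitted; all names and values are illustrative.

```python
import numpy as np
from collections import Counter

def cluster_centroid(records, cat_cols):
    """Centroid of a mixed-attribute cluster: mean for numeric columns,
    mode for categorical columns (a stand-in for ICMD's definitions)."""
    centroid = []
    for j in range(len(records[0])):
        col = [r[j] for r in records]
        centroid.append(Counter(col).most_common(1)[0][0] if j in cat_cols
                        else sum(col) / len(col))
    return centroid

def dp_release(clusters, cat_cols, epsilon, value_range):
    """Release one noisy centroid per cluster. The mean of a numeric
    attribute over a cluster of k records has sensitivity value_range / k,
    so larger groups need proportionally less noise. (A complete
    mechanism would also randomize the categorical modes, e.g. with
    the exponential mechanism; omitted here for brevity.)"""
    out = []
    for records in clusters:
        k = len(records)
        centroid = cluster_centroid(records, cat_cols)
        out.append([v if j in cat_cols
                    else v + np.random.laplace(0.0, value_range / (k * epsilon))
                    for j, v in enumerate(centroid)])
    return out

clusters = [[(25, "A"), (30, "A"), (28, "B")],   # (age, category) records
            [(50, "C"), (55, "C")]]
print(dp_release(clusters, cat_cols={1}, epsilon=1.0, value_range=40.0))
```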
Compared with the traditional grid, the energy internet collects data more widely and connects far more devices. Analysis of electrical data by Non-Intrusive Load Monitoring (NILM) can infer private user behavior, so addressing data security and availability together is a problem that must be solved. Owing to its rigorous and provable privacy guarantee, differential privacy has been widely adopted and applied to privacy-preserving data release and data mining. However, because the sensitivity is high, adding noise directly renders the data unusable. In this paper, we propose a differentially private mechanism to protect energy-internet privacy. Our focus is on aggregated data released by the data owner after noise has been added to the disaggregated data. Theoretical proofs and experiments show that our scheme achieves both privacy preservation and data availability.
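A hedged sketch of the release step described here (noise added to disaggregated, appliance-level readings, with only the aggregate published), assuming the Laplace mechanism with sensitivity bounded by the maximum single-appliance load; the load values and names are made up:

```python
import numpy as np

def private_aggregate(disaggregated, epsilon, max_load):
    """Add Laplace noise to each appliance-level reading, then release
    only the per-time-step aggregate. max_load bounds one appliance's
    contribution, i.e., the sensitivity used to calibrate the noise."""
    noise = np.random.laplace(0.0, max_load / epsilon, size=disaggregated.shape)
    return (disaggregated + noise).sum(axis=0)

loads = np.array([[0.1, 0.1, 2.0],    # fridge, kW per time step
                  [0.0, 1.5, 1.5],    # heater
                  [0.3, 0.3, 0.3]])   # always-on base load
print(private_aggregate(loads, epsilon=1.0, max_load=2.0))
```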
To address the risk of privacy disclosure in the classification process, a random forest algorithm under differential privacy, named DPRF-gini, is proposed in this paper. When building each decision tree, the algorithm first perturbs feature selection and attribute partitioning using the exponential mechanism, and then satisfies differential privacy by adding Laplace noise to the leaf nodes. Empirical results show that, compared with the original algorithm, data privacy is further protected while accuracy is only slightly reduced.
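The two mechanisms the abstract names are standard; a minimal sketch of each follows (exponential mechanism for split selection, Laplace noise on leaf class counts). The scores and budgets are illustrative, and DPRF-gini's budget allocation across tree levels is not reproduced here.

```python
import numpy as np

def exponential_mechanism(scores, epsilon, sensitivity=1.0):
    """Select an index with probability proportional to
    exp(epsilon * score / (2 * sensitivity))."""
    scores = np.asarray(scores, dtype=float)
    logits = epsilon * scores / (2.0 * sensitivity)
    logits -= logits.max()                     # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return np.random.choice(len(scores), p=probs)

def noisy_leaf_counts(counts, epsilon):
    """Perturb a leaf's class counts with Laplace noise (sensitivity 1:
    one record changes one count by at most 1)."""
    counts = np.asarray(counts, dtype=float)
    return counts + np.random.laplace(0.0, 1.0 / epsilon, size=counts.shape)

# Candidate splits scored by negative Gini impurity (higher is better),
# then a privatized class histogram for one leaf.
split_scores = [-0.42, -0.18, -0.30]
print("chosen split:", exponential_mechanism(split_scores, epsilon=0.5))
print("noisy leaf:", noisy_leaf_counts([30, 5], epsilon=0.5))
```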
Distributed data aggregation via summation (counting) helps us learn the insights behind raw data. However, such computation suffers from a high privacy risk of malicious collusion attacks: colluding adversaries infer a victim's private data from the gaps between the aggregation outputs and their own source data. Among the solutions against such collusion attacks, Distributed Differential Privacy (DDP) shows a significant privacy-preservation effect. Specifically, a DDP scheme guarantees global differential privacy (the presence or absence of any data curator barely impacts the aggregation outputs) by ensuring local differential privacy at each data curator's end. To guarantee the overall privacy performance of a distributed data-aggregation system against malicious collusion attacks, part of the existing work on such DDP schemes aims to provide an estimated lower bound on the privacy budget for global differential privacy. However, two main problems remain: low data utility from using a large global function sensitivity, and an unknown privacy guarantee when the aggregation sensitivity of the whole system is less than the sum of the individual curators' aggregation sensitivities. To address these problems while ensuring distributed differential privacy, we provide a new lower bound on the privacy budget, which works with an unconditional aggregation sensitivity of the whole distributed system. Moreover, we study the performance of our privacy bound under different scenarios of data updates. Both theoretical and experimental evaluations show that our privacy bound offers better global privacy performance than the existing work.
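The paper's bound itself is not reproduced here; the sketch below shows only the standard DDP building block such schemes rest on. Because the Laplace distribution is infinitely divisible, each of n curators can locally add a small Gamma-difference noise share so that the sum of all shares is exactly Laplace noise calibrated to the global sensitivity; no single curator's share suffices to de-noise the aggregate. This is a textbook construction, not necessarily the paper's mechanism.

```python
import numpy as np

def curator_noise_share(n, b, rng):
    """One curator's local noise share: the sum over n curators of
    Gamma(1/n, b) - Gamma(1/n, b) differences is exactly Laplace(0, b)."""
    return rng.gamma(1.0 / n, b) - rng.gamma(1.0 / n, b)

rng = np.random.default_rng(42)
n, epsilon, sensitivity = 10, 1.0, 1.0
b = sensitivity / epsilon                       # global Laplace scale
values = rng.integers(0, 100, size=n)           # each curator's local count
shares = [v + curator_noise_share(n, b, rng) for v in values]
print("private sum:", sum(shares), "true sum:", values.sum())
```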
As the Internet has developed, the requirements for online and offline data storage have increased. Large storage IT projects involve high costs and a high level of business risk. A storage service provider (SSP) provides computer storage space and management, and also offers back-up and archiving. Despite this, many companies fear for the security, privacy, and integrity of outsourced data. As a solution, File Assured Deletion (FADE) is a system built upon standard cryptographic techniques. By encrypting outsourced data files, it aims to guarantee their privacy and integrity and, most importantly, to assuredly delete files, making them unrecoverable to anybody (including those who manage the cloud storage) upon revocation of file access policies. Unfortunately, this system remains weak if the key manager's security is compromised. Our work provides a new scheme that improves the security of FADE by using a TPM (Trusted Platform Module), which safely stores keys, passwords, and digital certificates.
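A minimal sketch of the FADE idea the abstract relies on, using the third-party Python cryptography package: the file is encrypted under a per-file data key, the data key is wrapped by a policy control key held by the key manager (the component the paper proposes to protect with a TPM), and destroying the control key upon policy revocation makes the file unrecoverable. FADE's actual protocol (policy combinations, blinded RSA) is richer than this two-key simplification.

```python
from cryptography.fernet import Fernet   # pip install cryptography

control_key = Fernet.generate_key()      # policy key: key manager / TPM side
data_key = Fernet.generate_key()         # per-file key

# Only these two artifacts are stored in the cloud.
ciphertext = Fernet(data_key).encrypt(b"outsourced file contents")
wrapped_data_key = Fernet(control_key).encrypt(data_key)

# Normal access: unwrap the data key, then decrypt the file.
recovered = Fernet(control_key).decrypt(wrapped_data_key)
print(Fernet(recovered).decrypt(ciphertext))

# Assured deletion: the key manager destroys the control key; the wrapped
# data key, and hence the file, can no longer be decrypted by anyone.
control_key = None
```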
In existing remote data integrity checking schemes, dynamic updates operate at the block level, which usually restricts the location of data inserted into a file because of the fixed size of a data block. In this paper, we propose a remote data integrity checking scheme with fine-grained updates for big data storage. The proposed scheme achieves the basic operations of insertion, modification, and deletion at the line level, at any location in a file, by designing a mapping relationship between line-level updates and block-level updates. Scheme analysis shows that the proposed scheme supports public verification and privacy preservation, while performing data integrity checking with low computation and communication cost.
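The mapping the scheme designs is not spelled out in the abstract; the toy sketch below shows the kind of line-to-block translation involved: lines are addressed freely, each maps to a (block index, offset) pair under a fixed block size, and an insert at any line position dirties only the blocks from that point onward.

```python
BLOCK_SIZE = 4  # lines per block: the checker's fixed block granularity

def line_to_block(num_lines):
    """Map each line index to (block index, offset inside the block)."""
    return [(i // BLOCK_SIZE, i % BLOCK_SIZE) for i in range(num_lines)]

def insert_line(lines, pos, text):
    """Insert at any line position; only blocks from pos // BLOCK_SIZE
    onward shift, so only their integrity tags need recomputation."""
    lines.insert(pos, text)
    return lines, pos // BLOCK_SIZE

lines = [f"line {i}" for i in range(10)]
lines, first_dirty = insert_line(lines, 5, "new line")
print("re-tag blocks from index:", first_dirty)
print(line_to_block(len(lines))[:8])
```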
The new Tor network (version 6.0.5) helps domestic users easily "climb over the wall" to bypass censorship, and of course criminals may also use it to visit deep- and dark-web sites. This paper analyzes the core technology of the new Tor network, its traffic-obfuscation technique based on the meek plug-in, and uses a real instance to verify the new Tor network's fast connectivity. On the basis of this analysis of the traffic-obfuscation mechanism and of network crime conducted over Tor, the paper puts forward measures to prevent the Tor network from being used to commit network crime.
Traffic classification, i.e., associating network traffic with the application that generated it, is an important tool for several tasks spanning different fields (security, management, traffic engineering, R&D). This process is challenged by applications that preserve Internet users' privacy by encrypting the communication content, and even more by anonymity tools, which additionally hide the source, the destination, and the nature of the communication. In this paper, leveraging a public dataset released in 2017, we provide (repeatable) classification results with the aim of investigating to what degree the specific anonymity tool (and the traffic it hides) can be identified, when compared to the traffic of the other considered anonymity tools, using machine-learning approaches based on statistical features alone. To this end, four classifiers are trained and tested on the dataset: (i) Naïve Bayes, (ii) Bayesian Network, (iii) C4.5, and (iv) Random Forest. Results show that the three considered anonymity networks (Tor, I2P, JonDonym) can be easily distinguished (with an accuracy of 99.99%), and even the specific application generating the traffic can be identified (with an accuracy of 98.00%).
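For a repeatable flavor of this kind of experiment, here is a small sketch using scikit-learn's Random Forest on synthetic statistical flow features; the paper's dataset and its exact classifier implementations are not reproduced, and the feature values and class separation below are made up:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for per-flow statistical features (e.g., packet-size
# and inter-arrival-time statistics), one label per anonymity network.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(300, 5))
               for m in (0.0, 2.0, 4.0)])
y = np.repeat(["Tor", "I2P", "JonDonym"], 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```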
Many companies within the Internet of Things (IoT) sector rely on the personal data of users to deliver and monetize their services, creating a high demand for personal information. A user can be seen as making a series of transactions, each involving the exchange of personal data for a service. In this paper, we argue that privacy can be described quantitatively, using the game-theoretic concept of value of information (VoI), enabling us to assess whether each exchange is an advantageous one for the user. We introduce PrivacyGate, an extension to the Android operating system built for the purpose of studying privacy of IoT transactions. An example study, and its initial results, are provided to illustrate its capabilities.
Emerging cyber-physical systems (CPS) often require collecting end users' data to support data-informed decision making processes. There has been a long-standing argument as to the tradeoff between privacy and data utility. In this paper, we adopt a multiparametric programming approach to rigorously study conditions under which data utility has to be sacrificed to protect privacy and situations where free-lunch privacy can be achieved, i.e., data can be concealed without hurting the optimality of the decision making underlying the CPS. We formalize the concept of free-lunch privacy, and establish various results on its existence, geometry, as well as efficient computation methods. We propose the free-lunch privacy mechanism, which is a pragmatic mechanism that exploits free-lunch privacy if it exists with the constant guarantee of optimal usage of data. We study the resilience of this mechanism against attacks that attempt to infer the parameter of a user's data generating process. We close the paper with a case study on occupancy-adaptive smart home temperature control to demonstrate the efficacy of the mechanism.
Data storage in the cloud should come with high safety and confidentiality. It is the responsibility of the cloud service provider to guarantee the availability and security of client data. Various alternatives exist for storage services, but confidentiality and complexity solutions for database-as-a-service are still not satisfactory. The proposed system offers an alternative database-as-a-service solution that integrates the benefits of different services along with advanced encryption techniques, and makes it possible to apply concurrent operations to encrypted data. This alternative connects dispersed clients directly, eliminating the intermediate proxy and thereby gaining simplicity. The performance of the proposed system is evaluated on the basis of theoretical analysis.
Ubiquitous deployment of low-cost mobile positioning devices and the widespread use of high-speed wireless networks enable massive collection of large-scale trajectory data of individuals moving on road networks. Trajectory data mining finds numerous applications including understanding users' historical travel preferences and recommending places of interest to new visitors. Privacy-preserving trajectory mining is an important and challenging problem as exposure of sensitive location information in the trajectories can directly invade the location privacy of the users associated with the trajectories. In this paper, we propose a differentially private trajectory analysis algorithm for points-of-interest recommendation to users that aims at maximizing the accuracy of the recommendation results while protecting the privacy of the exposed trajectories with differential privacy guarantees. Our algorithm first transforms the raw trajectory dataset into a bipartite graph with nodes representing the users and the points-of-interest and the edges representing the visits made by the users to the locations, and then extracts the association matrix representing the bipartite graph to inject carefully calibrated noise to meet ε-differential privacy guarantees. A post-processing of the perturbed association matrix is performed to suppress noise prior to performing a Hyperlink-Induced Topic Search (HITS) on the transformed data that generates an ordered list of recommended points-of-interest. Extensive experiments on a real trajectory dataset show that our algorithm is efficient, scalable and demonstrates high recommendation accuracy while meeting the required differential privacy guarantees.
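The pipeline described here (association matrix, calibrated noise, post-processing, HITS) can be sketched compactly; the noise scale, the clipping used as post-processing, and the visit matrix below are illustrative assumptions rather than the paper's calibration:

```python
import numpy as np

def dp_hits(assoc, epsilon, iters=50):
    """Perturb the user x POI visit matrix with Laplace noise, clip
    negatives to zero as a simple post-processing step, then run HITS
    power iteration and rank POIs by authority score."""
    noisy = assoc + np.random.laplace(0.0, 1.0 / epsilon, size=assoc.shape)
    noisy = np.clip(noisy, 0.0, None)        # suppress impossible negatives
    hub = np.ones(noisy.shape[0])
    for _ in range(iters):
        auth = noisy.T @ hub                 # POIs endorsed by strong hubs
        auth /= np.linalg.norm(auth) or 1.0
        hub = noisy @ auth                   # users who visit strong POIs
        hub /= np.linalg.norm(hub) or 1.0
    return np.argsort(-auth)                 # POI indices, best first

visits = np.array([[3.0, 0.0, 1.0],         # rows: users, cols: POIs
                   [2.0, 1.0, 0.0],
                   [0.0, 4.0, 1.0]])
print("recommended POI order:", dp_hits(visits, epsilon=1.0))
```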
The prevalence of mobile devices and location-based services (LBS) has generated great concerns regarding the LBS users' privacy, which can be compromised by statistical analysis of their movement patterns. A number of algorithms have been proposed to protect the privacy of users in such systems, but the fundamental underpinnings of such systems remain unexplored. Recently, the concept of perfect location privacy was introduced and its achievability was studied for anonymization-based LBS systems, where user identifiers are permuted at regular intervals to prevent identification based on statistical analysis of long time sequences. In this paper, we significantly extend that investigation by incorporating the other major tool commonly employed to obtain location privacy: obfuscation, where user locations are purposely obscured to protect their privacy. Since anonymization and obfuscation reduce user utility in LBS systems, we investigate how location privacy varies with the degree to which each of these two methods is employed. We provide: (1) achievability results for the case where the location of each user is governed by an i.i.d. process; (2) converse results for the i.i.d. case as well as the more general Markov Chain model. We show that, as the number of users in the network grows, the obfuscation-anonymization plane can be divided into two regions: in the first region, all users have perfect location privacy; and, in the second region, no user has location privacy.
Traditional privacy-preserving data disclosure solutions have focused on protecting the privacy of individuals' information under the assumption that all aggregate (statistical) information about individuals is safe for disclosure. Such schemes fail to support group privacy, where aggregate information about a group of individuals may also be sensitive and users of the published data may have different levels of access privileges entitled to them. We propose the notion of εg-Group Differential Privacy, which protects sensitive information of groups of individuals at various defined privacy levels, enabling data users to obtain the level of access entitled to them. We present a preliminary evaluation of the proposed notion of group privacy through experiments on real association graph data that demonstrate the guarantees on group privacy on the disclosed data.
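One way to read the proposal, sketched under loud assumptions (the paper's actual εg definition may differ): protecting a group of up to g individuals in a counting query resembles group differential privacy, where sensitivity, and hence Laplace noise, scales with g, so each access level can be served the same statistic at a different noise level.

```python
import numpy as np

def group_dp_count(true_count, group_size, epsilon):
    """Laplace mechanism protecting groups of up to group_size people:
    a whole group joining or leaving shifts a count by at most
    group_size, so the noise scale grows proportionally."""
    return true_count + np.random.laplace(0.0, group_size / epsilon)

# Different access levels, same statistic, different protected group sizes
# (larger protected groups => noisier answers). The levels are hypothetical.
for level, g in [("analyst", 1), ("partner", 5), ("public", 20)]:
    print(level, round(group_dp_count(1000, g, epsilon=0.5), 1))
```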
Due to increasing concerns about securing private information, context-aware Internet of Things (IoT) applications are in dire need of data privacy preservation for users. In past years, game theory has been widely applied to design secure and privacy-preserving protocols with which users counter various attacks, and most of the existing work is based on a two-player game model, i.e., a user/defender-attacker game. In this paper, we consider a more practical scenario that involves three players: a user, an attacker, and a service provider; such a complicated system renders any two-player model inapplicable. To capture the complex interactions between the service provider, the user, and the attacker, we propose a hierarchical two-layer, three-player game framework. Finally, we carry out a comprehensive numerical study to validate the proposed game framework and theoretical analysis.
Blockchain has recently been applied to study data privacy and network security. In this paper, we propose a punishment scheme based on the action record on the blockchain to suppress the attack motivation of edge servers and mobile devices in an edge network. The interactions between a mobile device and an edge server are formulated as a blockchain security game, in which the mobile device either sends a request to the server to obtain real-time service or launches attacks against the server for illegal security gains, and the server chooses either to serve the request from the device or to attack it. The Nash equilibria (NEs) of the game are derived, and the conditions under which each NE exists are provided, to disclose how the punishment scheme impacts the adversarial behaviors of the mobile device and the edge server.
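The game's actual payoffs are derived in the paper; the sketch below only illustrates how pure-strategy NEs of such a 2 x 2 game are found by best-response checks, with made-up payoffs in which the blockchain record lowers the value of attacking:

```python
import itertools

# (device payoff, server payoff); the numbers are illustrative only.
payoff = {
    ("request", "serve"):  (2, 2),
    ("request", "attack"): (-2, -1),   # server punished via the record
    ("attack",  "serve"):  (-1, -2),   # device punished via the record
    ("attack",  "attack"): (-3, -3),
}
device_acts, server_acts = ["request", "attack"], ["serve", "attack"]

def pure_nash():
    """A profile is an NE if neither player gains by deviating alone."""
    return [(d, s) for d, s in itertools.product(device_acts, server_acts)
            if all(payoff[(d2, s)][0] <= payoff[(d, s)][0] for d2 in device_acts)
            and all(payoff[(d, s2)][1] <= payoff[(d, s)][1] for s2 in server_acts)]

print("pure-strategy NEs:", pure_nash())    # -> [('request', 'serve')]
```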
MANETs have attracted the interest of researchers for several years. The new scenarios in which MANETs are being deployed mean that several challenging issues remain open: node scalability, energy efficiency, network lifetime, Quality of Service (QoS), network overhead, data privacy and security, and effective routing. The latter is often seen as key, since it frequently constrains the performance of the overall network. Location-based routing protocols provide a good solution for scalable MANETs. Although several location-based routing protocols have been proposed, most of them rely on error-free positions. Only a few studies have so far focused on how positioning error affects routing performance, and most of them consider outdated solutions. This paper aims to fill this gap by studying the impact of error in node positions on two location-based routing protocols: DYMOselfwd and AODV-Line. These protocols were selected because both aim at reducing the routing overhead. Simulations considering different mobility patterns in a dense network were conducted, so that the performance of these protocols could be assessed under ideal (i.e., error-free) and realistic (i.e., with error) conditions. The results show that AODV-Line builds less reliable routes than DYMOselfwd when there is error in the position information, thus increasing the routing overhead.
Routing security is of great importance to the security of Mobile Ad Hoc Networks (MANETs). Various kinds of attacks can occur when establishing a routing path between source and destination: adversaries attempt to deceive the source node, gain the privilege of data transmission, and then launch malicious behaviors such as passive or active attacks. Due to the characteristics of MANETs, e.g., dynamic topology, open medium, distributed cooperation, and constrained capability, it is difficult to verify the behavior of nodes and detect malicious nodes without revealing any privacy. In this paper, we present PVad, an approach to privacy-preserving verification in the route discovery phase of MANETs. PVad discovers the existing communication rules by association-rule mining instead of imposing rules. PVad consists of two phases: a reasoning phase that deduces the expected log data of the peers, and a verification phase that uses a Merkle Hash Tree to verify the correctness of the derived information without revealing any privacy of the nodes on the expected routing paths. Without deploying any special nodes to assist verification, PVad can detect multiple malicious nodes by itself. To show that our approach can guarantee the security of MANETs, we conducted experiments in NS3 as well as in a real router environment, improving detection accuracy by 4% on average compared to our former work.
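PVad's verification phase rests on Merkle Hash Trees; the sketch below shows the basic primitive (root computation and consistency check over log entries) rather than PVad's full protocol, and the log entries are invented:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a Merkle tree over hashed leaves (the last node is
    duplicated on odd-sized levels)."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# A verifier holding only the root can check a peer's claimed log data
# against the expected routing behavior without seeing anything else.
expected = [b"fwd A->B", b"fwd B->C", b"fwd C->D", b"fwd D->E"]
claimed  = [b"fwd A->B", b"fwd B->C", b"fwd C->D", b"fwd D->E"]
print("consistent:", merkle_root(claimed) == merkle_root(expected))
```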
The paper presents an example Sensor-Cloud architecture that integrates security as a native ingredient. It is based on a multi-layer client-server model with separation of the physical and virtual instances of sensors, gateways, application servers, and data storage. It proposes the use of virtualised sensor nodes as a prerequisite for increasing security, privacy, reliability, and data protection. All the main concerns in Sensor-Cloud security are addressed, from secure association, authentication, and authorization to privacy, data integrity, and protection. The main concept is that securing the virtual instances is easier to implement, manage, and audit, the only bottleneck being the physical interaction between a real sensor and its virtual reflection.
The Semantic Web can be used to enable the interoperability of IoT devices and to annotate their functional and non-functional properties, including security and privacy. In this paper, we show how to use an ontology and JSON-LD to annotate the connectivity, security, and privacy properties of IoT devices. Building on that, we present our prototype of a lightweight, secure application-level protocol wrapper that ensures communication consistency, secrecy, and integrity for low-cost IoT devices such as the ESP8266 and the Particle Photon.
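A toy JSON-LD annotation of the kind the paper describes, generated from Python; the vocabulary IRI and property names below are placeholders, not the paper's ontology:

```python
import json

device = {
    "@context": {"iotsec": "http://example.org/iot-security#"},  # placeholder IRI
    "@id": "urn:dev:esp8266-001",
    "iotsec:connectivity": "WiFi-802.11n",
    "iotsec:transportSecurity": "AES-128-CBC-with-HMAC",  # app-level wrapper
    "iotsec:integrityProtected": True,
    "iotsec:privacyPolicy": "local-processing-only",
}
print(json.dumps(device, indent=2))
```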
Recent years have witnessed a trend of increasing reliance on distributed infrastructures, which has increased the number of reported security breaches compromising users' privacy, with third parties massively collecting, processing, and managing users' personal data. To address these security and privacy challenges, we combine hierarchical identity-based cryptographic mechanisms with emerging blockchain infrastructures and propose a blockchain-based data-usage auditing architecture that ensures availability and accountability in a privacy-preserving fashion. Our approach relies on auditable contracts deployed on blockchain infrastructures. It thus offers transparent and controlled data access, sharing, and processing, so that unauthorized users or untrusted servers cannot process data without the client's authorization. Moreover, based on cryptographic mechanisms, our solution preserves the privacy of data owners and ensures secrecy for data shared with multiple service providers. It also provides auditing authorities with tamper-proof evidence of data-usage compliance.
In vehicular networks, each message is signed by the generating node to ensure accountability for its contents. For privacy reasons, each vehicle uses a collection of certificates, which for accountability reasons are linked at a central authority. One such design is the Security Credential Management System (SCMS) [1], the leading credential management system in the US. The SCMS is composed of multiple components, each of which has a different, logically separated key-management task. The SCMS is designed to ensure privacy against a single insider compromise, and against outside adversaries. In this paper, we demonstrate that the current SCMS design fails to achieve this design goal, showing that a compromised authority can gain substantial information about certificate linkages. We propose a solution that accommodates threshold-based detection but uses relabeling and noise to limit the information that can be learned from a single insider adversary. We also analyze our solution using techniques from differential privacy and validate it using traffic-simulator-based experiments. Our results show that the proposed solution prevents privacy leakage to a compromised authority colluding with outside attackers.
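The paper's exact relabeling-plus-noise mechanism is not given in the abstract; as a loosely analogous sketch, threshold-based misbehavior detection can be randomized in a differential-privacy style so that the checking authority sees only noisy report counts and a noisy threshold (all scales and names below are assumptions):

```python
import numpy as np

def noisy_misbehavior_check(report_counts, threshold, epsilon, seed=None):
    """Flag a pseudonym certificate only when its Laplace-noised report
    count clears a Laplace-noised threshold, limiting what the checking
    authority learns about true per-certificate linkage counts."""
    rng = np.random.default_rng(seed)
    noisy_t = threshold + rng.laplace(0.0, 2.0 / epsilon)
    return [p for p, c in report_counts.items()
            if c + rng.laplace(0.0, 4.0 / epsilon) >= noisy_t]

reports = {"cert_17": 2, "cert_42": 30, "cert_99": 1}
print("flagged for revocation:", noisy_misbehavior_check(reports, 20, 1.0))
```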