Bibliography
Many mobile apps, including augmented-reality games, bar-code readers, and document scanners, digitize information from the physical world by applying computer-vision algorithms to live camera data. However, because camera permissions for existing mobile operating systems are coarse (i.e., an app may access a camera's entire view or none of it), users are vulnerable to visual privacy leaks. An app violates visual privacy if it extracts information from camera data in unexpected ways. For example, a user might be surprised to find that an augmented-reality makeup app extracts text from the camera's view in addition to detecting faces. This paper presents results from the first large-scale study of visual privacy leaks in the wild. We build CamForensics to identify the kind of information that apps extract from camera data. Our extensive user surveys determine what kind of information users expected an app to extract. Finally, our results show that camera apps frequently defy users' expectations based on their descriptions.
With the increasing popularity of augmented reality (AR) services, providing seamless human-computer interaction in the AR setting has received notable attention in industry. Gesture control devices have recently emerged as the next great gadgets for AR due to their unique ability to enable computer interaction through day-to-day gestures. While these AR devices are revolutionizing our interaction with the cyber world, it is also important to consider potential privacy leakage from these always-on wearable devices. Specifically, the coarse access control of current AR systems could lead to abuse of sensor data. Although always-on gesture sensors are frequently cited as a privacy concern, there has not been any study of information leakage from these devices. In this article, we present our study of side-channel information leakage from the most popular gesture control device, Myo. Using signals recorded from the electromyography (EMG) sensor and accelerometers on Myo, we can recover sensitive information such as passwords typed on a keyboard and PIN sequences entered through a touchscreen. The EMG sensor records the subtle electric currents of muscle contractions. We design novel algorithms based on the dynamic cumulative sum and the wavelet transform to determine the exact times of finger movements, and we adopt the Hudgins feature set in a support vector machine to classify recorded signal segments into individual fingers or numbers. We also apply coordinate transformation techniques to recover fine-grained spatial information from the sensor's low-fidelity outputs during keystroke recovery. We evaluated the information leakage using data collected from a group of volunteers. Our results show severe privacy leakage from these commodity wearable sensors: our system recovers complex passwords constructed from lowercase letters, uppercase letters, numbers, and symbols with a mean success rate of 91%.
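To make the classification step above concrete, here is a minimal sketch of the classic Hudgins time-domain feature set feeding an SVM, the combination the abstract names. The window size, threshold, and synthetic training signals are illustrative assumptions, not the paper's pipeline or data.

```python
# Hedged sketch: Hudgins time-domain features + SVM for EMG segment
# classification. Thresholds and the synthetic data are placeholders.
import numpy as np
from sklearn.svm import SVC

def hudgins_features(seg, eps=0.01):
    """Classic Hudgins feature set for one EMG window."""
    mav = np.mean(np.abs(seg))                        # mean absolute value
    wl = np.sum(np.abs(np.diff(seg)))                 # waveform length
    zc = np.sum((seg[:-1] * seg[1:] < 0) &
                (np.abs(seg[:-1] - seg[1:]) > eps))   # zero crossings
    d = np.diff(seg)
    ssc = np.sum((d[:-1] * d[1:] < 0) &
                 (np.maximum(np.abs(d[:-1]), np.abs(d[1:])) > eps))  # slope sign changes
    return np.array([mav, wl, zc, ssc])

# Train on labeled windows (one label per finger class), then predict.
rng = np.random.default_rng(0)
X = np.array([hudgins_features(rng.normal(0, 0.1 + 0.05 * (i % 5), 200))
              for i in range(500)])
y = np.array([i % 5 for i in range(500)])             # 5 synthetic finger classes
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:10]))
```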
Augmented Reality (AR) devices continuously scan their environment in order to naturally overlay virtual objects onto the user's view of the physical world. In contrast to Virtual Reality, where one's environment is fully replaced with a virtual one, one of AR's "killer features" is co-located collaboration, in which multiple users interact with the same combination of virtual and real objects. Microsoft recently released HoloLens, the first consumer-ready augmented reality headset that needs no outside markers to achieve precise inside-out spatial mapping, which allows centimeter-scale hologram positioning. However, despite the many applications published on the Windows Mixed Reality platform that rely on direct communication between AR devices, there currently exists no implementation or workable proposal for secure direct pairing of two unassociated headsets. As augmented reality goes mainstream, this omission exposes current and future users to a range of avoidable attacks. In order to close this real-world gap in both theory and engineering practice, in this paper we design and evaluate HoloPair, a system for secure and usable pairing of two AR headsets. We propose a pairing protocol and build a working prototype to experimentally evaluate its security guarantees, usability, and system performance. By running a user study with a total of 22 participants, we show that the system achieves high rates of attack detection, short pairing times, and a high average usability score. Moreover, in order to make an immediate impact on the wider developer community, we have published the full implementation and source code of our prototype, which is currently under consideration to be included in the official HoloLens development toolkit.
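The abstract does not reproduce the protocol itself, so the following is a hedged sketch of the generic pattern such pairing schemes build on: a commit-then-reveal Diffie-Hellman exchange plus a short authentication string (SAS) that users compare over the shared visual channel. This is not the exact HoloPair protocol, and the group parameters are deliberately toy-sized.

```python
# Hedged sketch of commitment-based pairing with a short authentication
# string (SAS). Toy DH parameters for illustration only; a real system
# would use X25519 or an RFC 3526 group.
import hashlib, secrets

p, g = 0xFFFFFFFB, 5                    # toy prime group, NOT secure

def dh_keypair():
    x = secrets.randbelow(p - 2) + 1
    return x, pow(g, x, p)

ax, aX = dh_keypair()                   # headset A
bx, bX = dh_keypair()                   # headset B

# A commits to its public value before seeing B's, preventing a
# man-in-the-middle from grinding the SAS.
commit = hashlib.sha256(aX.to_bytes(8, "big")).hexdigest()
# ... A sends commit, B replies with bX, A reveals aX; B checks the commit.
assert hashlib.sha256(aX.to_bytes(8, "big")).hexdigest() == commit

# Both sides derive the shared key and a short string that the users
# compare out-of-band (e.g., rendered as holograms on both devices).
kA, kB = pow(bX, ax, p), pow(aX, bx, p)
assert kA == kB
sas = hashlib.sha256(str(kA).encode()).hexdigest()[:6]
print("compare on both headsets:", sas)
```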
An effective security level for an Embedded System (ES) supports the reliable and stable operation of that system. To determine whether the current security level of a given ES is effective, we need a proactive evaluation of that level. Evaluating the security level of an ES is not a straightforward process; factors such as the heterogeneity of ES components complicate it. One productive approach that overcomes the complexity of evaluating Security, Privacy and Dependability (SPD) is Multi Metrics (MM). Like most SPD evaluation approaches, MM relies on expert knowledge for its basic evaluations. Despite its advantages, expert evaluation has drawbacks that motivate the need for less expert-dependent evaluation. In this paper, we propose a framework for security measurability as part of security, privacy and dependability evaluation. The security evaluation is based on the MM approach, an effective approach for such evaluations; we therefore call it the MM framework. The evaluation method investigated within the MM framework also builds on systematic storing and retrieving of expert knowledge. Using the MM framework, the administrator of an ES can evaluate and enhance the security level of the system without being a security expert.
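As a rough illustration of the Multi Metrics style of aggregation, the sketch below combines stored per-component expert scores into system-level SPD values using criticality weights. The metric values, weights, and the weighted-mean rule are assumptions made for illustration; the stored expert database stands in for the framework's systematic storing and retrieving of expert knowledge.

```python
# Hedged sketch: weighted aggregation of stored expert SPD scores.
# All numbers and the aggregation rule are illustrative assumptions.
EXPERT_DB = {   # reusable expert scores per component and dimension (0..100)
    "sensor":  {"security": 60, "privacy": 70, "dependability": 80},
    "gateway": {"security": 85, "privacy": 75, "dependability": 90},
}
WEIGHTS = {"sensor": 0.4, "gateway": 0.6}    # component criticality weights

def spd_level(dimension):
    """System-level score for one SPD dimension as a weighted mean."""
    return sum(WEIGHTS[c] * EXPERT_DB[c][dimension] for c in EXPERT_DB)

for d in ("security", "privacy", "dependability"):
    print(d, round(spd_level(d), 1))
```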
The concept of cyber-physical production systems (CPPS) is widely discussed by researchers and industry experts; however, the implementation options for these systems rely mainly on obsolete technologies. Although the blockchain is most often associated with cryptocurrency, it would be wrong to deny the universality of this technology and the prospects for its application in other industries, for example in the insurance sector or in a number of identity verification services. This article discusses the deployment of a CPPS backbone network based on a private Ethereum blockchain. It describes the structure of the network and how its components interact through smart contracts, based on the consumption of cryptocurrency for various operations.
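Since the article's deployment details are not reproduced here, the following is a hedged sketch of how a CPPS node might invoke a smart contract on a private Ethereum network using web3.py. The RPC endpoint, contract address, ABI, and the recordOperation function are hypothetical placeholders, not the article's actual contract.

```python
# Hedged sketch: a CPPS component calling a (hypothetical) smart contract
# on a private Ethereum chain; each call consumes gas, which is how
# cryptocurrency meters the various operations.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))   # private-chain node
assert w3.is_connected()

abi = [{"name": "recordOperation", "type": "function",
        "inputs": [{"name": "opId", "type": "uint256"}],
        "outputs": [], "stateMutability": "nonpayable"}]
# Zero address as a stand-in for the deployed contract's address.
cpps = w3.eth.contract(address="0x0000000000000000000000000000000000000000",
                       abi=abi)

tx = cpps.functions.recordOperation(42).transact({"from": w3.eth.accounts[0]})
receipt = w3.eth.wait_for_transaction_receipt(tx)
print("operation recorded in block", receipt.blockNumber)
```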
One challenge for cybersecurity experts is deciding which type of attack would be successful against the system they wish to protect. Often, this challenge is addressed in an ad hoc fashion and is highly dependent upon the skill and knowledge base of the expert. In this study, we present a method for automatically ranking attack patterns in the Common Attack Pattern Enumeration and Classification (CAPEC) database for a given system. This ranking method is intended to produce suggested attacks to be evaluated by a cybersecurity expert and not a definitive ranking of the "best" attacks. The proposed method uses topic modeling to extract hidden topics from the textual description of each attack pattern and learn the parameters of a topic model. The posterior distribution of topics for the system is estimated using the model and any provided text. Attack patterns are ranked by measuring the distance between each attack topic distribution and the topic distribution of the system using KL divergence.
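The pipeline above maps cleanly onto standard tooling; here is a hedged sketch using scikit-learn's LatentDirichletAllocation as the topic model and KL divergence for the ranking. The miniature CAPEC-like corpus, the system description, and the component count are placeholders.

```python
# Hedged sketch: topic-model the attack pattern descriptions, infer the
# system's topic mixture, rank patterns by KL divergence to the system.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

capec = [  # stand-in attack pattern descriptions
    "sql injection through unvalidated input fields",
    "cross site scripting via reflected user input",
    "brute force guessing of weak passwords",
    "session hijacking by stealing authentication cookies",
    "buffer overflow in native input parsing code",
    "phishing to capture user credentials",
]
system = ["web application with a login form backed by a sql database"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(capec)
lda = LatentDirichletAllocation(n_components=4, random_state=0).fit(X)

theta_attacks = lda.transform(X)                     # per-pattern topic dists
theta_system = lda.transform(vec.transform(system))  # system topic dist

def kl(p, q, eps=1e-12):
    """KL(p || q) with smoothing to avoid log(0)."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

ranking = np.argsort([kl(theta_system[0], t) for t in theta_attacks])
print("suggested attack patterns, best match first:", ranking)
```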
If, as most experts agree, the mathematical basis of major blockchain systems is (probably if not provably) sound, why do they have a bad reputation? Human misbehavior (such as failed Bitcoin exchanges) accounts for some of the issues, but there are also deeper and more interesting vulnerabilities here. These include design faults and code-level implementation defects, ecosystem issues (such as wallets), and approaches such as the "51% attack," all of which can compromise the integrity of blockchain systems. With particular attention to the emerging non-financial applications of blockchain technology, this paper demonstrates the kinds of attacks that are possible and provides suggestions for minimizing the risks involved.
A distributed secure data management system for the Internet of Things (IoT) is characterized by authentication and privacy policies that preserve data integrity. Multi-phase security and privacy policies ensure confidentiality and trust between users and service providers. In this regard, we present a novel Two-phase Incentive-based Secure Key (TISK) system for distributed data management in IoT. The proposed system classifies IoT user nodes and assigns low-level and high-level security keys for data transactions. Low-level secure keys are generic lightweight keys used by data collector and data aggregator nodes for trusted transactions. In TISK phase I, the Generic Service Manager (GSM-C) module verifies IoT devices based on their self-trust and server-trust incentive levels. High-level secure keys are dedicated special-purpose keys used by data manager and data expert nodes for authorized transactions. In TISK phase II, the Dedicated Service Manager (DSM-C) module verifies the certificates issued by the GSM-C module and then issues high-level secure keys to data manager and data expert nodes for special-purpose transactions. Simulation results indicate that the proposed TISK system reduces key complexity and key cost while ensuring distributed secure data management in an IoT network.
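A hedged sketch of the two-phase flow as described: phase I checks trust incentives and issues a generic low-level key plus a certificate; phase II verifies that certificate and issues a dedicated high-level key. The trust threshold, key derivation, and certificate format are illustrative assumptions, and a single shared master key stands in for the separate GSM-C/DSM-C key material a real deployment would use.

```python
# Hedged sketch of TISK-style two-phase key issuance (all details assumed).
import hashlib, hmac, secrets

MASTER = secrets.token_bytes(32)   # simplification: one shared service key

def issue_low_key(node_id, self_trust, server_trust, threshold=0.6):
    """Phase I (GSM-C): verify incentive levels, derive generic key + cert."""
    if min(self_trust, server_trust) < threshold:
        return None
    key = hmac.new(MASTER, b"low|" + node_id.encode(), hashlib.sha256).digest()
    cert = hmac.new(MASTER, b"cert|" + node_id.encode(), hashlib.sha256).digest()
    return key, cert

def issue_high_key(node_id, cert, purpose):
    """Phase II (DSM-C): verify the phase-I certificate, derive dedicated key."""
    expect = hmac.new(MASTER, b"cert|" + node_id.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(cert, expect):
        return None
    return hmac.new(MASTER, f"high|{node_id}|{purpose}".encode(),
                    hashlib.sha256).digest()

low = issue_low_key("aggregator-7", self_trust=0.8, server_trust=0.7)
if low:
    _, cert = low
    print(issue_high_key("aggregator-7", cert, "data-manager").hex()[:16])
```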
Currently, when companies conduct risk analysis of their own networks and systems, they commonly outsource the analysis to third-party experts. In doing so, the company passes along the information used for risk analysis, including confidential information such as the network configuration, which raises the risk of leakage and abuse of that information. Methods have therefore been proposed for performing risk analysis via secure computation without handing over the company's confidential information. Although Liu's method was the first to achieve secure risk analysis using multiparty computation and attack-tree analysis, it has several practical shortcomings. In this paper, an improved secure risk analysis method is proposed. It reduces compilation time and scales to larger target networks and systems without increasing execution time. Experimental work was carried out with a prototype implementation. The results show improved compile-time performance and larger target scale, with execution-time performance equivalent to the original method.
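For readers unfamiliar with attack-tree analysis, the sketch below shows the plaintext computation that such protocols evaluate: the minimal attacker cost is the cheapest child at an OR node and the sum of children at an AND node. In the secure version these min/sum operations run over secret-shared values; the MPC layer itself is omitted here, and the tree and costs are invented for illustration.

```python
# Hedged sketch: plaintext attack-tree cost analysis (the function the
# secure multiparty protocol would compute without revealing inputs).
def cost(node):
    """Minimal attacker cost: OR = cheapest child, AND = all children."""
    kind, children = node.get("kind"), node.get("children", [])
    if not children:
        return node["cost"]            # leaf: cost of one basic attack step
    costs = [cost(c) for c in children]
    return min(costs) if kind == "OR" else sum(costs)

tree = {"kind": "OR", "children": [
    {"cost": 50},                                    # phish an employee
    {"kind": "AND", "children": [{"cost": 30},       # scan the network
                                 {"cost": 40}]},     # exploit a host
]}
print("cheapest attack costs:", cost(tree))          # -> 50
```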
A blockchain-powered health information ecosystem can address a frequently discussed problem: lifelong recorded patient health data, which can seriously hinder patient privacy while feeding the growing data hunger of research and policy-making institutions. On one side, general availability of the data is vital in emergency situations and strongly supports research, population health management, and development activities; on the other side, the same data can lead to serious social and ethical problems when exploited by malicious actors. Privacy regulation currently varies around the world, but its underlying principles are always defensive and protective of patient privacy, at the expense of general availability. These protective principles foster a defensive, data-hiding attitude among health system developers seeking to avoid breaching regulations, which in turn pushes policy makers and developers (primarily drug developers) toward ways of handling data that lead to ethical and political debates. In this paper we show how blockchain technology can help solve the problem of storing data securely while ensuring its availability at the same time. We use the basic principles of the American HIPAA regulation, which defines public-availability criteria for health data, although local regulations may differ significantly. The blockchain's decentralized, intermediary-free, cryptographically secured attributes offer a new way of storing patient data securely while keeping it publicly available in a regulated way, where a well-designed distributed peer-to-peer network incentivizes the smooth operation of a full-featured EHR system.
Community Health Workers (CHWs) have been using Mobile Health Data Collection Systems (MDCSs) to support the delivery of primary healthcare and to carry out public health surveys, feeding national-level databases with families' personal data. Such systems are used for public surveillance and manage sensitive data (i.e., health data), so addressing privacy issues is crucial for successfully deploying MDCSs. In this paper we present a comprehensive privacy threat analysis for MDCSs, discuss the privacy challenges, and provide recommendations that are especially useful to health managers and developers. We ground our analysis in a large-scale MDCS used for primary care (GeoHealth) and a well-known Privacy Impact Assessment (PIA) methodology. The threat analysis is based on a compilation of relevant privacy threats from the literature as well as brainstorming sessions with privacy and security experts. Among the main findings, we observe that existing MDCSs do not employ adequate controls for achieving transparency and intervenability, thus threatening fundamental privacy principles such as data quality, the right to access, and the right to object. Furthermore, although there has been significant research addressing data security issues, attention to privacy in its multiple dimensions is notably lacking.
The Healthcare Internet of Things (HIoT) is transforming the healthcare industry by providing large-scale connectivity for medical devices, patients, physicians, and clinical and nursing staff, and by facilitating real-time monitoring based on the information gathered from the connected things. The heterogeneity and vastness of this network present both opportunities and challenges for information collection and sharing. Patient-centric information, such as health status and the medical devices in use, must be protected to respect patient safety and privacy, while healthcare knowledge should be shared in confidence by experts for healthcare innovation and timely treatment of patients. This paper gives an overview of HIoT, relating its characteristics to those of Big Data, and proposes a security and privacy architecture for it. A context-sensitive role-based access control scheme is discussed to ensure that HIoT is reliable, provides data privacy, and achieves regulatory compliance.
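To make "context-sensitive role-based access control" concrete, here is a minimal sketch in which a permission is granted only if the requester holds a suitable role and the contextual conditions hold. The roles, context attributes, and policy rules are illustrative assumptions, not the paper's scheme.

```python
# Hedged sketch: RBAC check gated by contextual predicates (all assumed).
POLICY = {
    ("physician", "read_vitals"): lambda ctx: ctx["on_duty"] or ctx["emergency"],
    ("nurse", "read_vitals"):     lambda ctx: ctx["same_ward"] and ctx["on_duty"],
}

def allowed(role, action, ctx):
    """Grant only if a rule exists for (role, action) and the context passes."""
    rule = POLICY.get((role, action))
    return bool(rule and rule(ctx))

print(allowed("nurse", "read_vitals",
              {"same_ward": True, "on_duty": True}))          # True
print(allowed("physician", "read_vitals",
              {"on_duty": False, "emergency": True}))         # True (emergency)
```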
In this paper, we report our work on using machine learning techniques to predict back-bending activity based on field data acquired in a local nursing home. The data are recorded by a privacy-aware compliance tracking system (PACTS). The objective of PACTS is to detect back-bending activities and issue real-time alerts to participants when they bend their backs excessively, which we hope could help participants form good habits of using proper body mechanics when performing lifting and pulling tasks. We show that our algorithms can differentiate nursing staff's baseline and high-level bending activities using human skeleton data, without any expert rules.
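As a simple illustration of what "detecting bending from skeleton data" can mean, the sketch below flags excessive bending by measuring the trunk's deviation from vertical using two assumed joints. The joint names, coordinate convention, and 45-degree threshold are illustrative; the paper's classifiers learn such distinctions from labeled field data rather than from a fixed rule.

```python
# Hedged sketch: threshold-based back-bend flag from skeleton joints.
import numpy as np

def trunk_flexion_deg(hip, shoulder_center):
    """Angle between the hip->shoulder vector and vertical (y-up frame)."""
    trunk = np.asarray(shoulder_center, float) - np.asarray(hip, float)
    vertical = np.array([0.0, 1.0, 0.0])
    cosang = trunk @ vertical / np.linalg.norm(trunk)
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def is_high_bend(hip, shoulder_center, threshold_deg=45.0):
    return trunk_flexion_deg(hip, shoulder_center) > threshold_deg

# ~53 degrees of flexion -> flagged as a high-level bend
print(is_high_bend(hip=[0, 0, 2], shoulder_center=[0, 0.3, 2.4]))   # True
```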
Consent is a key measure for privacy protection and needs to be "meaningful" to give people informational power. It is increasingly important that individuals are provided with real choices and are empowered to negotiate for meaningful consent. Meaningful consent is an important consideration in IoT systems, since privacy is a significant factor affecting adoption of IoT, yet obtaining meaningful consent is becoming increasingly challenging in IoT environments. It has been proposed that an "apparency, pragmatic/semantic transparency model" adopted for data management could make consent more meaningful, that is, visible, controllable and understandable. That model has illustrated the "why" and "what" issues of data management for potentially meaningful consent [1]. In this paper, we focus on the "how" issue, i.e., how to implement the model in IoT systems. We discuss apparency by focusing on the interactions and data actions in the IoT system; pragmatic transparency by centring on the privacy risks and threats of data actions; and semantic transparency by focusing on the terms and language used by individuals and experts. We believe that our discussion will elicit more research on the apparency model in IoT for meaningful consent.
Audit logs are widely used in information systems nowadays. In cloud computing and cloud storage environments, audit logs must be encrypted and outsourced to remote servers to protect the confidentiality of data and the privacy of users. Searchable encrypted audit logs support search over the logs while they remain encrypted. In this paper, we propose a privacy-preserving and unforgeable searchable encrypted audit log scheme based on PEKS. Only the trusted data owner can generate encrypted audit logs containing access permissions for users. The semi-honest server verifies the audit logs in a searchable-encryption manner before granting operation rights to users and storing the logs. The data owner can perform fine-grained conjunctive queries on the stored audit logs and accepts only valid logs. The scheme is immune to collusion, tampering, and fabrication by the server and users. A concrete implementation of the scheme is presented in detail, its correctness is proved, and its security properties, such as privacy preservation, searchability, verifiability and unforgeability, are analyzed. A further evaluation of the computational load shows that the design is reasonably efficient.
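To illustrate the search flow only, here is a heavily simplified symmetric stand-in using HMAC keyword tags; the paper's actual scheme is pairing-based PEKS with verifiability and unforgeability, which this sketch does not provide. It shows just the core idea: a server can match encrypted entries against trapdoors without learning the keywords.

```python
# Hedged sketch: conjunctive keyword search over "encrypted" logs using
# HMAC trapdoors (a toy stand-in for PEKS, not the paper's construction).
import hmac, hashlib, secrets

key = secrets.token_bytes(32)                 # data owner's secret key

def tag(keyword):
    """Searchable tag for one keyword; server cannot invert it."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

# Owner uploads (opaque ciphertext, keyword tag set) pairs.
logs = [(b"<ciphertext-1>", {tag("alice"), tag("read")}),
        (b"<ciphertext-2>", {tag("bob"), tag("write")})]

# Conjunctive query: the owner hands the server trapdoors, not keywords.
trapdoors = {tag("alice"), tag("read")}
matches = [c for c, tags in logs if trapdoors <= tags]
print(len(matches), "matching encrypted log entries")    # -> 1
```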
In a semi-autonomic cloud auditing architecture, we previously wove in privacy-enhancing mechanisms [15] by applying the public-key version of the somewhat homomorphic encryption (SHE) scheme from [4]. It turns out that the performance of the SHE scheme can be significantly improved by carefully deriving the relevant crypto parameters from the concrete cloud auditing use cases for which the scheme serves as a privacy-enhancing approach. We provide a generic algorithm for finding good SHE parameters with respect to a given use-case scenario by analyzing and taking into consideration the security, correctness and performance of the scheme. To show the relevance of the proposed algorithm, we apply it to two predominant cloud auditing use cases.
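The general shape of such a parameter search can be sketched as follows: enumerate candidate SHE parameter sets, keep those meeting the use case's security and correctness (noise-budget) constraints, and return the cheapest. The security and noise formulas below are crude stand-ins for real estimators (e.g., lattice hardness estimates), and all constants are invented for illustration.

```python
# Hedged sketch: constraint-filtered search over SHE parameter candidates.
def find_params(mult_depth, min_security_bits=128):
    candidates = [{"n": 2**k, "logq": q}
                  for k in (11, 12, 13, 14, 15)
                  for q in range(60, 500, 20)]
    feasible = []
    for p in candidates:
        security = 3.6 * p["n"] / p["logq"]          # toy security proxy
        correct = p["logq"] >= 40 + 30 * mult_depth  # toy noise budget
        if security >= min_security_bits and correct:
            feasible.append(p)
    # Performance roughly scales with n * log q; pick the cheapest feasible.
    return min(feasible, key=lambda p: p["n"] * p["logq"], default=None)

# E.g., a cloud auditing use case needing two homomorphic multiplications:
print(find_params(mult_depth=2))
```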
In the past decade, the revolution in miniaturization (microprocessors, batteries, cameras, etc.) and the manufacturing of new types of sensors have given rise to a new class of applications based on smart objects, called the Internet of Things (IoT). The majority of such applications and services aim to ease human life and/or to set up efficient processes in automated environments. However, this convenience comes with new challenges related to data security and human privacy. The objects in IoT are resource-constrained devices and cannot implement a full-fledged security framework. These end devices work like eyes and ears, interacting with the physical world and collecting data for analytics so that expedient decisions can be made. Storage and analysis of the collected data are done remotely using cloud computing. The transfer of data from the IoT to the cloud can introduce privacy issues and network delays, and some applications need real-time decisions and cannot tolerate delay and jitter in the network. Here, edge computing or fog computing plays its role, mitigating these issues by providing cloud-like facilities near the end devices. In this paper, we discuss IoT, fog computing, the relationship between IoT and fog computing, their security issues, and the solutions proposed by different researchers. We summarize the attack surface of each layer of this paradigm, which should help in proposing new security solutions that increase its acceptability among end users. We also propose a risk-based trust management model for smart healthcare environments to cope with security and privacy issues in this highly unpredictable, heterogeneous ecosystem.
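The paper's trust model is only named in the abstract, so the following is a hedged sketch of what a risk-based trust decision can look like: a device's trust score combines direct observations and recommendations, and the authorization threshold tightens with the assessed risk of the requested operation. The weights, risk table, and thresholds are invented for illustration.

```python
# Hedged sketch: risk-weighted trust-based authorization (all values assumed).
RISK = {"read_temperature": 0.1, "adjust_insulin_pump": 0.9}

def trust_score(direct, recommended, w=0.7):
    """Blend direct experience with third-party recommendations."""
    return w * direct + (1 - w) * recommended

def authorize(direct, recommended, operation, base_threshold=0.5):
    threshold = base_threshold + 0.4 * RISK[operation]   # riskier -> stricter
    return trust_score(direct, recommended) >= threshold

print(authorize(0.8, 0.6, "read_temperature"))       # True  (0.74 >= 0.54)
print(authorize(0.8, 0.6, "adjust_insulin_pump"))    # False (0.74 <  0.86)
```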
As an extension of cloud computing, fog computing is proving itself more and more useful nowadays. Fog computing was introduced to overcome the shortcomings of the cloud computing paradigm in handling the massive traffic caused by the enormous number of Internet of Things devices being connected to the Internet on a daily basis. Despite its advantages, the fog architecture introduces new security and privacy threats that need to be studied and solved as soon as possible. In this work, we explore two privacy issues posed by the fog computing architecture and define privacy challenges according to them. The first challenge relates to the fog's design goals of reducing latency and improving bandwidth, which existing privacy-preserving methods violate. The other challenge relates to the proximity of fog nodes to end-users and IoT devices. We discuss the importance of addressing these challenges by putting them in the context of real-life scenarios. Finally, we propose a privacy-preserving fog computing paradigm that addresses these challenges and assess the security and efficiency of our solution.
The fog computing paradigm has set new trends in modern networking and has overcome major technical complexities of cloud computing. It is not a replacement for cloud computing; rather, it adds advanced capabilities to the existing cloud computing paradigm. Fog computing provides not only storage, networking and computing services but also a platform for the Internet of Things (IoT). However, fog computing also raises threats to the privacy and security of data and services. The existing security and privacy mechanisms of cloud computing cannot be applied to fog computing directly, owing to its defining characteristics of large-scale geo-distribution, mobility and heterogeneity. This article provides an overview of the existing issues and challenges in fog computing.
Nowadays, data has become the core resource of the information society. However, with the development of data analysis techniques, privacy violations such as leakage of sensitive data and exposure of personal identities are also increasing. Differential privacy is a technique that satisfies the requirement that nothing additional should be disclosed beyond the information in the database itself, and it is well known for protecting privacy against arbitrary attacks. However, recent research argues that there are several ways to infer sensitive information from data even when differential privacy is applied. One such inference method exploits the correlation between data items. In this paper, we investigate new privacy threats based on attribute correlation, which are not covered by traditional studies, and propose a privacy-preserving technique that configures the noise parameter of differential privacy to counter this new threat. In our experiments, we show the weaknesses of the traditional differential privacy method and validate that the proposed noise parameter configuration provides sufficient privacy protection while maintaining data utility.
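A standard way to reconfigure the noise parameter for correlated records, in the spirit of the approach above, is group privacy: if up to k records are correlated, the Laplace scale is inflated as if the sensitivity were k times larger. The sketch below shows that mechanism; how k is derived from attribute correlations is the paper's contribution and is not reproduced here.

```python
# Hedged sketch: Laplace mechanism with a correlation-aware noise scale.
import numpy as np

def laplace_count(true_count, epsilon, corr_group_size=1, sensitivity=1.0):
    """epsilon-DP noisy count; corr_group_size inflates the effective
    sensitivity to account for correlated records (group privacy)."""
    scale = corr_group_size * sensitivity / epsilon
    return true_count + np.random.default_rng().laplace(0.0, scale)

print(laplace_count(120, epsilon=0.5))                      # i.i.d. assumption
print(laplace_count(120, epsilon=0.5, corr_group_size=4))   # 4 correlated rows
```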
Differential privacy is an approach that preserves patient privacy while permitting researchers access to medical data. This paper presents mechanisms proposed to satisfy differential privacy while answering a given workload of range queries. Representing the input data as a vector of counts, these methods partition the vector according to relationships between the data and the ranges of the given queries. After partitioning the vector into buckets, the count of each bucket is estimated privately and split among the bucket's positions to answer the given query set. The performance of the proposed method was evaluated using different workloads over several attributes. The results show that partitioning the vector based on the data can produce more accurate answers, while partitioning the vector based on the given workload improves privacy. This paper's two main contributions are: (1) improving earlier work on partitioning mechanisms by building a greedy algorithm that partitions the counts vector efficiently, and (2) an adaptive algorithm that considers the sensitivity of the given queries before producing results.
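The bucket mechanism described above can be sketched directly: add Laplace noise to each bucket total, split the noisy total uniformly across the bucket's positions, then answer range queries from the estimates. The fixed partition below stands in for the paper's greedy/adaptive partition choice; since buckets are disjoint and one record changes one count, Laplace(1/ε) per bucket suffices for ε-DP.

```python
# Hedged sketch: partition -> noisy bucket totals -> uniform split.
import numpy as np

def private_answers(counts, buckets, epsilon):
    rng = np.random.default_rng(0)
    est = np.empty(len(counts), dtype=float)
    for lo, hi in buckets:                     # half-open [lo, hi) buckets
        noisy = counts[lo:hi].sum() + rng.laplace(0, 1.0 / epsilon)
        est[lo:hi] = noisy / (hi - lo)         # split uniformly inside bucket
    return est

counts = np.array([3, 5, 2, 0, 7, 1, 4, 4])
est = private_answers(counts, buckets=[(0, 3), (3, 5), (5, 8)], epsilon=1.0)
print(est[2:6].sum())    # noisy answer to a range query over positions 2..5
```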
Preserving the friendship correlations between individuals when publishing social-network data is a challenging problem. To alleviate it, the uncertain graph has recently been proposed. The main idea of the uncertain graph is to convert an original graph into an uncertain form in which the correlations between individuals are associated probabilities. However, existing uncertain-graph methods lack rigorous privacy guarantees and rely on assumptions about the adversary's knowledge. In this paper we first introduce a general model for constructing uncertain graphs. We then propose an algorithm under this model based on differential privacy and analyze its privacy properties. Our algorithm provides rigorous privacy guarantees and withstands background-knowledge attacks. Experiments show that the proposed algorithm satisfies differential privacy and is feasible in practice; compared with the (k, ε)-obfuscation algorithm in terms of data utility, it preserves the importance of nodes in the network to a similar degree.
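One generic way to build a differentially private uncertain graph, shown below, is randomized response on the adjacency matrix: each potential edge is reported truthfully with probability e^ε/(e^ε + 1), so every reported edge carries an associated probability, as in the uncertain-graph model. This is a standard construction used for illustration, not necessarily the paper's algorithm, and entries are perturbed independently (ignoring symmetry) for simplicity.

```python
# Hedged sketch: epsilon-edge-DP uncertain graph via randomized response.
import numpy as np

def dp_uncertain_graph(adj, epsilon):
    rng = np.random.default_rng(0)
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)  # P[report truthfully]
    flip = rng.random(adj.shape) > p_truth
    noisy = np.where(flip, 1 - adj, adj)
    return noisy, p_truth          # p_truth is each reported edge's reliability

adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
noisy, p = dp_uncertain_graph(adj, epsilon=1.0)
print(noisy)
print("reliability of each reported entry:", round(p, 3))
```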
Compared with the traditional grid, the energy internet collects data more widely and connects far more devices. Analysis of electrical data using Non-intrusive Load Monitoring (NILM) can infer private user behavior, so considering both data security and availability is a problem that must be addressed. Owing to its rigorous and provable privacy guarantee, differential privacy has been widely applied to privacy-preserving data release and data mining. However, because of the high sensitivity involved, adding noise directly renders the data unusable. In this paper, we propose a differentially private mechanism to protect privacy in the energy internet. Our focus is aggregated data released by the data owner after noise has been added to the disaggregated data. Theoretical proofs and experiments show that our scheme achieves both privacy preservation and data availability.
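A minimal sketch of the stated release pattern: the data owner adds Laplace noise to the disaggregated (per-appliance) readings and releases only the aggregate, blunting NILM-style inference on individual loads. The epsilon value, per-appliance sensitivity, and the readings themselves are illustrative assumptions.

```python
# Hedged sketch: noise on disaggregated loads, release only the aggregate.
import numpy as np

rng = np.random.default_rng(0)

def private_aggregate(appliance_kw, epsilon, sensitivity=1.0):
    noisy = [x + rng.laplace(0, sensitivity / epsilon) for x in appliance_kw]
    return sum(noisy)             # only the noisy aggregate leaves the home

readings = [0.12, 1.50, 0.03, 0.75]     # fridge, heater, router, oven (kW)
print(private_aggregate(readings, epsilon=0.5))
```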
To address the risk of privacy disclosure in the classification process, this paper proposes DPRF-gini, a random forest algorithm under differential privacy. In building each decision tree, the algorithm first perturbs feature selection and attribute partitioning using the exponential mechanism, and then meets the requirements of differential privacy by adding Laplace noise to the leaf nodes. Empirical results show that, compared with the original algorithm, data privacy protection is further enhanced while accuracy is only slightly reduced.
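The two named mechanisms are standard and can be sketched directly: the exponential mechanism samples a split feature with probability proportional to exp(ε·q/(2Δ)) of its quality score (here a stand-in Gini gain), and Laplace noise perturbs the leaf counts. Scores, budgets, and the sensitivity value are illustrative, not the paper's calibration.

```python
# Hedged sketch: exponential mechanism for split selection + Laplace leaves.
import numpy as np

rng = np.random.default_rng(0)

def exp_mech_choice(quality, epsilon, sensitivity=1.0):
    """Sample index i with probability proportional to exp(eps*q_i/(2*sens))."""
    q = np.asarray(quality, dtype=float)
    w = np.exp(epsilon * (q - q.max()) / (2 * sensitivity))   # stabilized
    return rng.choice(len(q), p=w / w.sum())

gini_gain = [0.10, 0.35, 0.33, 0.05]       # candidate split features' scores
feat = exp_mech_choice(gini_gain, epsilon=1.0)

leaf_count = 17
noisy_leaf = leaf_count + rng.laplace(0, 1.0 / 0.5)   # Laplace(1/eps), eps=0.5
print("chosen feature:", feat, "noisy leaf count:", round(noisy_leaf, 2))
```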
Distributed data aggregation via summation (counting) helps us learn the insights behind raw data. However, such computation suffers from a high privacy risk of malicious collusion attacks, in which colluding adversaries infer a victim's private data from the gaps between the aggregation outputs and their own source data. Among the solutions against such collusion attacks, Distributed Differential Privacy (DDP) is notably effective at preserving privacy. Specifically, a DDP scheme guarantees global differential privacy (the presence or absence of any data curator barely impacts the aggregation outputs) by ensuring local differential privacy at each data curator. To guarantee the overall privacy of a distributed data aggregation system against malicious collusion attacks, some existing work on DDP schemes aims to provide an estimated lower bound on the privacy budget for global differential privacy. However, there are two main problems: low data utility caused by using a large global function sensitivity, and an unknown privacy guarantee when the aggregation sensitivity of the whole system is less than the sum of the curators' aggregation sensitivities. To address these problems while ensuring distributed differential privacy, we provide a new lower bound on the privacy budget that works with an unconditional aggregation sensitivity of the whole distributed system. Moreover, we study the performance of our privacy bound under different data-update scenarios. Both theoretical and experimental evaluations show that our privacy bound offers better global privacy performance than existing work.
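A standard building block behind such DDP schemes, sketched below, exploits the infinite divisibility of the Laplace distribution: if each of n curators adds the difference of two Gamma(1/n, b) samples, the summed noise is exactly Laplace(b) with b = sensitivity/ε, so the aggregate is ε-DP even though no single party knows the total noise. This illustrates the mechanism class only; the paper's new lower-bound derivation is not reproduced here.

```python
# Hedged sketch: distributed Laplace noise via Gamma shares (Laplace
# infinite divisibility), the classic DDP aggregation trick.
import numpy as np

rng = np.random.default_rng(0)

def curator_noise(n, b):
    """One curator's noise share; the sum over n curators is Laplace(b)."""
    return rng.gamma(1.0 / n, b) - rng.gamma(1.0 / n, b)

n, sensitivity, epsilon = 10, 1.0, 0.5
b = sensitivity / epsilon
local = [x + curator_noise(n, b) for x in np.ones(n)]   # each adds a share
print(sum(local))    # the aggregate carries Laplace(b) noise in total
```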