Biblio
Some IoT data are time-sensitive and cannot be processed in clouds, which are too far away from IoT devices. Fog computing, located as close as possible to data sources at the edge of IoT systems, addresses this problem. Some IoT data are also sensitive and require privacy controls. The proposed Policy Enforcement Fog Module (PEFM), running within a single fog, operates close to the data sources connected to that fog and enforces privacy policies for all sensitive IoT data generated by these sources. PEFM distinguishes two kinds of fog data processing. First, fog nodes process data for local IoT applications running within the local fog; all real-time data processing must be local to satisfy real-time constraints. Second, fog nodes disseminate data to nodes beyond the local fog (including remote fogs and clouds) for remote (and non-real-time) IoT applications. PEFM has one component for each kind of fog data processing. First, the Local Policy Enforcement Module (LPEM) performs direct privacy policy enforcement for sensitive data accessed by local IoT applications. Second, the Remote Policy Enforcement Module (RPEM) sets up a mechanism for indirectly enforcing privacy policies for sensitive data sent to remote IoT applications. RPEM is based on creating and disseminating Active Data Bundles: software constructs that inseparably bundle sensitive data, their privacy policies, and an execution engine able to enforce those policies. To demonstrate the effectiveness and efficiency of the solution, we developed a proof-of-concept scenario for a smart home IoT application. We investigate privacy threats for sensitive IoT data and show a framework for using PEFM to overcome these threats.
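A minimal sketch of the Active Data Bundle idea described in this abstract, assuming a simple dictionary-based policy format; the class and field names (ActiveDataBundle, visible_fields, etc.) are illustrative assumptions, and the paper's actual execution engine is only approximated by the access method here.

```python
# Hypothetical sketch of an Active Data Bundle (ADB): sensitive data, its
# privacy policy, and an enforcement routine travel together, so the policy
# can still be enforced after the bundle leaves the local fog.
from dataclasses import dataclass


@dataclass
class ActiveDataBundle:
    data: dict      # sensitive IoT readings
    policy: dict    # assumed shape: allowed_domains, allowed_purposes, visible_fields

    def access(self, requester_domain: str, purpose: str) -> dict:
        """The bundled 'execution engine': evaluate the policy before
        releasing any data, wherever the bundle was disseminated to."""
        if requester_domain not in self.policy["allowed_domains"]:
            raise PermissionError(f"domain {requester_domain!r} not permitted")
        if purpose not in self.policy["allowed_purposes"]:
            raise PermissionError(f"purpose {purpose!r} not permitted")
        # Release only the fields the policy exposes for this purpose.
        visible = self.policy["visible_fields"].get(purpose, set())
        return {k: v for k, v in self.data.items() if k in visible}


bundle = ActiveDataBundle(
    data={"heart_rate": 72, "location": "bedroom"},
    policy={
        "allowed_domains": {"remote-fog.example"},
        "allowed_purposes": {"health_monitoring"},
        "visible_fields": {"health_monitoring": {"heart_rate"}},
    },
)
print(bundle.access("remote-fog.example", "health_monitoring"))  # {'heart_rate': 72}
```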
Among the various types of role-based access control (RBAC) policies proposed in the literature, contextual policies take into account the user's location and the time at which she requests access. Precisely characterizing the context in such policies and defining an access decision procedure for them are non-trivial tasks, since they have to take into account the various facets of the temporal and spatial expressions occurring in these policies. Existing approaches for modeling contextual policies do not support all the various spatio-temporal concepts and often do not provide an access decision procedure. In this paper, we propose a model-driven approach to representing and checking RBAC contextual policies. We introduce GemRBAC+CTX, an extension of a generalized conceptual model for RBAC, which contains all the concepts required to model contextual policies. We formalize these policies as constraints on the GemRBAC+CTX model, using the Object Constraint Language (OCL), as a way to operationalize access decisions for users' requests using model-driven technologies. We show the application of GemRBAC+CTX to model the RBAC contextual policies of an application developed by HITEC Luxembourg, a provider of situational-aware information management systems for emergency scenarios. The use of GemRBAC+CTX has allowed the engineers of HITEC to define several new types of contextual policies with a fine-grained, precise description of contexts. The preliminary experimental results show the feasibility of applying our model-driven approach for making access decisions in real systems.
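An illustrative sketch of a contextual RBAC access decision in the spirit of GemRBAC+CTX: a role assignment is valid only within a temporal and spatial context. All class and field names below are assumptions for illustration; the paper formalizes such checks as OCL constraints on the GemRBAC+CTX model rather than as imperative code.

```python
# Assumed toy model: a role assignment carries a context (allowed hours and
# logical locations); the decision procedure grants access only when the
# request's time and location match a valid assignment.
from dataclasses import dataclass
from datetime import time


@dataclass
class Context:
    allowed_hours: tuple          # temporal expression: (start, end) as datetime.time
    allowed_locations: frozenset  # spatial expression: logical location names


@dataclass
class RoleAssignment:
    user: str
    role: str
    context: Context


def access_decision(assignments, user, role, now, location) -> bool:
    """Grant iff the user holds the role under a context matching the request."""
    for a in assignments:
        if a.user == user and a.role == role:
            start, end = a.context.allowed_hours
            if start <= now <= end and location in a.context.allowed_locations:
                return True
    return False


ctx = Context((time(8), time(18)), frozenset({"HQ", "field-office"}))
assignments = [RoleAssignment("alice", "incident_manager", ctx)]
print(access_decision(assignments, "alice", "incident_manager", time(9, 30), "HQ"))  # True
print(access_decision(assignments, "alice", "incident_manager", time(22, 0), "HQ"))  # False
```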
Blockchain-based cryptocurrencies offer an appealing alternative to fiat currencies, due to their decentralized and borderless nature. However, the decentralized setting makes the authentication process more challenging: standard cryptographic methods often rely on the ability of users to reliably store a (large) secret. What happens if a user's key is lost or stolen? Blockchain systems lack fallback mechanisms that allow one to recover from such an event, whereas the traditional banking system has developed and deploys quite effective solutions. In this work, we develop new cryptographic techniques to integrate security policies (developed in the traditional banking domain) into the blockchain setting. We propose a system where a smart contract is given custody of the user's funds and has the ability to invoke a two-factor authentication (2FA) procedure in case of an exceptional event (e.g., a particularly large transaction or a key recovery request). To enable this, the owner of the account secret-shares the answers to some security questions among a committee of users. When the 2FA mechanism is triggered, the committee members can provide the smart contract with enough information to check whether an authentication attempt was successful, and nothing more. We then design a protocol that securely and efficiently implements such a functionality: the protocol is round-optimal, is robust to the corruption of a subset of committee members, supports low-entropy secrets, and is concretely efficient. As a stepping stone towards the design of this protocol, we introduce a new threshold homomorphic encryption scheme for linear predicates from bilinear maps, which may be of independent interest. To substantiate the practicality of our approach, we implement the above protocol as a smart contract in Ethereum and show that it can be used today as an additional safeguard for suspicious transactions, at minimal added cost. We also implement a second scheme where the smart contract additionally requests a signature from a physical hardware token whose verification key is registered upfront by the owner of the funds. We show how to integrate the widely used universal two-factor authentication (U2F) tokens into blockchain environments, thus enabling the deployment of our system with available hardware.
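A simplified sketch of the secret-sharing step described above: the account owner splits an answer to a security question among a committee so that any t members can reconstruct it and no fewer learn anything. This deliberately omits the paper's threshold homomorphic encryption and on-chain verification; plain Shamir sharing, the field size, and all names are assumptions standing in for the real protocol.

```python
# Shamir secret sharing over GF(P): share a low-entropy security answer
# among n committee members with reconstruction threshold t.
import secrets

P = 2**127 - 1  # a Mersenne prime used as the field modulus (illustrative choice)


def share(secret: int, n: int, t: int):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]


def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total


answer = int.from_bytes(b"first pet: rex", "big")  # low-entropy security answer
committee = share(answer, n=5, t=3)
assert reconstruct(committee[:3]) == answer        # any 3 of 5 shares suffice
print("reconstructed OK")
```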
Android privacy control is an important but difficult problem to solve. Previous research efforts have focused either on extending the Android permission model with better policies or on modifying the Android framework for fine-grained access control. In this work, we take an integral approach by designing and implementing SweetDroid, a calling-context-sensitive privacy policy enforcement framework. SweetDroid combines automated policy generation with automated policy enforcement. The automatically generated policies in SweetDroid are based on the calling contexts of privacy-sensitive APIs; hence, SweetDroid is able to tell whether a particular API (e.g., getLastKnownLocation) under a certain execution path is leaking private information. The policy enforcement in SweetDroid is also fine-grained: it operates at the individual API level, not at the permission level. We implement and evaluate the system on thousands of Android apps, including apps from a third-party market and malicious apps from VirusTotal. Our experimental results show that SweetDroid can successfully distinguish and enforce different privacy policies based on calling contexts, and that the current design is both developer hassle-free and user-transparent. SweetDroid is also efficient because it introduces only small storage and computational overhead.
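A minimal sketch of the calling-context idea behind SweetDroid: the same sensitive API call is allowed or denied depending on the call path that reached it. Python stack inspection stands in for SweetDroid's instrumentation of Android apps, and the function names and policy table are illustrative assumptions, not the framework's API.

```python
# Calling-context-sensitive enforcement: decisions are keyed by
# (sensitive API, immediate caller), not by a coarse permission.
import inspect

POLICY = {
    ("get_last_known_location", "show_nearby_weather"): "allow",
    ("get_last_known_location", "build_ad_request"): "deny",
}


def get_last_known_location():
    caller = inspect.stack()[1].function  # one level of calling context
    decision = POLICY.get(("get_last_known_location", caller), "deny")
    if decision != "allow":
        raise PermissionError(f"blocked: called from {caller!r}")
    return (40.7128, -74.0060)            # dummy coordinates


def show_nearby_weather():
    return get_last_known_location()      # legitimate path: allowed


def build_ad_request():
    return get_last_known_location()      # leaking path: blocked


print(show_nearby_weather())
try:
    build_ad_request()
except PermissionError as e:
    print(e)
```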
The emphasis on exhaustive passive capture of images using wearable cameras like Autographer, a practice often known as lifelogging, has brought to the foreground the challenge of preserving privacy, in addition to that of presenting the vast number of images in a meaningful way. In this paper, we present a user study to understand the importance of an array of factors that are likely to influence lifeloggers in sharing their lifelog images within their online circles. The findings are a step forward in the emerging area at the intersection of HCI and privacy, helping to explore design directions for privacy-mediating techniques in lifelogging applications.
Unease over data privacy will retard consumer acceptance of IoT deployments. The primary source of discomfort is a lack of user control over raw data that is streamed directly from sensors to the cloud. This is a direct consequence of the over-centralization of today's cloud-based IoT hub designs. We propose a solution that interposes a locally-controlled software component called a privacy mediator on every raw sensor stream. Each mediator is in the same administrative domain as the sensors whose data is being collected, and dynamically enforces the current privacy policies of the owners of the sensors or mobile users within the domain. This solution necessitates a logical point of presence for mediators within the administrative boundaries of each organization. Such points of presence are provided by cloudlets, which are small locally-administered data centers at the edge of the Internet that can support code mobility. The use of cloudlet-based mediators aligns well with natural personal and organizational boundaries of trust and responsibility.
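An illustrative sketch of the privacy mediator described above: a locally controlled component interposed on a raw sensor stream, so readings pass through the owner's current policy before anything leaves the administrative domain. The specific policy operations shown (suppressing rooms, dropping audio, coarsening values) are assumed examples, not the paper's interface.

```python
# A mediator wraps a raw sensor stream; only policy-compliant readings
# are released toward the cloud. Running it on a cloudlet keeps the
# policy inside the owner's administrative domain.
from typing import Callable, Iterable, Optional


def mediator(stream: Iterable[dict], policy: Callable[[dict], Optional[dict]]):
    """Apply the owner's policy to every raw reading; None means suppress."""
    for reading in stream:
        released = policy(reading)
        if released is not None:
            yield released


def home_policy(reading: dict) -> Optional[dict]:
    if reading.get("room") == "bathroom":
        return None                               # suppress entirely
    released = dict(reading)
    released.pop("audio", None)                   # never release raw audio
    released["temp"] = round(released["temp"])    # coarsen precision
    return released


raw = [
    {"room": "kitchen", "temp": 21.37, "audio": b"\x01\x02"},
    {"room": "bathroom", "temp": 23.9, "audio": b"\x03"},
]
for r in mediator(raw, home_policy):
    print(r)  # {'room': 'kitchen', 'temp': 21}
```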
Online privacy policies inform a website's users about how their personal information is collected, processed, and stored. Against the background of rising privacy concerns, privacy policies seem to represent an influential instrument for increasing customer trust and loyalty. In practice, however, consumers seem to read privacy policies only in rare cases, possibly reflecting the common assumption that policies are hard to comprehend. By designing and implementing an automated extraction and readability analysis toolset that embodies a diversity of established readability measures, we present the first large-scale study that provides current empirical evidence on the readability of nearly 50,000 privacy policies of popular English-language websites. The results empirically confirm that, on average, current privacy policies are still hard to read. Furthermore, this study presents new theoretical insights for readability research, in particular on the extent to which practical readability measures are correlated. Specifically, it shows the redundancy of several well-established readability metrics such as SMOG, RIX, LIX, GFI, FKG, ARI, and FRES, thus easing future choices and comparisons between readability studies, as well as calling for research towards a readability measures framework. Moreover, a more sophisticated privacy policy extractor and analyzer, as well as a solid policy text corpus for further research, are provided.
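A minimal sketch of one of the readability measures the study compares: the Automated Readability Index (ARI), chosen here because it needs only character, word, and sentence counts. The regex-based sentence splitter is a crude assumption; the study itself uses an established toolset over the real policy corpus.

```python
# ARI = 4.71 * (characters/words) + 0.5 * (words/sentences) - 21.43,
# yielding an approximate US grade level needed to read the text.
import re


def ari(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    chars = sum(len(w.strip(".,;:!?")) for w in words)
    return 4.71 * (chars / len(words)) + 0.5 * (len(words) / len(sentences)) - 21.43


policy = ("We collect personal information when you use our services. "
          "This information may be shared with affiliated third parties "
          "for the purposes described in this policy.")
print(f"ARI grade level: {ari(policy):.1f}")
```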
The purpose of the General Data Protection Regulation (GDPR) is to provide improved privacy protection. If an app controls personal data from users, it needs to be compliant with GDPR. However, GDPR lists general rules rather than exact step-by-step guidelines about how to develop an app that fulfills the requirements. Therefore, GDPR compliance violations may exist in deployed apps, posing severe privacy threats to their users. In this paper, we take mobile health applications (mHealth apps) as a peephole to examine the status quo of GDPR compliance in Android apps. We first propose an automated system, named HPDROID, to bridge the semantic gap between the general rules of GDPR and app implementations by identifying the data practices declared in the app privacy policy and the data-relevant behaviors in the app code. Then, based on HPDROID, we detect three kinds of GDPR compliance violations: incompleteness of the privacy policy, inconsistency of data collection, and insecurity of data transmission. We perform an empirical evaluation of 796 mHealth apps. The results reveal that 189 (23.7%) of them do not provide complete privacy policies. Moreover, 59 apps collect sensitive data through different means, but 46 (77.9%) of them exhibit at least one inconsistent collection behavior. Even worse, among these 59 apps, only 8 attempt to secure the transmission of the collected data, yet all 8 contain at least one encryption or SSL misuse. Our work exposes severe privacy issues to raise awareness of privacy protection among app users and developers.
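A conceptual sketch of the consistency check HPDROID performs, reduced to its core: compare the data practices declared in the privacy policy against the data types the app code actually collects. The extraction steps (NLP over the policy text, static analysis of the APK) are elided, and the two input sets below are assumed examples.

```python
# Inconsistency of data collection, as a set difference: anything the
# code collects but the policy never declares is a candidate violation.
declared = {"email", "heart_rate"}                      # from the privacy policy
observed = {"email", "heart_rate", "location", "imei"}  # from app code analysis

undeclared = observed - declared   # collected but never disclosed
unused = declared - observed       # disclosed but never collected

if undeclared:
    print(f"inconsistent collection (potential GDPR violation): {sorted(undeclared)}")
if unused:
    print(f"over-declared in policy: {sorted(unused)}")
```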
The lifelogging activity enables a user, the lifelogger, to passively capture multimodal records from a first-person perspective and ultimately create a visual diary encompassing every possible aspect of her life in unprecedented detail. In recent years, it has gained popularity among different groups of users. However, the possibility of the ubiquitous presence of lifelogging devices, especially in private spheres, has raised serious concerns with respect to personal privacy. Different practitioners and active researchers in the field of lifelogging have analysed the issue of privacy in lifelogging and proposed different mitigation strategies. However, none of the existing works has considered a well-defined privacy threat model in the domain of lifelogging. Without a proper threat model, any analysis and discussion of privacy threats in lifelogging remains incomplete. In this paper, we aim to fill this gap by introducing a first-ever privacy threat model identifying several threats with respect to lifelogging. We believe that the introduced threat model will be an essential tool and will act as the basis for any further research within this domain.
When clients interact with a cloud-based service, they expect certain levels of quality-of-service guarantees. These are expressed as security and privacy policies, interaction authorization policies, and service performance policies, among others. The main security challenge in a cloud-based service environment, typically modeled using service-oriented architecture (SOA), is that it is difficult to trust all services in a service composition. In addition, the details of the services involved in an end-to-end service invocation chain are usually not exposed to the clients. The complexity of SOA services and multi-tenancy in the cloud environment lead to a large attack surface. In this paper, we propose a novel approach for end-to-end security and privacy in cloud-based service orchestrations, which uses a service activity monitor to audit the activities of services in a domain. The service monitor intercepts interactions between a client and services, as well as among services, and provides a pluggable interface for different modules to analyze service interactions and make dynamic decisions based on security policies defined over the service domain. Experiments with a real-world service composition scenario demonstrate that the overhead of monitoring is acceptable for real-time operation of Web services.
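A minimal sketch of the service activity monitor described above: an interceptor sits between callers and services and consults pluggable analysis modules before letting an interaction through. The module and policy shapes are illustrative assumptions, not the paper's interfaces.

```python
# Pluggable monitoring: each analysis module inspects an interaction and
# votes; the monitor forwards the call only if all modules approve.
from typing import Callable

AnalysisModule = Callable[[dict], bool]  # returns True if the interaction may proceed


def domain_policy_module(interaction: dict) -> bool:
    trusted = {"billing-svc", "catalog-svc"}          # assumed domain policy
    return interaction["callee"] in trusted


def rate_limit_module(interaction: dict, _seen={}) -> bool:
    # _seen is an intentional mutable default acting as a per-caller counter.
    _seen[interaction["caller"]] = _seen.get(interaction["caller"], 0) + 1
    return _seen[interaction["caller"]] <= 100


class ServiceMonitor:
    def __init__(self, modules):
        self.modules = modules

    def intercept(self, caller: str, callee: str, payload: dict) -> str:
        interaction = {"caller": caller, "callee": callee, "payload": payload}
        if all(m(interaction) for m in self.modules):
            return f"forwarded {caller} -> {callee}"  # would invoke the real service
        raise PermissionError(f"blocked {caller} -> {callee}")


monitor = ServiceMonitor([domain_policy_module, rate_limit_module])
print(monitor.intercept("client", "billing-svc", {"op": "charge"}))
try:
    monitor.intercept("client", "unknown-svc", {"op": "exfiltrate"})
except PermissionError as e:
    print(e)
```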