Biblio
Community Health Workers (CHWs) have been using Mobile Health Data Collection Systems (MDCSs) to support the delivery of primary healthcare and to carry out public health surveys, feeding national-level databases with families' personal data. Such systems are used for public surveillance and manage sensitive data (i.e., health data), so addressing privacy issues is crucial for successfully deploying MDCSs. In this paper we present a comprehensive privacy threat analysis for MDCSs, discuss the privacy challenges and provide recommendations that are especially useful to health managers and developers. We ground our analysis on a large-scale MDCS used for primary care (GeoHealth) and a well-known Privacy Impact Assessment (PIA) methodology. The threat analysis is based on a compilation of relevant privacy threats from the literature as well as brainstorming sessions with privacy and security experts. Among the main findings, we observe that existing MDCSs do not employ adequate controls for achieving transparency and intervenability, thus threatening fundamental privacy principles such as data quality, the right to access and the right to object. Furthermore, although there has been significant research on data security issues, attention to privacy in its multiple dimensions is notably lacking.
The Healthcare Internet of Things (HIoT) is transforming the healthcare industry by providing large-scale connectivity for medical devices and for the patients, physicians, and clinical and nursing staff who use them, and by facilitating real-time monitoring based on the information gathered from the connected things. The heterogeneity and vastness of this network provide both opportunities and challenges for information collection and sharing. Patient-centric information, such as health status and the medical devices patients use, must be protected to respect their safety and privacy, while healthcare knowledge should be shared in confidence by experts for healthcare innovation and timely treatment of patients. In this paper an overview of HIoT is given, comparing its characteristics to those of Big Data, and a security and privacy architecture is proposed for it. A context-sensitive role-based access control scheme is discussed to ensure that HIoT is reliable, provides data privacy, and achieves regulatory compliance.
In this paper, we report our work on using machine learning techniques to predict back-bending activity based on field data acquired in a local nursing home. The data are recorded by a privacy-aware compliance tracking system (PACTS). The objective of PACTS is to detect back-bending activities and issue real-time alerts to the participant when she bends her back excessively, which we hope could help the participant form good habits of using proper body mechanics when performing lifting/pulling tasks. We show that our algorithms can differentiate nursing staff's baseline and high-level bending activities using human skeleton data without any expert rules.
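The abstract above does not specify the features PACTS extracts; one plausible baseline, sketched here under assumed conventions (a vertical +y axis, hypothetical hip/shoulder joint coordinates, and a made-up 45° threshold), is to measure the trunk's deviation from vertical and threshold it:

```python
import math

def bend_angle(shoulder, hip):
    """Angle (degrees) between the hip-to-shoulder trunk vector and vertical.

    `shoulder` and `hip` are (x, y, z) joint positions from skeleton data;
    the vertical axis is assumed to be +y.
    """
    vx, vy, vz = (s - h for s, h in zip(shoulder, hip))
    norm = math.sqrt(vx * vx + vy * vy + vz * vz)
    cos_theta = vy / norm
    # clamp to guard against floating-point drift outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

def classify(angle_deg, threshold=45.0):
    """Hypothetical rule: label as high-level bending above the threshold."""
    return "high-bend" if angle_deg >= threshold else "baseline"
```

A learned classifier would replace the fixed threshold, but the angle feature illustrates how skeleton joints alone, without expert rules, can separate upright from bent postures.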
Consent is a key measure for privacy protection and needs to be 'meaningful' to give people informational power. It is increasingly important that individuals are provided with real choices and are empowered to negotiate for meaningful consent. Meaningful consent is an important area for consideration in IoT systems since privacy is a significant factor impacting the adoption of IoT. Obtaining meaningful consent is becoming increasingly challenging in IoT environments. It is proposed that an "apparency, pragmatic/semantic transparency model" adopted for data management could make consent more meaningful, that is, visible, controllable and understandable. The model has illustrated the 'why' and 'what' issues regarding data management for potential meaningful consent [1]. In this paper, we focus on the 'how' issue, i.e. how to implement the model in IoT systems. We discuss apparency by focusing on the interactions and data actions in the IoT system; pragmatic transparency by centring on the privacy risks and threats of data actions; and semantic transparency by focusing on the terms and language used by individuals and the experts. We believe that our discussion will elicit more research on the apparency model in IoT for meaningful consent.
Situational awareness during sophisticated cyber attacks on the power grid is critical for the system operator to perform suitable attack response and recovery functions to ensure grid reliability. The overall theme of this paper is to identify existing practical issues and challenges that utilities face while monitoring substations, and to suggest potential approaches to enhance situational awareness for grid operators. In this paper, we provide a broad discussion of the various gaps that exist in the utility industry today in monitoring substations, and how those gaps could be addressed by identifying the various data sources and monitoring tools to improve situational awareness. The paper also briefly describes the advantages of contextualizing and correlating substation monitoring alerts using expert systems at the control center to obtain a holistic, system-level view of potentially malicious cyber activity at the substations before it impacts grid operation.
Security Evaluation and Management (SEM) is a critically important process for protecting an Embedded System (ES) from various kinds of security exploits. In general, SEM processes face challenges that limit their efficiency. Some of these are system-based challenges, such as the heterogeneity among a system's components and the system's size. Others are expert-based challenges, such as the possibility of mis-evaluation and the non-continuous availability of experts. Many of these challenges were addressed by the Multi Metric (MM) framework, which depends on expert (subjective) evaluation for its basic evaluations. Despite its productivity, subjective evaluation has drawbacks (e.g. expert mis-evaluation) that foster the need to consider objective evaluations in the MM framework. In addition, the MM framework is system-centric; thus, when modelling large, complex systems with it, a guide is needed to indicate the changes required to reach desired security requirements. This paper proposes extensions to the MM framework that incorporate objective evaluations and serve as a guide for the changes needed to satisfy desired security requirements.
The widespread adoption of Location-Based Services (LBSs) has come with controversy about privacy. While leveraging location information improves services through geo-contextualization, it raises privacy concerns, as new knowledge can be inferred from location records, such as home/work places, habits or religious beliefs. To overcome this problem, several Location Privacy Protection Mechanisms (LPPMs) have been proposed in the literature in recent years. However, every mechanism comes with its own configuration parameters that directly impact the privacy guarantees and the resulting utility of protected data. In this context, it can be difficult for a non-expert system designer to choose appropriate configuration parameters according to the expected privacy and utility. In this paper, we present a framework enabling the easy configuration of LPPMs. To achieve that, our framework performs an offline, in-depth automated analysis of LPPMs to provide the formal relationship between their configuration parameters and both privacy and utility metrics. This framework is modular: by using different metrics, a system designer is able to fine-tune her LPPM according to her expected privacy and utility guarantees (i.e., the guarantee itself and the level of this guarantee). To illustrate the capability of our framework, we analyse Geo-Indistinguishability (a well-known differentially private LPPM) and provide the formal relationship between its ε configuration parameter and two privacy and utility metrics.
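For context on the ε parameter analysed above: Geo-Indistinguishability is typically achieved by adding planar Laplace noise to the true location. A minimal sketch (not the paper's framework; the bisection-based CDF inversion is one of several possible sampling strategies) makes the privacy/utility trade-off concrete, since the expected displacement is 2/ε:

```python
import math
import random

def planar_laplace(x, y, epsilon, rng=random):
    """Sample a geo-indistinguishable location around (x, y).

    Direction is uniform; the radius follows the planar Laplace
    distribution with CDF C(r) = 1 - (1 + eps*r) * exp(-eps*r),
    inverted here numerically by bisection.
    """
    theta = rng.uniform(0.0, 2.0 * math.pi)
    p = rng.random()
    cdf = lambda r: 1.0 - (1.0 + epsilon * r) * math.exp(-epsilon * r)
    lo, hi = 0.0, 1.0
    while cdf(hi) < p:          # grow the bracket until it covers p
        hi *= 2.0
    for _ in range(80):         # bisect to invert the CDF
        mid = (lo + hi) / 2.0
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    r = (lo + hi) / 2.0
    return x + r * math.cos(theta), y + r * math.sin(theta)
```

A smaller ε yields stronger privacy but larger displacement, which is exactly the parameter/metric relationship a non-expert designer needs a framework to navigate.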
When customers purchase a product or sign up for a service from a company, they are often required to agree to a Privacy Policy or Terms of Service agreement. Many of these policies are lengthy, and a typical customer agrees to them without reading them carefully, if at all. To address this problem, we have developed a prototype automatic text summarization system which is specifically designed for privacy policies. Our system generates a summary of a policy statement by identifying important sentences from the statement, categorizing these sentences by which of 5 "statement categories" the sentence addresses, and displaying to a user a list of the sentences which match each category. Our system incorporates keywords identified by a human domain expert and rules that were obtained by machine learning, combined in an ensemble architecture. We have tested our system on a sample corpus of privacy statements, and preliminary results are promising.
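The keyword component of such a system can be sketched as follows. The category names and keyword lists here are hypothetical, not the five categories or expert keywords from the paper, and the real system additionally combines machine-learned rules in an ensemble:

```python
# Hypothetical category -> keyword lists (the paper's expert-curated
# keywords are not reproduced here).
CATEGORIES = {
    "collection": ["collect", "gather", "obtain"],
    "sharing": ["share", "third party", "disclose"],
    "retention": ["retain", "store", "keep"],
    "security": ["encrypt", "protect", "secure"],
    "choice": ["opt out", "consent", "choose"],
}

def categorize(sentence):
    """Return every category whose keywords appear in the sentence."""
    s = sentence.lower()
    return [cat for cat, kws in CATEGORIES.items() if any(k in s for k in kws)]

def summarize(policy_sentences):
    """Group matching sentences under each category, as a crude summary."""
    summary = {cat: [] for cat in CATEGORIES}
    for sent in policy_sentences:
        for cat in categorize(sent):
            summary[cat].append(sent)
    return summary
```

Displaying `summarize(...)` per category mirrors the described user-facing output: a list of matched sentences under each statement category.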
In the last couple of years, organizations have demonstrated an increased willingness to participate in threat intelligence sharing platforms. The open exchange of information and knowledge regarding threats, vulnerabilities, incidents and mitigation strategies results from the organizations' growing need to protect against today's sophisticated cyber attacks. To investigate data quality challenges that might arise in threat intelligence sharing, we conducted focus group discussions with ten expert stakeholders from security operations centers of various globally operating organizations. The study addresses several factors affecting shared threat intelligence data quality at multiple levels, including collecting, processing, sharing and storing data. As expected, the study finds that the main factors affecting shared threat intelligence data stem from the limitations and complexities of integrating and consolidating shared threat intelligence from different sources while ensuring the data's usefulness for an inhomogeneous group of participants. Data quality is extremely important for shared threat intelligence. As our study has shown, there are no fundamentally new data quality issues in threat intelligence sharing. However, as threat intelligence sharing is an emerging domain and a large number of threat intelligence sharing tools are currently being rushed to market, several data quality issues, particularly those related to scalability and data source integration, deserve particular attention.
One of the main concerns for smartphone users is the quality of the apps they download. Before installing any app from the market, users first check its rating and reviews. However, these ratings are not computed by experts and often do not reflect malicious behavior. In this work, we present an IDS/rating system based on a game-theoretic model with crowdsourcing. Our results show that, with minor control over the error in categorizing users and the fraction of experts in the crowd, our system provides proper ratings while flagging all malicious apps.
Collaborative filtering plays an essential role in a recommender system, which recommends a list of items to a user by learning behavior patterns from the user rating matrix. However, if an attacker has some auxiliary knowledge about a user's purchase history, he/she can infer more information about this user, which poses a serious threat to user privacy. Some methods adopt differential privacy algorithms in collaborative filtering by adding noise to the rating matrix. Although they provide theoretically private results, the influence on recommendation accuracy is not discussed. In this paper, we solve the privacy problem in recommender systems in a different way, by applying the differential privacy method within the recommendation procedure itself. We design two differentially private recommender algorithms with sampling, named Differentially Private Item-Based Recommendation with sampling (DP-IR for short) and Differentially Private User-Based Recommendation with sampling (DP-UR for short). Both algorithms are based on the exponential mechanism with a carefully designed quality function. Theoretical analyses of the privacy of these algorithms are presented. We also investigate the accuracy of the proposed method and give theoretical results. Experiments are performed on real datasets to verify our methods.
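The core primitive named in the abstract above, the exponential mechanism, selects an item with probability proportional to exp(ε·q/(2Δq)) for a quality function q with sensitivity Δq. A minimal generic sketch (the paper's carefully designed quality functions for DP-IR/DP-UR are not reproduced here):

```python
import math
import random

def exponential_mechanism(items, quality, epsilon, sensitivity=1.0, rng=random):
    """Pick an item with probability proportional to exp(eps*q/(2*sens)).

    Higher-quality items are exponentially more likely to be chosen,
    while any single rating shifts the selection odds by at most a
    bounded factor -- the differential-privacy guarantee.
    """
    weights = [math.exp(epsilon * quality(i) / (2.0 * sensitivity)) for i in items]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for item, w in zip(items, weights):
        acc += w
        if r <= acc:
            return item
    return items[-1]  # guard against floating-point shortfall
```

In an item-based recommender, `quality` would score candidate items by similarity to the user's history; sampling with this mechanism trades a controlled amount of accuracy for privacy, which is exactly the trade-off the paper analyzes.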
In international military coalitions, situation awareness is achieved by gathering critical intel from different authorities. Authorities want to retain control over their data, as the data are sensitive by nature, and thus usually employ their own authorization solutions to regulate access to them. In this paper, we highlight that harmonizing authorization solutions at the coalition level raises many challenges. We demonstrate how we address authorization challenges in the context of a scenario defined by military experts using a prototype implementation of SAFAX, an XACML-based architectural framework tailored to the development of authorization services for distributed systems.
Establishing security and trust is the first step toward development in both real and virtual societies. Internet-based development is inevitable. The increasing penetration of technology in internet banking, and its effectiveness in contributing to banking profitability and prosperity, requires that satisfied customers turn into loyal customers. Currently, a large number of cyber attacks are focused on online banking systems, and these attacks are considered a significant security threat. Banks or customers might become the victim of the most complicated financial crime, namely internet fraud. This study has developed an intelligent system that enables detecting a user's abnormal behavior in online banking. Since user behavior is associated with uncertainty, the system has been developed based on fuzzy theory, which enables it to identify user behaviors and categorize suspicious behaviors at various levels of intensity. The performance of the fuzzy expert system has been evaluated using a receiver operating characteristic (ROC) curve, which shows an accuracy of 94%. This expert system is promising for improving the security and quality of e-banking services.
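To make the fuzzy approach concrete, here is a minimal sketch of how uncertain behavioral signals could be fuzzified and combined into a suspicion level. The input features, membership-function breakpoints and rules are all hypothetical, not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def suspicion(amount_z, hour_dev, geo_km):
    """Hypothetical fuzzy rules for online-banking anomaly scoring.

    amount_z: z-score of transaction amount vs the user's history
    hour_dev: hours away from the user's usual activity window
    geo_km:   distance (km) from the user's usual login location
    """
    high_amount = tri(amount_z, 1.0, 3.0, 5.0)
    odd_hour = tri(hour_dev, 2.0, 6.0, 12.0)
    far_geo = tri(geo_km, 100.0, 500.0, 2000.0)
    # each rule fires with the min of its antecedents (fuzzy AND);
    # the overall suspicion is the max over rules (fuzzy OR)
    r1 = min(high_amount, odd_hour)
    r2 = min(high_amount, far_geo)
    return max(r1, r2, 0.3 * odd_hour)
```

Graded outputs like this, rather than a hard threshold, are what let a fuzzy system categorize suspicious behavior at various levels of intensity.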
Today's more reliable communication technology, together with the availability of higher computational power, has paved the way for the introduction of more advanced automation systems based on distributed intelligence and multi-agent technology. However, an abundance of data, while making these systems more powerful, can at the same time be their biggest vulnerability. In a web of interconnected devices and components functioning within an automation framework, the potential impact of a malfunction in a single device, whether through internal failure or external damage/intrusion, may lead to detrimental side-effects spread across the whole underlying system. The potentially large number of devices, along with their inherent interrelations and interdependencies, may hinder the ability of human operators to interpret events, identify their scope of impact and take remedial actions if necessary. Through utilization of the concepts of graph-theoretic fuzzy cognitive maps (FCM) and expert systems, this paper puts forth a solution that is able to reveal weak links and vulnerabilities of an automation system, should it become exposed to partial internal failure or external damage. A case study has been performed on the IEEE 34-bus test distribution system to show the efficiency of the proposed scheme.
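An FCM propagates activation between causally linked concepts until the map settles, which is how a single-device failure can be traced to system-wide effects. A minimal sketch of one common FCM update rule, A' = f(A + A·W) with a sigmoid squashing function (the paper's actual concepts and weight matrix are not reproduced; the three-concept chain in the test is invented for illustration):

```python
import math

def fcm_step(state, weights, lam=1.0):
    """One fuzzy-cognitive-map update: A' = sigmoid(A + A.W).

    state:   list of concept activations in [0, 1]
    weights: weights[i][j] is the causal influence of concept i on j
    """
    n = len(state)
    nxt = []
    for j in range(n):
        total = state[j] + sum(state[i] * weights[i][j] for i in range(n))
        nxt.append(1.0 / (1.0 + math.exp(-lam * total)))
    return nxt

def simulate(state, weights, steps=50, tol=1e-6):
    """Iterate the map until concept activations converge."""
    for _ in range(steps):
        new = fcm_step(state, weights)
        if max(abs(a - b) for a, b in zip(new, state)) < tol:
            return new
        state = new
    return state
```

Concepts whose steady-state activation rises sharply after a seeded failure are the weak links the scheme is meant to reveal.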
Threat evaluation is concerned with estimating the intent, capability and opportunity of detected objects in relation to our own assets in an area of interest. Inferring whether a target is threatening, and to what degree, is far from a trivial task. Expert operators normally have at their disposal various support systems that analyze the incoming data and provide recommendations for actions. Since the ultimate responsibility lies with the operators, it is crucial that they trust and know how to configure and use these systems, and that they have a good understanding of their inner workings, strengths and limitations. To limit the negative effects of inadequate cooperation between the operators and their support systems, this paper presents a design proposal that aims at making the threat evaluation process more transparent. We focus on the initialization, configuration and preparation phases of the threat evaluation process, supporting the user in analyzing the behavior of the system with respect to the relevant parameters involved in the threat estimations. To do so, we follow a known design process model and implement our suggestions in a proof-of-concept prototype that we evaluate with military expert system designers.