Biblio

Found 2356 results

Filters: Keyword is privacy
2020-05-11
Ma, Yuxiang, Wu, Yulei, Ge, Jingguo, Li, Jun.  2018.  A Flow-Level Architecture for Balancing Accountability and Privacy. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :984–989.
With the rapid development of the Internet, flow-based approaches have attracted increasing attention. To this end, this paper presents a new and efficient architecture to balance accountability and privacy based on network flows. A self-certifying identifier is proposed to efficiently identify a flow. In addition, a delegate-registry cooperation scheme and a multi-delegate mechanism are developed to ensure users' privacy. The effectiveness and overhead of the proposed architecture are evaluated using a real trace collected from an Internet service provider. The experimental results show that our architecture achieves better network performance in terms of lower resource consumption, lower response time, and higher stability.
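The abstract does not detail how the self-certifying identifier is constructed. As an illustration only (a hypothetical sketch, not the paper's design), a common construction binds a flow to its sender by hashing the sender's public key together with the flow tuple, so any party can verify the binding without consulting a trusted registry:

```python
import hashlib

def self_certifying_flow_id(public_key: bytes, flow_tuple: tuple) -> str:
    """Derive a flow identifier verifiable against the sender's public
    key, without a trusted third party (illustrative construction)."""
    flow_bytes = "|".join(map(str, flow_tuple)).encode()
    return hashlib.sha256(public_key + flow_bytes).hexdigest()

def verify(flow_id: str, public_key: bytes, flow_tuple: tuple) -> bool:
    """A verifier recomputes the hash from the claimed key and flow tuple."""
    return self_certifying_flow_id(public_key, flow_tuple) == flow_id

fid = self_certifying_flow_id(b"alice-pubkey", ("10.0.0.1", "10.0.0.2", 443, "tcp"))
```

Accountability then follows because the identifier is cryptographically tied to a key, while privacy mechanisms (such as the paper's delegates) control who can map that key back to a user.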
2019-04-05
Vastel, A., Rudametkin, W., Rouvoy, R..  2018.  FP-TESTER: Automated Testing of Browser Fingerprint Resilience. 2018 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :103–107.
Despite recent regulations and growing user awareness, undesired browser tracking is increasing. In addition to cookies, browser fingerprinting is a stateless technique that exploits a device's configuration for tracking purposes. In particular, browser fingerprinting builds on attributes made available from JavaScript and HTTP headers to create a unique and stable fingerprint. For example, browser plugins have been heavily exploited by state-of-the-art browser fingerprinters as a rich source of entropy. However, as browser vendors abandon plugins in favor of extensions, fingerprinters will adapt. We present FP-TESTER, an approach to automatically test the effectiveness of browser fingerprinting countermeasure extensions. We implement a testing toolkit to be used by developers to reduce browser fingerprintability. While countermeasures aim to hinder tracking by changing or blocking attributes, they may easily introduce subtle side-effects that make browsers more identifiable, rendering the extensions counterproductive. FP-TESTER reports on the side-effects introduced by the countermeasure, as well as how they impact tracking duration from a fingerprinter's point of view. To the best of our knowledge, FP-TESTER is the first tool to assist developers in fighting browser fingerprinting and reducing the exposure of end-users to such privacy leaks.
Vastel, A., Laperdrix, P., Rudametkin, W., Rouvoy, R..  2018.  FP-STALKER: Tracking Browser Fingerprint Evolutions. 2018 IEEE Symposium on Security and Privacy (SP). :728–741.
Browser fingerprinting has emerged as a technique to track users without their consent. Unlike cookies, fingerprinting is a stateless technique that does not store any information on devices, but instead exploits unique combinations of attributes handed over freely by browsers. The uniqueness of fingerprints allows them to be used for identification. However, browser fingerprints change over time and the effectiveness of tracking users over longer durations has not been properly addressed. In this paper, we show that browser fingerprints tend to change frequently, from every few hours to days, due to, for example, software updates or configuration changes. Yet, despite these frequent changes, we show that browser fingerprints can still be linked, thus enabling long-term tracking. FP-STALKER is an approach to link browser fingerprint evolutions. It compares fingerprints to determine if they originate from the same browser. We created two variants of FP-STALKER, a rule-based variant that is faster, and a hybrid variant that exploits machine learning to boost accuracy. To evaluate FP-STALKER, we conduct an empirical study using 98,598 fingerprints we collected from 1,905 distinct browser instances. We compare our algorithm with the state of the art and show that, on average, we can track browsers for 54.48 days, and 26% of browsers can be tracked for more than 100 days.
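The rule-based linking variant is not specified in the abstract. Purely as an illustration (the attribute names and threshold below are hypothetical, not FP-STALKER's actual rules), a linker of this kind can count matches among relatively stable attributes and link two fingerprints when enough of them agree:

```python
def same_browser(fp_a: dict, fp_b: dict, min_matches: int = 3) -> bool:
    """Rule-based fingerprint linking sketch: count matching values
    among relatively stable attributes; link when enough agree."""
    stable = ["os", "browser", "timezone", "screen_resolution", "language"]
    matches = sum(1 for attr in stable if fp_a.get(attr) == fp_b.get(attr))
    return matches >= min_matches

old = {"os": "Linux", "browser": "Firefox 60", "timezone": "UTC+1",
       "screen_resolution": "1920x1080", "language": "en"}
new = dict(old, browser="Firefox 61")  # fingerprint drifted after an update
```

In this toy model, a browser update changes one attribute but the remaining four still match, so the two fingerprints are linked and tracking continues across the change.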
2019-02-18
Wang, G., Wang, B., Wang, T., Nika, A., Zheng, H., Zhao, B. Y..  2018.  Ghost Riders: Sybil Attacks on Crowdsourced Mobile Mapping Services. IEEE/ACM Transactions on Networking. 26:1123–1136.
Real-time crowdsourced maps, such as Waze, provide timely updates on traffic, congestion, accidents, and points of interest. In this paper, we demonstrate how the lack of strong location authentication allows the creation of software-based Sybil devices that expose crowdsourced map systems to a variety of security and privacy attacks. Our experiments show that a single Sybil device with limited resources can cause havoc on Waze, reporting false congestion and accidents and automatically rerouting user traffic. More importantly, we describe techniques to generate Sybil devices at scale, creating armies of virtual vehicles capable of remotely tracking precise movements for large user populations while avoiding detection. To defend against Sybil devices, we propose a new approach based on co-location edges, authenticated records that attest to the one-time physical co-location of a pair of devices. Over time, co-location edges combine to form large proximity graphs that attest to physical interactions between devices, allowing scalable detection of virtual vehicles. We demonstrate the efficacy of this approach using large-scale simulations, and how they can be used to dramatically reduce the impact of the attacks. We have informed the Waze/Google team of our research findings. Currently, we are in active collaboration with the Waze team to improve the security and privacy of their system.
2019-03-18
Elsden, Chris, Nissen, Bettina, Jabbar, Karim, Talhouk, Reem, Lustig, Caitlin, Dunphy, Paul, Speed, Chris, Vines, John.  2018.  HCI for Blockchain: Studying, Designing, Critiquing and Envisioning Distributed Ledger Technologies. Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. :W28:1–W28:8.
This workshop aims to develop an agenda within the CHI community to address the emergence of blockchain, or distributed ledger technologies (DLTs). As blockchains emerge as a general purpose technology, with applications well beyond cryptocurrencies, DLTs present exciting challenges and opportunities for developing new ways for people and things to transact, collaborate, organize and identify themselves. Requiring interdisciplinary skills and thinking, the field of HCI is well placed to contribute to the research and development of this technology. This workshop will build a community for human-centred researchers and practitioners to present studies, critiques, design-led work, and visions of blockchain applications.
2019-11-11
Tesfay, Welderufael B., Hofmann, Peter, Nakamura, Toru, Kiyomoto, Shinsaku, Serna, Jetzabel.  2018.  I Read but Don't Agree: Privacy Policy Benchmarking Using Machine Learning and the EU GDPR. Companion Proceedings of The Web Conference 2018. :163–166.
With the continuing growth of the Internet landscape, users share large amounts of personal, sometimes privacy-sensitive, data. When doing so, users often have little or no clear knowledge about what service providers do with the trails of personal data they leave on the Internet. While regulations impose rather strict requirements that service providers should abide by, the de facto approach seems to be communicating data processing practices through privacy policies. However, privacy policies are long and complex for users to read and understand, thus failing in their very objective of informing users about the promised data processing behaviors of service providers. To address this pertinent issue, we propose a machine learning based approach to summarize rather long privacy policies into short and condensed notes, following a risk-based approach and using European Union (EU) General Data Protection Regulation (GDPR) aspects as assessment criteria. The results are promising and indicate that our tool can summarize lengthy privacy policies in a short period of time, thus supporting users in making informed decisions regarding their information disclosure behaviors.
2020-07-24
Zhang, Leyou, Liang, Pengfei, Mu, Yi.  2018.  Improving Privacy-Preserving and Security for Decentralized Key-Policy Attributed-Based Encryption. IEEE Access. 6:12736–12745.
Decentralized attribute-based encryption (ABE) is an efficient and flexible multi-authority attribute-based encryption system, since it does not require a central authority and does not require cooperation among the authorities for creating public parameters. Unfortunately, recent works show that the privacy-preserving and security claims of almost all well-known decentralized key-policy ABE (KP-ABE) schemes are doubtful. How to construct a decentralized KP-ABE with privacy preservation and user collusion avoidance is still a challenging problem. Most recently, Y. Rahulamathavam et al. proposed a decentralized KP-ABE scheme to try to avoid user collusion and preserve the user's privacy. However, in this paper we first exploit a vulnerability in their scheme and present a collusion attack on their decentralized KP-ABE scheme. The attack shows that user collusion cannot be avoided. Subsequently, a new privacy-preserving decentralized KP-ABE is proposed. The proposed scheme resists existing linear attacks and achieves user collusion avoidance. We also show that the security of the proposed scheme reduces to the decisional bilinear Diffie-Hellman assumption. Finally, numerical experiments demonstrate the efficiency and validity of the proposed scheme.
2020-09-28
Gao, Meng-Qi, Han, Jian-Min, Lu, Jian-Feng, Peng, Hao, Hu, Zhao-Long.  2018.  Incentive Mechanism for User Collaboration on Trajectory Privacy Preservation. 2018 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computing, Scalable Computing Communications, Cloud Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). :1976–1981.
The collaborative trajectory privacy preservation (CTPP) scheme is an effective method for continuous queries. However, collaborating with other users incurs some cost. Therefore, some rational and selfish users will not choose collaboration, which will result in users' privacy being disclosed. To solve this problem, this paper proposes a collaboration incentive mechanism that rewards collaborative users and punishes non-collaborative users. The paper models the interactions of users participating in CTPP as a repeated game and analyzes the utility of participating users. The analytical results show that CTPP with the proposed incentive mechanism can maximize users' payoffs. Experiments show that the proposed mechanism can effectively encourage users' collaboration behavior and effectively preserve the trajectory privacy of continuous query users.
2019-02-14
Oohama, Y., Santoso, B..  2018.  Information Theoretical Analysis of Side-Channel Attacks to the Shannon Cipher System. 2018 IEEE International Symposium on Information Theory (ISIT). :581–585.
We study side-channel attacks on the Shannon cipher system. To pose side-channel attacks on the Shannon cipher system, we regard them as signal estimation via encoded data from two distributed sensors. This can be formulated as the one-helper source coding problem posed and investigated by Ahlswede and Körner (1975) and Wyner (1975). We further investigate the posed problem to derive new secrecy bounds. Our results are derived by coupling the result Watanabe and Oohama (2012) obtained on a bounded-storage eavesdropper with the exponential strong converse theorem Oohama (2015) established for the one-helper source coding problem.
2019-11-11
Pierce, James, Fox, Sarah, Merrill, Nick, Wong, Richmond, DiSalvo, Carl.  2018.  An Interface Without A User: An Exploratory Design Study of Online Privacy Policies and Digital Legalese. Proceedings of the 2018 Designing Interactive Systems Conference. :1345–1358.
Privacy policies are critical to understanding one's rights on online platforms, yet few users read them. In this pictorial, we approach this as a systemic issue that is in part a failure of interaction design. We provided a variety of people with printed packets of privacy policies, aiming to tease out this form's capabilities and limitations as a design interface, to understand people's perceptions and uses, and to critically imagine pragmatic revisions and creative alternatives to existing privacy policies.
2019-03-06
El Haourani, Lamia, Elkalam, Anas Abou, Ouahman, Abdelah Ait.  2018.  Knowledge Based Access Control a Model for Security and Privacy in the Big Data. Proceedings of the 3rd International Conference on Smart City Applications. :16:1–16:8.
The most popular features of Big Data revolve around the so-called "3V" criteria: Volume, Variety and Velocity. Big Data is based on the massive collection and in-depth analysis of personal data, with a view to profiling, or even marketing and commercialization, thus violating citizens' privacy and the security of their data. In this article we discuss security and privacy solutions in the context of Big Data. We then focus on access control and present our new model called Knowledge-Based Access Control (KBAC); this strengthens the access control already deployed in the target company (e.g., based on roles as in "RBAC" or attributes as in "ABAC") by adding a semantic access control layer. KBAC offers finer-grained access control, tailored to Big Data, with effective protection against intrusion attempts and unauthorized data inferences.
2020-04-20
Lim, Yeon-sup, Srivatsa, Mudhakar, Chakraborty, Supriyo, Taylor, Ian.  2018.  Learning Light-Weight Edge-Deployable Privacy Models. 2018 IEEE International Conference on Big Data (Big Data). :1290–1295.
Privacy has become one of the important issues in data-driven applications. The advent of non-PC devices, such as Internet-of-Things (IoT) devices, for data-driven applications creates a need for lightweight data anonymization. In this paper, we develop an anonymization framework that expedites model learning in parallel and generates deployable models for devices with low computing capability. We evaluate our framework with various settings, such as different data schemas and characteristics. Our results show that our framework learns anonymization models up to 16 times faster than a sequential anonymization approach and that it preserves enough information in anonymized data for data-driven applications.
2019-08-05
Kita, Kentaro, Kurihara, Yoshiki, Koizumi, Yuki, Hasegawa, Toru.  2018.  Location Privacy Protection with a Semi-honest Anonymizer in Information Centric Networking. Proceedings of the 5th ACM Conference on Information-Centric Networking. :95–105.
Location-based services, which provide services based on locations of consumers' interests, are becoming essential for our daily lives. Since the location of a consumer's interest contains private information, several studies propose location privacy protection mechanisms using an anonymizer, which sends queries specifying anonymous location sets, each of which contains k - 1 locations in addition to a location of a consumer's interest, to an LBS provider based on the k-anonymity principle. The anonymizer is, however, assumed to be trusted/honest, and hence it is a single point of failure in terms of privacy leakage. To address this privacy issue, this paper designs a semi-honest anonymizer to protect location privacy in NDN networks. This study first reveals that session anonymity and location anonymity must be achieved to protect location privacy with a semi-honest anonymizer. Session anonymity is to hide who specifies which anonymous location set and location anonymity is to hide a location of a consumer's interest in a crowd of locations. We next design an architecture to achieve session anonymity and an algorithm to generate anonymous location sets achieving location anonymity. Our evaluations show that the architecture incurs marginal overhead to achieve session anonymity and anonymous location sets generated by the algorithm sufficiently achieve location anonymity.
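As a rough illustration of the k-anonymity principle the paper builds on (not the authors' generation algorithm, which additionally optimizes for location anonymity), an anonymizer can pad a query with k-1 dummy locations drawn from a pool of plausible ones, shuffled so the real location is not distinguishable by position:

```python
import random

def anonymous_location_set(real_location, location_pool, k=5, seed=None):
    """Build a k-anonymous set: the real location plus k-1 distinct
    dummies drawn from a pool of plausible locations, in random order."""
    rng = random.Random(seed)
    candidates = [loc for loc in location_pool if loc != real_location]
    locations = rng.sample(candidates, k - 1) + [real_location]
    rng.shuffle(locations)
    return locations

query_set = anonymous_location_set("cafe", [f"loc{i}" for i in range(10)], k=5, seed=7)
```

The LBS provider answers for all k locations and cannot tell which one the consumer actually cares about; the paper's contribution is making this work even when the anonymizer itself is only semi-honest.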
2019-09-04
Maltitz, M. von, Smarzly, S., Kinkelin, H., Carle, G..  2018.  A management framework for secure multiparty computation in dynamic environments. NOMS 2018 - 2018 IEEE/IFIP Network Operations and Management Symposium. :1–7.
Secure multiparty computation (SMC) is a promising technology for privacy-preserving collaborative computation. In the last years, several feasibility studies have shown its practical applicability in different fields. However, it is recognized that the administration and management overhead of SMC solutions is still a problem. A vital next step is the incorporation of SMC in the emerging fields of the Internet of Things and (smart) dynamic environments. In these settings, the properties of these contexts make the utilization of SMC even more challenging, since some vital premises for its application regarding environmental stability and preliminary configuration are not initially fulfilled. We bridge this gap by providing FlexSMC, a management and orchestration framework for SMC which supports the discovery of nodes, supports trust establishment between them, and realizes robustness of SMC sessions by handling node failures and communication interruptions. The practical evaluation of FlexSMC shows that it enables the application of SMC in dynamic environments with reasonable performance penalties and computation durations, allowing soft real-time and interactive use cases.
2019-08-05
Papernot, Nicolas.  2018.  A Marauder's Map of Security and Privacy in Machine Learning: An Overview of Current and Future Research Directions for Making Machine Learning Secure and Private. Proceedings of the 11th ACM Workshop on Artificial Intelligence and Security. :1–1.
There is growing recognition that machine learning (ML) exposes new security and privacy vulnerabilities in software systems, yet the technical community's understanding of the nature and extent of these vulnerabilities remains limited but is expanding. In this talk, we explore the threat model space of ML algorithms through the lens of Saltzer and Schroeder's principles for the design of secure computer systems. This characterization of the threat space prompts an investigation of current and future research directions. We structure our discussion around three of these directions, which we believe are likely to lead to significant progress. The first seeks to design mechanisms for assembling reliable records of compromise that would help understand the degree to which vulnerabilities are exploited by adversaries, as well as favor psychological acceptability of machine learning applications. The second encompasses a spectrum of approaches to input verification and mediation, which is a prerequisite to enable fail-safe defaults in machine learning systems. The third pursues formal frameworks for security and privacy in machine learning, which we argue should strive to align machine learning goals such as generalization with security and privacy desiderata like robustness or privacy. Key insights resulting from these three directions pursued both in the ML and security communities are identified and the effectiveness of approaches are related to structural elements of ML algorithms and the data used to train them. We conclude by systematizing best practices in our growing community.
2020-09-28
Han, Xu, Liu, Yanheng, Wang, Jian.  2018.  Modeling and analyzing privacy-awareness social behavior network. IEEE INFOCOM 2018 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :7–12.
The increasingly networked human society requires that human beings have a clear understanding of and control over the structure, nature and behavior of various social networks. There is a tendency towards privacy in the study of network evolution because privacy disclosure behavior in the network has gradually developed into a serious concern. For this purpose, we extended information theory and proposed a brand-new concept of so-called “habitual privacy” to quantitatively analyze privacy exposure behavior and facilitate privacy computation. We emphasize that habitual privacy is an inherent property of the user and is correlated with their habitual behaviors. The widely approved driving force in recent modeling of complex networks originates from activity. Thus, we propose the privacy-driven model by synthetically considering the impact of activity and the habitual privacy underlying the decision process. The privacy-driven model helps to more accurately capture highly dynamic network behaviors and figure out the complex evolution process, allowing a profound understanding of the evolution of networks driven by privacy.
2019-05-08
Chen, Quan, Kapravelos, Alexandros.  2018.  Mystique: Uncovering Information Leakage from Browser Extensions. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :1687–1700.
Browser extensions are small JavaScript, CSS and HTML programs that run inside the browser with special privileges. These programs, often written by third parties, operate on the pages that the browser is visiting, giving the user a programmatic way to configure the browser. The privacy implications that arise by allowing privileged third-party code to execute inside the users' browser are not well understood. In this paper, we develop a taint analysis framework for browser extensions and use it to perform a large scale study of extensions in regard to their privacy practices. We first present a hybrid approach to traditional taint analysis: by leveraging the fact that extension source code is available to the runtime JavaScript engine, we implement as well as enhance traditional taint analysis using information gathered from static data flow and control-flow analysis of the JavaScript source code. Based on this, we further modify the Chromium browser to support taint tracking for extensions. We analyzed 178,893 extensions crawled from the Chrome Web Store between September 2016 and March 2018, as well as a separate set of all available extensions (2,790 in total) for the Opera browser at the time of analysis. From these, our analysis flagged 3,868 (2.13%) extensions as potentially leaking privacy-sensitive information. The top 10 most popular Chrome extensions that we confirmed to be leaking privacy-sensitive information have more than 60 million users combined. We ran the analysis on a local Kubernetes cluster and were able to finish within a month, demonstrating the feasibility of our approach for large-scale analysis of browser extensions. At the same time, our results emphasize the threat browser extensions pose to user privacy, and the need for countermeasures to safeguard against misbehaving extensions that abuse their privileges.
2020-07-24
Khuntia, Sucharita, Kumar, P. Syam.  2018.  New Hidden Policy CP-ABE for Big Data Access Control with Privacy-preserving Policy in Cloud Computing. 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–7.
The cloud offers flexible and cost-effective storage for big data, but a major challenge is access control over big data processing. CP-ABE is a desirable solution for data access control in the cloud. However, in CP-ABE the access policy may leak a user's private information. To address this issue, hidden-policy CP-ABE schemes have been proposed, but those schemes still cause data leakage because the access policies are only partially hidden, and they create more computational cost. In this paper, we propose a New Hidden Policy Ciphertext Policy Attribute Based Encryption (HP-CP-ABE) scheme to ensure big data access control with a privacy-preserving policy in the cloud. In the proposed method, we use a Multi Secret Sharing Scheme (MSSS) to reduce the computational overhead of the encryption and decryption processes. We also apply a mask technique to each attribute in the access policy and embed the access policy in the ciphertext, to protect the user's private information in the access policy. The security analysis shows that HP-CP-ABE is more secure and preserves access policy privacy. Performance evaluation shows that our scheme incurs less computational cost than the existing scheme.
2020-04-20
Liu, Kai-Cheng, Kuo, Chuan-Wei, Liao, Wen-Chiuan, Wang, Pang-Chieh.  2018.  Optimized Data de-Identification Using Multidimensional k-Anonymity. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :1610–1614.
In the globalized knowledge economy, big data analytics has been widely applied in diverse areas. A critical issue in big data analysis of personal information is the possible leak of personal privacy. Therefore, it is necessary to have an anonymization-based de-identification method to avoid undesirable privacy leaks. Such a method can prevent published data from being traced back to personal privacy. Prior empirical research has provided approaches to reduce privacy leak risk, e.g., Maximum Distance to Average Vector (MDAV), the Condensation Approach and Differential Privacy. However, previous methods inevitably generate synthetic data of different sizes and are thus unsuitable for general use. To satisfy the need for general use, k-anonymity can be chosen as the privacy protection mechanism in the de-identification process to ensure the data are not distorted, because k-anonymity is strong in both protecting privacy and preserving data authenticity. Accordingly, this study proposes an optimized multidimensional method for anonymizing data based on both a priority weight-adjusted method and a mean difference recommending tree method (MDR tree method). The results of this study reveal that the new method generates more reliable anonymous data and reduces the information loss rate.
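For readers unfamiliar with the k-anonymity property this method relies on: a dataset is k-anonymous when every combination of quasi-identifier values occurs in at least k records. A minimal check (illustrative only, not the paper's MDR tree method) might look like:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """A dataset is k-anonymous w.r.t. the quasi-identifiers if every
    combination of their values appears in at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Generalized records: exact ages and ZIP codes replaced by ranges/prefixes.
published = [
    {"age": "20-30", "zip": "123**", "diagnosis": "flu"},
    {"age": "20-30", "zip": "123**", "diagnosis": "cold"},
    {"age": "30-40", "zip": "456**", "diagnosis": "flu"},
    {"age": "30-40", "zip": "456**", "diagnosis": "asthma"},
]
```

Methods like the one above differ mainly in *how* they generalize values to reach a given k while losing as little information as possible, which is exactly the trade-off the paper's multidimensional approach optimizes.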
2020-09-28
Park, Seok-Hwan, Simeone, Osvaldo, Shamai Shitz, Shlomo.  2018.  Optimizing Spectrum Pooling for Multi-Tenant C-RAN Under Privacy Constraints. 2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC). :1–5.
This work studies the optimization of spectrum pooling for the downlink of a multi-tenant Cloud Radio Access Network (C-RAN) system in the presence of inter-tenant privacy constraints. The spectrum available for downlink transmission is partitioned into private and shared subbands, and the participating operators cooperate to serve the user equipments (UEs) on the shared subband. The network of each operator consists of a cloud processor (CP) that is connected to proprietary radio units (RUs) by means of finite-capacity fronthaul links. In order to enable inter-operator cooperation, the CPs of the participating operators are also connected by finite-capacity backhaul links. Inter-operator cooperation may hence result in loss of privacy. The problem of optimizing the bandwidth allocation, precoding, and fronthaul/backhaul compression strategies is tackled under constraints on backhaul and fronthaul capacity, as well as on per-RU transmit power and inter-operator privacy.
2019-11-11
Kunihiro, Noboru, Lu, Wen-jie, Nishide, Takashi, Sakuma, Jun.  2018.  Outsourced Private Function Evaluation with Privacy Policy Enforcement. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :412–423.
We propose a novel framework for outsourced private function evaluation with privacy policy enforcement (OPFE-PPE). Suppose an evaluator evaluates a function with private data contributed by a data contributor, and a client obtains the result of the evaluation. OPFE-PPE enables a data contributor to enforce two different kinds of privacy policies on the process of function evaluation: an evaluator policy and a client policy. An evaluator policy restricts the entities that can conduct function evaluation with the data. A client policy restricts the entities that can obtain the result of function evaluation. We demonstrate our construction with three applications: personalized medication, genetic epidemiology, and prediction by machine learning. Experimental results show that the overhead caused by enforcing the two privacy policies is less than 10% compared to function evaluation by homomorphic encryption without any privacy policy enforcement.
Al-Hasnawi, Abduljaleel, Mohammed, Ihab, Al-Gburi, Ahmed.  2018.  Performance Evaluation of the Policy Enforcement Fog Module for Protecting Privacy of IoT Data. 2018 IEEE International Conference on Electro/Information Technology (EIT). :0951–0957.
The rapid development of the Internet of Things (IoT) results in generating massive amounts of data. Significant portions of these data are sensitive since they reflect (directly or indirectly) people's behaviors, interests, lifestyles, etc. Protecting sensitive IoT data from privacy violations is a challenge since these data need to be communicated, processed, analyzed, and stored by public networks, servers, and clouds, most of which are untrusted parties for data owners. We propose a solution for protecting sensitive IoT data called the Policy Enforcement Fog Module (PEFM). The major task of the PEFM solution is mandatory enforcement of privacy policies for sensitive IoT data, wherever these data are accessed throughout their entire lifecycle. The key feature of PEFM is its placement within the fog computing infrastructure, which assures that PEFM operates as closely as possible to data sources within the edge. PEFM enforces policies directly for local IoT applications. In contrast, for remote applications, PEFM provides a self-protecting mechanism based on creating and disseminating Active Data Bundles (ADBs). ADBs are software constructs that inseparably bundle sensitive data, their privacy policies, and an execution engine able to enforce the privacy policies. To prove the effectiveness and efficiency of the proposed module, we developed a smart home proof-of-concept scenario. We investigate privacy threats for sensitive IoT data. We run simulation experiments, based on network calculus, testing the performance of the PEFM controls for different network configurations. The results of the simulation show that, even when using from 1 to 5 additional privacy policies for improved data privacy, the penalties in terms of execution time and delay are reasonable (approx. 12-15% and 13-19%, respectively). The results also show that PEFM is scalable regarding the number of real-time constraints for real-time IoT applications.
2019-05-01
Arefi, Meisam Navaki, Alexander, Geoffrey, Crandall, Jedidiah R..  2018.  PIITracker: Automatic Tracking of Personally Identifiable Information in Windows. Proceedings of the 11th European Workshop on Systems Security. :3:1–3:6.
Personally Identifiable Information (PII) is information that can be used on its own or with other information to distinguish or trace an individual's identity. To investigate an application for PII tracking, a reverse engineer has to put in considerable effort to reverse engineer the application and discover what it does with PII. To automate this process and save reverse engineers substantial time and effort, we propose PIITracker, a novel tool that can track PII automatically and capture whether any processes are sending PII over the network. This is made possible by (1) whole-system dynamic information flow tracking and (2) monitoring specific function and system calls. We analyzed 15 popular chat applications and browsers using PIITracker, and determined that 12 of these applications collect some form of PII.
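PIITracker works by whole-system information flow tracking, which follows the data itself rather than its shape. A far simpler (and much weaker) alternative, sketched here purely for contrast with hypothetical patterns, is flagging PII-like strings in an outbound payload by pattern matching:

```python
import re

# Hypothetical patterns; real information flow tracking does not rely
# on the data's surface form, which is what makes it robust to encoding.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_pii(payload: str):
    """Return the names of PII categories matched in an outbound payload."""
    return sorted(name for name, pat in PII_PATTERNS.items() if pat.search(payload))
```

Pattern matching misses PII that is hashed, encrypted, or re-encoded before transmission; taint-based tracking like PIITracker's catches those cases because the taint follows the value through every transformation.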