Biblio

Filters: Keyword is Privacy Policies
2021-10-12
Zaeem, Razieh Nokhbeh, Anya, Safa, Issa, Alex, Nimergood, Jake, Rogers, Isabelle, Shah, Vinay, Srivastava, Ayush, Barber, K. Suzanne.  2020.  PrivacyCheck's Machine Learning to Digest Privacy Policies: Competitor Analysis and Usage Patterns. 2020 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT). :291–298.
Online privacy policies are lengthy and hard to comprehend. To address this problem, researchers have utilized machine learning (ML) to devise tools that automatically summarize online privacy policies for web users. One such tool is our free and publicly available browser extension, PrivacyCheck. In this paper, we enhance PrivacyCheck by adding a competitor analysis component: a part of PrivacyCheck that recommends other organizations in the same market sector with better privacy policies. We also monitored the usage patterns of about a thousand actual PrivacyCheck users, the first work to track the usage and traffic of an ML-based privacy analysis tool. Results show: (1) a good number of privacy policy URLs are checked repeatedly by the user base; (2) users are particularly interested in the privacy policies of software services; and (3) PrivacyCheck increased the number of times a user consults privacy policies by 80%. Our work demonstrates the potential of ML-based privacy analysis tools and also sheds light on how these tools are used in practice to give users actionable knowledge they can use to proactively protect their privacy.
Chang, Kai Chih, Nokhbeh Zaeem, Razieh, Barber, K. Suzanne.  2020.  Is Your Phone You? How Privacy Policies of Mobile Apps Allow the Use of Your Personally Identifiable Information. 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :256–262.
People continue to store their sensitive information in their smartphone applications. Users seldom read an app's privacy policy to see how their information is being collected, used, and shared. In this paper, using a reference list of over 600 Personally Identifiable Information (PII) attributes, we investigate the privacy policies of 100 popular health and fitness mobile applications in both the Android and iOS app markets to find the set of personal information these apps collect, use, and share. The reference list of PII was independently built from a longitudinal study at The University of Texas investigating thousands of identity theft and fraud cases where PII attributes and associated value and risks were empirically quantified. This research leverages the reference PII list to identify and analyze the value of personal information collected by the mobile apps and the risk of disclosing this information. We found that the set of PII collected by these mobile apps covers 35% of the entire reference set of PII and, due to dependencies between PII attributes, these mobile apps have a likelihood of indirectly impacting 70% of the reference PII if breached. For a specific app, we discovered the monetary loss could reach $1M if the set of sensitive data it collects is breached. We finally utilize Bayesian inference to measure risks of a set of PII gathered by apps: the probability that fraudsters can discover, impersonate, and cause harm to the user by misusing only the PII the mobile apps collected.
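The abstract's Bayesian risk estimate can be illustrated with a noisy-OR model, a common simplification for Bayesian networks (the structure, attribute names, and probabilities below are assumptions for illustration, not the paper's actual model):

```python
def noisy_or(parents):
    """P(target attribute exposed) under a noisy-OR combination.

    parents: list of (p_exposed, p_leak) pairs, where p_exposed is the
    probability a collected parent PII attribute is breached and p_leak
    is the conditional probability it reveals the target attribute.
    Exposures are assumed to act independently.
    """
    p_safe = 1.0
    for p_exposed, p_leak in parents:
        p_safe *= 1.0 - p_exposed * p_leak
    return 1.0 - p_safe

# Hypothetical numbers: a breached email (90% exposed, reveals the target
# 50% of the time) and phone number (80% exposed, 25% leak) together
# expose, say, a home address with probability 0.56.
risk = noisy_or([(0.9, 0.5), (0.8, 0.25)])
```

With such a model, risk over a whole set of collected PII propagates through the dependency graph one attribute at a time.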
Farooq, Emmen, Nawaz Ul Ghani, M. Ahmad, Naseer, Zuhaib, Iqbal, Shaukat.  2020.  Privacy Policies' Readability Analysis of Contemporary Free Healthcare Apps. 2020 14th International Conference on Open Source Systems and Technologies (ICOSST). :1–7.
mHealth apps have a vital role in facilitating human health management. Users have to enter sensitive health-related information in these apps to fully utilize their functionality. Unauthorized sharing of sensitive health information is undesirable to users. mHealth apps also collect data beyond what is required for their functionality, such as a user's surfing behavior or the hardware details of the devices used. mHealth software and their developers also share such data with third parties for reasons other than providing medical support to the user, such as advertisements for medicine and health insurance plans. The existence of a comprehensive and easy-to-understand data privacy policy on user data acquisition, sharing, and management is a salient requirement of modern user privacy protection demands. Readability is one parameter by which the ease of understanding of a privacy policy is determined. In this research, the privacy policies of 27 free Android medical apps are analyzed. Apps having a user rating of 4.0 and downloads of 1 million or more are included in the data set of this research. Reading Grade Level (RGL), Flesch-Kincaid Reading Grade Level, SMOG, Gunning Fog, Word Count, and Flesch Reading Ease of the privacy policies are calculated. The average Reading Grade Level of the privacy policies is 8.5, slightly greater than the average adult RGL in the US. Free mHealth apps have a large number of users in other, less educated parts of the world. Privacy policies with an average RGL of 8.5 may be difficult to comprehend in less educated populations.
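The readability formulas named above are standard and easy to reproduce; a minimal sketch (the counts for the example excerpt are made up):

```python
def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Reading Grade Level from raw text counts."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease; higher scores mean easier text."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# A hypothetical policy excerpt with 100 words, 5 sentences, 150 syllables:
grade = flesch_kincaid_grade(100, 5, 150)   # about grade 9.9
ease = flesch_reading_ease(100, 5, 150)     # about 59.6 ("fairly difficult")
```

In practice, the hard part is the syllable counter and sentence splitter; published tools differ mainly in those heuristics, not in the formulas.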
Al Omar, Abdullah, Jamil, Abu Kaisar, Nur, Md. Shakhawath Hossain, Hasan, Md Mahamudul, Bosri, Rabeya, Bhuiyan, Md Zakirul Alam, Rahman, Mohammad Shahriar.  2020.  Towards A Transparent and Privacy-Preserving Healthcare Platform with Blockchain for Smart Cities. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :1291–1296.
In smart cities, data privacy and security issues of the Electronic Health Record (EHR) are gaining importance day by day, as cyber attackers have identified the weaknesses of EHR platforms. Besides, health insurance companies interacting with the EHRs play a vital role in covering the whole or a part of the financial risks of a patient. Insurance companies have specific policies for which patients have to pay. Sometimes the insurance policies can be altered by fraudulent entities. Another problem patients face in smart cities is that when they interact with a health organization, insurance company, or other party, they have to prove their identity to each organization separately. Health organizations and insurance companies have to ensure they know with whom they are interacting. To build a platform where a patient's personal information and insurance policy are handled securely, we introduce an application of blockchain to solve the above-mentioned issues. In this paper, we present a solution for the healthcare system that provides patient privacy and transparency towards insurance policies by incorporating blockchain. Privacy of the patient information is provided using cryptographic tools.
Jayabalan, Manoj.  2020.  Towards an Approach of Risk Analysis in Access Control. 2020 13th International Conference on Developments in eSystems Engineering (DeSE). :287–292.
Information security provides a set of mechanisms to be implemented in an organisation to protect data from disclosure to unauthorised persons. Access control is the primary security component that authorises users to consume resources and data based on predefined permissions. However, access rules are static in nature and do not adapt to dynamic environments, including but not limited to healthcare, cloud computing, IoT, national security and intelligence, and multi-centric systems. There is a need for an additional countermeasure in the access decision that can adapt to those working conditions, assessing threats to ensure privacy and security are maintained. Risk analysis is the act of measuring threats to the system through various means, such as analysing user behaviour, evaluating user trust, and examining security policies. It is a modular component that can be integrated into existing access control to predict risk. This study presents the different techniques and approaches applied for risk analysis in access control. Based on the insights gained, this paper formulates a taxonomy of risk analysis and its properties that will allow researchers to focus on areas that need to be improved and on new features that could be beneficial to stakeholders.
2021-06-02
Gohari, Parham, Hale, Matthew, Topcu, Ufuk.  2020.  Privacy-Preserving Policy Synthesis in Markov Decision Processes. 2020 59th IEEE Conference on Decision and Control (CDC). :6266—6271.
In decision-making problems, the actions of an agent may reveal sensitive information that drives its decisions. For instance, a corporation's investment decisions may reveal its sensitive knowledge about market dynamics. To prevent this type of information leakage, we introduce a policy synthesis algorithm that protects the privacy of the transition probabilities in a Markov decision process. We use differential privacy as the mathematical definition of privacy. The algorithm first perturbs the transition probabilities using a mechanism that provides differential privacy. Then, based on the privatized transition probabilities, we synthesize a policy using dynamic programming. Our main contribution is to bound the "cost of privacy," i.e., the difference between the expected total rewards with privacy and the expected total rewards without privacy. We also show that computing the cost of privacy has time complexity that is polynomial in the parameters of the problem. Moreover, we establish that the cost of privacy increases with the strength of differential privacy protections, and we quantify this increase. Finally, numerical experiments on two example environments validate the established relationship between the cost of privacy and the strength of data privacy protections.
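The paper's two-step recipe, first privatize the transition probabilities and then plan on the privatized model with dynamic programming, can be sketched as follows. The Laplace-noise-then-renormalize mechanism here is an illustrative simplification, not the paper's exact differentially private mechanism:

```python
import numpy as np

def privatize_transitions(P, epsilon, rng):
    """Perturb each transition row with Laplace noise of scale 1/epsilon,
    then clip and renormalize so each row is again a distribution. This is
    a simplified stand-in for a formally differentially private mechanism."""
    noisy = P + rng.laplace(scale=1.0 / epsilon, size=P.shape)
    noisy = np.clip(noisy, 1e-9, None)
    return noisy / noisy.sum(axis=-1, keepdims=True)

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Dynamic programming on the (privatized) MDP.
    P: (A, S, S) transition tensor, R: (A, S) rewards."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * (P @ V)        # (A, S) action values
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Tiny example: one action, two absorbing states, reward only in state 0.
rng = np.random.default_rng(0)
P = np.array([[[1.0, 0.0], [0.0, 1.0]]])
R = np.array([[1.0, 0.0]])
P_priv = privatize_transitions(P, epsilon=1.0, rng=rng)
V, policy = value_iteration(P_priv, R)
```

The "cost of privacy" the paper bounds is then the gap between the expected return of `policy` evaluated on the true `P` and the return of the policy computed without noise.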
2021-10-12
Adibi, Mahya, van der Woude, Jacob.  2020.  Distributed Learning Control for Economic Power Dispatch: A Privacy Preserved Approach*. 2020 IEEE 29th International Symposium on Industrial Electronics (ISIE). :821–826.
We present a privacy-preserving distributed reinforcement learning-based control scheme to address the problem of frequency control and economic dispatch in power generation systems. The proposed control approach requires neither a priori system model knowledge nor the mathematical formulation of the generation cost functions. Because it does not require the generation cost models, the control scheme is capable of dealing with scenarios in which the cost functions are hard to formulate and/or non-convex. Furthermore, it is privacy-preserving, i.e. none of the units in the network needs to communicate its cost function and/or control policy to its neighbors. To realize this, we propose an actor-critic algorithm with function approximation in which the actor step is performed individually by each unit with no need to infer the policies of others. Moreover, in the critic step each generation unit shares its estimate of the local measurements and the estimate of its cost function with the neighbors, and a consensual estimate is achieved via a consensus algorithm. The performance of our proposed control scheme, in terms of minimizing the overall cost while persistently fulfilling the demand, and the fast reaction and convergence of our distributed algorithm are demonstrated on a benchmark case study.
Suharsono, Teguh Nurhadi, Anggraini, Dini, Kuspriyanto, Rahardjo, Budi, Gunawan.  2020.  Implementation of Simple Verifiability Metric to Measure the Degree of Verifiability of E-Voting Protocol. 2020 14th International Conference on Telecommunication Systems, Services, and Applications (TSSA). :1–3.
Verifiability is one of the parameters of e-voting that can increase confidence in voting technology, with several parties ensuring that voters' ballots are not altered. Voting has become an important part of the democratic system: to make choices regarding policies, to elect representatives to sit in the representative assembly, and to elect leaders. The more voters there are and the wider their distribution, the more complex the process becomes; given the need to manage the voting process efficiently and determine results quickly, electronic voting (e-voting) is becoming a more promising option. The level of confidence in voting depends on the capabilities of the system. E-voting must have parameters that can be used as guidelines, which include the following: accuracy, invulnerability, privacy, and verifiability. By implementing the simple verifiability metric to measure the degree of verifiability of an e-voting protocol, the researchers can calculate the degree of verifiability of the protocol and assess the proposed e-voting protocol against the standard for the best degree of verifiability, which is 1, where a value of 1 denotes an absolutely verified protocol.
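A metric of the kind described, where 1 denotes an absolutely verified protocol, can be read as the fraction of verifiability requirements a protocol satisfies (an illustrative reconstruction; the requirement names are assumptions, and the paper's exact metric may differ):

```python
def degree_of_verifiability(checks, weights=None):
    """checks: dict mapping a verifiability requirement name to a bool
    (whether the protocol satisfies it). Returns a score in [0, 1];
    1.0 means every requirement is verified."""
    if weights is None:
        weights = {name: 1.0 for name in checks}
    total = sum(weights.values())
    satisfied = sum(weights[name] for name, ok in checks.items() if ok)
    return satisfied / total

# Hypothetical end-to-end verifiability requirements for a protocol:
score = degree_of_verifiability({
    "cast_as_intended": True,
    "recorded_as_cast": True,
    "counted_as_recorded": False,
})  # 2 of 3 requirements satisfied
```

Weights allow some requirements (e.g. universal verifiability) to count more heavily than others if the assessment criteria demand it.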
Ferraro, Angelo.  2020.  When AI Gossips. 2020 IEEE International Symposium on Technology and Society (ISTAS). :69–71.
The concept of AI Gossip is presented. It is analogous to the traditional understanding of a pernicious human failing, made more egregious by the technology of AI, the internet, and current privacy policies and practices. Recognition by the technological community of its complacency is critical to realizing its damaging influence on human rights. A current example from the medical field is provided to facilitate the discussion and illustrate the seriousness of AI Gossip. Further study and model development are encouraged to support and facilitate the need to develop standards that address the implications and consequences for human rights and dignity.
2021-01-28
Fan, M., Yu, L., Chen, S., Zhou, H., Luo, X., Li, S., Liu, Y., Liu, J., Liu, T..  2020.  An Empirical Evaluation of GDPR Compliance Violations in Android mHealth Apps. 2020 IEEE 31st International Symposium on Software Reliability Engineering (ISSRE). :253—264.
The purpose of the General Data Protection Regulation (GDPR) is to provide improved privacy protection. If an app controls personal data from users, it needs to be compliant with GDPR. However, GDPR lists general rules rather than exact step-by-step guidelines about how to develop an app that fulfills the requirements. Therefore, there may exist GDPR compliance violations in existing apps, which would pose severe privacy threats to app users. In this paper, we take mobile health applications (mHealth apps) as a peephole to examine the status quo of GDPR compliance in Android apps. We first propose an automated system, named HPDROID, to bridge the semantic gap between the general rules of GDPR and the app implementations by identifying the data practices declared in the app privacy policy and the data-relevant behaviors in the app code. Then, based on HPDROID, we detect three kinds of GDPR compliance violations, including the incompleteness of privacy policies, the inconsistency of data collections, and the insecurity of data transmission. We perform an empirical evaluation of 796 mHealth apps. The results reveal that 189 (23.7%) of them do not provide complete privacy policies. Moreover, 59 apps collect sensitive data through different measures, but 46 (77.9%) of them contain at least one inconsistent collection behavior. Even worse, among the 59 apps, only 8 apps try to ensure the transmission security of collected data. However, all of them contain at least one encryption or SSL misuse. Our work exposes severe privacy issues to raise awareness of privacy protection for app users and developers.
2020-04-03
Liau, David, Zaeem, Razieh Nokhbeh, Barber, K. Suzanne.  2019.  Evaluation Framework for Future Privacy Protection Systems: A Dynamic Identity Ecosystem Approach. 2019 17th International Conference on Privacy, Security and Trust (PST). :1—3.
In this paper, we leverage previous work on the Identity Ecosystem, a Bayesian network mathematical representation of a person's identity, to create a framework to evaluate identity protection systems. Information dynamics are considered, and a protection game is formed in which the owner and the attacker both gain some level of control over the status of other PII within the dynamic Identity Ecosystem. We present a policy iteration algorithm to solve the optimal policy for the game and discuss its convergence. Finally, an evaluation and comparison of identity protection strategies is provided, in which an optimal policy is used against different protection policies. This study aims to further understanding of the evolutionary process of identity theft and to provide a framework for evaluating different identity protection strategies and future privacy protection systems.
Bello-Ogunu, Emmanuel, Shehab, Mohamed, Miazi, Nazmus Sakib.  2019.  Privacy Is The Best Policy: A Framework for BLE Beacon Privacy Management. 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC). 1:823—832.
Bluetooth Low Energy (BLE) beacons are an emerging type of technology in the Internet-of-Things (IoT) realm, which use BLE signals to broadcast a unique identifier that is detected by a compatible device to determine the location of nearby users. Beacons can be used to provide a tailored user experience with each encounter, yet can also constitute an invasion of privacy, due to their covertness and ability to track user behavior. Therefore, we hypothesize that user-driven privacy policy configuration is key to enabling effective and trustworthy privacy management during beacon encounters. We developed a framework for beacon privacy management that provides a policy configuration platform. Through an empirical analysis with 90 users, we evaluated this framework through a proof-of-concept app called Beacon Privacy Manager (BPM), which focused on the user experience of such a tool. Using BPM, we provided users with the ability to create privacy policies for beacons, testing different configuration schemes to refine the framework and then offer recommendations for future research.
Werner, Jorge, Westphall, Carla Merkle, Vargas, André Azevedo, Westphall, Carlos Becker.  2019.  Privacy Policies Model in Access Control. 2019 IEEE International Systems Conference (SysCon). :1—8.
With the increasing advancement of services on the Internet, driven by the strengthening of cloud computing, the exchange of data between providers and users is intense. Access control management and applications need data to identify users and/or perform services in an automated and more practical way. Applications have to protect access to the data collected. However, users often provide data in cloud environments and do not know what was collected, or how and by whom the data will be used. Privacy of personal data has been a challenge for information security. This paper presents the development and use of a privacy policy strategy, i.e., we propose a privacy policy model and format to be integrated with the authorization task. An access control language and the preferences defined by the owner of the information were used to implement the proposals. The results showed that the strategy is feasible, guaranteeing users rights over their data.
Sadique, Farhan, Bakhshaliyev, Khalid, Springer, Jeff, Sengupta, Shamik.  2019.  A System Architecture of Cybersecurity Information Exchange with Privacy (CYBEX-P). 2019 IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC). :0493—0498.
Rapid evolution of cyber threats and recent trends in the increasing number of cyber-attacks call for adopting robust and agile cybersecurity techniques. Cybersecurity information sharing is expected to play an effective role in detecting and defending against new attacks. However, reservations and organizational policies centering on the privacy of shared data have become major setbacks to large-scale collaboration in cyber defense. The situation is worsened by the fact that the benefits of cyber-information exchange are not realized unless many actors participate. In this paper, we argue that privacy preservation of shared threat data will motivate entities to share threat data. Accordingly, we propose a framework called CYBersecurity information EXchange with Privacy (CYBEX-P) to achieve this. CYBEX-P is a structured information sharing platform with integrated privacy-preserving mechanisms. We propose a complete system architecture for CYBEX-P that guarantees maximum security and privacy of data. CYBEX-P outlines the details of a cybersecurity information sharing platform. The adoption of blind processing, privacy preservation, and trusted computing paradigms makes CYBEX-P a versatile and secure information exchange platform.
Renjan, Arya, Narayanan, Sandeep Nair, Joshi, Karuna Pande.  2019.  A Policy Based Framework for Privacy-Respecting Deep Packet Inspection of High Velocity Network Traffic. 2019 IEEE 5th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS). :47—52.
Deep Packet Inspection (DPI) is instrumental in investigating the presence of malicious activity in network traffic, and most existing DPI tools work on unencrypted payloads. As the internet is moving towards fully encrypted data transfer, there is a critical requirement for privacy-aware techniques to efficiently decrypt network payloads. Until recently, passive proxying using certain aspects of TLS 1.2 was used to perform decryption and further DPI analysis. With the introduction of the TLS 1.3 standard, which only supports protocols with Perfect Forward Secrecy (PFS), many such techniques will become ineffective. Several security solutions will be forced to adopt active proxying, which will become a big-data problem considering the velocity and veracity of the network traffic involved. We have developed an ABAC (Attribute-Based Access Control) framework that efficiently supports existing DPI tools while respecting users' privacy requirements and organizational policies. It gives the user the ability to accept or decline an access decision based on their privileges. Our solution evaluates various observed and derived attributes of network connections against user access privileges using policies described with semantic technologies. In this paper, we describe our framework and demonstrate the efficacy of our technique with the help of use-case scenarios to identify network connections that are candidates for Deep Packet Inspection. Since our technique makes selective identification of connections based on policies, both processing and memory load at the gateway will be reduced significantly.
Garigipati, Nagababu, Krishna, Reddy V.  2019.  A Study on Data Security and Query Privacy in Cloud. 2019 3rd International Conference on Trends in Electronics and Informatics (ICOEI). :337—341.
Many organizations need effective solutions to record and evaluate the existing enormous volume of information. Cloud computing as a facilitator offers scalable resources and noteworthy economic benefits in the form of decreased operational expenditures. However, this model raises a wide set of security and privacy problems that have to be taken into consideration. Multi-tenancy, loss of control, and trust are key issues in cloud computing environments. This paper considers the present technologies and a comprehensive assortment of both previous and state-of-the-art work on cloud security and privacy. The paradigm shift that accompanies the usage of cloud computing progressively demands augmented safety and privacy considerations linked with the different facets of cloud computing, like multi-tenancy, trust, loss of control, and accountability. Cloud platforms that handle big data containing sensitive information therefore need to use technical methods and organizational precautions to avoid data protection failures that might lead to vast and costly harm.
Fawaz, Kassem, Linden, Thomas, Harkous, Hamza.  2019.  Invited Paper: The Applications of Machine Learning in Privacy Notice and Choice. 2019 11th International Conference on Communication Systems Networks (COMSNETS). :118—124.
For more than two decades since the rise of the World Wide Web, the "Notice and Choice" framework has been the governing practice for the disclosure of online privacy practices. The emergence of new forms of user interactions, such as voice, and the enforcement of new regulations, such as the EU's recent General Data Protection Regulation (GDPR), promise to change this privacy landscape drastically. This paper discusses the challenges of providing the privacy stakeholders with privacy awareness and control in this changing landscape. We also present our recent research on utilizing machine learning to analyze privacy policies and settings.
Alom, Md. Zulfikar, Carminati, Barbara, Ferrari, Elena.  2019.  Adapting Users' Privacy Preferences in Smart Environments. 2019 IEEE International Congress on Internet of Things (ICIOT). :165—172.
A smart environment is a physical space where devices are connected to provide continuous support to individuals and make their life more comfortable. For this purpose, a smart environment collects, stores, and processes a massive amount of personal data. In general, service providers collect these data according to their privacy policies. To enhance privacy control, individuals can explicitly express their privacy preferences, stating conditions on how their data have to be used and managed. Typically, privacy checking is handled through the hard matching of users' privacy preferences against service providers' privacy policies, denying all service requests whose privacy policies do not fully match the individual's privacy preferences. However, this hard matching might be too restrictive in a smart environment because it denies services that partially satisfy the individual's privacy preferences. To cope with this challenge, in this paper, we propose a soft privacy matching mechanism, able to relax, in a controlled way, some conditions of users' privacy preferences so that they match service providers' privacy policies. To this end, we exploit machine learning algorithms to build a classifier that makes decisions on future service requests by learning which privacy preference components a user is prone to relax, as well as the relaxation tolerance. We test our approach on two realistic datasets, obtaining promising results.
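The soft-matching idea can be illustrated without the learned classifier: accept a provider whose policy mismatches only on preference components the user is willing to relax, up to a tolerance (the component names and structure are assumptions for illustration):

```python
def soft_match(preferences, policy, relaxable, tolerance=1):
    """Match every preference component against the provider policy, but
    permit up to `tolerance` mismatches on relaxable components.

    preferences / policy: dicts mapping component name -> value.
    relaxable: set of components the user is willing to relax.
    """
    relaxations = 0
    for component, wanted in preferences.items():
        if policy.get(component) == wanted:
            continue
        if component in relaxable:
            relaxations += 1          # tolerated mismatch
        else:
            return False              # non-relaxable mismatch: hard deny
    return relaxations <= tolerance

prefs = {"retention": "30d", "third_party_sharing": False, "purpose": "service"}
policy = {"retention": "90d", "third_party_sharing": False, "purpose": "service"}
ok = soft_match(prefs, policy, relaxable={"retention"})  # one relaxed mismatch
```

In the paper's setting, the `relaxable` set and `tolerance` are what the classifier learns from the user's past decisions rather than being fixed by hand.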
Lachner, Clemens, Rausch, Thomas, Dustdar, Schahram.  2019.  Context-Aware Enforcement of Privacy Policies in Edge Computing. 2019 IEEE International Congress on Big Data (BigDataCongress). :1—6.
Privacy is a fundamental concern that confronts systems dealing with sensitive data. The lack of robust solutions for defining and enforcing privacy measures continues to hinder the general acceptance and adoption of these systems. Edge computing has been recognized as a key enabler for privacy-enhanced applications, and has opened new opportunities. In this paper, we propose a novel privacy model based on context-aware edge computing. Our model leverages the context of data to make decisions about how these data need to be processed and managed to achieve privacy. Based on a scenario from the eHealth domain, we show how our generalized model can be used to implement and enact complex domain-specific privacy policies. We illustrate our approach by constructing real-world use cases involving a mobile Electronic Health Record that interacts with different systems in different environments.
Gerl, Armin, Becher, Stefan.  2019.  Policy-Based De-Identification Test Framework. 2019 IEEE World Congress on Services (SERVICES). 2642-939X:356—357.
Protecting the privacy of individuals is a basic right, which has to be considered in our data-centered society in which new technologies emerge rapidly. To preserve the privacy of individuals, de-identifying technologies have been developed, including pseudonymization, personal privacy anonymization, and privacy models. Each has several variations with different properties and contexts, which poses the challenge of properly selecting and applying de-identification methods. We tackle this challenge by proposing a policy-based de-identification test framework for a systematic approach to experimenting with and evaluating various combinations of methods and their interplay. Evaluation of the experimental results regarding performance and utility is considered within the framework. We propose a domain-specific language expressing the required complex configuration options, including the data set, policy generator, and various de-identification methods.
2020-01-21
Vo, Tri Hoang, Fuhrmann, Woldemar, Fischer-Hellmann, Klaus-Peter, Furnell, Steven.  2019.  Efficient Privacy-Preserving User Identity with Purpose-Based Encryption. 2019 International Symposium on Networks, Computers and Communications (ISNCC). :1–8.
In recent years, users may store their Personally Identifiable Information (PII) in the Cloud environment so that Cloud services may access and use it on demand. When users store personal data not on their local machines but in the Cloud, they may be interested in questions such as where their data are and who accesses them besides themselves. Even if Cloud services specify privacy policies, we cannot guarantee that they will follow their policies and will not transfer user data to another party. In the past 10 years, many efforts have been made to protect PII. They target certain issues but still have limitations. For instance, they require users to interact with the services over the frontend, they do not protect identity propagation between intermediaries or against an untrusted host, or they require Cloud services to accept a new protocol. In this paper, we propose a broader approach that covers all the above issues. We prove that our solution is efficient: the implementation can be easily adapted to existing Identity Management systems, and the performance is fast. Most importantly, our approach is compliant with the General Data Protection Regulation from the European Union.
2019-11-11
Barrett, Ayodele A., Matthee, Machdel.  2018.  A Critical Analysis of Informed Use of Context-aware Technologies. Proceedings of the Annual Conference of the South African Institute of Computer Scientists and Information Technologists. :126–134.
There is a move towards a future in which consumers of technology are untethered from their devices and technology recedes to the subconscious. One way of achieving this vision is with context-aware technologies, which smartphones exemplify. Key figures in the creation of modern technologies suggest that consumers are fully informed of the implications of the use of these technologies. Typically, privacy policy documents are used both to inform users of these technologies and to gain their consent on how their personal data will be used. This paper examines the opinions of African-based users of smartphones. There is also an examination of the privacy policy statement of a popular app, using Critical Discourse Analysis. The analysis reveals that consumers' concerns regarding absence of choice, lack of knowledge, and erosion of information privacy are not unfounded.
Martiny, Karsten, Denker, Grit.  2018.  Expiring Decisions for Stream-based Data Access in a Declarative Privacy Policy Framework. Proceedings of the 2Nd International Workshop on Multimedia Privacy and Security. :71–80.
This paper describes how a privacy policy framework can be extended with timing information, to decide not only whether requests for data are allowed at a given point in time, but also for how long such permission is granted. Augmenting policy decisions with expiration information eliminates the need to reason about access permissions prior to every individual data access operation. This facilitates the application of privacy policy frameworks to protecting multimedia streaming data, where repeated re-computations of policy decisions are not a viable option. We show how timing information can be integrated into an existing declarative privacy policy framework. In particular, we discuss how to obtain valid expiration information in the presence of complex sets of policies with potentially interacting policies and varying timing information.
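The core mechanism, caching each decision together with its expiration so that streaming accesses need not re-evaluate policies, can be sketched as follows (a minimal sketch; the paper's declarative framework derives the expiration from potentially interacting policies rather than from a single evaluator):

```python
import time

class ExpiringDecisionCache:
    """Cache access decisions with an expiry, so repeated stream reads
    need not re-evaluate the policy set on every operation."""

    def __init__(self, evaluate):
        # evaluate(request) -> (allowed: bool, valid_for: float seconds)
        self._evaluate = evaluate
        self._cache = {}

    def allowed(self, request, now=None):
        now = time.monotonic() if now is None else now
        entry = self._cache.get(request)
        if entry is None or entry[1] <= now:          # missing or expired
            decision, valid_for = self._evaluate(request)
            entry = (decision, now + valid_for)
            self._cache[request] = entry
        return entry[0]

calls = []
def evaluate(request):
    """Hypothetical policy evaluation: permit, valid for 60 seconds."""
    calls.append(request)
    return True, 60.0

cache = ExpiringDecisionCache(evaluate)
cache.allowed("stream-1", now=0.0)     # evaluates the policy once
cache.allowed("stream-1", now=30.0)    # served from cache
cache.allowed("stream-1", now=61.0)    # expired, so re-evaluates
```

With interacting policies, `valid_for` would be the minimum remaining validity across all policies that contributed to the decision.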
Tesfay, Welderufael B., Hofmann, Peter, Nakamura, Toru, Kiyomoto, Shinsaku, Serna, Jetzabel.  2018.  I Read but Don't Agree: Privacy Policy Benchmarking Using Machine Learning and the EU GDPR. Companion Proceedings of the The Web Conference 2018. :163–166.
With the continuing growth of the Internet landscape, users share large amounts of personal, sometimes privacy-sensitive, data. When doing so, users often have little or no clear knowledge about what service providers do with the trails of personal data they leave on the Internet. While regulations impose rather strict requirements that service providers should abide by, the de facto approach seems to be communicating data processing practices through privacy policies. However, privacy policies are long and complex for users to read and understand, thus failing in their objective of informing users about the promised data processing behaviors of service providers. To address this pertinent issue, we propose a machine learning based approach to summarize rather long privacy policies into short and condensed notes, following a risk-based approach and using the European Union (EU) General Data Protection Regulation (GDPR) aspects as assessment criteria. The results are promising and indicate that our tool can summarize lengthy privacy policies in a short period of time, thus supporting users in taking informed decisions regarding their information disclosure behaviors.
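A crude, non-ML stand-in for such a summarizer is keyword tagging of policy sentences against GDPR aspects; the aspect names and keyword lists below are illustrative assumptions, not the paper's trained model:

```python
import re

# Hypothetical GDPR aspects and trigger keywords for illustration only.
GDPR_ASPECTS = {
    "data_sharing": {"third party", "share", "disclose"},
    "data_retention": {"retain", "store", "keep"},
    "user_rights": {"access", "delete", "rectify", "erasure"},
}

def summarize_policy(text):
    """Tag each sentence with the GDPR aspects whose keywords it mentions,
    returning a condensed aspect -> sentences note."""
    notes = {aspect: [] for aspect in GDPR_ASPECTS}
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        lowered = sentence.lower()
        for aspect, keywords in GDPR_ASPECTS.items():
            if any(k in lowered for k in keywords):
                notes[aspect].append(sentence.strip())
    return {aspect: s for aspect, s in notes.items() if s}

policy = ("We may share your data with third party advertisers. "
          "We retain logs for 12 months. You may request erasure at any time.")
notes = summarize_policy(policy)
```

A learned model replaces the keyword sets with per-aspect classifiers and adds a risk score per aspect, but the output shape, short notes keyed by GDPR aspect, is the same.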
Pierce, James, Fox, Sarah, Merrill, Nick, Wong, Richmond, DiSalvo, Carl.  2018.  An Interface Without A User: An Exploratory Design Study of Online Privacy Policies and Digital Legalese. Proceedings of the 2018 Designing Interactive Systems Conference. :1345–1358.
Privacy policies are critical to understanding one's rights on online platforms, yet few users read them. In this pictorial, we approach this as a systemic issue that is in part a failure of interaction design. We provided a variety of people with printed packets of privacy policies, aiming to tease out this form's capabilities and limitations as a design interface, to understand people's perceptions and uses, and to critically imagine pragmatic revisions and creative alternatives to existing privacy policies.