Biblio

Filters: Keyword is Quantifying privacy
Kunz, Immanuel, Schneider, Angelika, Banse, Christian.  2020.  Privacy Smells: Detecting Privacy Problems in Cloud Architectures. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :1324–1331.
Many organizations are still reluctant to move sensitive data to the cloud, and data protection regulations impose considerable penalties for violations of privacy and security requirements. Privacy, however, is a concept that is difficult to measure and to demonstrate. While many privacy design strategies, tactics, and patterns have been proposed for privacy-preserving system design, it is difficult to evaluate whether an existing system implements these strategies appropriately. In this paper, we propose indicators of a system's non-compliance with privacy design strategies, called privacy smells. To that end, we first identify concrete metrics that measure certain aspects of existing privacy design strategies. We then define smells based on these metrics and discuss their limitations and usefulness. We identify these indicators on two levels of a cloud system: the data flow level and the access control level. Using a cloud system built in Microsoft Azure, we show how the metrics can be measured technically and discuss the differences from other cloud providers, namely Amazon Web Services and Google Cloud Platform. We argue that while it is difficult to evaluate the privacy awareness of a cloud system overall, certain privacy aspects of cloud systems can be mapped to useful metrics that indicate underlying privacy problems. With this approach we aim to enable cloud users and auditors to detect deep-rooted privacy problems in cloud systems.
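The abstract does not disclose the paper's concrete metrics, but the metric-then-smell pipeline it describes can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the paper's actual definition: the access-breadth metric, the 0.5 threshold, and names such as Store and detect_smells are hypothetical stand-ins for an access-control-level check.

# Minimal sketch of a metric-based privacy smell detector.
# The metric, threshold, and model below are illustrative assumptions,
# not the concrete metrics defined in the paper.

from dataclasses import dataclass

@dataclass
class Store:
    name: str
    holds_personal_data: bool
    readers: set          # identities with read access to this store

def access_breadth(store: Store, all_identities: set) -> float:
    """Access-control-level metric: fraction of identities able to read the store."""
    return len(store.readers) / len(all_identities) if all_identities else 0.0

def detect_smells(stores, all_identities, threshold=0.5):
    """Flag stores that hold personal data yet are readable by a wide audience."""
    smells = []
    for s in stores:
        m = access_breadth(s, all_identities)
        if s.holds_personal_data and m > threshold:
            smells.append((s.name, m))
    return smells

if __name__ == "__main__":
    identities = {"svc-api", "svc-etl", "svc-analytics", "admin"}
    stores = [
        Store("customer-db", True, {"svc-api", "svc-etl", "svc-analytics"}),
        Store("metrics-db", False, identities),
    ]
    for name, m in detect_smells(stores, identities):
        print(f"privacy smell: {name} (access breadth {m:.2f})")

An auditor-facing tool in this spirit would populate the model from a provider's IAM and data-flow inventory (for example, Azure role assignments) rather than from hand-written fixtures; the smell itself is only an indicator that prompts closer inspection, not proof of a violation.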
Oya, Simon, Troncoso, Carmela, Pérez-González, Fernando.  2019.  Rethinking Location Privacy for Unknown Mobility Behaviors. 2019 IEEE European Symposium on Security and Privacy (EuroS&P). :416–431.
Location Privacy-Preserving Mechanisms (LPPMs) in the literature largely assume that the users' data available for training fully characterizes their mobility patterns. Thus, they hardwire this information into their designs and evaluate their privacy properties on these same data. In this paper, we aim to understand the impact of this decision on the level of privacy these LPPMs may offer in real life, when the users' mobility data may differ from the data used in the design phase. Our results show that, in many cases, training data does not capture users' behavior accurately and, thus, the level of privacy provided by the LPPM is often overestimated. To address this gap between theory and practice, we propose to use blank-slate models for LPPM design. Contrary to the hardwired approach, which assumes the users' behavior is known, blank-slate models learn the users' behavior from the queries to the service provider. We leverage this blank-slate approach to develop a new family of LPPMs, which we call Profile Estimation-Based LPPMs. Using real data, we empirically show that our proposal outperforms optimal state-of-the-art mechanisms designed on sporadic hardwired models. In non-sporadic location privacy scenarios, our method is better only if the location privacy service is not used continuously. It is our hope that eliminating the need to bootstrap the mechanisms with training data, and ensuring that the mechanisms are lightweight and easy to compute, will help foster the integration of location privacy protections into deployed systems.
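The abstract does not spell out the construction, but the blank-slate idea can be sketched as follows. All specifics here are illustrative assumptions rather than the paper's mechanism: the 5x5 grid, the exponential-mechanism-style noise, the symmetric Dirichlet prior, the remapping step, and names such as ProfileEstimationLPPM are hypothetical.

# Minimal sketch of a "blank-slate", profile-estimation-based LPPM.
# The grid world, noise, prior, and remapping are illustrative
# assumptions, not the paper's actual construction.

import math, random

SIDE = 5
LOCATIONS = list(range(SIDE * SIDE))   # toy world: cells of a 5x5 grid

def dist(a, b):
    ax, ay = divmod(a, SIDE)
    bx, by = divmod(b, SIDE)
    return abs(ax - bx) + abs(ay - by)  # Manhattan distance between cells

class ProfileEstimationLPPM:
    def __init__(self, epsilon=0.5):
        self.epsilon = epsilon
        # Blank slate: a symmetric Dirichlet prior (pseudo-count 1 per cell)
        # instead of a mobility profile hardwired from training data.
        self.counts = {loc: 1.0 for loc in LOCATIONS}

    def profile(self):
        """Current posterior-mean estimate of the user's location profile."""
        total = sum(self.counts.values())
        return {loc: c / total for loc, c in self.counts.items()}

    def release(self, true_loc):
        # Step 1: exponential-mechanism-style noise -- nearby cells likelier.
        weights = [math.exp(-self.epsilon * dist(true_loc, z)) for z in LOCATIONS]
        z = random.choices(LOCATIONS, weights=weights)[0]
        # Step 2: remap z using the current profile estimate. Remapping is
        # pure post-processing of the noisy sample, so it can improve utility
        # without weakening the noise added in step 1.
        prior = self.profile()
        post = {x: prior[x] * math.exp(-self.epsilon * dist(x, z)) for x in LOCATIONS}
        norm = sum(post.values())
        post = {x: p / norm for x, p in post.items()}
        out = min(LOCATIONS, key=lambda r: sum(post[x] * dist(x, r) for x in LOCATIONS))
        # Step 3: learn from the query actually sent to the provider,
        # refining the profile estimate as queries accumulate.
        self.counts[out] += 1.0
        return out

lppm = ProfileEstimationLPPM()
print([lppm.release(true_loc=12) for _ in range(5)])

The point of the sketch is the three-step loop: add noise, remap with the current profile estimate, then update that estimate from the released query itself, so no training data is ever required to bootstrap the mechanism.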