Biblio

Filters: Keyword is mechanism design
2023-02-03
Halabi, Talal, Abusitta, Adel, Carvalho, Glaucio H. S., Fung, Benjamin C. M.  2022.  Incentivized Security-Aware Computation Offloading for Large-Scale Internet of Things Applications. 2022 7th International Conference on Smart and Sustainable Technologies (SpliTech). :1–6.

With billions of devices already connected to the network's edge, the Internet of Things (IoT) is shaping the future of pervasive computing. Nonetheless, IoT applications still cannot escape the need for the computing resources available at the fog layer. This becomes challenging since fog nodes are not necessarily secure or reliable, which widens the IoT threat surface even further. Moreover, the security risk appetite of heterogeneous IoT applications in different domains or deployment contexts should not be assessed in the same way. To respond to this challenge, this paper proposes a new approach to optimizing the allocation of secure and reliable fog computing resources among IoT applications with varying security risk levels. First, the security and reliability levels of fog nodes are quantitatively evaluated, and a security risk assessment methodology is defined for IoT services. Then, an online, incentive-compatible mechanism is designed to allocate secure fog resources to high-risk IoT offloading requests. Compared to the offline Vickrey auction, the proposed mechanism is computationally efficient and yields an acceptable approximation of the social welfare of the IoT devices, which helps attenuate security risk within the edge network.
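As a point of reference for the offline baseline mentioned in the abstract, the following is a minimal Python sketch of a single-item Vickrey (second-price) auction; the bidder names and values are hypothetical, and the paper's actual online, security-aware mechanism is not reproduced here.

def vickrey_auction(bids):
    # bids: mapping from bidder id to reported value for one secure fog slot.
    # The highest bidder wins and pays the second-highest bid, which makes
    # truthful bidding a dominant strategy in the single-item case.
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Hypothetical offloading requests bidding for one secure fog resource slot.
offloading_bids = {"iot_device_a": 4.0, "iot_device_b": 7.5, "iot_device_c": 6.1}
winner, price = vickrey_auction(offloading_bids)
print(winner, "wins the slot and pays", price)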

2020-09-28
Oya, Simon, Troncoso, Carmela, Pérez-González, Fernando.  2019.  Rethinking Location Privacy for Unknown Mobility Behaviors. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :416–431.
Location Privacy-Preserving Mechanisms (LPPMs) in the literature largely assume that the users' data available for training wholly characterizes their mobility patterns. Thus, they hardwire this information into their designs and evaluate their privacy properties on these same data. In this paper, we aim to understand the impact of this decision on the level of privacy these LPPMs may offer in real life, when the users' mobility data may differ from the data used in the design phase. Our results show that, in many cases, training data does not capture users' behavior accurately and, thus, the level of privacy provided by the LPPM is often overestimated. To address this gap between theory and practice, we propose to use blank-slate models for LPPM design. Contrary to the hardwired approach, which assumes the users' behavior is known, blank-slate models learn the users' behavior from the queries sent to the service provider. We leverage this blank-slate approach to develop a new family of LPPMs that we call Profile Estimation-Based LPPMs. Using real data, we empirically show that our proposal outperforms optimal state-of-the-art mechanisms designed on sporadic hardwired models. In non-sporadic location privacy scenarios, our method is better only if the usage of the location privacy service is not continuous. It is our hope that eliminating the need to bootstrap the mechanisms with training data, and ensuring that the mechanisms are lightweight and easy to compute, will help foster the integration of location privacy protections in deployed systems.
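To illustrate the blank-slate idea in isolation, here is a toy Python sketch that estimates a user's mobility profile from the queries observed so far rather than from hardwired training data; the region names and the smoothing constant are assumptions made for illustration, not the paper's Profile Estimation-Based LPPMs.

from collections import Counter

class BlankSlateProfile:
    # Start from a uniform prior over regions and refine the estimate with
    # every query the user actually sends to the service provider.
    def __init__(self, regions, smoothing=1.0):
        self.regions = list(regions)
        self.smoothing = smoothing
        self.counts = Counter()

    def update(self, queried_region):
        self.counts[queried_region] += 1  # learn from observed queries only

    def estimate(self):
        total = sum(self.counts.values()) + self.smoothing * len(self.regions)
        return {r: (self.counts[r] + self.smoothing) / total for r in self.regions}

profile = BlankSlateProfile(["home", "work", "gym", "cafe"])
for query in ["home", "work", "work", "cafe"]:
    profile.update(query)
print(profile.estimate())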
Kohli, Nitin, Laskowski, Paul.  2018.  Epsilon Voting: Mechanism Design for Parameter Selection in Differential Privacy. 2018 IEEE Symposium on Privacy-Aware Computing (PAC). :19–30.
The behavior of a differentially private system is governed by a parameter epsilon, which sets a balance between protecting the privacy of individuals and returning accurate results. While a system owner may use a number of heuristics to select epsilon, existing techniques may be unresponsive to the needs of the users whose data is at risk. A promising alternative is to allow users to express their preferences for epsilon. In a system we call epsilon voting, users report the parameter values they want to a chooser mechanism, which aggregates them into a single value. We apply techniques from mechanism design to ask whether such a chooser mechanism can itself be truthful, private, anonymous, and also responsive to users. Without imposing restrictions on user preferences, the only feasible mechanisms belong to a class we call randomized dictatorships with phantoms. This is a restrictive class in which at most one user has any effect on the chosen epsilon. On the other hand, when users exhibit single-peaked preferences, a broader class of mechanisms, ones that generalize the median and other order statistics, becomes possible.
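For the single-peaked case, the following is a small Python sketch of a generalized-median chooser in the spirit of the class described above: each user reports a preferred epsilon, fixed phantom values are mixed in, and the median of the combined list is chosen. The phantom values are arbitrary placeholders rather than the paper's construction; median-style mechanisms of this kind are known to be truthful when preferences are single-peaked.

import statistics

def choose_epsilon(reported_epsilons, phantoms=(0.1, 1.0, 10.0)):
    # Aggregate user reports and phantom votes, then take the median.
    combined = list(reported_epsilons) + list(phantoms)
    return statistics.median(combined)

print(choose_epsilon([0.5, 0.8, 2.0]))  # chosen epsilon for three hypothetical users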
2017-05-18
Wang, Weina, Ying, Lei, Zhang, Junshan.  2016.  The Value of Privacy: Strategic Data Subjects, Incentive Mechanisms and Fundamental Limits. Proceedings of the 2016 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Science. :249–260.

We study the value of data privacy in a game-theoretic model of trading private data, where a data collector purchases private data from strategic data subjects (individuals) through an incentive mechanism. The private data of each individual represents her knowledge about an underlying state, which is the information that the data collector desires to learn. Unlike most existing work on privacy-aware surveys, our model does not assume the data collector to be trustworthy. Instead, each individual retains full control of her own data privacy and reports only a privacy-preserving version of her data. In this paper, the value of ε units of privacy is measured by the minimum payment over all nonnegative payment mechanisms under which an individual's best response at a Nash equilibrium is to report the data with a privacy level of ε. The higher ε is, the less private the reported data is. We derive lower and upper bounds on the value of privacy which are asymptotically tight as the number of data subjects becomes large. Specifically, the lower bound shows that it is impossible to buy ε units of privacy with a smaller payment, and the upper bound is given by an achievable payment mechanism that we design. Based on these fundamental limits, we further derive lower and upper bounds on the minimum total payment for the data collector to achieve a given learning accuracy target, and show that the total payment of the designed mechanism is at most one individual's payment away from the minimum.
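As a rough illustration of what reporting data "with a privacy level of ε" can look like, here is a randomized-response sketch in Python for a binary data point together with a placeholder linear payment rule; the payment schedule is invented for illustration and is not the asymptotically tight mechanism derived in the paper.

import math
import random

def randomized_response(true_bit, eps):
    # Keep the true bit with probability e^eps / (1 + e^eps); higher eps means
    # a less private (more accurate) report, matching the abstract's convention.
    keep_prob = math.exp(eps) / (1.0 + math.exp(eps))
    return true_bit if random.random() < keep_prob else 1 - true_bit

def placeholder_payment(eps, rate=0.01):
    # Stand-in payment that grows with the units of privacy purchased.
    return rate * eps

for eps in (0.1, 1.0, 5.0):
    print(eps, randomized_response(1, eps), placeholder_payment(eps))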