Biblio

Filters: Keyword is inference attack
2018-09-28
Jung, Taebo, Jung, Kangsoo, Park, Sehwa, Park, Seog.  2017.  A noise parameter configuration technique to mitigate detour inference attack on differential privacy. 2017 IEEE International Conference on Big Data and Smart Computing (BigComp). :186–192.

Nowadays, data has become the core resource of the information society. However, with the development of data analysis techniques, privacy violations such as leakage of sensitive data and exposure of personal identities are also increasing. Differential privacy is a technique that satisfies the requirement that no additional information be disclosed beyond what can be learned from the database itself, and it is well known for protecting privacy against arbitrary attacks. However, recent research argues that there are several ways to infer sensitive information from data even when differential privacy is applied; one such inference method exploits correlations between data. In this paper, we investigate new privacy threats based on attribute correlation that are not covered by traditional studies, and we propose a privacy-preserving technique that configures differential privacy's noise parameter to counter this new threat. In experiments, we show the weaknesses of the traditional differential privacy method and validate that the proposed noise parameter configuration provides sufficient privacy protection while maintaining data utility.
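For context, the standard Laplace mechanism adds noise with scale Δf/ε to a query answer; the detour threat arises because correlated attributes let an attacker combine related noisy answers to narrow in on a sensitive value, so the noise parameter must be configured more conservatively. Below is a minimal sketch of that idea; the correlation factor k and its use to shrink the effective budget are illustrative assumptions, not the paper's exact configuration rule:

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """Standard Laplace mechanism: noise scale = sensitivity / epsilon."""
    return true_answer + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

def correlation_aware_answer(true_answer, sensitivity, epsilon, k):
    """Inflate the noise scale by k, a hypothetical count of attributes
    correlated with the sensitive one that an attacker could combine in a
    detour query. Equivalent to splitting the budget across the k paths."""
    return laplace_mechanism(true_answer, sensitivity, epsilon / k)

# Example: a count query with sensitivity 1, budget epsilon = 0.5,
# and 3 attributes correlated with the sensitive attribute.
noisy = correlation_aware_answer(true_answer=120, sensitivity=1.0,
                                 epsilon=0.5, k=3)
print(noisy)
```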

2017-06-05
Sonchack, John, Aviv, Adam J., Keller, Eric.  2016.  Timing SDN Control Planes to Infer Network Configurations. Proceedings of the 2016 ACM International Workshop on Security in Software Defined Networks & Network Function Virtualization. :19–22.

In this paper, we study information leakage by control planes of Software Defined Networks. We find that the response time of an OpenFlow control plane depends on its workload, and we develop an inference attack that an adversary with control of a single host could use to learn about network configurations without needing to compromise any network infrastructure (i.e. switches or controller servers). We also demonstrate that our inference attack works on real OpenFlow hardware. To our knowledge, no previous work has evaluated OpenFlow inference attacks outside of simulation.
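To illustrate the kind of timing side channel the paper exploits: the first packet of a new flow typically misses the switch's flow table and detours through the controller, so its latency reflects control-plane workload, while later packets hit the installed rule. Below is a minimal probe sketch from a single host; the destination, the assumption that rules match on destination, and TCP-connect probing are illustrative choices, not the paper's measurement method:

```python
import socket
import statistics
import time

def probe_rtt(dst_ip, dst_port, n=10, timeout=1.0):
    """Time n TCP connection attempts to the same destination. If flow
    rules match on destination, the first probe takes the slow path
    through the controller and later probes take the fast path; the gap
    between the two leaks information about control-plane load."""
    rtts = []
    for _ in range(n):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        start = time.perf_counter()
        try:
            s.connect((dst_ip, dst_port))
        except OSError:
            pass  # even a refused or dropped connection yields a timing sample
        rtts.append(time.perf_counter() - start)
        s.close()
    return rtts

rtts = probe_rtt("10.0.0.2", 80)  # hypothetical in-network host
print(f"first probe: {rtts[0] * 1e3:.2f} ms, "
      f"median of rest: {statistics.median(rtts[1:]) * 1e3:.2f} ms")
```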

2017-05-22
Zhu, Xue, Sun, Yuqing.  2016.  Differential Privacy for Collaborative Filtering Recommender Algorithm. Proceedings of the 2016 ACM International Workshop on Security And Privacy Analytics. :9–16.

Collaborative filtering plays an essential role in recommender systems, which recommend a list of items to a user by learning behavior patterns from the user rating matrix. However, if an attacker has auxiliary knowledge about a user's purchase history, he/she can infer more information about this user, which poses a serious threat to user privacy. Some methods adopt differential privacy in collaborative filtering by adding noise to the rating matrix. Although they provide theoretically private results, their influence on recommendation accuracy is not discussed. In this paper, we solve the privacy problem in recommender systems in a different way, by applying differential privacy within the recommendation procedure itself. We design two differentially private recommender algorithms with sampling, named Differentially Private Item-Based Recommendation with sampling (DP-IR for short) and Differentially Private User-Based Recommendation with sampling (DP-UR for short). Both algorithms are based on the exponential mechanism with a carefully designed quality function. Theoretical analyses of the privacy of these algorithms are presented. We also investigate the accuracy of the proposed methods and give theoretical results. Experiments are performed on real datasets to verify our methods.
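Both algorithms rest on the exponential mechanism, which selects an output with probability proportional to exp(ε·q / (2Δq)) for a quality function q with sensitivity Δq. Below is a minimal sketch of item selection under this mechanism; the similarity-score quality function and parameters are illustrative assumptions, not the paper's exact DP-IR design:

```python
import numpy as np

def exponential_mechanism(items, quality, epsilon, sensitivity=1.0, k=5):
    """Select k items with probability proportional to
    exp(epsilon * quality / (2 * sensitivity)) -- the standard exponential
    mechanism. `quality` stands in for the paper's quality function
    (here, hypothetical item similarity scores)."""
    scores = np.array([quality[i] for i in items], dtype=float)
    # Subtract the max before exponentiating for numerical stability;
    # this shifts all scores equally and leaves the distribution unchanged.
    weights = np.exp(epsilon * (scores - scores.max()) / (2.0 * sensitivity))
    probs = weights / weights.sum()
    rng = np.random.default_rng()
    return list(rng.choice(items, size=k, replace=False, p=probs))

# Example: recommend 3 of 6 candidate items from hypothetical scores.
quality = {"a": 0.9, "b": 0.7, "c": 0.6, "d": 0.3, "e": 0.2, "f": 0.1}
print(exponential_mechanism(list(quality), quality, epsilon=1.0, k=3))
```

Sampling the recommendation list directly, rather than perturbing the rating matrix, is what lets the privacy cost be paid at selection time while keeping the underlying similarity scores exact.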