Biblio

2018-01-10
Chen, Chen, Tong, Hanghang, Xie, Lei, Ying, Lei, He, Qing.  2017.  Cross-Dependency Inference in Multi-Layered Networks: A Collaborative Filtering Perspective. ACM Trans. Knowl. Discov. Data. 11:42:1–42:26.
The increasingly connected world has catalyzed the fusion of networks from different domains, which facilitates the emergence of a new network model: multi-layered networks. Examples of such network systems include critical infrastructure networks, biological systems, organization-level collaborations, cross-platform e-commerce, and so forth. One crucial structure that distinguishes multi-layered networks from other network models is their cross-layer dependency, which describes the associations between nodes from different layers. Needless to say, the cross-layer dependency in the network plays an essential role in many data mining applications, such as system robustness analysis and complex network control. However, it remains a daunting task to know the exact dependency relationships due to noise, limited accessibility, and so forth. In this article, we tackle the cross-layer dependency inference problem by modeling it as a collective collaborative filtering problem. Based on this idea, we propose an effective algorithm Fascinate that can reveal unobserved dependencies with linear complexity. Moreover, we derive Fascinate-ZERO, an online variant of Fascinate that can respond to a newly added node in a timely manner by checking its neighborhood dependencies. We perform extensive evaluations on real datasets to substantiate the superiority of our proposed approaches.
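
The abstract frames cross-layer dependency inference as a collaborative filtering problem. As a rough illustration of that framing only (not the paper's Fascinate algorithm), the sketch below factorizes a partially observed cross-layer dependency matrix into low-rank factors and uses them to predict the unobserved entries. The matrix sizes, rank, observation mask, and hyperparameters are all invented for the example.

```python
# Toy sketch of the collaborative-filtering view of cross-layer dependency inference.
# NOT the paper's Fascinate algorithm; a plain low-rank matrix-completion illustration.
import numpy as np

rng = np.random.default_rng(0)

n1, n2, r = 30, 40, 4                                # nodes in layer 1 / layer 2, latent rank (assumed)
D_true = rng.random((n1, r)) @ rng.random((r, n2))   # hypothetical "true" dependency strengths
observed = rng.random((n1, n2)) < 0.3                # only ~30% of dependencies are observed

# Low-rank factors, fit by gradient descent on the observed entries only
U = 0.1 * rng.standard_normal((n1, r))
V = 0.1 * rng.standard_normal((n2, r))
lam, lr = 0.1, 0.02                                  # ridge penalty and step size (illustrative)

for _ in range(1000):
    R = (U @ V.T - D_true) * observed                # residual on observed entries
    U -= lr * (R @ V + lam * U)
    V -= lr * (R.T @ U + lam * V)

D_hat = U @ V.T                                      # predicted dependencies, including unobserved ones
rmse = np.sqrt(((D_hat - D_true)[~observed] ** 2).mean())
print(f"RMSE on unobserved dependencies: {rmse:.3f}")
```

The low-rank assumption is what lets observed dependencies "explain" unobserved ones, which is the collaborative-filtering intuition the abstract appeals to; the paper's contribution lies in doing this collectively across layers with linear complexity, which this toy does not attempt.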
2017-05-18
Wang, Weina, Ying, Lei, Zhang, Junshan.  2016.  The Value of Privacy: Strategic Data Subjects, Incentive Mechanisms and Fundamental Limits. Proceedings of the 2016 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Science. :249–260.

We study the value of data privacy in a game-theoretic model of trading private data, where a data collector purchases private data from strategic data subjects (individuals) through an incentive mechanism. The private data of each individual represents her knowledge about an underlying state, which is the information that the data collector desires to learn. Unlike most existing work on privacy-aware surveys, our model does not assume the data collector to be trustworthy. Consequently, each individual retains full control of her own data privacy and reports only a privacy-preserving version of her data. In this paper, the value of ε units of privacy is measured by the minimum payment over all nonnegative payment mechanisms under which an individual's best response at a Nash equilibrium is to report her data with a privacy level of ε. The higher ε is, the less private the reported data is. We derive lower and upper bounds on the value of privacy that are asymptotically tight as the number of data subjects becomes large. Specifically, the lower bound shows that it is impossible to buy ε units of privacy with a smaller payment, and the upper bound is given by an achievable payment mechanism that we design. Based on these fundamental limits, we further derive lower and upper bounds on the minimum total payment for the data collector to achieve a given learning accuracy target, and show that the total payment of the designed mechanism is at most one individual's payment away from the minimum.
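
To make the privacy-accuracy trade-off behind these bounds concrete, the sketch below is a toy simulation, not the paper's mechanism or its bounds: each individual reports an ε-differentially-private randomized-response version of a binary signal about the state, and the collector's estimate of the state improves as ε grows (i.e., as less privacy is retained). The population size, the state value, and the ε grid are purely illustrative.

```python
# Toy randomized-response simulation of the privacy/accuracy trade-off.
# NOT the paper's payment mechanism; it only shows why higher eps (less privacy)
# lets the collector learn the underlying state more accurately.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000                  # number of data subjects (assumed)
state = 0.7                 # underlying state: probability an individual's bit is 1 (assumed)
bits = rng.random(n) < state

for eps in (0.1, 0.5, 1.0, 2.0):
    p_truth = np.exp(eps) / (np.exp(eps) + 1)        # report truthfully with this probability
    truthful = rng.random(n) < p_truth
    reports = np.where(truthful, bits, ~bits)        # otherwise flip the bit
    # Unbiased estimate of the state from the noisy reports
    est = (reports.mean() - (1 - p_truth)) / (2 * p_truth - 1)
    print(f"eps={eps}: estimated state = {est:.3f} (true {state})")
```

In the paper's setting the privacy level ε is chosen strategically by each individual in response to the payment mechanism; this sketch simply fixes ε to show the accuracy the collector would obtain at each level, which is the quantity the total-payment bounds trade off against.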