Biblio

Filters: Author is Li, Ninghui
2020-01-06
Cormode, Graham, Jha, Somesh, Kulkarni, Tejas, Li, Ninghui, Srivastava, Divesh, Wang, Tianhao.  2018.  Privacy at Scale: Local Differential Privacy in Practice. Proceedings of the 2018 International Conference on Management of Data. :1655–1658.
Local differential privacy (LDP), where users randomly perturb their inputs to provide plausible deniability of their data without the need for a trusted party, has been adopted recently by several major technology organizations, including Google, Apple and Microsoft. This tutorial aims to introduce the key technical underpinnings of these deployed systems, to survey current research that addresses related problems within the LDP model, and to identify relevant open problems and research directions for the community.
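
For readers unfamiliar with the mechanism behind such deployments, the following is a minimal sketch of binary randomized response, the simplest LDP primitive: each user flips their bit with a probability calibrated to epsilon, and the aggregator applies an unbiased correction. The deployed systems mentioned above use more elaborate encodings (e.g., Google's RAPPOR, Apple's count-based sketches), so this is illustrative only and not any vendor's implementation.

```python
import math
import random

def rr_perturb(bit: bool, epsilon: float) -> bool:
    """Binary randomized response: report the true bit with probability
    p = e^eps / (e^eps + 1), otherwise report its negation."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p else not bit

def rr_estimate(reports, epsilon: float) -> float:
    """Unbiased estimate of the fraction of users whose true bit is 1,
    computed from the noisy reports alone."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

# Example: 100,000 users, 30% hold the sensitive bit, epsilon = 1.
truth = [random.random() < 0.3 for _ in range(100_000)]
noisy = [rr_perturb(b, 1.0) for b in truth]
print(rr_estimate(noisy, 1.0))   # close to 0.3
```
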
Zhang, Zhikun, Wang, Tianhao, Li, Ninghui, He, Shibo, Chen, Jiming.  2018.  CALM: Consistent Adaptive Local Marginal for Marginal Release Under Local Differential Privacy. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :212–229.
Marginal tables are the workhorse of capturing the correlations among a set of attributes. We consider the problem of constructing marginal tables given a set of users' multi-dimensional data while satisfying Local Differential Privacy (LDP), a privacy notion that protects individual users' privacy without relying on a trusted third party. Existing works on this problem perform poorly in the high-dimensional setting; even worse, some incur very expensive computational overhead. In this paper, we propose CALM, Consistent Adaptive Local Marginal, which takes advantage of a careful challenge analysis and performs consistently better than existing methods. More importantly, CALM can scale well with large data dimensions and marginal sizes. We conduct extensive experiments on several real-world datasets. Experimental results demonstrate the effectiveness and efficiency of CALM over existing methods.
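
As a rough illustration of marginal release under LDP (not CALM itself, which adaptively chooses which marginals to collect and enforces consistency among them), the sketch below estimates a two-way marginal with generalized randomized response over the joint attribute domain; the domains, epsilon, and data are made up.

```python
import math
import random
from collections import Counter
from itertools import product

def grr_perturb(value, domain, epsilon):
    """k-ary (generalized) randomized response over a finite domain."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

def grr_marginal(reports, domain, epsilon):
    """Unbiased frequency estimate for every cell of the marginal table."""
    k, n = len(domain), len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = (1 - p) / (k - 1)
    counts = Counter(reports)
    return {v: (counts[v] / n - q) / (p - q) for v in domain}

# Toy 2-way marginal over attributes A in {0,1} and B in {0,1,2}.
domain = list(product(range(2), range(3)))
data = [(random.randint(0, 1), random.randint(0, 2)) for _ in range(50_000)]
reports = [grr_perturb(x, domain, epsilon=2.0) for x in data]
print(grr_marginal(reports, domain, epsilon=2.0))
```
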
2019-09-05
Deshotels, Luke, Deaconescu, Razvan, Carabas, Costin, Manda, Iulia, Enck, William, Chiroiu, Mihai, Li, Ninghui, Sadeghi, Ahmad-Reza.  2018.  iOracle: Automated Evaluation of Access Control Policies in iOS. Proceedings of the 2018 on Asia Conference on Computer and Communications Security. :117–131.

Modern operating systems, such as iOS, use multiple access control policies to define an overall protection system. However, the complexity of these policies and their interactions can hide policy flaws that compromise the security of the protection system. We propose iOracle, a framework that logically models the iOS protection system such that queries can be made to automatically detect policy flaws. iOracle models policies and runtime context extracted from iOS firmware images, developer resources, and jailbroken devices, and iOracle significantly reduces the complexity of queries by modeling policy semantics. We evaluate iOracle by using it to successfully triage executables likely to have policy flaws and comparing our results to the executables exploited in four recent jailbreaks. When applied to iOS 10, iOracle identifies previously unknown policy flaws that allow attackers to modify or bypass access control policies. For compromised system processes, consequences of these policy flaws include sandbox escapes (with respect to read/write file access) and changing the ownership of arbitrary files. By automating the evaluation of iOS access control policies, iOracle provides a practical approach to hardening iOS security by identifying policy flaws before they are exploited.
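
A toy sketch of the underlying idea, composing several policy layers into one model and querying it for flaw candidates, follows; iOracle itself extracts far richer facts from firmware images, developer resources, and jailbroken devices, and every subject, path, and rule below is hypothetical.

```python
# Hypothetical (subject, operation, target) facts extracted from two
# policy layers of the protection system; all names are invented.
sandbox_allows = {
    ("mediaserverd", "file-write", "/var/mobile/Media"),
    ("thirdparty_app", "file-write", "/var/mobile/Media"),
    ("backupd", "file-write", "/var/backups"),
}
unix_perms_allow = {
    ("thirdparty_app", "file-write", "/var/mobile/Media"),
    ("backupd", "file-write", "/var/backups"),
}

def effective_write_access():
    """A subject can write a target only if every policy layer allows it;
    intersecting the layers approximates the composed protection system."""
    return sandbox_allows & unix_perms_allow

def flag_candidates(sensitive_targets):
    """Triage rule: flag low-privilege subjects whose composed access
    reaches a target an analyst marked as sensitive."""
    return [(s, t) for (s, op, t) in effective_write_access()
            if t in sensitive_targets and s == "thirdparty_app"]

print(flag_candidates({"/var/mobile/Media"}))
```
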

2019-02-08
Aafer, Yousra, Tao, Guanhong, Huang, Jianjun, Zhang, Xiangyu, Li, Ninghui.  2018.  Precise Android API Protection Mapping Derivation and Reasoning. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :1151–1164.

The Android research community has long focused on building an Android API permission specification, which can be leveraged by app developers to determine the optimum set of permissions necessary for a correct and safe execution of their app. However, while prominent existing efforts provide a good approximation of the permission specification, they suffer from a few shortcomings. Dynamic approaches cannot generate complete results, although they are accurate for the particular execution. In contrast, static approaches provide better coverage, but produce imprecise mappings due to their lack of path-sensitivity. In fact, in light of Android's access control complexity, these approximations hardly capture the actual correlations between enforced protections. To address this, we propose to precisely derive the Android protection specification in a path-sensitive fashion, using a novel graph abstraction technique. We further showcase how we can apply the generated maps to tackle security issues through logical satisfiability reasoning. Our constructed maps for 4 Android Open Source Project (AOSP) images highlight the significance of our approach, as ~41% of APIs' protections cannot be correctly modeled without our technique.
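
To make the notion of a protection map concrete, here is a minimal sketch of satisfiability-style reasoning over a permission formula in disjunctive normal form: an API call is permitted if all permissions in at least one clause are held. The map entry and permission names are hypothetical and do not come from the paper's AOSP-derived maps.

```python
# Hypothetical protection map: API -> protection formula in DNF, i.e. the
# call succeeds if ALL permissions in ANY one clause are held.
PROTECTION_MAP = {
    "WifiManager.setWifiEnabled": [
        {"android.permission.CHANGE_WIFI_STATE"},
        {"android.permission.NETWORK_SETTINGS"},
    ],
}

def call_permitted(api: str, held_permissions: set) -> bool:
    """Evaluate the DNF formula: satisfied iff some clause is a subset
    of the caller's held permissions."""
    clauses = PROTECTION_MAP.get(api, [set()])  # unmapped APIs: unprotected
    return any(clause <= held_permissions for clause in clauses)

print(call_permitted("WifiManager.setWifiEnabled",
                     {"android.permission.CHANGE_WIFI_STATE"}))   # True
print(call_permitted("WifiManager.setWifiEnabled", set()))        # False
```
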

2018-08-23
Wang, Ruowen, Azab, Ahmed M., Enck, William, Li, Ninghui, Ning, Peng, Chen, Xun, Shen, Wenbo, Cheng, Yueqiang.  2017.  SPOKE: Scalable Knowledge Collection and Attack Surface Analysis of Access Control Policy for Security Enhanced Android. Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. :612–624.

SEAndroid is a mandatory access control (MAC) framework that can confine faulty applications on Android. Nevertheless, the effectiveness of SEAndroid enforcement depends on the employed policy. The growing complexity of Android makes it difficult for policy engineers to have complete domain knowledge of every system functionality. As a result, policy engineers sometimes craft over-permissive and ineffective policy rules, which unfortunately increase the attack surface of the Android system and have allowed multiple real-world privilege escalation attacks. We propose SPOKE, an SEAndroid Policy Knowledge Engine, that systematically extracts domain knowledge from rich-semantic functional tests and further uses the knowledge for characterizing the attack surface of SEAndroid policy rules. Our attack surface analysis is achieved in two steps: 1) It reveals policy rules that cannot be justified by the collected domain knowledge. 2) It identifies potentially over-permissive access patterns allowed by those unjustified rules as the attack surface. We evaluate SPOKE using 665 functional tests targeting 28 different categories of functionalities developed by the Samsung Android Team. SPOKE successfully collected 12,491 access patterns for the 28 categories as domain knowledge, and used the knowledge to reveal 320 unjustified policy rules and 210 over-permissive access patterns defined by those rules, including one related to the notorious libstagefright vulnerability. These findings have been confirmed by policy engineers.
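
The two analysis steps can be pictured with the toy sketch below: accesses granted by policy but never justified by the collected knowledge are flagged, and those touching sensitive target types form the attack surface to review. All type names and tuples are invented for illustration; SPOKE's real knowledge base is built from Samsung's functional tests.

```python
# Hypothetical inputs: 'observed' holds (source_type, target_type, class,
# permission) tuples logged while running functional tests; 'policy' holds
# the access tuples granted by allow rules.
observed = {
    ("untrusted_app", "app_data_file", "file", "read"),
    ("untrusted_app", "app_data_file", "file", "write"),
}
policy = {
    ("untrusted_app", "app_data_file", "file", "read"),
    ("untrusted_app", "app_data_file", "file", "write"),
    ("untrusted_app", "system_data_file", "file", "write"),  # never exercised
}

# Step 1: accesses allowed by policy but not justified by any collected
# domain knowledge.
unjustified = policy - observed

# Step 2: treat unjustified accesses to sensitive target types as the
# attack surface to review with policy engineers.
SENSITIVE_TYPES = {"system_data_file"}
attack_surface = {t for t in unjustified if t[1] in SENSITIVE_TYPES}
print(attack_surface)
```
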

2017-09-26
Chen, Haining, Chowdhury, Omar, Li, Ninghui, Khern-am-nuai, Warut, Chari, Suresh, Molloy, Ian, Park, Youngja.  2016.  Tri-Modularization of Firewall Policies. Proceedings of the 21st ACM on Symposium on Access Control Models and Technologies. :37–48.

Firewall policies are notorious for having misconfiguration errors that can defeat their intended purpose of protecting hosts in the network from malicious users. We believe this is because today's firewall policies are mostly monolithic. Inspired by ideas from modular programming and code refactoring, in this work we introduce three kinds of modules: primary, auxiliary, and template, which facilitate the refactoring of a firewall policy into smaller, reusable, comprehensible, and more manageable components. We present algorithms for generating each of the three modules for a given legacy firewall policy. We also develop ModFP, an automated tool for converting legacy firewall policies represented as access control lists into their modularized format. With the help of ModFP, when examining several real-world policies with sizes ranging from dozens to hundreds of rules, we were able to identify subtle errors.
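
As a rough illustration of the refactoring idea (not the paper's primary/auxiliary/template generation algorithms), the sketch below factors an ACL's repeated (destination, port, action) structure into candidate reusable modules; the rules themselves are invented.

```python
from collections import defaultdict

# Hypothetical ACL: (source, destination, port, action) tuples.
acl = [
    ("10.0.0.0/24", "dmz-web", 80,  "allow"),
    ("10.0.0.0/24", "dmz-web", 443, "allow"),
    ("10.0.1.0/24", "dmz-web", 80,  "allow"),
    ("10.0.1.0/24", "dmz-web", 443, "allow"),
    ("any",         "dmz-web", 22,  "deny"),
]

# Factor out the (destination, port, action) pattern shared by several
# sources; each pattern becomes a candidate reusable module.
modules = defaultdict(list)
for src, dst, port, action in acl:
    modules[(dst, port, action)].append(src)

for pattern, sources in modules.items():
    print("candidate module:", pattern, "sources:", sorted(sources))
```
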

2017-07-24
Su, Dong, Cao, Jianneng, Li, Ninghui, Bertino, Elisa, Jin, Hongxia.  2016.  Differentially Private K-Means Clustering. Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. :26–37.

There are two broad approaches for differentially private data analysis. The interactive approach aims at developing customized differentially private algorithms for various data mining tasks. The non-interactive approach aims at developing differentially private algorithms that can output a synopsis of the input dataset, which can then be used to support various data mining tasks. In this paper we study the effectiveness of the two approaches on differentially private k-means clustering. We develop techniques to analyze the empirical error behaviors of the existing interactive and non-interactive approaches. Based on the analysis, we propose an improvement of DPLloyd, which is a differentially private version of Lloyd's algorithm. We also propose a non-interactive approach, EUGkM, which publishes a differentially private synopsis for k-means clustering. Results from extensive and systematic experiments support our analysis and demonstrate the effectiveness of our improvement to DPLloyd and of the proposed EUGkM algorithm.
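
A minimal sketch of the interactive approach in the spirit of DPLloyd is shown below: each Lloyd iteration releases Laplace-noised cluster counts and coordinate sums. The budget split, sensitivity bounds, initialization, and clipping are simplifications for illustration, not the paper's exact improvement or the EUGkM synopsis.

```python
import numpy as np

def dp_kmeans(points, k, iters, epsilon, rng=np.random.default_rng(0)):
    """Interactive DP k-means sketch in the style of DPLloyd.

    Assumes points lie in [-1, 1]^d, so adding or removing one record
    changes each coordinate sum by at most 1 and the count by 1; the
    privacy budget is split evenly across iterations and, within an
    iteration, evenly between the count and the sums (a simplification).
    """
    points = np.asarray(points, dtype=float)
    n, d = points.shape
    eps_iter = epsilon / iters
    centers = points[rng.choice(n, k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest current center.
        labels = np.argmin(((points[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = points[labels == j]
            # Laplace noise: scale = sensitivity / (eps_iter / 2).
            count = len(members) + rng.laplace(scale=2.0 / eps_iter)
            sums = members.sum(axis=0) + rng.laplace(scale=2.0 * d / eps_iter, size=d)
            if count > 1:
                centers[j] = np.clip(sums / count, -1.0, 1.0)
    return centers

data = np.random.default_rng(1).uniform(-1, 1, size=(5000, 2))
print(dp_kmeans(data, k=3, iters=5, epsilon=1.0))
```
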

2017-05-22
Day, Wei-Yen, Li, Ninghui, Lyu, Min.  2016.  Publishing Graph Degree Distribution with Node Differential Privacy. Proceedings of the 2016 International Conference on Management of Data. :123–138.

Graph data publishing under node-differential privacy (node-DP) is challenging due to the huge sensitivity of queries. However, since a node in graph data oftentimes represents a person, node-DP is necessary to achieve personal data protection. In this paper, we investigate the problem of publishing the degree distribution of a graph under node-DP by exploring the projection approach to reduce the sensitivity. We propose two approaches, based on aggregation and on cumulative histograms, to publish the degree distribution. The experiments demonstrate that our approaches greatly reduce the error of approximating the true degree distribution and significantly improve over existing works. We also present an introspective analysis for understanding the factors that affect publishing the degree distribution with node-DP.
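
The projection idea can be sketched as follows: bound every degree by a threshold theta via edge truncation, then add Laplace noise calibrated to a sensitivity bound that the projection makes finite. The truncation order, the correct sensitivity value, and the aggregation and cumulative-histogram refinements are where the paper's contributions lie; the edge list and the sensitivity constant below are placeholders.

```python
import numpy as np
from collections import defaultdict

def project_graph(edges, theta):
    """Edge-truncation projection: process edges in a fixed order and keep
    an edge only if neither endpoint has reached degree theta. This bounds
    every degree by theta, which is what makes node-DP noise calibration
    possible; the exact sensitivity of the resulting histogram depends on
    the projection and is not derived here."""
    deg = defaultdict(int)
    for u, v in sorted(edges):
        if deg[u] < theta and deg[v] < theta:
            deg[u] += 1
            deg[v] += 1
    return deg

def noisy_degree_histogram(edges, nodes, theta, epsilon, sensitivity,
                           rng=np.random.default_rng(0)):
    """Publish the projected graph's degree histogram with Laplace noise;
    `sensitivity` must upper-bound how much the histogram can change when
    one node (and all its edges) is added or removed."""
    deg = project_graph(edges, theta)
    hist = np.zeros(theta + 1)
    for node in nodes:
        hist[min(deg[node], theta)] += 1
    return hist + rng.laplace(scale=sensitivity / epsilon, size=hist.shape)

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (3, 4)]
print(noisy_degree_histogram(edges, nodes=range(5), theta=2,
                             epsilon=1.0, sensitivity=5.0))
```
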

2015-06-30
Yang, Weining, Chen, Jing, Xiong, Aiping, Proctor, Robert W, Li, Ninghui.  2015.  Effectiveness of a phishing warning in field settings. Proceedings of the 2015 Symposium and Bootcamp on the Science of Security. :14.

We have begun to investigate the effectiveness of a phishing-warning Chrome extension in a field setting of everyday computer use. A preliminary experiment has been conducted in which participants installed and used the extension. They were required to fill out an online browsing-behavior questionnaire by clicking on a survey link sent in a weekly email by us. Two phishing attacks were simulated during the study by directing participants to "fake" (phishing) survey sites we created. Almost all participants who saw the warnings on our fake sites entered incorrect passwords, but follow-up interviews revealed that only one participant did so intentionally. The interviews also indicated that the warning's failure was mainly due to the survey task being mandatory. Another finding of interest from the interviews was that about 50% of the participants had never heard of phishing or did not understand its meaning.