Reasoning about Accidental and Malicious Misuse via Formal Methods
PI(s), Co-PI(s), Researchers:
PI: Munindar Singh; Co-PIs: William Enck, Laurie Williams; Researchers: Hui Guo, Samin Yaseer Mahmud, Md Rayhanur Rahman, Vaibhav Garg
HARD PROBLEM(S) ADDRESSED
This refers to the Hard Problems released in November 2012.
- Policy-Governed Secure Collaboration
This project seeks to aid security analysts in identifying and protecting against accidental and malicious actions by users or software. To that end, it applies automated reasoning over unified representations of user expectations and software implementations to identify misuses that are sensitive to usage and machine context.
PUBLICATIONS
None.
KEY HIGHLIGHTS
- We evaluated our framework, which identifies rogue apps (i.e., apps that violate privacy expectations) based on app reviews. We created a gold-standard dataset of 100 apps, 62 of which are rogue. Our method achieved an F1 score of 76% on this dataset, surpassing a previous study that leveraged app descriptions and achieved an F1 score of 63%. In this setting, false negatives carry a high risk and are thus costlier than false positives. Our approach achieved 89% recall at 66% precision, whereas the previous study achieved 52% recall at 82% precision (see the F1 sanity check after these highlights).
- We built a dataflow-based static program analysis tool to study how Payment Service Provider (PSP) libraries for Android mobile apps store security-critical information.
- We conducted a comparison study of three Natural Language Processing (NLP) / Machine Learning (ML) models for extracting attacker techniques from Cyber Threat Intelligence (CTI); a classifier sketch illustrating this style of comparison appears after these highlights.
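The following minimal Python sketch (illustrative only, not project code) shows how the F1 scores reported in the first highlight follow from the reported precision and recall, since F1 is their harmonic mean.

    # Sanity check: F1 as the harmonic mean of precision and recall.
    def f1_score(precision: float, recall: float) -> float:
        return 2 * precision * recall / (precision + recall)

    print(f"Our framework: F1 = {f1_score(0.66, 0.89):.2f}")  # ~0.76, as reported
    # The next line prints ~0.64 versus the reported 63%; the small gap
    # reflects rounding of the underlying precision and recall figures.
    print(f"Prior study:   F1 = {f1_score(0.82, 0.52):.2f}")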
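The sketch below illustrates, in Python with scikit-learn, the general shape of the NLP/ML comparison in the third highlight: candidate classifiers are trained over the same text features and scored on labeled CTI sentences. The classifiers, features, example sentences, and technique labels here are assumptions for illustration; they are not the three models or the corpus used in the study.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Toy CTI sentences labeled with attacker-technique identifiers (hypothetical).
    sentences = [
        "The malware dumps credentials from LSASS memory",
        "Adversaries send a spearphishing email with a malicious attachment",
        "The implant creates a scheduled task for persistence",
        "Attackers harvest password hashes from the SAM database",
        "A phishing link delivers the first-stage loader",
        "Persistence is achieved through a scheduled task that runs at boot",
    ]
    labels = ["T1003", "T1566", "T1053", "T1003", "T1566", "T1053"]

    # Candidate models compared over identical TF-IDF features.
    candidates = {
        "naive_bayes": MultinomialNB(),
        "logistic_regression": LogisticRegression(max_iter=1000),
        "linear_svm": LinearSVC(),
    }
    for name, model in candidates.items():
        pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), model)
        scores = cross_val_score(pipeline, sentences, labels, cv=2, scoring="f1_macro")
        print(f"{name}: macro-F1 = {scores.mean():.2f}")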
COMMUNITY ENGAGEMENTS
None.
EDUCATIONAL ADVANCES
None.