Predicting the Difficulty of Compromise through How Attackers Discover Vulnerabilities
PI(s), Co-PI(s), Researchers:
PI: Andrew Meneely; Co-PI: Laurie Williams; Researchers: Akond Rahman, Nuthan Munaiah
HARD PROBLEM(S) ADDRESSED
This refers to the SoS Hard Problems list released in November 2012.
- Metrics
PUBLICATIONS
Papers written as a result of this research in the current quarter only.
KEY HIGHLIGHTS
Each effort should submit one or two specific highlights. Each item should include a paragraph or two along with a citation if available. Write as if for the general reader of IEEE S&P.
The purpose of the highlights is to give our immediate sponsors a body of evidence that the funding they are providing (in the framework of the SoS lablet model) is delivering results that "more than justify" the investment they are making.
- We have ramped up our efforts to manually annotate and curate the 2018 National Collegiate Penetration Testing Competition (CPTC) data set. As part of our submission to the ESEM 2019 New Ideas and Emerging Results track, we honed our annotation process by curating one team's entire timeline. Then, to scale up our approach, we hired a CPTC-trained competitor to capture the timelines and map the events to MITRE ATT&CK techniques. The resulting data set will be a fine-grained log of events capturing the techniques attackers use to break into systems in the competition environment. The next step for this data set is to build a probabilistic model that estimates the probability of discovery for a given vulnerability based on observed attacker behavior (a minimal sketch of such an estimate follows this list). For a single team, we filtered millions of events from the Splunk monitoring system down to 47 relevant events covering approximately 17 discovered vulnerabilities.
- We are also beginning to instrument data collection for CPTC 2019. We will optimize our techniques for gathering data and creating timelines ahead of time so that we can reconstruct the timelines and deliver them to the research community faster (see the timeline-reconstruction sketch after this list).
- Security smells in Infrastructure as Code scripts. Defects in infrastructure as code (IaC) scripts can have serious consequences for organizations that adopt DevOps. While developing IaC scripts, practitioners may inadvertently introduce security smells: recurring coding patterns that are indicative of security weaknesses and can potentially lead to security breaches. The goal of this work is to help practitioners avoid insecure coding practices while developing IaC scripts through an empirical study of security smells. We expanded the scale and depth of our previously reported security smell work by increasing the number of languages studied to three: Ansible, Chef, and Puppet (prior results covered the Puppet language only). We identified nine security smells for IaC scripts (a simplified check for one of them appears after this list).
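To make the planned probabilistic model concrete, the following is a minimal sketch, not the project's actual model. The event tuples, team names, and the Beta(alpha, beta) smoothing are illustrative assumptions; the idea is simply to estimate, per vulnerability, a smoothed fraction of teams whose annotated events record a discovery.

```python
from collections import defaultdict

# Hypothetical annotated events of the form
# (team, ATT&CK technique id, vulnerability id); the shapes and ids
# are assumptions for illustration, not the real CPTC annotations.
events = [
    ("team1", "T1046", "vuln-03"),   # network service scanning
    ("team1", "T1110", "vuln-07"),   # brute force
    ("team2", "T1046", "vuln-03"),
]

TEAMS = {"team1", "team2", "team3"}  # all competing teams (assumed)

def discovery_probability(events, teams, alpha=1.0, beta=1.0):
    """Beta-Bernoulli estimate: the fraction of teams that found each
    vulnerability, smoothed by a Beta(alpha, beta) prior."""
    found_by = defaultdict(set)
    for team, _technique, vuln in events:
        found_by[vuln].add(team)
    n = len(teams)
    return {
        vuln: (len(finders) + alpha) / (n + alpha + beta)
        for vuln, finders in found_by.items()
    }

print(discovery_probability(events, TEAMS))
# {'vuln-03': 0.6, 'vuln-07': 0.4}
```

Conditioning the same counts on the ATT&CK techniques observed before each discovery would extend this toward the attacker-behavior model described above.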
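The timeline-reconstruction step mentioned for CPTC 2019 can be sketched as a filter-then-sort pass over raw monitoring events. The JSON schema (epoch time, host, and action fields) and the list of relevant actions below are hypothetical, not the actual Splunk export format.

```python
import json
from datetime import datetime, timezone

# Assumed set of event types worth keeping; the real filter would be
# derived from the manual annotation process described above.
RELEVANT_ACTIONS = {"ssh_login", "port_scan", "exploit_attempt"}

def build_timeline(raw_lines):
    """Filter raw JSON event lines down to relevant actions and
    return them in chronological order."""
    kept = []
    for line in raw_lines:
        event = json.loads(line)
        if event.get("action") in RELEVANT_ACTIONS:
            event["timestamp"] = datetime.fromtimestamp(
                event["time"], tz=timezone.utc)
            kept.append(event)
    return sorted(kept, key=lambda e: e["timestamp"])

raw = [
    '{"time": 1573000100, "host": "web01", "action": "port_scan"}',
    '{"time": 1573000050, "host": "web01", "action": "dns_lookup"}',
    '{"time": 1573000200, "host": "db01", "action": "ssh_login"}',
]
for e in build_timeline(raw):
    print(e["timestamp"], e["host"], e["action"])
```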
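For the IaC security smell work, the sketch below illustrates how a rule-based check might flag one of the nine smells, a hard-coded secret. The regular expression and the Puppet snippet are simplified assumptions for illustration only; the study's own analysis is more thorough.

```python
import re

# Simplified pattern for one smell (hard-coded secret); covers
# Puppet-style "=>", plain "=", and YAML-style ":" assignments.
HARD_CODED_SECRET = re.compile(
    r"(password|secret|private_key)\s*(=>|=|:)\s*['\"][^'\"]+['\"]",
    re.IGNORECASE,
)

def find_secret_smells(script_text):
    """Return (line_number, line) pairs that look like hard-coded secrets."""
    return [
        (i, line.strip())
        for i, line in enumerate(script_text.splitlines(), start=1)
        if HARD_CODED_SECRET.search(line)
    ]

puppet_snippet = """
user { 'deploy':
  ensure   => present,
  password => 'hunter2',
}
"""
print(find_secret_smells(puppet_snippet))
# [(4, "password => 'hunter2',")]
```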
COMMUNITY ENGAGEMENT
- Laurie Williams presented our paper at ICSE 2019, where it won an ACM SIGSOFT Distinguished Paper Award.
EDUCATIONAL ADVANCES
- None.