Predicting the Difficulty of Compromise through How Attackers Discover Vulnerabilities
PI(s), Co-PI(s), Researchers:
PIs: Laurie Williams, Andrew Meneely
HARD PROBLEM(S) ADDRESSED
This refers to the Hard Problems released in November 2012.
- Metrics
PUBLICATIONS
Papers written as a result of your research from the current quarter only.
- No new publications
KEY HIGHLIGHTS
Each effort should submit one or two specific highlights. Each item should include a paragraph or two along with a citation if available. Write as if for the general reader of IEEE S&P.
The purpose of the highlights is to give our immediate sponsors a body of evidence that the funding they are providing (in the framework of the SoS lablet model) is delivering results that "more than justify" the investment they are making.
- We have begun the manual annotation of the 2018 National Collegiate Penetration Testing Competition (CPTC) data set. In the competition, 54 vulnerabilities were reported and scored, and we are constructing a timeline of events for each vulnerability report. The timelines are built by mapping each event to the MITRE ATT&CK framework. With every event mapped to an ATT&CK technique, we plan to use conditional probabilities over these event sequences to estimate the probability that a vulnerability in the system is discovered (a minimal sketch of this estimation appears after these highlights). The annotated data set will be of considerable use to the community, as it provides a detailed record of penetration testing activity in a controlled environment.
- Security smells in Infrastructure as Code scripts. Defects in infrastructure as code (IaC) scripts can have serious consequences for organizations that adopt DevOps. While developing IaC scripts, practitioners may inadvertently introduce security smells: recurring coding patterns that are indicative of security weaknesses and can potentially lead to security breaches. The goal of this work is to help practitioners avoid insecure coding practices in IaC scripts through an empirical study of security smells. We applied qualitative analysis to 1,726 IaC scripts and identified seven security smells. Next, we implemented and validated a static analysis tool, Security Linter for Infrastructure as Code scripts (SLIC), to identify occurrences of each smell in 15,232 IaC scripts collected from 293 open source repositories. We identified 21,201 occurrences of security smells, including 1,326 occurrences of hard-coded passwords. We submitted bug reports for 1,000 randomly selected occurrences and received 212 responses, in which development teams accepted 148 of the reported occurrences as issues to fix. We also observed that security smells can have a long lifetime; for example, a hard-coded secret can persist for as long as 98 months, with a median lifetime of 20 months. A simplified sketch of the kind of rule-based check SLIC performs also appears after these highlights.
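To make the planned conditional-probability analysis on the CPTC timelines concrete, the following minimal Python sketch shows one way a probability of discovery could be estimated once each event is mapped to a MITRE ATT&CK technique. The timeline structure, the example technique IDs, and the discovery_probability helper are hypothetical illustrations, not the project's actual data or code.

    # Minimal sketch: estimate P(vulnerability discovered | ATT&CK technique observed)
    # from annotated timelines. All data below is hypothetical.

    def discovery_probability(timelines, technique):
        """Fraction of annotated timelines containing `technique` that ended in discovery."""
        relevant = [discovered for events, discovered in timelines if technique in events]
        return sum(relevant) / len(relevant) if relevant else None

    # Each timeline: (sequence of ATT&CK technique IDs, whether the vulnerability was discovered)
    timelines = [
        (("T1595", "T1190"), True),   # Active Scanning -> Exploit Public-Facing Application
        (("T1595",), False),
        (("T1595", "T1110"), True),   # Active Scanning -> Brute Force
    ]

    print(discovery_probability(timelines, "T1595"))  # 2/3 of timelines with T1595 ended in discovery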
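Similarly, to illustrate the kind of rule-based check SLIC performs, the sketch below flags lines in an IaC script that look like hard-coded secrets. This is not SLIC itself: the regular expression, function name, and example manifest are simplified assumptions, and SLIC's actual rule set covers seven smells with more careful parsing.

    import re

    # Simplified, illustrative rule for hard-coded secrets; SLIC's real rules are more involved.
    HARD_CODED_SECRET = re.compile(
        r"(password|passwd|pwd|secret|key)\s*(=>|=|:)\s*['\"][^'\"]+['\"]",
        re.IGNORECASE,
    )

    def find_hard_coded_secrets(script_text):
        """Return (line number, line) pairs that appear to hard-code a secret."""
        return [
            (lineno, line.strip())
            for lineno, line in enumerate(script_text.splitlines(), start=1)
            if HARD_CODED_SECRET.search(line)
        ]

    example_manifest = """
    class mysql::server {
      $root_password = 'sup3rs3cret'
    }
    """
    print(find_hard_coded_secrets(example_manifest))  # flags the hard-coded password line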
COMMUNITY ENGAGEMENT
- Nuthan Munaiah presented our paper on the CPTC data set.
EDUCATIONAL ADVANCES:
- We have begun consulting with RIT's Department of Computing Security on a new Foundations of Computing Security course that would be required for all first-year students in the Golisano College of Computing and Information Sciences (about a thousand students per year). The work from this grant helps shape that course by providing specific examples from the competition for students to analyze.
Groups: