Predicting the Difficulty of Compromise through How Attackers Discover Vulnerabilities

PI(s), Co-PI(s), Researchers:

HARD PROBLEM(S) ADDRESSED
This refers to Hard Problems, released November 2012.

  • Metrics

PUBLICATIONS
Papers written as a result of your research from the current quarter only.

Theisen, C., Sohn, H., Tripp, D., and Williams, L., "BP: Profiling Vulnerabilities on the Attack Surface," IEEE Secure Development Conference (SecDev 2018), Cambridge, MA, to appear.

KEY HIGHLIGHTS
Each effort should submit one or two specific highlights. Each item should include a paragraph or two along with a citation if available. Write as if for the general reader of IEEE S&P.
The purpose of the highlights is to give our immediate sponsors a body of evidence that the funding they are providing (in the framework of the SoS lablet model) is delivering results that "more than justify" the investment they are making.

  • We are developing a new set of metrics for measuring exploitability using the attack surface. These metrics are based on the observed behavior of penetration testers in a competition environment. We have collected intrusion detection data comprising over 4 billion events from the Regional Collegiate Penetration Testing Competition (https://nationalcptc.org/), and in November we will collect data from the national competition as well. This data provides detailed timelines of how attackers find, exploit, and pivot with vulnerabilities. By studying how attackers work with the known attack surface, we will develop metrics that indicate which vulnerabilities are at highest risk in a given deployment (a sketch of one candidate timeline metric appears after this list).
  • To date, approaches for predicting which code artifacts are vulnerable have utilized a binary classification of code as vulnerable or not vulnerable. To better understand the strengths and weaknesses of vulnerability prediction approaches, vulnerability datasets with classification and severity data are needed. In this work, we use crash dump stack traces to approximate the attack surface of Mozilla Firefox. We then generate a dataset of 271 vulnerable files in Firefox, classified using the Common Weakness Enumeration (CWE) system. We use these files as an oracle for the evaluation of the attack surface generated using crash data. In the Firefox vulnerability dataset, 14 different classifications of vulnerabilities appeared at least once. In our study, 85.3% of vulnerable files were on the attack surface generated using crash data. We found no difference between the severity of vulnerabilities found on the attack surface generated using crash data and the severity of those not occurring on the attack surface. (The second sketch after this list mirrors this coverage computation.)
  • Our systematic literature review was approved for publication. The review examines the current body of literature to determine the various definitions of the "attack surface" metaphor and identifies clusters among those definitions. The phrase "attack surface" can mean many things to many people, and this study helps clarify what is intended when the metaphor is used.
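
As context for the first highlight, the following is a minimal sketch of one candidate timeline-based metric: the mean time from a team's first discovery of a vulnerability to its first exploitation. The event schema, names, and timestamps here are hypothetical placeholders; the actual metrics are still under development and will be derived from the competition's intrusion detection data.

```python
from datetime import datetime

# Hypothetical reduced event records: (team, vulnerability id, action, timestamp).
# In practice these would be distilled from the raw intrusion detection logs.
events = [
    ("team01", "VULN-A", "discover", datetime(2018, 11, 3, 10, 12)),
    ("team01", "VULN-A", "exploit",  datetime(2018, 11, 3, 11, 45)),
    ("team02", "VULN-A", "discover", datetime(2018, 11, 3, 10, 30)),
    ("team02", "VULN-A", "exploit",  datetime(2018, 11, 3, 10, 55)),
]

def mean_time_to_exploit(events, vuln_id):
    """Average minutes between a team first discovering a vulnerability
    and first exploiting it; lower values suggest the vulnerability is
    easier to exploit from the attack surface."""
    firsts = {}  # (team, action) -> earliest timestamp
    for team, vid, action, ts in events:
        if vid == vuln_id:
            key = (team, action)
            if key not in firsts or ts < firsts[key]:
                firsts[key] = ts
    deltas = [
        (firsts[(team, "exploit")] - firsts[(team, "discover")]).total_seconds() / 60
        for (team, action) in firsts
        if action == "discover" and (team, "exploit") in firsts
    ]
    return sum(deltas) / len(deltas) if deltas else None

print(mean_time_to_exploit(events, "VULN-A"))  # -> 59.0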
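
The coverage figure in the second highlight reduces to a small set computation: every file appearing on any crash stack trace joins the approximated attack surface, and coverage is the share of known-vulnerable files that fall on it. Below is a minimal sketch of that computation with made-up file names; the real analysis operated on Mozilla Firefox crash dumps and the 271-file CWE-classified oracle described above.

```python
# Hypothetical inputs: parsed crash stack traces (each a list of source
# files) and an oracle of known-vulnerable files. File names are made up.
crash_traces = [
    ["dom/events.cpp", "js/engine.cpp", "memory/alloc.cpp"],
    ["gfx/render.cpp", "js/engine.cpp"],
]
vulnerable_files = {"js/engine.cpp", "gfx/render.cpp", "netwerk/http.cpp"}

# Every file appearing on any crash stack trace is considered part of
# the approximated attack surface.
attack_surface = {f for trace in crash_traces for f in trace}

# Coverage: the fraction of known-vulnerable files that land on the
# attack surface (85.3% in the Firefox study; 66.7% on this toy data).
coverage = len(vulnerable_files & attack_surface) / len(vulnerable_files)
print(f"{coverage:.1%} of vulnerable files are on the attack surface")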

COMMUNITY ENGAGEMENTS

  • Andy Meneely presented this work to the CyberCorps Scholarship for Service program at RIT and received feedback on the work.

EDUCATIONAL ADVANCES:

  • Andy Meneely revised his web application fuzz testing project in the SWEN 331 Engineering Secure Software course based on the research in this lablet. The course sees 60-80 students per academic year and is required for all software engineering majors at RIT.