Biblio

Filters: Keyword is Predicting the Difficulty of Compromise through How Attackers Discover Vulnerabilities
2019-10-10
Nuthan Munaiah, Akond Rahman, Justin Pelletier, Laurie Williams, Andrew Meneely.  2019.  Characterizing Attacker Behavior in a Cybersecurity Penetration Testing Competition. 13th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM).

Inculcating an attacker mindset (i.e., learning to think like an attacker) is an essential skill for engineers and administrators to improve the overall security of software. Describing the approach that adversaries use to discover and exploit vulnerabilities to infiltrate software systems can help inform such an attacker mindset. Aims: Our goal is to assist developers and administrators in inculcating an attacker mindset by proposing an approach to codify attacker behavior in a cybersecurity penetration testing competition. Method: We use an existing multimodal dataset of events captured during the 2018 National Collegiate Penetration Testing Competition (CPTC'18) to characterize the approach a team of attackers used to discover and exploit vulnerabilities. Results: We collected 44 events to characterize the approach that one of the participating teams took to discover and exploit seven vulnerabilities. We used the MITRE ATT&CK™ framework to codify the approach in terms of tactics and techniques. Conclusions: We show that characterizing an attackers' campaign as a chronological sequence of MITRE ATT&CK™ tactics and techniques is feasible. We hope that such a characterization can inform the attacker mindset of engineers and administrators in their pursuit of engineering secure software systems.

2018-07-09
Christopher Theisen, Hyunwoo Sohn, Dawson Tripp, Laurie Williams.  2018.  BP: Profiling Vulnerabilities on the Attack Surface. IEEE SecDev.

Security practitioners use the attack surface of software systems to prioritize which areas of a system to test and analyze. To date, approaches for predicting which code artifacts are vulnerable have used a binary classification of code as vulnerable or not vulnerable. To better understand the strengths and weaknesses of vulnerability prediction approaches, vulnerability datasets with classification and severity data are needed. The goal of this paper is to help researchers and practitioners make security effort prioritization decisions by evaluating which classifications and severities of vulnerabilities appear on an attack surface approximated using crash dump stack traces. In this work, we use crash dump stack traces to approximate the attack surface of Mozilla Firefox. We then generate a dataset of 271 vulnerable files in Firefox, classified using the Common Weakness Enumeration (CWE) system. We use these files as an oracle for the evaluation of the attack surface generated using crash data. In the Firefox vulnerability dataset, 14 different classifications of vulnerabilities appeared at least once. In our study, 85.3% of vulnerable files were on the attack surface generated using crash data. We found no difference between the severity of vulnerabilities found on the attack surface generated using crash data and those not occurring on the attack surface. Additionally, we discuss lessons learned during the development of this vulnerability dataset.