Bibliography
While the existence of many security elements in software can be measured (e.g., vulnerabilities, security controls, or privacy controls), it is challenging to measure their relative security impact. In the physical world we can often measure the impact of individual elements on a system. However, in cyber security we often lack ground truth (i.e., the ability to directly measure significance). In this work we propose to address this by leveraging human expert opinion to provide ground truth. Experts are iteratively asked to compare pairs of security elements to determine their relative significance. On the back end, our knowledge encoding tool performs a form of binary insertion sort on a set of security elements, using each expert as an oracle for the element comparisons. The tool not only sorts the elements (equality between elements may be permitted), but also records the strength or degree of each relationship. The output is a directed acyclic ‘constraint’ graph that provides a total ordering among the sets of equivalent elements. Multiple constraint graphs are then unified to form a single graph that is used to generate a scoring or prioritization system. For our empirical study, we apply this domain-agnostic measurement approach to generate scoring/prioritization systems in the areas of vulnerability scoring, privacy control prioritization, and cyber security control evaluation.
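The back-end sorting procedure described in this abstract lends itself to a compact illustration. The Python sketch below shows a binary insertion sort that uses an expert oracle to compare elements, permits equality (grouping equivalent elements), and records each judged relationship so that a constraint graph could later be assembled. The oracle interface, the strength scale, and all identifiers are illustrative assumptions, not the authors' actual implementation.

    # Minimal sketch, assuming an oracle(a, b) that returns (cmp, strength), where
    # cmp is -1/0/+1 and strength expresses how strongly the expert holds the judgment.
    # These conventions are assumptions for illustration only.
    def binary_insert(groups, element, oracle, constraints):
        """Insert `element` into `groups`, a list of equivalence groups ordered
        from least to most significant, asking the expert oracle for comparisons."""
        lo, hi = 0, len(groups)
        while lo < hi:
            mid = (lo + hi) // 2
            rep = groups[mid][0]                      # compare against one group representative
            cmp_result, strength = oracle(element, rep)
            constraints.append((element, rep, cmp_result, strength))  # judged relation for the constraint graph
            if cmp_result == 0:                       # expert judges the two elements equivalent
                groups[mid].append(element)
                return
            elif cmp_result < 0:                      # element is less significant than rep
                hi = mid
            else:                                     # element is more significant than rep
                lo = mid + 1
        groups.insert(lo, [element])                  # start a new equivalence group at position lo

    def sort_elements(elements, oracle):
        groups, constraints = [], []
        for e in elements:
            binary_insert(groups, e, oracle, constraints)
        return groups, constraints                    # total order over equivalence groups, plus judged relations

In this sketch, one run with one expert yields a totally ordered list of equivalence groups together with the recorded comparisons; unifying the constraint lists obtained from several experts would then produce the single graph from which a scoring or prioritization system is generated.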
Vulnerability exploitation is reportedly one of the main attack vectors against computer systems. Yet, most vulnerabilities remain unexploited by attackers. It is therefore of central importance to identify vulnerabilities that carry a high ‘potential for attack’. In this paper we rely on Symantec data on real attacks detected in the wild to identify a trade-off between the Impact and the Complexity of a vulnerability in terms of the attacks it generates; exploiting this effect, we devise a readily computable estimator of a vulnerability's Attack Potential that reliably estimates the expected volume of attacks against that vulnerability. We evaluate our estimator's performance against standard patching policies by measuring foiled attacks and the required workload, expressed as the number of vulnerabilities that must be patched. We show that our estimator significantly improves over standard patching policies by ruling out low-risk vulnerabilities while maintaining the same level of coverage against attacks in the wild. Our estimator can be used as a first aid for vulnerability prioritisation, focusing assessment efforts on high-potential vulnerabilities.
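To make the coverage-versus-workload evaluation concrete, the toy Python sketch below compares a CVSS-threshold patching policy with a policy driven by an attack-potential-style score, measuring foiled attack volume against workload. The scoring rule (impact divided by complexity), the thresholds, and all data are invented for illustration; the paper's actual estimator is derived from the Symantec attack data and is not reproduced here.

    # Toy sketch under stated assumptions; field names, scores, and attack counts are made up.
    def attack_potential(vuln):
        # Hypothetical proxy: high impact and low attack complexity imply higher potential.
        return vuln["impact"] / vuln["complexity"]

    def evaluate(policy, vulns, attacks):
        """Return (foiled attack volume, workload) for the vulnerabilities a policy selects."""
        patched = {v["cve"] for v in vulns if policy(v)}
        foiled = sum(count for cve, count in attacks.items() if cve in patched)
        return foiled, len(patched)

    vulns = [
        {"cve": "CVE-A", "cvss": 9.8, "impact": 10.0, "complexity": 1.0},
        {"cve": "CVE-B", "cvss": 7.5, "impact": 6.4, "complexity": 2.5},
        {"cve": "CVE-C", "cvss": 9.1, "impact": 5.9, "complexity": 3.0},
    ]
    attacks = {"CVE-A": 120, "CVE-B": 3}   # attack volumes observed in the wild (invented)

    # Baseline policy: patch everything with CVSS >= 9, regardless of observed exploitation.
    print(evaluate(lambda v: v["cvss"] >= 9.0, vulns, attacks))          # (120, 2)
    # Estimator-driven policy: patch only vulnerabilities above an attack-potential threshold.
    print(evaluate(lambda v: attack_potential(v) >= 4.0, vulns, attacks))  # (120, 1)

In this toy example both policies foil the same attack volume, but the estimator-driven policy patches fewer vulnerabilities; this is the trade-off between coverage of attacks in the wild and patching workload that the abstract describes.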