Bibliography
Software metrics help developers discover and fix mistakes. However, despite promising empirical evidence, vulnerability discovery metrics are seldom relied upon in practice. In prior research, the effectiveness of these metrics has typically been expressed using the precision and recall of a prediction model that uses the metrics as explanatory variables. These prediction models, being black boxes, may not be perceived as useful by developers. However, by systematically interpreting the models and metrics, we can provide developers with nuanced insights about factors that have led to security mistakes in the past. In this paper, we present a preliminary approach to using vulnerability discovery metrics to provide insightful feedback to developers as they engineer software. We collected ten metrics (churn, collaboration centrality, complexity, contribution centrality, nesting, known offender, source lines of code, \# inputs, \# outputs, and \# paths) from six open-source projects. We assessed the generalizability of the metrics across two contextual dimensions (application domain and programming language) and between projects within a domain, computed thresholds for the metrics using an unsupervised approach from the literature, and assessed the ability of these unsupervised thresholds to classify risk from historical vulnerabilities in the Chromium project. The observations from this study feed into our ongoing research to automatically aggregate insights from the various analyses and generate natural language feedback on security. We hope that our approach to generating automated feedback will accelerate the adoption of research on vulnerability discovery metrics.
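A minimal sketch of one way such unsupervised, distribution-based thresholds could be derived and applied, assuming per-file metric values (e.g., churn) are already available; the quantiles, risk labels, and sample data below are illustrative assumptions, not necessarily the scheme used in the study:

```python
import numpy as np

def quantile_thresholds(values, quantiles=(0.70, 0.80, 0.90)):
    """Derive risk thresholds for a metric from its empirical distribution.

    values: per-file (or per-function) metric values, e.g. churn or SLOC.
    Returns thresholds marking medium/high/very-high risk bands.
    """
    values = np.asarray(values, dtype=float)
    return {q: float(np.quantile(values, q)) for q in quantiles}

def classify(value, thresholds):
    """Map a single metric value to a coarse risk label using the thresholds."""
    if value >= thresholds[0.90]:
        return "very high"
    if value >= thresholds[0.80]:
        return "high"
    if value >= thresholds[0.70]:
        return "medium"
    return "low"

# Example: churn values for a handful of files (illustrative numbers).
churn = [3, 5, 8, 12, 20, 35, 60, 110, 240, 500]
t = quantile_thresholds(churn)
print(t, classify(130, t))
```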
Mobile app developers today face a hard decision: independently develop native apps for different operating systems, or develop a single app that is cross-platform compatible. The availability of different tools and approaches to support cross-platform app development only makes the decision harder. In this study, we used user reviews of apps to empirically understand the relationship (if any) between the approach used in the development of an app and its perceived quality. We used Natural Language Processing (NLP) models to classify 787,228 user reviews of the Android and iOS versions of 50 apps as complaints in one of four quality concerns: performance, usability, security, and reliability. We found that hybrid apps (on both Android and iOS platforms) tend to be more prone to user complaints than interpreted/generated apps. In a study of Facebook, an app that underwent a change in development approach from hybrid to native, we found that the change was accompanied by a reduction in user complaints about performance and reliability.
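For illustration, a minimal sketch of how user reviews could be classified into the four quality concerns with an off-the-shelf text-classification pipeline; the tiny training set, features, and classifier here are assumptions for demonstration and are not the NLP models used in the study:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set; a real study would use a much larger
# labeled sample of reviews covering the four quality concerns.
reviews = [
    "app crashes every time I open it",
    "the login screen asks for too many permissions",
    "buttons are tiny and hard to tap",
    "scrolling is laggy and drains my battery",
]
labels = ["reliability", "security", "usability", "performance"]

# TF-IDF features over unigrams/bigrams fed into a logistic regression classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(reviews, labels)

print(model.predict(["keeps freezing and losing my data"]))
```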
The Common Vulnerability Scoring System (CVSS) is the de facto standard for vulnerability severity measurement today and is crucial in the analytics driving software fortification. Required by the U.S. National Vulnerability Database, CVSS has been used to score over 75,000 vulnerabilities. We compare how CVSS correlates with another, closely related measure of security impact: bounties. Recent economic studies of vulnerability disclosure processes show a clear relationship between black market value and bounty payments. We analyzed the CVSS scores and bounties awarded for 703 vulnerabilities across 24 products. We found a weak (Spearman's ρ = 0.34) correlation between CVSS scores and bounties, with CVSS being more likely to underestimate bounty. We believe such a negative result is a cause for concern. We investigated why these measurements were so discordant by (a) analyzing the individual questions of CVSS with respect to bounties and (b) conducting a qualitative study to find the similarities and differences between CVSS and the publicly available criteria for awarding bounties. Among our findings were that the bounty criteria were more explicit about code execution and privilege escalation, whereas CVSS makes no explicit mention of those. We also found that bounty valuations are evaluated solely by project maintainers, whereas CVSS has little provenance in practice.
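A small sketch of the kind of rank-correlation analysis described above, using SciPy's Spearman correlation; the score/bounty pairs are made-up stand-ins, not the paper's 703-vulnerability dataset:

```python
from scipy.stats import spearmanr

# Illustrative (made-up) pairs of CVSS base scores and bounty payouts in USD;
# the paper analyzed 703 real vulnerabilities across 24 products.
cvss = [4.3, 5.0, 6.8, 7.5, 9.0, 9.8, 6.1, 5.4]
bounty = [500, 1000, 1500, 3000, 5000, 10000, 800, 2000]

# Spearman's rho compares the rank orderings of the two measures.
rho, p_value = spearmanr(cvss, bounty)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```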
When reasoning about software security, researchers and practitioners use the phrase ``attack surface'' as a metaphor for risk. Enumerate and minimize the ways attackers can break in, the metaphor says, and risk is reduced and the system is better protected. But software systems are much more complicated than their surfaces. We propose function- and file-level attack surface metrics, proximity and risky walk, that enable fine-grained risk assessment. Our risky walk metric is highly configurable: we use PageRank on a probability-weighted call graph to simulate attacker behavior of finding or exploiting a vulnerability. We provide evidence-based guidance for deploying these metrics, including an extensive parameter tuning study. We conducted an empirical study on two large open source projects, FFmpeg and Wireshark, to investigate the potential correlation between our metrics and historical post-release vulnerabilities. We found our metrics to be statistically significantly associated with vulnerable functions/files with a small-to-large Cohen's d effect size. Our prediction model achieved an increase of 36% (in FFmpeg) and 27% (in Wireshark) in the average value of the $F_2$-measure over a base model built with SLOC and coupling metrics. Our prediction model outperformed comparable models from prior literature with notable improvements: a 58% reduction in false negative rate, an 81% reduction in false positive rate, and a 548% increase in $F_2$-measure. These metrics advance vulnerability prevention by (a) being flexible in terms of granularity, (b) performing better than the vulnerability prediction literature, and (c) being tunable so that practitioners can tailor the metrics to their products and better assess security risk.
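As a rough illustration of the risky walk idea, the sketch below runs personalized PageRank over a probability-weighted toy call graph with NetworkX; the graph, edge weights, damping factor, and entry-point bias are illustrative assumptions rather than the parameters tuned in the paper:

```python
import networkx as nx

# Toy call graph: nodes are functions, edge weights approximate the
# probability that control flows along a call during an attack.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("parse_input", "decode_frame", 0.9),
    ("parse_input", "log_error", 0.1),
    ("decode_frame", "alloc_buffer", 0.7),
    ("decode_frame", "log_error", 0.3),
    ("alloc_buffer", "memcpy_wrapper", 1.0),
])

# Bias the random walk toward attacker entry points (functions that
# handle external input), here just one hypothetical entry point.
entry_bias = {n: (1.0 if n == "parse_input" else 0.0) for n in G.nodes}

risky_walk = nx.pagerank(G, alpha=0.85, personalization=entry_bias, weight="weight")
for fn, score in sorted(risky_walk.items(), key=lambda kv: -kv[1]):
    print(f"{fn:16s} {score:.3f}")
```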
The success or failure of a mobile application (`app') is largely determined by user ratings. Users frequently make their app choices based on the ratings of apps in comparison with similar, often competing apps. Users also expect apps to continually provide new features while maintaining quality, or the ratings drop. At the same time, apps must also be secure, but is there a historical trade-off between security and ratings? Or are app store ratings a more all-encompassing measure of product maturity? We used static analysis tools to collect security-related metrics in 38,466 Android apps from the Google Play store. We compared an app's rate of permission misuse, number of requested permissions, and Androrisk score against its user rating. We found that high-rated apps have statistically significantly higher security risk metrics than low-rated apps. However, the correlations are weak. This result supports the conventional wisdom that users are not factoring security risks into their ratings in a meaningful way. This could be due to several reasons, including users not placing much emphasis on security, or the typical user being unable to gauge the security risk level of the apps they use every day.
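A brief sketch of how the group comparison and correlation in such a study might be run, assuming per-app risk scores and ratings have already been collected; the numbers and test choices below are illustrative, not the study's actual data or protocol:

```python
from scipy.stats import mannwhitneyu, spearmanr

# Illustrative Androrisk-style risk scores for high- and low-rated apps;
# the study compared 38,466 real apps from Google Play.
high_rated_risk = [42, 55, 61, 48, 70, 66, 58]
low_rated_risk = [35, 40, 52, 38, 45, 50, 41]

# Non-parametric test of whether high-rated apps carry higher risk scores.
stat, p = mannwhitneyu(high_rated_risk, low_rated_risk, alternative="greater")
print(f"Mann-Whitney U = {stat}, p = {p:.3f}")

# A weak correlation between rating and risk would show up as a small |rho|.
ratings = [4.8, 4.5, 4.6, 4.7, 4.9, 4.4, 4.6, 3.1, 3.4, 3.0, 3.6, 3.2, 3.5, 3.3]
risk = high_rated_risk + low_rated_risk
rho, p_rho = spearmanr(ratings, risk)
print(f"Spearman rho = {rho:.2f}")
```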