Biblio

Filters: Author is Laurie Williams
2017-04-11
Christopher Theisen, Brendan Murphy, Kim Herzig, Laurie Williams.  Submitted.  Risk-Based Attack Surface Approximation: How Much Data is Enough? International Conference on Software Engineering (ICSE) Software Engineering in Practice (SEIP) 2017.

Proactive security reviews and test efforts are a necessary component of the software development lifecycle. Resource limitations often preclude reviewing the entire code base. Making informed decisions on what code to review can improve a team’s ability to find and remove vulnerabilities. Risk-based attack surface approximation (RASA) is a technique that uses crash dump stack traces to predict what code may contain exploitable vulnerabilities. The goal of this research is to help software development teams prioritize security efforts by the efficient development of a risk-based attack surface approximation. We explore the use of RASA using Mozilla Firefox and Microsoft Windows stack traces from crash dumps. We create RASA at the file level for Firefox, in which the 15.8% of the files that were part of the approximation contained 73.6% of the vulnerabilities seen for the product. We also explore the effect of random sampling of crashes on the approximation, as it may be impractical for organizations to store and process every crash received. We find that 10-fold random sampling of crashes at a rate of 10% resulted in 3% fewer vulnerabilities identified than using the entire set of stack traces for Mozilla Firefox. Sampling crashes in Windows 8.1 at a rate of 40% resulted in insignificant differences in vulnerability and file coverage as compared to a rate of 100%.
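
As a rough illustration of the entry above, here is a minimal Python sketch, with made-up crash and vulnerability data (not the Firefox or Windows datasets), of how an attack surface can be approximated from crash stack traces and how vulnerability coverage can be compared under random sampling of crashes:

```python
import random

def attack_surface(crashes):
    """Union of all files appearing on any crash stack trace."""
    return {frame for trace in crashes for frame in trace}

def vulnerability_coverage(surface, vulnerable_files):
    """Fraction of known-vulnerable files that fall on the approximated surface."""
    return len(surface & vulnerable_files) / len(vulnerable_files) if vulnerable_files else 0.0

def sampled_coverage(crashes, vulnerable_files, rate, folds=10, seed=0):
    """Average coverage over `folds` random samples of crashes drawn at `rate`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(folds):
        sample = [trace for trace in crashes if rng.random() < rate]
        total += vulnerability_coverage(attack_surface(sample), vulnerable_files)
    return total / folds

# Invented data: each crash is the list of files on its stack trace.
crashes = [["dom/base/nsDocument.cpp", "js/src/jsapi.cpp"],
           ["gfx/layers/Compositor.cpp"],
           ["js/src/jsapi.cpp", "netwerk/base/nsSocket.cpp"]]
vulnerable_files = {"js/src/jsapi.cpp", "gfx/layers/Compositor.cpp"}

full = vulnerability_coverage(attack_surface(crashes), vulnerable_files)
print(f"full coverage: {full:.2f}")
print(f"10% sampling coverage: {sampled_coverage(crashes, vulnerable_files, rate=0.10):.2f}")
```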

2023-01-05
Nusrat Zahan, Thomas Zimmermann, Patrice Godefroid, Brendan Murphy, Chandra Maddila, Laurie Williams.  2022.  What are Weak Links in the npm Supply Chain? ICSE-SEIP '22: Proceedings of the 44th International Conference on Software Engineering: Software Engineering in Practice.

Modern software development frequently uses third-party packages, raising the concern of supply chain security attacks. Many attackers target popular package managers, like npm, and their users with supply chain attacks. In 2021, there was 650% year-over-year growth in security attacks exploiting the open source software supply chain. Proactive approaches are needed to predict package vulnerability to high-risk supply chain attacks. The goal of this work is to help software developers and security specialists measure npm supply chain weak link signals to prevent future supply chain attacks by empirically studying npm package metadata.

In this paper, we analyzed the metadata of 1.63 million JavaScript npm packages. We propose six signals of security weaknesses in a software supply chain, such as the presence of install scripts, maintainer accounts associated with an expired email domain, and inactive packages with inactive maintainers. One of our case studies identified 11 malicious packages from the install scripts signal. We also found 2,818 maintainer email addresses associated with expired domains, allowing an attacker to hijack 8,494 packages by taking over the npm accounts. We obtained feedback on our weak link signals through a survey responded to by 470 npm package developers. The majority of the developers supported three out of our six proposed weak link signals. The developers also indicated that they would want to be notified about weak link signals before using third-party packages. Additionally, we discussed eight new signals suggested by package developers.
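
The weak link signals described above can be pictured with a small, hypothetical sketch; the metadata fields, thresholds, and expired-domain list below are assumptions for demonstration, not the study's actual detection code:

```python
from datetime import datetime, timedelta

INSTALL_SCRIPT_KEYS = {"preinstall", "install", "postinstall"}

def weak_link_signals(pkg, expired_domains, now=None, inactive_days=2 * 365):
    """Return the illustrative weak-link signals triggered by one package metadata record."""
    now = now or datetime.utcnow()
    signals = []
    if INSTALL_SCRIPT_KEYS & set(pkg.get("scripts", {})):
        signals.append("install scripts present")
    for maintainer in pkg.get("maintainers", []):
        domain = maintainer.get("email", "").rpartition("@")[2]
        if domain in expired_domains:
            signals.append(f"maintainer email on expired domain: {domain}")
    last_modified = pkg.get("last_modified")
    if last_modified and now - last_modified > timedelta(days=inactive_days):
        signals.append("inactive package")
    return signals

# Invented metadata record, not real npm registry data.
pkg = {
    "name": "example-package",
    "scripts": {"postinstall": "node setup.js"},
    "maintainers": [{"email": "dev@lapsed-domain.example"}],
    "last_modified": datetime(2019, 1, 1),
}
print(weak_link_signals(pkg, expired_domains={"lapsed-domain.example"}))
```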

2020-10-08
Akond Rahman, Effat Farhana, Laurie Williams.  2020.  The ‘as code’ activities: development anti-patterns for infrastructure as code. Empirical Software Engineering. 25(3467).

Context:

The ‘as code’ suffix in infrastructure as code (IaC) refers to applying software engineering activities, such as version control, to maintain IaC scripts. Without the application of these activities, defects that can have serious consequences may be introduced in IaC scripts. A systematic investigation of the development anti-patterns for IaC scripts can guide practitioners in identifying activities to avoid defects in IaC scripts. Development anti-patterns are recurring development activities that relate to defective IaC scripts.

Goal:

The goal of this paper is to help practitioners improve the quality of infrastructure as code (IaC) scripts by identifying development activities that relate to defective IaC scripts.

Methodology:

We identify development anti-patterns by adopting a mixed-methods approach, where we apply quantitative analysis with 2,138 open source IaC scripts and conduct a survey with 51 practitioners.

Findings:

From our quantitative analysis, we observe five development activities to be related to defective IaC scripts. We identify five development anti-patterns, namely ‘boss is not around’, ‘many cooks spoil’, ‘minors are spoiler’, ‘silos’, and ‘unfocused contribution’.

Conclusion:

Our identified development anti-patterns suggest the importance of ‘as code’ activities in IaC because these activities are related to the quality of IaC scripts.
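
As a loose illustration of how a development activity can be related quantitatively to defective scripts, here is a hypothetical Python sketch; the script records and contributor threshold are invented, and this is not the paper's statistical method:

```python
def defect_rates(scripts, has_activity):
    """Compare defect rates for scripts with and without a given development activity."""
    def rate(group):
        return sum(s["defective"] for s in group) / len(group) if group else 0.0
    with_activity = [s for s in scripts if has_activity(s)]
    without_activity = [s for s in scripts if not has_activity(s)]
    return rate(with_activity), rate(without_activity)

# Invented per-script attributes; the real study mined 2,138 open source IaC scripts.
scripts = [
    {"path": "roles/db.pp",    "minor_contributors": 3, "defective": True},
    {"path": "roles/web.pp",   "minor_contributors": 0, "defective": False},
    {"path": "roles/cache.pp", "minor_contributors": 2, "defective": True},
    {"path": "roles/lb.pp",    "minor_contributors": 0, "defective": False},
]

# A 'minors are spoiler'-style signal: scripts touched by more than one minor contributor.
with_rate, without_rate = defect_rates(scripts, lambda s: s["minor_contributors"] > 1)
print(f"defect rate with activity: {with_rate:.2f}, without: {without_rate:.2f}")
```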

Akond Rahman, Effat Farhana, Chris Parnin, Laurie Williams.  2020.  Gang of Eight: A Defect Taxonomy for Infrastructure as Code Scripts. International Conference on Software Engineering (ICSE).

Defects in infrastructure as code (IaC) scripts can have serious consequences, for example, creating large-scale system outages. A taxonomy of IaC defects can be useful for understanding the nature of defects, and identifying activities needed to fix and prevent defects in IaC scripts. The goal of this paper is to help practitioners improve the quality of infrastructure as code (IaC) scripts by developing a defect taxonomy for IaC scripts through qualitative analysis. We develop a taxonomy of IaC defects by applying qualitative analysis on 1,448 defect-related commits collected from open source software (OSS) repositories of the Openstack organization. We conduct a survey with 66 practitioners to assess if they agree with the identified defect categories included in our taxonomy. We quantify the frequency of identified defect categories by analyzing 80,425 commits collected from 291 OSS repositories spanning 2005 to 2019.

Our defect taxonomy for IaC consists of eight categories, including a category specific to IaC called idempotency (i.e., defects that lead to incorrect system provisioning when the same IaC script is executed multiple times). We observe the surveyed 66 practitioners to agree most with idempotency. The most frequent defect category is configuration data, i.e., providing erroneous configuration data in IaC scripts. Our taxonomy and the quantified frequency of the defect categories may help in advancing the science of IaC script quality.
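
Since idempotency may be unfamiliar, the following minimal sketch (in Python rather than an IaC language, purely for illustration) contrasts a non-idempotent provisioning step with an idempotent one:

```python
CONFIG_FILE = "app.conf"          # hypothetical configuration file managed by a script
SETTING = "max_connections=100\n"

def provision_non_idempotent():
    """Appends on every run: executing this twice duplicates the setting (an idempotency defect)."""
    with open(CONFIG_FILE, "a") as f:
        f.write(SETTING)

def provision_idempotent():
    """Writes the setting only if it is absent, so repeated runs converge to the same state."""
    try:
        with open(CONFIG_FILE) as f:
            existing = f.read()
    except FileNotFoundError:
        existing = ""
    if SETTING not in existing:
        with open(CONFIG_FILE, "a") as f:
            f.write(SETTING)

provision_idempotent()
provision_idempotent()  # second run is a no-op; system state is unchanged
```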

2019-10-10
Nuthan Munaiah, Akond Rahman, Justin Pelletier, Laurie Williams, Andrew Meneely.  2019.  Characterizing Attacker Behavior in a Cybersecurity Penetration Testing Competition. 13th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM).

Inculcating an attacker mindset (i.e., learning to think like an attacker) is an essential skill for engineers and administrators to improve the overall security of software. Describing the approach that adversaries use to discover and exploit vulnerabilities to infiltrate software systems can help inform such an attacker mindset. Aims: Our goal is to assist developers and administrators in inculcating an attacker mindset by proposing an approach to codify attacker behavior in a cybersecurity penetration testing competition. Method: We use an existing multimodal dataset of events captured during the 2018 National Collegiate Penetration Testing Competition (CPTC'18) to characterize the approach a team of attackers used to discover and exploit vulnerabilities. Results: We collected 44 events to characterize the approach that one of the participating teams took to discover and exploit seven vulnerabilities. We used the MITRE ATT&CK™ framework to codify the approach in terms of tactics and techniques. Conclusions: We show that characterizing an attacker's campaign as a chronological sequence of MITRE ATT&CK™ tactics and techniques is feasible. We hope that such a characterization can inform the attacker mindset of engineers and administrators in their pursuit of engineering secure software systems.
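
A minimal sketch of the kind of codification described above, with invented event records rather than the CPTC'18 data: each captured event is mapped to a MITRE ATT&CK tactic and technique and ordered chronologically.

```python
from datetime import datetime

# Invented events, not records from the CPTC'18 dataset:
# (timestamp, ATT&CK tactic, ATT&CK technique)
events = [
    (datetime(2018, 11, 3, 10, 5), "Discovery", "Network Service Scanning"),
    (datetime(2018, 11, 3, 9, 40), "Reconnaissance", "Active Scanning"),
    (datetime(2018, 11, 3, 11, 20), "Initial Access", "Exploit Public-Facing Application"),
]

def characterize_campaign(events):
    """Order captured events chronologically as a sequence of (tactic, technique) pairs."""
    return [(tactic, technique) for _, tactic, technique in sorted(events)]

for step, (tactic, technique) in enumerate(characterize_campaign(events), start=1):
    print(f"{step}. {tactic}: {technique}")
```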

2018-07-09
Christopher Theisen, Hyunwoo Sohn, Dawson Tripp, Laurie Williams.  2018.  BP: Profiling Vulnerabilities on the Attack Surface. IEEE SecDev.

Security practitioners use the attack surface of software systems to prioritize areas of systems to test and analyze. To date, approaches for predicting which code artifacts are vulnerable have utilized a binary classification of code as vulnerable or not vulnerable. To better understand the strengths and weaknesses of vulnerability prediction approaches, vulnerability datasets with classification and severity data are needed. The goal of this paper is to help researchers and practitioners make security effort prioritization decisions by evaluating which classifications and severities of vulnerabilities are on an attack surface approximated using crash dump stack traces. In this work, we use crash dump stack traces to approximate the attack surface of Mozilla Firefox. We then generate a dataset of 271 vulnerable files in Firefox, classified using the Common Weakness Enumeration (CWE) system. We use these files as an oracle for the evaluation of the attack surface generated using crash data. In the Firefox vulnerability dataset, 14 different classifications of vulnerabilities appeared at least once. In our study, 85.3% of vulnerable files were on the attack surface generated using crash data. We found no difference between the severity of vulnerabilities found on the attack surface generated using crash data and vulnerabilities not occurring on the attack surface. Additionally, we discuss lessons learned during the development of this vulnerability dataset.
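
The file-level evaluation described above can be illustrated with a small, hypothetical sketch; the file names, CWE classes, and on-surface flags are invented for illustration:

```python
from collections import Counter

# Invented records: (file, CWE classification, on the crash-approximated attack surface?)
vulnerable_files = [
    ("layout/Frame.cpp", "CWE-119", True),
    ("dom/Events.cpp",   "CWE-416", True),
    ("js/Parser.cpp",    "CWE-476", False),
    ("gfx/Render.cpp",   "CWE-119", True),
]

on_surface = sum(on for _, _, on in vulnerable_files) / len(vulnerable_files)
cwe_counts = Counter(cwe for _, cwe, _ in vulnerable_files)

print(f"vulnerable files on the approximated attack surface: {on_surface:.1%}")
print("CWE classifications observed:", dict(cwe_counts))
```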

2017-04-11
Akond Rahman, Priysha Pradhan, Asif Partho, Laurie Williams.  2017.  Predicting Android Application Security and Privacy Risk With Static Code Metrics. 4th IEEE/ACM International Conference on Mobile Software Engineering and Systems.

Android applications pose security and privacy risks for end-users. These risks are often quantified by performing dynamic analysis and permission analysis of the Android applications after release. Prediction of security and privacy risks associated with Android applications at early stages of application development, e.g., when the developer(s) are writing the code of the application, might help Android application developers in releasing applications to end-users that have less security and privacy risk. The goal of this paper is to aid Android application developers in assessing the security and privacy risk associated with Android applications by using static code metrics as predictors. In our paper, we consider the security and privacy risk of an Android application as how susceptible the application is to leaking private information of end-users and to releasing vulnerabilities. We investigate how effectively static code metrics that are extracted from the source code of Android applications can be used to predict security and privacy risk of Android applications. We collected 21 static code metrics of 1,407 Android applications, and use the collected static code metrics to predict security and privacy risk of the applications. As the oracle of security and privacy risk, we used Androrisk, a tool that quantifies the amount of security and privacy risk of an Android application using analysis of Android permissions and dynamic analysis. To accomplish our goal, we used statistical learners such as the radial-based support vector machine (r-SVM). For r-SVM, we observe a precision of 0.83. Findings from our paper suggest that, with proper selection of static code metrics, r-SVM can be used effectively to predict the security and privacy risk of Android applications.
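
A hedged sketch of the prediction setup, assuming scikit-learn is available: synthetic stand-ins replace the paper's 21 static code metrics and Androrisk-derived labels, so the numbers it prints are not the reported results.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

# Synthetic stand-ins for 21 static code metrics per application and a binary
# high/low risk label; this is not the paper's 1,407-application dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 21))
y = (X[:, 0] + X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = SVC(kernel="rbf").fit(X_train, y_train)  # radial-basis-function SVM (r-SVM)
print("precision:", round(precision_score(y_test, model.predict(X_test)), 2))
```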

2016-04-02
Ozgur Kafali, Munindar P. Singh, Laurie Williams.  2016.  Toward a Normative Approach for Forensicability: Extended Abstract. Proceedings of the International Symposium and Bootcamp on the Science of Security (HotSoS). :65-67.

Sociotechnical systems (STSs), where users interact with software components, support automated logging, i.e., recording what a user has performed in the system. However, most systems do not implement automated processes for inspecting the logs when a misuse happens. Deciding what needs to be logged is crucial, as excessive amounts of logs might be overwhelming for human analysts to inspect. The goal of this research is to aid software practitioners in implementing automated forensic logging by providing a systematic method of using attackers' malicious intentions to decide what needs to be logged. We propose Lokma: a normative framework to construct logging rules for forensic knowledge. We describe the general forensic process of Lokma, and discuss related directions.

2016-12-06
Hanan Hibshi, Travis Breaux, Maria Riaz, Laurie Williams.  2016.  A grounded analysis of experts’ decision-making during security assessments. Journal of Cybersecurity Advance Access.

Security analysis requires specialized knowledge to align threats and vulnerabilities in information technology. To identify mitigations, analysts need to understand how threats, vulnerabilities, and mitigations are composed together to yield security requirements. Despite abundant guidance in the form of checklists and controls about how to secure systems, evidence suggests that security experts do not apply these checklists. Instead, they rely on their prior knowledge and experience to identify security vulnerabilities. To better understand the different effects of checklists, design analysis, and expertise, we conducted a series of interviews to capture and encode the decision-making process of security experts and novices during three security analysis exercises. Participants were asked to analyze three kinds of artifacts: source code, data flow diagrams, and network diagrams, for vulnerabilities, and then to apply a requirements checklist to demonstrate their ability to mitigate vulnerabilities. We framed our study using Situation Awareness, which is a theory about human perception that was used to elicit interviewee responses. The responses were then analyzed using coding theory and grounded analysis. Our results include decision-making patterns that characterize how analysts perceive, comprehend, and project future threats against a system, and how these patterns relate to selecting security mitigations. Based on this analysis, we discovered new theory to measure how security experts and novices apply attack models and how structured and unstructured analysis enables increasing security requirements coverage. We highlight the role of expertise level and requirements composition in affecting security decision-making and we discuss how our method produced new hypotheses about security analysis and decision-making.

2016-06-17
Ozgur Kafali, Munindar P. Singh, Laurie Williams.  2016.  Nane: Identifying Misuse Cases Using Temporal Norm Enactments. 24th IEEE International Requirements Engineering Conference.

Recent data breaches in domains such as healthcare, where confidentiality of data is crucial, indicate that misuse cases often originate from user errors rather than vulnerabilities in the technical (software or hardware) architecture. Current requirements engineering (RE) approaches determine what access control mechanisms are needed to protect sensitive resources. However, current RE approaches inadequately characterize how a user is expected to interact with others in relation to the relevant resources. Consequently, a requirements analyst cannot readily identify the vulnerabilities based on user interactions. We adopt social norms as a natural, formal means of characterizing user interactions wherein potential misuses map to norm violations. Our research goal is to help analysts identify misuse cases by systematically generating potential temporal enactments that violate formally stated social norms. We propose Nane: a formal framework for identifying misuse cases from norm enactments. We represent misuse cases formally, and propose a semiautomated process for identifying misuse cases based on norm enactments. We show that our process is sound and complete with respect to the stated norms. We discuss the expressiveness of our representation, and demonstrate how Nane enables monitoring of misuse cases via temporal reasoning.
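
As a simplified illustration of norms and enactments (not Nane's actual formal language), the sketch below encodes a single prohibition and scans a temporal enactment for violating events that could seed misuse cases:

```python
from dataclasses import dataclass

@dataclass
class Prohibition:
    """A simplified social norm: `actor` must not perform `action` unless `condition` holds."""
    actor: str
    action: str
    condition: str

def violating_events(norm, enactment):
    """Scan a temporal enactment and flag enactments of the prohibited action without the condition."""
    condition_holds = False
    violations = []
    for event in enactment:  # events are processed in temporal order
        if event == norm.condition:
            condition_holds = True
        elif event == f"{norm.actor}:{norm.action}" and not condition_holds:
            violations.append(event)
    return violations

# Hypothetical healthcare scenario, not one of the paper's case studies.
norm = Prohibition(actor="nurse", action="access_record", condition="assigned_to_patient")
enactment = ["nurse:access_record", "assigned_to_patient", "nurse:access_record"]
print("misuse-case candidates:", violating_events(norm, enactment))
```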

2016-12-05
Hanan Hibshi, Travis Breaux, Maria Riaz, Laurie Williams.  2015.  Discovering Decision-Making Patterns for Security Novices and Experts.

Security analysis requires some degree of knowledge to align threats to vulnerabilities in information technology. Despite the abundance of security requirements checklists, the evidence suggests that security experts are not applying these checklists. Instead, they default to their background knowledge to identify security vulnerabilities. To better understand the different effects of security checklists, analysis, and expertise, we conducted a series of interviews to capture and encode the decision-making process of security experts and novices during three security requirements analysis exercises. Participants were asked to analyze three kinds of artifacts: source code, data flow diagrams, and network diagrams, for vulnerabilities, and then to apply a requirements checklist to demonstrate their ability to mitigate vulnerabilities. We framed our study using Situation Awareness theory to elicit responses that were analyzed using coding theory and grounded analysis. Our results include decision-making patterns that characterize how analysts perceive, comprehend, and project future threats, and how these patterns relate to selecting security mitigations. Based on this analysis, we discovered new theory to measure how security experts and novices apply attack models and how structured and unstructured analysis enables increasing security requirements coverage. We discuss suggestions of how our method could be adapted and applied to improve training and education instruments of security analysts.

2016-12-07
Maria Riaz, Travis Breaux, Laurie Williams.  2015.  How have we evaluated software pattern application? A systematic mapping study of research design practices. Information and Software Technology. 65(C):14-38.

Context: Software patterns encapsulate expert knowledge for constructing successful solutions to recurring problems. Although a large collection of software patterns is available in the literature, empirical evidence on how well various patterns help in problem solving is limited and inconclusive. The context of these empirical findings is also not well understood, limiting the applicability and generalizability of the findings. Objective: To characterize the research design of empirical studies exploring software pattern application involving human participants. Method: We conducted a systematic mapping study to identify and analyze 30 primary empirical studies on software pattern application, including 24 original studies and 6 replications. We characterize the research design in terms of the questions researchers have explored and the context of empirical research efforts. We also classify the studies in terms of measures used for evaluation, and threats to validity considered during study design and execution. Results: Use of software patterns in maintenance is the most commonly investigated theme, explored in 16 studies. Object-oriented design patterns are evaluated in 14 studies while 4 studies evaluate architectural patterns. We identified 10 different constructs with 31 associated measures used to evaluate software patterns. Measures for 'efficiency' and 'usability' are commonly used to evaluate the problem solving process, while measures for 'completeness', 'correctness' and 'quality' are commonly used to evaluate the final artifact. Overall, 'time to complete a task' is the most frequently used measure, employed in 15 studies to measure 'efficiency'. For qualitative measures, studies do not report approaches for minimizing biases 27% of the time. Nine studies do not discuss any threats to validity. Conclusion: Subtle differences in study design and execution can limit comparison of findings. Establishing baselines for participants' experience level, providing appropriate training, standardizing problem sets, and employing commonly used measures to evaluate performance can support replication and comparison of results across studies.

2015-01-13
John Slankas, Maria Riaz, Jason King, Laurie Williams.  2014.  Discovering Security Requirements from Natural Language. 36th International Conference on Software Engineering.

Project documentation often contains security-relevant statements that are indicative of the security requirements of a system. However, these statements may not be explicitly specified or straightforward to locate. At best, requirements analysts manually extract applicable security requirements from project documents. However, security requirements that are not explicitly stated may not be considered during implementation. The goal of this research is to aid requirements analysts in generating security requirements through identifying security-relevant statements in project documentation and providing context-specific templates to generate security requirements. First, we identify the most prevalent security objectives from the software security literature. To identify security-relevant statements in project documentation, we propose a tool-based process to classify statements as related to zero or more security objectives. We then develop a set of context-specific templates to help translate the security objectives of each statement into explicit sets of security functional requirements. We evaluate our process on six documents from the electronic healthcare software industry, identifying 46% of statements as implicitly or explicitly related to security. Our classification approach identified security objectives with a precision of 0.82 and recall of 0.79. From our total set of classified statements, we extracted 16 context-specific templates that identify 41 reusable security requirements.
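
A small, hypothetical sketch of the template idea: the templates and slot names below are invented, not the 16 context-specific templates extracted in the paper.

```python
# Invented context-specific templates keyed by security objective; the paper's
# 16 extracted templates are not reproduced here.
TEMPLATES = {
    "confidentiality": ("The system shall ensure that {resource} is accessible "
                        "only to {authorized_roles}."),
    "accountability": ("The system shall log every {action} performed on {resource} "
                       "together with the acting user's identity."),
}

def generate_requirement(objective, **slots):
    """Instantiate the template for a security objective with context-specific slot values."""
    return TEMPLATES[objective].format(**slots)

# Example statement classified as confidentiality- and accountability-relevant:
# "Nurses update patient medication records through the portal."
print(generate_requirement("confidentiality",
                           resource="patient medication records",
                           authorized_roles="assigned nurses"))
print(generate_requirement("accountability",
                           action="update",
                           resource="patient medication records"))
```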

2016-12-05
Hanan Hibshi, Travis Breaux, Maria Riaz, Laurie Williams.  2014.  A Framework to Measure Experts' Decision Making in Security Requirements Analysis. 2014 IEEE 1st International Workshop on Evolving Security and Privacy Requirements Engineering (ESPRE).

Research shows that commonly accepted security requirements are not generally applied in practice. Instead of relying on requirements checklists, security experts rely on their expertise and background knowledge to identify security vulnerabilities. To understand the gap between available checklists and practice, we conducted a series of interviews to encode the decision-making process of security experts and novices during security requirements analysis. Participants were asked to analyze two types of artifacts: source code and network diagrams, for vulnerabilities, and to apply a requirements checklist to mitigate some of those vulnerabilities. We framed our study using Situation Awareness (a cognitive theory from psychology) to elicit responses that we later analyzed using coding theory and grounded analysis. We report our preliminary results of analyzing two interviews that reveal possible decision-making patterns that could characterize how analysts perceive, comprehend, and project future threats, which leads them to decide upon requirements and their specifications, in addition to how experts use assumptions to overcome ambiguity in specifications. Our goal is to build a model that researchers can use to evaluate their security requirements methods against how experts transition through different situation awareness levels in their decision-making process.

2016-12-06
Maria Riaz, Laurie Williams.  2012.  Security Requirements Patterns: Understanding the Science Behind the Art of Pattern Writing. 2012 Second IEEE International Workshop on Requirements Patterns (RePa).

Security requirements engineering ideally combines expertise in software security with proficiency in requirements engineering to provide a foundation for developing secure systems. However, security requirements are often inadequately understood and improperly specified, often due to a lack of security expertise and a lack of emphasis on security during early stages of system development. Software systems often have common and recurrent security requirements in addition to system-specific security needs. Security requirements patterns can provide a means of capturing common security requirements while documenting the context in which a requirement manifests itself and the tradeoffs involved. The objective of this paper is to aid in the understanding of the process for pattern development and provide considerations for writing effective security requirements patterns. We analyzed existing literature on software patterns, problem solving, and cognition to outline the process for developing software patterns. We also reviewed strategies for specifying reusable security requirements and security requirements patterns. Our proposed considerations can aid pattern writers in capturing necessary contextual information when documenting security requirements patterns to facilitate application and integration of security requirements.