Biblio
Bus factor is a metric that identifies how resilient a project is to sudden engineer turnover. It states the minimal number of engineers that have to be hit by a bus for a project to be stalled. Even though the metric is often discussed in the community, few studies consider its general relevance. Moreover, the existing tools for bus factor estimation focus solely on data from version control systems (VCS), even though other channels of knowledge generation and distribution exist. With a survey of 269 engineers, we find that the bus factor is perceived as an important problem in collective development, and we determine the highest-impact channels of knowledge generation and distribution in software development teams. We also propose a multimodal bus factor estimation algorithm that uses data on code reviews and meetings together with the VCS data. We test the algorithm on 13 projects developed at JetBrains and compare its results to those of the state-of-the-art tool by Avelino et al. against the ground truth collected in a survey of the engineers working on these projects. Our algorithm predicts both the bus factor and the key developers slightly better than the tool by Avelino et al. Finally, we use the interviews and the surveys to derive a set of best practices for addressing the bus factor issue and proposals for a possible bus factor assessment tool.
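For orientation, coverage-based bus factor estimators typically remove the most knowledgeable contributors one by one until a majority of files are left with nobody who knows them. The sketch below illustrates that general idea over a hypothetical file-to-engineers knowledge map; it is not the multimodal algorithm from the paper, and the 50% threshold and toy data are assumptions made only for illustration.

    // Minimal sketch of coverage-based bus factor estimation (illustrative only,
    // not the paper's multimodal algorithm). Input: a hypothetical map from each
    // file to the engineers considered knowledgeable about it, e.g. derived from
    // VCS authorship, code reviews, or meeting attendance.
    type KnowledgeMap = Map<string, Set<string>>;

    function busFactor(knowledge: KnowledgeMap, abandonThreshold = 0.5): number {
      // Work on a copy so the caller's map is not mutated.
      const files = new Map<string, Set<string>>();
      for (const [file, engineers] of knowledge) files.set(file, new Set(engineers));
      const total = files.size;
      let removed = 0;
      for (;;) {
        const orphaned = [...files.values()].filter((e) => e.size === 0).length;
        if (total === 0 || orphaned / total > abandonThreshold) return removed;
        // Count how many files each remaining engineer still covers.
        const coverage = new Map<string, number>();
        for (const engineers of files.values())
          for (const e of engineers) coverage.set(e, (coverage.get(e) ?? 0) + 1);
        if (coverage.size === 0) return removed;
        // Remove the engineer whose departure orphans the most files.
        const [victim] = [...coverage.entries()].sort((a, b) => b[1] - a[1])[0];
        for (const engineers of files.values()) engineers.delete(victim);
        removed += 1;
      }
    }

    // Toy usage: losing alice and carol leaves 3 of 4 files without an expert.
    const toyProject: KnowledgeMap = new Map([
      ["core.ts", new Set(["alice", "bob"])],
      ["ui.ts", new Set(["alice"])],
      ["db.ts", new Set(["carol"])],
      ["api.ts", new Set(["alice", "carol"])],
    ]);
    console.log(busFactor(toyProject)); // -> 2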
As modern web browsers gain new and increasingly powerful features, impact assessments of the new functionality become crucial. A web privacy impact assessment of a planned web browser feature, the Ambient Light Sensor API, indicated risks arising from the exposure of overly precise information about the lighting conditions in the user's environment. The analysis led to the demonstration of direct risks of user data leaks, such as the list of visited websites or the exfiltration of sensitive content across distinct browser contexts. Our work contributed to the creation of web standards, leading to decisions by browser vendors (i.e., obsolescence, non-implementation, or modification of the operation of browser features). We highlight the need to consider broad risks when reviewing new features. We offer practically driven, high-level observations at the intersection of web security and privacy, risk engineering and modeling, and standardization. We structure our work as a case study of activities spanning over three years.
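For context on what the feature exposes, the sketch below shows how a page would read illuminance through the W3C Ambient Light Sensor API (which follows the Generic Sensor API shape). The interface is declared inline because it is not part of standard TypeScript DOM typings, and whether it is available at all, and at what precision and frequency, depends on the browser; those precise, frequent lux readings are the channel the assessment flagged.

    // Sketch of reading ambient light levels in a page, per the W3C Ambient Light
    // Sensor specification. Browser support varies (the feature may be disabled,
    // coarsened, or absent), so the interface is declared here rather than taken
    // from the standard DOM typings.
    declare class AmbientLightSensor extends EventTarget {
      constructor(options?: { frequency?: number });
      readonly illuminance: number | null; // current light level in lux
      start(): void;
      stop(): void;
    }

    const sensor = new AmbientLightSensor({ frequency: 10 }); // 10 readings per second
    sensor.addEventListener("reading", () => {
      // Precise, frequent lux values are what makes cross-context inference possible.
      console.log(`Illuminance: ${sensor.illuminance} lx`);
    });
    sensor.addEventListener("error", (event) => console.error("Sensor error", event));
    sensor.start();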
Cloud computing is widely believed to be the future of computing. It has grown from a promising idea into one of the fastest-growing research and development paradigms of the computing industry. However, security and privacy concerns represent a significant hindrance to the widespread adoption of cloud computing services. Likewise, attributes of the cloud such as multi-tenancy, a dynamic supply chain, limited visibility of security controls, and system complexity have exacerbated the challenge of assessing cloud risks. In this paper, we conduct a real-world case study to validate the use of a supply chain-inclusive risk assessment model in assessing the risks of a multi-cloud SaaS application. Using the components of the Cloud Supply Chain Cyber Risk Assessment (CSCCRA) model, we show how the model enables cloud service providers (CSPs) to identify critical suppliers, map their supply chain, identify weak security spots within the chain, and analyse the risk of the SaaS application, while also presenting the value of the risk in monetary terms. A key novelty of the CSCCRA model is that it caters for the complexities involved in the delivery of SaaS applications and adapts to the dynamic nature of the cloud, enabling CSPs to conduct risk assessments at a higher frequency, in response to changes in the supply chain.
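As a generic point of reference for "the value of the risk in monetary terms" (this is not the CSCCRA model itself), quantitative risk assessments typically combine the probability of a loss event with its financial impact, often via Monte Carlo simulation. The sketch below does that for a hypothetical supply chain; every supplier name, probability, and loss range in it is invented for illustration.

    // Generic illustration of expressing supply-chain risk in monetary terms with a
    // simple Monte Carlo expected-loss estimate. NOT the CSCCRA model; all suppliers,
    // probabilities, and loss ranges below are hypothetical.
    type Supplier = { name: string; annualBreachProb: number; lossRangeUsd: [number, number] };

    const suppliers: Supplier[] = [
      { name: "identity-provider", annualBreachProb: 0.05, lossRangeUsd: [50_000, 400_000] },
      { name: "payment-gateway", annualBreachProb: 0.02, lossRangeUsd: [100_000, 1_200_000] },
      { name: "email-delivery", annualBreachProb: 0.10, lossRangeUsd: [5_000, 60_000] },
    ];

    function simulateExpectedAnnualLoss(trials = 100_000): number {
      let total = 0;
      for (let i = 0; i < trials; i++) {
        for (const { annualBreachProb, lossRangeUsd: [lo, hi] } of suppliers) {
          // Each trial: does this supplier suffer a breach this year, and at what cost?
          if (Math.random() < annualBreachProb) total += lo + Math.random() * (hi - lo);
        }
      }
      return total / trials;
    }

    console.log(`Expected annual loss: $${Math.round(simulateExpectedAnnualLoss()).toLocaleString()}`);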
According to a 2011 survey in healthcare, the most commonly reported breaches of protected health information involved employees snooping into the medical records of friends and relatives. Logging mechanisms can provide a means for forensic analysis of user activity in software systems by proving that a user performed certain actions in the system. However, logging mechanisms often capture user interactions with sensitive data inconsistently, creating gaps in the traces of user activity. Explicit design principles and systematic testing of logging mechanisms within the software development lifecycle may help strengthen the overall security of software. The objective of this research is to assess the current state of logging mechanisms by performing an exploratory case study in which we systematically evaluate these mechanisms, supplementing the expected results of existing functional black-box test cases to include log output. Our case study covers the logging mechanisms of four open-source electronic health record (EHR) systems: OpenEMR, OSCAR, Tolven eCHR, and WorldVistA. We supplement the expected results of 30 United States government-sanctioned test cases to include log output that tracks access to sensitive data, and we then execute the test cases on each EHR system. Six of the 30 (20%) test cases failed on all four EHR systems because user interactions with sensitive data are not logged. We find that viewing protected data is often not logged by default, allowing unauthorized views of data to go undetected. Based on our results, we propose a set of principles that developers should consider when designing logging mechanisms to ensure the ability to capture adequate traces of user activity.
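To make the supplementation step concrete, the sketch below extends a black-box functional test (view a patient record, expect success) with an additional expected result: a corresponding entry must appear in the audit log. The endpoint, log location, and log line format are hypothetical stand-ins and do not correspond to OpenEMR, OSCAR, Tolven eCHR, or WorldVistA.

    // Sketch of a functional black-box test whose expected results are supplemented
    // with a log-output check. The EHR endpoint, audit log path, and log format are
    // hypothetical, not those of the four systems studied. Requires Node 18+.
    import { test } from "node:test";
    import assert from "node:assert/strict";
    import { readFile } from "node:fs/promises";

    const EHR_BASE_URL = "http://localhost:8080";    // hypothetical EHR under test
    const AUDIT_LOG_PATH = "/var/log/ehr/audit.log"; // hypothetical audit log location

    test("viewing a patient record is captured in the audit log", async () => {
      const patientId = "12345";
      // Original functional expectation: the record view succeeds.
      const response = await fetch(`${EHR_BASE_URL}/patients/${patientId}`);
      assert.equal(response.status, 200);
      // Supplemented expectation: the view is recorded in the audit trail.
      const log = await readFile(AUDIT_LOG_PATH, "utf8");
      assert.ok(
        log.includes(`VIEW patient=${patientId}`),
        "record view was not logged; unauthorized access could go undetected",
      );
    });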