Torquato, Matheus, Vieira, Marco.
2021.
VM Migration Scheduling as Moving Target Defense against Memory DoS Attacks: An Empirical Study. 2021 IEEE Symposium on Computers and Communications (ISCC). :1–6.
Memory Denial of Service (DoS) attacks are easy to launch, hard to detect, and significantly impact their targets. In a memory DoS attack, the attacker stresses the memory of their own Virtual Machine (VM) and, due to hardware isolation issues, the attack degrades the co-resident VMs. In theory, VM migration can be deployed as a Moving Target Defense (MTD) against memory DoS. However, the current literature lacks empirical evidence supporting this hypothesis. Moreover, there is a need to evaluate how the timing of VM migrations impacts the potential MTD protection. This practical experience report presents an experiment on VM migration-based MTD against memory DoS. We evaluate the impact of memory DoS attacks on two applications running in co-hosted VMs: machine learning and OLTP. The results highlight that memory DoS attacks reduce the applications' performance by more than 70%. Nevertheless, timely VM migrations can significantly mitigate the attack effects for both applications.
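To make the mechanism concrete, the following minimal Python sketch illustrates a periodic migration trigger of the kind evaluated in the paper, assuming the python-libvirt bindings; the VM name, host URIs, and interval are illustrative assumptions, not values from the study.

    # Hypothetical periodic live-migration loop (MTD): keep moving the VM so a
    # memory DoS attacker on one host loses co-residency with the target.
    import time
    import libvirt

    INTERVAL = 300  # seconds; the paper studies how this timing affects protection
    src = libvirt.open("qemu:///system")            # current host (assumed URI)
    dst = libvirt.open("qemu+ssh://host-b/system")  # destination host (assumed URI)

    while True:
        dom = src.lookupByName("protected-vm")  # hypothetical VM name
        dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)  # live migration
        src, dst = dst, src  # swap roles so the VM ping-pongs between the hosts
        time.sleep(INTERVAL)

Shorter intervals shrink the attacker's co-residency window but add migration overhead, which is exactly the timing trade-off the experiment measures.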
Pereira, José D'Abruzzo, Antunes, João Henggeler, Vieira, Marco.
2021.
On Building a Vulnerability Dataset with Static Information from the Source Code. 2021 10th Latin-American Symposium on Dependable Computing (LADC). :1–2.
Software vulnerabilities are weaknesses in software systems that can have serious consequences when exploited, such as unauthorized access, data breaches, and financial losses. Due to the nature of the software industry, companies are under increasing pressure to deploy software as quickly as possible, leading to a large number of undetected software vulnerabilities. Static code analysis, with the support of Static Analysis Tools (SATs), can generate security alerts that highlight potential vulnerabilities in an application's source code. Software Metrics (SMs) have also been used to predict software vulnerabilities, usually with the support of Machine Learning (ML) classification algorithms. Several datasets are available to support the development of improved software vulnerability detection techniques. However, they share common shortcomings: they are either outdated or rely on a single type of information. In this paper, we present a methodology for collecting software vulnerabilities from known vulnerability databases and enhancing them with static information (namely, SAT alerts and SMs). The proposed methodology aims to define a mechanism capable of more easily keeping the collected data up to date.
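As an illustration of the kind of per-file enrichment the methodology describes, the sketch below (Python with pandas) joins vulnerability records with SAT alerts and software metrics; the file names and column layout are assumptions, not the paper's actual data format.

    # Illustrative merge of vulnerability records, SAT alerts, and software
    # metrics into one per-file dataset; inputs are hypothetical CSV exports.
    import pandas as pd

    vulns = pd.read_csv("vulnerabilities.csv")     # columns: file, cve_id, ...
    alerts = pd.read_csv("sat_alerts.csv")         # columns: file, tool, rule, ...
    metrics = pd.read_csv("software_metrics.csv")  # columns: file, loc, complexity, ...

    alert_counts = alerts.groupby("file").size().rename("alert_count").reset_index()
    dataset = (metrics.merge(alert_counts, on="file", how="left")
                      .merge(vulns[["file", "cve_id"]], on="file", how="left"))
    dataset["alert_count"] = dataset["alert_count"].fillna(0).astype(int)
    dataset["vulnerable"] = dataset["cve_id"].notna().astype(int)  # label per file
    dataset.to_csv("vulnerability_dataset.csv", index=False)

Re-running such a script against refreshed database exports is one way the intended "easily updated" property could be realized.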
Pereira, José D'Abruzzo, Campos, João R., Vieira, Marco.
2021.
Machine Learning to Combine Static Analysis Alerts with Software Metrics to Detect Security Vulnerabilities: An Empirical Study. 2021 17th European Dependable Computing Conference (EDCC). :1–8.
Software developers can use diverse techniques and tools to reduce the number of vulnerabilities, but the effectiveness of existing solutions in real projects is questionable. For example, Static Analysis Tools (SATs) report potential vulnerabilities by analyzing code patterns, and Software Metrics (SMs) can be used to predict vulnerabilities based on high-level characteristics of the code. In theory, both approaches can be applied from the early stages of the development process, but it is well known that they fail to detect critical vulnerabilities and raise a large number of false alarms. This paper studies the hypothesis of using Machine Learning (ML) to combine alerts from SATs with SMs to predict vulnerabilities in a large software project (under development for many years). In practice, we use four ML algorithms, alerts from two SATs, and a large number of SMs to predict whether a source code file is vulnerable or not (binary classification) and to predict the vulnerability category (multiclass classification). Results show that one can achieve either high precision or high recall, but not both at the same time. To understand the reason, we analyze and compare snippets of source code, demonstrating that vulnerable and non-vulnerable files share similar characteristics, making it hard to distinguish vulnerable from non-vulnerable code based on SAT alerts and SMs.
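A minimal sketch of the file-level binary classification described above, using scikit-learn with SAT alerts and SMs as features; the dataset file and columns are assumed, and a Random Forest stands in for the four (unnamed here) algorithms used in the study.

    # Binary classification: is a source file vulnerable? Features combine
    # SAT alert counts with software metrics (hypothetical dataset layout).
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import precision_score, recall_score
    from sklearn.model_selection import train_test_split

    data = pd.read_csv("vulnerability_dataset.csv")
    X = data.drop(columns=["file", "cve_id", "vulnerable"])
    y = data["vulnerable"]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=42)
    clf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    # The paper reports a precision/recall trade-off; these two scores expose it.
    print("precision:", precision_score(y_te, pred))
    print("recall:", recall_score(y_te, pred))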
Medeiros, Nadia, Ivaki, Naghmeh, Costa, Pedro, Vieira, Marco.
2021.
An Empirical Study On Software Metrics and Machine Learning to Identify Untrustworthy Code. 2021 17th European Dependable Computing Conference (EDCC). :87–94.
The increasingly intensive use of software systems in diverse sectors, especially in business, government, healthcare, and critical infrastructures, makes it essential to deliver secure code. In this work, we present two sets of experiments aimed at helping developers improve software security from the early development stages. The first experiment focuses on using software metrics to build prediction models that distinguish vulnerable from non-vulnerable code. The second experiment studies the hypothesis of building a consensus-based decision-making approach on top of several machine-learning-based prediction models, trained on software metrics data, to categorize code units with respect to their security. Such categories suggest a priority (ranking) of code units based on the potential existence of security vulnerabilities. Results show that software metrics do not constitute sufficient evidence of security issues and cannot effectively be used to build a prediction model that distinguishes vulnerable from non-vulnerable code. However, with a consensus-based decision-making approach, it is possible to classify code units from a security perspective, allowing developers to decide (considering the criticality of the system under development and the available resources) which parts of the software should be the focal point for the detection and removal of security vulnerabilities.
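The sketch below illustrates one way such a consensus scheme could work (an illustration under assumed details, not the authors' exact procedure): several metric-trained classifiers vote, and the number of "vulnerable" votes maps to a priority category.

    # Consensus-style categorization: rank code units by how many of several
    # models trained on software metrics flag them as potentially vulnerable.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    def consensus_categories(X_train, y_train, X_new):
        models = [RandomForestClassifier(random_state=0),
                  LogisticRegression(max_iter=1000),
                  DecisionTreeClassifier(random_state=0)]
        votes = np.zeros(len(X_new), dtype=int)
        for model in models:
            votes += model.fit(X_train, y_train).predict(X_new)
        # 3 votes -> high priority, 2 -> medium, 1 -> low, 0 -> lowest
        return np.array(["lowest", "low", "medium", "high"])[votes]

Units in the "high" category would be the first candidates for manual review, matching the abstract's idea of prioritizing where to spend vulnerability detection and removal effort.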