Biblio

Filters: Author is Medeiros, Nadia
Carvalho, Gonçalo, Medeiros, Nadia, Madeira, Henrique, Cabral, Bruno. 2022. A Functional FMECA Approach for the Assessment of Critical Infrastructure Resilience. 2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS), pp. 672–681.
The damage or destruction of Critical Infrastructures (CIs) affects societies' sustainable functioning. It is therefore crucial to have effective methods for assessing the risk and resilience of CIs. Failure Mode and Effects Analysis (FMEA) and Failure Mode Effects and Criticality Analysis (FMECA) are two established approaches to risk assessment and criticality analysis, but they are complex to apply to intricate CIs and their associated Cyber-Physical Systems (CPS). We provide a top-down strategy that starts from a high abstraction level of the system and progresses down to the functional elements of the infrastructure. Our approach derives from FMECA but estimates risk and focuses on assessing resilience. We applied the proposed technique to a real-world CI, predicting how possible improvement scenarios may influence the overall system resilience. The results show the effectiveness of our approach in benchmarking CI resilience, providing a cost-effective way to evaluate plausible alternatives for improving preventive measures.
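
As a concrete illustration of the FMECA-style criticality computation this work builds on, the Python sketch below ranks failure modes by the classic Risk Priority Number (RPN = severity × occurrence × detection). The failure modes, rating scales, and values are illustrative assumptions for a hypothetical CI, not data from the paper, and the paper's functional, resilience-oriented adaptation goes beyond this plain RPN ranking.

from dataclasses import dataclass

@dataclass
class FailureMode:
    function: str    # infrastructure function affected
    severity: int    # 1 (negligible) to 10 (catastrophic)
    occurrence: int  # 1 (rare) to 10 (frequent)
    detection: int   # 1 (always detected) to 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Classic FMECA Risk Priority Number
        return self.severity * self.occurrence * self.detection

# Hypothetical failure modes for an illustrative CI; all ratings are assumed
modes = [
    FailureMode("power distribution", severity=9, occurrence=3, detection=4),
    FailureMode("SCADA telemetry", severity=7, occurrence=5, detection=6),
    FailureMode("backup generation", severity=8, occurrence=2, detection=3),
]

# Rank failure modes so resilience improvements target the most critical ones
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{fm.function:20s} RPN = {fm.rpn}")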
Medeiros, Nadia, Ivaki, Naghmeh, Costa, Pedro, Vieira, Marco. 2021. An Empirical Study On Software Metrics and Machine Learning to Identify Untrustworthy Code. 2021 17th European Dependable Computing Conference (EDCC), pp. 87–94.
The increasingly intensive use of software systems in diverse sectors, especially in business, government, healthcare, and critical infrastructures, makes it essential to deliver secure code. In this work, we present two sets of experiments aimed at helping developers improve software security from the early development stages. The first experiment focuses on using software metrics to build prediction models that distinguish vulnerable from non-vulnerable code. The second experiment studies the hypothesis that a consensus-based decision-making approach, built on top of several machine-learning prediction models trained on software-metrics data, can categorize code units with respect to their security. Such categories suggest a priority ranking of software code units based on the potential existence of security vulnerabilities. Results show that software metrics do not constitute sufficient evidence of security issues and cannot effectively be used to build a prediction model that distinguishes vulnerable from non-vulnerable code. With a consensus-based decision-making approach, however, it is possible to classify code units from a security perspective, allowing developers to decide, considering the criticality of the system under development and the available resources, which parts of the software should be the focal point for detecting and removing security vulnerabilities.
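
To make the consensus idea concrete, the Python sketch below trains several standard scikit-learn classifiers on software-metric features and groups code units by how many models flag them as potentially vulnerable. The features, labels, and choice of models are synthetic stand-ins, not the paper's dataset or its exact consensus scheme.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Synthetic software-metric features per code unit (stand-ins for metrics
# such as LOC, cyclomatic complexity, coupling) and vulnerability labels
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = (X[:, 1] + 0.3 * rng.random(200) > 0.8).astype(int)

# Several heterogeneous models whose agreement drives the decision
models = [
    LogisticRegression(max_iter=1000).fit(X, y),
    DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y),
    RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y),
]

# Consensus: count how many models flag each code unit as vulnerable
votes = np.sum([m.predict(X) for m in models], axis=0)

# Group units by agreement level: units flagged by all models form the
# top-priority category for security review, then majority-flagged, etc.
for level in range(len(models), -1, -1):
    units = np.flatnonzero(votes == level)
    print(f"{level}/{len(models)} models agree: {len(units)} units")

The grouping by vote count mirrors the idea of ranking code units by consensus rather than trusting any single metric-based model, which is the abstract's stated remedy for the weak predictive power of software metrics alone.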