Bibliography
Cyber security return on investment (RoI) or return on security investment (RoSI) is extremely challenging to measure. This is partly because it is difficult to measure the actual cost of a cyber security incident or the proceeds of cyber security investment. It is further complicated by the fact that there are no consensus metrics that every organisation agrees to, and even among cyber subject matter experts there is no agreed set of parameters or metrics against which cyber security benefits or rewards can be assessed. One approach to demonstrating return on security investment is to produce cyber security reports of certain key performance indicators (KPIs) and metrics, such as the number of cyber incidents detected, the number of cyber-attacks or terrorist attacks that were foiled, or ongoing monitoring capabilities. These are some of the demonstrable and empirical metrics that could be used to measure RoSI. In this paper, we investigate some of the cyber KPIs and metrics to be considered for cyber dashboards and reporting on RoSI.
Several operational and economic factors impact the patching decisions of critical infrastructures. The constraints imposed by such factors could prevent organizations from fully remedying all of the vulnerabilities that expose their (critical) assets to risk. Therefore, an involved decision maker (e.g. security officer) has to strategically decide on the allocation of possible remediation efforts towards minimizing the inherent security risk. This, however, involves the use of comparative judgments to prioritize risks and remediation actions. Throughout this work, the security risk is quantified using the security metric Time-To-Compromise (TTC). Our main contribution is to provide a generic TTC estimator to comparatively assess the security posture of computer networks taking into account interdependencies between the network components, different adversary skill levels, and characteristics of (known and zero-day) vulnerabilities. The presented estimator relies on a stochastic TTC model and Monte Carlo simulation (MCS) techniques to account for the input data variability and inherent prediction uncertainties.
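To illustrate the kind of estimator described above, the following minimal sketch draws per-component compromise times from an assumed distribution and averages the path total over many Monte Carlo runs; the per-component model, the attack path, and the skill scaling are illustrative assumptions, not the stochastic TTC model of the paper.

```python
# Minimal Monte Carlo sketch of a Time-To-Compromise (TTC) estimator.
# The per-component TTC distribution and the attack path below are
# illustrative assumptions, not the model from the cited paper.
import random

def component_ttc(num_vulns, attacker_skill):
    """Sample one TTC value (in days) for a single component.
    Assumption: more vulnerabilities and higher skill shorten the time."""
    base = random.expovariate(1.0)          # stochastic effort factor
    return base * 10.0 / (num_vulns * attacker_skill)

def path_ttc(path, attacker_skill):
    """TTC of an attack path = sum of TTCs of the components on it."""
    return sum(component_ttc(v, attacker_skill) for v in path)

def estimate_ttc(path, attacker_skill, runs=10_000):
    """Monte Carlo estimate: mean TTC over many sampled runs."""
    samples = [path_ttc(path, attacker_skill) for _ in range(runs)]
    return sum(samples) / len(samples)

if __name__ == "__main__":
    # Hypothetical path: three components with 2, 5 and 1 known vulnerabilities.
    print(estimate_ttc(path=[2, 5, 1], attacker_skill=0.8))
```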
Several assessment techniques and methodologies exist to analyze the security of an application dynamically. However, they either are focused on a particular product or are mainly concerned about the assessment process rather than the product's security confidence. Most crucially, they tend to assess the security of a target application as a standalone artifact without assessing its host infrastructure. Such attempts can undervalue the overall security posture since the infrastructure becomes crucial when it hosts a critical application. We present an ontology-based security model that aims to provide the necessary knowledge, including network settings, application configurations, testing techniques and tools, and security metrics to evaluate the security aptitude of a critical application in the context of its hosting infrastructure. The objective is to integrate the current good practices and standards in security testing and virtualization to furnish an on-demand and test-ready virtual target infrastructure to execute the critical application and to initiate a context-aware and quantifiable security assessment process in an automated manner. Furthermore, we present a security assessment architecture to reflect on how the ontology can be integrated into a standard process.
An improved algorithm of the Analytic Hierarchy Process (AHP) is proposed in this paper, which is realized by constructing an improved judgment matrix. Specifically, rough set theory is used in the algorithm to calculate the weights of the network metric data, the nine-point scale of the improved AHP algorithm is then structured, and finally an improved AHP judgment matrix is constructed. By performing an AHP operation on the improved judgment matrix, the weights of the improved network metric data can be obtained. If only rough set theory is applied to process the network index data, objective factors dominate the whole process. If the improved AHP algorithm is used to integrate the expert scores into the measurement process, then subjective and objective factors can be combined. Based on the aforementioned theory, a new network attack metrics system is proposed in this paper, which uses a metric structure based on "attack type - attack attribute - attack atomic operation - attack metrics", in which the measurement of attack attributes adopts AHP. The metrics of the system are comprehensive, since their judgment of common attacks is broadly applicable. The system was verified by an experiment with the common Smurf attack. The experimental results show the effectiveness and applicability of the proposed measurement system.
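For context, the weighting step that AHP itself contributes can be sketched as follows: given a pairwise judgment matrix on the nine-point scale, the metric weights are taken from the principal eigenvector and checked for consistency. The example matrix is made up, and the rough-set derivation of objective weights is not reproduced here.

```python
# Minimal sketch of the standard AHP weighting step: given a pairwise
# judgment matrix (nine-point scale), derive metric weights from the
# principal eigenvector and check consistency. The matrix below is a
# made-up example, not data from the cited paper.
import numpy as np

def ahp_weights(judgment):
    judgment = np.asarray(judgment, dtype=float)
    eigvals, eigvecs = np.linalg.eig(judgment)
    k = np.argmax(eigvals.real)                   # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                  # normalised weights
    n = judgment.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)  # random index (tabulated)
    return w, ci / ri                             # weights, consistency ratio

# Three metrics compared pairwise on the nine-point scale.
J = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights, cr = ahp_weights(J)
print(weights, cr)   # CR < 0.1 is conventionally considered acceptable
```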
The last decade has witnessed a growing interest in exploiting the advantages of Cloud Computing technology. However, the full migration of services and data to the Cloud is still approached cautiously due to the lack of security assurance. Cloud Service Providers (CSPs) are urged to exert the necessary efforts to boost their reputation and improve their trustworthiness. Nevertheless, the uniform implementation of advanced security solutions across all their data centers is not the ideal solution, since customers' security requirements are usually not monolithic. In this paper, we aim at integrating the Cloud security risk into the process of resource provisioning to increase the security of Cloud data centers. First, we propose a quantitative security risk evaluation approach based on the definition of distinct security metrics and configurations adapted to the Cloud Computing environment. Then, the evaluated security risk levels are incorporated into a resource provisioning model in an InterCloud setting. Finally, we adopt two different metaheuristic approaches from the family of evolutionary computation to solve the security risk-aware resource provisioning problem. Simulations show that our model reduces the security risk within the Cloud infrastructure and demonstrate the efficiency and scalability of the proposed solutions.
Software integration in modern vehicles is continuously expanding. This is due to the fact that vehicle manufacturers are always trying to enhance and add more innovative and competitive features that may rely on complex software functionalities. However, these features come at a cost. They amplify the security vulnerabilities in vehicles and lead to more security issues in today's automobiles. As a result, the need for identifying vulnerable components in a vehicle software system has become crucial. Security experts need to know which components of the vehicle software system can be exploited for attacks and should focus their testing and inspection efforts on them. Nevertheless, it is a challenging and costly task to identify these weak components in a vehicle's system. In this paper, we propose security vulnerability metrics for connected vehicles that aim to assist software testers during the development life-cycle in identifying the frail links that put the vehicle at high security risk. Vulnerable function assessment can give software testers a good idea of which components in a connected vehicle need to be prioritized in order to mitigate the risk and hence secure the vehicle. The proposed metrics were applied to OpenPilot, software that provides an Autopilot feature and has been integrated with 48 different vehicles. The application shows how the defined metrics can be effectively used to quantitatively measure the vulnerabilities of a vehicle software system.
Software security is a major concern of developers who intend to deliver reliable software. Although there is research that focuses on vulnerability prediction and discovery, there is still a need for security-specific metrics to measure software security and vulnerability-proneness quantitatively. The existing methods are either based on software metrics (defined on the physical characteristics of code, e.g. complexity or lines of code), which are not security-specific, or on generic patterns known as nano-patterns (Java method-level traceable patterns that characterize a Java method or function). Other methods predict vulnerabilities using text mining approaches or graph algorithms, which perform poorly in cross-project validation and fail to provide a generalized prediction model for any system. In this paper, we envision an automated framework that will assist developers in assessing the security level of their code and guide them towards developing secure code. To accomplish this goal, we aim to refine and redefine the existing nano-patterns and software metrics to make them more security-centric so that they can be used for measuring the software security level of source code (either a file or a function) with higher accuracy. We present our visionary approach through a series of three consecutive studies where we (1) will study the challenges of the current software metrics and nano-patterns in vulnerability prediction, (2) will redefine and characterize the nano-patterns and software metrics so that they can capture security-specific properties of code and measure the security level quantitatively, and finally (3) will implement an automated framework that extracts the values of all the patterns and metrics for a given code segment and flags the estimated security level as feedback based on our research results. We conducted preliminary experiments and present results which indicate that our vision can be practically implemented and will have valuable implications for the software security community.
Software Defined Networking (SDN) provides new functionalities to efficiently manage network traffic, which can be used to enhance the networking capabilities needed to support today's growing communication demands. But at the same time, it introduces new attack vectors that can be exploited by attackers. Hence, evaluating and selecting countermeasures to optimize the security of the SDN is of paramount importance. However, one should also take into account the trade-off between the security and the performance of the SDN. In this paper, we present a security optimization approach for the SDN that takes this trade-off into account. We evaluate the security of the SDN using graphical security models and metrics, and use queueing models to measure the performance of the SDN. Further, we use a Genetic Algorithm, namely NSGA-II, to optimally select countermeasures under performance and security constraints. Our experimental analysis results show that the proposed approach can efficiently compute the countermeasures that will optimize the security of the SDN while satisfying the performance constraints.
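A minimal sketch of the two objectives such a multi-objective optimizer trades off is given below; the attack-path success model and the M/M/1 delay model are simplifying assumptions rather than the exact graphical security and queueing models used in the paper.

```python
# Minimal sketch of the two objectives that a multi-objective optimizer
# such as NSGA-II would trade off when selecting SDN countermeasures.
# The attack-success model and the M/M/1 delay model are simplifying
# assumptions, not the exact models used in the cited paper.

def security_risk(edge_success_probs):
    """Risk of one attack path: probability that every step succeeds."""
    p = 1.0
    for prob in edge_success_probs:
        p *= prob
    return p

def mm1_delay(arrival_rate, service_rate):
    """Mean sojourn time of an M/M/1 queue modelling an SDN switch/controller."""
    if arrival_rate >= service_rate:
        return float("inf")
    return 1.0 / (service_rate - arrival_rate)

def dominates(a, b):
    """Pareto dominance for (risk, delay) pairs to be minimised."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# A countermeasure that hardens one step lowers risk but adds processing delay,
# so neither configuration dominates the other: a genuine trade-off.
baseline = (security_risk([0.9, 0.8, 0.7]), mm1_delay(80, 100))
hardened = (security_risk([0.9, 0.2, 0.7]), mm1_delay(80, 95))
print(baseline, hardened, dominates(hardened, baseline))
```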
Realizing the importance of the concept of “smart city” and its impact on the quality of life, many infrastructures, such as power plants, began their digital transformation process by leveraging modern computing and advanced communication technologies. Unfortunately, by increasing the number of connections, power plants become more and more vulnerable and also an attractive target for cyber-physical attacks. The analysis of interdependencies among system components reveals interdependent connections, and facilitates the identification of those among them that are in need of special protection. In this paper, we review the recent literature which utilizes graph-based models and network-based models to study these interdependencies. A comprehensive overview, based on the main features of the systems including communication direction, control parameters, research target, scalability, security and safety, is presented. We also assess the computational complexity associated with the approaches presented in the reviewed papers, and we use this metric to assess the scalability of the approaches.
The increasing demand for and use of mobile ad hoc networks (MANETs) in recent years have attracted the attention of researchers towards active research work largely related to security attacks in MANETs. The gray hole attack is one of the most common security attacks observed in MANETs. This paper focuses on gray hole attack analysis in an Ad hoc On-demand Distance Vector (AODV) routing protocol based MANET with reliability as a metric. Simulation is performed using ns-2.35 simulation software under a varying number of network nodes and a varying number of gray hole nodes. The simulation results indicate that increasing the number of gray hole nodes in the MANET decreases the reliability of the MANET.
The Common Vulnerability Scoring System (CVSS) is an industry standard that can assess the vulnerability of nodes in traditional computer systems. The metrics computed by CVSS help determine critical nodes and attack paths. However, traditional IT security models do not fit IoT embedded networks due to the distinct nature and unique characteristics of IoT systems. This paper analyses the application of CVSS to IoT embedded systems and proposes an improved vulnerability scoring system based on the CVSS v3 framework. The proposed framework, named CVSSIoT, is applied to a realistic IT supply chain system and the results are compared with the actual vulnerabilities from the National Vulnerability Database. The comparison validates the proposed model. CVSSIoT is not only effective, simple and capable of vulnerability evaluation for traditional IT systems, but also exploits the unique characteristics of IoT devices.
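As a point of reference, the CVSS v3 base score that CVSSIoT builds on can be sketched as follows for the unchanged-scope case, using the metric values from the public CVSS v3.1 specification; the IoT-specific extensions proposed in the paper are not reproduced.

```python
# Minimal sketch of a CVSS v3-style base score (unchanged scope only),
# with metric values taken from the public CVSS v3.1 specification.
# CVSSIoT's extensions for IoT-specific characteristics are not included.
import math

AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # attack vector
AC = {"L": 0.77, "H": 0.44}                          # attack complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # privileges required (scope unchanged)
UI = {"N": 0.85, "R": 0.62}                          # user interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # confidentiality/integrity/availability

def roundup(x):
    """Round up to one decimal place, as CVSS prescribes."""
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

# Example: network-exploitable, low complexity, no privileges, no interaction,
# high confidentiality/integrity impact, no availability impact.
print(base_score("N", "L", "N", "N", "H", "H", "N"))   # 9.1
```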
This paper begins with an introduction to security metrics, describing the need for security metrics, followed by a discussion of the nature of security metrics, including the challenges found with some security metrics used in the past. The paper then discusses what makes a good security metric and proposes a rigorous step-by-step method that can be applied to design good security metrics, and to test existing security metrics to see if they are good metrics. Application examples are included to illustrate the method.
How to evaluate software reliability based on historical data of embedded software projects is one of the problems we have to face in practical engineering. Therefore, we establish a software reliability evaluation model based on code metrics. This evaluation technique requires the aggregation of software code metrics into project metrics. Statistical value methods, metric distribution methods, and econometric methods are commonly used aggregation methods. Our concerns are how these methods differ in the software reliability evaluation process, and which of them can improve the accuracy of the reliability assessment model we have established. In view of these concerns, we conduct an empirical study on the application of software code metric aggregation methods based on actual projects. We identify the distribution of code metrics for the projects under study. Using these distribution laws, we optimize the aggregation method of code metrics and improve the accuracy of the software reliability evaluation model.
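Two of the aggregation families mentioned above can be sketched as follows: a statistical value (the mean) and an econometric inequality index (the Gini coefficient) computed over file-level metric values; the metric values are made-up examples, not data from the study.

```python
# Minimal sketch of two common ways to aggregate file-level code metrics
# into a single project-level value: a statistical value (mean) and an
# econometric inequality index (Gini coefficient). The values below are
# made-up examples, not data from the cited study.

def mean(values):
    return sum(values) / len(values)

def gini(values):
    """Gini coefficient: ~0 = metric spread evenly, ~1 = concentrated in few files."""
    xs = sorted(values)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

# Cyclomatic complexity of each file in a hypothetical project.
complexities = [3, 4, 4, 5, 6, 7, 9, 25, 31, 60]
print(mean(complexities), gini(complexities))
```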
The proliferation of IoT devices in smart homes, hospitals, and enterprise networks is widespread and continuing to increase in a superlinear manner. The question is: how can one assess the security of an IoT network in a holistic manner? In this paper, we explore two dimensions of security assessment: using vulnerability information and attack vectors of IoT devices and their underlying components (compositional security scores), and using SIEM logs captured from the communications and operations of such devices in a network (dynamic activity metrics). These measures are used to evaluate the security of IoT devices and the overall IoT network, demonstrating the effectiveness of attack circuits as practical tools for computing security metrics (exploitability, impact, and risk to confidentiality, integrity, and availability) of the network. We approach threat modeling using attack graphs. To that end, we propose the notion of attack circuits, which are generated from input/output pairs constructed from CVEs using NLP, and an attack graph composed of these circuits. Our system provides insight into possible attack paths an adversary may utilize based on their exploitability, impact, or overall risk. We have performed experiments on IoT networks to demonstrate the efficacy of the proposed techniques.
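A minimal sketch of how per-device exploitability and impact scores might be composed along an attack path into path-level metrics is shown below; the composition rules (product of normalised exploitability, maximum impact, risk as their product) are illustrative assumptions, not the exact attack-circuit definitions of the paper.

```python
# Minimal sketch of composing per-device exploitability and impact scores
# along an attack path into path-level metrics. The composition rules here
# (product of normalised exploitability, maximum impact, risk = E x I) are
# illustrative assumptions, not the definitions from the cited paper.

def path_metrics(devices):
    """devices: list of (exploitability in [0, 10], impact in [0, 10])."""
    exploitability = 1.0
    impact = 0.0
    for e, i in devices:
        exploitability *= e / 10.0     # every step on the path must be exploitable
        impact = max(impact, i)        # worst consequence reachable on the path
    risk = exploitability * impact
    return exploitability, impact, risk

# Hypothetical path: router -> camera -> storage server.
print(path_metrics([(8.6, 4.0), (9.8, 5.5), (5.3, 9.1)]))
```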
It is notably challenging to design an efficient and secure signature scheme based on error-correcting codes. An approach to building such signature schemes is to derive them from an identification protocol through the Fiat-Shamir transform. All such protocols based on codes must be run for several rounds, since each run of the protocol allows a cheating probability of either 2/3 or 1/2. The resulting signature size is proportional to the number of rounds, making the 1/2 cheating probability version more attractive. We present a signature scheme based on double circulant codes in the rank metric, derived from an identification protocol with cheating probability 2/3. We reduce this probability to almost 1/2 to obtain the smallest signature among code-based signature schemes based on the Fiat-Shamir paradigm, around 22 KBytes for a 128-bit security level. Furthermore, among all code-based signature schemes, our proposal has the lowest value of signature plus public key size, and the smallest secret and public key sizes. We provide a security proof in the Random Oracle Model, implementation performance, and a comparison with the parameters of similar signature schemes.
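The effect of the cheating probability on signature size follows from the usual soundness amplification argument: with cheating probability q per round, enough rounds r are needed for q^r to drop below the 2^{-λ} security target, and the signature grows linearly in r. A worked instance for λ = 128:

```latex
% Rounds needed so that the per-round cheating probability q, amplified over
% r rounds, falls below the 2^{-\lambda} target (here \lambda = 128):
r \ge \frac{\lambda}{\log_2(1/q)}, \qquad
q = \tfrac{2}{3}: \; r \ge \frac{128}{\log_2(3/2)} \approx 219, \qquad
q = \tfrac{1}{2}: \; r \ge 128.
```

Since the signature size is proportional to r, moving the cheating probability from 2/3 towards 1/2 shortens the signature by roughly a factor of 219/128, about 1.7.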
Nowadays, communication networks are highly relevant in every field. Because of this, it is necessary to keep them working properly and with an adequate security level. In many fields, and in anomaly detection in communication networks in particular, early detection methods are especially valuable. Therefore, adequate metrics must be defined to allow the correct evaluation of such methods with respect to the time delay in detection. This thesis addresses the definition of time-aware metrics for the evaluation of early anomaly detection techniques.
Software metrics help developers discover and fix mistakes. However, despite promising empirical evidence, vulnerability discovery metrics are seldom relied upon in practice. In prior research, the effectiveness of these metrics has typically been expressed using the precision and recall of a prediction model that uses the metrics as explanatory variables. These prediction models, being black boxes, may not be perceived as useful by developers. However, by systematically interpreting the models and metrics, we can provide developers with nuanced insights about factors that have led to security mistakes in the past. In this paper, we present a preliminary approach to using vulnerability discovery metrics to provide insightful feedback to developers as they engineer software. We collected ten metrics (churn, collaboration centrality, complexity, contribution centrality, nesting, known offender, source lines of code, # inputs, # outputs, and # paths) from six open-source projects. We assessed the generalizability of the metrics across two contextual dimensions (application domain and programming language) and between projects within a domain, computed thresholds for the metrics using an unsupervised approach from the literature, and assessed the ability of these unsupervised thresholds to classify risk from historical vulnerabilities in the Chromium project. The observations from this study feed into our ongoing research to automatically aggregate insights from the various analyses to generate natural language feedback on security. We hope that our approach to generating automated feedback will accelerate the adoption of research in vulnerability discovery metrics.
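A minimal sketch of deriving unsupervised thresholds from a metric's empirical distribution is given below; the percentile cut-offs (70/80/90%) are a common choice in the metric-threshold literature and are an assumption here, not necessarily the scheme used in the paper.

```python
# Minimal sketch of deriving unsupervised risk thresholds for a vulnerability
# discovery metric from its empirical distribution. The percentile cut-offs
# (70/80/90%) are an assumed, commonly used choice, not necessarily the
# scheme adopted in the cited paper.

def thresholds(values, cuts=(0.70, 0.80, 0.90)):
    xs = sorted(values)
    return tuple(xs[min(int(c * len(xs)), len(xs) - 1)] for c in cuts)

def risk_label(value, low, moderate, high):
    if value >= high:
        return "high"
    if value >= moderate:
        return "moderate"
    if value >= low:
        return "elevated"
    return "low"

# Churn (changed lines) per file in a hypothetical project history.
churn = [2, 3, 5, 5, 8, 13, 21, 34, 55, 240]
low, moderate, high = thresholds(churn)
print(low, moderate, high, risk_label(60, low, moderate, high))
```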
This article presents the valuable experience and practical results of the authors' exploratory research on the scientific problem of the cyber resilience of critical information infrastructure under previously unknown, heterogeneous mass cyber attacks, based on similarity invariants. Importantly, the results obtained significantly complement the well-known practices and recommendations of ISO 22301 (https://www.iso.org), MITRE PR 15-1334 (www.mitre.org) and NIST SP 800-160 (www.nist.gov) in terms of developing quantitative metrics and measures of cyber resilience. This makes it possible to discover and formally present the ultimate law governing the effectiveness of ensuring the cyber resilience of modern Industry 4.0 systems in the face of growing security threats.
In this paper, we propose a new method for optimizing the deployment of security solutions within an IoT network. Our approach uses dominating sets and centrality metrics to propose an IoT security framework in which security functions are optimally deployed among devices. An example of such a solution, based on end-to-end-like encryption, is presented. The results reveal overall increased security within the network with minimal impact on the traffic.
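A minimal sketch of the placement idea, assuming a made-up topology and using a standard dominating-set routine together with betweenness centrality, is shown below; the paper's exact deployment strategy is not reproduced.

```python
# Minimal sketch of choosing where to deploy security functions in an IoT
# network: compute a dominating set (every device is either selected or
# adjacent to a selected one), then rank the selected devices by centrality.
# The topology is a made-up example, not one from the cited paper.
import networkx as nx

# Hypothetical IoT topology (gateway, sensors, camera, recorder).
G = nx.Graph([("gw", "s1"), ("gw", "s2"), ("gw", "cam"),
              ("s1", "s3"), ("s2", "s4"), ("cam", "nvr")])

dom = nx.dominating_set(G)                      # candidate hosts for security functions
centrality = nx.betweenness_centrality(G)       # importance of each device on traffic paths
placement = sorted(dom, key=centrality.get, reverse=True)
print(placement)   # deploy e.g. encryption endpoints on the top-ranked devices first
```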
This paper describes MADHAT (Multidimensional Anomaly Detection fusing HPC, Analytics, and Tensors), an integrated workflow that demonstrates the applicability of HPC resources to the problem of maintaining cyber situational awareness. MADHAT combines two high-performance packages: ENSIGN for large-scale sparse tensor decompositions and HAGGLE for graph analytics. Tensor decompositions isolate coherent patterns of network behavior in ways that common clustering methods based on distance metrics cannot. Parallelized graph analysis then uses directed queries on a representation that combines the elements of identified patterns with other available information (such as additional log fields, domain knowledge, network topology, whitelists and blacklists, prior feedback, and published alerts) to confirm or reject a threat hypothesis, collect context, and raise alerts. MADHAT was developed using the collaborative HPC Architecture for Cyber Situational Awareness (HACSAW) research environment and evaluated on structured network sensor logs collected from Defense Research and Engineering Network (DREN) sites using HPC resources at the U.S. Army Engineer Research and Development Center DoD Supercomputing Resource Center (ERDC DSRC). To date, MADHAT has analyzed logs with over 650 million entries.
This article discusses existing approaches to building regional-scale networks. The authors offer a mathematical model of the network growth process, on the basis of which simulation is performed. The availability characteristic is used as the criterion for measuring optimality. The report describes the mechanism for measuring network availability and proposes changes to the procedure for designing regional networks that can improve their qualitative characteristics. The efficiency of the changes is confirmed by simulation.
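A minimal sketch of an availability criterion for such a network is given below: components in series multiply their availabilities, while redundant paths combine in parallel; the availability figures are illustrative assumptions, not values from the report.

```python
# Minimal sketch of an end-to-end availability criterion for a regional
# network path: components in series multiply, redundant paths in parallel
# combine as 1 minus the product of unavailabilities. The figures below are
# illustrative assumptions, not values from the cited report.

def series(availabilities):
    a = 1.0
    for x in availabilities:
        a *= x
    return a

def parallel(path_availabilities):
    unavail = 1.0
    for a in path_availabilities:
        unavail *= (1.0 - a)
    return 1.0 - unavail

primary = series([0.999, 0.998, 0.999])     # access node, trunk link, core node
backup  = series([0.995, 0.999])            # alternative route added by a redesign
print(parallel([primary, backup]))          # end-to-end availability with redundancy
```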