Biblio

Found 148 results

Filters: Keyword is security metrics
2019-10-23
Alshawish, Ali, Spielvogel, Korbinian, de Meer, Hermann.  2019.  A Model-Based Time-to-Compromise Estimator to Assess the Security Posture of Vulnerable Networks. 2019 International Conference on Networked Systems (NetSys). :1-3.

Several operational and economic factors impact the patching decisions of critical infrastructures. The constraints imposed by such factors could prevent organizations from fully remedying all of the vulnerabilities that expose their (critical) assets to risk. Therefore, an involved decision maker (e.g. security officer) has to strategically decide on the allocation of possible remediation efforts towards minimizing the inherent security risk. This, however, involves the use of comparative judgments to prioritize risks and remediation actions. Throughout this work, the security risk is quantified using the security metric Time-To-Compromise (TTC). Our main contribution is to provide a generic TTC estimator to comparatively assess the security posture of computer networks taking into account interdependencies between the network components, different adversary skill levels, and characteristics of (known and zero-day) vulnerabilities. The presented estimator relies on a stochastic TTC model and Monte Carlo simulation (MCS) techniques to account for the input data variability and inherent prediction uncertainties.
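
As a rough illustration of the idea rather than the authors' model, the sketch below estimates TTC by Monte Carlo sampling under a simplified assumption that each known vulnerability contributes an exponentially distributed exploitation time scaled by attacker skill; the base rate, skill scale, and zero-day term are all assumptions.

```python
import random
import statistics

def sample_ttc(n_vulns, attacker_skill, zero_day_rate=0.05, rng=random):
    """Draw one time-to-compromise sample (in days) for a single host.

    Simplified illustrative model: each known vulnerability contributes an
    exponentially distributed exploitation time whose rate grows with
    attacker skill; zero-days add a small extra compromise rate.
    """
    base_rate = 0.02  # assumed successful-exploit rate per day, per vulnerability
    rate = n_vulns * base_rate * attacker_skill + zero_day_rate
    return rng.expovariate(rate)

def estimate_ttc(n_vulns, attacker_skill, n_samples=10_000, seed=42):
    rng = random.Random(seed)
    samples = [sample_ttc(n_vulns, attacker_skill, rng=rng) for _ in range(n_samples)]
    # Report the mean and the 5th percentile (a pessimistic "fast attacker" estimate).
    return statistics.mean(samples), statistics.quantiles(samples, n=100)[4]

if __name__ == "__main__":
    for skill in (0.3, 0.6, 0.9):  # novice, intermediate, expert (assumed scale)
        mean_ttc, p5 = estimate_ttc(n_vulns=8, attacker_skill=skill)
        print(f"skill={skill:.1f}: mean TTC ~ {mean_ttc:.1f} days, 5th percentile ~ {p5:.1f} days")
```
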

2019-09-26
Torkura, K. A., Sukmana, M. I. H., Meinig, M., Cheng, F., Meinel, C., Graupner, H..  2018.  A Threat Modeling Approach for Cloud Storage Brokerage and File Sharing Systems. NOMS 2018 - 2018 IEEE/IFIP Network Operations and Management Symposium. :1-5.

Cloud storage brokerage systems abstract cloud storage complexities by mediating technical and business relationships between cloud stakeholders, while providing value-added services. This, however, raises security challenges pertaining to the integration of disparate components with sometimes conflicting security policies and architectural complexities. Assessing the security risks of these challenges is therefore important for Cloud Storage Brokers (CSBs). In this paper, we present a threat modeling schema to analyze and identify threats and risks in cloud brokerage systems. Our threat modeling schema works by generating attack trees, attack graphs, and data flow diagrams that represent the interconnections between identified security risks. Our proof-of-concept implementation employs the Common Configuration Scoring System (CCSS) to support the threat modeling schema, since current schemes lack sufficient security metrics, which are imperative for comprehensive risk assessments. We demonstrate the efficiency of our proposal by devising CCSS base scores for two attacks commonly launched against cloud storage systems: the Cloud Storage Enumeration Attack and the Cloud Storage Exploitation Attack. These metrics are then combined with CVSS-based metrics to assign probabilities in an attack tree. Thus, we show the possibility of combining CVSS and CCSS for comprehensive threat modeling, and also show that our schema can be used to improve cloud security.
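
A minimal sketch of the kind of propagation the abstract describes, assuming a simple mapping from CVSS/CCSS base scores to leaf probabilities (score/10) and standard AND/OR combination; the tree structure and scores are invented for illustration, not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    gate: str = "LEAF"            # "LEAF", "AND", or "OR"
    score: float = 0.0            # CVSS or CCSS base score for leaves
    children: List["Node"] = field(default_factory=list)

    def probability(self) -> float:
        if self.gate == "LEAF":
            return self.score / 10.0          # assumed mapping: base score -> probability
        probs = [c.probability() for c in self.children]
        if self.gate == "AND":                # all children must succeed
            p = 1.0
            for x in probs:
                p *= x
            return p
        p_none = 1.0                          # OR gate: at least one child succeeds
        for x in probs:
            p_none *= (1.0 - x)
        return 1.0 - p_none

# Illustrative tree: enumerate storage (CCSS-scored misconfiguration) AND exploit it.
tree = Node("compromise cloud storage", "AND", children=[
    Node("storage enumeration (CCSS)", score=6.5),
    Node("storage exploitation", "OR", children=[
        Node("weak access policy (CCSS)", score=7.1),
        Node("software vulnerability (CVSS)", score=8.2),
    ]),
])
print(f"P(root) ~ {tree.probability():.3f}")
```
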

2019-07-01
Savola, Reijo M., Savolainen, Pekka.  2018.  Risk-driven Security Metrics Development for Software-defined Networking. Proceedings of the 12th European Conference on Software Architecture: Companion Proceedings. :56:1–56:5.
Introduction of SDN (Software-Defined Networking) into network management turns formerly rigid networks into programmatically reconfigurable, dynamic, and high-performing entities, which are managed remotely. At the same time, the introduction of new interfaces evidently widens the attack surface, and new kinds of attack vectors are introduced, threatening the QoS, potentially critically. Thus, there is a need for a security architecture that draws on SDN management and monitoring capabilities and eventually covers the threats posed by the SDN evolution. For efficient security-architecture implementation, we analyze the security risks of SDN and, based on that, propose heuristic security objectives. Further, we decompose the objectives for effective security control implementation and security metrics definition to support informed security decision-making and continuous security improvement.
Šišejković, Dominik, Leupers, Rainer, Ascheid, Gerd, Metzner, Simon.  2018.  A Unifying Logic Encryption Security Metric. Proceedings of the 18th International Conference on Embedded Computer Systems: Architectures, Modeling, and Simulation. :179–186.
The globalization of the IC supply chain has brought forth the era of fabless companies. Due to security issues during the design and fabrication processes, various security concerns have arisen, ranging from IP piracy and reverse engineering to hardware Trojans. Logic encryption has emerged as a mitigation against these threats. However, no generic metric for quantifying the security of logic encryption algorithms has been reported so far, making it impossible to formally compare different approaches. In this paper, we propose a unifying metric, capturing the key security aspects of logic encryption algorithms. The metric is evaluated on state-of-the-art algorithms and benchmarks.
Clemente, C. J., Jaafar, F., Malik, Y..  2018.  Is Predicting Software Security Bugs Using Deep Learning Better Than the Traditional Machine Learning Algorithms? 2018 IEEE International Conference on Software Quality, Reliability and Security (QRS). :95–102.

Software insecurity is being identified as one of the leading causes of security breaches. In this paper, we revisited one of the strategies for addressing software insecurity, namely the use of software quality metrics. We utilized a multilayer deep feedforward network to examine whether there is a combination of metrics that can predict the appearance of security-related bugs. We also applied traditional machine learning algorithms such as decision tree, random forest, naïve Bayes, and support vector machines and compared the results with those of the Deep Learning technique. The results demonstrate that it is possible to develop an effective predictive model to forecast software insecurity based on software metrics and Deep Learning. All the models generated showed an accuracy of more than sixty percent, with Deep Learning leading the list. This finding shows that Deep Learning methods combined with software metrics can be tapped to create a better forecasting model, thereby aiding software developers in predicting security bugs.
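
A hedged sketch of the comparison described above, using scikit-learn on synthetic data that stands in for a table of per-file software metrics; the feature counts, hidden-layer sizes, and hyperparameters are assumptions, not the authors' setup.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for per-file software quality metrics and a security-bug label.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "deep feedforward net": MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=1000, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name:>22}: accuracy = {acc:.3f}")
```
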

Kebande, V. R., Kigwana, I., Venter, H. S., Karie, N. M., Wario, R. D..  2018.  CVSS Metric-Based Analysis, Classification and Assessment of Computer Network Threats and Vulnerabilities. 2018 International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD). :1–10.

This paper provides a Common Vulnerability Scoring System (CVSS) metric-based technique for classifying and analysing the prevailing Computer Network Security Vulnerabilities and Threats (CNSVT). The problem addressed in this paper is that, at the time of writing, no effective approaches existed for analysing and classifying CNSVT for assessment purposes based on CVSS metrics. The authors address this by generating a CVSS metric-based dynamic Vulnerability Analysis Classification Countermeasure (VACC) criterion that is able to rank vulnerabilities. The CVSS metric-based VACC allows the computation of a Vulnerability Similarity Measure (VSM) using the Hamming and Euclidean distance metric functions. It also enables the random measuring of the VSM for a selected number of vulnerabilities based on the [Ma-Ma], [Ma-Mi], [Mi-Ci], [Ma-Ci] ranking score. The technique aims to allow security experts to conduct proper vulnerability detection and assessment across computer networks by checking the probability that given threats will or will not occur. The authors also propose high-level countermeasures for the vulnerabilities listed. The authors have evaluated the CVSS metric-based VACC and the results are promising. Based on this technique, these propositions can help in the development of stronger computer and network security tools.
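
The following sketch illustrates how Hamming and Euclidean distances can be computed over CVSS base vectors to obtain a vulnerability similarity measure; the numeric encoding uses the published CVSS v3 metric values, but the choice of fields and the interpretation as a VSM are illustrative, not the authors' VACC.

```python
from math import sqrt

# Numeric encoding of a few CVSS v3 base-vector fields (standard CVSS v3 values).
def encode(vector: dict) -> list:
    av = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}[vector["AV"]]
    ac = {"L": 0.77, "H": 0.44}[vector["AC"]]
    pr = {"N": 0.85, "L": 0.62, "H": 0.27}[vector["PR"]]
    ui = {"N": 0.85, "R": 0.62}[vector["UI"]]
    return [av, ac, pr, ui]

def hamming(v1: dict, v2: dict) -> int:
    """Number of CVSS fields on which two vulnerabilities differ."""
    return sum(1 for k in v1 if v1[k] != v2[k])

def euclidean(v1: dict, v2: dict) -> float:
    a, b = encode(v1), encode(v2)
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

vuln_a = {"AV": "N", "AC": "L", "PR": "N", "UI": "N"}   # remotely exploitable, no privileges needed
vuln_b = {"AV": "L", "AC": "H", "PR": "L", "UI": "N"}   # local, harder to exploit
print("Hamming distance  :", hamming(vuln_a, vuln_b))
print("Euclidean distance:", round(euclidean(vuln_a, vuln_b), 3))
```
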

Medeiros, N., Ivaki, N., Costa, P., Vieira, M..  2018.  An Approach for Trustworthiness Benchmarking Using Software Metrics. 2018 IEEE 23rd Pacific Rim International Symposium on Dependable Computing (PRDC). :84–93.

Trustworthiness is a paramount concern for users and customers in the selection of a software solution, especially in the context of complex and dynamic environments, such as Cloud and IoT. However, assessing and benchmarking trustworthiness (the worthiness of software for being trusted) is a challenging task, mainly due to the variety of application scenarios (e.g., business-critical, safety-critical), the large number of determinative quality attributes (e.g., security, performance), and, above all, the subjective notion of trust and trustworthiness. In this paper, we present trustworthiness as a measurable notion in relative terms based on security attributes and propose an approach for the assessment and benchmarking of software. The main goal is to build a trustworthiness assessment model based on software metrics (e.g., Cyclomatic Complexity, CountLine, CBO) that can be used as indicators of software security. To demonstrate the proposed approach, we assessed and ranked several files and functions of the Mozilla Firefox project based on their trustworthiness score and conducted a survey among several software security experts in order to validate the obtained ranking. Results show that our approach is able to provide a sound ranking of the benchmarked software.
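
A minimal sketch of metric-based trustworthiness ranking, assuming min-max normalization and hand-picked weights; the file names, metric values, and weights are invented, and the actual paper calibrates its model rather than using fixed weights.

```python
import pandas as pd

# Illustrative per-file static metrics (metric names follow the abstract; numbers are made up).
files = pd.DataFrame(
    {
        "file": ["nsCookieService.cpp", "Parser.cpp", "Helpers.cpp"],
        "CyclomaticComplexity": [310, 95, 12],
        "CountLine": [5400, 1800, 240],
        "CBO": [42, 17, 3],   # coupling between objects
    }
).set_index("file")

# Min-max normalize each metric to [0, 1]; higher values are treated as less trustworthy.
norm = (files - files.min()) / (files.max() - files.min())

weights = {"CyclomaticComplexity": 0.5, "CountLine": 0.2, "CBO": 0.3}   # assumed weights
risk = sum(norm[m] * w for m, w in weights.items())
files["trustworthiness_score"] = 1.0 - risk

print(files.sort_values("trustworthiness_score", ascending=False))
```
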

Meryem, Amar, Samira, Douzi, Bouabid, El Ouahidi.  2018.  Enhancing Cloud Security Using Advanced MapReduce K-means on Log Files. Proceedings of the 2018 International Conference on Software Engineering and Information Management. :63–67.

Many customers rank cloud security as a major challenge that threatens their work and reduces their trust in cloud service providers. Hence, a significant improvement is required to establish better adaptations of security measures that suit recent technologies, and especially distributed architectures. Given the meaningful data recorded in cloud-generated log files, analyzing them mines insightful information about attackers' activities: it identifies malicious user behaviors and predicts new suspected events. Moreover, centralizing log files prevents insiders from causing damage to the system. In this paper, we propose moving sensitive log files to a single server provider and combining MapReduce programming and k-means in the same algorithm to cluster observed events into classes having similar features. To label unknown user behaviors and predict newly suspected activities, this approach considers cosine distances and deviation metrics.
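
A hedged, single-machine sketch of the map/reduce-style k-means with cosine distance described above; the feature vectors stand in for log-derived features, and the fixed iteration count and cluster number are assumptions rather than the authors' implementation.

```python
import numpy as np

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def map_phase(events, centroids):
    """Map step: emit (closest-centroid index, event vector) pairs."""
    for x in events:
        idx = min(range(len(centroids)), key=lambda k: cosine_distance(x, centroids[k]))
        yield idx, x

def reduce_phase(pairs, k, dim):
    """Reduce step: average the vectors assigned to each centroid."""
    sums, counts = np.zeros((k, dim)), np.zeros(k)
    for idx, x in pairs:
        sums[idx] += x
        counts[idx] += 1
    return sums / np.maximum(counts[:, None], 1)   # empty clusters keep a zero centroid

rng = np.random.default_rng(0)
# Toy feature vectors extracted from log lines (e.g. request rate, failed logins, bytes out).
events = rng.random((500, 3))
k, dim = 3, events.shape[1]
centroids = events[rng.choice(len(events), k, replace=False)]

for _ in range(10):   # fixed number of k-means iterations (assumed)
    centroids = reduce_phase(map_phase(events, centroids), k, dim)

labels = [min(range(k), key=lambda j: cosine_distance(x, centroids[j])) for x in events]
print("cluster sizes:", np.bincount(labels, minlength=k))
```
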

Pope, Aaron Scott, Morning, Robert, Tauritz, Daniel R., Kent, Alexander D..  2018.  Automated Design of Network Security Metrics. Proceedings of the Genetic and Evolutionary Computation Conference Companion. :1680–1687.

Many abstract security measurements are based on characteristics of a graph that represents the network. These are typically simple and quick to compute but are often of little practical use in making real-world predictions. Practical network security is often measured using simulation or real-world exercises. These approaches better represent realistic outcomes but can be costly and time-consuming. This work aims to combine the strengths of these two approaches, developing efficient heuristics that accurately predict attack success. Hyper-heuristic machine learning techniques, trained on network attack simulation training data, are used to produce novel graph-based security metrics. These low-cost metrics serve as an approximation for simulation when measuring network security in real time. The approach is tested and verified using a simulation based on activity from an actual large enterprise network. The results demonstrate the potential of using hyper-heuristic techniques to rapidly evolve and react to emerging cybersecurity threats.
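
As a much-simplified stand-in for the hyper-heuristic approach, the sketch below computes a few cheap networkx graph features and checks how well each tracks a toy simulated attack-success outcome; the simulation, feature set, and correlation-based ranking are illustrative only, since the paper evolves metrics rather than ranking a fixed feature list.

```python
import networkx as nx
import numpy as np

def candidate_metrics(g: nx.Graph) -> dict:
    """A few cheap graph features that could serve as ingredients of a learned security metric."""
    return {
        "avg_degree": sum(d for _, d in g.degree()) / g.number_of_nodes(),
        "density": nx.density(g),
        "avg_clustering": nx.average_clustering(g),
    }

def simulated_attack_success(g: nx.Graph, rng) -> float:
    """Stand-in for an expensive attack simulation; here, better-connected
    networks are simply assumed to be easier to move through laterally."""
    score = 0.6 * nx.density(g) + 0.4 * nx.average_clustering(g) + rng.normal(0, 0.05)
    return float(np.clip(score, 0.0, 1.0))

rng = np.random.default_rng(1)
graphs = [nx.erdos_renyi_graph(30, float(p), seed=i) for i, p in enumerate(np.linspace(0.05, 0.5, 20))]
features = [candidate_metrics(g) for g in graphs]
outcomes = np.array([simulated_attack_success(g, rng) for g in graphs])

# Rank candidate features by how well they track the simulated attack outcomes.
for name in features[0]:
    vals = np.array([f[name] for f in features])
    corr = np.corrcoef(vals, outcomes)[0, 1]
    print(f"{name:>15}: |corr with simulated success| = {abs(corr):.2f}")
```
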

Ahmed, Yussuf, Naqvi, Syed, Josephs, Mark.  2018.  Aggregation of Security Metrics for Decision Making: A Reference Architecture. Proceedings of the 12th European Conference on Software Architecture: Companion Proceedings. :53:1–53:7.
Existing security technologies play a significant role in protecting enterprise systems, but they are no longer enough on their own given the number of successful cyberattacks against businesses and the sophistication of the tactics attackers use to bypass security defences. Security measurement differs from security monitoring in that it provides a means to quantify the security of systems, whereas security monitoring helps identify abnormal events and does not measure the actual state of an infrastructure's security. The goal of enterprise security metrics is to enable understanding of the overall security using measurements to guide decision making. In this paper we present a reference architecture for aggregating the measurement values from the different components of the system in order to enable stakeholders to see the overall security state of their enterprise systems and to assist with decision making. This will provide a new dimension to security management by shifting from security monitoring to security measurement.
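
A minimal sketch of the aggregation idea, assuming each component reports a measurement that is normalized and weighted by business criticality; the metric names, ranges, and weights are invented and not part of the reference architecture itself.

```python
# Each component reports a measurement, the range used to normalize it, its direction,
# and a weight reflecting assumed business criticality.
measurements = {
    "patch_latency_days":     {"value": 12, "lo": 0, "hi": 60,  "higher_is_better": False, "weight": 0.3},
    "endpoints_with_edr_pct": {"value": 87, "lo": 0, "hi": 100, "higher_is_better": True,  "weight": 0.3},
    "failed_login_ratio_pct": {"value": 4,  "lo": 0, "hi": 20,  "higher_is_better": False, "weight": 0.2},
    "backup_success_pct":     {"value": 99, "lo": 0, "hi": 100, "higher_is_better": True,  "weight": 0.2},
}

def normalize(m):
    x = (m["value"] - m["lo"]) / (m["hi"] - m["lo"])
    x = min(max(x, 0.0), 1.0)
    return x if m["higher_is_better"] else 1.0 - x

overall = sum(normalize(m) * m["weight"] for m in measurements.values())
for name, m in measurements.items():
    print(f"{name:<24} normalized = {normalize(m):.2f}")
print(f"\noverall security score = {overall:.2f}  (0 = worst, 1 = best)")
```
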
Zieger, A., Freiling, F., Kossakowski, K..  2018.  The β-Time-to-Compromise Metric for Practical Cyber Security Risk Estimation. 2018 11th International Conference on IT Security Incident Management IT Forensics (IMF). :115–133.

To manage cybersecurity risks in practice, a simple yet effective method to assess such risks for individual systems is needed. With time-to-compromise (TTC), McQueen et al. (2005) introduced such a metric: it measures the expected time that a system remains uncompromised given a specific threat landscape. Unlike other approaches that require complex system modeling, TTC combines simplicity with expressiveness and has therefore evolved into one of the most successful cybersecurity metrics in practice. We revisit TTC and identify several mathematical and methodological shortcomings, which we address by embedding all aspects of the metric in the continuous domain and by incorporating information about vulnerability characteristics and other cyber threat intelligence into the model. We propose β-TTC, a formal extension of TTC which includes information from CVSS vectors as well as a continuous attacker skill based on a β-distribution. We show that our new metric (1) remains simple enough for practical use and (2) gives more realistic predictions than the original TTC, using data from a modern, productively used vulnerability database of a national CERT.
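
A hedged sketch of the β-distribution ingredient: attacker skill is drawn from a Beta(α, β) distribution and combined with a CVSS v3 exploitability sub-score to drive a per-day compromise probability. The mapping, the normalization by 3.9, and the parameters are assumptions, not the paper's β-TTC formula.

```python
import random
import statistics

def sample_beta_ttc(exploitability, alpha=2.0, beta=5.0, rng=random, max_days=3650):
    """One TTC sample: attacker skill ~ Beta(alpha, beta); the daily compromise
    probability scales with skill and the (normalized) CVSS exploitability sub-score."""
    skill = rng.betavariate(alpha, beta)
    p_day = min(0.5, skill * exploitability / 3.9)   # 3.9 ~ max CVSS v3 exploitability (assumed normalization)
    day = 1
    while rng.random() > p_day and day < max_days:
        day += 1
    return day

def beta_ttc(exploitability, n=20_000, seed=7):
    rng = random.Random(seed)
    samples = [sample_beta_ttc(exploitability, rng=rng) for _ in range(n)]
    return statistics.mean(samples), statistics.median(samples)

for e in (1.0, 2.5, 3.9):
    mean_d, med_d = beta_ttc(e)
    print(f"exploitability={e:.1f}: mean TTC ~ {mean_d:.0f} days, median ~ {med_d:.0f} days")
```
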

Carrasco, A., Ropero, J., Clavijo, P. Ruiz de, Benjumea, J., Luque, A..  2018.  A Proposal for a New Way of Classifying Network Security Metrics: Study of the Information Collected through a Honeypot. 2018 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :633–634.

Nowadays, honeypots are a key tool to attract attackers and study their activity. They help us in the tasks of evaluating attackers' behaviour, discovering new types of attacks, and collecting information and statistics associated with them. However, the gathered data cannot be directly interpreted; it must be analyzed to obtain useful information. In this paper, we present an SSH honeypot-based system designed to simulate a vulnerable server. We then propose an approach for the classification of metrics from the data collected by the honeypot over 19 months.

Arabsorkhi, A., Ghaffari, F..  2018.  Security Metrics: Principles and Security Assessment Methods. 2018 9th International Symposium on Telecommunications (IST). :305–310.

Nowadays, information technology is an important part of human life and of organizations, and organizations face IT-related problems. To solve these problems, they have to improve their security functions. Thus there is a need for security assessments within organizations to ensure adequate security conditions. The use of security standards and general metrics can be useful for measuring the security of an organization; however, it should be noted that general metrics, which are applied to businesses at large, may not be effective in a particular situation. It is therefore important to select metric standards tailored to different businesses to improve both cost and organizational security. The selection of suitable security metrics depends on having an efficient way to identify them. Given the numerous complexities of these metrics and the extent to which they are defined, this paper, based on a comparative study and the benchmarking method, proposes a taxonomy of security metrics intended to help a business choose metrics tailored to its needs and conditions.

2019-05-01
Chen, Huashan, Cho, Jin-Hee, Xu, Shouhuai.  2018.  Quantifying the Security Effectiveness of Firewalls and DMZs. Proceedings of the 5th Annual Symposium and Bootcamp on Hot Topics in the Science of Security. :9:1–9:11.

Firewalls and Demilitarized Zones (DMZs) are two mechanisms that have been widely employed to secure enterprise networks. Despite this, their security effectiveness has not been systematically quantified. In this paper, we make a first step towards filling this void by presenting a representational framework for investigating their security effectiveness in protecting enterprise networks. Through simulation experiments, we draw useful insights into the security effectiveness of firewalls and DMZs. To the best of our knowledge, these insights were not reported in the literature until now.
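
A toy Monte Carlo comparison in the spirit of the abstract: an attacker must compromise every hop on the path to an internal server, with and without a DMZ in between; the per-hop probabilities are assumptions and the model is far simpler than the paper's framework.

```python
import random

def attack_succeeds(hop_probs, rng):
    """The attacker must compromise every hop in sequence to reach the target."""
    return all(rng.random() < p for p in hop_probs)

def success_rate(hop_probs, trials=100_000, seed=3):
    rng = random.Random(seed)
    return sum(attack_succeeds(hop_probs, rng) for _ in range(trials)) / trials

# Assumed per-hop compromise probabilities.
flat_network = [0.4, 0.6]             # perimeter firewall, then the internal server directly
with_dmz     = [0.4, 0.5, 0.3, 0.6]   # perimeter firewall, DMZ host, internal firewall, server

print(f"P(success) without DMZ: {success_rate(flat_network):.3f}")
print(f"P(success) with DMZ   : {success_rate(with_dmz):.3f}")
```
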

2019-03-28
Llopis, S., Hingant, J., Pérez, I., Esteve, M., Carvajal, F., Mees, W., Debatty, T..  2018.  A Comparative Analysis of Visualisation Techniques to Achieve Cyber Situational Awareness in the Military. 2018 International Conference on Military Communications and Information Systems (ICMCIS). :1-7.
Starting from a common fictional scenario, simulated data sources and a set of measurements feed two different visualization techniques with the aim of making a comparative analysis. Both visualization techniques described in this paper use the operational picture concept, deemed the most appropriate tool for military commanders and their staff to achieve cyber situational awareness and to understand the cyber defence implications in operations. The Cyber Common Operational Picture (CyCOP) is a tool developed by Universitat Politècnica de València in collaboration with the Spanish Ministry of Defence whose objective is to generate the Cyber Hybrid Situational Awareness (CyHSA). The Royal Military Academy in Belgium developed a 3D Operational Picture able to display mission-critical elements intuitively using a priori defined domain knowledge. A comparative analysis will assist researchers in progressing solutions and implementation aspects.
2019-03-22
Alavizadeh, H., Jang-Jaccard, J., Kim, D. S..  2018.  Evaluation for Combination of Shuffle and Diversity on Moving Target Defense Strategy for Cloud Computing. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :573-578.

Moving Target Defence (MTD) has recently been proposed as an emerging proactive approach that provides asynchronous defensive strategies. Unlike traditional security solutions focused on removing vulnerabilities, MTD makes a system dynamic and unpredictable by continuously changing the attack surface to confuse attackers. MTD can be utilized in cloud computing to address the cloud's security-related problems. Much of the literature proposes MTD methods in various contexts, but approaches to evaluate the effectiveness of the proposed MTD methods are still lacking. In this paper, we propose a combination of Shuffle and Diversity MTD techniques and investigate the effects of deploying these techniques from two perspectives, corresponding to two groups of security metrics: (i) system risk, which reflects the cloud provider's perspective, and (ii) attack cost and return on attack, which reflect the attacker's point of view. Moreover, we utilize a scalable Graphical Security Model (GSM) to keep the complexity of the security analysis manageable. Finally, we show that combining MTD techniques can improve both of the aforementioned groups of security metrics, whereas the individual techniques cannot.
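
A small attack-graph sketch of the two metric groups, where system risk is taken as the success probability of the most likely attack path and attack cost as the cheapest path cost; shuffle is modeled by rewiring edges and diversity by lowering a node's exploit probability. All probabilities, costs, and topology changes are invented for illustration.

```python
import math
import networkx as nx

def build_graph(edges):
    g = nx.DiGraph()
    for u, v, p, cost in edges:
        # -log(p) turns "most probable path" into a shortest-path problem.
        g.add_edge(u, v, p=p, cost=cost, neglogp=-math.log(p))
    return g

def system_risk(g, src, dst):
    """Probability of the single most likely attack path from src to dst."""
    length = nx.shortest_path_length(g, src, dst, weight="neglogp")
    return math.exp(-length)

def attack_cost(g, src, dst):
    """Cost of the cheapest attack path from src to dst."""
    return nx.shortest_path_length(g, src, dst, weight="cost")

baseline = build_graph([
    ("internet", "web", 0.8, 10), ("web", "db", 0.7, 20),
    ("internet", "vpn", 0.5, 15), ("vpn", "db", 0.6, 25),
])
# Shuffle: the web server no longer reaches the DB directly; Diversity: the DB image is
# changed, lowering its exploit probability (illustrative changes only).
mtd = build_graph([
    ("internet", "web", 0.8, 10), ("web", "app", 0.7, 20), ("app", "db", 0.4, 30),
    ("internet", "vpn", 0.5, 15), ("vpn", "db", 0.4, 25),
])
for name, g in (("baseline", baseline), ("shuffle + diversity", mtd)):
    print(f"{name:>20}: risk = {system_risk(g, 'internet', 'db'):.2f}, "
          f"attack cost = {attack_cost(g, 'internet', 'db')}")
```
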

2019-02-08
Enoch, Simon Yusuf, Hong, Jin B., Ge, Mengmeng, Alzaid, Hani, Kim, Dong Seong.  2018.  Automated Security Investment Analysis of Dynamic Networks. Proceedings of the Australasian Computer Science Week Multiconference. :6:1-6:10.
It is important to assess the cost benefits of IT security investments. Typically, this is done through a manual risk assessment process. In this paper, we propose an approach to automate this using graphical security models (GSMs). GSMs have been used to assess the security of networked systems using various security metrics. Most existing GSMs assume that networks are static; however, modern networks (e.g., Cloud and Software-Defined Networking) are dynamic and subject to change. Thus, it is important to develop an approach that takes into account the dynamic aspects of networks. To this end, we automate security investment analysis of dynamic networks using a GSM named Temporal-Hierarchical Attack Representation Model (T-HARM) in order to automatically evaluate security investments and their effectiveness for a given period of time. We demonstrate our approach via simulations.
Mukherjee, Preetam, Mazumdar, Chandan.  2018.  Attack Difficulty Metric for Assessment of Network Security. Proceedings of the 13th International Conference on Availability, Reliability and Security. :44:1-44:10.
In recent years, organizational networks have become targets of sophisticated multi-hop attacks. The attack graph has been proposed as a useful modeling tool for complex attack scenarios, combining multiple vulnerabilities in causal chains. Analysis of attack scenarios enables security administrators to calculate quantitative security measurements, which justify security investments in the organization. Different attack graph-based security metrics have been introduced for evaluating comparable security measurements. Studies show that the difficulty of exploiting the same vulnerability changes with its position in the causal chains of the attack graph. In this paper, a new attack graph-based security metric, namely Attack Difficulty, is proposed to include this position factor. The security metrics are classified into two major categories, viz. counting metrics and difficulty-based metrics. The proposed Attack Difficulty metric employs both categories of metrics as the basis for its measurement. Case studies are presented to demonstrate the applicability of the proposed metric. A comparison of this new metric with other attack graph-based security metrics is also included to validate its acceptance in real-life situations.
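
A hedged sketch of a position-aware difficulty computation over an attack graph: each edge carries a base difficulty (e.g., derived from CVSS exploitability) and later steps in a chain are weighted slightly higher; the weighting scheme is illustrative, not the paper's formulation.

```python
import networkx as nx

# Attack graph edges annotated with a base difficulty in [0, 1] (e.g. 1 - exploitability/10).
g = nx.DiGraph()
g.add_edge("entry", "webserver", base=0.2)
g.add_edge("webserver", "appserver", base=0.4)
g.add_edge("appserver", "database", base=0.3)
g.add_edge("entry", "vpn", base=0.5)
g.add_edge("vpn", "database", base=0.4)

def path_difficulty(g, path):
    """Sum of per-step difficulties, where later steps in the chain are weighted
    slightly higher (illustrative position factor)."""
    total = 0.0
    for depth, (u, v) in enumerate(zip(path, path[1:]), start=1):
        total += g[u][v]["base"] * (1.0 + 0.1 * (depth - 1))
    return total

paths = list(nx.all_simple_paths(g, "entry", "database"))
for p in paths:
    print(" -> ".join(p), f"difficulty = {path_difficulty(g, p):.2f}")
print("attack difficulty (easiest path):", round(min(path_difficulty(g, p) for p in paths), 2))
```
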
2018-11-19
Culler, M., Davis, K..  2018.  Toward a Sensor Trustworthiness Measure for Grid-Connected IoT-Enabled Smart Cities. 2018 IEEE Green Technologies Conference (GreenTech). :168–171.

Traditional security measures for large-scale critical infrastructure systems have focused on keeping adversaries out of the system. As the Internet of Things (IoT) extends into millions of homes, with tens or hundreds of devices each, the threat landscape becomes more complicated. IoT devices have unknown access capabilities with unknown reach into other systems. This paper presents ongoing work on how techniques in sensor verification and cyber-physical modeling and analysis of bulk power systems can be applied to identify malevolent IoT devices and secure smart and connected communities against the most impactful threats.

2018-09-05
Pasareanu, C..  2017.  Symbolic execution and probabilistic reasoning. 2017 32nd Annual ACM/IEEE Symposium on Logic in Computer Science (LICS). :1–1.
Summary form only given. Symbolic execution is a systematic program analysis technique which explores multiple program behaviors all at once by collecting and solving symbolic path conditions over program paths. The technique has been recently extended with probabilistic reasoning. This approach computes the conditions to reach target program events of interest and uses model counting to quantify the fraction of the input domain satisfying these conditions thus computing the probability of event occurrence. This probabilistic information can be used for example to compute the reliability of an aircraft controller under different wind conditions (modeled probabilistically) or to quantify the leakage of sensitive data in a software system, using information theory metrics such as Shannon entropy. In this talk we review recent advances in symbolic execution and probabilistic reasoning and we discuss how they can be used to ensure the safety and security of software systems.
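
A minimal sketch of the leakage computation mentioned above: given hypothetical model-counting results for how many secret inputs map to each observable output, the Shannon entropy of the output distribution quantifies the information an observer can gain.

```python
import math

# Hypothetical model-counting result: number of secret inputs mapped to each observable output.
counts = {"output_A": 6, "output_B": 1, "output_C": 1}
total = sum(counts.values())
probs = {o: c / total for o, c in counts.items()}

# Shannon entropy of the observable output in bits: a measure of the information leaked per run.
leakage_bits = -sum(p * math.log2(p) for p in probs.values() if p > 0)
print(f"leakage ~ {leakage_bits:.3f} bits (max possible: {math.log2(len(counts)):.3f})")
```
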
Turnley, J., Wachtel, A., Muñoz-Ramos, K., Hoffman, M., Gauthier, J., Speed, A., Kittinger, R..  2017.  Modeling human-technology interaction as a sociotechnical system of systems. 2017 12th System of Systems Engineering Conference (SoSE). :1–6.
As system of systems (SoS) models become increasingly complex and interconnected, a new approach is needed to capture the effects of humans within the SoS. Many real-life events have shown the detrimental outcomes of failing to account for humans in the loop. This research introduces a novel, cross-disciplinary methodology for modeling humans interacting with technologies to perform tasks within an SoS, specifically within a layered physical security system use case. Metrics and formulations developed for this new way of looking at SoS, termed sociotechnical SoS, allow for the quantification of the interplay of effectiveness and efficiency seen in detection theory to measure the ability of a physical security system to detect and respond to threats. This methodology has been applied to a notional representation of a small military Forward Operating Base (FOB) as a proof of concept.
Zhang, H., Lou, F., Fu, Y., Tian, Z..  2017.  A Conditional Probability Computation Method for Vulnerability Exploitation Based on CVSS. 2017 IEEE Second International Conference on Data Science in Cyberspace (DSC). :238–241.
Computing the probability of vulnerability exploitation in Bayesian attack graphs (BAGs) is a key process for network security assessment. The conditional probability of vulnerability exploitation can be obtained from the exploitability metrics of NIST's Common Vulnerability Scoring System (CVSS). However, the method that N. Poolsappasit et al. proposed for computing the conditional probability can be used only with CVSS metric version 2.0 and cannot be used with the other two versions. In this paper, we present two methods for computing the conditional probability based on CVSS's other two metric versions, version 1.0 and version 3.0, respectively. Combined with the method of N. Poolsappasit et al., this makes the CVSS-based computation of the conditional probability of vulnerability exploitation complete.
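
A sketch of the two conversions, assuming the v2 probability is the exploitability sub-score (20·AV·AC·AU) normalized to [0, 1] in the style attributed to Poolsappasit et al., and a v3 analogue that simply normalizes the v3 exploitability (8.22·AV·AC·PR·UI) by its maximum of about 3.9; the v3 mapping in particular is an assumption, not necessarily the authors' method.

```python
# CVSS v2 base-metric values (standard v2 lookup tables).
AV2 = {"L": 0.395, "A": 0.646, "N": 1.0}
AC2 = {"H": 0.35, "M": 0.61, "L": 0.71}
AU2 = {"M": 0.45, "S": 0.56, "N": 0.704}

def prob_v2(av, ac, au):
    """Exploitability-based conditional probability for CVSS v2,
    i.e. the v2 exploitability sub-score (20*AV*AC*AU) normalized to [0, 1]."""
    return 2.0 * AV2[av] * AC2[ac] * AU2[au]

# CVSS v3 base-metric values for the exploitability sub-score (scope unchanged).
AV3 = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC3 = {"L": 0.77, "H": 0.44}
PR3 = {"N": 0.85, "L": 0.62, "H": 0.27}
UI3 = {"N": 0.85, "R": 0.62}

def prob_v3(av, ac, pr, ui, max_expl=3.89):
    """Assumed v3 analogue: exploitability (8.22*AV*AC*PR*UI) normalized by its maximum."""
    return 8.22 * AV3[av] * AC3[ac] * PR3[pr] * UI3[ui] / max_expl

print(f"v2 N/L/N  : {prob_v2('N', 'L', 'N'):.3f}")
print(f"v3 N/L/N/N: {prob_v3('N', 'L', 'N', 'N'):.3f}")
```
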
Teusner, R., Matthies, C., Giese, P..  2017.  Should I Bug You? Identifying Domain Experts in Software Projects Using Code Complexity Metrics. 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS). :418–425.
In any sufficiently complex software system there are experts who have a deeper understanding of parts of the system than others. However, it is not always clear who these experts are and which particular parts of the system they can provide help with. We propose a framework to elicit the expertise of developers and recommend experts by analyzing complexity measures over time. Furthermore, teams can detect those parts of the software for which currently no, or only a few, experts exist and take preventive action to keep collective code knowledge and ownership high. We employed the developed approach at a medium-sized company. The results were evaluated with a survey comparing the perceived and the computed expertise of developers. We show that aggregated code metrics can be used to identify experts for different software components. The identified experts were rated as acceptable candidates by developers in over 90% of all cases.
Wang, J., Shi, D., Li, Y., Chen, J., Duan, X..  2017.  Realistic measurement protection schemes against false data injection attacks on state estimators. 2017 IEEE Power Energy Society General Meeting. :1–5.
False data injection attacks (FDIA) on state estimators are an imminent kind of cyber-physical security issue. Fortunately, it has been proved that if a set of measurements is strategically selected and protected, no FDIA will remain undetectable. In this paper, the metric Return on Investment (ROI) is introduced to evaluate the overall returns of alternative measurement protection schemes (MPS). By setting maximum total ROI as the optimization objective, the previously ignored cost-benefit issue is taken into account to derive a realistic MPS for power utilities. The optimization problem is transformed into the Steiner tree problem in graph theory, where a tree-pruning-based algorithm is used to reduce the computational complexity and find a quasi-optimal solution with acceptable approximations. The correctness and efficiency of the algorithm are verified by case studies.
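
A hedged sketch of the Steiner-tree step using networkx's approximation routine: the terminals stand for measurements that must be jointly protected, edge weights for protection costs, and the ROI expression for the cost-benefit comparison; the graph, costs, and benefit value are invented, and the paper uses its own pruning algorithm rather than this library routine.

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Measurement/bus graph: edge weight = cost of protecting the measurement on that link (assumed).
g = nx.Graph()
g.add_weighted_edges_from([
    ("bus1", "bus2", 3), ("bus2", "bus3", 2), ("bus3", "bus4", 4),
    ("bus1", "bus5", 5), ("bus5", "bus4", 1), ("bus2", "bus5", 6),
])

# Measurements that must be protected together to keep FDIA detectable (assumed terminal set).
terminals = ["bus1", "bus3", "bus4"]
tree = steiner_tree(g, terminals, weight="weight")
protection_cost = sum(d["weight"] for _, _, d in tree.edges(data=True))

benefit = 20.0   # assumed monetary value of the risk avoided by this protection scheme
roi = (benefit - protection_cost) / protection_cost
print("protected links:", sorted(tree.edges()))
print(f"cost = {protection_cost}, ROI = {roi:.2f}")
```
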
Hossain, M. A., Merrill, H. M., Bodson, M..  2017.  Evaluation of metrics of susceptibility to cascading blackouts. 2017 IEEE Power and Energy Conference at Illinois (PECI). :1–5.
In this paper, we evaluate the usefulness of metrics that assess susceptibility to cascading blackouts. The metrics are computed using a matrix of Line Outage Distribution Factors (LODF, or DFAX matrix). The metrics are compared for several base cases with different load levels of the Western Interconnection (WI). A case corresponding to the September 8, 2011 pre-blackout state is used to compute these metrics and relate them to the origin of the cascading blackout. The correlation between the proposed metrics is determined to check redundancy. The analysis is also used to find vulnerable and critical hot spots in the power system.