Computing Theory and Security Metrics, 2014

SoS Newsletter - Advanced Book Block


The works cited here combine research into computing theory with research into security metrics. All were presented in 2014.


George Cybenko, Jeff Hughes. “No Free Lunch in Cyber Security.” MTD '14 Proceedings of the First ACM Workshop on Moving Target Defense, November 2014, pp. 1-12. doi:10.1145/2663474.2663475
Abstract: Confidentiality, integrity and availability (CIA) are traditionally considered to be the three core goals of cyber security. By developing probabilistic models of these security goals we show that:
•    the CIA goals are actually specific operating points in a continuum of possible mission security requirements;
•    component diversity, including certain types of Moving Target Defenses, versus component hardening as security strategies can be quantitatively evaluated;
•    approaches for diversity can be formalized into a rigorous taxonomy.
Such considerations are particularly relevant for so-called Moving Target Defense (MTD) approaches, which seek to adapt or randomize computer resources in ways that delay or defeat attackers. In particular, we explore tradeoffs between confidentiality and availability in such systems that suggest improvements.
Keywords: availability; confidentiality; diversity; formal models; integrity; moving targets; security metrics (ID#: 15-5796)
URL:  http://doi.acm.org/10.1145/2663474.2663475
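
As a minimal illustration of the hardening-versus-diversity tradeoff this abstract describes, the Python sketch below compares a toy monoculture (one working exploit defeats every component) against diverse components that require independent exploits. The model, probabilities, and independence assumption are illustrative only, not the authors' formulation:

```python
# Toy model (not the paper's formulation): exploit attempts succeed
# independently with a fixed probability per component variant.

def p_compromise_monoculture(p_exploit: float, n: int) -> float:
    """n identical components: one working exploit defeats them all."""
    return p_exploit

def p_compromise_diverse(p_exploit: float, n: int) -> float:
    """n diverse components: the attacker needs n independent successes."""
    return p_exploit ** n

# Hardening halves the per-exploit success probability; diversity instead
# forces independent successes across variants.
print(p_compromise_monoculture(0.2, 3))  # hardened monoculture -> 0.2
print(p_compromise_diverse(0.4, 3))      # unhardened but diverse -> 0.064
```

With these toy numbers, three diverse unhardened components resist better (0.4 cubed = 0.064) than a hardened monoculture (0.2), which is the sense in which such strategy comparisons can be made quantitative.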

 

Benjamin D. Rodes, John C. Knight, Kimberly S. Wasson. “A Security Metric Based on Security Arguments.” WETSoM 2014 Proceedings of the 5th International Workshop on Emerging Trends in Software Metrics, June 2014, pp. 66-72. doi:10.1145/2593868.2593880
Abstract: Software security metrics that facilitate decision making at the enterprise design and operations levels are a topic of active research and debate. These metrics are desirable to support deployment decisions, upgrade decisions, and so on; however, no single metric or set of metrics is known to provide universally effective and appropriate measurements. Instead, engineers must choose, for each software system, what to measure, how and how much to measure, and must be able to justify the rationale for how these measurements are mapped to stakeholder security goals. An assurance argument for security (i.e., a security argument) provides comprehensive documentation of all evidence and rationales for justifying belief in a security claim about a software system. In this work, we motivate the need for security arguments to facilitate meaningful and comprehensive security metrics, and present a novel framework for assessing security arguments to generate and interpret security metrics.
Keywords: assurance case; confidence; security metrics (ID#: 15-5797)
URL: http://doi.acm.org/10.1145/2593868.2593880
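
The abstract does not give the authors' assessment framework itself; as a hedged toy of the general idea of deriving a metric from a security argument, the sketch below aggregates confidence over a claim tree with a weakest-link rule. The tree shape, confidence values, and min-aggregation are assumptions for illustration only:

```python
# Toy sketch (not the authors' framework): a security argument as a tree of
# claims, each supported by sub-claims or by leaf evidence with a confidence.

from dataclasses import dataclass, field

@dataclass
class Claim:
    name: str
    confidence: float = 1.0            # leaf evidence confidence in [0, 1]
    children: list["Claim"] = field(default_factory=list)

def argument_confidence(claim: Claim) -> float:
    """Weakest-link rule: a claim is only as strong as its weakest support."""
    if not claim.children:
        return claim.confidence
    return min(argument_confidence(c) for c in claim.children)

root = Claim("system is acceptably secure", children=[
    Claim("inputs are validated", confidence=0.9),
    Claim("crypto is configured correctly", confidence=0.7),
    Claim("deployment hardening applied", confidence=0.8),
])
print(argument_confidence(root))       # 0.7 -> a weakest-link security metric
```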

 

Gaofeng Da, Maochao Xu, Shouhuai Xu. “A New Approach to Modeling and Analyzing Security of Networked Systems.” HotSoS '14 Proceedings of the 2014 Symposium and Bootcamp on the Science of Security, April 2014, Article No. 6. doi:10.1145/2600176.2600184
Abstract: Modeling and analyzing security of networked systems is an important problem in the emerging Science of Security and has been under active investigation. In this paper, we propose a new approach towards tackling the problem. Our approach is inspired by the shock model and random environment techniques in the Theory of Reliability, while accommodating security ingredients. To the best of our knowledge, our model is the first that can accommodate a certain degree of adaptiveness of attacks, which substantially weakens the often-made independence and exponential attack inter-arrival time assumptions. The approach leads to a stochastic process model with two security metrics, and we attain some analytic results in terms of the security metrics.
Keywords: security analysis; security metrics; security modeling (ID#: 15-5798)
URL:  http://doi.acm.org/10.1145/2600176.2600184
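
A hedged Monte Carlo sketch of the flavor of model the abstract describes: attack "shocks" arrive with non-exponential (here Weibull) inter-arrival times, relaxing the exponential-arrival assumption the authors criticize, and one metric, the probability of compromise by time T, is estimated by simulation. The distribution, parameters, and success model are illustrative, not the paper's analytic model:

```python
# Monte Carlo sketch (illustrative only): shocks arrive with Weibull
# inter-arrival times; each shock independently compromises the system.

import random

def p_compromised_by(T: float, p_success: float, shape: float = 0.7,
                     scale: float = 1.0, trials: int = 50_000) -> float:
    """Estimate one security metric: P(system compromised by time T)."""
    hits = 0
    for _ in range(trials):
        t = 0.0
        while True:
            t += random.weibullvariate(scale, shape)  # next shock arrival
            if t > T:
                break                                 # survived the window
            if random.random() < p_success:           # this shock succeeds
                hits += 1
                break
    return hits / trials

print(p_compromised_by(T=10.0, p_success=0.05))
```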

 

Steven Noel, Sushil Jajodia. “Metrics Suite for Network Attack Graph Analytics.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, pp. 5-8. doi:10.1145/2602087.2602117
Abstract: We describe a suite of metrics for measuring network-wide cyber security risk based on a model of multi-step attack vulnerability (attack graphs). Our metrics are grouped into families, with family-level metrics combined into an overall metric for network vulnerability risk. The Victimization family measures risk in terms of key attributes of risk across all known network vulnerabilities. The Size family is an indication of the relative size of the attack graph. The Containment family measures risk in terms of minimizing vulnerability exposure across protection boundaries. The Topology family measures risk through graph theoretic properties (connectivity, cycles, and depth) of the attack graph. We display these metrics (at the individual, family, and overall levels) in interactive visualizations, showing multiple metrics trends over time.
Keywords: attack graphs; security metrics; topological vulnerability analysis (ID#: 15-5799)
URL:   http://doi.acm.org/10.1145/2602087.2602117
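
As one hedged illustration of the Topology family, the sketch below computes connectivity, cycle count, and maximum attack depth over a small invented attack graph with networkx; the paper's own normalizations and the combination into family-level and overall scores are not reproduced:

```python
import networkx as nx

# Invented attack graph: nodes are attacker states, edges are exploit steps.
G = nx.DiGraph([("internet", "web-srv"), ("web-srv", "app"),
                ("web-srv", "db"), ("app", "db"), ("db", "web-srv")])

connected = nx.is_weakly_connected(G)            # Topology: connectivity
n_cycles = len(list(nx.simple_cycles(G)))        # Topology: attack cycles
paths = [p for target in G.nodes if target != "internet"
         for p in nx.all_simple_paths(G, "internet", target)]
depth = max(len(p) - 1 for p in paths)           # Topology: attack depth
print(connected, n_cycles, depth)                # True 2 3
```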

 

Shittu, R.; Healing, A.; Ghanea-Hercock, R.; Bloomfield, R.; Muttukrishnan, R., “OutMet: A New Metric for Prioritising Intrusion Alerts Using Correlation and Outlier Analysis,” Local Computer Networks (LCN), 2014 IEEE 39th Conference on, pp. 322-330, 8-11 Sept. 2014. doi:10.1109/LCN.2014.6925787
Abstract: In a medium sized network, an Intrusion Detection System (IDS) can produce thousands of alerts a day, many of which may be false positives. Among this vast number of triggered intrusion alerts, identifying those to prioritise is highly challenging. Alert correlation and prioritisation are both viable analytical methods commonly used to understand and prioritise alerts. However, to the authors' knowledge, very few dynamic prioritisation metrics exist. In this paper, a new prioritisation metric, OutMet, is proposed, based on measuring the degree to which an alert belongs to anomalous behaviour. OutMet combines alert correlation and prioritisation analysis. We illustrate the effectiveness of OutMet by testing its ability to prioritise alerts generated in a 2012 red-team cyber-range experiment carried out as part of the BT Saturn programme. In one of the scenarios, OutMet reduced the false positives by 99.3%.
Keywords: computer network security; correlation methods; graph theory; BT Saturn programme; IDS; OutMet; alert correlation and prioritisation analysis; correlation analysis; dynamic prioritisation metrics; intrusion alerts; intrusion detection system; medium sized network; outlier analysis; red-team cyber-range experiment; Cities and towns; Complexity theory; Context; Correlation; Educational institutions; IP networks; Measurement; Alert Correlation; Attack Scenario; Graph Mining; IDS Logs; Intrusion Alert Analysis; Intrusion Detection; Pattern Detection (ID#: 15-5800)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6925787&isnumber=6925725
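
OutMet's exact definition is not given in the abstract; the sketch below illustrates only the outlier-analysis half of the idea, scoring correlated alert clusters by how far a single numeric feature deviates from the population. The feature and the z-score rule are assumptions, not the paper's metric:

```python
# Illustrative outlier scoring over already-correlated alert clusters.

import statistics

def outlier_scores(feature: list[float]) -> list[float]:
    """Score each alert cluster by how far one feature (e.g. alerts per
    minute) deviates from the population of clusters."""
    mu = statistics.fmean(feature)
    sd = statistics.stdev(feature) or 1.0   # guard against zero spread
    return [abs(x - mu) / sd for x in feature]

rates = [2.0, 1.5, 2.2, 1.8, 2.1, 9.0]      # last cluster is anomalous
ranked = sorted(zip(outlier_scores(rates), range(len(rates))), reverse=True)
print(ranked[0])   # highest-priority cluster: (score, cluster index)
```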

 

Desouky, A.F.; Beard, M.D.; Etzkorn, L.H., “A Qualitative Analysis of Code Clones and Object Oriented Runtime Complexity Based on Method Access Points,” Convergence of Technology (I2CT), 2014 International Conference for, pp. 1-5, 6-8 April 2014. doi:10.1109/I2CT.2014.7092292
Abstract: In this paper, we present a new object oriented complexity metric based on runtime method access points. Software engineering metrics have traditionally indicated the level of quality present in a software system. However, the analysis and measurement of quality has long been captured at compile time, yielding useful but potentially incomplete results, since metric computation considers all source code rather than only the subset of code that actually executes. In this study, we examine the runtime behavior of our proposed metric on an open source software package, Rhino 1.7R4. We compute and validate our metric by correlating it with code clones and bug data. Code clones are considered to make software more complex and harder to maintain. When cloned, a code fragment with an error quickly transforms into two (or more) errors, each of which can affect the software system in unique ways. Thus a larger number of code clones is generally considered to indicate poorer software quality. For this reason, we treat clones, in addition to bugs, as an external quality factor for metric validation.
Keywords: object-oriented programming; program verification; public domain software; security of data; software metrics; software quality; source code (software); Rhino 1.7R4; bug data; code clones; metric computation; metric validation; object oriented runtime complexity; open source software package; qualitative analysis; runtime method access points; software engineering metrics; software quality; source code; Cloning; Complexity theory; Computer bugs; Correlation; Measurement; Runtime; Software; Code Clones; Complexity; Object Behavior; Object Oriented Runtime Metrics; Software Engineering (ID#: 15-5801)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092292&isnumber=7092013
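
The paper defines its metric over Java classes in Rhino; purely to illustrate the counting idea behind "runtime method access points", this Python sketch tallies which methods are actually entered during execution, rather than every method present in the source:

```python
# Count runtime method entries with a trace hook (counting idea only; the
# paper's metric is defined for Java and aggregated per class).

import sys
from collections import Counter

access_points = Counter()

def tracer(frame, event, arg):
    if event == "call":                       # a function/method was entered
        code = frame.f_code
        access_points[f"{code.co_filename}:{code.co_name}"] += 1
    return tracer

def demo():
    return sorted({i % 3 for i in range(10)})

sys.settrace(tracer)
demo()
sys.settrace(None)
print(access_points.most_common(3))           # raw runtime-complexity counts
```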

 

Bhuyan, M.H.; Bhattacharyya, D.K.; Kalita, J.K., “Information Metrics for Low-Rate DDoS Attack Detection: A Comparative Evaluation,” Contemporary Computing (IC3), 2014 Seventh International Conference on, pp. 80-84, 7-9 Aug. 2014. doi:10.1109/IC3.2014.6897151
Abstract: Distributed Denial of Service (DDoS) attacks are a serious threat to services offered on the Internet. A low-rate DDoS attack allows legitimate network traffic to pass and consumes little bandwidth, so detection of this type of attack is very difficult in high-speed networks. Information theory is popular because it allows quantification of the difference between malicious traffic and legitimate traffic based on probability distributions. In this paper, we empirically evaluate several information metrics, namely Hartley entropy, Shannon entropy, Rényi entropy and generalized entropy, in their ability to detect low-rate DDoS attacks. These metrics can be used to describe characteristics of network traffic, and an appropriate metric facilitates building an effective model to detect low-rate DDoS attacks. We use the MIT Lincoln Laboratory and CAIDA DDoS datasets to illustrate the efficiency and effectiveness of each metric for detecting mainly low-rate DDoS attacks.
Keywords: Internet; computer network security; entropy; statistical distributions; CAIDA DDoS dataset; Hartley entropy; Internet; MIT Lincoln Laboratory dataset; Renyi entropy; Shannon entropy; distributed denial-of-service; generalized entropy; information metrics; information theory; low-rate DDoS attack detection; network traffic; probability distributions; Computer crime; Entropy; Floods; Information entropy; Measurement; Probability distribution; Telecommunication traffic; DDoS attack; entropy; information metric; low-rate; network traffic (ID#: 15-5802)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6897151&isnumber=6897132
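
The information metrics the paper compares have standard definitions, sketched below over a made-up per-source packet distribution (in this literature, "generalized entropy" usually refers to the Rényi family with a tunable order alpha):

```python
# Standard entropy definitions over an empirical traffic distribution.
# The counts below are invented for illustration.

import math

def hartley(p):
    """Hartley entropy: log2 of the number of observed outcomes."""
    return math.log2(sum(1 for x in p if x > 0))

def shannon(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def renyi(p, alpha):
    """Rényi / generalized entropy of order alpha (alpha != 1); tends to
    Shannon entropy as alpha -> 1."""
    return math.log2(sum(x ** alpha for x in p if x > 0)) / (1 - alpha)

counts = [900, 40, 30, 20, 10]       # e.g. packets per source in one window
total = sum(counts)
p = [c / total for c in counts]
print(hartley(p), shannon(p), renyi(p, 2), renyi(p, 0.5))
```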

 

Bidi Ying; Makrakis, D., “Protecting Location Privacy with Clustering Anonymization in Vehicular Networks,” Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, pp. 305-310, 27 April - 2 May 2014. doi:10.1109/INFCOMW.2014.6849249
Abstract: Location privacy is an important issue in location-based services. A large number of location cloaking algorithms have been proposed for protecting the location privacy of users. However, these algorithms cannot be used in vehicular networks due to constrained vehicular mobility. In this paper, we propose a new method named Protecting Location Privacy with Clustering Anonymization (PLPCA) for location-based services in vehicular networks. The PLPCA algorithm starts by transforming the road network into an edge-cluster graph in order to conceal road and traffic information, and then applies a cloaking algorithm with k-anonymity and l-diversity as privacy metrics to further conceal a target vehicle's location. Simulation analysis shows that PLPCA performs well, for example in the strength with which it hides road and traffic information.
Keywords: data privacy; graph theory; mobility management (mobile radio); pattern clustering; telecommunication security; vehicular ad hoc networks; PLPCA algorithm; edge-cluster graph; k-anonymity; l-diversity; location based service; location cloaking algorithm; protecting location privacy with clustering anonymization; road information hiding; road network transforming; traffic information hiding; vehicular ad hoc network; vehicular mobility; Clustering algorithms; Conferences; Privacy; Roads; Social network services; Vehicle dynamics; Vehicles; cluster; location privacy; location-based services; vehicular networks (ID#: 15-5803)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849249&isnumber=6849127
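
The clustering and road-network transformation are the paper's core and are not reproduced here; the sketch below only spells out the two privacy checks the abstract names as metrics, under the usual definitions of k-anonymity (at least k vehicles in the cloaked region) and l-diversity (at least l distinct road segments among them):

```python
# Usual-definition privacy checks for a cloaked region (illustrative data).

def is_k_anonymous(vehicles_in_region: list[str], k: int) -> bool:
    """A cloaked region covers at least k indistinguishable vehicles."""
    return len(vehicles_in_region) >= k

def is_l_diverse(segments_of_vehicles: list[str], l: int) -> bool:
    """Those vehicles span at least l distinct road segments."""
    return len(set(segments_of_vehicles)) >= l

region_vehicles = ["v1", "v2", "v3", "v4"]
region_segments = ["road-A", "road-A", "road-B", "road-C"]
print(is_k_anonymous(region_vehicles, k=4),   # True
      is_l_diverse(region_segments, l=3))     # True
```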

 

Ateser, M.; Tanriover, O., “Investigation of the COBIT Framework's Input/Output Relationships by Using Graph Metrics,” Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 1269-1275, 7-10 Sept. 2014. doi:10.15439/2014F178
Abstract: Information technology (IT) governance initiatives are complex, time consuming and resource intensive. COBIT (Control Objectives for Information and Related Technology) provides an IT governance framework and supporting toolset to help an organization ensure alignment between its use of information technology and its business goals. This paper presents a graph analysis of the relationships among COBIT processes and their inputs/outputs. Examining these relationships provides a deeper understanding of the COBIT structure and can guide IT governance implementation, audit plans and initiatives. Graph metrics are used to identify the most influential/sensitive processes and their relative importance for a given context. Hence, the analysis presented provides guidance to decision makers developing improvement programs, audits and possibly maturity assessments based on the COBIT framework.
Keywords: DP management; business data processing; graph theory; COBIT framework inputs-outputs relationships; Control Objectives for Information Related Technology; IT governance framework; business goals; graph analysis; graph metrics; Guidelines; Information technology; Measurement; Monitoring; Organizations; Portfolios (ID#: 15-5804)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933164&isnumber=6932982
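
As a hedged sketch of this analysis style, the snippet below models processes as nodes and input/output handoffs as directed edges, then uses degree centrality to flag influential processes (outputs consumed by many others) and sensitive ones (dependent on many inputs). The edges are invented; the paper derives them from COBIT's actual input/output tables, and its choice of graph metrics may differ:

```python
import networkx as nx

# Invented handoffs among real COBIT 4.1 process IDs, for illustration only.
G = nx.DiGraph([("PO1", "PO4"), ("PO4", "AI1"), ("AI1", "DS1"),
                ("DS1", "ME1"), ("ME1", "PO1"), ("PO4", "DS1")])

out_influence = nx.out_degree_centrality(G)  # feeds many other processes
sensitivity = nx.in_degree_centrality(G)     # depends on many processes
print(max(out_influence, key=out_influence.get),
      max(sensitivity, key=sensitivity.get))
```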

 

Bou-Harb, E.; Debbabi, M.; Assi, C., “Behavioral Analytics for Inferring Large-Scale Orchestrated Probing Events,” Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, pp. 506-511, 27 April - 2 May 2014. doi:10.1109/INFCOMW.2014.6849283
Abstract: The significant dependence on cyberspace has indeed brought new risks that often compromise, exploit and damage invaluable data and systems. Thus, the capability to proactively infer malicious activities is of paramount importance. In this context, inferring probing events, which are commonly the first stage of any cyber attack, renders a promising tactic to achieve that task. For the past three years, we have been receiving 12 GB of malicious real darknet data daily (i.e., Internet traffic destined to half a million routable yet unallocated IP addresses) from more than 12 countries. This paper exploits such data to propose a novel approach that aims at capturing the behavior of probing sources in an attempt to infer their orchestration (i.e., coordination) pattern. The latter defines a recently discovered characteristic of a new phenomenon of probing events that could be ominously leveraged, as a precursor of various cyber attacks, to cause drastic Internet-wide and enterprise impacts. To accomplish its goals, the proposed approach leverages various signal and statistical techniques, information-theoretic metrics, fuzzy approaches with real malware traffic, and data mining methods. The approach is validated through one use case that arguably proves that a previously analyzed orchestrated probing event from last year is still active, yet operating in a stealthy, very low rate mode. We envision that the proposed approach, which is tailored towards darknet data that is frequently, abundantly and effectively used to generate cyber threat intelligence, could be used by network security analysts, emergency response teams and/or observers of cyber events to infer large-scale orchestrated probing events for early cyber attack warning and notification.
Keywords: IP networks; Internet; computer network security; data mining; fuzzy set theory; information theory; invasive software; statistical analysis; telecommunication traffic; Internet traffic; coordination pattern; cyber attack; cyber threat intelligence; cyberspace; data mining methods; early cyber attack notification; early cyber attack warning; emergency response teams; fuzzy approaches; information theoretical metrics; large-scale orchestrated probing events; malicious activities; malicious real darknet data; malware traffic; network security analysts; orchestration pattern; routable unallocated IP addresses; signal techniques; statistical techniques; Conferences; IP networks; Internet; Malware; Probes (ID#: 15-5805)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849283&isnumber=6849127
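
The authors' pipeline combines several technique families; the sketch below shows just one plausible coordination signal of the kind such an approach might use, flagging probing sources whose target sets overlap heavily via Jaccard similarity. The signal, threshold, and data are assumptions, not the paper's method:

```python
# One plausible coordination signal (an assumption, not the authors'
# pipeline): sources scanning near-identical target sets are candidates
# for membership in a single orchestrated probing event.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

probes = {                       # source -> set of targeted /24 subnets
    "src1": {"10.0.1", "10.0.2", "10.0.3"},
    "src2": {"10.0.2", "10.0.3", "10.0.4"},
    "src3": {"192.168.9"},
}
pairs = [(s, t, jaccard(probes[s], probes[t]))
         for s in probes for t in probes if s < t]
coordinated = [(s, t, j) for s, t, j in pairs if j >= 0.5]
print(coordinated)               # likely members of one orchestrated event
```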

 

Keramati, M.; Keramati, M., “Novel Security Metrics for Ranking Vulnerabilities in Computer Networks,” Telecommunications (IST), 2014 7th International Symposium on, pp. 883-888, 9-11 Sept. 2014. doi:10.1109/ISTEL.2014.7000828
Abstract: With the daily appearance of new vulnerabilities and of new ways of intruding into networks, network hardening has become one of the most important fields in network security, and it is accomplished by patching vulnerabilities. Patching every vulnerability, however, may impose a high cost on the network, so only the most perilous vulnerabilities should be eliminated. CVSS can score vulnerabilities based on the amount of damage they incur in the network, but its main limitation is that it scores each vulnerability individually, without considering its relationships with the other vulnerabilities of the network. To help fill this gap, in this paper we define attack-graph- and CVSS-based security metrics that help prioritize vulnerabilities in the network by measuring both the probability of exploiting them and the amount of damage they would impose on the network. The proposed security metrics are defined by considering the interactions between all vulnerabilities of the network, so our method can rank vulnerabilities based on the network in which they exist. Results of applying these security metrics to one well-known example network are also shown and demonstrate the effectiveness of our approach.
Keywords: computer network security; matrix algebra; probability; CVSS-based security metrics; common vulnerability scoring system; computer network; intruding network security; probability; ranking vulnerability; Availability; Communication networks; Complexity theory; Computer networks; Educational institutions; Measurement; Security; Attack Graph; CVSS; Exploit; Network hardening; Security Metric; Vulnerability (ID#: 15-5806)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7000828&isnumber=7000650
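
As a hedged sketch of the gap the abstract describes, the snippet below turns CVSS base scores into naive exploit probabilities and then re-scores each vulnerability in network context by multiplying probabilities along the attacker's most likely path through an attack graph. The score-to-probability mapping and the path rule are illustrative assumptions, not the paper's metrics:

```python
import networkx as nx

cvss = {"v1": 9.8, "v2": 6.5, "v3": 7.2}          # base scores out of 10
p_local = {v: s / 10.0 for v, s in cvss.items()}  # naive probability proxy

# Edge u -> w means exploiting u enables attempting w (invented topology).
G = nx.DiGraph([("v1", "v2"), ("v2", "v3"), ("v1", "v3")])

def contextual_probability(v: str) -> float:
    """P(reach and exploit v) along the attacker's most likely path."""
    entry = [u for u in G.nodes if G.in_degree(u) == 0]
    if v in entry:
        return p_local[v]
    best = 0.0
    for e in entry:
        for path in nx.all_simple_paths(G, e, v):
            prob = 1.0
            for node in path:
                prob *= p_local[node]
            best = max(best, prob)
    return best

ranking = sorted(G.nodes, key=contextual_probability, reverse=True)
print(ranking)   # vulnerabilities ranked in network context: v1, v3, v2
```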


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.