Bibliography
The paper suggests several techniques for computer network risk assessment based on the Common Vulnerability Scoring System (CVSS) and attack modeling. The techniques use a set of integrated security metrics and consume input data from security information and event management (SIEM) systems. The risk assessment techniques differ in the input data they use, which allows risk to be assessed according to the required accuracy and efficiency. Input data includes network characteristics, attacks, attacker characteristics, security events and countermeasures. A tool that implements these techniques is presented, and experiments demonstrate how the techniques operate in different security situations.
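The paper above does not spell out its formulas; as a hedged illustration of the general idea of combining CVSS scores with an attack model, the sketch below computes a toy host-level risk score. The function name, the use of the maximum base score as impact, and the probability value are assumptions for the example, not the authors' method.

```python
# Illustrative sketch (not the paper's exact technique): a host-level risk
# score combining CVSS base scores with an assumed attack likelihood.

def host_risk(cvss_base_scores, attack_probability):
    """cvss_base_scores: CVSS base scores (0.0-10.0) of vulnerabilities on the host.
    attack_probability: assumed probability that an attacker reaches the host.
    Returns a risk value in [0, 10]."""
    if not cvss_base_scores:
        return 0.0
    impact = max(cvss_base_scores)      # worst-case exploitable vulnerability
    return attack_probability * impact

# Example: a host with CVSS 9.8 and 5.3 vulnerabilities, reachable with p = 0.4
print(host_risk([9.8, 5.3], 0.4))       # -> 3.92
```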
In recent years, behavioral biometrics have become a popular approach to support continuous authentication systems. Most generally, a continuous authentication system can make two types of errors: false rejects and false accepts. Based on this, the most commonly reported metrics to evaluate systems are the False Reject Rate (FRR) and False Accept Rate (FAR). However, most papers only report the mean of these measures with little attention paid to their distribution. This is problematic as systematic errors allow attackers to perpetually escape detection while random errors are less severe. Using 16 biometric datasets we show that these systematic errors are very common in the wild. We show that some biometrics (such as eye movements) are particularly prone to systematic errors, while others (such as touchscreen inputs) show more even error distributions. Our results also show that the inclusion of some distinctive features lowers average error rates but significantly increases the prevalence of systematic errors. As such, blind optimization of the mean EER (through feature engineering or selection) can sometimes lead to lower security. Following this result we propose the Gini Coefficient (GC) as an additional metric to accurately capture different error distributions. We demonstrate the usefulness of this measure both to compare different systems and to guide researchers during feature selection. In addition to the selection of features and classifiers, some non- functional machine learning methodologies also affect error rates. The most notable examples of this are the selection of training data and the attacker model used to develop the negative class. 13 out of the 25 papers we analyzed either include imposter data in the negative class or randomly sample training data from the entire dataset, with a further 6 not giving any information on the methodology used. Using real-world data we show that both of these decisions lead to significant underestimation of error rates by 63% and 81%, respectively. This is an alarming result, as it suggests that researchers are either unaware of the magnitude of these effects or might even be purposefully attempting to over-optimize their EER without actually improving the system.
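A minimal sketch of the Gini Coefficient idea proposed above: computing the GC of per-user error rates exposes systematic errors that a mean error rate hides. The per-user error values below are made up for illustration.

```python
def gini(values):
    """Gini coefficient of a list of non-negative values:
    0 = errors spread evenly across users, values near 1 = errors
    concentrated on a few users (systematic errors)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * cum) / (n * total) - (n + 1.0) / n

even   = [0.05] * 10                    # every user misclassified equally often
skewed = [0.0] * 8 + [0.25, 0.25]       # same mean, but two users always fail
print(gini(even), gini(skewed))         # 0.0 vs ~0.8
```

Both lists have the same mean error rate of 0.05, yet the second system leaves two users permanently exposed, which is exactly the distinction the GC captures.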
It is difficult to assess the security of modern enterprise networks because they are usually dynamic, with configuration changes (such as changes in topology, firewall rules, etc.). Graphical security models (e.g., Attack Graphs and Attack Trees) and security metrics (e.g., attack cost, shortest attack path) are widely used to systematically analyse the security posture of network systems. However, there are problems using them to assess the security of dynamic networks. First, the existing graphical security models are unable to capture dynamic changes occurring in the networks over time. Second, the existing security metrics are not designed for dynamic networks, so their effectiveness against dynamic changes in the network is still unknown. In this paper, we conduct a comprehensive analysis via simulations to evaluate the effectiveness of security metrics using a Temporal Hierarchical Attack Representation Model. Further, we investigate the varying effects of security metrics when changes are observed in the dynamic networks. Our experimental analysis shows that different security metrics exhibit different changes in security posture with respect to changes in the network.
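As a hedged illustration of one of the metrics named above, the sketch below computes a shortest attack path (minimum total attack cost) over a small attack graph with Dijkstra's algorithm. The node names and edge costs are hypothetical, not taken from the paper.

```python
import heapq

def shortest_attack_path(graph, start, goal):
    """graph: {node: [(neighbor, cost), ...]}. Returns the minimal total attack cost."""
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt))
    return float("inf")

attack_graph = {
    "internet":   [("web_server", 3.0), ("vpn_gw", 6.0)],
    "web_server": [("db_server", 4.0)],
    "vpn_gw":     [("db_server", 1.0)],
}
print(shortest_attack_path(attack_graph, "internet", "db_server"))  # 7.0
```

In a dynamic network, re-running such a metric after each topology or firewall change is what exposes how sensitive the metric is to those changes.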
This paper proposes a hybrid metric sorting (HMS) method for successive cancellation list decoders for polar codes, in which metric sorting plays a critical role in the decoding process. We review the state-of-the-art metric sorting methods and combine their advantages to generate the proposed method. Due to the optimized architecture, the proposed HMS method effectively reduces the number of comparing stages with little increase in comparisons. Evaluation results show that about 25 percent of comparing stages can be removed by HMS compared with state-of-the-art methods. The proposed method also enjoys a latency reduction for hardware implementation.
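The HMS architecture itself is hardware-specific and not reproduced here; for context, the sketch below shows only the baseline metric-sorting step that such methods optimize: after each bit decision, a successive cancellation list decoder with list size L expands to 2L candidate paths and keeps the L with the smallest path metrics. The candidate values are invented.

```python
def prune_paths(candidates, list_size):
    """candidates: list of (path_metric, path) pairs, typically 2*list_size long.
    Returns the list_size candidates with the smallest path metrics."""
    return sorted(candidates, key=lambda c: c[0])[:list_size]

cands = [(2.1, "00"), (0.7, "01"), (1.4, "10"), (3.3, "11")]
print(prune_paths(cands, 2))   # [(0.7, '01'), (1.4, '10')]
```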
The protection of confidential information has become very important with the increase of data sharing and storage on public domains. Data confidentiality is accomplished through the use of ciphers that encrypt and decrypt the data to impede unauthorized access. Emerging heterogeneous platforms provide an ideal environment to use hardware acceleration to improve application performance. In this paper, we explore the performance benefits of an AES hardware accelerator versus the software implementation for multiple cipher modes on the Zynq 7000 All-Programmable System-on-a-Chip (SoC). The accelerator is implemented on the FPGA fabric of the SoC and utilizes DMA for interfacing to the CPU. File encryption and decryption of varying file sizes are used as the workload, with execution time and throughput as the metrics for comparing the performance of the hardware and software implementations. The performance evaluations show that the accelerated AES operations achieve a speedup of 7 times relative to the software implementation and throughput upwards of 350 MB/s for the counter cipher mode, with modest improvements for other cipher modes.
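The FPGA accelerator itself cannot be shown in software; as a hedged sketch of the kind of throughput metric the paper compares against, the snippet below times a software AES-256-CTR encryption pass using the widely available Python "cryptography" package. The buffer size and key length are arbitrary choices for the example.

```python
import os, time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def ctr_throughput_mb_s(size_bytes=64 * 1024 * 1024):
    """Encrypt size_bytes of random data with AES-256 in CTR mode and
    return the software throughput in MB/s."""
    key, nonce = os.urandom(32), os.urandom(16)
    data = os.urandom(size_bytes)
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    start = time.perf_counter()
    encryptor.update(data)
    encryptor.finalize()
    elapsed = time.perf_counter() - start
    return (size_bytes / (1024 * 1024)) / elapsed

print(f"software AES-256-CTR: {ctr_throughput_mb_s():.1f} MB/s")
```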
With the increasing use of mobile phones in contemporary society, more and more networked computers are connected to each other. This has brought along security issues. To solve these issues, both the research and development communities are trying to build more secure software. However, the question remains how secure software is defined and how its security can be measured. In this paper, we study this problem by surveying what kinds of security measurement tools (i.e., metrics) are available and what these tools and metrics reveal about the security of software. As a result of the study, we observe that security verification activities fall into two main categories: evaluation and assurance. There exist 34 metrics for measuring security, of which 29 are assurance metrics and 5 are evaluation metrics. Evaluating and studying these metrics leads us to conclude that their general quality is not yet at a level suitable for daily engineering workflows. They have both theoretical and practical issues that require further research and improvement.
Privacy is a very active subject of research and also of debate in political circles. In order to make good decisions about privacy, we need measurement systems for privacy. Most traditional measures, such as k-anonymity, lack expressiveness in many cases. We present a privacy measuring framework that can be used to measure the value of privacy to an individual and also to evaluate the efficacy of privacy enhancing technologies. Our method is centered on a subject whose privacy is measured through the amount and value of information that observers learn about the subject. This gives rise to interesting probabilistic models for the value of privacy and measures for privacy enhancing technologies.
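One simple way to make "the amount of information learned by an observer" concrete, offered here as a sketch rather than the paper's framework, is to measure the reduction in Shannon entropy of the observer's belief about a subject attribute. The attribute and probabilities below are invented.

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a {value: probability} distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def information_learned(prior, posterior):
    """Bits of information the observer gains about the subject."""
    return entropy(prior) - entropy(posterior)

prior     = {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}   # 4 equally likely home districts
posterior = {"A": 0.70, "B": 0.30}                         # after observing one check-in
print(information_learned(prior, posterior))               # ~1.12 bits
```

A privacy enhancing technology could then be scored by how much it shrinks this gain for a given observation.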
The attack graph technique is a common tool for the evaluation of network security. However, attack graphs are generally too large and complex to be understood and interpreted by security administrators. This paper proposes an analysis framework for security attack graphs for a given IT infrastructure system. First, in order to facilitate the discovery of interconnectivities among vulnerabilities in a network, multi-host multi-stage vulnerability analysis (MulVAL) is employed to generate an attack graph for a given network topology. Then a novel algorithm is applied to refine the attack graph and generate a simplified graph called a transition graph. Next, a Markov model is used to project the future security posture of the system. Finally, the framework is evaluated by applying it to a typical IT network scenario with specific services, network configurations, and vulnerabilities.
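A hedged sketch of the final step described above: projecting future security posture with a Markov model. The states and transition probabilities here are hypothetical; in the paper they are derived from the refined transition graph.

```python
def step(distribution, transition):
    """One Markov step: new_p[j] = sum_i p[i] * P[i][j]."""
    states = list(distribution)
    return {j: sum(distribution[i] * transition[i][j] for i in states) for j in states}

def project(distribution, transition, steps):
    """Project the state distribution forward a given number of steps."""
    for _ in range(steps):
        distribution = step(distribution, transition)
    return distribution

P = {  # rows sum to 1: secure -> web server compromised -> database compromised
    "secure": {"secure": 0.90, "web": 0.10, "db": 0.00},
    "web":    {"secure": 0.20, "web": 0.60, "db": 0.20},
    "db":     {"secure": 0.00, "web": 0.00, "db": 1.00},
}
print(project({"secure": 1.0, "web": 0.0, "db": 0.0}, P, 10))
```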
In this work we investigate existing and new metrics for evaluating the transient stability of power systems, in order to quantify the impact of distributed control schemes. Specifically, an energy storage system (ESS)-based control scheme that builds on feedback linearization theory is implemented in the power system to enhance its transient stability. We study the effect of incorporating such ESS-based distributed control on specific transient stability metrics, including critical clearing time, critical control activation time, system stability time, rotor angle stability index, rotor speed stability index, rate of change of frequency, and control power. The stability metrics are evaluated using the IEEE 68-bus test power system. Numerical results demonstrate the value of the distributed control scheme in enhancing the transient stability metrics of power systems.
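As a minimal sketch of one of the listed metrics, the snippet below estimates the rate of change of frequency (ROCOF) from sampled bus frequency. The sample values and sampling interval are invented; the paper evaluates its metrics on the IEEE 68-bus system.

```python
def rocof(freq_hz, dt_s):
    """Maximum absolute rate of change of frequency in Hz/s,
    estimated by finite differences over consecutive samples."""
    return max(abs(f2 - f1) / dt_s for f1, f2 in zip(freq_hz, freq_hz[1:]))

samples = [60.00, 59.98, 59.93, 59.90, 59.91]   # 20 ms sampling after a fault
print(rocof(samples, 0.02))                      # 2.5 Hz/s
```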
Over the years cybercriminals have misused the Domain Name System (DNS) - a critical component of the Internet - to gain profit. Despite this persisting trend, little empirical information about the security of Top-Level Domains (TLDs) and of the overall 'health' of the DNS ecosystem exists. In this paper, we present security metrics for this ecosystem and measure the operational values of such metrics using three representative phishing and malware datasets. We benchmark entire TLDs against the rest of the market. We explicitly distinguish these metrics from the idea of measuring security performance, because the measured values are driven by multiple factors, not just by the performance of the particular market player. We consider two types of security metrics: occurrence of abuse and persistence of abuse. In conjunction, they provide a good understanding of the overall health of a TLD. We demonstrate that attackers abuse a variety of free services with good reputation, affecting not only the reputation of those services, but of entire TLDs. We find that, when normalized by size, old TLDs like .com host more bad content than new generic TLDs. We propose a statistical regression model to analyze how the different properties of TLD intermediaries relate to abuse counts. We find that next to TLD size, abuse is positively associated with domain pricing (i.e. registries who provide free domain registrations witness more abuse). Last but not least, we observe a negative relation between the DNSSEC deployment rate and the count of phishing domains.
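The two metric types described above (occurrence and persistence of abuse) can be illustrated with a small computation; the incident counts, TLD size, and uptimes below are fabricated for the example and the normalization by 10,000 domains is an assumption, not necessarily the paper's exact definition.

```python
from statistics import median

def occurrence_per_10k(abuse_count, registered_domains):
    """Occurrence of abuse, normalized by TLD size."""
    return 10_000 * abuse_count / registered_domains

def persistence_hours(uptimes_hours):
    """Persistence of abuse: median time an abusive domain stays reachable."""
    return median(uptimes_hours)

# e.g. 1,200 phishing domains observed in a TLD with 40M registrations
print(occurrence_per_10k(1_200, 40_000_000))     # 0.3 abusive domains per 10k
print(persistence_hours([3, 12, 30, 48, 200]))   # 30 hours
```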
The unauthorized access or theft of sensitive, personal information is becoming a weekly news item. The illegal dissemination of proprietary information to media outlets or competitors costs industry untold millions in remediation costs and losses every year. The 2013 data breach at Target, Inc. that impacted 70 million customers is estimated to have cost upwards of 1 billion dollars. Stolen information is also being used to damage political figures and adversely influence foreign and domestic policy. In this paper, we offer some techniques for better understanding the health and security of our networks. This understanding will help professionals to identify network behavior, anomalies and other latent, systematic issues in their networks. Software-Defined Networks (SDN) enable the collection of network operation and configuration metrics that are not readily available, if available at all, in traditional networks. SDN also enables the development of software protocols and tools that increase visibility into the network. By accumulating and analyzing a time series data repository (TSDR) of SDN and traditional metrics along with data gathered from our tools, we can establish behavior and security patterns for SDN and SDN hybrid networks. Our research helps provide a framework for a range of techniques for administrators and automated system protection services that give insight into the health and security of the network. To narrow the scope of our research, this paper focuses on a subset of those techniques as they apply to the confidence analysis of a specific network path at the time of use or inspection. This confidence analysis allows users, administrators and autonomous systems to decide whether a network path is secure enough for sending their sensitive information. Our testing shows that malicious activity can be identified quickly as a single metric indicator and consistently within a multi-factor indicator analysis. Our research includes the implementation of these techniques in a network path confidence analysis service, called Confidence Assessment as a Service. Using our behavior and security patterns, this service evaluates a specific network path and provides a confidence score for that path before, during and after the transmission of sensitive data. Our research and tools give administrators and autonomous systems a much better understanding of the internal operation and configuration of their networks. Our framework will also provide other services that will focus on detecting latent, systemic network problems. By providing a better understanding of network configuration and operation, our research enables a more secure and dependable network and helps prevent the theft of information by malicious actors.
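A hedged sketch of a path confidence score as a weighted combination of normalized metric indicators; the indicator names, weights, and decision threshold are invented for illustration and are not those of the Confidence Assessment as a Service implementation.

```python
def path_confidence(indicators, weights):
    """indicators/weights: dicts keyed by metric name; indicator values lie in
    [0, 1], where 1 means 'no sign of malicious activity'. Returns a score in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[k] * indicators[k] for k in weights) / total

indicators = {"flow_anomaly": 0.9, "config_drift": 0.7, "latency_jitter": 0.95}
weights    = {"flow_anomaly": 3.0, "config_drift": 2.0, "latency_jitter": 1.0}

score = path_confidence(indicators, weights)
print(score, "send" if score >= 0.8 else "reroute")   # ~0.84 -> send
```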
Distributed storage systems and caching systems are becoming widespread, and this motivates the increasing interest in assessing their achievable performance in terms of reliability for legitimate users and security against malicious users. While the assessment of reliability benefits from the availability of well-established metrics and tools, assessing security is more challenging. The classical cryptographic approach aims at estimating the computational effort needed for an attacker to break the system, and at ensuring that it is far above any feasible amount. This has the limitation of depending on attack algorithms and advances in computing power. The information-theoretic approach instead exploits capacity measures to achieve unconditional security against attackers, but often does not provide practical recipes to reach such a condition. We propose a mixed cryptographic/information-theoretic approach with a twofold goal: estimating the levels of information-theoretic security and defining a practical scheme able to achieve them. In order to find optimal choices of the parameters of the proposed scheme, we exploit an effective probabilistic model checker, which allows us to overcome several limitations of more conventional methods.
Today, we witness the emergence of smart environments, where devices are able to connect independently without human intervention. Mobile ad hoc networks (MANETs) are an example of smart environments that are widely deployed in public spaces. They offer great services and features compared with wired systems. However, these networks are more sensitive to malicious attacks because of the lack of infrastructure and the self-organizing nature of devices. Thus, communication between nodes is much more exposed to various security risks than in other networks. In this paper, we present a synthesis of security concepts for MANETs, and then introduce a contribution based on evaluating link quality, using the ETX metric, to enhance network availability.
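The ETX (expected transmission count) metric used above has a standard form: ETX = 1 / (df * dr), where df and dr are the measured forward and reverse probe delivery ratios of a link. The delivery ratios below are example values.

```python
def etx(forward_delivery_ratio, reverse_delivery_ratio):
    """Expected number of transmissions (including retransmissions) needed to
    deliver a packet over the link, ETX = 1 / (df * dr)."""
    if forward_delivery_ratio == 0 or reverse_delivery_ratio == 0:
        return float("inf")
    return 1.0 / (forward_delivery_ratio * reverse_delivery_ratio)

# Link where 8/10 probes arrive in the forward direction and 9/10 in reverse:
print(etx(0.8, 0.9))    # ~1.39 expected transmissions per packet
```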
Friendly jamming is a physical layer security technique that utilizes extra available nodes to jam any eavesdroppers. This paper considers the use of additional available nodes as friendly jammers in order to improve the security performance of a route through a wireless area network. One of the unresolved technical challenges is how to combine security metrics with typical quality-of-service metrics. In this context, this paper considers the problem of routing through a D2D network while jointly minimizing the secrecy outage probability (SOP) and connection outage probability (COP), using friendly jamming to improve the SOP of each link. The jamming powers are determined to place nulls at friendly receivers while maximizing the power directed at eavesdroppers. Then the route metrics are derived, and the problem is framed as a convex optimization problem. We also consider that not all network users value SOP and COP equally, and so introduce an auxiliary variable to tune the optimization between the two metrics.
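A sketch of the tunable trade-off mentioned above: per-link SOP and COP are combined into a single additive route metric with an auxiliary weight. The link outage values are illustrative, and the paper solves the problem as a convex optimization rather than by enumerating routes as done here.

```python
import math

def route_metric(links, alpha):
    """links: list of (sop, cop) pairs per hop; alpha in [0, 1] weights SOP vs COP.
    Uses -log(1 - p) so per-hop outage terms add along a route."""
    sop_term = sum(-math.log(1.0 - sop) for sop, _ in links)
    cop_term = sum(-math.log(1.0 - cop) for _, cop in links)
    return alpha * sop_term + (1.0 - alpha) * cop_term

route_a = [(0.05, 0.10), (0.05, 0.10)]   # jammed route: good secrecy, weaker connectivity
route_b = [(0.25, 0.01), (0.25, 0.01)]   # unjammed route: weaker secrecy, reliable links

for alpha in (0.2, 0.8):
    best = min(("A", route_a), ("B", route_b), key=lambda r: route_metric(r[1], alpha))
    print(f"alpha={alpha}: prefer route {best[0]}")   # COP-heavy alpha picks B, SOP-heavy picks A
```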
In this paper, we describe the results of several experiments designed to test two dynamic network moving target defenses against a propagating data exfiltration attack. We designed a collection of metrics to assess the costs to mission activities and the benefits in the face of attacks and evaluated the impacts of the moving target defenses in both areas. Experiments leveraged Siege's Cyber-Quantification Framework to automatically provision the networks used in the experiment, install the two moving target defenses, collect data, and analyze the results. We identify areas in which the costs and benefits of the two moving target defenses differ, and note some of their unique performance characteristics.
When reasoning about software security, researchers and practitioners use the phrase "attack surface" as a metaphor for risk. Enumerate and minimize the ways attackers can break in, the metaphor says, and risk is reduced and the system is better protected. But software systems are much more complicated than their surfaces. We propose function- and file-level attack surface metrics (proximity and risky walk) that enable fine-grained risk assessment. Our risky walk metric is highly configurable: we use PageRank on a probability-weighted call graph to simulate attacker behavior of finding or exploiting a vulnerability. We provide evidence-based guidance for deploying these metrics, including an extensive parameter tuning study. We conducted an empirical study on two large open source projects, FFmpeg and Wireshark, to investigate the potential correlation between our metrics and historical post-release vulnerabilities. We found our metrics to be statistically significantly associated with vulnerable functions/files with a small-to-large Cohen's d effect size. Our prediction model achieved an increase of 36% (in FFmpeg) and 27% (in Wireshark) in the average value of F-measure over a base model built with SLOC and coupling metrics. Our prediction model outperformed comparable models from prior literature with notable improvements: a 58% reduction in false negative rate, an 81% reduction in false positive rate, and a 548% increase in F-measure. These metrics advance vulnerability prevention by (a) being flexible in terms of granularity, (b) performing better than models in the vulnerability prediction literature, and (c) being tunable so that practitioners can tailor the metrics to their products and better assess security risk.
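A hedged sketch of the "risky walk" idea: a PageRank-style power iteration over a probability-weighted call graph, so functions that attacker-facing entry points reach with high probability accumulate more risk. The call graph, edge weights, and damping factor below are toy values, not the paper's tuned parameters.

```python
def risky_walk(edges, damping=0.85, iters=50):
    """edges: {caller: {callee: weight}} with per-caller weights summing to 1.
    Returns a risk score per function via PageRank-style power iteration."""
    nodes = set(edges) | {callee for out in edges.values() for callee in out}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for caller, out in edges.items():
            for callee, w in out.items():
                new[callee] += damping * rank[caller] * w
        rank = new
    return rank

call_graph = {
    "parse_input":  {"decode_frame": 0.7, "log": 0.3},   # hypothetical entry point
    "decode_frame": {"alloc_buffer": 1.0},
    "log":          {},
}
print(sorted(risky_walk(call_graph).items(), key=lambda kv: -kv[1]))
```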
After decades of cyber warfare, it is well known that the static and predictable behavior of cyber configurations provides a great advantage to adversaries in planning and launching their attacks successfully. At the same time, as cyber attacks are getting highly stealthy and more sophisticated, their detection and mitigation become much harder and more expensive. We developed a new foundation for moving target defense (MTD) based on cyber mutation, a new concept in cybersecurity to reverse this asymmetry in cyber warfare by embedding agility into cyber systems. Cyber mutation enables cyber systems to automatically change their configuration parameters in an unpredictable, safe and adaptive manner in order to proactively achieve one or more of the following MTD goals: (1) deceiving attackers so they cannot reach their goals, (2) disrupting their plans by changing adversarial behaviors, and (3) deterring adversaries by prohibitively increasing the attack effort and cost. In this talk, we will present the formal foundations, metrics and framework for developing effective cyber mutation techniques. The talk will also review several examples of developed techniques, including Random Host Mutation, Random Route Mutation, fingerprinting mutation, and mutable virtual networks. The talk will also address the evaluation and lessons learned for advancing future research in this area.
This paper describes a data driven approach to studying the science of cyber security (SoS). It argues that science is driven by data. It then describes issues and approaches towards the following three aspects: (i) Data Driven Science for Attack Detection and Mitigation, (ii) Foundations for Data Trustworthiness and Policy-based Sharing, and (iii) A Risk-based Approach to Security Metrics. We believe that the three aspects addressed in this paper will form the basis for studying the Science of Cyber Security.
We present PrivInfer, an expressive framework for writing and verifying differentially private Bayesian machine learning algorithms. Programs in PrivInfer are written in a rich functional probabilistic programming language with constructs for performing Bayesian inference. Then, differential privacy of programs is established using a relational refinement type system, in which refinements on probability types are indexed by a metric on distributions. Our framework leverages recent developments in Bayesian inference, probabilistic programming languages, and in relational refinement types. We demonstrate the expressiveness of PrivInfer by verifying privacy for several examples of private Bayesian inference.
Heterogeneous face recognition aims to identify or verify a person's identity by matching facial images of different modalities. In practice, it is known that its performance is highly influenced by modality inconsistency, appearance occlusions, illumination variations and expressions. In this paper, a new method, named ensemble of sparse cross-modal metrics, is proposed for tackling these challenging issues. In particular, a weak sparse cross-modal metric learning method is first developed to measure distances between samples of two modalities. It learns to adjust rank-one cross-modal metrics to satisfy two sets of triplet-based cross-modal distance constraints in a compact form. Meanwhile, a group-based feature selection is performed to enforce that features in the same position of the two modalities are selected simultaneously. By neglecting features that correspond to "noise" in the face regions (eyeglasses, expressions and so on), the performance of the learned weak metrics can be markedly improved. Finally, an ensemble framework is incorporated to combine the results of differently learned sparse metrics into a strong one. Extensive experiments on various face datasets demonstrate the benefit of such feature selection, especially when heavy occlusions exist. The proposed ensemble metric learning is shown to be superior to several state-of-the-art methods in heterogeneous face recognition.