Bibliography
Over the years, cybercriminals have misused the Domain Name System (DNS), a critical component of the Internet, to gain profit. Despite this persisting trend, little empirical information exists about the security of Top-Level Domains (TLDs) and the overall 'health' of the DNS ecosystem. In this paper, we present security metrics for this ecosystem and measure their operational values using three representative phishing and malware datasets. We benchmark entire TLDs against the rest of the market. We explicitly distinguish these metrics from the idea of measuring security performance, because the measured values are driven by multiple factors, not just by the performance of the particular market player. We consider two types of security metrics: occurrence of abuse and persistence of abuse. In conjunction, they provide a good understanding of the overall health of a TLD. We demonstrate that attackers abuse a variety of free services with good reputation, affecting not only the reputation of those services but of entire TLDs. We find that, when normalized by size, old TLDs like .com host more bad content than new generic TLDs. We propose a statistical regression model to analyze how the different properties of TLD intermediaries relate to abuse counts. We find that, next to TLD size, abuse is positively associated with domain pricing (i.e., registries that provide free domain registrations witness more abuse). Last but not least, we observe a negative relation between the DNSSEC deployment rate and the count of phishing domains.
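As an illustration of the kind of regression analysis described in this abstract, the sketch below fits a count-regression model relating TLD properties to abuse counts. The data, column names, and the negative binomial family are assumptions for the example, not the authors' specification.

```python
# Illustrative sketch only: a count-regression model relating TLD properties to
# abuse counts, in the spirit of the analysis above. Data, column names, and the
# negative binomial family are assumptions, not the authors' specification.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
tlds = pd.DataFrame({
    "log_size": rng.normal(12, 2, n),            # log of registered domains (hypothetical)
    "free_registration": rng.integers(0, 2, n),  # 1 if the registry offers free domains
    "dnssec_rate": rng.uniform(0, 1, n),         # fraction of DNSSEC-signed domains
})
# Synthetic abuse counts just so the example runs end to end.
mu = np.exp(0.6 * tlds["log_size"] + 0.8 * tlds["free_registration"]
            - 1.2 * tlds["dnssec_rate"] - 5.0)
tlds["abuse_count"] = rng.poisson(mu)

X = sm.add_constant(tlds[["log_size", "free_registration", "dnssec_rate"]])
model = sm.GLM(tlds["abuse_count"], X, family=sm.families.NegativeBinomial()).fit()
print(model.summary())  # the sign of each coefficient indicates the direction of association
```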
The protection of confidential information has become very important with the increase of data sharing and storage on public domains. Data confidentiality is accomplished through the use of ciphers that encrypt and decrypt the data to impede unauthorized access. Emerging heterogeneous platforms provide an ideal environment to use hardware acceleration to improve application performance. In this paper, we explore the performance benefits of an AES hardware accelerator versus the software implementation for multiple cipher modes on the Zynq 7000 All-Programmable System-on-a-Chip (SoC). The accelerator is implemented on the FPGA fabric of the SoC and utilizes DMA for interfacing to the CPU. File encryption and decryption of varying file sizes are used as the workload, with execution time and throughput as the metrics for comparing the performance of the hardware and software implementations. The performance evaluations show that the accelerated AES operations achieve a speedup of 7 times relative to the software implementation and throughput upwards of 350 MB/s for the counter cipher mode, with more modest improvements for the other cipher modes.
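For the software side of such a comparison, a minimal measurement sketch is shown below. It is not the paper's Zynq setup and does not involve the FPGA accelerator; it only times a software AES-CTR encryption using the pycryptodome library (an assumption about tooling) and reports throughput in MB/s.

```python
# Sketch of the software-side measurement only (the paper's hardware accelerator
# runs on the Zynq FPGA fabric and is not reproduced here). Uses pycryptodome,
# which is an assumption about tooling, not the authors' implementation.
import os
import time
from Crypto.Cipher import AES  # pip install pycryptodome

key = os.urandom(16)
nonce = os.urandom(8)
data = os.urandom(64 * 1024 * 1024)  # 64 MiB workload, analogous to a large file

cipher = AES.new(key, AES.MODE_CTR, nonce=nonce)
start = time.perf_counter()
cipher.encrypt(data)
elapsed = time.perf_counter() - start

throughput_mb_s = len(data) / (1024 * 1024) / elapsed
print(f"software AES-CTR: {throughput_mb_s:.1f} MB/s in {elapsed:.3f} s")
# the speedup metric would then be t_software / t_hardware for the same file size
```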
Fault attacks have become a serious threat to system security and need to be evaluated at the design stage. Existing methods usually ignore the intrinsic uncertainty in the attack process and suffer from low scalability. In this paper, we develop a general framework to evaluate system vulnerability against fault attacks. A holistic model for fault injection is incorporated to capture the probabilistic nature of the attack process. Based on the probabilistic model, a security metric named the System Security Factor (SSF) is defined to measure system vulnerability. In the framework, a Monte Carlo method is leveraged to enable a feasible evaluation of the SSF for different systems, security policies, and attack techniques. We enhance the framework with a novel system pre-characterization procedure, based on which an importance sampling strategy is proposed. Experimental results on a commercial processor demonstrate that, compared to random sampling, a 2500X speedup is achieved with the proposed sampling strategy. Meanwhile, 3% of registers are identified as contributing more than 95% of the SSF. By hardening these registers, a 6.5X security improvement can be achieved with less than 2% area overhead.
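The toy sketch below illustrates only the generic sampling idea behind such a framework: estimating the probability that a randomly injected fault produces an exploitable outcome, comparing plain Monte Carlo with importance sampling that over-samples pre-characterized "critical" registers. Register counts and success probabilities are invented for the example and are not the paper's SSF model.

```python
# Toy illustration of the sampling idea only: estimate the probability that a
# randomly injected fault breaks security, comparing plain Monte Carlo with
# importance sampling that over-samples "critical" registers. All numbers are
# invented; this is not the paper's SSF model.
import numpy as np

rng = np.random.default_rng(1)
n_regs = 10_000
critical = np.zeros(n_regs, dtype=bool)
critical[:300] = True          # pretend pre-characterization flagged 3% of registers
p_success_critical = 0.4       # chance a fault in a critical register breaks security
p_success_other = 0.001

def trial(reg):
    p = p_success_critical if critical[reg] else p_success_other
    return rng.random() < p

N = 20_000

# Plain Monte Carlo: pick fault locations uniformly at random.
mc = np.mean([trial(rng.integers(n_regs)) for _ in range(N)])

# Importance sampling: draw a critical register with probability q, then reweight.
q = 0.5
hits = 0.0
for _ in range(N):
    if rng.random() < q:
        reg = rng.integers(300)                 # a critical register
        w = (300 / n_regs) / q                  # true probability / proposal probability
    else:
        reg = rng.integers(300, n_regs)
        w = ((n_regs - 300) / n_regs) / (1 - q)
    hits += w * trial(reg)
is_est = hits / N

print(f"plain MC estimate: {mc:.4f}, importance-sampling estimate: {is_est:.4f}")
```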
In this work, we investigate existing and new metrics for evaluating the transient stability of power systems to quantify the impact of distributed control schemes. Specifically, an energy storage system (ESS)-based control scheme that builds on feedback linearization theory is implemented in the power system to enhance its transient stability. We study the impact of incorporating such ESS-based distributed control on specific transient stability metrics, which include critical clearing time, critical control activation time, system stability time, rotor angle stability index, rotor speed stability index, rate of change of frequency, and control power. The stability metrics are evaluated using the IEEE 68-bus test power system. Numerical results demonstrate the value of the distributed control scheme in enhancing the transient stability metrics of power systems.
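One of the listed metrics, the rate of change of frequency (ROCOF), is straightforward to compute from a sampled frequency trace. The sketch below uses a synthetic trace; in the paper the metrics are evaluated on the IEEE 68-bus test system.

```python
# Minimal sketch: computing the rate-of-change-of-frequency (ROCOF) metric from
# a sampled frequency trace. The trace here is synthetic, not a 68-bus simulation.
import numpy as np

t = np.linspace(0.0, 5.0, 501)                              # 5 s at 10 ms resolution
f = 60.0 - 0.3 * np.exp(-t) * np.sin(2 * np.pi * 0.8 * t)   # synthetic post-fault swing

rocof = np.gradient(f, t)                                   # Hz per second
print(f"max |ROCOF| = {np.max(np.abs(rocof)):.3f} Hz/s")
```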
This paper proposes a hybrid metric sorting (HMS) method for successive cancellation list decoders for polar codes, in which metric sorting plays a critical role in the decoding process. We review state-of-the-art metric sorting methods and combine their advantages to generate the proposed method. Owing to its optimized architecture, the proposed HMS method effectively reduces the number of comparing stages with little increase in the number of comparisons. Evaluation results show that about 25 percent of comparing stages can be removed by HMS compared with state-of-the-art methods. The proposed method thus offers a latency reduction for hardware implementation.
Today, we witness the emergence of smart environments, where devices are able to connect independently without human intervention. Mobile ad hoc networks (MANETs) are an example of smart environments that are widely deployed in public spaces. They offer great services and features compared with wired systems. However, these networks are more sensitive to malicious attacks because of the lack of infrastructure and the self-organizing nature of devices. Thus, communication between nodes is much more exposed to various security risks than in other networks. In this paper, we present a synthetic study of security concepts for MANETs, and then we introduce a contribution based on evaluating link quality, using the ETX metric, to enhance network availability.
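For reference, ETX is the standard expected transmission count metric: for a link, ETX = 1 / (d_f x d_r), where d_f and d_r are the measured forward and reverse delivery ratios, and a route's ETX is the sum over its links. The sketch below is a generic illustration of this metric, not the paper's specific contribution.

```python
# Generic illustration of the ETX link-quality metric referenced above:
# link ETX = 1 / (d_f * d_r), route ETX = sum of link ETX values.
def link_etx(delivery_fwd: float, delivery_rev: float) -> float:
    return 1.0 / (delivery_fwd * delivery_rev)

def route_etx(links):
    """links: iterable of (d_f, d_r) delivery-ratio pairs along the route."""
    return sum(link_etx(df, dr) for df, dr in links)

# Example: a 3-hop route with hypothetical delivery ratios.
print(route_etx([(0.9, 0.95), (0.8, 0.85), (0.99, 0.99)]))
```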
Privacy is a very active subject of research and also of debate in political circles. In order to make good decisions about privacy, we need measurement systems for privacy. Most traditional measures, such as k-anonymity, lack expressiveness in many cases. We present a privacy measuring framework which can be used to measure the value of privacy to an individual and also to evaluate the efficacy of privacy enhancing technologies. Our method is centered on a subject, whose privacy can be measured through the amount and value of information learned about the subject by some observers. This gives rise to interesting probabilistic models for the value of privacy and measures for privacy enhancing technologies.
In recent years, behavioral biometrics have become a popular approach to support continuous authentication systems. Most generally, a continuous authentication system can make two types of errors: false rejects and false accepts. Based on this, the most commonly reported metrics to evaluate systems are the False Reject Rate (FRR) and False Accept Rate (FAR). However, most papers only report the mean of these measures with little attention paid to their distribution. This is problematic, as systematic errors allow attackers to perpetually escape detection while random errors are less severe. Using 16 biometric datasets, we show that these systematic errors are very common in the wild. We show that some biometrics (such as eye movements) are particularly prone to systematic errors, while others (such as touchscreen inputs) show more even error distributions. Our results also show that the inclusion of some distinctive features lowers average error rates but significantly increases the prevalence of systematic errors. As such, blind optimization of the mean Equal Error Rate (EER) through feature engineering or selection can sometimes lead to lower security. Following this result, we propose the Gini Coefficient (GC) as an additional metric to accurately capture different error distributions. We demonstrate the usefulness of this measure both to compare different systems and to guide researchers during feature selection. In addition to the selection of features and classifiers, some non-functional machine learning methodologies also affect error rates. The most notable examples of this are the selection of training data and the attacker model used to develop the negative class. 13 out of the 25 papers we analyzed either include imposter data in the negative class or randomly sample training data from the entire dataset, with a further 6 not giving any information on the methodology used. Using real-world data, we show that both of these decisions lead to significant underestimation of error rates, by 63% and 81%, respectively. This is an alarming result, as it suggests that researchers are either unaware of the magnitude of these effects or might even be purposefully attempting to over-optimize their EER without actually improving the system.
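A minimal sketch of the proposed Gini coefficient over per-user error rates is shown below. A value near 0 means errors are spread evenly across users; a value near 1 means a few users account for most errors. The exact normalization the authors use is not reproduced here.

```python
# Minimal sketch: Gini coefficient over per-user error rates, the additional
# metric proposed in the abstract above. The authors' exact normalization is
# an assumption; this is the standard sorted-cumulative-sum formulation.
import numpy as np

def gini(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    if x.sum() == 0:
        return 0.0
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

even_errors = [0.05, 0.05, 0.05, 0.05]          # errors spread evenly across users
systematic  = [0.00, 0.00, 0.00, 0.20]          # one user is always misclassified
print(gini(even_errors), gini(systematic))      # ~0.0 vs. ~0.75
```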
This paper suggests several techniques for computer network risk assessment based on the Common Vulnerability Scoring System (CVSS) and attack modeling. The techniques use a set of integrated security metrics and consider input data from security information and event management (SIEM) systems. The risk assessment techniques differ according to the input data used, allowing risk assessments that meet given accuracy and efficiency requirements. Input data includes network characteristics, attacks, attacker characteristics, security events, and countermeasures. The tool that implements these techniques is presented. Experiments demonstrate the operation of the techniques in different security situations.
The attack graph technique is a common tool for the evaluation of network security. However, attack graphs are generally too large and complex to be understood and interpreted by security administrators. This paper proposes an analysis framework for security attack graphs for a given IT infrastructure system. First, in order to facilitate the discovery of interconnectivities among vulnerabilities in a network, multi-host multi-stage vulnerability analysis (MulVAL) is employed to generate an attack graph for a given network topology. Then a novel algorithm is applied to refine the attack graph and generate a simplified graph called a transition graph. Next, a Markov model is used to project the future security posture of the system. Finally, the framework is evaluated by applying it to a typical IT network scenario with specific services, network configurations, and vulnerabilities.
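The projection step of such a Markov model reduces to propagating a state distribution through a transition matrix. The sketch below uses hypothetical security states and probabilities, not those derived from MulVAL in the paper.

```python
# Minimal sketch of the Markov projection step: given a transition matrix over
# simplified security states and the current state distribution, project the
# security posture k steps ahead. States and probabilities are hypothetical.
import numpy as np

# States: 0 = no foothold, 1 = user-level compromise, 2 = root on critical host
P = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])   # absorbing "compromised" state
pi0 = np.array([1.0, 0.0, 0.0])      # system currently uncompromised

k = 30                               # projection horizon, e.g. 30 time steps
pi_k = pi0 @ np.linalg.matrix_power(P, k)
print(f"probability of critical compromise after {k} steps: {pi_k[2]:.3f}")
```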
It is difficult to assess the security of modern enterprise networks because they are usually dynamic, with configuration changes (such as changes in topology, firewall rules, etc.). Graphical security models (e.g., Attack Graphs and Attack Trees) and security metrics (e.g., attack cost, shortest attack path) are widely used to systematically analyse the security posture of network systems. However, there are problems in using them to assess the security of dynamic networks. First, the existing graphical security models are unable to capture dynamic changes occurring in the networks over time. Second, the existing security metrics are not designed for dynamic networks, so their effectiveness under dynamic changes in the network is still unknown. In this paper, we conduct a comprehensive analysis via simulations to evaluate the effectiveness of security metrics using a Temporal Hierarchical Attack Representation Model. Further, we investigate the varying effects of security metrics when changes are observed in the dynamic networks. Our experimental analysis shows that different security metrics capture different changes in security posture as the network changes.
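One of the metrics named above, the shortest attack path, can be computed on a static attack graph as shown in the sketch below. It does not model the temporal layers of the Temporal Hierarchical Attack Representation Model; the topology and edge costs are hypothetical.

```python
# Minimal sketch of the shortest-attack-path metric on a toy attack graph.
# Edge weights are hypothetical attack costs; the temporal layers of T-HARM
# are not modeled here.
import networkx as nx  # pip install networkx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("internet", "web_server", 3.0),
    ("web_server", "app_server", 2.0),
    ("internet", "vpn_gateway", 5.0),
    ("vpn_gateway", "app_server", 1.0),
    ("app_server", "database", 4.0),
], weight="cost")

path = nx.shortest_path(G, "internet", "database", weight="cost")
cost = nx.shortest_path_length(G, "internet", "database", weight="cost")
print(path, cost)   # ['internet', 'web_server', 'app_server', 'database'], 9.0
```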
Distributed storage systems and caching systems are becoming widespread, and this motivates the increasing interest in assessing their achievable performance in terms of reliability for legitimate users and security against malicious users. While the assessment of reliability benefits from the availability of well-established metrics and tools, assessing security is more challenging. The classical cryptographic approach aims at estimating the computational effort for an attacker to break the system, and ensuring that it is far above any feasible amount. This has the limitation of depending on attack algorithms and advances in computing power. The information-theoretic approach instead exploits capacity measures to achieve unconditional security against attackers, but often does not provide practical recipes to reach such a condition. We propose a mixed cryptographic/information-theoretic approach with a twofold goal: estimating the levels of information-theoretic security and defining a practical scheme able to achieve them. In order to find optimal choices of the parameters of the proposed scheme, we exploit an effective probabilistic model checker, which allows us to overcome several limitations of more conventional methods.
With the increasing use of mobile phones in contemporary society, more and more networked computers are connected to each other. This has brought along security issues. To address these issues, both the research and development communities are trying to build more secure software. However, the question remains of how secure software is defined and how security can be measured. In this paper, we study this problem by examining what kinds of security measurement tools (i.e., metrics) are available, and what these tools and metrics reveal about the security of software. As a result of the study, we observed that security verification activities fall into two main categories: evaluation and assurance. There exist 34 metrics for measuring security, of which 29 are assurance metrics and 5 are evaluation metrics. Evaluating and studying these metrics leads us to the conclusion that their general quality is not at a level suitable for daily engineering workflows. They have both theoretical and practical issues that require further research and improvement.
Friendly jamming is a physical layer security technique that utilizes extra available nodes to jam any eavesdroppers. This paper considers the use of additional available nodes as friendly jammers in order to improve the security performance of a route through a wireless area network. One of the unresolved technical challenges is combining security metrics with typical service quality metrics. In this context, this paper considers the problem of routing through a D2D network while jointly minimizing the secrecy outage probability (SOP) and connection outage probability (COP), using friendly jamming to improve the SOP of each link. The jamming powers are determined to place nulls at friendly receivers while maximizing the power directed at eavesdroppers. Then the route metrics are derived, and the problem is framed as a convex optimization problem. We also consider that not all network users value SOP and COP equally, and so introduce an auxiliary variable to tune the optimization between the two metrics.
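One plausible way to read "an auxiliary variable to tune the optimization between the two metrics" is a weighted scalarization of per-link SOP and COP route costs, sketched below under the assumption of independent link outages; the paper's exact objective is not reproduced here.

```latex
% One plausible scalarization (not necessarily the paper's exact objective):
% a tuning variable \lambda trades off the secrecy and connection outage terms
% accumulated along the route \mathcal{R}, assuming independent link outages.
\min_{\mathcal{R}} \; \sum_{\ell \in \mathcal{R}}
  \Big[ \lambda \, \big(-\ln\!\big(1 - \mathrm{SOP}_\ell\big)\big)
      + (1-\lambda) \, \big(-\ln\!\big(1 - \mathrm{COP}_\ell\big)\big) \Big],
\qquad 0 \le \lambda \le 1 .
```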
This paper describes a data driven approach to studying the science of cyber security (SoS). It argues that science is driven by data. It then describes issues and approaches towards the following three aspects: (i) Data Driven Science for Attack Detection and Mitigation, (ii) Foundations for Data Trustworthiness and Policy-based Sharing, and (iii) A Risk-based Approach to Security Metrics. We believe that the three aspects addressed in this paper will form the basis for studying the Science of Cyber Security.
Given the growing sophistication of cyber attacks, designing a perfectly secure system is not generally possible. Quantitative security metrics are thus needed to measure and compare the relative security of proposed security designs and policies. Since the investigation of security breaches has shown a strong impact of human errors, ignoring the human user in computing these metrics can lead to misleading results. Despite this, and although security researchers have long observed the impact of human behavior on system security, few improvements have been made in designing systems that are resilient to the uncertainties in how humans interact with a cyber system. In this work, we develop an approach for including models of user behavior, emanating from the fields of social sciences and psychology, in the modeling of systems intended to be secure. We then illustrate how one of these models, namely general deterrence theory, can be used to study the effectiveness of the password security requirements policy and the frequency of security audits in a typical organization. Finally, we discuss the many challenges that arise when adopting such a modeling approach, and then present our recommendations for future work.
The unauthorized access or theft of sensitive, personal information is becoming a weekly news item. The illegal dissemination of proprietary information to media outlets or competitors costs industry untold millions in remediation costs and losses every year. The 2013 data breach at Target, Inc. that impacted 70 million customers is estimated to cost upwards of 1 billion dollars. Stolen information is also being used to damage political figures and adversely influence foreign and domestic policy. In this paper, we offer some techniques for better understanding the health and security of our networks. This understanding will help professionals to identify network behavior, anomalies and other latent, systematic issues in their networks. Software-Defined Networks (SDN) enable the collection of network operation and configuration metrics that are not readily available, if available at all, in traditional networks. SDN also enables the development of software protocols and tools that increase visibility into the network. By accumulating and analyzing a time series data repository (TSDR) of SDN and traditional metrics, along with data gathered from our tools, we can establish behavior and security patterns for SDN and SDN hybrid networks. Our research helps provide a framework for a range of techniques for administrators and automated system protection services that give insight into the health and security of the network. To narrow the scope of our research, this paper focuses on a subset of those techniques as they apply to the confidence analysis of a specific network path at the time of use or inspection. This confidence analysis allows users, administrators and autonomous systems to decide whether a network path is secure enough for sending their sensitive information. Our testing shows that malicious activity can be identified quickly as a single metric indicator and consistently within a multi-factor indicator analysis. Our research includes the implementation of these techniques in a network path confidence analysis service, called Confidence Assessment as a Service. Using our behavior and security patterns, this service evaluates a specific network path and provides a confidence score for that path before, during and after the transmission of sensitive data. Our research and tools give administrators and autonomous systems a much better understanding of the internal operation and configuration of their networks. Our framework will also provide other services that will focus on detecting latent, systemic network problems. By providing a better understanding of network configuration and operation, our research enables a more secure and dependable network and helps prevent the theft of information by malicious actors.
This paper introduces combined data integrity and availability attacks to expand the attack scenarios against power system state estimation. The goal of the adversary who uses the combined attack is to perturb the state estimates while remaining hidden from the observer. We propose security metrics that quantify the vulnerability of power grids to combined data attacks under single- and multi-path routing communication models. In order to evaluate the proposed security metrics, we formulate them as mixed integer linear programming (MILP) problems. The relation between the security metrics of combined data attacks and pure data integrity attacks is analyzed, based on which we show that, when data availability and data integrity attacks have the same cost, the two metrics coincide. When data availability attacks have a lower cost than data integrity attacks, we show that a combined data attack could be executed with fewer attack resources compared to pure data integrity attacks. Furthermore, it is shown that combined data attacks would bypass integrity-focused mitigation schemes. These conclusions are supported by results obtained on a power system model, with and without a communication model using single- or multi-path routing.
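For orientation, MILP security metrics of this kind typically build on the classic data-integrity security index sketched below, where $H$ is the measurement Jacobian of the state estimator, $c$ the induced perturbation of the state estimate, and $a$ the vector of injected measurement corruptions; the paper's combined metric, which additionally accounts for availability attacks and the routing model, is not reproduced here.

```latex
% Classic data-integrity security index: the cheapest undetectable attack that
% corrupts a target measurement k (illustration only, not the paper's combined
% integrity/availability formulation).
\alpha_k \;=\; \min_{a,\,c} \; \|a\|_0
\quad \text{s.t.} \quad a = H c, \qquad a_k = 1 .
```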
The Software Assurance Metrics and Tool Evaluation (SAMATE) project at the National Institute of Standards and Technology (NIST) has created the Software Assurance Reference Dataset (SARD) to provide researchers and software security assurance tool developers with a set of known security flaws. As part of an empirical evaluation of a runtime monitoring framework, two test suites were executed and monitored, revealing deficiencies which led to a collaboration with the NIST SAMATE team to provide replacements. Test Suites 45 and 46 are analyzed, discussed, and updated to improve accuracy, consistency, preciseness, and automation. Empirical results show metrics such as recall, precision, and F-Measure are all impacted by invalid base assumptions regarding the test suites.
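The metrics named in this abstract depend directly on how a tool's findings are matched against the test suite's ground truth; once the counts are fixed, the computation itself is standard, as in the sketch below (the counts shown are hypothetical).

```python
# Standard computation of the metrics named above from matching counts.
def precision_recall_f1(true_pos: int, false_pos: int, false_neg: int):
    precision = true_pos / (true_pos + false_pos) if true_pos + false_pos else 0.0
    recall = true_pos / (true_pos + false_neg) if true_pos + false_neg else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical counts from matching a tool's reports against SARD ground truth.
print(precision_recall_f1(true_pos=42, false_pos=8, false_neg=14))
```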
Human mobility is one of the key topics to be considered in the networks of the future, by both industrial and research communities that are already focused on multidisciplinary applications and user-centric systems. While the rapid proliferation of networks and high-tech miniature sensors makes this reality possible, the ever-growing complexity of the metrics and parameters governing such systems raises serious issues in terms of privacy, security and computing capability. In this demonstration, we show a new system able to estimate a user's mobility profile based on anonymized and lightweight smartphone data. In particular, this system is composed of (1) a web analytics platform, able to analyze multimodal sensing traces and improve our understanding of complex mobility patterns, and (2) a smartphone application, able to show a user's profile generated locally in the form of a spider graph. This application uses anonymized and privacy-friendly data and methods, obtained through the combination of Wi-Fi traces, activity detection and graph theory, made available independent of any personal information. A video showing the different interfaces to be presented is available online.
We present PrivInfer, an expressive framework for writing and verifying differentially private Bayesian machine learning algorithms. Programs in PrivInfer are written in a rich functional probabilistic programming language with constructs for performing Bayesian inference. Differential privacy of programs is then established using a relational refinement type system, in which refinements on probability types are indexed by a metric on distributions. Our framework leverages recent developments in Bayesian inference, probabilistic programming languages, and relational refinement types. We demonstrate the expressiveness of PrivInfer by verifying privacy for several examples of private Bayesian inference.
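The sketch below is not PrivInfer itself (which is a typed functional probabilistic language); it is only the standard Laplace mechanism, shown as a language-agnostic illustration of the epsilon-differential-privacy guarantee that such a refinement type system is designed to verify.

```python
# Not PrivInfer: the standard Laplace mechanism, shown only to illustrate the
# epsilon-differential-privacy guarantee that PrivInfer's refinement types track.
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng=np.random.default_rng()) -> float:
    """Release true_value with noise calibrated to sensitivity / epsilon."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release a count query (sensitivity 1) with epsilon = 0.5.
print(laplace_mechanism(true_value=128.0, sensitivity=1.0, epsilon=0.5))
```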