Bibliography
Experimentation tools facilitate exploration of Tor performance and security research problems and allow researchers to safely and privately conduct Tor experiments without risking harm to real Tor users. However, researchers using these tools configure them to generate network traffic based on simplifying assumptions and outdated measurements, and without understanding the efficacy of their configuration choices. In this work, we design a novel technique for dynamically learning Tor network traffic models using hidden Markov modeling and privacy-preserving measurement techniques. We conduct a safe but detailed measurement study of Tor using 17 relays (~2% of Tor bandwidth) over the course of 6 months, measuring general statistics and models that can be used to generate a sequence of streams and packets. We show how our measurement results and traffic models can be used to generate traffic flows in private Tor networks and how our models are more realistic than standard and alternative network traffic generation methods.
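To make the modeling approach concrete, the following is a minimal sketch of how a hidden Markov model can drive synthetic traffic generation: a hidden state (here a hypothetical burst/idle pair, not the states learned in the paper) evolves by a transition matrix, and each step emits a packet observation whose timing depends on the state.

```python
import random

# Illustrative HMM for synthetic traffic generation (hypothetical states
# and probabilities, not the learned Tor model from the paper).
STATES = ["burst", "idle"]
TRANS = {"burst": {"burst": 0.85, "idle": 0.15},
         "idle": {"burst": 0.10, "idle": 0.90}}
# Emission parameter: mean inter-packet delay (seconds) per state.
DELAY_MEAN = {"burst": 0.005, "idle": 0.5}

def sample_stream(n_packets, state="idle"):
    """Sample (delay, direction) observations from the toy HMM."""
    packets = []
    for _ in range(n_packets):
        # Exponential inter-packet delay whose rate depends on the state.
        delay = random.expovariate(1.0 / DELAY_MEAN[state])
        direction = random.choice(["outbound", "inbound"])
        packets.append((delay, direction))
        # Advance the hidden state according to the transition matrix.
        r, acc = random.random(), 0.0
        for nxt, p in TRANS[state].items():
            acc += p
            if r < acc:
                state = nxt
                break
    return packets

print(sample_stream(5))
```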
With the evolution of programming languages, interpreted languages have gained widespread use in scientific and research computing. Interpreted languages are more portable, easier to use, and faster for prototyping than their ahead-of-time (AOT) counterparts such as C, C++, and Fortran. While traditionally considered slow to execute, interpreted languages have benefited from advances in just-in-time (JIT) compilation techniques that significantly improve their execution speed, in some cases even outperforming AOT languages. In this paper, we explore some challenges and design strategies in developing a high-performance parallel discrete event simulation engine, called Simian, written in interpreted languages with JIT capabilities, including Python, Lua, and JavaScript. Our results show that Simian with JIT performs similarly to AOT simulators such as MiniSSF and ROSS. We expect that with features like good performance, user-friendliness, and portability, just-in-time parallel simulation will become a common choice for modeling and simulation in the near future.
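As a point of reference, the core of any discrete event simulation engine of this kind is a time-ordered event queue; the sketch below shows that kernel in plain Python. The names and structure are illustrative, not Simian's actual API, which also provides entities, services, and parallel synchronization.

```python
import heapq

class Simulator:
    """Toy sequential discrete event simulation kernel (illustrative only)."""
    def __init__(self):
        self.now = 0.0
        self._queue = []  # min-heap of (timestamp, seq, handler, payload)
        self._seq = 0     # tie-breaker for events scheduled at the same time

    def schedule(self, delay, handler, payload=None):
        heapq.heappush(self._queue, (self.now + delay, self._seq, handler, payload))
        self._seq += 1

    def run(self, until=float("inf")):
        # Repeatedly pop the earliest event and execute its handler.
        while self._queue and self._queue[0][0] <= until:
            self.now, _, handler, payload = heapq.heappop(self._queue)
            handler(self, payload)

def ping(sim, hop):
    print(f"t={sim.now:.1f} hop={hop}")
    if hop < 3:
        sim.schedule(1.0, ping, hop + 1)  # schedule the next hop one time unit out

sim = Simulator()
sim.schedule(0.0, ping, 0)
sim.run()
```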
Given the growing sophistication of cyber attacks, designing a perfectly secure system is generally not possible. Quantitative security metrics are thus needed to measure and compare the relative security of proposed security designs and policies. Since the investigation of security breaches has shown a strong impact of human error, ignoring the human user in computing these metrics can lead to misleading results. Despite this, and although security researchers have long observed the impact of human behavior on system security, few improvements have been made in designing systems that are resilient to the uncertainties in how humans interact with a cyber system. In this work, we develop an approach for including models of user behavior, drawn from the social sciences and psychology, in the modeling of systems intended to be secure. We then illustrate how one of these models, namely general deterrence theory, can be used to study the effectiveness of password security requirement policies and the frequency of security audits in a typical organization. Finally, we discuss the many challenges that arise when adopting such a modeling approach, and present our recommendations for future work.
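As a loose illustration of how a deterrence-theoretic user model can enter a quantitative analysis, the toy model below treats audit frequency as raising the perceived certainty of detection; every functional form and parameter here is an assumption for illustration, not the paper's model.

```python
import math

# Toy general-deterrence model (illustrative assumptions only): a user's
# propensity to bypass a password policy falls as the perceived certainty
# of detection (driven by audit frequency) and sanction severity rise.
def p_violate(effort_saved, audit_rate, severity):
    deterrence = audit_rate * severity        # perceived certainty x severity
    return 1.0 / (1.0 + math.exp(deterrence - effort_saved))

# More frequent audits push the violation probability down for a fixed
# temptation (the effort a user saves by bypassing the policy).
for audits_per_year in (1, 4, 12):
    print(audits_per_year,
          round(p_violate(effort_saved=3.0,
                          audit_rate=audits_per_year * 0.5,
                          severity=1.0), 3))
```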
This brief paper reports on an early-stage, ongoing PhD project in the field of cyber-physical security in health care critical infrastructures. The research aims to develop a methodology that will increase the ability of health care critical infrastructures to recover securely. This admittedly ambitious attempt is still at an early stage; in this paper, it seeks to answer why cyber-physical security for health care infrastructures is important and of scientific interest. An initial PhD project methodology and expected outcomes are also discussed. The report concludes with emerging challenges and possible future directions.
Network systems, such as transportation systems and water supply systems, play important roles in daily life and industrial production. However, a variety of disruptive events occur during their lifetime, causing serious losses. Because disruption is inevitable, we should not only focus on improving the reliability or resistance of the system, but also pay attention to its ability to respond promptly and recover rapidly from disruptive events; that is, we need to pay more attention to resilience. In this paper, we describe two resilience models, quotient resilience and integral resilience, to measure the final recovered performance and the cumulative performance during the recovery process, respectively. Based on these two models, we optimize the system recovery strategies after disruption, focusing on the repair sequence of the damaged components and the resource allocation scheme. The proposed research can serve as guidance for prioritizing repair tasks and allocating resources sensibly.
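For reference, one standard formalization of these two notions reads as follows; the notation is assumed here, and the paper's exact definitions may differ.

```latex
% Quotient resilience: fraction of lost performance recovered by time t_r,
% where F(t) is system performance, t_0 the pre-disruption time, and t_d
% the time of maximum degradation.
R_Q(t_r) = \frac{F(t_r) - F(t_d)}{F(t_0) - F(t_d)}

% Integral resilience: performance accumulated during recovery, normalized
% by the nominal performance over the same horizon [t_d, t_r].
R_I = \frac{\int_{t_d}^{t_r} F(t)\,dt}{F(t_0)\,(t_r - t_d)}
```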
Customers need to know how reliable a new release is, and whether or not the new release has substantially different reliability, either better or worse, than the one currently in production. Customers are demanding quantitative evidence, based on pre-release metrics, to help them decide whether or not to upgrade (and thereby offer new features and capabilities to their customers). Finding ways to estimate future reliability performance is not easy: we have evaluated many pre-release development and test metrics in search of reliability predictors that are sufficiently accurate and also apply to a broad range of software products. This paper describes a successful model that has resulted from these efforts, and also presents both a functional extension and a further conceptual simplification of the extended model that enables us to better communicate key release information to internal stakeholders and customers, without sacrificing predictive accuracy or generalizability. Work remains to be done, but the results of the original model, the extended model, and the simplified version are encouraging and are currently being applied across a range of products and releases. To evaluate whether or not these early predictions are accurate, and also to compare releases that are available to customers, we use a field software reliability assessment mechanism that incorporates two types of customer experience metrics: field bug encounters normalized by usage, and field bug counts, also normalized by usage. Our 'release-over-release' strategy combines a 'maturity assessment' component (i.e., estimating reliability prior to release to the field) with a 'reliability assessment' component (i.e., gauging actual reliability after release to the field). This overall approach enables us both to predict reliability and to compare reliability results for recent releases of a product.
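As a simple illustration of the normalization idea, the sketch below compares two hypothetical releases on the two customer-experience metrics named above; the numbers and the choice of usage unit are invented for illustration.

```python
def field_metrics(bug_encounters, distinct_bugs, usage_units):
    """Normalize field quality signals by usage (illustrative; the paper's
    exact normalization and usage units are not specified here)."""
    return {
        "encounter_rate": bug_encounters / usage_units,
        "bug_rate": distinct_bugs / usage_units,
    }

# Compare two releases with different adoption levels.
r1 = field_metrics(bug_encounters=120, distinct_bugs=30, usage_units=4000.0)
r2 = field_metrics(bug_encounters=150, distinct_bugs=25, usage_units=9000.0)
print(r1)
print(r2)  # more raw encounters, yet more reliable per unit of usage
```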
Sophisticated cyber attacks by state-sponsored and criminal actors continue to plague government and industrial infrastructure. Intuitively, partitioning cyber systems into survivable, intrusion-tolerant compartments is a good idea: it prevents witting and unwitting insiders from moving laterally and reaching back to their command and control (C2) servers. However, there is a lack of artifacts that can predict the effectiveness of this approach in a realistic setting. We extend earlier work by relaxing simplifying assumptions and providing a new attacker-facing metric. In this article, we propose new closed-form mathematical models and a discrete-time simulation to predict three critical statistics: probability of compromise, probability of external host compromise, and probability of reachback. The results of our new artifacts agree with one another and with previous work, which suggests they are internally valid and a viable method to evaluate the effectiveness of cyber zone defense.
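The flavor of such a discrete-time simulation can be conveyed by the toy model below, which estimates the probability that an initial foothold escapes its zone; the spread and boundary-crossing parameters are illustrative assumptions, not the paper's calibrated values.

```python
import random

def simulate(zones, hosts_per_zone, p_spread, p_cross, steps, trials=2000):
    """Toy discrete-time simulation of lateral movement across zones.
    p_spread: per-step chance an infected host compromises an in-zone peer.
    p_cross: per-step chance of crossing a zone boundary (0 = perfect zoning).
    Returns the estimated probability that any host outside zone 0 is hit."""
    breaches = 0
    for _ in range(trials):
        infected = {(0, 0)}  # (zone, host): attacker's initial foothold
        for _ in range(steps):
            for zone, host in list(infected):
                if random.random() < p_spread:
                    infected.add((zone, random.randrange(hosts_per_zone)))
                if random.random() < p_cross:
                    infected.add((random.randrange(zones), 0))
        if any(z != 0 for z, _ in infected):
            breaches += 1
    return breaches / trials

print(simulate(zones=4, hosts_per_zone=10, p_spread=0.2, p_cross=0.01, steps=50))
```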
Tracing and integrating security requirements throughout the development process is a key challenge in security engineering. In socio-technical systems, security requirements for the organizational and technical aspects of a system are currently dealt with separately, giving rise to substantial misconceptions and errors. In this paper, we present a model-based security engineering framework for supporting system design at the organizational and technical levels. The key idea is to allow the involved experts to specify security requirements in the languages they are familiar with: business analysts use BPMN for procedural system descriptions; system developers use UML to design and implement the system architecture. Security requirements are captured via the language extensions SecBPMN2 and UMLsec. We provide a model transformation to bridge the conceptual gap between SecBPMN2 and UMLsec. Using UMLsec policies, various security properties of the resulting architecture can be verified. In a case study featuring an air traffic management system, we show how our framework can be practically applied.
In this paper, we describe the results of several experiments designed to test two dynamic network moving target defenses against a propagating data exfiltration attack. We designed a collection of metrics to assess the costs to mission activities and the benefits in the face of attacks and evaluated the impacts of the moving target defenses in both areas. Experiments leveraged Siege's Cyber-Quantification Framework to automatically provision the networks used in the experiment, install the two moving target defenses, collect data, and analyze the results. We identify areas in which the costs and benefits of the two moving target defenses differ, and note some of their unique performance characteristics.
Security attacks often exploit flaws that are not anticipated in an abstract design, but are introduced inadvertently when high-level interactions in the design are mapped to low-level behaviors in the supporting platform. This paper proposes a multi-representational approach to security analysis, where models capturing distinct (but possibly overlapping) views of a system are automatically composed in order to enable an end-to-end analysis. This approach allows the designer to incrementally explore the impact of design decisions on security, and discover attacks that span multiple layers of the system. This paper describes Poirot, a prototype implementation of the approach, and reports on our experience on applying Poirot to detect previously unknown security flaws in publicly deployed systems.
Dagger is a modeling and visualization framework that addresses the challenge of representing knowledge and information for decision-makers, enabling them to better comprehend the operational context of network security data. It allows users to answer critical questions such as “Given that I care about mission X, is there any reason I should be worried about what is going on in cyberspace?” or “If this system fails, will I still be able to accomplish my mission?”.
Cyber-Physical Systems (CPS) are systems that seamlessly integrate physical, computational, and networking components. Because these systems can directly affect physical components, it is critical to safeguard them against a wide range of attacks. In this paper, it is argued that an effective approach to achieving this goal is to systematically identify potential threats at the design phase of such systems, commonly achieved via threat modeling. In this context, a tool for systematic threat-modeling analysis of CPS is proposed. A real-world wireless railway temperature monitoring system is used as a case study to validate the proposed approach. The threats identified in the system are subsequently mitigated using National Institute of Standards and Technology (NIST) standards.
Reliability block diagram (RBD) models are a commonly used reliability analysis method. For static RBD models, combinatorial solution techniques are easy and efficient. However, static RBDs are limited in their ability to express varying system state, dependent events, and non-series-parallel topologies. A recent extension to RBDs, called Dynamic Reliability Block Diagrams (DRBD), has eliminated those limitations. This tool paper details the RBD implementation in the Möbius modeling framework and provides technical details for using RBDs independently or in composition with other Möbius modeling formalisms. The paper explains how the graphical front-end provides a user-friendly interface for specifying RBD models. Also detailed is the back-end implementation, which interfaces with the Möbius AFI to define and generate the executable models that the Möbius tool uses to evaluate system metrics.
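As background, the combinatorial evaluation of a static series-parallel RBD reduces to two textbook formulas, sketched below; dynamic behavior (varying state, dependence, non-series-parallel topologies) is exactly what these formulas cannot express.

```python
from math import prod

# Combinatorial evaluation of a static series-parallel RBD (illustrative;
# DRBD and Mobius handle the dynamic cases these formulas cannot).
def series(*r):
    """Series blocks: all must work, so reliabilities multiply."""
    return prod(r)

def parallel(*r):
    """Parallel blocks: the structure fails only if every branch fails."""
    return 1 - prod(1 - x for x in r)

# Example: two redundant pumps (0.9 each) in series with a controller (0.99).
print(series(parallel(0.9, 0.9), 0.99))  # 0.9801
```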
The cyber security exposure of resilient systems is frequently described as an attack surface. A larger surface area indicates increased exposure to threats and a higher risk of compromise. Ad-hoc addition of dynamic proactive defenses to distributed systems may inadvertently increase the attack surface. This can lead to cyber friendly fire, a condition in which adding superfluous or incorrectly configured cyber defenses unintentionally reduces security and harms mission effectiveness. Examples of cyber friendly fire include defenses which themselves expose vulnerabilities (e.g., through an unsecured admin tool), unknown interaction effects between existing and new defenses causing brittleness or unavailability, and new defenses which may provide security benefits, but cause a significant performance impact leading to mission failure through timeliness violations. This paper describes a prototype service capability for creating semantic models of attack surfaces and using those models to (1) automatically quantify and compare cost and security metrics across multiple surfaces, covering both system and defense aspects, and (2) automatically identify opportunities for minimizing attack surfaces, e.g., by removing interactions that are not required for successful mission execution.
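A rough sketch of the quantify-and-minimize idea follows; the scoring scheme and surface elements are hypothetical, not the service's actual semantic models.

```python
# Illustrative attack-surface comparison: each surface element carries an
# exposure score, and removing interactions the mission does not need
# shrinks the total surface.
SURFACE = [
    # (element, exposure, required_for_mission)
    ("ssh_admin_port", 0.8, False),
    ("https_api",      0.4, True),
    ("debug_endpoint", 0.9, False),
]

def total_exposure(surface):
    return sum(exposure for _, exposure, _ in surface)

minimized = [el for el in SURFACE if el[2]]  # drop non-mission interactions
print(total_exposure(SURFACE), "->", total_exposure(minimized))
```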
Intro: Computer network defense has models for attacks, and for incidents comprised of multiple attacks, after the fact. However, we lack an evidence-based model of the likelihood and intensity of attacks and incidents. Purpose: We propose a model of global capability advancement, the adversarial capability chain (ACC), to fill this need. The model enables cyber risk analysis to better understand the costs for an adversary to attack a system, which directly influences the cost to defend it. Method: The model is based on four historical studies of adversarial capabilities: the capability to exploit Windows XP, to exploit the Android API, to exploit Apache, and to administer compromised industrial control systems. Result: We propose the ACC with five phases: Discovery, Validation, Escalation, Democratization, and Ubiquity. We use the four case studies as examples of how the ACC can be applied and used to predict attack likelihood and intensity.
In this paper, a new method for quantitative evaluation of the security of cyber-physical systems (CPSs) is proposed. The proposed method models the different classes of adversarial attacks against CPSs, including cross-domain attacks, i.e., cyber-to-cyber and cyber-to-physical attacks. It also takes the secondary consequences of attacks on CPSs into consideration. The intrusion process of attackers is modeled using an attack graph, and the consequence estimation process of the attack is investigated using a process model. The security attributes and the special parameters involved in the security analysis of CPSs have been identified and considered. The quantitative evaluation is carried out using the probability of attacks, the time-to-shutdown of the system, and the security risks. The proposed model is validated in a case study by applying it to a boiling water power plant and estimating suitable security measures.
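To illustrate the attack-graph side of such an evaluation, the sketch below enumerates attack paths and multiplies per-step exploit success probabilities; the graph, probabilities, and cross-domain labels are hypothetical, not the paper's CPS model.

```python
# Illustrative attack-graph evaluation: each edge is an exploit step with a
# success probability; a path's probability is the product of its edges.
GRAPH = {
    "internet": [("hmi", 0.4)],        # cyber-to-cyber step
    "hmi":      [("plc", 0.3)],        # cyber-to-cyber step
    "plc":      [("actuator", 0.6)],   # cyber-to-physical step
}

def path_probabilities(graph, source, target, p=1.0, path=()):
    """Enumerate attack paths and their success probabilities via DFS."""
    path = path + (source,)
    if source == target:
        yield path, p
        return
    for nxt, edge_p in graph.get(source, []):
        if nxt not in path:  # avoid cycles
            yield from path_probabilities(graph, nxt, target, p * edge_p, path)

for path, prob in path_probabilities(GRAPH, "internet", "actuator"):
    print(" -> ".join(path), f"p={prob:.3f}")
```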
Security features are often hardwired into software applications, making it difficult to adapt security responses to reflect changes in runtime context and new attacks. In prior work, we proposed the idea of architecture-based self-protection as a way of separating adaptation logic from application logic and providing a global perspective for reasoning about security adaptations in the context of other business goals. In this paper, we present an approach, based on this idea, for combating denial-of-service (DoS) attacks. Our approach allows DoS-related tactics to be composed into more sophisticated mitigation strategies that encapsulate possible responses to a security problem. Then, utility-based reasoning can be used to consider different business contexts and qualities. We describe how this approach forms the underpinnings of a scientific approach to self-protection, allowing us to reason about how to make the best choice of mitigation at runtime. Moreover, we also show how formal analysis can be used to determine whether the mitigations cover the range of conditions the system is likely to encounter, and the effect of mitigations on other quality attributes of the system. We evaluate the approach using the Rainbow self-adaptive framework and show how Rainbow chooses DoS mitigation tactics that are sensitive to different business contexts.
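A minimal sketch of context-sensitive, utility-based tactic selection follows; the tactics, attribute scores, and weights are invented for illustration and are not Rainbow's actual strategy model.

```python
# Illustrative utility-based selection among DoS mitigation tactics.
TACTICS = {
    # tactic: (security_gain, performance_cost, user_impact)
    "add_capacity": (0.3, 0.7, 0.0),
    "throttle":     (0.6, 0.2, 0.4),
    "blackhole":    (0.9, 0.1, 0.8),
}

def best_tactic(weights):
    """Pick the tactic maximizing a weighted utility for a business context."""
    w_sec, w_perf, w_user = weights
    def utility(t):
        sec, cost, impact = TACTICS[t]
        return w_sec * sec - w_perf * cost - w_user * impact
    return max(TACTICS, key=utility)

print(best_tactic((1.0, 0.2, 0.2)))  # security-critical context -> blackhole
print(best_tactic((0.5, 0.2, 1.0)))  # user-sensitive context -> add_capacity
```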
Moving Target Defense (MTD) can enhance the resilience of cyber systems against attacks. Although many MTD techniques have been proposed, there is no systematic understanding or quantitative characterization of the power of MTD. In this paper, we propose a cyber epidemic dynamics approach to characterize the power of MTD. We define and investigate two complementary measures that are applicable when the defender aims to deploy MTD to achieve a certain security goal. One measure emphasizes the maximum portion of time during which the system can afford to stay in an undesired configuration (or posture), without considering the cost of deploying MTD. The other measure emphasizes the minimum cost of deploying MTD, while accommodating that the system has to stay in an undesired configuration (or posture) for a given portion of time. Our analytic studies lead to algorithms for optimally deploying MTD.
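For intuition, the toy model below discretizes SIS-style epidemic dynamics and models MTD as periodic switching between a strong and a weak posture; the parameters and switching scheme are illustrative assumptions, not the measures defined in the paper.

```python
# Toy cyber epidemic dynamics: infection spreads at rate beta and nodes
# recover at rate gamma; MTD changes the effective beta by switching the
# system between a strong and a weak configuration.
def fraction_compromised(beta_strong, beta_weak, gamma, strong_share,
                         steps=10000, dt=0.01):
    """strong_share: portion of each time unit spent in the strong posture."""
    i = 0.01  # initial fraction of compromised nodes
    for k in range(steps):
        in_strong = (k * dt) % 1.0 < strong_share
        beta = beta_strong if in_strong else beta_weak
        i += dt * (beta * i * (1 - i) - gamma * i)  # infection minus recovery
    return i

# More time in the strong posture drives the long-run compromise level down.
for share in (0.0, 0.5, 0.9):
    print(share, round(fraction_compromised(0.3, 0.8, 0.5, share), 3))
```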
We explore the emerging field of Cybersecurity Dynamics, a candidate foundation for the Science of Cybersecurity.
Trust is a necessary component in cybersecurity. It is a common task for a system to decide whether or not to trust the credential of an entity from another domain, issued by a third party. Generally, in cyberspace, connected and interacting systems largely rely on each other with respect to security, privacy, and performance. In their interactions, one entity or system needs to trust others, and this "trust" frequently becomes a vulnerability of that system. Aiming to mitigate this vulnerability, we are developing a computational theory of trust as part of our efforts toward a Science of Security. Previously, we developed a formal-semantics-based calculus of trust [3, 2], in which trust can be calculated based on a trustor's direct observation of the trustee's performance, or based on a trust network. In this paper, we construct a framework for reasoning about trust based on observed evidence. We take privacy in cloud computing as a driving application case [5].
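One simple example of evidence-based trust computation is the beta reputation estimate sketched below; it is offered as an illustration of the general idea, not as the formal-semantics calculus developed in [3, 2].

```python
# Minimal evidence-based trust estimate (the classic beta reputation model).
def trust(positive, negative):
    """Expected trustworthiness after observing `positive` good and
    `negative` bad interactions, with a uniform Beta(1, 1) prior."""
    return (positive + 1) / (positive + negative + 2)

print(trust(8, 2))   # ~0.833: mostly good behavior observed
print(trust(0, 0))   # 0.5: no evidence, fall back to the prior
```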
In this study, we present a control-theoretic technique for modeling routing in wireless multihop networks. We model ad hoc wireless networks as stochastic dynamical systems in which, as a base case, a centralized controller pre-computes optimal paths to the destination. The usefulness of this approach lies in the fact that it can help obtain bounds on the reliability of end-to-end packet transmissions. We compare this approach with the reliability achieved by some of the widely used routing techniques in multihop networks.
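The reliability-bound intuition can be illustrated with independent per-link success probabilities: the end-to-end delivery probability is the product over hops, and per-link retransmissions enter as a simple exponent. All values below are hypothetical.

```python
# End-to-end delivery reliability over a multihop path, assuming
# independent links: the path succeeds only if every hop succeeds.
def link_success(p, retries):
    """Probability a link delivers within `retries` + 1 attempts."""
    return 1 - (1 - p) ** (retries + 1)

def path_reliability(link_probs, retries=0):
    out = 1.0
    for p in link_probs:
        out *= link_success(p, retries)
    return out

# A 4-hop path with lossy links: retransmissions sharply improve reliability.
links = [0.9, 0.8, 0.85, 0.9]
print(path_reliability(links))             # no retries
print(path_reliability(links, retries=2))  # 2 retries per hop
```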
The Symposium and Bootcamp on the Science of Security (HotSoS) is a research event centered on the Science of Security (SoS). Following a successful invitational SoS Community Meeting in December 2012, HotSoS 2014 was the first open research event in what we expect will be a continuing series of such events. The key motivation behind developing a Science of Security is to address the fundamental problems of cybersecurity in a principled manner. Security has been intensively studied, but much previous research emphasizes the engineering of specific solutions without first developing a scientific understanding of the problem domain. All too often, security research conveys the flavor of identifying specific threats and removing them in an apparently ad hoc manner. The motivation behind the nascent Science of Security is to understand how computing systems are architected, built, used, and maintained, with a view to understanding and addressing security challenges systematically across their life cycle. In particular, two features distinguish the Science of Security from previous research programs on cybersecurity. Scope: the Science of Security considers not just computational artifacts, but also incorporates the human, social, and organizational aspects of computing within its purview. Approach: the Science of Security takes a decidedly scientific approach, based on the understanding of empirical evaluation and theoretical foundations as developed in the natural and social sciences, but adapted as appropriate for the "artificial science" (paraphrasing Herb Simon's term) that is computing.