A Hypothesis Testing Framework for Network Security - January 2015
Public Audience
Purpose: To highlight project progress. Information is generally presented at a high level that is accessible to the interested public. All information contained in the report (regions 1-3) is a Government Deliverable/CDRL.
PI(s): P. Brighten Godfrey
Co-PI(s): Matthew Caesar, David Nicol, Bill Sanders, and Kevin Jin (Illinois Institute of Technology)
HARD PROBLEM(S) ADDRESSED
This refers to the Hard Problems released in November 2012.
- Scalability and composability
- Policy-governed secure collaboration
- Predictive security metrics
- Resilient architectures
PUBLICATIONS
Papers published in this quarter as a result of this research. Include title, author(s), venue published/presented, and a short description or abstract. Identify which hard problem(s) the publication addressed. Papers that have not yet been published should be reported in region 2 below.
Current quarter:
[1] Wenxuan Zhou, Matthew Caesar, Brighten Godfrey, and Dong (Kevin) Jin, "Enforcing Generalized Consistency Properties in Software-Defined Networks", to appear, USENIX Symposium on Networked Systems Design and Implementation (NSDI 2015).
Abstract: It is critical to ensure that network policy remains consistent as network configuration is changed, either automatically or manually. However, existing policy-consistency techniques impose a high cost in update delay and/or routers' forwarding table space. We propose the Generalized Consistency Constructor (GCC), a fast and generic framework to support customizable consistency policies during network updates. GCC effectively reduces the task of synthesizing an update plan under the constraint of a given consistency policy to a verification problem, by checking whether a particular update can safely be installed in the network at a particular time, and greedily processing network state transitions to heuristically minimize transition delay. We show that a large class of consistency policies is guaranteed by this greedy heuristic alone; in addition, GCC makes judicious use of existing heavier-weight network update mechanisms to provide guarantees when necessary. As such, GCC nearly achieves the "best of both worlds": the efficiency of simply passing through updates in most cases, with the consistency guarantees of more heavyweight techniques. Mininet and physical testbed evaluations demonstrate GCC's capability to achieve various types of consistency, such as path and bandwidth properties, typically with close to zero switch memory overhead and up to a 3x reduction in update delay compared to previous solutions.
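The abstract describes GCC's core strategy: treat update synthesis as repeated verification, installing each update as soon as the consistency policy permits and deferring it otherwise. The following is a minimal Python sketch of that greedy loop, under assumptions of our own; verify_consistent and install are hypothetical placeholders for the policy check and data-plane installation, not GCC's actual interfaces.

    from collections import deque

    def greedy_update(pending_updates, network_state, verify_consistent, install):
        """Greedily install updates whenever the consistency policy allows.

        pending_updates:  iterable of (switch, rule) changes to apply.
        network_state:    dict mapping each switch to its installed rules.
        verify_consistent(state): True iff state satisfies the policy (assumed helper).
        install(switch, rule):    pushes a rule to the data plane (assumed helper).
        """
        queue = deque(pending_updates)
        progress = True
        while queue and progress:
            progress = False
            for _ in range(len(queue)):
                switch, rule = queue.popleft()
                candidate = dict(network_state)
                candidate[switch] = candidate.get(switch, []) + [rule]
                if verify_consistent(candidate):   # safe to apply right now
                    install(switch, rule)
                    network_state = candidate
                    progress = True
                else:                              # defer and retry next pass
                    queue.append((switch, rule))
        # Anything still queued could not be installed greedily; per the
        # abstract, GCC would fall back to a heavier-weight update mechanism.
        return network_state, list(queue)

In GCC the interesting work happens inside the verification step; the sketch only shows how greedy installation reduces most updates to that check.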
Past quarters:
[2] Soudeh Ghorbani and Brighten Godfrey, "Towards Correct Network Virtualization", ACM Workshop on Hot Topics in Software Defined Networks (HotSDN), August 2014. (Won the best paper award.)
Abstract: In SDN, the underlying infrastructure is usually abstracted for applications that can treat the network as a logical or virtual entity. Commonly, the "mappings" between virtual abstractions and their actual physical implementations are not one-to-one, e.g., a single "big switch" abstract object might be implemented using a distributed set of physical devices. A key question is, what abstractions could be mapped to multiple physical elements while faithfully preserving their native semantics? E.g., can an application developer always expect her abstract "big switch" to act exactly as a physical big switch, despite being implemented using multiple physical switches in reality? We show that the answer to that question is "no" for existing virtual-to-physical mapping techniques: behavior can differ between the virtual "big switch" and the physical network, producing incorrect application-level behavior.
We also show that these incorrect behaviors occur even though the most pervasive correctness invariants, such as per-packet consistency, are preserved throughout. These examples demonstrate that for practical notions of correctness, new systems and a new analytical framework are needed. We take the first steps by defining end-to-end correctness, a correctness condition that focuses on applications only, and outline a research vision to obtain virtualization systems with correct virtual-to-physical mappings.
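To make the failure mode concrete, here is a small Python sketch using a hypothetical two-switch topology and rule format of our own (not the paper's system). When a logical rule change is pushed to physical replicas one at a time, packets hitting different replicas at the same instant observe different rule tables, which a single physical big switch could never exhibit.

    # Hypothetical mapping: one logical "big switch" rule table replicated
    # on two physical switches, sw1 and sw2.
    LOGICAL_RULES = {"10.0.0.0/24": "fwd:server_pool"}

    physical = {
        "sw1": dict(LOGICAL_RULES),
        "sw2": dict(LOGICAL_RULES),
    }

    def update_logical(prefix, action, switches):
        """Naive virtual-to-physical mapping: push the change switch by switch."""
        for name in switches:
            physical[name][prefix] = action
            # Snapshot after each step. Between steps, the physical network
            # has no logical counterpart: packets arriving at sw1 see the new
            # action while packets arriving at sw2 still see the old one, yet
            # the abstraction promises a single rule table for all packets.
            yield {n: dict(rules) for n, rules in physical.items()}

    for snapshot in update_logical("10.0.0.0/24", "drop", ["sw1", "sw2"]):
        print(snapshot)  # the first snapshot mixes old and new behavior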
[3] Dong Jin and Yi Ning, "Securing Industrial Control Systems with a Simulation-based Verification System", 2014 ACM SIGSIM Conference on Principles of Advanced Discrete Simulation, Denver, CO, May 2014 (Work-in-Progress Paper).
Abstract: Today's quality of life is highly dependent on the successful operation of many large-scale industrial control systems. To enhance their protection against cyber-attacks and operational errors, we develop a simulation-based verification framework with cross-layer verification techniques that allow comprehensive analysis of the entire ICS-specific stack, including application, protocol, and network layers.
ACCOMPLISHMENT HIGHLIGHTS
The increasingly complex, large-scale nature of networks makes it difficult for network architects, security officers, and operators to understand their own networks' behavior. Even very simple behaviors -- such as whether it is possible for any packet (however unusual) to flow between two devices -- are difficult for operators to test, and synthesizing these low-level behaviors into a high-level quantitative understanding of network security has been beyond reach. This project is developing the analysis methodology needed to support scientific reasoning about the security of networks, with a particular focus on information and data flow security. The core of this vision is Network Hypothesis Testing Methodology (NetHTM), a set of techniques for performing and integrating security analyses applied at different network layers, in different ways, to pose and rigorously answer quantitative hypotheses about the end-to-end security of a network. This project is building both theoretical underpinnings and a practical realization of Science of Security.
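As a concrete illustration of the kind of low-level hypothesis mentioned above (device-to-device reachability), the following minimal Python sketch tests whether any packet can flow between two devices by searching the forwarding graph. The encoding of the data plane is hypothetical and deliberately coarse; it is not NetHTM itself.

    from collections import deque

    def can_reach(forwarding, src, dst):
        """Test the hypothesis "some packet can flow from src to dst".

        forwarding: dict mapping each device to the set of neighbors to which
                    at least one packet class is forwarded (a hypothetical,
                    coarse model of the data plane).
        """
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                return True
            for nxt in forwarding.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    # Example: A -> sw1 -> sw2 -> B, so the hypothesis "no packet can flow
    # from A to B" would be rejected.
    fwd = {"A": {"sw1"}, "sw1": {"sw2"}, "sw2": {"B"}}
    print(can_reach(fwd, "A", "B"))  # True

Real data planes require analysis per class of packet headers rather than a single reachability graph; the sketch only conveys the shape of such a hypothesis test.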