Biblio
Reliability block diagram (RBD) models are a commonly used reliability analysis method. For static RBD models, combinatorial solution techniques are easy and efficient. However, static RBDs are limited in their ability to express varying system state, dependent events, and non-series-parallel topologies. A recent extension to RBDs, called Dynamic Reliability Block Diagrams (DRBD), has eliminated those limitations. This tool paper details the RBD implementation in the Möbius modeling framework and provides technical details for using RBDs independently or in composition with other Möbius modeling formalisms. The paper explains how the graphical front-end provides a user-friendly interface for specifying RBD models. It also details the back-end implementation, which interfaces with the Möbius AFI to define and generate the executable models that the Möbius tool uses to evaluate system metrics.
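As a point of reference for the combinatorial solution of static RBDs mentioned above, a minimal sketch of series/parallel block composition is shown below; the block reliabilities are hypothetical values and the code is illustrative only, not the Möbius implementation.

# Minimal sketch of static RBD combinatorics (illustrative; not the Möbius implementation).
# A series arrangement works only if every block works; a parallel arrangement
# fails only if every block fails.

def series(reliabilities):
    """Reliability of blocks in series: product of individual reliabilities."""
    r = 1.0
    for x in reliabilities:
        r *= x
    return r

def parallel(reliabilities):
    """Reliability of blocks in parallel: 1 minus the product of individual unreliabilities."""
    q = 1.0
    for x in reliabilities:
        q *= (1.0 - x)
    return 1.0 - q

# Hypothetical example: two redundant pumps (0.9 each) in parallel, in series with a controller (0.99).
system_reliability = series([parallel([0.9, 0.9]), 0.99])
print(f"system reliability = {system_reliability:.4f}")  # 0.9801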
Commercial networks today have diverse security policies, defined by factors such as the type of traffic they carry, the nature of the applications they support, access control objectives, and organizational principles. Ideally, the wide diversity in SDN controller frameworks should prove helpful in correctly and efficiently enforcing these policies. However, this has not been the case so far. By requiring administrators to implement both security and performance objectives in the SDN controller, these frameworks have made the task of security policy enforcement in SDNs a challenging one. We observe that by separating security policy enforcement from performance optimization, we can facilitate the use of SDN for flexible policy management. To this end, we propose Oreo, a transparent performance enhancement layer for SDNs. Oreo allows SDN controllers to focus entirely on correct security policy enforcement, and transparently optimizes the data plane thus defined, reducing path stretch, switch memory consumption, and more. Optimizations are performed while guaranteeing that end-to-end reachability characteristics are preserved – meaning that the security policies defined by the controller are not violated. Oreo performs these optimizations by first constructing a network-wide model describing the behavior of all traffic, and then optimizing the paths observed in the model by solving a multi-objective optimization problem. Initial experiments suggest that the techniques used by Oreo are effective, fast, and can scale to commercial-sized networks.
Presented at NSA SoS Quarterly Meeting, July 2016 and November 2016
Modern industrial control systems (ICSes) are increasingly adopting Internet technology to boost control efficiency, which unfortunately opens up a new frontier for cyber-security. People have typically applied existing Internet security techniques, such as firewalls or anti-virus and anti-spyware software. However, those security solutions can only provide fine-grained protection at individual devices. To address this, we design a novel software-defined networking (SDN) architecture that offers global visibility of a control network infrastructure, and we investigate innovative SDN-based applications with a focus on ICS security, such as network verification and self-healing phasor measurement unit (PMU) networks. We are also conducting rigorous evaluation using the IIT campus microgrid as well as a high-fidelity testbed combining network emulation and power system simulation.
Illinois Lablet Information Trust Institute, Joint Trust and Security/Science of Security Seminar, by Dong (Kevin) Jin, March 15, 2016.
We rely on network infrastructure to deliver critical services and ensure security. Yet networks today have reached a level of complexity that is far beyond our ability to have confidence in their correct behavior – resulting in significant time investment and security vulnerabilities that can cost millions of dollars, or worse. Motivated by this need for rigorous understanding of complex networks, I will give an overview of our Science of Security lablet project, A Hypothesis Testing Framework for Network Security.
First, I will discuss the emerging field of network verification, which transforms network security by rigorously checking that intended behavior is correctly realized across the live running network. Our research developed a technique called data plane verification, which has discovered problems in operational environments and can verify hypotheses and security policies with millisecond-level latency in dynamic networks. In just a few years, data plane verification has moved from early research prototypes to production deployment. We have built on this technique to reason about hypotheses even under the temporal uncertainty inherent in a large distributed network. Second, I will discuss a new approach to reasoning about networks as databases that we can query to determine answers to behavioral questions and to actively control the network. This talk will span work by a large group of folks, including Anduo Wang, Wenxuan Zhou, Dong Jin, Jason Croft, Matthew Caesar, Ahmed Khurshid, and Xuan Zou.
Presented at the Illinois ITI Joint Trust and Security/Science of Security Seminar, September 15, 2015.
Presented to the Illinois SoS Bi-weekly Meeting, April 2015.
Presented at the Illinois SoS Bi-Weekly Meeting, February 2015.
Presented as part of the Illinois SoS Bi-weekly Meeting, October 2014.
Best Poster Award, Workshop on Science of Security through Software-Defined Networking, Chicago, IL, June 16-17, 2016.
Presented at the NSA Science of Security Quarterly Meeting, July 2016.
In this paper we explore the differential perceptions of cybersecurity professionals and general users regarding access rules and passwords. We conducted a preliminary survey involving 28 participants: 15 cybersecurity professionals and 13 general users. We present our preliminary findings and explain how such survey data might be used to improve security in practice. We focus on user fatigue with access rules and passwords.
Presented at the NSA Science of Security Quarterly Meeting, July 2016.
The United States is losing the cyberwar. We are losing the cyberwar because cyber defenses apply the wrong philosophy to the wrong operating environment. In order to be effective, future cyber defenses must be viewed in the context of an engagement between human adversaries.
We present a technique for bounded invariant verification of nonlinear networked dynamical systems with delayed interconnections. The underlying problem in precise bounded-time verification lies with computing bounds on the sensitivity of trajectories (or solutions) to changes in the initial states and inputs of the system. For large networks, computing this sensitivity with precision guarantees is challenging. We introduce the notion of input-to-state (IS) discrepancy of each module or subsystem in a larger nonlinear networked dynamical system. The IS discrepancy bounds the distance between two solutions or trajectories of a module in terms of their initial states and their inputs. Given the IS discrepancy functions of the modules, we show that it is possible to effectively construct a reduced (low-dimensional) time-delayed dynamical system, such that the trajectory of this reduced model precisely bounds the distance between the trajectories of the complete network with changed initial states. Using the above results we develop a sound and relatively complete algorithm for bounded invariant verification of networked dynamical systems consisting of nonlinear modules interacting through possibly delayed signals. Finally, we introduce a local version of IS discrepancy and show that it is possible to compute it using only the Lipschitz constant and the Jacobian of the dynamic function of the modules.
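As a rough illustration of the shape such a bound takes (the precise definitions and the construction of the reduced model are in the paper; the bounding functions beta and gamma here are placeholder assumptions), an IS discrepancy for a module bounds the divergence of two trajectories by a term in the initial states and a term in the inputs:

% Sketch of the general form of an input-to-state (IS) discrepancy bound;
% \beta and \gamma are placeholder bounding functions assumed for illustration.
\[
  \|\xi(x_0, u, t) - \xi(x_0', u', t)\|
  \;\le\;
  \beta\bigl(\|x_0 - x_0'\|,\, t\bigr)
  \;+\;
  \gamma\Bigl(\sup_{s \in [0,t]} \|u(s) - u'(s)\|\Bigr)
\]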
The successful operation of modern power grids is highly dependent on a reliable and efficient underlying communication network. Researchers and utilities have started to explore the opportunities and challenges of applying the emerging software-defined networking (SDN) technology to enhance efficiency and resilience of the Smart Grid. This trend calls for a simulation-based platform that provides sufficient flexibility and controllability for evaluating network application designs, and facilitates the transition of in-house research ideas to real production deployments. In this paper, we present DSSnet, a hybrid testing platform that combines a power distribution system simulator with an SDN emulator to support high-fidelity analysis of communication network applications and their impacts on the power systems. Our contributions lie in the design of a virtual time system with tight controllability over the execution of the emulation system, i.e., pausing and resuming any specified container processes with respect to their own virtual clocks, with little overhead (an average of 70 ms when scaling to 500 emulated hosts), and in the efficient synchronization of the two sub-systems based on virtual time. We evaluate the system performance of DSSnet, and also demonstrate its usability through a case study that evaluates a load shifting algorithm.
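The virtual-time controllability described above, pausing and resuming emulated hosts so that time only advances with respect to their own virtual clocks, can be sketched roughly at the process level as follows; this is a simplification using POSIX stop/continue signals, and the power_sim_step hook is hypothetical, not DSSnet's actual interface.

# Rough sketch of pause/resume control over emulated host processes, in the
# spirit of DSSnet's virtual time system (illustrative; the real implementation
# operates on the container processes of the SDN emulator).
import os
import signal
import time

def pause(pids):
    """Freeze the given processes so their virtual clocks stop advancing."""
    for pid in pids:
        os.kill(pid, signal.SIGSTOP)

def resume(pids):
    """Unfreeze the processes so emulation can proceed."""
    for pid in pids:
        os.kill(pid, signal.SIGCONT)

def run_synchronized_step(pids, power_sim_step, step_seconds):
    """Advance the power-system simulation while the network emulation is frozen,
    then let the emulated hosts run for the same amount of virtual time."""
    pause(pids)
    power_sim_step(step_seconds)   # hypothetical hook into the distribution system simulator
    resume(pids)
    time.sleep(step_seconds)       # emulation advances in (scaled) wall-clock time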
Best Poster Award, Illinois Institute of Technology Research Day, April 11, 2016.
While there have been various studies identifying and classifying Android malware, there is limited discussion of the broader class of apps that fall in a gray area. Mobile grayware is distinct from PC grayware due to differences in operating system properties. Due to mobile grayware’s subjective nature, it is difficult to identify mobile grayware via program analysis alone. Instead, we hypothesize that enhancing analysis with text analytics can effectively reduce human effort when triaging grayware. In this paper, we design and implement heuristics for seven main categories of grayware. We then use these heuristics to simulate grayware triage on a large set of apps from Google Play. We present the results of our empirical study, demonstrating that grayware is a clear problem. In doing so, we show how even relatively simple heuristics can quickly triage apps that take advantage of users in an undesirable way.
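A heuristic of the kind described, combining simple text analytics over app metadata with a category rule, might be sketched as follows; the category keywords and threshold are invented for illustration and are not the seven heuristics from the paper.

# Illustrative keyword-based triage heuristic for one hypothetical grayware
# category ("aggressive advertising"); the paper's heuristics are more involved.
AD_KEYWORDS = {"ad", "ads", "banner", "interstitial", "popup", "sponsored"}

def triage_score(description, user_reviews):
    """Score an app by how often ad-related terms appear in its description and reviews."""
    text = (description + " " + " ".join(user_reviews)).lower()
    words = text.split()
    hits = sum(1 for word in words if word.strip(".,!?") in AD_KEYWORDS)
    return hits / max(len(words), 1)

def flag_for_review(description, user_reviews, threshold=0.02):
    """Flag the app for human triage if the score exceeds a (hypothetical) threshold."""
    return triage_score(description, user_reviews) > threshold

# Example usage with made-up metadata.
print(flag_for_review("Free game with banner ads and popup offers",
                      ["too many ads", "constant popup spam"]))  # True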
In recent years, online programming and software engineering education via information technology has gained a lot of popularity. Popular courses often have hundreds or thousands of students but only a few course staff members. Tool automation is needed to maintain the quality of education. In this paper, we envision that the capability of quantifying behavioral similarity between programs is helpful for teaching and learning programming and software engineering, and propose three metrics that approximate the computation of behavioral similarity. Specifically, we leverage random testing and dynamic symbolic execution (DSE) to generate test inputs, and run programs on these test inputs to compute metric values of the behavioral similarity. We evaluate our metrics on three real-world data sets from the Pex4Fun platform (which so far has accumulated more than 1.7 million game-play interactions). The results show that our metrics provide a highly accurate approximation to the behavioral similarity. We also demonstrate a number of practical applications of our metrics, including hint generation, progress indication, and automatic grading.
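One way such a metric can approximate behavioral similarity, running two programs on a shared set of generated test inputs and comparing their outputs, is sketched below; the random integer inputs and input domain here are assumptions for illustration, whereas the paper generates inputs with random testing and DSE.

# Sketch of a behavioral-similarity metric: the fraction of generated test
# inputs on which two programs produce the same output (illustrative only).
import random

def behavioral_similarity(program_a, program_b, num_tests=1000, seed=0):
    rng = random.Random(seed)
    agree = 0
    for _ in range(num_tests):
        x = rng.randint(-1000, 1000)       # hypothetical input domain
        try:
            same = program_a(x) == program_b(x)
        except Exception:
            same = False                   # treat crashes as disagreement
        agree += same
    return agree / num_tests

# Example: comparing a student attempt against a reference solution.
reference = lambda x: abs(x)
attempt = lambda x: x if x > 0 else -x
print(behavioral_similarity(reference, attempt))   # 1.0 on the sampled inputs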
Anonymous messaging platforms like Whisper and Yik Yak allow users to spread messages over a network (e.g., a social network) without revealing message authorship to other users. The spread of messages on these platforms can be modeled by a diffusion process over a graph. Recent advances in network analysis have revealed that such diffusion processes are vulnerable to author deanonymization by adversaries with access to metadata, such as timing information. In this work, we ask the fundamental question of how to propagate anonymous messages over a graph to make it difficult for adversaries to infer the source. In particular, we study the performance of a message propagation protocol called adaptive diffusion introduced in (Fanti et al., 2015). We prove that when the adversary has access to metadata at a fraction of corrupted graph nodes, adaptive diffusion achieves asymptotically optimal source-hiding and significantly outperforms standard diffusion. We further demonstrate empirically that adaptive diffusion hides the source effectively on real social networks.
Anonymous messaging applications have recently gained popularity as a means for sharing opinions without fear of judgment or repercussion. These messages propagate anonymously over a network, typically defined by social connections or physical proximity. However, recent advances in rumor source detection show that the source of such an anonymous message can be inferred by certain statistical inference attacks. Adaptive diffusion was recently proposed as a solution that achieves optimal source obfuscation over regular trees. However, in real social networks, the degrees differ from node to node, and adaptive diffusion can be significantly sub-optimal. This gap increases as the degrees become more irregular.
In order to quantify this gap, we model the underlying network as coming from standard branching processes with i.i.d. degree distributions. Building upon the analysis techniques from branching processes, we give an analytical characterization of the dependence of the probability of detection achieved by adaptive diffusion on the degree distribution. Further, this analysis provides a key insight: passing a rumor to a friend who has many friends makes the source more ambiguous. This leads to a new family of protocols that we call Preferential Attachment Adaptive Diffusion (PAAD). When messages are propagated according to PAAD, we give both the MAP estimator for finding the source and also an analysis of the probability of detection achieved by this adversary. The analytical results are not directly comparable, since the adversary's observed information has a different distribution under adaptive diffusion than under PAAD. Instead, we present results from numerical experiments that suggest that PAAD achieves a lower probability of detection, at the cost of increased communication for coordination.
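The key insight above, that passing the rumor to well-connected neighbors makes the source more ambiguous, can be illustrated with a toy spreading step; this is a simplification, since the actual PAAD protocol also maintains a virtual source and adaptive timing as in adaptive diffusion.

# Toy illustration of the preferential-attachment spreading rule behind PAAD:
# the next recipient is chosen among uninfected neighbors with probability
# proportional to their degree, so the rumor drifts toward high-degree nodes.
# (Simplified sketch; not the full protocol from the paper.)
import random

def preferential_step(graph, frontier_node, infected, rng=random):
    """Pick the next recipient among uninfected neighbors, weighted by degree."""
    candidates = [v for v in graph[frontier_node] if v not in infected]
    if not candidates:
        return None
    weights = [len(graph[v]) for v in candidates]   # degree of each candidate
    return rng.choices(candidates, weights=weights, k=1)[0]

# Hypothetical graph given as adjacency lists.
graph = {0: [1, 2], 1: [0, 2, 3, 4], 2: [0, 1], 3: [1], 4: [1]}
infected = {0}
print(preferential_step(graph, 0, infected))   # node 1 (degree 4) is chosen most often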
This paper presents an approach for securing software application chains in cloud environments. We use the concept of workflow management systems to explain the model. Our prototype is based on the Kepler scientific workflow system enhanced with a security analytics package. This model can be applied to other cloud-based systems. Depending on the information being received from the cloud, this approach can also offer information about the internal states of the resources in the cloud. The approach we use hinges on (1) an ability to limit attacks to Input, Remote, and Output channels (or flows), and (2) an ability to validate the flows using operational profile (OP) or certification-based signals. OP-based validation is a statistical approach and may miss some of the attacks. However, where enumeration is possible (e.g., static web sites), this approach can offer high assurance of the validity of the flows. It is also assumed that workflow components are sound so long as the input flows are limited to the operational profile. Other acceptance testing approaches could be used to validate the flows. Work in progress has two thrusts: (1) using cloud-based Kepler workflows to probe and assess security states and operation of cloud resources (specifically VMs) under different workloads, leveraging DACSA sensors; and (2) analyzing the effectiveness of the proposed approach in securing workflows.
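OP-based validation of a flow, in the statistical sense described above, can be sketched as a simple membership check against a learned profile; the field names, threshold, and training data below are hypothetical, and the prototype ties such checks to Kepler workflow channels and DACSA sensors rather than to this toy interface.

# Sketch of operational-profile (OP) based validation of an input flow:
# a flow is accepted only if its signature was observed during normal-operation
# profiling. (Illustrative; field names and the threshold are assumptions.)
from collections import Counter

class OperationalProfile:
    def __init__(self):
        self.seen = Counter()

    def train(self, flows):
        """Record the (source, request_type) pairs observed in normal operation."""
        for flow in flows:
            self.seen[(flow["source"], flow["request_type"])] += 1

    def validate(self, flow, min_count=1):
        """Accept a flow only if its signature was seen often enough during profiling.
        Statistical by nature, so rare-but-legitimate flows may be flagged."""
        return self.seen[(flow["source"], flow["request_type"])] >= min_count

profile = OperationalProfile()
profile.train([{"source": "web", "request_type": "GET"},
               {"source": "web", "request_type": "POST"}])
print(profile.validate({"source": "web", "request_type": "GET"}))      # True
print(profile.validate({"source": "remote", "request_type": "EXEC"}))  # False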