Biblio
Data-driven verification methods utilize execution data together with models for establishing safety requirements. These are often the only tools available for analyzing complex, nonlinear cyber-physical systems, for which purely model-based analysis is currently infeasible. In this chapter, we outline the key concepts and algorithmic approaches for data-driven verification and discuss the guarantees they provide. We introduce some of the software tools that embody these ideas and present several practical case studies demonstrating their application in safety analysis of autonomous vehicles, advanced driver assist systems (ADAS), satellite control, and engine control systems.
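As a rough illustration of the data-driven approach summarized above, the following minimal Python sketch checks a safety requirement by bloating a single simulated trajectory with a discrepancy bound. The simulate function, the example dynamics, and the bound beta are hypothetical placeholders for illustration only, not the interface of any tool discussed in the chapter.

```python
# Minimal sketch of a simulation-driven safety check (illustrative only).
# `simulate` and the discrepancy bound `beta` are hypothetical placeholders.
import numpy as np

def simulate(x0, horizon, dt):
    """Hypothetical black-box simulator: returns the trajectory from x0."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(int(horizon / dt)):
        x = xs[-1]
        xs.append(x + dt * np.array([x[1], -x[0]]))  # example dynamics
    return np.array(xs)

def is_safe(initial_center, initial_radius, unsafe_center, unsafe_radius,
            horizon=5.0, dt=0.01, beta=lambda r, t: r * np.exp(0.1 * t)):
    """Bloat one simulated trajectory by a discrepancy bound beta(r, t) and
    check that the resulting reach-tube avoids a ball-shaped unsafe set."""
    traj = simulate(initial_center, horizon, dt)
    for k, x in enumerate(traj):
        tube_radius = beta(initial_radius, k * dt)
        if np.linalg.norm(x - unsafe_center) <= tube_radius + unsafe_radius:
            return False  # reach-tube may intersect the unsafe set
    return True

print(is_safe([1.0, 0.0], 0.05, np.array([0.0, 2.0]), 0.1))
```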
Software-defined networking (SDN) enables efficient network management. As the technology matures, utilities are looking to integrate those benefits into their operations technology (OT) networks. To help the community better understand and evaluate the effects of such integration, we develop DSSnet, a testing platform that combines a power distribution system simulator and an SDN-based network emulator for smart grid planning and evaluation. DSSnet relies on a container-based virtual time system to achieve efficient synchronization between the simulation and emulation systems. To enhance the system's scalability and usability, we extend DSSnet to support a distributed controller environment. To enhance system fidelity, we extend the virtual time system to support kernel-based switches. We also evaluate the system performance of DSSnet and demonstrate its usability with a resilient demand response application case study.
Network operators face the challenge of ensuring correctness as networks grow more complex, in terms of scale and, increasingly, in terms of the diversity of software components. Network-wide verification approaches can spot errors, but they assume a simplified abstraction of the functionality of individual network devices, which may deviate from the real implementation. In this paper, we propose a technique for high-coverage testing of end-to-end network correctness using the real software that is deployed in these networks. Our design is effectively a hybrid, using an explicit-state model checker to explore all network-wide execution paths and event orderings, but executing real software as subroutines for each device. We show that this approach can detect correctness issues that would be missed by both existing verification and testing approaches, and a prototype implementation suggests the technique can scale to larger networks with reasonable performance.
Network intrusion detection systems (NIDS) are essential security building blocks for today's organizations to ensure safe and trusted communication of information. In this paper, we study the feasibility of offline deep-learning-based NIDSes by constructing the detection engine with multiple advanced deep learning models and conducting a quantitative and comparative evaluation of those models. We first introduce the general deep learning methodology and its potential implications for the network intrusion detection problem. We then review multiple machine learning solutions to two network intrusion detection tasks (the NSL-KDD and UNSW-NB15 datasets). We develop a TensorFlow-based deep learning library, called NetLearner, and implement a handful of cutting-edge deep learning models for NIDS. Finally, we conduct a quantitative and comparative performance evaluation of those models using NetLearner.
In this work, we study the problem of keeping the objective functions of individual agents ε-differentially private in cloud-based distributed optimization, where agents are subject to global constraints and seek to minimize local objective functions. The communication architecture between agents is cloud-based: instead of communicating directly with each other, they coordinate by sharing states through a trusted cloud computer. The difficulty in this problem is twofold: the objective functions are used repeatedly in every iteration, and the influence of perturbing them extends to other agents and lasts over time. To solve the problem, we analyze the propagation of perturbations on objective functions over time and derive an upper bound on them. With this upper bound, we design a noise-adding mechanism that randomizes the cloud-based distributed optimization algorithm to keep the individual objective functions ε-differentially private. In addition, we study the trade-off between the privacy of objective functions and the performance of the new cloud-based distributed optimization algorithm with noise. We present simulation results that numerically verify the theoretical results.
Presented at the UIUC/R2 Monthly Meeting on September 18, 2017.
Recent years have seen significant advances in formal network verification. Tools have been proposed for offline data plane verification, real-time data plane verification, and configuration verification under arbitrary but static sets of failures. However, because they do not treat the network as an evolving system, current verification platforms are significantly constrained in scope. In real-world networks, correctness policies may be violated only through a particular combination of environment events and protocol actions, possibly in a non-deterministic sequence. Moreover, correctness specifications themselves may correlate multiple data plane states, particularly when dynamic data plane elements are present. Existing tools cannot reason about all possible network events and all the subsequent execution paths those events enable. We propose Plankton, a verification platform for identifying undesirable evolutions of networks. By combining symbolic modeling of the data plane and control plane with explicit-state exploration, Plankton performs a goal-directed search on a finite-state transition system that captures the behavior of the network as well as the various events that can influence it. In this way, Plankton can automatically find policy violations that can occur due to a sequence of network events, starting from the current state. Initial experiments have successfully predicted scenarios such as BGP Wedgies.
Poster presented at the 2017 Science of Security UIUC Lablet Summer Internship Poster Session held on July 27, 2017 in Urbana, IL.
Poster presented at the 2017 Science of Security UIUC Lablet Summer Internship Poster Session held on July 27, 2017 in Urbana, IL.
Poster presented at the 2017 Science of Security UIUC Lablet Summer Internship Poster Session held on July 27, 2017 in Urbana, IL.
Poster presented at the 2017 Science of Security UIUC Lablet Summer Internship Poster Session held on July 27, 2017 in Urbana, IL.
In this paper, we improve recent results on the decentralized switched control problem to include the moving-horizon case and apply the result to a testbed system. Using known derivations for a centralized controller with look-ahead, we extend the decentralized problem with finite memory to include receding-horizon modal information. We then compare, in simulation, the performance of a switched controller with finite memory and a look-ahead horizon to that of a linear time-invariant (LTI) controller. The decentralized controller is further tested on a real-world system composed of multiple model-sized hovercraft.
State estimation is a fundamental problem for monitoring and controlling systems. Engineering systems interconnect sensing and computing devices over shared, bandwidth-limited channels, and therefore estimation algorithms should strive to use bandwidth optimally. We present a notion of entropy for state estimation of switched nonlinear dynamical systems, an upper bound for it, and a state estimation algorithm for the case when the switching signal is unobservable. Our approach relies on the notion of topological entropy and uses techniques from the theory of control under limited information. We show that the average bit rate used is optimal in the sense that the efficiency gap of the algorithm is within an additive constant of the gap between the estimation entropy of the system and its known upper bound. We apply the algorithm to two system models and discuss the performance implications of the number of tracked modes.
In distributed control systems with shared resources, participating agents can improve the overall performance of the system by sharing data about their personal preferences. In this paper, we formulate and study a natural tradeoff arising in these problems between the privacy of the agents' data and the performance of the control system. We formalize privacy in terms of differential privacy of agents' preference vectors. The overall control system consists of N agents with linear discrete-time coupled dynamics, each controlled to track its preference vector. Performance of the system is measured by the mean squared tracking error. We present a mechanism that achieves differential privacy by adding Laplace noise to the shared information in a way that depends on the sensitivity of the control system to the private data. We show that for stable systems the performance cost of using this type of privacy-preserving mechanism grows as O(T^3/(Nε^2)), where T is the time horizon and ε is the privacy parameter. For unstable systems, the cost grows exponentially with time. From an estimation point of view, we establish a lower bound on the entropy of any unbiased estimator of the private data from any noise-adding mechanism that gives ε-differential privacy. We show that the mechanism achieving this lower bound is a randomized mechanism that also uses Laplace noise.
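The noise-adding step described above can be illustrated with a minimal sketch (not the paper's actual mechanism): each agent perturbs the preference vector it shares with Laplace noise whose scale is calibrated by an assumed sensitivity parameter and the privacy level ε; the function name and interface below are hypothetical.

```python
# Minimal sketch of Laplace-noise calibration for differential privacy.
# `sensitivity` is an assumed, user-supplied sensitivity of the released
# quantity to one agent's preference vector; scale = sensitivity / epsilon
# is the standard Laplace-mechanism calibration.
import numpy as np

def privatize_preference(preference, sensitivity, epsilon, rng=None):
    """Return a perturbed copy of a preference vector before it is shared."""
    rng = np.random.default_rng() if rng is None else rng
    preference = np.asarray(preference, dtype=float)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon,
                        size=preference.shape)
    return preference + noise

# Example: an agent shares a noisy preference vector instead of the true one.
noisy = privatize_preference([0.3, -1.2, 0.7], sensitivity=1.0, epsilon=0.5)
print(noisy)
```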
Mobile applications frequently request sensitive data. While prior work has focused on analyzing sensitive-data uses originating from well-defined API calls in the system, the security and privacy implications of inputs requested via application user interfaces remain largely unexplored. In this paper, our goal is to understand the broad implications of such requests in terms of the types of sensitive data being requested by applications.
To this end, we propose UiRef (User Input REsolution Framework), an automated approach for resolving the semantics of user inputs requested by mobile applications. UiRef's design includes a number of novel techniques for extracting and resolving user interface labels and addressing ambiguity in semantics, resulting in significant improvements over prior work. We apply UiRef to 50,162 Android applications from Google Play and use outlier analysis to triage applications with questionable input requests. We identify concerning developer practices, including insecure exposure of account passwords and non-consensual input disclosures to third parties. These findings demonstrate the importance of user-input semantics when protecting end users.
We discuss our ongoing work on an agent-based password simulation that models how site-enforced password requirements affect aggregate security when people interact with multiple authentication systems. We model two password memorization techniques: passphrase generation and spaced repetition. Our simulation suggests that system-generated passphrases lead to lower aggregate security across services that enforce even moderate password requirements. Furthermore, allowing users to expand their password length over time via spaced repetition increases aggregate security.
Monkey, a random testing tool from Google, has been widely used in industry for automatic test-input generation for Android, owing to its applicability to a variety of application settings, e.g., its ease of use and compatibility with different Android platforms. Recently, Monkey has been under the spotlight of the research community: recent studies found that none of the studied tools from academia actually performed better than Monkey when applied to a set of open-source Android apps. Our recent efforts performed the first case study of applying Monkey to WeChat, a popular messenger app with over 800 million monthly active users, revealed many limitations of Monkey, and developed an improved approach that alleviates some of these limitations. In this paper, we explore two optimization techniques to improve the effectiveness and efficiency of our previous approach. We also conduct a manual categorization of not-covered activities and apply two automatic coverage-analysis techniques to provide insightful information about the not-covered code entities. Lastly, we present findings from our empirical studies of conducting automatic random testing on WeChat with the preceding techniques.
The existence of, and market for, notebooks designed for users to write down passwords illuminates a sharp contrast: what is often prescribed as proper password behavior (e.g., never write down passwords) differs from what many users actually do. These password logbooks and their reviews provide many unique and surprising insights into their users' beliefs, motivations, and behaviors. We examine the password logbooks and analyze their reviews using grounded theory to better understand how these users think and behave with respect to password authentication. Several themes emerge, including previous password management strategies, gifting, organizational strategies, password sharing, and dubious security advice. Some users argue these books enhance security.
Software-defined networking (SDN) technology promises centralized and rapid network provisioning, holistic management, low operational cost, and improved network visibility. Researchers have developed multiple SDN simulation and emulation platforms to expedite the adoption of many emerging SDN-based applications in production systems. However, the scalability of those platforms is often limited by the underlying physical hardware resources, which inevitably affects the simulation fidelity in large-scale network settings. In this paper, we present a model abstraction technique that effectively transforms the network devices in an SDN-based network into one virtualized switch model. While significantly reducing the model execution time and enabling real-time simulation capability, our abstracted model also preserves the end-to-end forwarding behavior of the original network. To achieve this, we first classify packets with the same forwarding behavior into smaller and disjoint equivalence classes (ECs) by analyzing the OpenFlow rules installed on the SDN devices. We then create a graph model representing the forwarding behavior of each EC. By traversing those graphs, we finally construct the rules of the big-switch model to preserve the original network's end-to-end forwarding behavior. Experimental results demonstrate that network forwarding logic equivalence is well preserved between the abstracted model and the original SDN network. The model abstraction process is fast, e.g., 3.15 seconds to transform a medium-scale tree network consisting of 53,260 rules. The big-switch model is able to speed up the simulation by 4.3 times on average and up to 6.69 times across our evaluation experiments.
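The equivalence-class step can be illustrated with a deliberately simplified sketch: real implementations analyze OpenFlow match fields symbolically, whereas the toy version below brute-forces a tiny discrete header space purely to show the grouping idea. The rule tables, match predicates, and header space are hypothetical.

```python
# Toy illustration of grouping packets into equivalence classes (ECs) by
# their end-to-end forwarding behavior across devices (illustrative only).
from collections import defaultdict

# Hypothetical per-device rule tables: ordered (match_predicate, action) pairs.
rules = {
    "s1": [(lambda h: h < 50, "fwd:s2"), (lambda h: True, "drop")],
    "s2": [(lambda h: h % 2 == 0, "fwd:host"), (lambda h: True, "drop")],
}

def behavior(header):
    """First-match action on every device, as a tuple keyed by device name."""
    out = []
    for dev, table in sorted(rules.items()):
        action = next(act for match, act in table if match(header))
        out.append((dev, action))
    return tuple(out)

# Headers with identical behavior tuples fall into the same equivalence class.
ecs = defaultdict(list)
for header in range(100):          # toy header space: integers 0..99
    ecs[behavior(header)].append(header)

for beh, members in ecs.items():
    print(beh, "->", len(members), "headers")
```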
A key use of software-defined networking is to enable scaleout of network data plane elements. Naively scaling networking elements, however, can cause incorrect behavior. For example, we show that an IDS system which operates correctly as a single network element can erroneously and permanently block hosts when it is replicated.
In this paper, we provide a system, COCONUT, for seamless scale-out of network forwarding elements; that is, an SDN application programmer can program to what functionally appears to be a single forwarding element, but which may be replicated behind the scenes. To do this, we identify the key property for seamless scale-out, weak causality, and guarantee it through a practical and scalable implementation of vector clocks in the data plane. We prove that COCONUT enables seamless scale-out of networking elements, i.e., the user-perceived behavior of any COCONUT element implemented with a distributed set of concurrent replicas is provably indistinguishable from its singleton implementation. Finally, we build a prototype of COCONUT and experimentally demonstrate its correct behavior. We also show that its abstraction enables a more efficient implementation of seamless scale-out compared to a naive baseline.
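The weak-causality bookkeeping that COCONUT realizes in the data plane can be illustrated with a standard vector-clock sketch; the class and method names below are illustrative and do not reflect COCONUT's actual implementation.

```python
# Standard vector-clock bookkeeping, shown here only to illustrate how
# causally later state can be prevented from being overwritten by earlier
# state when a forwarding element is replicated (illustrative names).
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Replica:
    name: str
    clock: Dict[str, int] = field(default_factory=dict)

    def local_event(self):
        """Advance this replica's own component, e.g., on a local rule update."""
        self.clock[self.name] = self.clock.get(self.name, 0) + 1

    def merge(self, received: Dict[str, int]):
        """On receiving state (e.g., tagged with a clock), take the
        component-wise maximum so earlier state never overwrites later state."""
        for node, count in received.items():
            self.clock[node] = max(self.clock.get(node, 0), count)
        self.local_event()

a, b = Replica("A"), Replica("B")
a.local_event()            # A installs a rule
b.merge(dict(a.clock))     # B observes state that causally follows A's update
print(a.clock, b.clock)
```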
Presented at the Symposium and Bootcamp in the Science of Security (HotSoS 2017), poster session in Hanover, MD, April 4-5, 2017.
Presented at the Symposium and Bootcamp in the Science of Security (HotSoS 2017), poster session in Hanover, MD, April 4-5, 2017.
Presented at ITI Joint Trust and Security/Science of Security Seminar, February 21, 2017.