Biblio
State estimation is a fundamental problem for monitoring and controlling systems. Engineering systems interconnect sensing and computing devices over shared bandwidth-limited channels, and therefore estimation algorithms should strive to use bandwidth optimally. We present a notion of entropy for state estimation of switched nonlinear dynamical systems, an upper bound for it, and a state estimation algorithm for the case when the switching signal is unobservable. Our approach relies on the notion of topological entropy and uses techniques from the theory of control under limited information. We show that the average bit rate used is optimal in the sense that the efficiency gap of the algorithm is within an additive constant of the gap between the estimation entropy of the system and its known upper bound. We apply the algorithm to two system models and discuss the performance implications of the number of tracked modes.
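To make the bit-rate question concrete, the following toy sketch (our illustration, not the paper's algorithm, and with no switching) tracks a scalar expanding system under a fixed per-step bit budget; the estimator's uncertainty interval contracts only when the bit rate exceeds the expansion rate, which is the entropy-style phenomenon the paper quantifies. All constants are illustrative.

    # Toy bit-rate-limited estimator for x_{k+1} = A * x_k (illustrative only).
    A = 1.5              # expansion rate of the dynamics
    BITS = 2             # bits sent per step => 2**BITS quantization cells
    T = 20               # horizon in steps
    x = 0.73             # true (hidden) state, assumed to start in [-1, 1]

    lo, hi = -1.0, 1.0   # estimator's current uncertainty interval
    for _ in range(T):
        cells = 2 ** BITS
        # Sensor: send only the index of the cell containing x.
        idx = min(int((x - lo) / (hi - lo) * cells), cells - 1)
        w = (hi - lo) / cells
        lo, hi = lo + idx * w, lo + (idx + 1) * w
        # Both the state and the interval propagate through the dynamics.
        x, lo, hi = A * x, A * lo, A * hi

    # The width evolves as 2 * (A / 2**BITS)**T: the estimate converges only
    # if BITS > log2(A), i.e., if the bit rate exceeds the expansion rate.
    print(hi - lo)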
In distributed control systems with shared resources, participating agents can improve the overall performance of the system by sharing data about their personal preferences. In this paper, we formulate and study a natural tradeoff arising in these problems between the privacy of the agents’ data and the performance of the control system. We formalize privacy in terms of differential privacy of agents’ preference vectors. The overall control system consists of N agents with linear discrete-time coupled dynamics, each controlled to track its preference vector. Performance of the system is measured by the mean squared tracking error. We present a mechanism that achieves differential privacy by adding Laplace noise to the shared information in a way that depends on the sensitivity of the control system to the private data. We show that for stable systems the performance cost of using this type of privacy-preserving mechanism grows as O(T^3/(Nε^2)), where T is the time horizon and ε is the privacy parameter. For unstable systems, the cost grows exponentially with time. From an estimation point of view, we establish a lower bound on the entropy of any unbiased estimator of the private data from any noise-adding mechanism that gives ε-differential privacy. We show that the mechanism achieving this lower bound is a randomized mechanism that also uses Laplace noise.
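For readers unfamiliar with the mechanism, a minimal sketch of sensitivity-calibrated Laplace noise follows. This is our illustration: the preference vector and sensitivity value are hypothetical, and the paper calibrates noise to the sensitivity of the closed-loop control system rather than to a single query.

    import numpy as np

    def laplace_mechanism(value, sensitivity, epsilon, rng=None):
        """Release value with epsilon-differential privacy via Laplace noise
        of scale sensitivity/epsilon (the standard calibration)."""
        rng = rng or np.random.default_rng()
        return value + rng.laplace(0.0, sensitivity / epsilon, np.shape(value))

    # Hypothetical example: each entry of a preference vector changes the
    # shared signal by at most 0.1, and we target epsilon = 0.5.
    preference = np.array([0.2, -1.3, 0.7])
    print(laplace_mechanism(preference, sensitivity=0.1, epsilon=0.5))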
Mobile applications frequently request sensitive data. While prior work has focused on analyzing sensitive-data uses originating from well-defined API calls in the system, the security and privacy implications of inputs requested via application user interfaces have remained largely unexplored. In this paper, our goal is to understand the broad implications of such requests in terms of the type of sensitive data being requested by applications.
To this end, we propose UiRef (User Input REsolution Framework), an automated approach for resolving the semantics of user inputs requested by mobile applications. UiRef’s design includes a number of novel techniques for extracting and resolving user interface labels and addressing ambiguity in semantics, resulting in significant improvements over prior work. We apply UiRef to 50,162 Android applications from Google Play and use outlier analysis to triage applications with questionable input requests. We identify concerning developer practices, including insecure exposure of account passwords and non-consensual disclosure of inputs to third parties. These findings demonstrate the importance of user-input semantics when protecting end users.
We discuss our ongoing work with an agent-based password simulation which models how site-enforced password requirements affect aggregate security when people interact with multiple authentication systems. We model two password memorization techniques: passphrase generation and spaced repetition. Our simulation suggests that system-generated passphrases lead to lower aggregate security across services that enforce even moderate password requirements. Furthermore, allowing users to expand their password length over time via spaced repetition increases aggregate security.
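As a rough illustration of this style of modeling (our own simplification, not the authors' simulator), the sketch below assumes that stronger required passwords cost more memory, so each simulated user's memorization budget shrinks as site requirements rise and accounts beyond the budget fall back to password reuse.

    import random

    random.seed(1)
    USERS, SITES = 1000, 15

    def reuse_rate(min_strength):
        """Fraction of accounts whose password is reused from another account."""
        reused = 0
        for _ in range(USERS):
            # Modeling assumption: stronger passwords cost more memory, so
            # the number each (heterogeneous) user can memorize shrinks as
            # the site requirement rises.
            mean_budget = 6.0 / (1.0 + 4.0 * min_strength)
            budget = max(1, round(random.gauss(mean_budget, 1.0)))
            reused += max(0, SITES - budget)  # accounts past the budget reuse
        return reused / (USERS * SITES)

    for s in (0.0, 0.4, 0.8):
        print(f"requirement {s:.1f} -> {reuse_rate(s):.0%} of accounts reuse")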
The existence of and market for notebooks designed for users to write down passwords illuminates a sharp contrast: what is often prescribed as proper password behavior (e.g., never write down passwords) differs from what many users actually do. These password logbooks and their reviews provide many unique and surprising insights into their users’ beliefs, motivations, and behaviors. We examine the password logbooks and analyze their reviews using grounded theory to better understand how these users think and behave with respect to password authentication. Several themes emerge, including: previous password management strategies, gifting, organizational strategies, password sharing, and dubious security advice. Some users argue that these books enhance security.
Software-defined networking (SDN) technology promises centralized and rapid network provisioning, holistic management, low operational cost, and improved network visibility. Researchers have developed multiple SDN simulation and emulation platforms to expedite the adoption of many emerging SDN-based applications to production systems. However, the scalability of those platforms is often limited by the underlying physical hardware resources, which inevitably affects the simulation fidelity in large-scale network settings. In this paper, we present a model abstraction technique that effectively transforms the network devices in an SDN-based network into one virtualized switch model. While significantly reducing the model execution time and enabling real-time simulation capability, our abstracted model also preserves the end-to-end forwarding behavior of the original network. To achieve this, we first classify packets with the same forwarding behavior into smaller and disjoint Equivalence Classes (ECs) by analyzing the OpenFlow rules installed on the SDN devices. We then create a graph model representing the forwarding behavior of each EC. By traversing those graphs, we finally construct the rules of the big-switch model to effectively preserve the original network's end-to-end forwarding behavior. Experimental results demonstrate that the network forwarding logic equivalence is well preserved between the abstracted model and the original SDN network. The model abstraction process is fast, e.g., 3.15 seconds to transform a medium-scale tree network consisting of 53,260 rules. The big-switch model is able to speed up the simulation by 4.3 times on average and up to 6.69 times among our evaluation experiments.
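The EC idea can be illustrated on a toy single-field, 8-bit header space with prefix-match rules. This is our simplification: real OpenFlow matching is multi-field, and the paper's construction traces actual forwarding paths rather than simply pairing per-switch decisions as done here.

    from collections import defaultdict

    # Rules are (priority, prefix_value, prefix_len, action) over 8-bit addrs.
    def matching_action(rules, addr):
        """Return the action of the highest-priority rule matching addr."""
        best = None
        for prio, val, plen, action in rules:
            mask = ~((1 << (8 - plen)) - 1) & 0xFF
            if addr & mask == val and (best is None or prio > best[0]):
                best = (prio, action)
        return best[1] if best else "drop"

    switch_a = [(10, 0b10000000, 1, "to_B"), (20, 0b10100000, 3, "to_C")]
    switch_b = [(10, 0b00000000, 0, "out_1")]

    # Group addresses with identical behavior on every switch: each group is
    # one EC and becomes one rule of the big-switch model.
    ecs = defaultdict(list)
    for addr in range(256):
        behavior = (matching_action(switch_a, addr),
                    matching_action(switch_b, addr))
        ecs[behavior].append(addr)

    for behavior, addrs in ecs.items():
        print(behavior, f"{len(addrs)} addresses, e.g. {addrs[0]:#010b}")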
A key use of software-defined networking is to enable scale-out of network data-plane elements. Naively scaling networking elements, however, can cause incorrect behavior. For example, we show that an intrusion detection system (IDS) which operates correctly as a single network element can erroneously and permanently block hosts when it is replicated.
In this paper, we provide a system, COCONUT, for seamless scale-out of network forwarding elements; that is, an SDN application programmer can program to what functionally appears to be a single forwarding element, but which may be replicated behind the scenes. To do this, we identify the key property for seamless scale-out, weak causality, and guarantee it through a practical and scalable implementation of vector clocks in the data plane. We prove that COCONUT enables seamless scale-out of networking elements, i.e., the user-perceived behavior of any COCONUT element implemented with a distributed set of concurrent replicas is provably indistinguishable from its singleton implementation. Finally, we build a prototype of COCONUT and experimentally demonstrate its correct behavior. We also show that its abstraction enables a more efficient implementation of seamless scale-out compared to a naive baseline.
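For intuition, the vector-clock bookkeeping that COCONUT relies on looks roughly like the following in ordinary Python; this is our illustration of the generic data structure, while the paper's contribution is making it practical in the data plane.

    # Each replica keeps one logical counter per replica.
    def increment(clock, replica):
        clock = dict(clock)
        clock[replica] = clock.get(replica, 0) + 1
        return clock

    def merge(a, b):
        """Pointwise max: the clock carried after observing both events."""
        return {r: max(a.get(r, 0), b.get(r, 0)) for r in a.keys() | b.keys()}

    def happened_before(a, b):
        """True iff the event with clock a causally precedes the one with b."""
        keys = a.keys() | b.keys()
        return all(a.get(r, 0) <= b.get(r, 0) for r in keys) and a != b

    # Replica r1 installs a rule; replica r2 independently handles a packet.
    c1 = increment({}, "r1")
    c2 = increment({}, "r2")
    print(happened_before(c1, c2))             # False: concurrent events
    print(happened_before(c1, merge(c1, c2)))  # True: merge is causally later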
Scientific advancement is fueled by solid fundamental research, followed by replication, meta-analysis, and theory building. To support such advancement, researchers and government agencies have been working towards a "science of security". As in other sciences, security science requires high-quality fundamental research addressing important problems and reporting approaches that capture the information necessary for replication, meta-analysis, and theory building. The goal of this paper is to aid security researchers in establishing a baseline of the state of scientific reporting in security through an analysis of indicators of scientific research as reported in top security conferences, specifically the 2015 ACM CCS and 2016 IEEE S&P proceedings. To conduct this analysis, we employed a series of rubrics to analyze the completeness of information reported in papers relative to the type of evaluation used (e.g., empirical study, proof, discussion). Our findings indicate that some important information is often missing from papers, including explicit documentation of research objectives and threats to validity. Our findings also show a relatively small number of replications reported in the literature. We hope that this initial analysis will serve as a baseline against which we can measure the advancement of the science of security.
Presented at the Symposium and Bootcamp in the Science of Security (HotSoS 2017), poster session in Hanover, MD, April 4-5, 2017.
Presented at ITI Joint Trust and Security/Science of Security Seminar, February 21, 2017.
Presented at NSA SoS Quarterly Meeting, February 2, 2017.
Presented at the SoS Lablet/R2 Monthly Meeting, January 2017.
Enterprise networks today have highly diverse correctness requirements and relatively common performance objectives. As a result, preferred abstractions for enterprise networks are those which allow matching correctness specification, while transparently managing performance. Existing SDN network management architectures, however, bundle correctness and performance as a single abstraction. We argue that this creates an SDN ecosystem that is unnecessarily hard to build, maintain and evolve. We advocate a separation of the diverse correctness abstractions from generic performance optimization, to enable easier evolution of SDN controllers and platforms. We propose Oreo, a first step towards a common and relatively transparent performance optimization layer for SDN. Oreo performs the optimization by first building a model that describes every flow in the network, and then performing network-wide, multi-objective optimization based on this model without disrupting higher level correctness.
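As a caricature of such a layer (our illustration, not Oreo's actual model or solver), one can imagine scalarizing per-flow objectives over candidate paths that all satisfy the higher-level correctness specification.

    def pick_paths(flows, alpha=0.5):
        """flows: {flow: [(path, latency, bottleneck_util), ...]} where every
        candidate already satisfies the correctness spec; pick per flow by a
        weighted combination of latency and bottleneck utilization."""
        chosen = {}
        for flow, candidates in flows.items():
            chosen[flow] = min(
                candidates,
                key=lambda c: alpha * c[1] + (1 - alpha) * c[2],
            )[0]
        return chosen

    flows = {
        "h1->h2": [("s1-s2", 2.0, 0.9), ("s1-s3-s2", 2.5, 0.1)],
    }
    print(pick_paths(flows))  # prefers the slightly longer, less-loaded path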
Software defined networking is a rapidly expanding networking paradigm that aims to separate the control logic from the forwarding devices. Through centralized control, network operators are able to deploy and manage more efficient forwarding strategies. Traditionally, when the network undergoes a change through maintenance, failure, or cyber attack, the centralized controller processes these events and deploys new forwarding rules reactively. This work provides a strategy that maintains connectivity without requiring a controller, using only features within the existing OpenFlow protocol version 1.3 or greater. In this paper we illustrate why forwarding resiliency is desired in OpenFlow networks and provide an algorithm that computes the flow entries required to achieve maximal forwarding resiliency in the presence of both multiple link and controller failures on any arbitrary network.
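The OpenFlow 1.3 feature this builds on is the fast-failover group: an ordered list of buckets, each watching a port, where the switch forwards via the first bucket whose watched port is still up, with no controller round-trip. A minimal sketch of that selection logic (our simplification):

    def fast_failover(buckets, live_ports):
        """buckets: ordered [(watch_port, out_port)]; return the out_port of
        the first bucket whose watched port is alive."""
        for watch_port, out_port in buckets:
            if watch_port in live_ports:
                return out_port
        return None  # all backups down: the packet is dropped

    group = [(1, 1), (2, 2), (3, 3)]  # primary port 1, then backups 2 and 3
    print(fast_failover(group, live_ports={1, 2, 3}))  # -> 1 (primary path)
    print(fast_failover(group, live_ports={3}))        # -> 3 after two failures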
Attack graphs used in network security analysis are analyzed to determine sequences of exploits that lead to successful acquisition of privileges or data at critical assets. An attack graph edge corresponds to a vulnerability, tacitly assuming a connection exists and tacitly assuming the vulnerability is known to exist. In this paper we explore the use of uncertain graphs to extend the paradigm to include lack of certainty in the connection and/or the existence of a vulnerability. We extend the standard notion of an uncertain graph (in which the existence of each edge is probabilistically independent), because significant correlations among edge existence probabilities arise in practice, owing to common underlying causes for lack of connectivity and/or the presence of vulnerabilities. Our extension describes each edge probability as a Boolean expression of independent indicator random variables. This paper (i) shows that this formalism is maximally descriptive in the sense that it can describe any joint probability distribution function of edge existence, (ii) shows that when these Boolean expressions are monotone then we can easily perform uncertainty analysis of edge probabilities, and (iii) uses these results to model a partial attack graph of the Stuxnet worm and a small enterprise network and to answer important security-related questions in a probabilistic manner.
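A small sketch of this edge model (our illustration, with hypothetical indicators): edges share independent indicator variables, which correlates their existence, so the probability that an attack path exists is estimated by Monte Carlo over the indicators rather than by multiplying marginal edge probabilities.

    import random

    random.seed(0)
    INDICATOR_P = {"vuln_X": 0.3, "fw_open": 0.6, "vuln_Y": 0.5}

    # Edges of a 3-hop attack path; hops 1 and 2 both depend on "fw_open",
    # so their existence is positively correlated.
    EDGES = [
        lambda v: v["vuln_X"] and v["fw_open"],  # attacker -> host A
        lambda v: v["fw_open"],                  # host A -> host B
        lambda v: v["vuln_Y"],                   # host B -> critical asset
    ]

    def path_exists():
        sample = {k: random.random() < p for k, p in INDICATOR_P.items()}
        return all(edge(sample) for edge in EDGES)

    n = 100_000
    estimate = sum(path_exists() for _ in range(n)) / n
    # ~0.09 = P(vuln_X)*P(fw_open)*P(vuln_Y); the product of the marginal
    # edge probabilities (0.18 * 0.6 * 0.5 = 0.054) would be wrong here.
    print(estimate)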
Poster presented at the Symposium and Bootcamp in the Science of Security in Hanover, MD, April 4-5, 2017.