Biblio

Filters: Author is Zhenqi Huang, University of Illinois at Urbana-Champaign
2017-07-19
Yu Wang, University of Illinois at Urbana-Champaign, Zhenqi Huang, University of Illinois at Urbana-Champaign, Sayan Mitra, University of Illinois at Urbana-Champaign, Geir Dullerud, University of Illinois at Urbana-Champaign.  2017.  Differential Privacy and Entropy in Distributed Feedback Systems: Minimizing Mechanisms and Performance Trade-offs. IEEE Transactions on Control of Network Systems. 4(1)

In distributed control systems with shared resources, participating agents can improve the overall performance of the system by sharing data about their personal preferences. In this paper, we formulate and study a natural tradeoff arising in these problems between the privacy of the agent's data and the performance of the control system. We formalize privacy in terms of differential privacy of agents' preference vectors. The overall control system consists of N agents with linear discrete-time coupled dynamics, each controlled to track its preference vector. Performance of the system is measured by the mean squared tracking error. We present a mechanism that achieves differential privacy by adding Laplace noise to the shared information in a way that depends on the sensitivity of the control system to the private data. We show that for stable systems the performance cost of using this type of privacy-preserving mechanism grows as O(T³/Nε²), where T is the time horizon and ε is the privacy parameter. For unstable systems, the cost grows exponentially with time. From an estimation point of view, we establish a lower bound for the entropy of any unbiased estimator of the private data from any noise-adding mechanism that gives ε-differential privacy. We show that the mechanism achieving this lower bound is a randomized mechanism that also uses Laplace noise.
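The mechanism is only summarized in the abstract; as a purely illustrative sketch of the general pattern it follows, the Python snippet below perturbs a shared, preference-dependent signal with Laplace noise whose scale is a sensitivity bound divided by ε. The sensitivity value, the shape of the report, and the numbers are assumptions for the example, not taken from the paper.

```python
import numpy as np

def laplace_mechanism(shared_value, sensitivity, epsilon, rng=None):
    """Return shared_value perturbed with Laplace noise of scale
    sensitivity / epsilon -- the standard recipe for epsilon-differential
    privacy of a release with the given L1 sensitivity."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return shared_value + rng.laplace(loc=0.0, scale=scale,
                                      size=np.shape(shared_value))

# Hypothetical usage: an agent reports a preference-dependent state to the
# coordinator, with noise calibrated to an assumed sensitivity of 1.0.
true_report = np.array([0.7, -1.3, 2.1])
noisy_report = laplace_mechanism(true_report, sensitivity=1.0, epsilon=0.5)
print(noisy_report)
```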

2017-04-21
Yu Wang, University of Illinois at Urbana-Champaign, Zhenqi Huang, University of Illinois at Urbana-Champaign, Sayan Mitra, University of Illinois at Urbana-Champaign, Geir Dullerud, University of Illinois at Urbana-Champaign.  2017.  Differential Privacy in Linear Distributed Control Systems: Entropy Minimizing Mechanisms and Performance Tradeoffs. IEEE Transactions on Control of Network Systems. 4(1)

In distributed control systems with shared resources, participating agents can improve the overall performance of the system by sharing data about their personal preferences. In this paper, we formulate and study a natural tradeoff arising in these problems between the privacy of the agent's data and the performance of the control system. We formalize privacy in terms of differential privacy of agents' preference vectors. The overall control system consists of N agents with linear discrete-time coupled dynamics, each controlled to track its preference vector. Performance of the system is measured by the mean squared tracking error. We present a mechanism that achieves differential privacy by adding Laplace noise to the shared information in a way that depends on the sensitivity of the control system to the private data. We show that for stable systems the performance cost of using this type of privacy-preserving mechanism grows as O(T³/Nε²), where T is the time horizon and ε is the privacy parameter. For unstable systems, the cost grows exponentially with time. From an estimation point of view, we establish a lower bound for the entropy of any unbiased estimator of the private data from any noise-adding mechanism that gives ε-differential privacy. We show that the mechanism achieving this lower bound is a randomized mechanism that also uses Laplace noise.

2016-07-13
Zhenqi Huang, University of Illinois at Urbana-Champaign, Chuchu Fan, University of Illinois at Urbana-Champaign, Sayan Mitra, University of Illinois at Urbana-Champaign.  2016.  Bounded Invariant Verification for Time-delayed Nonlinear Networked Dynamical Systems. Nonlinear Analysis: Hybrid Systems (an IFAC journal).

We present a technique for bounded invariant verification of nonlinear networked dynamical systems with delayed interconnections. The underlying problem in precise bounded-time verification lies with computing bounds on the sensitivity of trajectories (or solutions) to changes in initial states and inputs of the system. For large networks, computing this sensitivity with precision guarantees is challenging. We introduce the notion of input-to-state (IS) discrepancy of each module or subsystem in a larger nonlinear networked dynamical system. The IS discrepancy bounds the distance between two solutions or trajectories of a module in terms of their initial states and their inputs. Given the IS discrepancy functions of the modules, we show that it is possible to effectively construct a reduced (low-dimensional) time-delayed dynamical system, such that the trajectory of this reduced model precisely bounds the distance between the trajectories of the complete network with changed initial states. Using the above results we develop a sound and relatively complete algorithm for bounded invariant verification of networked dynamical systems consisting of nonlinear modules interacting through possibly delayed signals. Finally, we introduce a local version of IS discrepancy and show that it is possible to compute it using only the Lipschitz constant and the Jacobian of the dynamic function of the modules.
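For readers unfamiliar with the term, an IS discrepancy bound has roughly the following ISS-style shape (a sketch; the paper's precise definition may differ in details). Here ξ(x, u, t) denotes the state reached by a module at time t from initial state x under input signal u, β is a class-K function of the initial-state deviation for each fixed t, and γ is a class-K function of the input deviation:

```latex
\[
  \bigl\| \xi(x,u,t) - \xi(x',u',t) \bigr\|
  \;\le\;
  \beta\bigl(\|x - x'\|,\, t\bigr)
  \;+\;
  \int_{0}^{t} \gamma\bigl(\|u(s) - u'(s)\|\bigr)\, ds .
\]
```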
 

2016-12-14
Zhenqi Huang, University of Illinois at Urbana-Champaign, Yu Wang, University of Illinois at Urbana-Champaign.  2016.  Differential Privacy, Entropy and Security in Distributed Control of Cyber Physical Systems.

The concept of differential privacy stems from the study of private queries of datasets. In this work, we apply this concept to discrete-time, linear distributed control systems in which agents need to maintain privacy of certain preferences, while sharing information for better system-level performance. The system has N agents operating in a shared environment that couples their dynamics. We show that for stable systems the performance cost grows as O(T³/Nε²), where T is the time horizon and ε is the differential privacy parameter. Next, we study lower bounds in terms of the Shannon entropy of the minimal mean square estimate of the system's private initial state from noisy communications between an agent and the server. We show that for any noise-adding differentially private mechanism, the Shannon entropy is at least nN(1 − ln(ε/2)), where n is the dimension of the system, and the lower bound is achieved by a Laplace-noise-adding mechanism. Finally, we study the problem of keeping the objective functions of individual agents differentially private in the context of cloud-based distributed optimization. The result shows a trade-off between the privacy of objective functions and the performance of the distributed optimization algorithm with noise.
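The per-coordinate constant in this bound can be sanity-checked against the differential entropy of the Laplace distribution, h(Lap(b)) = 1 + ln(2b). Under a unit-sensitivity assumption (made only for this sketch), the ε-differentially private Laplace mechanism uses scale b = 1/ε, so each of the nN scalar coordinates contributes:

```latex
\[
  h\bigl(\mathrm{Lap}(1/\varepsilon)\bigr)
  = 1 + \ln\!\left(\tfrac{2}{\varepsilon}\right)
  = 1 - \ln\!\left(\tfrac{\varepsilon}{2}\right),
  \qquad\text{so } nN \text{ independent coordinates give }
  nN\bigl(1 - \ln(\varepsilon/2)\bigr).
\]
```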

Presented at the Joint Trust and Security/Science of Security Seminar, April 26, 2016.

2017-03-03
Zhenqi Huang, University of Illinois at Urbana-Champaign, Yu Wang, University of Illinois at Urbana-Champaign, Sayan Mitra, University of Illinois at Urbana-Champaign, Geir Dullerud, University of Illinois at Urbana-Champaign.  2015.  Analyzing the Cost of Securing Control Systems. The Next Wave: The National Security Agency's Review of Emerging Technologies. 21(1)

This article describes our recent progress on the development of rigorous analytical metrics for assessing the threat-performance trade-off in control systems. Computing systems that monitor and control physical processes are now pervasive, yet their security is frequently an afterthought rather than a first-order design consideration. We investigate a rational basis for deciding—at the design level—how much investment should be made to secure the system.

2015-11-17
Zhenqi Huang, University of Illinois at Urbana-Champaign, Yu Wang, University of Illinois at Urbana-Champaign, Sayan Mitra, University of Illinois at Urbana-Champaign, Geir Dullerud, University of Illinois at Urbana-Champaign.  2015.  Controller Synthesis for Linear Time-varying Systems with Adversaries.

We present a controller synthesis algorithm for a discrete-time reach-avoid problem in the presence of adversaries. Our model of the adversary captures typical malicious attacks envisioned on cyber-physical systems such as sensor spoofing, controller corruption, and actuator intrusion. After formulating the problem in a general setting, we present a sound and complete algorithm for the case with linear dynamics and an adversary with a budget on the total L2-norm of its actions. The algorithm relies on a result from linear control theory that enables us to decompose and precisely compute the reachable states of the system in terms of a symbolic simulation of the adversary-free dynamics and the total uncertainty induced by the adversary. With this decomposition, the synthesis problem eliminates the universal quantifier on the adversary's choices and the symbolic controller actions can be effectively solved using an SMT solver. The constraints induced by the adversary are computed by solving second-order cone programs. The algorithm is later extended to synthesize state-dependent controllers and to generate attacks for the adversary. We present preliminary experimental results that show the effectiveness of this approach on several example problems.
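As a toy illustration of the constraint-based step sketched in the abstract (not the paper's algorithm): once the adversary's influence has been folded into a precomputed deviation bound, open-loop inputs for the adversary-free dynamics can be searched for with an SMT solver by shrinking the safe and goal sets by that margin. The scalar dynamics, horizon, margin, and set bounds below are invented for the example.

```python
from z3 import Reals, Solver, sat

# Toy 1-D reach-avoid instance (all numbers are assumptions for illustration).
a = 1.2            # nominal dynamics x_{t+1} = a*x_t + u_t
x0 = 1.0           # initial state
T = 3              # horizon
margin = 0.3       # assumed bound on the adversary-induced deviation
safe_bound = 5.0   # |x_t| <= safe_bound must hold along the way
goal_bound = 0.5   # |x_T| <= goal_bound must hold at the horizon

u = Reals(" ".join(f"u{t}" for t in range(T)))   # open-loop inputs to synthesize
solver = Solver()

x = x0
for t in range(T):
    x = a * x + u[t]                              # symbolic adversary-free state
    solver.add(x <= safe_bound - margin,          # robust safety at every step
               x >= -(safe_bound - margin))
solver.add(x <= goal_bound - margin,              # robust reach at the horizon
           x >= -(goal_bound - margin))

if solver.check() == sat:
    model = solver.model()
    print([model[ui] for ui in u])                # one feasible open-loop input sequence
```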

Zhenqi Huang, University of Illinois at Urbana-Champaign, Sayan Mitra, University of Illinois at Urbana-Champaign, Nitin Vaidya, University of Illinois at Urbana-Champaign.  2015.  Differentially Private Distributed Optimization. International Conference on Distributed Computing and Networking (ICDCN 2015).

In the distributed optimization and iterative consensus literature, a standard problem is for N agents to minimize a function f over a subset of Rⁿ, where the cost function is expressed as a sum Σᵢ fᵢ of the agents' individual cost functions. In this paper, we study the private distributed optimization (PDOP) problem with the additional requirement that the cost function of the individual agents should remain differentially private. The adversary attempts to infer information about the private cost functions from the messages that the agents exchange. Achieving differential privacy requires that any change of an individual's cost function only results in unsubstantial changes in the statistics of the messages. We propose a class of iterative algorithms for solving PDOP, which achieves differential privacy and convergence to the optimal value. Our analysis reveals the dependence of the achieved accuracy and the privacy levels on the parameters of the algorithm.
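As an illustration of the class of algorithms described (not the specific PDOP algorithm of the paper), the sketch below runs a consensus-plus-gradient iteration in which each agent only ever shares a Laplace-perturbed copy of its iterate and applies a decaying step size. The quadratic local costs, the step-size schedule, and the noise schedule are assumptions; with noise that does not decay, the accuracy saturates, which is the kind of accuracy/privacy dependence the analysis quantifies.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, eps = 5, 200, 0.5
private_minimizers = rng.uniform(-2.0, 2.0, size=N)   # f_i(x) = 0.5*(x - p_i)^2 (assumed)
x = np.zeros(N)                                       # each agent's local iterate

for k in range(1, T + 1):
    step = 1.0 / k                                    # decaying step size (assumed schedule)
    noise_scale = 1.0 / eps                           # Laplace scale tied to eps (assumed)
    shared = x + rng.laplace(scale=noise_scale, size=N)   # only noisy iterates are exchanged
    avg = shared.mean()                               # consensus on the perturbed iterates
    grad = x - private_minimizers                     # gradient of each agent's local cost
    x = avg - step * grad                             # consensus step followed by descent

print("final iterates:", np.round(x, 3))
print("average of private minimizers:", round(private_minimizers.mean(), 3))
```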

Zhenqi Huang, University of Illinois at Urbana-Champaign, Chuchu Fan, University of Illinois at Urbana-Champaign, Alexandru Mereacre, University of Oxford, Sayan Mitra, University of Illinois at Urbana-Champaign, Marta Kwiatkowska, University of Oxford.  2015.  Simulation-based Verification of Cardiac Pacemakers with Guaranteed Coverage. Special Issue of IEEE Design and Test. 32(5)

Design and testing of pacemakers is challenging because of the need to capture the interaction between the physical processes (e.g. voltage signal in cardiac tissue) and the embedded software (e.g. a pacemaker). At the same time, there is a growing need for design and certification methodologies that can provide quality assurance for the embedded software. We describe recent progress in simulation-based techniques that are capable of ensuring guaranteed coverage. Our methods employ discrepancy functions, which impose bounds on system dynamics, and proceed through iteratively constructing over-approximations of the reachable set of states. We are able to prove time-bounded safety or produce counterexamples. We illustrate the techniques by analyzing a family of pacemaker designs against time duration requirements and synthesize safe parameter ranges. We conclude by outlining the potential uses of this technology to improve the safety of medical device designs.

2016-12-14
Zhenqi Huang, University of Illinois at Urbana-Champaign, Yu Wang, University of Illinois at Urbana-Champaign.  2015.  SMT-Based Controller Synthesis for Linear Dynamical Systems with Adversary.

We present a controller synthesis algorithm for a discrete-time reach-avoid problem in the presence of adversaries. Our model of the adversary captures typical malicious attacks envisioned on cyber-physical systems such as sensor spoofing, controller corruption, and actuator intrusion. After formulating the problem in a general setting, we present a sound and complete algorithm for the case with linear dynamics and an adversary with a budget on the total L2-norm of its actions. The algorithm relies on a result from linear control theory that enables us to decompose and precisely compute the reachable states of the system in terms of a symbolic simulation of the adversary-free dynamics and the total uncertainty induced by the adversary. We provide constraint-based synthesis algorithms for synthesizing open-loop and closed-loop controllers using SMT solvers.

Presented at the Joint Trust and Security/Science of Security Seminar, November 3, 2015.

2015-11-17
Zhenqi Huang, University of Illinois at Urbana-Champaign, Yu Wang, University of Illinois at Urbana-Champaign, Sayan Mitra, University of Illinois at Urbana-Champaign, Geir Dullerud, University of Illinois at Urbana-Champaign.  2014.  On the Cost of Privacy in Distributed Control Systems. 3rd ACM International Conference on High Confidence Networked Systems (HiCoNS).

Individuals sharing information can improve the cost or performance of a distributed control system. But sharing may also violate privacy. We develop a general framework for studying the cost of differential privacy in systems where a collection of agents, with coupled dynamics, communicate for sensing their shared environment while pursuing individual preferences. First, we propose a communication strategy that relies on adding carefully chosen random noise to agent states and show that it preserves differential privacy. Of course, the higher the standard deviation of the noise, the higher the cost of privacy. For linear distributed control systems with quadratic cost functions, the standard deviation becomes independent of the number of agents and it decays with the maximum eigenvalue of the dynamics matrix. Furthermore, for stable dynamics, the noise to be added is independent of the number of agents as well as the time horizon up to which privacy is desired.

2015-11-17
Yu Wang, University of Illinois at Urbana-Champaign, Zhenqi Huang, University of Illinois at Urbana-Champaign, Sayan Mitra, University of Illinois at Urbana-Champaign, Geir Dullerud, University of Illinois at Urbana-Champaign.  2014.  Entropy-minimizing Mechanism for Differential Privacy of Discrete-time Linear Feedback Systems. 53rd IEEE Conference on Decision and Control (CDC 2014).

The concept of differential privacy stems from the study of private queries of datasets. In this work, we apply this concept to metric spaces to study a mechanism that randomizes a deterministic query by adding mean-zero noise to preserve differential privacy.

Zhenqi Huang, University of Illinois at Urbana-Champaign, Chuchu Fan, University of Illinois at Urbana-Champaign, Alexandru Mereacre, University of Oxford, Sayan Mitra, University of Illinois at Urbana-Champaign, Marta Kwiatkowska, University of Oxford.  2014.  Invariant Verification of Nonlinear Hybrid Automata Networks of Cardiac Cells. 26th International Conference on Computer Aided Verification (CAV 2014).

Verification algorithms for networks of nonlinear hybrid automata (HA) can help us understand and control biological processes such as cardiac arrhythmia, formation of memory, and genetic regulation. We present an algorithm for over-approximating reach sets of networks of nonlinear HA which can be used for sound and relatively complete invariant checking. First, it uses automatically computed input-to-state discrepancy functions for the individual automata modules in the network A for constructing a low-dimensional model M. Simulations of both A and M are then used to compute the reach tubes for A. These techniques enable us to handle a challenging verification problem involving a network of cardiac cells, where each cell has four continuous variables and 29 locations. Our prototype tool can check bounded-time invariants for networks with 5 cells (20 continuous variables, 29⁵ locations) typically in less than 15 minutes for up to reasonable time horizons. From the computed reach tubes we can infer biologically relevant properties of the network from a set of initial states.
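For scale, the quoted network size works out as follows, assuming the per-cell location counts multiply across the five cells:

```latex
\[
  5 \times 4 = 20 \ \text{continuous variables},
  \qquad
  29^{5} = 20{,}511{,}149 \ \text{discrete locations}.
\]
```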

Zhenqi Huang, University of Illinois at Urbana-Champaign, Sayan Mitra, University of Illinois at Urbana-Champaign.  2014.  Proofs from Simulations and Modular Annotations. 17th International Conference on Hybrid Systems: Computation and Control (HSCC 2014).

We present a modular technique for simulation-based bounded verification for nonlinear dynamical systems. We introduce the notion of input-to-state (IS) discrepancy of each subsystem Ai in a larger nonlinear dynamical system A, which bounds the distance between two (possibly diverging) trajectories of Ai in terms of their initial states and inputs. Using the IS discrepancy functions, we construct a low-dimensional deterministic dynamical system M(δ). For any two trajectories of A starting δ distance apart, we show that one of them, bloated by a factor determined by the trajectory of M, contains the other. Further, by choosing appropriately small δ's, the overapproximations computed by the above method can be made arbitrarily precise. Using the above results we develop a sound and relatively complete algorithm for bounded safety verification of nonlinear ODEs. Our preliminary experiments with a prototype implementation of the algorithm show that the approach can be effective for verification of nonlinear models.
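A minimal sketch of the simulate-bloat-check loop that this style of algorithm uses (illustrative only): the bound below is the crude global-Lipschitz form δ·e^{Lt} rather than the IS discrepancy machinery of the paper, and the dynamics, Lipschitz constant, horizon, and unsafe set are all assumptions.

```python
import numpy as np

def simulate(x0, dt, steps, f):
    """Euler simulation of x' = f(x) starting from x0."""
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        traj.append(traj[-1] + dt * f(traj[-1]))
    return np.array(traj)

def bounded_safe(x0, delta, lipschitz, dt, steps, f, unsafe_dist):
    """Check bounded-time safety of all states within delta of x0 by bloating
    one simulation with the coarse bound delta * exp(L*t). Returns True if the
    bloated tube avoids the unsafe set; False means inconclusive (a full
    procedure would cover the initial set with smaller-delta balls and retry)."""
    traj = simulate(x0, dt, steps, f)
    times = dt * np.arange(steps + 1)
    bloat = delta * np.exp(lipschitz * times)          # radius of the reach tube
    return bool(np.all(unsafe_dist(traj) - bloat > 0.0))

# Assumed example: stable linear vector field, unsafe set {x : ||x|| >= 3}.
f = lambda x: np.array([[-1.0, 2.0], [-2.0, -1.0]]) @ x
unsafe_dist = lambda traj: 3.0 - np.linalg.norm(traj, axis=1)
print(bounded_safe(x0=[1.0, 0.0], delta=0.05, lipschitz=np.sqrt(5.0),
                   dt=0.01, steps=100, f=f, unsafe_dist=unsafe_dist))
```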

2017-02-03
Stanley Bak, University of Illinois at Urbana-Champaign, Fardin Abdi, University of Illinois at Urbana-Champaign, Zhenqi Huang, University of Illinois at Urbana-Champaign, Marco Caccamo, University of Illinois at Urbana-Champaign.  2013.  Using Run-Time Checking to Provide Safety and Progress for Distributed Cyber-Physical Systems. 2013 IEEE 19th International Conference on Embedded and Real-Time Computing Systems and Applications.

Cyber-physical systems (CPS) may interact with and manipulate objects in the physical world, and therefore ideally would have formal guarantees about their behavior. Performing static-time proofs of safety invariants, however, may be intractable for systems with distributed physical-world interactions. This is further complicated when realistic communication models are considered, for which there may not be bounds on message delays, or even guarantees that messages will eventually reach their destination. In this work, we address the challenge of proving safety and progress in distributed CPS communicating over an unreliable communication layer. This is done in two parts. First, we show that system safety can be verified by partially relying upon run-time checks, and that dropping messages if the run-time checks fail will maintain safety. Second, we use a notion of compatible action chains to guarantee system progress, despite unbounded message delays. We demonstrate the effectiveness of our approach on a multi-agent vehicle flocking system, and show that the overhead of the proposed run-time checks is not overbearing.
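A minimal sketch of the message-dropping pattern described above (the check predicate, message payload, and separation invariant are invented for the example, not the paper's flocking system): a received message is applied only if a run-time safety check on the resulting state passes; otherwise it is dropped, so late or corrupted messages cannot break the proved invariant, while progress is argued separately via compatible action chains.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: int
    proposed_setpoint: float   # hypothetical payload

class Agent:
    def __init__(self, position: float, min_separation: float, neighbor_position: float):
        self.position = position
        self.min_separation = min_separation          # safety invariant parameter (assumed)
        self.neighbor_position = neighbor_position    # last known neighbor state (assumed)

    def safety_check(self, setpoint: float) -> bool:
        # Run-time check: accepting the setpoint must preserve the separation invariant.
        return abs(setpoint - self.neighbor_position) >= self.min_separation

    def on_receive(self, msg: Message) -> bool:
        """Apply the message only if the run-time check passes; drop it otherwise."""
        if self.safety_check(msg.proposed_setpoint):
            self.position = msg.proposed_setpoint
            return True
        return False   # dropping keeps safety; progress relies on compatible action chains

# Hypothetical usage
agent = Agent(position=0.0, min_separation=1.0, neighbor_position=5.0)
print(agent.on_receive(Message(sender=1, proposed_setpoint=2.0)))   # accepted
print(agent.on_receive(Message(sender=1, proposed_setpoint=4.5)))   # dropped
```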

2015-11-17
Zhenqi Huang, University of Illinois at Urbana-Champaign, Sayan Mitra, University of Illinois at Urbana-Champaign, Geir Dullerud, University of Illinois at Urbana-Champaign.  2012.  Differentially Private Iterative Synchronous Consensus. Workshop on Privacy in the Electronic Society (WPES 2012).

The iterative consensus problem requires a set of processes or agents with different initial values to interact and update their states to eventually converge to a common value. Protocols solving iterative consensus serve as building blocks in a variety of systems where distributed coordination is required for load balancing, data aggregation, sensor fusion, filtering, and synchronization. In this paper, we introduce the private iterative consensus problem, where agents are required to converge while protecting the privacy of their initial values from honest-but-curious adversaries. Protecting the initial states, in many applications, suffices to protect all subsequent states of the individual participants.

We adapt the notion of differential privacy to this setting of iterative computation. Next, we present (i) a server-based and (ii) a completely distributed randomized mechanism for solving differentially private iterative consensus with adversaries who can observe the messages as well as the internal states of the server and a subset of the clients. Our analysis establishes the tradeoff between privacy and accuracy.
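As an illustration of the flavor of mechanism described (not the paper's server-based or distributed construction), the sketch below runs iterative averaging in which each client only ever releases Laplace-perturbed states, so an observer of the messages and of the server sees only noisy values. The geometric noise schedule and mixing weights are assumptions; smaller ε forces larger noise and a less accurate consensus value, reflecting the privacy/accuracy tradeoff the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(1)
N, rounds, eps = 6, 100, 1.0
state = rng.uniform(0.0, 10.0, size=N)       # private initial values
true_average = state.mean()

for k in range(1, rounds + 1):
    # Each client publishes a noisy state; the scale decays geometrically
    # so the accumulated perturbation stays bounded (assumed schedule).
    scale = (1.0 / eps) * (0.9 ** k)
    published = state + rng.laplace(scale=scale, size=N)
    server_average = published.mean()        # the server aggregates only noisy values
    state = 0.5 * state + 0.5 * server_average   # clients move toward the aggregate

print("converged values:", np.round(state, 3))
print("true average of initial values:", round(true_average, 3))
```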