Biblio

Filters: Author is Yu Wang, University of Illinois at Urbana-Champaign
2017-10-24
Yu Wang, University of Illinois at Urbana-Champaign, Matthew Hale, University of Illinois at Urbana-Champaign, Magnus Egerstedt, University of Illinois at Urbana-Champaign, Geir Dullerud, University of Illinois at Urbana-Champaign.  2017.  Differentially Private Objective Functions in Distributed Cloud-based Optimization. 20th World Congress of the International Federation of Automatic Control (IFAC 2017 World Congress).

In this work, we study the problem of keeping the objective functions of individual agents ε-differentially private in cloud-based distributed optimization, where agents are subject to global constraints and seek to minimize local objective functions. The communication architecture between agents is cloud-based: instead of communicating directly with each other, they coordinate by sharing states through a trusted cloud computer. In this problem, the difficulty is twofold: the objective functions are used repeatedly in every iteration, and the influence of perturbing them extends to other agents and lasts over time. To solve the problem, we analyze the propagation of perturbations on objective functions over time, and derive an upper bound on them. With the upper bound, we design a noise-adding mechanism that randomizes the cloud-based distributed optimization algorithm to keep the individual objective functions ε-differentially private. In addition, we study the trade-off between the privacy of objective functions and the performance of the new cloud-based distributed optimization algorithm with noise. We present simulation results to numerically verify the theoretical results.
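
A minimal sketch of the information flow the abstract describes, assuming hypothetical quadratic objectives f_i(x) = (x - p_i)^2 with private p_i, a simple averaging coupling standing in for the paper's constrained cloud update, and an arbitrary fixed noise scale (the paper derives the noise scale from a sensitivity bound over all iterations, which is not reproduced here):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical local objectives f_i(x) = (x - p_i)^2; the p_i are private.
preferences = np.array([1.0, -2.0, 0.5, 3.0])
N, T, step, noise_scale = len(preferences), 50, 0.1, 0.05

states = np.zeros(N)
for t in range(T):
    # Each agent keeps its exact gradient local and perturbs the state
    # it shares, so the cloud only ever sees noisy values.
    grads = 2.0 * (states - preferences)
    shared = states + rng.laplace(0.0, noise_scale, size=N)

    # The cloud aggregates the noisy states (a stand-in for the
    # constrained coordination step in the paper).
    cloud_avg = shared.mean()
    states = states - step * grads + step * (cloud_avg - states)

print("final states:", np.round(states, 3))

The sketch only shows where the noise enters; it makes no claim about the privacy level actually achieved, which in the paper depends on the derived perturbation bound.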

2017-07-19
Yu Wang, University of Illinois at Urbana-Champaign, Zhenqi Huang, University of Illinois at Urbana-Champaign, Sayan Mitra, University of Illinois at Urbana-Champaign, Geir Dullerud, University of Illinois at Urbana-Champaign.  2017.  Differential Privacy and Entropy in Distributed Feedback Systems: Minimizing Mechanisms and Performance Trade-offs. IEEE Transactions on Control of Network Systems. 4(1)

In distributed control systems with shared resources, participating agents can improve the overall performance of the system by sharing data about their personal preferences. In this paper, we formulate and study a natural tradeoff arising in these problems between the privacy of the agents’ data and the performance of the control system. We formalize privacy in terms of differential privacy of agents’ preference vectors. The overall control system consists of N agents with linear discrete-time coupled dynamics, each controlled to track its preference vector. Performance of the system is measured by the mean squared tracking error. We present a mechanism that achieves differential privacy by adding Laplace noise to the shared information in a way that depends on the sensitivity of the control system to the private data. We show that for stable systems the performance cost of using this type of privacy-preserving mechanism grows as O(T³/Nε²), where T is the time horizon and ε is the privacy parameter. For unstable systems, the cost grows exponentially with time. From an estimation point of view, we establish a lower bound for the entropy of any unbiased estimator of the private data from any noise-adding mechanism that gives ε-differential privacy. We show that the mechanism achieving this lower bound is a randomized mechanism that also uses Laplace noise.
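
For reference, the generic Laplace-mechanism guarantee that such designs build on, stated in generic notation (this is a standard fact; the paper's contribution, the sensitivity analysis of the closed-loop dynamics, is not restated here):

\[
  M(d) = q(d) + w, \qquad w \sim \mathrm{Lap}\!\left(\frac{\Delta}{\varepsilon}\right),
  \qquad \Delta = \max_{d \sim d'} \lVert q(d) - q(d') \rVert_{1},
\]
\[
  \Pr[M(d) \in S] \;\le\; e^{\varepsilon}\, \Pr[M(d') \in S]
  \qquad \text{for all adjacent } d \sim d' \text{ and all measurable } S.
\]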

2017-04-21
Yu Wang, University of Illinois at Urbana-Champaign, Zhenqi Huang, University of Illinois at Urbana-Champaign, Sayan Mitra, University of Illinois at Urbana-Champaign, Geir Dullerud, University of Illinois at Urbana-Champaign.  2017.  Differential Privacy in Linear Distributed Control Systems: Entropy Minimizing Mechanisms and Performance Tradeoffs. IEEE Transactions on Control of Network Systems. 4(1)

In distributed control systems with shared resources, participating agents can improve the overall performance of the system by sharing data about their personal preferences. In this paper, we formulate and study a natural tradeoff arising in these problems between the privacy of the agents’ data and the performance of the control system. We formalize privacy in terms of differential privacy of agents’ preference vectors. The overall control system consists of N agents with linear discrete-time coupled dynamics, each controlled to track its preference vector. Performance of the system is measured by the mean squared tracking error. We present a mechanism that achieves differential privacy by adding Laplace noise to the shared information in a way that depends on the sensitivity of the control system to the private data. We show that for stable systems the performance cost of using this type of privacy-preserving mechanism grows as O(T³/Nε²), where T is the time horizon and ε is the privacy parameter. For unstable systems, the cost grows exponentially with time. From an estimation point of view, we establish a lower bound for the entropy of any unbiased estimator of the private data from any noise-adding mechanism that gives ε-differential privacy. We show that the mechanism achieving this lower bound is a randomized mechanism that also uses Laplace noise.

2017-03-03
Zhenqi Huang, University of Illinois at Urbana-Champaign, Yu Wang, University of Illinois at Urbana-Champaign, Sayan Mitra, University of Illinois at Urbana-Champaign, Geir Dullerud, University of Illinois at Urbana-Champaign.  2015.  Analyzing the Cost of Securing Control Systems. The Next Wave: The National Security Agency's Review of Emerging Technologies. 21(1)

This article describes our recent progress on the development of rigorous analytical metrics for assessing the threat-performance trade-off in control systems. Computing systems that monitor and control physical processes are now pervasive, yet their security is frequently an afterthought rather than a first-order design consideration. We investigate a rational basis for deciding—at the design level—how much investment should be made to secure the system.

2016-12-14
Zhenqi Huang, University of Illinois at Urbana-Champaign, Yu Wang, University of Illinois at Urbana-Champaign.  2016.  Differential Privacy, Entropy and Security in Distributed Control of Cyber Physical Systems.

The concept of differential privacy stems from the study of private queries of datasets. In this work, we apply this concept to discrete-time, linear distributed control systems in which agents need to maintain the privacy of certain preferences, while sharing information for better system-level performance. The system has N agents operating in a shared environment that couples their dynamics. We show that for stable systems the performance cost grows as O(T³/Nε²), where T is the time horizon and ε is the differential privacy parameter. Next, we study lower bounds in terms of the Shannon entropy of the minimal mean square estimate of the system’s private initial state from noisy communications between an agent and the server. We show that for any noise-adding differentially private mechanism, the Shannon entropy is at least nN(1 − ln(ε/2)), where n is the dimension of the system, and the lower bound is achieved by a Laplace-noise-adding mechanism. Finally, we study the problem of keeping the objective functions of individual agents differentially private in the context of cloud-based distributed optimization. The result shows a trade-off between the privacy of objective functions and the performance of the distributed optimization algorithm with noise.
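
The two quantitative statements in the abstract, restated in display form with the abstract's notation (T the time horizon, N the number of agents, n the state dimension, ε the privacy parameter, and \hat{\theta} any unbiased estimator of the private initial state):

\[
  \text{performance cost} = O\!\left(\frac{T^{3}}{N\varepsilon^{2}}\right)
  \quad \text{for stable dynamics},
\]
\[
  H(\hat{\theta}) \;\ge\; nN\bigl(1 - \ln(\varepsilon/2)\bigr)
  \quad \text{for any } \varepsilon\text{-differentially private noise-adding mechanism},
\]
with the entropy bound achieved by a Laplace-noise-adding mechanism.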

Presented at the Joint Trust and Security/Science of Security Seminar, April 26, 2016.

Zhenqi Huang, University of Illinois at Urbana-Champaign, Yu Wang, University of Illinois at Urbana-Champaign.  2015.  SMT-Based Controller Synthesis for Linear Dynamical Systems with Adversary.

We present a controller synthesis algorithm for a discrete-time reach-avoid problem in the presence of adversaries. Our model of the adversary captures typical malicious attacks envisioned on cyber-physical systems such as sensor spoofing, controller corruption, and actuator intrusion. After formulating the problem in a general setting, we present a sound and complete algorithm for the case with linear dynamics and an adversary with a budget on the total L2-norm of its actions. The algorithm relies on a result from linear control theory that enables us to decompose and precisely compute the reachable states of the system in terms of a symbolic simulation of the adversary-free dynamics and the total uncertainty induced by the adversary. We provide constraint-based synthesis algorithms for synthesizing open-loop and closed-loop controllers using SMT solvers.
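
A toy constraint-based encoding in the same spirit, using the z3 SMT solver on a hypothetical scalar system x_{t+1} = a*x_t + u_t + w_t; all numbers are made up, and the adversary's worst-case effect is folded into a fixed margin via Cauchy-Schwarz rather than handled by the paper's decomposition, so this illustrates only the flavor of an open-loop encoding, not the authors' algorithm:

from z3 import Real, Solver, sat
import math

a, T = 0.9, 5                  # hypothetical scalar dynamics x_{t+1} = a*x_t + u_t + w_t
budget = 0.1                   # adversary's total L2 budget on (w_0, ..., w_{T-1})
x0, goal, tol, u_max = 0.0, 1.0, 0.3, 1.0

# Worst-case effect of the adversary on x_T (Cauchy-Schwarz bound).
margin = budget * math.sqrt(sum(a ** (2 * (T - 1 - t)) for t in range(T)))

u = [Real(f"u_{t}") for t in range(T)]
x = [x0]
for t in range(T):
    x.append(a * x[t] + u[t])      # symbolic adversary-free simulation

s = Solver()
for t in range(T):
    s.add(u[t] <= u_max, u[t] >= -u_max)
# Reach the goal with enough slack to absorb any admissible adversary.
s.add(x[T] - goal <= tol - margin, goal - x[T] <= tol - margin)

if s.check() == sat:
    m = s.model()
    print("open-loop inputs:", [m.eval(u[t]) for t in range(T)])
else:
    print("no open-loop controller under this budget")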

Presented at the Joint Trust and Security/Science of Security Seminar, November 3, 2015.

2015-11-17
Yu Wang, University of Illinois at Urbana-Champaign, Zhenqi Huang, University of Illinois at Urbana-Champaign, Sayan Mitra, University of Illinois at Urbana-Champaign, Geir Dullerud, University of Illinois at Urbana-Champaign.  2014.  Entropy-minimizing Mechanism for Differential Privacy of Discrete-time Linear Feedback Systems. 53rd IEEE Conference on Decision and Control (CDC 2014).

The concept of differential privacy stems from the study of private queries of datasets. In this work, we apply this concept to metric spaces to study a mechanism that randomizes a deterministic query by adding mean-zero noise to keep differential privacy.
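
One common way to state differential privacy directly over a metric space (X, d), given only as generic background (the paper's exact adjacency relation and noise distribution are not reproduced here):

\[
  \Pr[M(x) \in S] \;\le\; e^{\varepsilon\, d(x, x')}\, \Pr[M(x') \in S]
  \qquad \text{for all } x, x' \in X \text{ and all measurable } S.
\]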

Zhenqi Huang, University of Illinois at Urbana-Champaign, Yu Wang, University of Illinois at Urbana-Champaign, Sayan Mitra, University of Illinois at Urbana-Champaign, Geir Dullerud, University of Illinois at Urbana-Champaign.  2015.  Controller Synthesis for Linear Time-varying Systems with Adversaries.

We present a controller synthesis algorithm for a discrete-time reach-avoid problem in the presence of adversaries. Our model of the adversary captures typical malicious attacks envisioned on cyber-physical systems such as sensor spoofing, controller corruption, and actuator intrusion. After formulating the problem in a general setting, we present a sound and complete algorithm for the case with linear dynamics and an adversary with a budget on the total L2-norm of its actions. The algorithm relies on a result from linear control theory that enables us to decompose and precisely compute the reachable states of the system in terms of a symbolic simulation of the adversary-free dynamics and the total uncertainty induced by the adversary. With this decomposition, the synthesis problem eliminates the universal quantifier on the adversary’s choices, and the symbolic controller actions can be effectively solved using an SMT solver. The constraints induced by the adversary are computed by solving second-order cone programs. The algorithm is later extended to synthesize state-dependent controllers and to generate attacks for the adversary. We present preliminary experimental results that show the effectiveness of this approach on several example problems.
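
An illustrative numpy sketch of the decomposition described above, on a hypothetical two-dimensional linear time-varying system with made-up matrices: the state at time T splits into the adversary-free simulation plus a deviation term, and the worst-case deviation of a linear functional c^T x_T over all adversary sequences with total L2 norm at most the budget has a closed form by Cauchy-Schwarz (the paper computes such bounds via second-order cone programs; this special case is only for illustration):

import numpy as np

T, n = 6, 2
A = [np.array([[0.9, 0.1], [0.0, 0.8]]) for _ in range(T)]   # hypothetical LTV dynamics
B = np.eye(n)
x0 = np.array([1.0, 0.0])
u = [np.array([0.1, -0.05]) for _ in range(T)]               # a fixed open-loop input
budget = 0.3                                                 # adversary's total L2 budget

# Adversary-free ("symbolic") simulation of x_T.
x = x0.copy()
for t in range(T):
    x = A[t] @ x + B @ u[t]

# Maps from the adversary input at time t to its contribution to x_T.
Phi = []
for t in range(T):
    M = np.eye(n)
    for k in range(t + 1, T):
        M = A[k] @ M
    Phi.append(M @ B)

# Worst-case deviation of c^T x_T over all adversary sequences a_0..a_{T-1}
# with sum_t ||a_t||_2^2 <= budget^2 (tight, by Cauchy-Schwarz).
c = np.array([1.0, 0.0])
deviation = budget * np.sqrt(sum(np.linalg.norm(P.T @ c) ** 2 for P in Phi))
print("nominal x_T:", np.round(x, 4))
print("worst-case |c^T (x_T - nominal)| <=", round(deviation, 4))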