Foundations of CPS Resilience - April 2022

PI: Xenofon Koutsoukos

HARD PROBLEM(S) ADDRESSED

The goal of this project is to develop principles and methods for designing and analyzing resilient CPS architectures that deliver required service in the face of compromised components. A fundamental challenge is to understand the basic tenets of CPS resilience and how they can be used in developing resilient architectures. The primary hard problem addressed is resilient architectures. The work also addresses scalability and composability, as well as metrics and evaluation.

PUBLICATIONS

[1]    Waseem Abbas, Mudassir Shabbir, Jiani Li, and Xenofon Koutsoukos, “Resilient Distributed Vector Consensus in High Dimensions Using Centerpoint”, Automatica, vol. 136, February 2022, 110046.

[2]    Bradley Potteiger, Zhenkai Zhang, Long Cheng, and Xenofon Koutsoukos, “A Tutorial on Moving Target Defense Approaches within Automotive Cyber-Physical Systems”, Frontiers in Future Transportation, section Connected Mobility and Automation, 7 February 2022.

[3]    Dimitrios Boursinos and Xenofon Koutsoukos. "Selective Classification of Sequential Data Using Inductive Conformal Prediction", 2022 IEEE International Conference on Assured Autonomy (ICAA'22). March 22-23, 2022.

[4]    Chandreyee Bhowmick, Mudassir Shabbir, Waseem Abbas, and Xenofon Koutsoukos. "Resilient Multi-Agent Reinforcement Learning Using Medoid and Soft-Medoid Based Aggregation", 2022 IEEE International Conference on Assured Autonomy (ICAA'22). March 22-23, 2022.

KEY HIGHLIGHTS

This quarterly report presents two key highlights that demonstrate: (1) resilient distributed vector consensus using centerpoint, and (2) resilient multi-agent reinforcement learning using medoid and soft-medoid based aggregation.

Highlight 1: Resilient distributed vector consensus using centerpoint

We study the resilient vector consensus problem in networks with adversarial agents and improve the resilience guarantees of existing algorithms. A common approach to achieving resilient vector consensus is that every non-adversarial (or normal) agent in the network updates its state by moving towards a point in the convex hull of its normal neighbors’ states. Since an agent cannot distinguish between its normal and adversarial neighbors, computing such a point, often called a safe point, is a challenging task. To compute a safe point, we propose to use the notion of centerpoint, an extension of the median to higher dimensions, instead of the Tverberg partition of points, which is often used for this purpose. We show that the notion of centerpoint provides a complete characterization of safe points in a d-dimensional space. In particular, a safe point is essentially an interior centerpoint if the number of adversaries in the neighborhood of a normal agent i is less than Ni/(d+1), where d is the dimension of the state vector and Ni is the total number of agents in the neighborhood of i. Consequently, we obtain necessary and sufficient conditions on the number of adversarial agents to guarantee resilient vector consensus. Further, by considering the complexity of computing centerpoints, we discuss improvements in the resilience guarantees of vector consensus algorithms and compare with other existing approaches. Finally, we numerically evaluate our approach. Our results are presented in [1].
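The role of the centerpoint is easiest to see in one dimension, where it reduces to the ordinary median. The following minimal sketch (the complete-graph topology, step size, and adversarial values are illustrative assumptions, not taken from [1]) shows normal agents that move toward the median of all received values converging while staying inside the convex hull of their initial states, even though two adversaries report a large bogus value. Here each agent receives Ni = 7 values and the 2 adversaries satisfy 2 < Ni/(d+1) = 7/2:

```python
import statistics

# Hedged sketch: for d = 1 a centerpoint of the received values is the
# median, so moving toward the median is moving toward a safe point
# whenever the adversary count is below N_i / (d + 1) = N_i / 2.
def resilient_consensus_1d(normal_init, n_adv, adv_value, rounds=50, step=0.5):
    states = list(normal_init)
    for _ in range(rounds):
        # every normal agent receives all normal states plus the
        # adversaries' (identical, malicious) value
        received = states + [adv_value] * n_adv
        safe = statistics.median(received)   # centerpoint in d = 1
        states = [x + step * (safe - x) for x in states]
    return states

normal = [0.0, 2.0, 4.0, 6.0, 8.0]           # 5 normal agents, complete graph
final = resilient_consensus_1d(normal, n_adv=2, adv_value=100.0)
lo, hi = min(normal), max(normal)
assert all(lo <= x <= hi for x in final)      # stays in the normal hull
assert max(final) - min(final) < 1e-3         # normal agents agree
```

With an average-based update instead of the median, the same two adversaries would pull every normal agent far outside the hull, which is the failure mode the centerpoint construction rules out.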

[1]    Waseem Abbas, Mudassir Shabbir, Jiani Li, and Xenofon Koutsoukos, “Resilient Distributed Vector Consensus in High Dimensions Using Centerpoint”, Automatica, vol. 136, February 2022, 110046.

Highlight 2: Resilient Multi-Agent Reinforcement Learning Using Medoid and Soft-Medoid Based Aggregation

A network of reinforcement learning (RL) agents that cooperate with each other by sharing information can improve the learning performance of control and coordination tasks compared to non-cooperative agents. However, networked multi-agent reinforcement learning (MARL) is vulnerable to adversarial agents that can compromise some agents and send malicious information to the network. In this work, we consider the problem of resilient MARL in the presence of adversarial agents that aim to compromise the learning algorithm. First, we present an attack model which aims to degrade the performance of a target agent by modifying the parameters shared by an attacked agent. In order to improve resilience, we develop aggregation methods using medoid and soft-medoid functions. Our analysis shows that the medoid-based MARL algorithms converge to an optimal solution given standard assumptions and improve the overall learning performance and robustness. Simulation results show the effectiveness of the aggregation methods compared with average and median-based aggregation. Our results are presented in [2].
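The key property of the medoid is that, unlike the mean, it is always one of the received vectors: the member of the set minimizing the total distance to all others. A single adversarial outlier can therefore shift the aggregate only to another honest vector, not drag it arbitrarily far. The minimal sketch below (the parameter vectors and attack value are illustrative assumptions, not drawn from the paper's experiments) contrasts medoid aggregation with the average:

```python
import math

def medoid(vectors):
    """Return the vector minimizing the sum of Euclidean distances
    to all vectors in the set (the medoid is always a member)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(vectors, key=lambda v: sum(dist(v, u) for u in vectors))

honest = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.1)]   # nearby honest updates
malicious = (100.0, -100.0)                      # attacked agent's update
shared = honest + [malicious]

agg = medoid(shared)
assert agg in honest                             # outlier is rejected

# average aggregation, by contrast, is dragged far from the honest cluster
avg = tuple(sum(c) / len(shared) for c in zip(*shared))
assert abs(avg[0] - 1.0) > 10
```

The soft-medoid used in [2] replaces this hard selection with a distance-weighted combination, which keeps the robustness to outliers while remaining differentiable; the hard medoid above is the simpler special case.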

[2] Chandreyee Bhowmick, Mudassir Shabbir, Waseem Abbas, and Xenofon Koutsoukos. "Resilient Multi-Agent Reinforcement Learning Using Medoid and Soft-Medoid Based Aggregation", 2022 IEEE International Conference on Assured Autonomy (ICAA'22). March 22-23, 2022.

COMMUNITY ENGAGEMENTS

  • Our research was presented at the 2022 IEEE International Conference on Assured Autonomy (ICAA'22).
  • PI Xenofon Koutsoukos was a guest editor of a special issue on Artificial Intelligence/Machine Learning Enabled Reconfigurable Wireless Networks that appeared in the IEEE Transactions on Network Science and Engineering, 9(1), 2022.