VU SoS Lablet Quarterly Executive Summary - Jan 2021

A. Fundamental Research

The Science of Security for Cyber-Physical Systems (CPS) Lablet focuses on (1) Foundations of CPS Resilience, (2) Analytics for CPS Cybersecurity, (3) Development of a Multi-model Testbed for Simulation-based Evaluation of Resilience, and (4) Mixed Initiative and Collaborative Learning in Adversarial Environments.

  • In foundations of CPS resilience, we study distributed multi-task learning, which provides significant advantages in multi-agent networks with heterogeneous data sources, where agents aim to learn distinct but correlated models simultaneously. However, existing distributed algorithms for learning relatedness among tasks are not resilient in the presence of Byzantine agents. In this work, we present an approach for Byzantine-resilient distributed multi-task learning. We propose an efficient online weight assignment rule that measures the accumulated loss of each neighbor's model on an agent's own data; a small accumulated loss indicates a large similarity between the two tasks. To ensure Byzantine resilience of the aggregation at a normal agent, we introduce a step that filters out the models with the largest losses. We analyze the approach for convex models and show that normal agents converge resiliently towards the global minimum. Furthermore, aggregation with the proposed weight assignment rule always yields a better expected regret than the non-cooperative case. Finally, we demonstrate the approach on three case studies, including regression and classification problems, and show that our method exhibits good empirical performance for non-convex models such as convolutional neural networks. (A minimal illustrative sketch of the weight assignment and filtering step appears after this list.)
  • In the multi-model testbed effort, we are working on a probabilistic framework for modeling and simulating attacks in power systems. Due to the increased deployment of novel communication, control, and protection functions, the power grid has become vulnerable to a variety of attacks. Designing robust machine-learning-based attack detection and mitigation algorithms requires large amounts of data, which in turn requires a representative environment where different attacks can be simulated. We developed a comprehensive tool-chain for modeling and simulating attacks in power systems, using a probabilistic domain-specific language to define multiple attack scenarios and simulation configuration parameters. We extended the PyPower-dynamics simulator with protection system components to simulate cyber attacks in the control and protection layers of the power system. We demonstrated the effectiveness of the proposed tool-chain with a case study based on the IEEE 39-bus system. (The second sketch after this list gives a schematic illustration of probabilistic scenario sampling.)
  • In the area of mixed initiative and collaborative learning in adversarial environments, a canonical problem for unmanned vehicles (ground, air, and water borne) is to provide provably safe algorithms for avoiding moving obstacles, including other agents. A large family of probabilistically complete sampling-based algorithms, exemplified by RRT*, has been proposed for this purpose. They are, however, largely unusable in practice for such elementary applications as driving in cluttered environments because they are centralized and kinematic (that is, they are based on computational geometry rather than the dynamics of the agents). This glaring omission has caused problems for airspace management of unmanned aircraft systems (UAS) and for the certification of self-driving cars. The resilience aspect of these algorithms is that they need to be designed to be robust to adversarial attack by rogue vehicles. The most common approach here has been a game-theoretic one with inverse reinforcement learning to determine adversarial intent. The practical problem is that these solutions are usually so conservative as to be useless in real-world scenarios. A possible way around this is the use of Model Predictive and Learning Games. We expect this project to be long-standing, with several papers to follow. (The third sketch after this list illustrates the kinematic-versus-dynamic distinction.)
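
The first bullet's weight-assignment and filtering idea can be made concrete with a small sketch. The code below is a hypothetical illustration, not the paper's implementation: the function name resilient_aggregate, the parameter num_byzantine, and the use of a simple linear regression model in place of the general convex case are all assumptions.

```python
# Hypothetical sketch of the loss-based weight assignment described above; not the
# authors' implementation. All names and parameters are illustrative.
import numpy as np

def resilient_aggregate(own_model, neighbor_models, X, y, num_byzantine, step=0.1):
    """Aggregate neighbors' linear models after filtering suspected Byzantine ones.

    own_model / neighbor_models: weight vectors of a simple linear regression model.
    X, y: the agent's local data, used to score each neighbor's model.
    num_byzantine: assumed upper bound on the number of Byzantine neighbors.
    """
    # Accumulated (here: average squared) loss of each neighbor's model on local data;
    # a small loss suggests the neighbor is learning a closely related task.
    losses = np.array([np.mean((X @ w - y) ** 2) for w in neighbor_models])

    # Filtering step: discard the neighbors with the largest losses, which is where
    # Byzantine models (or unrelated tasks) are expected to land.
    keep = np.argsort(losses)[: len(neighbor_models) - num_byzantine]

    # Weight the remaining neighbors inversely to their loss and renormalize.
    raw = 1.0 / (1e-8 + losses[keep])
    weights = raw / raw.sum()

    # Convex combination of the agent's model with the filtered neighbors' models,
    # followed by a local gradient step on the agent's own data.
    mixed = (1 - step) * own_model + step * sum(
        wt * neighbor_models[k] for wt, k in zip(weights, keep))
    grad = 2 * X.T @ (X @ mixed - y) / len(y)
    return mixed - step * grad
```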
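
For the multi-model testbed bullet, the following is a purely illustrative sketch of how a probabilistic scenario description might be sampled into concrete simulation runs. It does not reproduce the actual domain-specific language or the PyPower-dynamics extension; the template fields and function names are hypothetical.

```python
# Purely illustrative: sampling concrete attack scenarios from a probabilistic
# template. Field names, attack types, and ranges below are assumptions.
import random

# Attack targets and timing are given as distributions rather than fixed values,
# so many distinct but plausible runs can be generated from one description.
scenario_template = {
    "attack_type": ["breaker_trip", "relay_setting_change", "measurement_spoof"],
    "target_bus": list(range(1, 40)),   # IEEE 39-bus system
    "start_time_s": (1.0, 5.0),         # uniform range, seconds
    "duration_s": (0.1, 2.0),           # uniform range, seconds
}

def sample_scenarios(template, n_runs, seed=0):
    """Draw n_runs concrete attack scenarios from the probabilistic template."""
    rng = random.Random(seed)
    return [{
        "attack_type": rng.choice(template["attack_type"]),
        "target_bus": rng.choice(template["target_bus"]),
        "start_time_s": rng.uniform(*template["start_time_s"]),
        "duration_s": rng.uniform(*template["duration_s"]),
    } for _ in range(n_runs)]

# Each sampled run would then configure one simulation, e.g. by injecting the
# attack event into the (extended) power-system simulator at start_time_s.
for run in sample_scenarios(scenario_template, n_runs=3):
    print(run)
```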
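
For the third bullet, the toy steering functions below illustrate the kinematic-versus-dynamic distinction: a purely geometric step moves along a straight line toward a sample, while a dynamics-aware step rolls a simple unicycle model forward so the resulting motion is one the vehicle can actually execute. This is only a schematic sketch, not RRT* or the project's planner; the function names and the unicycle model are assumptions.

```python
# Schematic sketch of geometric vs. dynamics-aware steering in a sampling-based
# planner. Names and the unicycle model are illustrative assumptions.
import numpy as np

def steer_geometric(x_near, x_rand, step=0.5):
    """Purely geometric steering: move along the straight line toward the sample."""
    d = x_rand - x_near
    return x_near + step * d / (np.linalg.norm(d) + 1e-9)

def steer_dynamic(state, control, dt=0.1, steps=5):
    """Dynamics-aware steering: integrate simple unicycle dynamics forward,
    so the resulting edge respects the agent's motion constraints."""
    x, y, theta = state
    v, omega = control
    for _ in range(steps):
        x += v * np.cos(theta) * dt
        y += v * np.sin(theta) * dt
        theta += omega * dt
    return np.array([x, y, theta])
```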

B. Community Engagement(s)

  • Shankar Sastry launched a new institute, the C3 Digital Transformation Institute (https://c3dti.ai), a partnership of Berkeley and UIUC (co-leads) with UChicago, CMU, MIT, Princeton, and Stanford, to develop the science and technology of digital transformation. The private philanthropy that supports this institute very much leveraged the support of federal research such as this SoS Lablet. We have been furthering the SoS agenda in the workshops that this institute ran in the Fall (see https://c3dti.ai/events/workshops). Some spectacular connections among the spread of fake news, wireless networks, and pandemic spread were reported in the first workshop in September 2020. The third workshop, in December 2020, was co-organized by Professor Claire Tomlin and featured a tour de force of new methods in Robust and Provably Safe Autonomy.

C. Educational Advances

  • Professors Tomlin and Sastry have taken the lead in revamping large amounts of the undergraduate and graduate curriculum to feature the recent confluence of AI/ML, robotics, and control. In the Fall, Sastry taught his Introduction to Robotics course (a mezzanine course for undergraduates and Masters students) to about 150 students (see https://ucb-ee106.github.io/106a-fa20site/ for the resources associated with this class). In partnership with OSD's National Security Innovation Network (NSIN), we are placing the top students from this class at various DoD labs in Summer 2021 (this was piloted successfully in Summer 2020; see https://blumcenter.berkeley.edu/news-posts/national-security-innovation-with-uc-berkeley/).