Resilient Control of Cyber-Physical Systems with Distributed Learning

Project Details

Lead PI

Performance Period

Jan 01, 2018 - Jan 01, 2018

Institution(s)

University of Illinois at Urbana-Champaign
The University of Texas at Austin

Sponsor(s)

National Security Agency


Investigators: Sayan Mitra, Geir Dullerud, and Sanjay Shakkottai

Researchers: Pulkit Katdare and Negin Musavi

Critical cyber and cyber-physical systems (CPS) are beginning to use predictive AI models. These models expand, customize, and optimize the capabilities of the systems, but they also expose the systems to a new and imminent class of attacks. This project will develop the foundations and methodologies needed to make such systems resilient. Our focus is on control systems that use large-scale, crowd-sourced data collection to train predictive AI models, which are then used to control and optimize system performance. Congestion-aware traffic routing and autonomous vehicles are two examples: to design controllers for these systems, large amounts of user data are collected to train AI models that predict network congestion dynamics and human driving behaviors, respectively, and these models guide the overall closed-loop control system.
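As a rough illustration of this closed-loop pattern, the sketch below shows a predictive model (standing in for one trained on crowd-sourced data) forecasting the environment and a controller acting on that forecast. It is a minimal, hypothetical example; the function names and the toy congestion dynamics are assumptions for illustration, not project code.

```python
# Hypothetical sketch of a predict-then-act control loop.
from typing import Callable, List

def control_loop(predict: Callable[[List[float]], float],
                 choose_action: Callable[[float], float],
                 observe: Callable[[float], float],
                 horizon: int = 10) -> List[float]:
    """Run a simple closed loop: forecast the environment, act, observe the response."""
    history: List[float] = [observe(0.0)]    # initial observation
    actions: List[float] = []
    for _ in range(horizon):
        forecast = predict(history)          # AI model trained on collected data
        action = choose_action(forecast)     # controller consumes the prediction
        history.append(observe(action))      # environment responds; the loop closes
        actions.append(action)
    return actions

# Toy instantiation: predicted congestion is a moving average of recent
# observations, and the controller admits less traffic when congestion
# is forecast to be high.
actions = control_loop(
    predict=lambda h: sum(h[-3:]) / len(h[-3:]),
    choose_action=lambda c: max(0.0, 1.0 - c),
    observe=lambda a: 0.5 + 0.1 * a,
)
```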

Although our current understanding of AI models is very limited, they are already known to have serious vulnerabilities. For example, so-called "adversarial examples" can be generated algorithmically to defeat neural network models while remaining indistinguishable from benign inputs to human senses [73]. Such attacks can cause an autonomous vehicle to crash, facial recognition to fail, and illegal content to bypass filters, and they may be impossible to detect. A second type of vulnerability arises when an adversary supplies malicious training samples that corrupt the fidelity of the learned model. A third is the potential violation of the privacy of the individuals (e.g., drivers) who provide the training data. More generally, the space of vulnerabilities and their impact on the overall control system are not well understood. This project will address this new and challenging landscape and develop the mathematical foundations for reasoning about such systems and attacks. These foundations will then be the basis for automatically synthesizing the monitoring and control algorithms needed for resilience. The project aligns with the SoS community's goal of creating resilient cyber-physical systems, and the approaches developed here will contribute to a new compositional reasoning framework for CPS that combines traditional control with AI models.
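To make the first class of vulnerability concrete, the sketch below crafts an adversarial example with the fast gradient sign method (FGSM), one standard algorithmic attack of this kind. It is a minimal illustration, not project code; the toy classifier, input, and label are hypothetical placeholders.

```python
# Minimal FGSM sketch: perturb an input in the direction that increases the loss.
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 eps: float = 0.03) -> torch.Tensor:
    """Return a perturbed copy of x that tends to increase the model's loss on label y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step by eps in the direction of the sign of the input gradient.
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Hypothetical usage with a toy classifier and random data.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)        # placeholder "sensor image"
y = torch.tensor([3])               # placeholder true label
x_adv = fgsm_example(model, x, y)   # visually similar to x, but may be misclassified
```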

Our approach takes a broad view: we will develop a mathematical framework while simultaneously creating algorithms and tools that are tested on benchmarks and real data. The theoretical aspects of the project will draw on the team's expertise in learning theory, formal methods, and robust control. The resulting resilient monitoring, detection, and control-synthesis approaches will be evaluated on data, scenarios, and models from the CommonRoad project, Udacity, and OpenPilot.