Detecting Adversarial Examples in Learning-Enabled Cyber-Physical Systems using Variational Autoencoder for Regression

Title: Detecting Adversarial Examples in Learning-Enabled Cyber-Physical Systems using Variational Autoencoder for Regression
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Cai, Feiyang; Li, Jiani; Koutsoukos, Xenofon
Conference Name: 2020 IEEE Security and Privacy Workshops (SPW)
Keywords: adversarial example detection, Autonomous automobiles, composability, CPS modeling, CPS Modeling and Simulation, cps privacy, Cyber-physical systems, delays, inductive conformal prediction, Neural networks, Predictive Metrics, Predictive models, pubcrawl, Resiliency, self-driving vehicles, simulation, Training, Uncertainty, VAE based regression
Abstract

Learning-enabled components (LECs) are widely used in cyber-physical systems (CPS) since they can handle the uncertainty and variability of the environment and increase the level of autonomy. However, it has been shown that LECs such as deep neural networks (DNNs) are not robust, and adversarial examples can cause a model to make false predictions. This paper considers the problem of efficiently detecting adversarial examples in LECs used for regression in CPS. The proposed approach is based on inductive conformal prediction and uses a regression model based on a variational autoencoder. The architecture takes into consideration both the input and the neural network prediction in order to detect adversarial and, more generally, out-of-distribution examples. We demonstrate the method using an advanced emergency braking system implemented in an open-source simulator for self-driving cars, where a DNN is used to estimate the distance to an obstacle. The simulation results show that the method can effectively detect adversarial examples with a short detection delay.

DOI: 10.1109/SPW50608.2020.00050
Citation Key: cai_detecting_2020
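
The abstract describes a detector that combines a VAE-based regression model with inductive conformal prediction (ICP): nonconformity scores from a held-out calibration set are turned into p-values for incoming inputs, and low p-values signal out-of-distribution or adversarial data. The following minimal Python sketch illustrates only the generic ICP detection step, not the authors' implementation; the nonconformity scores, the moving-average smoothing, and the `epsilon`/`window` parameters are illustrative assumptions.

```python
import numpy as np

def icp_pvalue(calibration_scores, test_score):
    """ICP p-value: fraction of calibration nonconformity scores
    at least as large as the test score, with the usual +1 correction."""
    n = len(calibration_scores)
    return (np.sum(calibration_scores >= test_score) + 1) / (n + 1)

def detect_ood(calibration_scores, test_scores, epsilon=0.05, window=5):
    """Flag inputs as out-of-distribution when the moving average of
    ICP p-values drops below epsilon. `test_scores` are nonconformity
    scores of incoming inputs (e.g., from a VAE-based regression model);
    the windowed average is an illustrative smoothing choice."""
    pvals = [icp_pvalue(calibration_scores, s) for s in test_scores]
    alarms = []
    for t in range(len(pvals)):
        recent = pvals[max(0, t - window + 1): t + 1]
        alarms.append(bool(np.mean(recent) < epsilon))
    return pvals, alarms

# Example with synthetic scores: calibration scores drawn from the
# nominal distribution, test scores shifting upward as if the inputs
# became adversarial / out-of-distribution partway through.
rng = np.random.default_rng(0)
calibration = rng.exponential(1.0, size=500)       # in-distribution
test = np.concatenate([rng.exponential(1.0, 20),   # nominal phase
                       rng.exponential(5.0, 20)])  # shifted phase
pvals, alarms = detect_ood(calibration, test)
print("first alarm at step:", alarms.index(True) if True in alarms else None)
```

In this sketch, the gap between the onset of the distribution shift (step 20) and the first alarm corresponds to the detection delay discussed in the abstract; smaller windows react faster but raise more false alarms.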