Biblio

Filters: Author is Alfred Chen, Qi
Chen, Tong, Xiang, Yingxiao, Li, Yike, Tian, Yunzhe, Tong, Endong, Niu, Wenjia, Liu, Jiqiang, Li, Gang, Alfred Chen, Qi.  2021.  Protecting Reward Function of Reinforcement Learning via Minimal and Non-catastrophic Adversarial Trajectory. 2021 40th International Symposium on Reliable Distributed Systems (SRDS). :299-309.
Reward functions are critical hyperparameters with commercial value for individual or distributed reinforcement learning (RL), since slightly different reward functions can yield significantly different performance. However, existing inverse reinforcement learning (IRL) methods can approximate a reward function from nothing more than observed expert trajectories. Thus, in real RL deployments, the key issue is how to generate polluted trajectories that mount an adversarial attack on IRL and thereby protect the reward function. Meanwhile, given the cost of real RL, the generated adversarial trajectories should be minimal and non-catastrophic so that normal RL performance is preserved. In this work, we propose a novel approach that crafts adversarial trajectories disguised as expert ones, degrading IRL performance and achieving anti-IRL capability. First, we design a reward-clustering-based metric that combines the advantages of fine- and coarse-grained IRL assessment, namely expected value difference (EVD) and mean reward loss (MRL). Building on this metric, we devise an adversarial attack based on agglomerative nesting (AGNES) clustering that determines targeted states to serve as starting states for reward perturbation. We then employ an intrinsic fear model to predict the probability of imminent catastrophe, ensuring that the generated adversarial trajectories are non-catastrophic. Extensive experiments with seven state-of-the-art IRL algorithms on the Object World benchmark demonstrate that our approach (a) degrades IRL performance and (b) produces minimal, non-catastrophic adversarial trajectories.
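The AGNES-based targeting step described in the abstract lends itself to a short illustration. The following is a minimal, hypothetical Python sketch of how agglomerative (AGNES-style) clustering over per-state reward estimates might pick starting states for perturbation. The linkage method, cluster count, smallest-cluster heuristic, and the function name select_target_states are all illustrative assumptions, not the paper's actual procedure.

# Hypothetical sketch of AGNES-style target-state selection: cluster states
# by their estimated rewards, then pick the states in the smallest cluster
# as starting states for reward perturbation. All heuristics here are
# illustrative assumptions, not the paper's exact method.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def select_target_states(state_rewards: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    """Return indices of states chosen as perturbation starting points.

    state_rewards: shape (n_states,), estimated reward for each state.
    """
    # Agglomerative (AGNES-style) hierarchical clustering on the 1-D reward
    # values. Ward linkage is an assumption; the paper does not specify it.
    z = linkage(state_rewards.reshape(-1, 1), method="ward")
    labels = fcluster(z, t=n_clusters, criterion="maxclust")

    # Heuristic (our assumption): target the smallest cluster, whose states
    # carry the most distinctive reward signal, keeping the set of perturbed
    # starting states minimal.
    sizes = np.bincount(labels)[1:]          # fcluster labels start at 1
    smallest = int(np.argmin(sizes)) + 1
    return np.flatnonzero(labels == smallest)

if __name__ == "__main__":
    # Toy reward landscape: a large low-reward group and a small high-reward one.
    rng = np.random.default_rng(0)
    rewards = np.concatenate([rng.normal(0.0, 0.1, 40),
                              rng.normal(1.0, 0.1, 8)])
    print(select_target_states(rewards))     # indices of the small cluster

On this toy input, the sketch returns the indices of the small high-reward group, mirroring the idea of concentrating perturbations on a few targeted states rather than across the whole trajectory.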