Biblio

Filters: Author is Gao, Song
2023-05-11
Zhu, Lei, Huang, He, Gao, Song, Han, Jun, Cai, Chao.  2022.  False Data Injection Attack Detection Method Based on Residual Distribution of State Estimation. 2022 12th International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER). :724–728.
While acquiring precise and intelligent state sensing and control capabilities, the cyber-physical power system is constantly exposed to potential cyber-attack threats. A false data injection (FDI) attack attempts to disrupt the normal operation of the power system through the coupling of the cyber and physical sides. To address the fact that a stealthy FDI attack can bypass bad data detection and thus trigger false commands, a system feature extraction method for state estimation is proposed, together with a corresponding FDI attack detection method. Based on the principles of state estimation and stealthy FDI attacks, we analyze the impact of an FDI attack on the measurement residual. A Gaussian fitting method is used to extract the characteristic parameters of the residual distribution as the system feature, and attack detection is implemented by comparison within a sliding time window. Simulation results show that the proposed attack detection method is effective and efficient.
ISSN: 2642-6633
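
As a rough illustration of the detection pipeline this abstract outlines (not the authors' implementation), the sketch below fits a Gaussian to attack-free state-estimation residuals and flags sliding windows whose fitted parameters drift from that baseline. The function names, the threshold rule, and the synthetic residuals are all assumptions for illustration.

    import numpy as np
    from scipy.stats import norm

    def fit_residual_gaussian(residuals):
        """Fit a Gaussian to measurement residuals; the fitted (mu, sigma)
        act as the 'system feature' extracted from the residual distribution."""
        return norm.fit(residuals)

    def detect_fdi(residual_stream, baseline, window=50, k=3.0):
        """Flag sliding windows whose fitted residual distribution drifts
        from the attack-free baseline (illustrative threshold rule)."""
        mu0, sigma0 = baseline
        alarms = []
        for start in range(len(residual_stream) - window + 1):
            mu, sigma = norm.fit(residual_stream[start:start + window])
            # A large shift in the fitted mean or spread relative to the
            # attack-free baseline suggests injected false data.
            if abs(mu - mu0) > k * sigma0 / np.sqrt(window) or sigma > k * sigma0:
                alarms.append(start)
        return alarms

    # Synthetic demo: clean residuals, then a segment with an injected bias.
    rng = np.random.default_rng(0)
    clean = rng.normal(0.0, 0.1, 500)
    attacked = np.concatenate([clean, rng.normal(0.5, 0.1, 100)])
    alarms = detect_fdi(attacked, fit_residual_gaussian(clean))
    print(f"{len(alarms)} windows flagged, first at sample {alarms[0]}")
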
2021-10-12
Zhao, Haojun, Lin, Yun, Gao, Song, Yu, Shui.  2020.  Evaluating and Improving Adversarial Attacks on DNN-Based Modulation Recognition. GLOBECOM 2020 - 2020 IEEE Global Communications Conference. :1–5.
The discovery of adversarial examples poses a serious risk to deep neural networks (DNNs). By adding a subtle perturbation that is imperceptible to the human eye, an adversary can fool a well-trained DNN model into completely changing the predicted categories of the input samples. However, research on adversarial attacks in the field of modulation recognition has mainly focused on increasing the prediction error of the classifier, while ignoring the importance of keeping the attack perceptually invisible. Aiming at the task of DNN-based modulation recognition, this study designs the Fitting Difference as a metric to measure the perturbed waveforms and proposes a new method, the Nesterov Adam Iterative Method, to generate adversarial examples. We show that the proposed algorithm not only mounts excellent white-box attacks but can also initiate attacks on a black-box model. Moreover, our method reduces the perceptibility of the attack waveforms to a certain degree, thereby reducing the risk of an attack being detected.
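
The abstract names the Nesterov Adam Iterative Method without specifying it, so the sketch below is only one plausible reading: a standard iterative gradient attack (PyTorch) that takes a Nesterov look-ahead step and scales updates with Adam-style moment estimates. Every function name and hyperparameter here is illustrative, not the authors' algorithm.

    import torch

    def nesterov_adam_attack(model, x, y, eps=0.05, steps=10,
                             lr=0.01, mu=0.9, beta2=0.999):
        """One plausible 'Nesterov + Adam' iterative attack: gradients are
        taken at a momentum look-ahead point and rescaled by Adam-style
        moment estimates; the perturbation stays within an L-infinity
        budget eps. Hypothetical, not the paper's exact algorithm."""
        delta = torch.zeros_like(x)
        m = torch.zeros_like(x)  # first moment (momentum)
        v = torch.zeros_like(x)  # second moment (Adam-style)
        loss_fn = torch.nn.CrossEntropyLoss()
        for t in range(1, steps + 1):
            # Nesterov look-ahead: evaluate the loss at the anticipated point.
            x_adv = (x + delta + lr * mu * m).detach().requires_grad_(True)
            grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
            m = mu * m + (1 - mu) * grad
            v = beta2 * v + (1 - beta2) * grad ** 2
            # Bias-corrected Adam step, projected back into the eps-ball.
            step = lr * (m / (1 - mu ** t)) / ((v / (1 - beta2 ** t)).sqrt() + 1e-8)
            delta = (delta + step).clamp(-eps, eps)
        return (x + delta).detach()
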