Biblio

Filters: Keyword is augmentation method
Cheng, Z., Chow, M.-Y. 2020. An Augmented Bayesian Reputation Metric for Trustworthiness Evaluation in Consensus-based Distributed Microgrid Energy Management Systems with Energy Storage. 2020 2nd IEEE International Conference on Industrial Electronics for Sustainable Energy Systems (IESES). 1:215–220.
The consensus-based distributed microgrid energy management system is one of the most widely used distributed control strategies in the microgrid domain. To improve its cybersecurity, the system needs to evaluate the trustworthiness of participating agents in addition to conventional cryptographic protections. This paper proposes a novel augmented reputation metric to evaluate agents' trustworthiness in a distributed fashion. The proposed metric adopts a novel augmentation method that substantially improves trust evaluation and attack detection performance under three typical difficult-to-detect attack patterns. The metric is implemented and validated on a real-time hardware-in-the-loop (HIL) microgrid testbed.
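The abstract does not spell out the paper's augmentation rule, but the underlying Bayesian reputation idea can be sketched with a standard Beta-distribution trust score. In the minimal sketch below, the `weight` parameter and the deviation threshold are hypothetical stand-ins for how augmented evidence might be folded into the posterior; they are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of a Beta-distribution (Bayesian) reputation score.
# The `weight` argument is a hypothetical stand-in for the paper's
# augmentation of suspicious evidence; all numbers below are made up.
from dataclasses import dataclass

@dataclass
class BetaReputation:
    alpha: float = 1.0  # prior pseudo-count of cooperative behavior
    beta: float = 1.0   # prior pseudo-count of deviating behavior

    def update(self, consistent: bool, weight: float = 1.0) -> None:
        """Fold one observed interaction into the posterior counts."""
        if consistent:
            self.alpha += weight
        else:
            self.beta += weight

    @property
    def score(self) -> float:
        # Expected value of the Beta posterior: a trust score in (0, 1).
        return self.alpha / (self.alpha + self.beta)

# Example: an agent whose reported state drifts away from consensus.
rep = BetaReputation()
for deviation in [0.01, 0.02, 0.5, 0.6]:      # observed deviations (made up)
    rep.update(consistent=deviation < 0.1,    # simple threshold test (assumed)
               weight=1.0 + deviation)        # weight larger deviations more
print(f"trust score: {rep.score:.3f}")        # falls below the neutral 0.5 prior
```

Weighting updates by deviation magnitude is one plausible way an augmentation could sharpen the separation between honest agents and stealthy attackers; the paper's actual construction may differ.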
Abusnaina, A., Khormali, A., Alasmary, H., Park, J., Anwar, A., Mohaisen, A. 2019. Adversarial Learning Attacks on Graph-based IoT Malware Detection Systems. 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS). :1296–1305.
IoT malware detection using control flow graph (CFG)-based features and deep learning networks has been widely explored. The main goal of this study is to investigate the robustness of such models against adversarial learning. We designed two approaches to craft adversarial IoT software: off-the-shelf methods and a Graph Embedding and Augmentation (GEA) method. For the off-the-shelf approach, we examine eight different adversarial learning methods that force the model to misclassify. The GEA approach aims to preserve the functionality and practicality of the generated adversarial sample by carefully embedding a benign sample into a malicious one. Intensive experiments evaluate the performance of the proposed methods, showing that the off-the-shelf adversarial attack methods achieve a misclassification rate of 100%. In addition, we observed that the GEA approach causes all IoT malware samples to be misclassified as benign. These findings highlight the essential need for detection tools that are more robust against adversarial learning, including features that are not easy to manipulate, unlike CFG-based features. The implications of the study are broad, since the graph-based approach challenged in this work is widely used in other applications.
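The abstract does not give the exact GEA construction, but its core idea can be illustrated: graft a benign CFG onto a malicious one behind a branch that is never taken at run time, so graph-level features shift toward the benign sample while the executed malicious path is unchanged. The sketch below uses networkx on toy graphs; the opaque-predicate edge and all names are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of the GEA idea: embed a benign CFG into a malicious
# one behind a never-taken branch. Toy graphs only; node ids stand in for
# basic blocks, and the `taken=False` edge models an opaque predicate.
import networkx as nx

def embed_benign(malicious: nx.DiGraph, benign: nx.DiGraph,
                 mal_entry, benign_entry) -> nx.DiGraph:
    """Return a combined CFG whose executed path equals the malicious CFG."""
    benign = nx.relabel_nodes(benign, lambda n: f"b_{n}")  # avoid id collisions
    combined = nx.union(malicious, benign)
    # Hypothetical opaque predicate: a branch from the malicious entry into
    # the benign subgraph that is never taken at run time, so functionality
    # is preserved while graph features absorb the benign structure.
    combined.add_edge(mal_entry, f"b_{benign_entry}", taken=False)
    return combined

mal = nx.DiGraph([(0, 1), (1, 2)])                 # toy malicious CFG
ben = nx.DiGraph([(0, 1), (0, 2), (1, 3), (2, 3)]) # toy benign CFG
adv = embed_benign(mal, ben, mal_entry=0, benign_entry=0)
# Structural features a CFG-based detector might use (node/edge counts,
# density, etc.) now reflect both samples:
print(adv.number_of_nodes(), adv.number_of_edges())
```

Because CFG-based features summarize the whole graph rather than the executed path, this kind of structure-only augmentation is enough to move a sample across the decision boundary, which is consistent with the 100% misclassification the authors report.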