Title | FALIoTSE: Towards Federated Adversarial Learning for IoT Search Engine Resiliency |
Publication Type | Conference Paper |
Year of Publication | 2021 |
Authors | Tian, Pu; Hatcher, William Grant; Liao, Weixian; Yu, Wei; Blasch, Erik
Conference Name | 2021 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech) |
Keywords | Adversarial Machine Learning, deep generative model, federated learning, IoT search engine (IoTSE), neural network resiliency, Perturbation methods, Recurrent neural networks, Resiliency, search engines, Sensor systems, Time series analysis, Training, White-Box attack
Abstract | To improve efficiency and resource usage in data retrieval, an Internet of Things (IoT) search engine organizes a vast amount of scattered data and responds to client queries with processed results. Machine learning provides a deep understanding of complex patterns and enables enhanced feedback to users through well-trained models. Nonetheless, machine learning models are prone to adversarial attacks via the injection of carefully crafted perturbations, resulting in subverted outputs. In particular, adversarial attacks on time-series data demand urgent attention, as sensors in IoT systems are collecting an increasing volume of sequential data. This paper investigates adversarial attacks on time-series analysis in an IoT search engine (IoTSE) system. Specifically, we consider the Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) as our base model, implemented in a simulated federated learning scheme. We propose Federated Adversarial Learning for IoT Search Engine (FALIoTSE), which exploits the shared parameters of the federated model as the target for adversarial example generation and resiliency evaluation. Using a real-world smart parking garage dataset, we demonstrate the impact of an attack on FALIoTSE under various levels of perturbation. The experiments show that the training error increases significantly when noise is injected into the shared gradients.
DOI | 10.1109/DASC-PICom-CBDCom-CyberSciTech52372.2021.00058 |
Citation Key | tian_faliotse_2021 |
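
The abstract describes perturbing the gradients shared in a federated learning round for an LSTM time-series model. The sketch below is not the authors' code; it is a minimal illustrative example, assuming a FedAvg-style aggregation, a toy univariate LSTM forecaster, synthetic data standing in for the parking-garage series, and hypothetical names (LSTMForecaster, local_update, federated_average) chosen here for illustration.

```python
"""Illustrative sketch (not the paper's implementation): one federated-averaging
round for a toy LSTM forecaster, with Gaussian noise added to the client
gradients to mimic the gradient-perturbation attack described in the abstract."""
import copy
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """Minimal LSTM regressor: predicts the next value of a univariate series."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                     # x: (batch, seq_len, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])       # predict from the last hidden state

def local_update(global_model, data, target, noise_std=0.0, lr=1e-2):
    """One client step: compute gradients on local data, optionally perturb
    them with Gaussian noise (the 'attack'), and return the updated weights."""
    model = copy.deepcopy(global_model)
    loss = nn.functional.mse_loss(model(data), target)
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            grad = p.grad + noise_std * torch.randn_like(p.grad)  # perturbed gradient
            p -= lr * grad
    return model.state_dict(), loss.item()

def federated_average(state_dicts):
    """FedAvg: element-wise mean of the client parameter dictionaries."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

if __name__ == "__main__":
    torch.manual_seed(0)
    global_model = LSTMForecaster()
    # Synthetic stand-in for per-client windows of a sensor time series.
    clients = [(torch.randn(16, 24, 1), torch.randn(16, 1)) for _ in range(3)]
    for noise_std in (0.0, 0.5):              # clean round vs. perturbed round
        states, losses = [], []
        for x, y in clients:
            sd, loss = local_update(global_model, x, y, noise_std=noise_std)
            states.append(sd)
            losses.append(loss)
        global_model.load_state_dict(federated_average(states))
        print(f"noise_std={noise_std}: mean client loss {sum(losses)/len(losses):.4f}")
```

The noise_std parameter here is only a stand-in for the "various levels of perturbation" mentioned in the abstract; the paper's actual attack formulation and evaluation protocol are given in the full text (DOI above).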