Detecting AI Trojans Using Meta Neural Analysis

Title: Detecting AI Trojans Using Meta Neural Analysis
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Xu, Xiaojun, Wang, Qi, Li, Huichen, Borisov, Nikita, Gunter, Carl A., Li, Bo
Conference Name: 2021 IEEE Symposium on Security and Privacy (SP)
Keywords: AI Poisoning, Data models, Human Behavior, machine learning, Natural languages, Neural networks, Pipelines, Predictive models, privacy, pubcrawl, resilience, Resiliency, Scalability
Abstract: In machine learning Trojan attacks, an adversary trains a corrupted model that obtains good performance on normal data but behaves maliciously on data samples with certain trigger patterns. Several approaches have been proposed to detect such attacks, but they make undesirable assumptions about the attack strategies or require direct access to the trained models, which restricts their utility in practice. This paper addresses these challenges by introducing a Meta Neural Trojan Detection (MNTD) pipeline that does not make assumptions on the attack strategies and only needs black-box access to models. The strategy is to train a meta-classifier that predicts whether a given target model is Trojaned. To train the meta-model without knowledge of the attack strategy, we introduce a technique called jumbo learning that samples a set of Trojaned models following a general distribution. We then dynamically optimize a query set together with the meta-classifier to distinguish between Trojaned and benign models. We evaluate MNTD with experiments on vision, speech, tabular data and natural language text datasets, and against different Trojan attacks such as data poisoning attack, model manipulation attack, and latent attack. We show that MNTD achieves 97% detection AUC score and significantly outperforms existing detection approaches. In addition, MNTD generalizes well and achieves high detection performance against unforeseen attacks. We also propose a robust MNTD pipeline which achieves around 90% detection AUC even when the attacker aims to evade the detection with full knowledge of the system.
DOI: 10.1109/SP40001.2021.00034
Citation Key: xu_detecting_2021
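
The abstract describes MNTD's core idea: train a meta-classifier over a set of shadow models (with Trojaned shadows sampled via jumbo learning) while jointly tuning the query inputs used to probe a black-box target. The sketch below is a minimal, hypothetical illustration of that joint optimization in PyTorch; the dimensions, hyperparameters, and the placeholder shadow-model construction (NUM_QUERIES, IN_DIM, make_shadow_model) are assumptions for illustration only and are not taken from the paper.

import torch
import torch.nn as nn

# Illustrative dimensions only; not taken from the paper.
NUM_QUERIES, IN_DIM, NUM_CLASSES = 8, 10, 2

def make_shadow_model(trojaned: bool) -> nn.Module:
    # Stand-in for "jumbo learning": the paper samples Trojaned shadow models
    # from a general distribution over trigger patterns; here a benign model's
    # weights are simply perturbed so the sketch stays self-contained.
    model = nn.Sequential(nn.Linear(IN_DIM, 32), nn.ReLU(),
                          nn.Linear(32, NUM_CLASSES))
    if trojaned:
        with torch.no_grad():
            for p in model.parameters():
                p.add_(0.5 * torch.randn_like(p))
    for p in model.parameters():  # shadow models stay fixed during meta-training
        p.requires_grad_(False)
    return model

# Shadow model set with ground-truth labels (1 = Trojaned, 0 = benign).
shadow_set = [(make_shadow_model(bool(t)), float(t)) for t in [0, 1] * 64]

# The query set is a learnable tensor optimized jointly with the meta-classifier.
queries = nn.Parameter(torch.randn(NUM_QUERIES, IN_DIM))
meta = nn.Sequential(nn.Linear(NUM_QUERIES * NUM_CLASSES, 64), nn.ReLU(),
                     nn.Linear(64, 1))
optimizer = torch.optim.Adam(list(meta.parameters()) + [queries], lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(10):
    for model, label in shadow_set:
        # Black-box access only: submit the queries, read the output scores.
        features = model(queries).flatten()
        logit = meta(features)
        loss = loss_fn(logit, torch.tensor([label]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# At detection time, the target model is probed with the tuned queries and the
# meta-classifier's output is thresholded to decide whether it is Trojaned.

Jointly optimizing the query set is what allows detection with only black-box access: the queries are tuned so that Trojaned and benign models respond to them in distinguishable ways, which the meta-classifier then learns to separate.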