Toward Effective Moving Target Defense Against Adversarial AI

Title: Toward Effective Moving Target Defense Against Adversarial AI
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Martin, Peter, Fan, Jian, Kim, Taejin, Vesey, Konrad, Greenwald, Lloyd
Conference Name: MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM)
Keywords: Adaptation models, Adversarial Machine Learning, Conferences, cybersecurity, Deep Learning, face recognition, machine learning security, Metrics, military communication, moving target defense, object detection, pubcrawl, resilience, Resiliency, Scalability, Training
Abstract: Deep learning (DL) models have been shown to be vulnerable to adversarial attacks. DL model security against adversarial attacks is critical to using DL-trained models in forward deployed systems, e.g., facial recognition, document characterization, or object detection. We provide results and lessons learned applying a moving target defense (MTD) strategy against iterative, gradient-based adversarial attacks. Our strategy involves (1) training a diverse ensemble of DL models, (2) applying randomized affine transformations to inputs, and (3) randomizing output decisions. We report a primary lesson that this strategy is ineffective against a white-box adversary, which could completely circumvent output randomization using a deterministic surrogate. We reveal how our ensemble models lacked the diversity necessary for effective MTD. We also evaluate our MTD strategy against a black-box adversary employing an ensemble surrogate model. We conclude that an MTD strategy against black-box adversarial attacks crucially depends on lack of transferability between models.
DOI: 10.1109/MILCOM52596.2021.9652915
Citation Key: martin_toward_2021
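
The abstract describes a three-part MTD inference strategy: a diverse model ensemble, randomized affine input transformations, and randomized output decisions. The Python sketch below is a minimal, hypothetical illustration of that flow under stated assumptions; it is not the authors' implementation. All names (random_affine, mtd_predict, the stub models) are invented for illustration, and the shift-only transform stands in for a full random affine transformation.

    import random
    import numpy as np

    def random_affine(x, max_shift=2):
        # Randomly shift the input a few pixels; a stand-in for a full random
        # affine transform (rotation/scale/shear would be included in practice).
        dx = random.randint(-max_shift, max_shift)
        dy = random.randint(-max_shift, max_shift)
        return np.roll(np.roll(x, dx, axis=0), dy, axis=1)

    def mtd_predict(models, x, subset_size=3):
        # (1) draw a random subset of the (ideally diverse) model ensemble
        chosen = random.sample(models, k=min(subset_size, len(models)))
        # (2) apply a randomized affine transformation to the input
        x_t = random_affine(x)
        # (3) randomize the output decision by returning one member's prediction
        scores = [m(x_t) for m in chosen]  # each model returns a vector of class scores
        return int(np.argmax(random.choice(scores)))

    # Example usage with stub "models" (hypothetical stand-ins for trained DL models):
    models = [lambda x, i=i: np.bincount([i % 10], minlength=10).astype(float) for i in range(5)]
    print(mtd_predict(models, np.zeros((28, 28))))

As the abstract notes, randomness of this kind helps only if the ensemble members are genuinely diverse and adversarial examples do not transfer between them; a deterministic surrogate can otherwise average away the randomization.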