Title | Toward Effective Moving Target Defense Against Adversarial AI |
Publication Type | Conference Paper |
Year of Publication | 2021 |
Authors | Martin, Peter, Fan, Jian, Kim, Taejin, Vesey, Konrad, Greenwald, Lloyd |
Conference Name | MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM) |
Keywords | Adaptation models, Adversarial Machine Learning, cybersecurity, Deep Learning, face recognition, machine learning security, military communication, moving target defense, object detection, resilience, Scalability, Training |
Abstract | Deep learning (DL) models have been shown to be vulnerable to adversarial attacks. DL model security against adversarial attacks is critical to using DL-trained models in forward-deployed systems, e.g., facial recognition, document characterization, or object detection. We provide results and lessons learned from applying a moving target defense (MTD) strategy against iterative, gradient-based adversarial attacks. Our strategy involves (1) training a diverse ensemble of DL models, (2) applying randomized affine transformations to inputs, and (3) randomizing output decisions. We report a primary lesson: this strategy is ineffective against a white-box adversary, which can completely circumvent output randomization using a deterministic surrogate. We reveal how our ensemble models lacked the diversity necessary for effective MTD. We also evaluate our MTD strategy against a black-box adversary employing an ensemble surrogate model. We conclude that an MTD strategy against black-box adversarial attacks crucially depends on a lack of transferability between models. |
DOI | 10.1109/MILCOM52596.2021.9652915 |
Citation Key | martin_toward_2021 |
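The three-step MTD strategy described in the abstract (random model selection from an ensemble, randomized affine input transformation, randomized output decision) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy linear "models", the translation-only affine transform, and the softmax-sampling output rule are all assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_affine(x, max_shift=2):
    # Randomized input transformation: a random integer translation
    # (a simple affine map; the paper's exact transform family is not
    # reproduced here).
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(x, shift=(dy, dx), axis=(0, 1))

def mtd_predict(models, x, temperature=1.0):
    """Moving-target prediction over one input:
    (1) sample one model from a diverse ensemble,
    (2) apply a randomized affine transform to the input,
    (3) randomize the output decision by sampling from the softmax
        distribution instead of taking the argmax."""
    model = models[rng.integers(len(models))]   # (1) random model choice
    logits = model(random_affine(x))            # (2) randomized input
    p = np.exp(logits - logits.max())           # stable softmax
    p /= p.sum()
    return int(rng.choice(len(p), p=p))         # (3) randomized decision

# Toy "ensemble": three linear classifiers with independent random
# weights, standing in for independently trained DL models.
models = [lambda x, W=rng.normal(size=(64, 3)): x.ravel() @ W
          for _ in range(3)]
x = rng.normal(size=(8, 8))
print(mtd_predict(models, x))  # a class index in {0, 1, 2}
```

Because every call re-randomizes the model choice, the input transform, and the decision, repeated queries on the same input can return different labels; the abstract's central caveat is that a white-box adversary can sidestep exactly this randomness by attacking a deterministic surrogate.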