Biblio

Filters: Author is Vesey, Konrad
2022-02-22
Martin, Peter, Fan, Jian, Kim, Taejin, Vesey, Konrad, Greenwald, Lloyd.  2021.  Toward Effective Moving Target Defense Against Adversarial AI. MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM). :993-998.
Deep learning (DL) models have been shown to be vulnerable to adversarial attacks. DL model security against adversarial attacks is critical to using DL-trained models in forward deployed systems, e.g. facial recognition, document characterization, or object detection. We provide results and lessons learned from applying a moving target defense (MTD) strategy against iterative, gradient-based adversarial attacks. Our strategy involves (1) training a diverse ensemble of DL models, (2) applying randomized affine transformations to inputs, and (3) randomizing output decisions. We report a primary lesson that this strategy is ineffective against a white-box adversary, which could completely circumvent output randomization using a deterministic surrogate. We reveal how our ensemble models lacked the diversity necessary for effective MTD. We also evaluate our MTD strategy against a black-box adversary employing an ensemble surrogate model. We conclude that an MTD strategy against black-box adversarial attacks crucially depends on a lack of transferability between models.
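
A minimal sketch of the kind of randomized inference the abstract describes is given below, assuming PyTorch and torchvision. The SmallCNN architecture, the MovingTargetEnsemble wrapper, the affine transform ranges, and the per-query choice of a single ensemble member are illustrative assumptions for exposition, not the authors' implementation or the paper's evaluated models.

```python
# Sketch of an MTD-style classifier: (1) an ensemble of independently
# trained models, (2) a randomized affine transform applied to each input,
# (3) a randomized output decision (here: a randomly chosen member's vote).
import random
import torch
import torch.nn as nn
import torchvision.transforms as T


class SmallCNN(nn.Module):
    """Illustrative stand-in for one ensemble member (hypothetical model)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(16 * 4 * 4, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


class MovingTargetEnsemble:
    """Wraps an ensemble with randomized input transforms and output choice."""
    def __init__(self, models, degrees: float = 10.0, translate=(0.1, 0.1)):
        self.models = [m.eval() for m in models]
        # Randomized affine transform re-sampled on every query.
        self.transform = T.RandomAffine(degrees=degrees, translate=translate)

    @torch.no_grad()
    def predict(self, x: torch.Tensor) -> torch.Tensor:
        x = self.transform(x)               # (2) randomized affine input transform
        model = random.choice(self.models)  # (3) randomized output decision
        return model(x).argmax(dim=1)


# Usage: three separately initialized models stand in for a diversely trained ensemble.
ensemble = MovingTargetEnsemble([SmallCNN() for _ in range(3)])
images = torch.rand(4, 3, 32, 32)           # dummy batch of RGB images
print(ensemble.predict(images))
```

As the abstract notes, a white-box adversary can sidestep this kind of randomization by attacking a deterministic surrogate, so the value of such a wrapper hinges on limiting transferability across the ensemble members rather than on the randomization alone.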