Martin, Peter, Fan, Jian, Kim, Taejin, Vesey, Konrad, Greenwald, Lloyd.
2021.
Toward Effective Moving Target Defense Against Adversarial AI. MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM). :993–998.
Deep learning (DL) models have been shown to be vulnerable to adversarial attacks. DL model security against adversarial attacks is critical to using DL-trained models in forward-deployed systems, e.g., facial recognition, document characterization, or object detection. We provide results and lessons learned from applying a moving target defense (MTD) strategy against iterative, gradient-based adversarial attacks. Our strategy involves (1) training a diverse ensemble of DL models, (2) applying randomized affine transformations to inputs, and (3) randomizing output decisions. We report a primary lesson: this strategy is ineffective against a white-box adversary, which can completely circumvent output randomization using a deterministic surrogate. We show how our ensemble models lacked the diversity necessary for effective MTD. We also evaluate our MTD strategy against a black-box adversary employing an ensemble surrogate model. We conclude that an MTD strategy against black-box adversarial attacks depends crucially on a lack of transferability between models.
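The three-part strategy in the abstract above can be sketched in a few lines. This is a hypothetical toy illustration, not the authors' implementation: the "models" are stand-in threshold classifiers, and the affine transform parameters are assumed for the example.

```python
import random

# Sketch of the MTD strategy: (1) a diverse model ensemble,
# (2) a randomized affine input transformation, (3) a randomized
# output decision via random model selection per query.

def random_affine(x, scale_range=(0.9, 1.1), shift_range=(-0.1, 0.1)):
    """Apply a randomly sampled affine transform a*x_i + b to each feature."""
    a = random.uniform(*scale_range)
    b = random.uniform(*shift_range)
    return [a * xi + b for xi in x]

def mtd_predict(ensemble, x):
    """Randomize over both the ensemble member and the input transform."""
    model = random.choice(ensemble)      # (1) + (3): random model choice
    return model(random_affine(x))       # (2): randomized input

# Toy "ensemble" of threshold classifiers standing in for DL models.
ensemble = [
    lambda x: int(sum(x) > 0.5),
    lambda x: int(sum(x) > 0.4),
    lambda x: int(sum(x) > 0.6),
]
```

The paper's lesson applies directly to this sketch: because the randomization only jitters inputs and selects among similar models, a white-box adversary can attack a deterministic surrogate (e.g., the ensemble average) and transfer the result.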
Jenkins, Chris, Vugrin, Eric, Manickam, Indu, Troutman, Nicholas, Hazelbaker, Jacob, Krakowiak, Sarah, Maxwell, Josh, Brown, Richard.
2021.
Moving Target Defense for Space Systems. 2021 IEEE Space Computing Conference (SCC). :60–71.
Space systems provide many critical functions to the military, federal agencies, and infrastructure networks. Nation-state adversaries have shown the ability to disrupt critical infrastructure through cyber-attacks targeting systems of networked, embedded computers. Moving target defenses (MTDs) have been proposed as a means for defending various networks and systems against potential cyber-attacks. MTDs differ from many cyber resilience technologies in that they do not necessarily require detection of an attack to mitigate the threat. We devised an MTD algorithm and tested its application to a real-time network. First, we demonstrated MTD usage with a real-time protocol under constraints not typically found in best-effort networks. Second, we quantified the cyber resilience benefit of MTD against an exfiltration attack by an adversary; in our experiment, employing MTD reduced adversarial knowledge by 97%. Even when the adversary can detect address changes, adversarial knowledge is still reduced compared to static addressing schemes. Furthermore, we analyzed the core performance of the algorithm and characterized its unpredictability using nine different statistical metrics. The characterization showed that the algorithm has good unpredictability characteristics, with some opportunity for improvement to produce greater randomness.
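The adversarial-knowledge reduction described above can be illustrated with a simple address-hopping simulation. This is an assumed sketch, not the paper's algorithm: the defender re-randomizes its address each epoch, and an adversary watching one fixed address measures how often it still sees the target.

```python
import random

# Compare static addressing against per-epoch address randomization:
# "adversarial knowledge" here is the fraction of epochs in which an
# adversary monitoring one fixed address observes the target.

def observed_fraction(epochs, pool_size, hop, seed=0):
    rng = random.Random(seed)
    addr = 0                                 # initial (and static) address
    hits = 0
    for _ in range(epochs):
        if hop:
            addr = rng.randrange(pool_size)  # moving-target address change
        if addr == 0:                        # adversary scans address 0
            hits += 1
    return hits / epochs

static_knowledge = observed_fraction(10000, 64, hop=False)  # always seen
mtd_knowledge = observed_fraction(10000, 64, hop=True)      # roughly 1/64
```

With a pool of 64 addresses the adversary's observation rate drops to about 1/64, i.e., a reduction of adversarial knowledge on the order the paper reports; the exact 97% figure depends on the paper's specific attack model and address space.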
Qiu, Mingyang, Meng, Qingwei, Fu, Yan, Wang, Xikang.
2021.
Analysis of Zero-Day Virus Suppression Strategy based on Moving Target Defense. 2021 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC). :1–4.
To effectively suppress the spread of zero-day viruses in a network, a zero-day virus suppression strategy is proposed. Based on the mechanism of zero-day virus transmission and the idea of platform dynamic defense, corresponding methods of virus transmission suppression are put forward. By varying the platform switching frequency, the scale of zero-day virus transmission and the suppression effect are simulated in a small-world network model. Theoretical analysis and computer simulation show that platform switching can effectively restrain the spread of the virus.
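The simulation idea above can be sketched as follows, under assumed dynamics (not the paper's exact model): nodes on a Watts-Strogatz-style small-world graph carry a platform-specific zero-day virus, and switching a node's platform clears the infection, so a higher switching probability suppresses the spread.

```python
import random

def small_world(n, k, p, rng):
    """Ring lattice with k nearest neighbors; each edge rewired w.p. p."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            t = rng.randrange(n) if rng.random() < p else (i + j) % n
            if t != i:
                adj[i].add(t)
                adj[t].add(i)
    return adj

def spread(adj, steps, p_switch, seed):
    """Return the fraction of nodes ever infected after `steps` rounds."""
    rng = random.Random(seed)
    infected, ever = {0}, {0}
    for _ in range(steps):
        # Platform switching cures infected nodes (virus is platform-bound).
        infected = {i for i in infected if rng.random() >= p_switch}
        new = {j for i in infected for j in adj[i]} - ever
        infected |= new
        ever |= new
    return len(ever) / len(adj)

graph = small_world(200, 4, 0.1, random.Random(1))
no_switching = spread(graph, 20, p_switch=0.0, seed=2)
fast_switching = spread(graph, 20, p_switch=0.9, seed=2)
```

With no switching the infection sweeps the graph, while frequent switching keeps the outbreak subcritical, matching the paper's qualitative conclusion that switching frequency controls the transmission scale.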
Qiu, Yihao, Wu, Jun, Mumtaz, Shahid, Li, Jianhua, Al-Dulaimi, Anwer, Rodrigues, Joel J. P. C.
2021.
MT-MTD: Muti-Training based Moving Target Defense Trojaning Attack in Edged-AI network. ICC 2021 - IEEE International Conference on Communications. :1–6.
The evolution of deep learning has promoted the popularization of smart devices. However, due to the insufficient development of computing hardware, the ability to conduct local training on smart devices is greatly restricted, and it is usually necessary to deploy ready-made models. This opacity makes smart devices vulnerable to deep learning backdoor attacks. Some existing countermeasures against backdoor attacks rely on the attacker's ignorance of the defense; once the attacker knows the defense mechanism, it can easily be circumvented. In this paper, we propose a Trojaning attack defense framework based on a moving target defense (MTD) strategy. Based on an analysis of the attack-defense game types and the confrontation process, a moving target defense model based on a signaling game is constructed. The simulation results show that in most cases our technique greatly increases the attacker's cost, thereby ensuring the availability of Deep Neural Networks (DNNs) and protecting them from Trojaning attacks.
Duvalsaint, Danielle, Blanton, R. D. Shawn.
2021.
Characterizing Corruptibility of Logic Locks using ATPG. 2021 IEEE International Test Conference (ITC). :213–222.
The outsourcing of portions of the integrated circuit design chain, mainly fabrication, to untrusted parties has led to increasing concern regarding the security of fabricated ICs. To mitigate these concerns, a number of approaches have been developed, including logic locking. The development of different logic locking methods has spurred research on security evaluations, typically aimed at uncovering a secret key. In this paper, we make the case that corruptibility under incorrect keys is an important metric of logic locking. To measure corruptibility for circuits too large to exhaustively simulate, we describe an ATPG-based method for measuring the corruptibility of incorrect keys. Results from applying the method to various circuits demonstrate that it is effective at measuring the corruptibility of different locks.
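The corruptibility metric discussed above can be made concrete on a circuit small enough to simulate exhaustively. This is an assumed toy example (an XOR-locked AND gate), not the paper's ATPG-based method, which exists precisely to avoid such exhaustive enumeration on large circuits.

```python
# Corruptibility of an incorrect key = fraction of input patterns whose
# output under that key differs from the output under the correct key.

def locked_circuit(x, key):
    """2-input AND gate whose output is XOR-locked with a 1-bit key."""
    return (x[0] & x[1]) ^ key

def corruptibility(key, correct_key=0):
    """Exhaustively compare outputs under `key` against the correct key."""
    inputs = [(a, b) for a in (0, 1) for b in (0, 1)]
    wrong = sum(locked_circuit(x, key) != locked_circuit(x, correct_key)
                for x in inputs)
    return wrong / len(inputs)
```

Here the single wrong key corrupts every output (corruptibility 1.0) because the XOR lock inverts the output unconditionally; real locks produce intermediate corruptibilities, which is what makes the metric discriminating.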
Wang, Mingzhe, Liang, Jie, Zhou, Chijin, Chen, Yuanliang, Wu, Zhiyong, Jiang, Yu.
2021.
Industrial Oriented Evaluation of Fuzzing Techniques. 2021 14th IEEE Conference on Software Testing, Verification and Validation (ICST). :306–317.
Fuzzing is a promising method for discovering vulnerabilities. Recently, various techniques have been developed to improve the efficiency of fuzzing, and impressive gains have been observed in evaluation results. However, evaluation is complex, as many factors affect the results, for example test suites, baselines, and metrics. Moreover, most experimental setups are lab-oriented, lacking industrial settings such as large code bases and parallel runs. The correlation between academic evaluation results and bug-finding ability in real industrial settings has not been sufficiently studied. In this paper, we test representative fuzzing techniques to reveal their efficiency in industrial settings. First, we apply typical fuzzers to the small projects of the LAVA-M suite, which is widely used in academia. We also apply the same fuzzers to large practical projects from Google's fuzzer-test-suite, which is rarely used in academic settings. Both experiments are performed in both single and parallel runs. By analyzing the results, we find that most optimizations that work well on the LAVA-M suite fail to achieve satisfying results on Google's fuzzer-test-suite (e.g., compared to AFL, QSYM detects 82x more synthesized bugs in LAVA-M but only detects 26% of real bugs in Google's fuzzer-test-suite), and the original AFL even outperforms most academic optimization variants in the parallel runs widely used in industry (e.g., AFL covers 13% more paths than AFLFast). We then summarize common pitfalls of these optimizations, analyze the corresponding root causes, and propose potential directions such as orchestration and synchronization to overcome the problems. For example, when running in parallel on the large practical projects, the proposed horizontal orchestration covers 36%-82% more paths and discovers 46%-150% more unique crashes or bugs than fuzzers such as AFL, FairFuzz, and QSYM.
Lanus, Erin, Freeman, Laura J., Richard Kuhn, D., Kacker, Raghu N..
2021.
Combinatorial Testing Metrics for Machine Learning. 2021 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW). :81–84.
This paper defines a set difference metric for comparing machine learning (ML) datasets and proposes that the difference between datasets be a function of combinatorial coverage. We illustrate its utility for evaluating and predicting the performance of ML models. Identifying and measuring differences between datasets is of significant value for ML problems, where model accuracy depends heavily on the degree to which training data are sufficiently representative of the data encountered in application. The method is illustrated for transfer learning without retraining: the problem of predicting the performance of a model trained on one dataset and applied to another.
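A combinatorial-coverage-based set difference can be sketched directly from the abstract's description. This is a hypothetical illustration with assumed discrete feature rows, not the authors' implementation: it counts the t-way (feature, value) combinations present in a source dataset but absent from a target dataset.

```python
from itertools import combinations

def t_way_combos(rows, t):
    """All size-t sets of (feature-index, value) pairs seen in rows."""
    combos = set()
    for row in rows:
        indexed = list(enumerate(row))
        for subset in combinations(indexed, t):
            combos.add(subset)
    return combos

def set_difference(source, target, t=2):
    """Fraction of source's t-way combinations not covered by target."""
    s, g = t_way_combos(source, t), t_way_combos(target, t)
    return len(s - g) / len(s) if s else 0.0

# Toy datasets of discrete feature rows.
train = [(0, 0, 1), (0, 1, 0), (1, 1, 1)]
test_set = [(0, 0, 1), (1, 1, 1)]
```

A large `set_difference(application_data, training_data)` flags many feature-value interactions the model never saw during training, which is the paper's proposed signal for predicting degraded performance under dataset shift.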