Biblio

Filters: Keyword is Adversarial robustness
2023-02-02
Oakley, Lisa, Oprea, Alina, Tripakis, Stavros.  2022.  Adversarial Robustness Verification and Attack Synthesis in Stochastic Systems. 2022 IEEE 35th Computer Security Foundations Symposium (CSF). :380–395.

Probabilistic model checking is a useful technique for specifying and verifying properties of stochastic systems, including randomized protocols and reinforcement learning models. However, these methods rely on the assumed structure and probabilities of certain system transitions. These assumptions may be incorrect, and may even be violated by an adversary who gains control of some system components. In this paper, we develop a formal framework for adversarial robustness in systems modeled as discrete-time Markov chains (DTMCs). We base our framework on existing methods for verifying probabilistic temporal logic properties and extend it to include deterministic, memoryless policies acting in Markov decision processes (MDPs). Our framework includes a flexible approach for specifying structure-preserving and non-structure-preserving adversarial models. We outline a class of threat models under which adversaries can perturb system transitions, constrained by an ε-ball around the original transition probabilities. We define three main DTMC adversarial robustness problems: adversarial robustness verification, maximal δ synthesis, and worst-case attack synthesis. We present two optimization-based solutions to these three problems, leveraging traditional and parametric probabilistic model checking techniques. We then evaluate our solutions on two stochastic protocols and a collection of Grid World case studies, which model an agent acting in an environment described as an MDP. We find that the parametric solution results in fast computation for small parameter spaces. In the case of less restrictive (stronger) adversaries, the number of parameters increases, and directly computing property satisfaction probabilities is more scalable. We demonstrate the usefulness of our definitions and solutions by comparing system outcomes over various properties, threat models, and case studies.
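
The ε-ball threat model and the worst-case attack synthesis problem can be illustrated with a small numerical sketch. The Python below uses a hypothetical four-state chain with illustrative ε and δ values (none taken from the paper) and brute-forces a single structure-preserving perturbation by grid search; the paper itself solves these problems with traditional and parametric probabilistic model checking rather than enumeration.

    import numpy as np

    # Hypothetical DTMC: 0 = start, 1 = operating, 2 = success, 3 = failure (2 and 3 absorbing).
    P = np.array([[0.0, 0.9, 0.05, 0.05],
                  [0.1, 0.6, 0.25, 0.05],
                  [0.0, 0.0, 1.0,  0.0 ],
                  [0.0, 0.0, 0.0,  1.0 ]])
    TRANSIENT = [0, 1]   # states that have not yet been absorbed
    TARGET = 3           # property: "eventually reach failure"
    EPS = 0.05           # adversary's per-transition perturbation budget
    DELTA = 0.10         # robustness tolerance on the satisfaction probability

    def reach_prob(P):
        # Standard reachability over the transient states: x = P_tt x + P[:, TARGET];
        # the absorbing non-target state contributes probability 0.
        P_tt = P[np.ix_(TRANSIENT, TRANSIENT)]
        b = P[TRANSIENT, TARGET]
        x = np.linalg.solve(np.eye(len(TRANSIENT)) - P_tt, b)
        return x[0]      # probability from the initial state 0

    nominal = reach_prob(P)

    # Structure-preserving attack: shift up to EPS of mass from P[1, 2] to P[1, 3],
    # keeping the row stochastic. Grid search stands in for parametric model checking.
    worst, worst_shift = nominal, 0.0
    for shift in np.linspace(0.0, EPS, 51):
        Q = P.copy()
        Q[1, 2] -= shift
        Q[1, 3] += shift
        p = reach_prob(Q)
        if p > worst:
            worst, worst_shift = p, shift

    print(f"nominal P(reach failure)   = {nominal:.4f}")
    print(f"worst case in the eps-ball = {worst:.4f} (shift {worst_shift:.3f})")
    print("robust" if worst - nominal <= DELTA else "robustness violated (delta exceeded)")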

2022-01-25
Sun, Hao, Xu, Yanjie, Kuang, Gangyao, Chen, Jin.  2021.  Adversarial Robustness Evaluation of Deep Convolutional Neural Network Based SAR ATR Algorithm. 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS. :5263–5266.
Robustness, both to accidental and to malevolent perturbations, is a crucial determinant of the successful deployment of deep convolutional neural network based SAR ATR systems in various security-sensitive applications. This paper performs a detailed adversarial robustness evaluation of deep convolutional neural network based SAR ATR models across two publicly available SAR target recognition datasets. For each model, seven different adversarial perturbations, ranging from gradient-based optimization to self-supervised feature distortion, are generated for each testing image. In addition to average adversarial recognition accuracy, feature attribution techniques are also adopted to analyze the feature diffusion effect of adversarial attacks, which promotes understanding of the vulnerability of deep learning models.
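
As a concrete illustration of the gradient-based end of that attack spectrum, the sketch below applies the Fast Gradient Sign Method to a placeholder convolutional classifier; the network, input size, and ε are stand-ins chosen for the example, not the SAR ATR models, datasets, or the seven perturbations evaluated in the paper.

    import torch
    import torch.nn as nn

    # Placeholder classifier standing in for a SAR ATR network (illustrative only).
    model = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
    model.eval()

    def fgsm(model, x, y, eps=0.03):
        # Fast Gradient Sign Method: a single signed-gradient step of size eps.
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()

    x = torch.rand(4, 1, 64, 64)            # stand-in for single-channel SAR chips
    y = torch.randint(0, 10, (4,))
    x_adv = fgsm(model, x, y)

    with torch.no_grad():
        clean_acc = (model(x).argmax(1) == y).float().mean().item()
        adv_acc = (model(x_adv).argmax(1) == y).float().mean().item()
    print(f"clean accuracy {clean_acc:.2f} vs adversarial accuracy {adv_acc:.2f}")
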
2021-11-08
Varshney, Kush R..  2020.  On Mismatched Detection and Safe, Trustworthy Machine Learning. 2020 54th Annual Conference on Information Sciences and Systems (CISS). :1–4.
Instilling trust in high-stakes applications of machine learning is becoming essential. Trust may be decomposed into four dimensions: basic accuracy, reliability, human interaction, and aligned purpose. The first two of these also constitute the properties of safe machine learning systems. The second dimension, reliability, is mainly concerned with being robust to epistemic uncertainty and model mismatch. It arises in the machine learning paradigms of distribution shift, data poisoning attacks, and algorithmic fairness. All of these problems can be abstractly modeled using the theory of mismatched hypothesis testing from statistical signal processing. By doing so, we can take advantage of performance characterizations in that literature to better understand the various machine learning issues.
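
The mismatched-detection abstraction can be made concrete with a small numerical sketch: a likelihood-ratio detector designed for two assumed Gaussian hypotheses, evaluated when the true alternative has drifted. The distributions and means below are illustrative choices, not examples from the paper.

    from scipy.stats import norm

    sigma = 1.0
    mu0, mu1_assumed = 0.0, 2.0     # hypotheses the detector is designed for
    mu1_true = 1.4                  # actual H1 mean after a distribution shift

    # Likelihood-ratio threshold for equal priors under the *assumed* model
    # (equal-variance Gaussians reduce to a midpoint threshold on the observation).
    tau = (mu0 + mu1_assumed) / 2.0

    def error_prob(mu1):
        p_false_alarm = 1.0 - norm.cdf(tau, loc=mu0, scale=sigma)
        p_miss = norm.cdf(tau, loc=mu1, scale=sigma)
        return 0.5 * (p_false_alarm + p_miss)   # equal priors

    print(f"matched error probability    : {error_prob(mu1_assumed):.4f}")
    print(f"mismatched error probability : {error_prob(mu1_true):.4f}")
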
2020-12-28
Raju, R. S., Lipasti, M..  2020.  BlurNet: Defense by Filtering the Feature Maps. 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :38–46.

Recently, the field of adversarial machine learning has been garnering attention by showing that state-of-the-art deep neural networks are vulnerable to adversarial examples, which stem from small perturbations added to the input image. Adversarial examples are generated by a malicious adversary either by obtaining access to the model parameters, such as gradient information, and altering the input directly, or by attacking a substitute model and transferring the resulting malicious examples to the victim model. Specifically, one of these attack algorithms, Robust Physical Perturbations (RP2), generates adversarial images of stop signs with black and white stickers to achieve high targeted misclassification rates against standard-architecture traffic sign classifiers. In this paper, we propose BlurNet, a defense against the RP2 attack. First, we motivate the defense with a frequency analysis of the first-layer feature maps of the network on the LISA dataset, which shows that high-frequency noise is introduced into the input image by the RP2 algorithm. To remove the high-frequency noise, we introduce a depthwise convolution layer of standard blur kernels after the first layer. We perform a black-box transfer attack to show that low-pass filtering the feature maps is more beneficial than filtering the input. We then present various regularization schemes to incorporate this low-pass filtering behavior into the training regime of the network and perform white-box attacks. We conclude with an adaptive attack evaluation to show that the success rate of the attack drops from 90% to 20% with total variation regularization, one of the proposed defenses.
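
A minimal sketch of the low-pass idea follows: a fixed depthwise blur convolution placed on the first-layer feature maps of a toy classifier. The uniform 3x3 kernel, layer sizes, and class count are illustrative assumptions, not the paper's exact BlurNet configuration or its learned regularization variants.

    import torch
    import torch.nn as nn

    class DepthwiseBlur(nn.Module):
        """Fixed depthwise low-pass filter applied channel-by-channel."""
        def __init__(self, channels, k=3):
            super().__init__()
            self.blur = nn.Conv2d(channels, channels, k, padding=k // 2,
                                  groups=channels, bias=False)
            self.blur.weight.data.fill_(1.0 / (k * k))   # uniform averaging kernel
            self.blur.weight.requires_grad = False       # fixed, not trained

        def forward(self, x):
            return self.blur(x)

    # Toy classifier: the blur filters the first layer's feature maps.
    net = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        DepthwiseBlur(16),                     # removes high-frequency feature noise
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 16))                     # illustrative number of sign classes

    x = torch.rand(2, 3, 32, 32)               # stand-in for traffic-sign crops
    print(net(x).shape)                        # torch.Size([2, 16])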

2020-04-17
Xie, Cihang, Wu, Yuxin, Maaten, Laurens van der, Yuille, Alan L., He, Kaiming.  2019.  Feature Denoising for Improving Adversarial Robustness. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :501–509.

Adversarial attacks on image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state of the art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. Our method was ranked first in the Competition on Adversarial Attacks and Defenses (CAAD) 2018: it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by 10%. Code is available at https://github.com/facebookresearch/ImageNet-Adversarial-Training.
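
As an illustration of such a denoising block, the sketch below wraps a softmax-normalized non-local operation over spatial positions with a 1x1 convolution and a residual connection; the scaling factor and wrapper details here are simplifying assumptions, and the exact block variants and their placement in the backbone follow the paper and the released code rather than this toy module.

    import torch
    import torch.nn as nn

    class NonLocalDenoise(nn.Module):
        """Sketch of a feature-denoising block: softmax-weighted non-local means
        over spatial positions, followed by a 1x1 conv and a residual connection."""
        def __init__(self, channels):
            super().__init__()
            self.proj = nn.Conv2d(channels, channels, 1)

        def forward(self, x):
            n, c, h, w = x.shape
            flat = x.view(n, c, h * w)                        # (N, C, HW)
            # Pairwise affinities between spatial positions (scaled dot product).
            attn = torch.softmax(flat.transpose(1, 2) @ flat / c ** 0.5, dim=-1)
            denoised = (flat @ attn.transpose(1, 2)).view(n, c, h, w)
            return x + self.proj(denoised)                    # residual connection

    x = torch.rand(2, 16, 8, 8)                               # dummy feature maps
    print(NonLocalDenoise(16)(x).shape)                       # torch.Size([2, 16, 8, 8])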