Bibliography
In unsecured communication settings, ascertaining the trustworthiness of received information, called authentication, is paramount. We consider keyless authentication over an arbitrarily varying channel, where channel states are chosen by a malicious adversary with access to noisy versions of transmitted sequences. We have previously shown that a channel condition termed U-overwritability is sufficient for zero authentication capacity over such a channel, and that with a deterministic encoder, a sufficiently clear-eyed adversary is essentially omniscient. In this paper, we show that even if the authentication capacity with a deterministic encoder and an essentially omniscient adversary is zero, allowing a stochastic encoder can yield a positive authentication capacity. Furthermore, in this case the authentication capacity with a stochastic encoder can equal the no-adversary capacity of the underlying channel. We illustrate this with a binary channel model, which provides insight into the more general case.
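As a notational sketch of the contrast claimed in this abstract (not the paper's exact statement), one can write W(y|x,s) for the channel under adversarial state s, C for the no-adversary capacity of the underlying channel, and C_auth^det and C_auth^stoch for the authentication capacities with deterministic and stochastic encoders; all of these symbols are assumptions introduced here for illustration.

% Notational sketch; the symbols W, s, C, and C_auth are illustrative
% assumptions, not definitions quoted from the paper.
% W(y|x,s): channel under adversary state s; C: no-adversary capacity.
\[
  C_{\mathrm{auth}}^{\mathrm{det}} \;=\; 0
  \qquad\text{whereas}\qquad
  C_{\mathrm{auth}}^{\mathrm{stoch}} \;=\; C \;>\; 0,
\]
% i.e., even when every deterministic encoder gives zero authentication
% capacity against an essentially omniscient adversary, randomizing the
% encoder can recover the full no-adversary capacity.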
Recently, the field of adversarial machine learning has drawn attention by showing that state-of-the-art deep neural networks are vulnerable to adversarial examples, created by adding small perturbations to the input image. A malicious adversary generates adversarial examples either by exploiting access to model information, such as gradients, to alter the input, or by attacking a substitute model and transferring the resulting examples to the victim model. In particular, one such attack algorithm, Robust Physical Perturbations (RP2), generates adversarial images of stop signs with black-and-white stickers, achieving high targeted misclassification rates against standard-architecture traffic sign classifiers. In this paper, we propose BlurNet, a defense against the RP2 attack. First, we motivate the defense with a frequency analysis of the network's first-layer feature maps on the LISA dataset, which shows that the RP2 algorithm introduces high-frequency noise into the input image. To remove this noise, we introduce a depthwise convolution layer of standard blur kernels after the first layer. We perform a black-box transfer attack to show that low-pass filtering the feature maps is more beneficial than filtering the input. We then present several regularization schemes that incorporate this low-pass filtering behavior into the network's training regime, and we evaluate them under white-box attacks. We conclude with an adaptive attack evaluation showing that, with total variation regularization, one of the proposed defenses, the attack success rate drops from 90% to 20%.
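The following is a minimal PyTorch sketch of the defense's core ingredient described in this abstract: a fixed, non-trainable depthwise convolution applying a standard blur kernel to the first-layer feature maps. The helper name make_blur_layer, the box-blur kernel, and the 64-channel first layer are illustrative assumptions, not the paper's exact configuration.

# Minimal sketch: fixed depthwise box-blur applied after the first conv layer.
# Kernel choice and channel counts are assumptions for illustration.
import torch
import torch.nn as nn

def make_blur_layer(channels: int, kernel_size: int = 3) -> nn.Conv2d:
    """Depthwise conv whose weights are a fixed, normalized box-blur kernel."""
    blur = nn.Conv2d(
        channels, channels, kernel_size,
        padding=kernel_size // 2,
        groups=channels,   # depthwise: one kernel per feature map
        bias=False,
    )
    kernel = torch.full((kernel_size, kernel_size), 1.0 / kernel_size**2)
    blur.weight.data.copy_(kernel.expand(channels, 1, -1, -1))
    blur.weight.requires_grad_(False)  # keep the filter fixed during training
    return blur

# Example: insert the blur layer directly after a network's first conv layer.
first_conv = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3)
model_head = nn.Sequential(first_conv, make_blur_layer(64), nn.ReLU())

x = torch.randn(1, 3, 32, 32)   # e.g., a small traffic-sign crop
print(model_head(x).shape)      # torch.Size([1, 64, 16, 16])

Filtering the feature maps rather than the raw input matches the abstract's black-box finding; the total variation regularizer quoted in the adaptive-attack result would instead enter as a penalty term in the training loss, in the same low-pass-encouraging spirit.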