Biblio
Filters: Author is Kiya, Hitoshi
Ensemble of Key-Based Models: Defense Against Black-Box Adversarial Attacks. 2021 IEEE 10th Global Conference on Consumer Electronics (GCCE). :95–98.
2021. We propose a voting ensemble of models trained on block-wise transformed images with secret keys as a defense against black-box attacks. Although key-based adversarial defenses are effective against gradient-based (white-box) attacks, they cannot defend against gradient-free (black-box) attacks, which do not require knowledge of the secret keys. In the proposed ensemble, a number of models are trained on images transformed with different keys and block sizes, and a voting ensemble is then applied to the models. Experimental results show that the proposed defense achieves a clean accuracy of 95.56% and an attack success rate of less than 9% under attacks with a noise distance of 8/255 on the CIFAR-10 dataset.
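The voting step described in this abstract can be illustrated with a short sketch. The interface below (a predict() method per model, paired with that model's own key-based block-wise transform) is a hypothetical assumption, not code from the paper.

import numpy as np

def ensemble_predict(models, transforms, image):
    # Majority-vote prediction over models trained on differently transformed
    # images. Each model is paired with its own secret-key transform (different
    # keys and block sizes). The predict() interface returning a class index is
    # a hypothetical assumption for illustration.
    votes = []
    for model, transform in zip(models, transforms):
        x = transform(image)                              # apply that model's key-based transform
        votes.append(int(model.predict(x[np.newaxis, ...])[0]))
    return int(np.bincount(np.array(votes)).argmax())     # class receiving the most votes

Because each member of the ensemble sees the input through a different keyed transform, a single query-based perturbation is unlikely to fool the majority of models at once.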
Encryption Inspired Adversarial Defense For Visual Classification. 2020 IEEE International Conference on Image Processing (ICIP). :1681–1685.
2020. Conventional adversarial defenses reduce classification accuracy whether or not a model is under attack. Moreover, most image-processing-based defenses are defeated by the problem of obfuscated gradients. In this paper, we propose a new adversarial defense: a defensive transform applied to both training and test images, inspired by perceptual image encryption methods. The proposed method utilizes a block-wise pixel shuffling method with a secret key. The experiments are carried out on both adaptive and non-adaptive maximum-norm-bounded white-box attacks while accounting for obfuscated gradients. The results show that the proposed defense achieves high accuracy on clean images (91.55%) and on adversarial examples (89.66%) with a noise distance of 8/255 on the CIFAR-10 dataset. Thus, the proposed defense outperforms state-of-the-art adversarial defenses, including latent adversarial training, adversarial training, and thermometer encoding.
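A minimal sketch of the block-wise pixel shuffling idea described in this abstract follows. The block size, key handling, and array layout are illustrative assumptions, not the paper's exact specification.

import numpy as np

def blockwise_shuffle(image, block_size=4, key=2020):
    # Shuffle pixel positions within each block using a permutation seeded by
    # the secret key. Illustrative sketch only; the paper's exact transform may
    # differ. Expects an H x W x C array with H and W divisible by block_size.
    h, w, c = image.shape
    rng = np.random.RandomState(key)                      # secret key seeds the permutation
    perm = rng.permutation(block_size * block_size)       # one permutation reused for every block
    out = np.empty_like(image)
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            block = image[y:y + block_size, x:x + block_size, :]
            flat = block.reshape(-1, c)                   # flatten the block's pixels
            out[y:y + block_size, x:x + block_size, :] = flat[perm].reshape(block.shape)
    return out

The same keyed transform would be applied to both training and test images, so the classifier operates entirely in the shuffled domain, while an attacker who does not hold the key cannot align perturbations with that domain.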