Biblio

Keyword: computer vision tasks
2021-03-09
Cui, W., Li, X., Huang, J., Wang, W., Wang, S., Chen, J.  2020.  Substitute Model Generation for Black-Box Adversarial Attack Based on Knowledge Distillation. 2020 IEEE International Conference on Image Processing (ICIP). :648–652.
Although deep convolutional neural networks (CNNs) perform well in many computer vision tasks, their classification mechanism is highly vulnerable to the perturbations introduced by adversarial attacks. In this paper, we propose a new algorithm that generates a substitute for a black-box CNN model by using knowledge distillation. The proposed algorithm distills multiple CNN teacher models into a compact student model, which serves as a substitute for the black-box CNN model under attack. Black-box adversarial samples can then be generated on this substitute model with various white-box attack methods. In our experiments on ResNet18 and DenseNet121, training the substitute model with knowledge distillation boosts the attack success rate (ASR) by 20%.
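As a rough illustration of the distillation step described in this abstract (a sketch, not the authors' code), the snippet below averages the softened predictions of multiple teacher networks and trains a compact student to match them. The temperature, the choice of MobileNetV2 as the student, and the random batch standing in for the transfer set are all assumptions.

```python
# Hedged sketch: distilling multiple "teacher" CNNs into one compact student,
# which can then serve as a substitute model for white-box attack generation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

TEMPERATURE = 4.0  # assumed softening temperature

teachers = [models.resnet18(num_classes=10), models.densenet121(num_classes=10)]
for t in teachers:
    t.eval()

student = models.mobilenet_v2(num_classes=10)  # compact substitute (assumption)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def distill_step(x):
    # Average the teachers' softened probabilities as the target distribution.
    with torch.no_grad():
        soft = torch.stack(
            [F.softmax(t(x) / TEMPERATURE, dim=1) for t in teachers]
        ).mean(0)
    log_p = F.log_softmax(student(x) / TEMPERATURE, dim=1)
    # Standard distillation loss, rescaled by T^2 as in Hinton et al.
    loss = F.kl_div(log_p, soft, reduction="batchmean") * TEMPERATURE ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy batch standing in for the transfer set the paper would query.
print(distill_step(torch.randn(4, 3, 224, 224)))
```

Once trained, the student is fully white-box, so standard gradient-based attacks can be run against it and the resulting samples transferred to the black-box target.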
2020-10-12
Marrone, Stefano, Sansone, Carlo.  2019.  An Adversarial Perturbation Approach Against CNN-based Soft Biometrics Detection. 2019 International Joint Conference on Neural Networks (IJCNN). :1–8.
The use of biometric-based authentication systems has spread across everyday consumer electronics. Over the years, researchers' interest has shifted from hard biometrics (such as fingerprints, voice, and keystroke dynamics) to soft biometrics (such as age, ethnicity, and gender), mainly using the latter to improve the effectiveness of authentication systems. While domain experts constantly propose new approaches, in recent years Deep Learning has risen to prominence in many computer vision tasks and has become the state of the art for several biometric applications. However, since the automatic processing of data rich in sensitive information can expose users to privacy threats arising from its unfair use (e.g., inferring gender or ethnicity), researchers have started to develop defensive strategies with a view to more secure and private AI. The aim of this work is to exploit Adversarial Perturbation, namely approaches able to mislead state-of-the-art CNNs by injecting a suitably small perturbation into the input image, to protect subjects against unwanted soft-biometrics-based identification by automatic means. In particular, since ethnicity is one of the most critical soft biometrics, as a case study we focus on generating adversarial stickers that, once printed, can hide a subject's ethnicity in a real-world scenario.
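The printable-sticker generation itself is not reproduced here, but the underlying idea of a gradient-based adversarial perturbation can be sketched in a few lines. The following FGSM-style example is a standard technique, not the paper's exact method, and uses an assumed ResNet18 as a stand-in soft-biometric classifier.

```python
# Hedged sketch: a minimal FGSM-style perturbation against a CNN classifier,
# illustrating the family of input-space attacks the paper builds on.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(num_classes=4)  # stand-in soft-biometric classifier (assumption)
model.eval()

def fgsm(x, label, eps=0.03):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the classifier's loss.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

face = torch.rand(1, 3, 224, 224)   # placeholder face image
pred = model(face).argmax(1)        # class the model currently predicts
adv = fgsm(face, pred)
print(model(adv).argmax(1))         # ideally no longer `pred`
```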
2020-06-19
Wang, Si, Liu, Wenye, Chang, Chip-Hong.  2019.  Detecting Adversarial Examples for Deep Neural Networks via Layer Directed Discriminative Noise Injection. 2019 Asian Hardware Oriented Security and Trust Symposium (AsianHOST). :1–6.
Deep learning is a popular and powerful machine learning solution to computer vision tasks. The most criticized vulnerability of deep learning is its poor tolerance of adversarial images, obtained by deliberately adding imperceptibly small perturbations to clean inputs. Such inputs can delude a classifier into making wrong decisions. Previous defensive techniques have mostly focused on refining the model or transforming the input; they have either been demonstrated only on small datasets or shown limited success. Furthermore, they are rarely scrutinized from the hardware perspective, even though Artificial Intelligence (AI) on a chip is the roadmap for embedded intelligence everywhere. In this paper we propose a new discriminative noise injection strategy that adaptively selects a few dominant layers and progressively discriminates adversarial from benign inputs. This is made possible by injecting different amounts of noise into the weights of individual layers in the model and evaluating the difference in label change rate between adversarial and natural images. The approach is evaluated on the ImageNet dataset with 8-bit truncated models of state-of-the-art DNN architectures. The results show a detection rate of up to 88.00% with a false positive rate of only about 5% for MobileNet. Both the detection rate and the false positive rate improve substantially on existing advanced defenses against the most practical noninvasive universal perturbation attack on deep-learning-based AI chips.
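A minimal sketch of the weight-noise idea follows, under our own assumptions (layer selection by module name, noise scaled to the mean weight magnitude, a fixed trial count): inject noise into a few late layers and measure how often the predicted label flips, since adversarial inputs tend to sit closer to decision boundaries and flip more readily.

```python
# Hedged sketch: estimate a label change rate under weight noise as an
# adversarial-input indicator. Layer choice and thresholds are assumptions.
import copy
import torch
from torchvision import models

model = models.mobilenet_v2(num_classes=10)
model.eval()

def label_change_rate(x, sigma=0.05, trials=10):
    clean_label = model(x).argmax(1).item()
    flips = 0
    for _ in range(trials):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            # Inject noise only into a late block (assumed "dominant" layer).
            for name, p in noisy.named_parameters():
                if name.startswith("features.17"):
                    p.add_(sigma * p.abs().mean() * torch.randn_like(p))
        flips += int(noisy(x).argmax(1).item() != clean_label)
    return flips / trials

x = torch.rand(1, 3, 224, 224)  # placeholder input
# A rate above some calibrated threshold would mark x as adversarial.
print(label_change_rate(x))
```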