TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks
Title | TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks |
Publication Type | Conference Paper |
Year of Publication | 2019 |
Authors | Khalid, F., Hanif, M. A., Rehman, S., Ahmed, R., Shafique, M. |
Conference Name | 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design (IOLTS) |
Date Published | July 2019 |
Publisher | IEEE |
ISBN Number | 978-1-7281-2490-2 |
Keywords | Adversarial Machine Learning, AI Poisoning, Automation, Autonomous vehicles, convolutional neural nets, Correlation, data manipulation attacks, data poisoning attacks, Deep Neural Network, deep neural networks, DNNs, feature extraction, generated attack images, German Traffic Sign Recognition Benchmarks dataset, Human Behavior, image classification, Image coding, image recognition, imperceptibility factor, imperceptible attack images, Imperceptible Attack Noise, Inference algorithms, learning (artificial intelligence), machine learning, ML Security, multilevel security system, object detection, Object recognition, Optimization, Optimization algorithms, perceptible noise, pre-trained DNNs, pubcrawl, resilience, Resiliency, Scalability, security, security of data, structural similarity analysis, traffic sign detection, Training, training data-unaware imperceptible security attacks, training dataset |
Abstract | Most data manipulation attacks on deep neural networks (DNNs) during the training stage introduce perceptible noise that can be countered by preprocessing during inference or identified during the validation phase. Therefore, data poisoning attacks during inference (e.g., adversarial attacks) are becoming more popular. However, many of them do not consider the imperceptibility factor in their optimization algorithms, and can be detected by correlation and structural similarity analysis, or noticed (e.g., by humans) in a multi-level security system. Moreover, the majority of inference attacks rely on some knowledge of the training dataset. In this paper, we propose a novel methodology that automatically generates imperceptible attack images by applying the back-propagation algorithm to pre-trained DNNs, without requiring any information about the training dataset (i.e., it is completely training data-unaware). We present a case study on traffic sign detection using VGGNet trained on the German Traffic Sign Recognition Benchmarks dataset in an autonomous driving use case. Our results demonstrate that the generated attack images successfully cause misclassification while remaining imperceptible in both "subjective" and "objective" quality tests. |
URL | https://ieeexplore.ieee.org/document/8854425 |
DOI | 10.1109/IOLTS.2019.8854425 |
Citation Key | khalid_trisec_2019 |
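The abstract's core idea — iterating gradient steps on the *input* of a pre-trained model to force misclassification while a structural-similarity check keeps the perturbation imperceptible — can be sketched in miniature. This is a hedged illustration only, not the paper's TrISec algorithm: the classifier here is a toy random linear-softmax stand-in (the paper uses VGGNet), and `ssim_global` is a simplified single-window SSIM rather than the windowed metric typically used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a "pre-trained" classifier: softmax over a fixed random
# linear layer. Hypothetical; the paper attacks a trained VGGNet instead.
W = rng.normal(size=(10, 64)) * 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_wrt_input(x, target):
    # Cross-entropy gradient w.r.t. the INPUT (not the weights) for a
    # linear-softmax model: dL/dx = W^T (p - onehot(target)).
    p = softmax(W @ x)
    p[target] -= 1.0
    return W.T @ p

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    # Simplified single-window SSIM used as the imperceptibility check.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

x = rng.random(64)        # "clean" input image (flattened 8x8), no training data needed
target = 3                # attacker-chosen wrong class
x_adv = x.copy()
for _ in range(50):       # small gradient steps toward the target class
    x_adv -= 0.01 * grad_wrt_input(x_adv, target)
    x_adv = np.clip(x_adv, 0.0, 1.0)
    # Stop early once the model is fooled AND the image still looks unchanged.
    if softmax(W @ x_adv).argmax() == target and ssim_global(x, x_adv) > 0.99:
        break

print("predicted class:", int(softmax(W @ x_adv).argmax()))
print("SSIM vs clean  :", round(float(ssim_global(x, x_adv)), 4))
```

With the tiny step size above, the perturbation stays far below visibility (SSIM near 1.0); whether the toy model is actually fooled depends on the random weights, which is exactly the trade-off the paper's optimization balances.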