Effectiveness of Random Deep Feature Selection for Securing Image Manipulation Detectors Against Adversarial Examples
| Field | Value |
| --- | --- |
| Title | Effectiveness of Random Deep Feature Selection for Securing Image Manipulation Detectors Against Adversarial Examples |
| Publication Type | Conference Paper |
| Year of Publication | 2020 |
| Authors | Barni, M., Nowroozi, E., Tondi, B., Zhang, B. |
| Conference Name | ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) |
| Date Published | May 2020 |
| Publisher | IEEE |
| ISBN Number | 978-1-5090-6631-5 |
| Keywords | adaptive filtering, Adversarial Machine Learning, adversarial multimedia forensics, CNN image manipulation detector, deep learning features, deep learning for forensics, feature extraction, feature randomization, feature selection, fully connected neural network, image classification, image manipulation detection, image manipulation detection tasks, image manipulation detectors, learning (artificial intelligence), linear SVM, Metrics, pubcrawl, random deep feature selection, random feature selection approach, randomization-based defences, resilience, Resiliency, Scalability, secure classification, security of data, Support vector machines |
| Abstract | We investigate whether the random feature selection approach proposed in [1] to improve the robustness of forensic detectors to targeted attacks can be extended to detectors based on deep learning features. In particular, we study the transferability of adversarial examples targeting an original CNN image manipulation detector to other detectors (a fully connected neural network and a linear SVM) that rely on a random subset of the features extracted from the flatten layer of the original network. The results, obtained by considering three image manipulation detection tasks (resizing, median filtering and adaptive histogram equalization), two original network architectures and three classes of attacks, show that feature randomization helps to hinder attack transferability, even if, in some cases, simply changing the architecture of the detector, or even retraining it, is enough to prevent the transferability of the attacks. |
| URL | https://ieeexplore.ieee.org/document/9053318 |
| DOI | 10.1109/ICASSP40776.2020.9053318 |
| Citation Key | barni_effectiveness_2020 |
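To make the defence described in the abstract concrete, the following is a minimal, self-contained sketch of the random deep feature selection idea: features that would come from a CNN's flatten layer are subsampled at random, and a simple linear detector is trained on the reduced representation. The feature matrix, the keep ratio, and the toy nearest-centroid classifier are all illustrative assumptions, not the authors' actual code or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for deep features from a CNN flatten layer (sizes are assumptions).
n_samples, n_features = 200, 1024
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)
# Inject a weak class-dependent signal on a subset of features so the
# toy detector has something to learn.
X[y == 1, :64] += 0.5

def random_feature_subset(n_features, keep_ratio, rng):
    """Pick a random subset of feature indices (the randomization-based defence).

    An attacker targeting the full feature set does not know which subset
    the secondary detector uses, which hinders attack transferability.
    """
    k = max(1, int(keep_ratio * n_features))
    return rng.choice(n_features, size=k, replace=False)

idx = random_feature_subset(n_features, keep_ratio=0.1, rng=rng)
X_sub = X[:, idx]  # reduced representation seen by the secondary detector

# Toy linear detector: nearest class centroid in the reduced feature space
# (the paper instead trains a fully connected network or a linear SVM).
mu0 = X_sub[y == 0].mean(axis=0)
mu1 = X_sub[y == 1].mean(axis=0)
pred = (np.linalg.norm(X_sub - mu1, axis=1)
        < np.linalg.norm(X_sub - mu0, axis=1)).astype(int)
accuracy = (pred == y).mean()
```

In a real pipeline, `X` would be the flatten-layer activations of the trained CNN, and a fresh random subset would be drawn per deployed detector so that the attacker cannot target the selected features directly.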