L-GEM based robust learning against poisoning attack
Title | L-GEM based robust learning against poisoning attack |
Publication Type | Conference Paper |
Year of Publication | 2015 |
Authors | Zhang, F., Chan, P. P. K., Tang, T. Q. |
Conference Name | 2015 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR) |
ISBN Number | 978-1-4673-7224-4 |
Keywords | Accuracy, adversarial learning, AI Poisoning, classifier output, farthest-first flips attack, Human Behavior, L-GEM based robust learning, label flip poisoning attacks, learning (artificial intelligence), learning process, localized generalization error bound, localized generalization error model, Localized Generalization Error Model (L-GEM), nearest-first flips attack, Pattern recognition, perturbation, perturbation techniques, poisoning attack, pubcrawl, radial basis function networks, RBFNN, resampling, resilience, Resiliency, Robust Learning, Robustness, sampling methods, Scalability, Sensitivity, sensitivity analysis, Support vector machines, Training, wavelet analysis |
Abstract | Poisoning attacks, in which an adversary misleads the learning process by manipulating its training set, significantly affect the performance of classifiers in security applications. This paper proposes a robust learning method that reduces the influence of attack samples on learning. The sensitivity, defined as the fluctuation of the output under a small perturbation of the input, in the Localized Generalization Error Model (L-GEM) is measured for each training sample. The classifier's output on attack samples is likely to be sensitive and inaccurate, since these samples differ from the untainted samples. An importance score is assigned to each sample according to its localized generalization error bound, and the classifier is trained on a new training set obtained by resampling the samples according to their importance scores. An RBFNN is applied as the classifier in the experimental evaluation. The proposed model outperforms the traditional one under the well-known label flip poisoning attacks, including the nearest-first and farthest-first flips attacks. |
URL | https://ieeexplore.ieee.org/document/7295946/ |
DOI | 10.1109/ICWAPR.2015.7295946 |
Citation Key | zhang_l-gem_2015 |
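The abstract's core idea can be sketched in code: estimate each training sample's output sensitivity under small input perturbations, convert low sensitivity into a high importance score, and resample the training set by those scores before retraining. This is a minimal illustration of that resampling scheme, not the paper's actual L-GEM computation; the function names (`estimate_sensitivity`, `importance_resample`) and the inverse-sensitivity scoring rule are assumptions, and any classifier's prediction function (the paper uses an RBFNN) can be plugged in as `predict`.

```python
import numpy as np

def estimate_sensitivity(predict, X, eps=0.05, n_draws=20, seed=0):
    """Mean fluctuation of predict's output under small uniform input noise.

    This approximates the sensitivity term described in the abstract;
    the noise model and draw count are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    base = predict(X)
    fluct = np.zeros(len(X))
    for _ in range(n_draws):
        noise = rng.uniform(-eps, eps, size=X.shape)
        fluct += np.abs(predict(X + noise) - base)
    return fluct / n_draws

def importance_resample(X, y, sensitivity, seed=0):
    """Resample (X, y) with probability inversely related to sensitivity,
    so suspect (highly sensitive) samples are drawn less often."""
    rng = np.random.default_rng(seed)
    scores = 1.0 / (1.0 + sensitivity)   # low sensitivity -> high importance
    probs = scores / scores.sum()
    idx = rng.choice(len(X), size=len(X), replace=True, p=probs)
    return X[idx], y[idx]

# Toy usage with a hypothetical threshold classifier standing in for the RBFNN.
X = np.linspace(0.0, 1.0, 10).reshape(-1, 1)
y = (X[:, 0] > 0.5).astype(int)
predict = lambda Z: (Z[:, 0] > 0.5).astype(float)
s = estimate_sensitivity(predict, X, eps=0.1)
X_clean, y_clean = importance_resample(X, y, s)  # retrain the classifier on this set
```

In the paper's setting the score comes from the localized generalization error bound rather than this ad hoc `1/(1+s)` rule, but the pipeline shape is the same: score, resample, retrain.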