Title | Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation |
Publication Type | Conference Paper |
Year of Publication | 2021 |
Authors | Dai, Wei, Berleant, Daniel |
Conference Name | 2021 IEEE International Conference on Big Data (Big Data) |
Keywords | Benchmark Metrics, Benchmark testing, Big Data, codes, Corrupted images, Deep Learning, Imperfect images, Perturbation methods, pubcrawl, Resiliency, Robust Deep Learning, Scalability, Training, visualization, work factor metrics |
Abstract | Deep learning (DL) classifiers are often unstable in that their predictions may change significantly when retested on perturbed or low-quality images. This paper adds to the fundamental body of work on the robustness of DL classifiers. We introduce a new two-dimensional benchmarking matrix to evaluate the robustness of DL classifiers, and we also introduce a four-quadrant statistical visualization tool, covering minimum accuracy, maximum accuracy, mean accuracy, and coefficient of variation, for benchmarking the robustness of DL classifiers. To measure the robustness of DL classifiers, we create 69 comprehensive benchmarking image sets, including a clean set, sets with single-factor perturbations, and sets with two-factor perturbation conditions. After collecting experimental results, we first report that using two-factor perturbed images improves both the robustness and the accuracy of DL classifiers. The two-factor perturbations include (1) two digital perturbations (salt & pepper noise and Gaussian noise) applied in both orders, and (2) one digital perturbation (salt & pepper noise) and a geometric perturbation (rotation) applied in both orders. All source code, related image sets, and results are shared on GitHub at https://github.com/caperock/robustai to support future academic research and industry projects. |
DOI | 10.1109/BigData52589.2021.9671976 |
Citation Key | dai_benchmarking_2021 |
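
The two-factor digital perturbation described in the abstract (salt & pepper noise and Gaussian noise, applied in both orders) can be sketched as follows. This is a minimal illustration with NumPy, not the paper's actual code; the function names, noise amounts, and sigma values are illustrative assumptions, and the authors' implementation is in the linked GitHub repository.

```python
import numpy as np

def add_salt_pepper(img, amount=0.05, rng=None):
    """Flip a random fraction `amount` of pixels to 0 (pepper) or 255 (salt).
    Noise level is an illustrative assumption, not a value from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    out = img.copy()
    mask = rng.random(img.shape) < amount
    out[mask] = rng.choice(np.array([0, 255], dtype=np.uint8), size=int(mask.sum()))
    return out

def add_gaussian(img, sigma=10.0, rng=None):
    """Add zero-mean Gaussian noise with standard deviation `sigma`,
    then clip back to the valid 8-bit range."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Two-factor perturbation applied in both orders, as the paper describes:
rng = np.random.default_rng(42)
img = np.full((32, 32), 128, dtype=np.uint8)  # placeholder grayscale image
sp_then_gauss = add_gaussian(add_salt_pepper(img, rng=rng), rng=rng)
gauss_then_sp = add_salt_pepper(add_gaussian(img, rng=rng), rng=rng)
```

Because salt & pepper noise clamps pixels to extremes while Gaussian noise shifts all pixels, the two application orders generally produce different perturbed images, which is why the paper evaluates both sequences as distinct benchmark sets.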