Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation

Title: Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Dai, Wei; Berleant, Daniel
Conference Name: 2021 IEEE International Conference on Big Data (Big Data)
Keywords: Benchmark Metrics, Benchmark testing, Big Data, codes, Corrupted images, Deep Learning, Imperfect images, Perturbation methods, pubcrawl, Resiliency, Robust Deep Learning, Scalability, Training, visualization, work factor metrics
Abstract: Deep learning (DL) classifiers are often unstable in that their predictions may change significantly when retested on perturbed or low-quality images. This paper adds to the fundamental body of work on the robustness of DL classifiers. We introduce a new two-dimensional benchmarking matrix for evaluating the robustness of DL classifiers, along with a four-quadrant statistical visualization tool based on minimum accuracy, maximum accuracy, mean accuracy, and coefficient of variation. To measure the robustness of DL classifiers, we create 69 comprehensive benchmarking image sets, including a clean set, sets with single-factor perturbations, and sets with two-factor perturbation conditions. After collecting experimental results, we report that training on two-factor perturbed images improves both the robustness and the accuracy of DL classifiers. The two-factor perturbations include (1) two digital perturbations (salt & pepper noise and Gaussian noise) applied in both sequences, and (2) one digital perturbation (salt & pepper noise) and a geometric perturbation (rotation) applied in both sequences. All source code, related image sets, and results are shared on GitHub at https://github.com/caperock/robustai to support future academic research and industry projects.
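The abstract describes two-factor perturbations (e.g., salt & pepper noise followed by Gaussian noise, and the reverse order) and four robustness statistics (minimum, maximum, mean accuracy, and coefficient of variation). A minimal NumPy sketch of both ideas is below; the function names, noise parameters, and placeholder image are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def salt_and_pepper(img, amount=0.05, rng=None):
    """Flip a random fraction of pixels to 0 (pepper) or 255 (salt)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    noisy = img.copy()
    mask = rng.random(img.shape[:2])
    noisy[mask < amount / 2] = 0
    noisy[mask > 1 - amount / 2] = 255
    return noisy

def gaussian_noise(img, sigma=10.0, rng=None):
    """Add zero-mean Gaussian noise, then clip back to the valid pixel range."""
    rng = rng if rng is not None else np.random.default_rng(0)
    noise = rng.normal(0.0, sigma, img.shape)
    return np.clip(img.astype(float) + noise, 0, 255).astype(np.uint8)

def robustness_stats(accuracies):
    """Four summary statistics used for the four-quadrant visualization."""
    a = np.asarray(accuracies, dtype=float)
    return {"min": a.min(), "max": a.max(),
            "mean": a.mean(), "cv": a.std() / a.mean()}

# Two-factor perturbation applied in both sequences, as the abstract describes.
img = np.full((32, 32), 128, dtype=np.uint8)  # placeholder grayscale image
sp_then_gauss = gaussian_noise(salt_and_pepper(img))
gauss_then_sp = salt_and_pepper(gaussian_noise(img))
```

Because the two noise operations do not commute, applying them in both orders yields distinct test sets, which is why the paper evaluates each ordering separately.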
DOI: 10.1109/BigData52589.2021.9671976
Citation Key: dai_benchmarking_2021