Biblio

Zhong, Zhenyu, Hu, Zhisheng, and Chen, Xiaowei. 2020. "Quantifying DNN Model Robustness to the Real-World Threats." 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), pp. 150–157.
DNN models suffer from adversarial example attacks, which lead to inconsistent prediction results. In contrast to gradient-based attacks, which assume white-box access to the model by the attacker, we focus on more realistic input perturbations from the real world and their actual impact on model robustness without any attacker being present. In this work, we propose a standardized framework to quantify robustness against real-world threats. It is composed of a set of safety properties associated with common violations, a group of metrics that measure the minimal perturbation causing a violation, and various criteria that reflect different aspects of model robustness. By reporting comparison results obtained through this framework for 13 pre-trained ImageNet classifiers, three state-of-the-art object detectors, and three cloud-based content moderators, we present the status quo of real-world model robustness. Beyond that, we provide robustness benchmarking datasets for the community.
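To make the "minimal perturbation that causes a violation" metric concrete, the sketch below estimates the smallest Gaussian-noise level that flips the top-1 prediction of a pre-trained ImageNet classifier. This is an illustrative assumption about how such a metric could be measured, not the paper's actual implementation; the function name `minimal_noise_to_flip`, the noise levels, and the use of torchvision's ResNet-50 are all hypothetical choices, and the torchvision `weights=` API (version 0.13+) is assumed.

```python
# Illustrative sketch (not the paper's framework): find the smallest tested
# Gaussian-noise level that changes a pre-trained classifier's top-1 prediction.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image


def minimal_noise_to_flip(image_path, sigmas=(0.01, 0.02, 0.05, 0.1, 0.2, 0.5)):
    # Pre-trained ImageNet classifier (assumes torchvision >= 0.13 weights API).
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)

    with torch.no_grad():
        clean_label = model(x).argmax(dim=1).item()
        # Sweep increasing perturbation strengths; report the first that
        # violates the "prediction is unchanged" safety property.
        for sigma in sigmas:
            noisy = x + sigma * torch.randn_like(x)
            if model(noisy).argmax(dim=1).item() != clean_label:
                return sigma
    return None  # prediction stayed stable for all tested noise levels
```

A full robustness benchmark in the paper's spirit would repeat this kind of measurement across many perturbation types (noise, blur, brightness, compression, etc.) and aggregate the minimal violating magnitudes over a dataset, rather than testing a single image and corruption.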