Biblio

Filters: Author is Wang, Run
Luo, Zhengwu, Wang, Lina, Wang, Run, Yang, Kang, Ye, Aoshuang.  2022.  Improving Robustness Verification of Neural Networks with General Activation Functions via Branching and Optimization. 2022 International Joint Conference on Neural Networks (IJCNN). pp. 1–8.
Robustness verification of neural networks (NNs) is a challenging and significant problem that has drawn great attention in recent years. Existing research has shown that bound propagation is a scalable and effective method for robustness verification, and it can be parallelized on GPUs and TPUs. However, bound propagation methods naturally produce weak bounds due to the linear relaxations applied to neurons, which may cause verification to fail. Although tightening techniques for simple ReLU networks have been explored, they are not applicable to NNs with general activation functions such as Sigmoid and Tanh, and improving robustness verification on these NNs remains challenging. In this paper, we propose a Branch-and-Bound (BaB) style method to address this problem. The proposed BaB procedure improves the weak bound by splitting the input domains of neurons into sub-domains and solving the corresponding sub-problems. We propose a generic heuristic function that determines the priority of neuron splitting by scoring the relaxation and impact of neurons. Moreover, we combine bound optimization with the BaB procedure to further improve the weak bound. Experimental results demonstrate that the proposed method gains up to 35% improvement compared to the state-of-the-art CROWN method on Sigmoid and Tanh networks.
ISSN: 2161-4407
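
The abstract describes the BaB procedure only at a high level. As a rough illustration, below is a minimal, hypothetical Python sketch of a generic branch-and-bound verification loop over neuron splits: a toy bound function stands in for linear-relaxation bound propagation (e.g., CROWN-style bounds), and a simple width-based score stands in for the paper's relaxation-and-impact heuristic and bound optimization. None of this is the authors' actual implementation.

```python
# Hypothetical sketch of a Branch-and-Bound (BaB) verification loop:
# repeatedly pick the sub-problem with the worst lower bound, split the
# interval of one neuron, and re-bound the resulting sub-problems.
# bound_subproblem() and split_score() are toy stand-ins, not the
# paper's relaxations, heuristic, or bound optimization.
import heapq
import itertools


def bound_subproblem(domain):
    # Toy lower bound on the verification margin: a fixed "exact" margin
    # minus a looseness term that shrinks as neuron intervals are split,
    # mimicking how relaxations tighten on smaller sub-domains.
    exact_margin = 0.3
    looseness = 0.25 * sum(hi - lo for lo, hi in domain.values())
    return exact_margin - looseness


def split_score(domain, neuron):
    # Toy priority heuristic: wider pre-activation intervals are assumed
    # to produce looser relaxations, so they are split first.
    lo, hi = domain[neuron]
    return hi - lo


def branch_and_bound(init_domain, target=0.0, max_splits=100):
    counter = itertools.count()  # tie-breaker so the heap never compares dicts
    heap = [(bound_subproblem(init_domain), next(counter), init_domain)]
    for _ in range(max_splits):
        worst_bound, _, domain = heapq.heappop(heap)
        if worst_bound > target:
            # The worst remaining sub-problem is already verified,
            # so every sub-problem is verified.
            return True, worst_bound
        # Split the neuron with the (heuristically) loosest relaxation.
        neuron = max(domain, key=lambda n: split_score(domain, n))
        lo, hi = domain[neuron]
        mid = (lo + hi) / 2.0
        for sub_interval in ((lo, mid), (mid, hi)):
            child = {**domain, neuron: sub_interval}
            heapq.heappush(heap, (bound_subproblem(child), next(counter), child))
    return False, heap[0][0]  # undecided within the splitting budget


# Example: pre-activation intervals of two hypothetical neurons.
verified, bound = branch_and_bound({"n1": (-0.5, 1.0), "n2": (0.2, 0.8)})
print(verified, bound)
```

In this sketch the property counts as verified once the worst lower bound in the queue exceeds the target margin; the paper's method instead tightens bounds through its relaxation-aware splitting heuristic combined with bound optimization.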