Title | Improving Robustness Verification of Neural Networks with General Activation Functions via Branching and Optimization |
Publication Type | Conference Paper |
Year of Publication | 2022 |
Authors | Luo, Zhengwu, Wang, Lina, Wang, Run, Yang, Kang, Ye, Aoshuang |
Conference Name | 2022 International Joint Conference on Neural Networks (IJCNN) |
Keywords | Artificial neural networks, Biological neural networks, compositionality, Metrics, Neurons, Optimization, Resiliency, Robustness, Scalability, scalable verification |
Abstract | Robustness verification of neural networks (NNs) is a challenging and significant problem that has drawn great attention in recent years. Existing research has shown that bound propagation is a scalable and effective method for robustness verification, and it can be parallelized on GPUs and TPUs. However, bound propagation methods naturally produce weak bounds due to the linear relaxation of neurons, which may cause verification to fail. Although tightening techniques for simple ReLU networks have been explored, they are not applicable to NNs with general activation functions such as Sigmoid and Tanh, and improving robustness verification on these NNs remains challenging. In this paper, we propose a Branch-and-Bound (BaB) style method to address this problem. The proposed BaB procedure improves the weak bound by splitting the input domains of neurons into sub-domains and solving the corresponding sub-problems. We propose a generic heuristic function that determines the priority of neuron splitting by scoring the relaxation and impact of each neuron. Moreover, we combine bound optimization with the BaB procedure to further improve the weak bound. Experimental results demonstrate that the proposed method achieves up to 35% improvement over the state-of-the-art CROWN method on Sigmoid and Tanh networks. |
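The abstract describes the approach only at a high level: split a neuron's pre-activation domain into sub-domains, re-derive a tighter linear relaxation on each sub-domain, and choose which neuron to split with a heuristic that scores relaxation looseness and impact. As a rough, hypothetical illustration of that loop (not the paper's actual CROWN relaxation, heuristic, or bound optimization), the sketch below applies a chord-slope tanh relaxation and a width-times-weight splitting score to a toy one-hidden-layer network; the network, all function names, and the heuristic are assumptions made for the example.

```python
# Minimal, hypothetical sketch of a BaB loop that tightens a lower bound on a
# tiny one-hidden-layer tanh network by splitting neuron pre-activation
# intervals. The toy network, chord-slope relaxation, and scoring heuristic
# are illustrative stand-ins, not the paper's actual method.
import heapq
import math

# Toy network f(x) = w2 . tanh(W1 @ x + b1), with input box x in [x_lo, x_hi].
W1 = [[1.0, -1.0], [0.8, 1.2]]
b1 = [0.2, -0.3]
w2 = [1.0, -1.5]
x_lo, x_hi = [-1.0, -1.0], [1.0, 1.0]

def pre_act_interval(i, z_ranges):
    """Interval of neuron i's pre-activation over the input box, intersected
    with any range imposed on it by earlier splits."""
    lo = b1[i] + sum(w * (x_lo[j] if w >= 0 else x_hi[j]) for j, w in enumerate(W1[i]))
    hi = b1[i] + sum(w * (x_hi[j] if w >= 0 else x_lo[j]) for j, w in enumerate(W1[i]))
    return max(lo, z_ranges[i][0]), min(hi, z_ranges[i][1])

def tanh_linear_bounds(lo, hi):
    """Sound linear bounds s*z + c_lo <= tanh(z) <= s*z + c_hi on [lo, hi]."""
    if hi - lo < 1e-9:
        return 0.0, math.tanh(lo), math.tanh(hi)
    s = (math.tanh(hi) - math.tanh(lo)) / (hi - lo)  # chord slope, in (0, 1)
    # Extrema of g(z) = tanh(z) - s*z lie at the endpoints or where
    # tanh'(z) = s, i.e. z = +/- atanh(sqrt(1 - s)).
    cand = [lo, hi]
    if 0.0 < s < 1.0:
        zs = math.atanh(math.sqrt(1.0 - s))
        cand += [z for z in (zs, -zs) if lo < z < hi]
    g = [math.tanh(z) - s * z for z in cand]
    return s, min(g), max(g)

def lower_bound(z_ranges):
    """Back-substituted lower bound on f over the input box using the
    per-neuron linear relaxation above (a crude stand-in for CROWN)."""
    lin = [0.0] * len(x_lo)  # coefficients on x after substituting z = W1 x + b1
    const = 0.0
    for i, w in enumerate(w2):
        lo, hi = pre_act_interval(i, z_ranges)
        s, c_lo, c_hi = tanh_linear_bounds(lo, hi)
        c = c_lo if w >= 0 else c_hi  # pick the sound side for this weight sign
        const += w * (s * b1[i] + c)
        for j in range(len(lin)):
            lin[j] += w * s * W1[i][j]
    # Minimize the resulting linear function of x over the input box.
    return const + sum(w * (x_lo[j] if w >= 0 else x_hi[j]) for j, w in enumerate(lin))

def split_scores(z_ranges):
    """Toy heuristic: interval width times |output weight|, a rough proxy for
    'relaxation looseness x impact on the output'."""
    return [abs(w) * (hi - lo) for w, (lo, hi) in
            zip(w2, (pre_act_interval(i, z_ranges) for i in range(len(w2))))]

def bab_lower_bound(max_splits=20):
    init = tuple((-math.inf, math.inf) for _ in w2)
    heap = [(lower_bound(init), init)]  # always refine the worst sub-domain
    for _ in range(max_splits):
        lb, z_ranges = heapq.heappop(heap)
        scores = split_scores(z_ranges)
        i = max(range(len(scores)), key=scores.__getitem__)
        lo, hi = pre_act_interval(i, z_ranges)
        for part in ((lo, 0.5 * (lo + hi)), (0.5 * (lo + hi), hi)):
            sub = z_ranges[:i] + (part,) + z_ranges[i + 1:]
            # A parent's bound stays valid on its children, so never regress.
            heapq.heappush(heap, (max(lb, lower_bound(sub)), sub))
    return min(b for b, _ in heap)  # global bound = worst remaining leaf bound

print("BaB lower bound on the toy network:", bab_lower_bound())
```

The priority queue always refines the sub-domain with the currently worst bound, reflecting the BaB intuition in the abstract that the global bound is the minimum over all leaf sub-problems; the relaxation for a split neuron is simply re-derived on the narrower interval, which is what tightens the bound.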
Notes | ISSN: 2161-4407 |
DOI | 10.1109/IJCNN55064.2022.9892214 |
Citation Key | luo_improving_2022 |