An Empirical Study of High-Impact Factors for Machine Learning-Based Vulnerability Detection

Title: An Empirical Study of High-Impact Factors for Machine Learning-Based Vulnerability Detection
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Zheng, Wei; Gao, Jialiang; Wu, Xiaoxue; Xun, Yuxing; Liu, Guoliang; Chen, Xiang
Conference Name: 2020 IEEE 2nd International Workshop on Intelligent Bug Fixing (IBF)
Keywords: Comparative Study, compositionality, Deep Learning, feature extraction, Human Behavior, machine learning, machine learning algorithms, Metrics, pubcrawl, Resiliency, Software, Tools, Training, vulnerability detection
Abstract: Vulnerability detection is an important topic in software engineering. To improve the effectiveness and efficiency of vulnerability detection, many traditional machine learning-based and deep learning-based vulnerability detection methods have been proposed. However, the impact of different factors on vulnerability detection is unknown. For example, classification models and vectorization methods can directly affect the detection results, and code replacement can affect the features used for vulnerability detection. We conduct a comparative study to evaluate the impact of different classification algorithms, vectorization methods, and user-defined variable and function name replacement. In this paper, we collected three different vulnerability code datasets. These datasets correspond to different types of vulnerabilities and have different proportions of source code. In addition, we extract and analyze the features of the vulnerability code datasets to explain some of the experimental results. Our findings can be summarized as follows: (i) deep learning outperforms traditional machine learning, and BLSTM achieves the best performance; (ii) CountVectorizer can improve the performance of traditional machine learning; (iii) different vulnerability types and different code sources generate different features. We use the Random Forest algorithm to generate the features of the vulnerability code datasets; these features include system-related functions, syntax keywords, and user-defined names; (iv) datasets without user-defined variable and function name replacement achieve better vulnerability detection results.
DOI: 10.1109/IBF50092.2020.9034888
Citation Key: zheng_empirical_2020
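
To illustrate one of the factors the abstract evaluates, the sketch below shows a minimal, hypothetical version of "user-defined variable and function name replacement": user identifiers are mapped to generic tokens (VAR1, FUN1, ...) while language keywords and known library calls are preserved. The keyword and function lists here are assumptions for illustration, not the authors' actual preprocessing implementation.

```python
import re

# Hypothetical keyword/API allowlists (assumed for this sketch, not from the paper).
C_KEYWORDS = {"int", "char", "if", "else", "return", "for", "while", "void", "sizeof"}
KNOWN_FUNCS = {"strcpy", "strlen", "malloc", "free", "printf", "memcpy"}

def replace_user_names(code: str) -> str:
    """Replace user-defined identifiers with generic VARn/FUNn tokens."""
    mapping = {}

    def substitute(match):
        name = match.group(0)
        # Keep language keywords and known library functions unchanged.
        if name in C_KEYWORDS or name in KNOWN_FUNCS:
            return name
        if name not in mapping:
            # Treat an identifier followed by '(' as a function name.
            is_call = code[match.end():match.end() + 1] == "("
            prefix = "FUN" if is_call else "VAR"
            count = sum(1 for v in mapping.values() if v.startswith(prefix))
            mapping[name] = f"{prefix}{count + 1}"
        return mapping[name]

    return re.sub(r"[A-Za-z_]\w*", substitute, code)
```

For example, `replace_user_names("char buf[10]; strcpy(buf, src);")` yields `"char VAR1[10]; strcpy(VAR1, VAR2);"`, keeping the keyword `char` and the library call `strcpy` intact. The study's finding (iv) suggests that skipping this normalization step, and thus keeping user-defined names as features, can actually improve detection results.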