
Title: Error Prevalence in NIDS datasets: A Case Study on CIC-IDS-2017 and CSE-CIC-IDS-2018
Publication Type: Conference Paper
Year of Publication: 2022
Authors: Liu, Lisa, Engelen, Gints, Lynar, Timothy, Essam, Daryl, Joosen, Wouter
Conference Name: 2022 IEEE Conference on Communications and Network Security (CNS)
Date Published: October
Keywords: Benchmark testing, CIC-IDS-2017, Complexity theory, composability, CSE-CIC-IDS-2018, datasets, Documentation, IDS, Labeling, network intrusion, network intrusion detection, Network security, pubcrawl, resilience, Resiliency, telecommunication traffic
Abstract: Benchmark datasets are heavily depended upon by the research community to validate theoretical findings and track progress in the state of the art. NIDS dataset creation presents numerous challenges on account of the volume, heterogeneity, and complexity of network traffic, making the process labor-intensive and thus prone to error. This paper provides a critical review of CIC-IDS-2017 and CSE-CIC-IDS-2018, datasets that have seen extensive use in the NIDS literature and are currently considered primary benchmarking datasets for NIDS. We report a large number of previously undocumented errors throughout the dataset creation lifecycle, including in attack orchestration, feature generation, documentation, and labeling. These errors destabilize the results and challenge the findings of numerous publications that have relied on the datasets as benchmarks. We demonstrate the implications of these errors through several experiments. We provide comprehensive documentation summarizing the discovery of these issues, as well as a fully recreated dataset with labeling logic that has been reverse-engineered, corrected, and made publicly available for the first time. The findings serve to remind the research community of common pitfalls in dataset creation processes, and of the need to be vigilant when adopting new datasets. Lastly, we strongly recommend the release of labeling logic for any published dataset, to ensure full transparency.
DOI: 10.1109/CNS56114.2022.9947235
Citation Key: liu_error_2022