Biblio

Filters: Keyword is performance comparison
2022-02-22
Zhou, Tianyang.  2021.  Performance comparison and optimization of mainstream NIDS systems in offline mode based on parallel processing technology. 2021 2nd International Conference on Computing and Data Science (CDS). :136–140.
For network intrusion detection systems (NIDS), improving the performance of the analysis process has always been a primary goal. An important way to improve performance is to use parallel processing to make full use of multi-core CPU resources. In this paper, by splitting pcap data, the NIDS software Snort3 is enabled to process pcap packets in parallel. On this basis, the paper compares the performance of Snort2, Suricata, and Snort3 across different numbers of CPU cores when processing pcap files of different sizes. In addition, a parallel unpacking algorithm is proposed to further improve the parallel processing performance of Snort3.
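
The abstract does not give the details of the splitting scheme or the proposed parallel unpacking algorithm, so the following is only a minimal sketch of the general idea: split one pcap into chunks and run one Snort 3 instance per chunk in parallel. It assumes scapy is installed and that a Snort 3 binary ("snort") with a config file ("snort.lua") is available; the command-line flags used here are assumptions and may differ per installation.

    # Illustrative sketch: split a pcap into chunks and analyze them in parallel.
    # Note: a real splitter would keep packets of the same flow in one chunk so
    # that stateful detection still works; this naive slice ignores flow affinity.
    import subprocess
    from concurrent.futures import ProcessPoolExecutor
    from scapy.utils import rdpcap, wrpcap


    def split_pcap(pcap_path, num_chunks):
        """Split the packets of one pcap file into num_chunks smaller pcap files."""
        packets = rdpcap(pcap_path)                 # load all packets into memory
        chunk_size = max(1, len(packets) // num_chunks)
        chunk_paths = []
        for i in range(num_chunks):
            start = i * chunk_size
            end = len(packets) if i == num_chunks - 1 else (i + 1) * chunk_size
            if start >= len(packets):
                break
            path = f"{pcap_path}.chunk{i}.pcap"
            wrpcap(path, packets[start:end])
            chunk_paths.append(path)
        return chunk_paths


    def analyze_chunk(chunk_path):
        """Run one Snort 3 instance on a single chunk (flags are assumptions)."""
        return subprocess.run(
            ["snort", "-c", "snort.lua", "-r", chunk_path, "-q"],
            capture_output=True, text=True,
        ).returncode


    if __name__ == "__main__":
        chunks = split_pcap("traffic.pcap", num_chunks=4)
        # One worker process per chunk, mirroring one Snort instance per CPU core.
        with ProcessPoolExecutor(max_workers=4) as pool:
            results = list(pool.map(analyze_chunk, chunks))
        print("exit codes:", results)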
2017-08-02
Menninghaus, Mathias, Pulvermüller, Elke.  2016.  Towards Using Code Coverage Metrics for Performance Comparison on the Implementation Level. Proceedings of the 7th ACM/SPEC on International Conference on Performance Engineering. :101–104.

The development process for new algorithms or data structures often begins with the analysis of benchmark results to identify the drawbacks of existing implementations, and it ends with a comparison of old and new implementations using one or more well-established benchmarks. However relevant, reproducible, fair, verifiable, and usable those benchmarks may be, they have certain drawbacks. On the one hand, a new implementation may be biased to provide good results for a specific benchmark. On the other hand, benchmarks are very general and often fail to identify the worst and best cases of a specific implementation. In this paper we present a new approach for comparing algorithms and data structures at the implementation level using code coverage. Our approach uses model checking and multi-objective evolutionary algorithms to create test cases with high code coverage. It then executes each of the given implementations with each of the test cases in order to calculate a cross coverage. From this it calculates a combined coverage and a weighted performance, in which implementations that are not fully covered by the test cases of the other implementations are penalized. These metrics can be used to compare the performance of several implementations at a much deeper level than traditional benchmarks, and they incorporate worst, best, and average cases in equal measure. We demonstrate the approach on two example sets of algorithms and outline the next research steps required in this context, along with the greatest risks and challenges.
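
The abstract describes the cross-coverage idea but not its exact formulas, so the sketch below is only one rough, illustrative reading of it: a cross-coverage matrix feeding a coverage-weighted performance score. The dummy values and the weighting formula are assumptions for illustration, not the metrics defined in the paper.

    # cov[i][j] = code coverage reached in implementation i by the test cases
    # that were generated (e.g. via model checking / evolutionary search) for
    # implementation j. Dummy numbers for two sorting implementations.
    cov = {
        "quicksort": {"quicksort": 1.00, "mergesort": 0.80},
        "mergesort": {"quicksort": 0.95, "mergesort": 1.00},
    }
    runtime = {"quicksort": 1.2, "mergesort": 1.5}  # seconds over all test cases


    def combined_coverage(impl):
        """Average coverage of `impl` across every implementation's test suite."""
        row = cov[impl]
        return sum(row.values()) / len(row)


    def weighted_performance(impl):
        """One possible weighting: inflate the runtime of an implementation whose
        code is not fully exercised by the other suites, so incomplete coverage
        is penalized rather than rewarded."""
        c = combined_coverage(impl)
        return runtime[impl] / c if c > 0 else float("inf")


    for impl in runtime:
        print(impl, combined_coverage(impl), weighted_performance(impl))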