Systematization of Knowledge from Intrusion Detection Models - January 2017

Public Audience
Purpose: To highlight project progress. Information is presented at a high level accessible to the interested public. All information contained in the report (regions 1-3) is a Government Deliverable/CDRL.

PI(s):  Huaiyu Dai, Andy Meneely
Researchers:

Xiaofan He, Yufan Huang, Richeng Jin, Nuthan Munaiah, Kevin Campusano Gonzalez

HARD PROBLEM(S) ADDRESSED

  • Security Metrics and Models - The project aims to establish common criteria for evaluating and systematizing knowledge contributed by research on intrusion detection models.
  • Resilient Architectures - Robust intrusion detection models serve to make large systems more resilient to attack.
  • Scalability and Composability - Intrusion detection models deal with large data sets every day, so scale is always a significant concern.
  • Humans - A key aspect of intrusion detection is interpreting the output and acting upon it, which inherently involves humans. Furthermore, intrusion detection models are ultimately simulations of human behavior.

 

PUBLICATIONS
Report papers written as a result of this research. If accepted by or submitted to a journal, which journal. If presented at a conference, which conference.

  • N/A

ACCOMPLISHMENT HIGHLIGHTS

  • We have continued our study of collaborative intrusion detection systems (CIDSs). In particular, we have developed a repeated two-layer single-leader multi-follower game to model the problem. Based on this model, we have derived the optimal attack and response strategies for the attacker and the IDSs, respectively. Through analysis, we have shown that sharing information among IDSs is always beneficial, no matter how limited the shared information is due to various factors, including privacy concerns. Moreover, we have been able to quantify the tradeoff between collaboration utility and privacy.
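To illustrate the single-leader multi-follower structure described above, the sketch below shows a toy Stackelberg game: the attacker (leader) commits to a strategy first, anticipating that each IDS (follower) best-responds to that commitment. The action sets and payoff functions here are hypothetical illustrations, not the report's actual model.

```python
# Toy single-leader multi-follower (Stackelberg) game.
# All actions and payoffs are hypothetical, for illustration only.

LEADER_ACTIONS = ["attack_A", "attack_B"]      # attacker targets A or B
IDS_ACTIONS = ["monitor_A", "monitor_B"]       # each IDS watches A or B

def ids_payoff(attack, monitor):
    # An IDS gains 1 for monitoring the attacked target, 0 otherwise.
    return 1 if monitor[-1] == attack[-1] else 0

def attacker_payoff(attack, monitors):
    # The attacker gains 1 per IDS that fails to cover the attacked target.
    return sum(1 for m in monitors if m[-1] != attack[-1])

def best_response(attack):
    # Each follower independently best-responds to the committed attack.
    return max(IDS_ACTIONS, key=lambda m: ids_payoff(attack, m))

def stackelberg_equilibrium(n_ids=2):
    # The leader commits to the action that maximizes its payoff,
    # anticipating the followers' best responses.
    best = max(LEADER_ACTIONS,
               key=lambda a: attacker_payoff(a, [best_response(a)] * n_ids))
    return best, [best_response(best)] * n_ids

attack, responses = stackelberg_equilibrium()
print(attack, responses)
```

In this deliberately simple example the followers always cover the attacked target, so the attacker's payoff is zero either way; the repeated two-layer game in the actual work is far richer, but the commit-then-best-respond structure is the same.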

  • We have completed our systematic literature review of intrusion detection evaluation metrics. We extracted 16 metrics that are commonly used to empirically evaluate intrusion detection systems. We found highly inconsistent usage of these metrics, including a lack of trade-off analysis within effectiveness metrics (e.g., precision and recall) and a lack of comparison between performance and effectiveness. This systematic literature review will help guide researchers and practitioners toward a standard of empirical evaluation that lends itself to repeatability, generalizability, and overall systematization of knowledge.
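As a concrete illustration of the kind of effectiveness metrics the review covers, the sketch below computes a few standard ones from confusion-matrix counts. The function name and the example counts are hypothetical; the metric definitions themselves are the standard ones.

```python
# Illustrative sketch: common IDS effectiveness metrics computed from
# confusion-matrix counts (tp = true alerts, fp = false alarms,
# tn = correctly ignored benign events, fn = missed intrusions).
# The counts below are hypothetical.

def ids_metrics(tp, fp, tn, fn):
    precision = tp / (tp + fp)   # fraction of alerts that are real intrusions
    recall = tp / (tp + fn)      # fraction of intrusions that raise an alert
    fpr = fp / (fp + tn)         # false-alarm rate over benign traffic
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"precision": precision, "recall": recall, "fpr": fpr, "f1": f1}

# Hypothetical evaluation: 80 true alerts, 20 false alarms,
# 900 benign events correctly ignored, 10 missed intrusions.
m = ids_metrics(tp=80, fp=20, tn=900, fn=10)
print(m)
```

The precision/recall pair is exactly the kind of trade-off the review found often reported in isolation: raising a detector's alert threshold typically improves precision while lowering recall, so quoting one without the other obscures the operating point.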