Systematization of Knowledge from Intrusion Detection Models - April 2016

Public Audience
Purpose: To highlight project progress. Information is presented at a level accessible to the interested public. All information contained in the report (regions 1-3) is a Government Deliverable/CDRL.

PI(s): Huaiyu Dai, Andy Meneely
Researchers: Xiaofan He, Yufan Huang, Richeng Jin, Nuthan Munaiah, Kevin Campusano Gonzalez

HARD PROBLEM(S) ADDRESSED

  • Security Metrics and Models - The project aims to establish common criteria for evaluating and systematizing knowledge contributed by research on intrusion detection models.
  • Resilient Architectures - Robust intrusion detection models serve to make large systems more resilient to attack.
  • Scalability and Composability - Intrusion detection models deal with large data sets every day, so scale is always a significant concern.
  • Humans - A key aspect of intrusion detection is interpreting the output and acting upon it, which inherently involves humans. Furthermore, intrusion detection models are ultimately simulations of human behavior.

 

PUBLICATIONS
Report papers written as a result of this research. If accepted by or submitted to a journal, state which journal. If presented at a conference, state which conference.

N/A

 

ACCOMPLISHMENT HIGHLIGHTS

  • We completed the algorithm design for collaborative IDS configuration and tested it through extensive simulations. Simulation results indicate that the proposed scheme facilitates effective resource-sharing among IDSs, leading to a significant gain in detection performance; the intuition behind this gain is illustrated in the first sketch following these highlights. We also conducted theoretical analysis and quantified the conditions under which the collaborative scheme is guaranteed to outperform an autonomous IDS system. This work was submitted to IEEE GLOBECOM 2016 for publication.

  • We are continuing our generalization study to examine how consistently IDS research is validated. Our results so far show that IDS research papers with an empirical component rarely provide consistent validation evidence (e.g., they evaluate accuracy but not speed, or speed without reporting accuracy); the second sketch below shows the kind of consistency check this involves.
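
This report does not describe the collaborative configuration algorithm itself, so the following toy Monte Carlo sketch only illustrates the intuition behind resource-sharing among IDSs referenced in the first highlight. The number of IDSs, the signature-coverage model, and all parameters are illustrative assumptions, not the submitted scheme.

```python
import random

random.seed(42)

NUM_IDS = 4            # number of cooperating IDSs (illustrative assumption)
NUM_ATTACK_TYPES = 20  # distinct attack types in the toy model
NUM_EVENTS = 10_000    # simulated attack events

# Hypothetical setup: each IDS holds signatures for only a subset of attack
# types, so no single IDS can detect everything on its own.
coverage = [set(random.sample(range(NUM_ATTACK_TYPES), 8)) for _ in range(NUM_IDS)]

autonomous_hits = 0
collaborative_hits = 0

for _ in range(NUM_EVENTS):
    attack = random.randrange(NUM_ATTACK_TYPES)  # attack type arriving at IDS 0
    # Autonomous operation: IDS 0 relies solely on its own signatures.
    if attack in coverage[0]:
        autonomous_hits += 1
    # Collaborative operation: peers share their detection resources, so the
    # event is effectively checked against the union of all signature sets.
    if any(attack in cov for cov in coverage):
        collaborative_hits += 1

print(f"Autonomous detection rate:    {autonomous_hits / NUM_EVENTS:.2%}")
print(f"Collaborative detection rate: {collaborative_hits / NUM_EVENTS:.2%}")
```

In this simplified model the collaborative detection rate can never fall below the autonomous one, which mirrors, in spirit only, the kind of guaranteed-improvement condition quantified in the theoretical analysis.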
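
For the generalization study in the second highlight, the sketch below shows the kind of validation-consistency check described there: tallying which forms of validation evidence each surveyed paper reports. The paper identifiers and evidence categories are placeholders, not data from the study.

```python
# Placeholder survey records: which validation evidence each paper reports.
# Paper identifiers and evidence categories are hypothetical, not study data.
papers = {
    "paper_A": {"accuracy"},
    "paper_B": {"speed"},
    "paper_C": {"accuracy", "speed"},
}

REQUIRED_EVIDENCE = {"accuracy", "speed"}

for name, evidence in papers.items():
    missing = REQUIRED_EVIDENCE - evidence
    verdict = "consistent" if not missing else "missing " + ", ".join(sorted(missing))
    print(f"{name}: {verdict}")
```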