Systematization of Knowledge from Intrusion Detection Models - July 2017
Public Audience
Purpose: To highlight project progress. Information is generally presented at a higher level, accessible to the interested public. All information contained in the report (regions 1-3) is a Government Deliverable/CDRL.
PI(s): Huaiyu Dai, Andy Meneely
Researchers:
Xiaofan He, Yufan Huang, Richeng Jin, Nuthan Munaiah, Kevin Campusano Gonzalez
HARD PROBLEM(S) ADDRESSED
- Security Metrics and Models - The project aims to establish common criteria for evaluating and systematizing knowledge contributed by research on intrusion detection models.
- Resilient Architectures - Robust intrusion detection models serve to make large systems more resilient to attack.
- Scalability and Composability - Intrusion detection models deal with large data sets every day, so scale is always a significant concern.
- Humans - A key aspect of intrusion detection is interpreting the output and acting upon it, which inherently involves humans. Furthermore, intrusion detection models are ultimately simulations of human behavior.
PUBLICATIONS
Report papers written as a result of this research. If accepted by or submitted to a journal, which journal. If presented at a conference, which conference.
- Richeng Jin, Xiaofan He, Huaiyu Dai, Rudra Dutta, Peng Ning. 2017. Towards Privacy-Aware Collaborative Security: A Game-Theoretic Approach. IEEE Symposium on Privacy-Aware Computing (PAC).
- Xiaofan He, Mohammad M. Islam, Richeng Jin, Huaiyu Dai. 2017. Foresighted Deception in Dynamic Security Games. IEEE International Conference on Communications (ICC).
ACCOMPLISHMENT HIGHLIGHTS
- We have continued our study on collaborative security. In contrast to our previous study, this quarter we consider the scenario in which the entities' observation capabilities and misreport probabilities are unknown. In particular, the Expectation-Maximization (EM) algorithm is adopted to learn the probability distribution of the obfuscated observations. With this approach, less private information is disclosed because the entities are no longer required to share their misreport probabilities. Simulation results show that the proposed EM-based approach works well in the considered scenario. (A minimal illustrative sketch of the EM idea appears after this list.)
- We have also continued our study on privacy metrics. In particular, our current study adopts the concept of information leakage from quantitative information flow theory as a privacy metric. Based on this metric, the competition between the attacker and the collaborating entities is modeled as a game in which each party aims to acquire more of the opponent's secret information while minimizing its own information leakage. (An illustrative leakage computation is given below.)
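For illustration only, the following sketch shows one simplified way EM can learn from obfuscated reports: each shared observation is treated as a possibly flipped report of a hidden binary event (attack / no attack), and EM estimates the event prior and each entity's misreport probability from the reports alone, without the entities disclosing those probabilities. The model, variable names, and update rules here are our own simplified assumptions, not the exact formulation used in the study.

import numpy as np

def em_misreport(R, iters=100, tol=1e-6):
    # R: (n_events, n_entities) 0/1 matrix of shared, possibly misreported observations.
    # Latent z_i in {0,1} is the true event; entity j flips its report with unknown prob q_j.
    n, m = R.shape
    pi = 0.5                      # prior P(z = 1)
    q = np.full(m, 0.1)           # initial guess for misreport probabilities
    for _ in range(iters):
        # E-step: responsibility gamma_i = P(z_i = 1 | reports R_i)
        like1 = np.prod(np.where(R == 1, 1 - q, q), axis=1) * pi
        like0 = np.prod(np.where(R == 0, 1 - q, q), axis=1) * (1 - pi)
        gamma = like1 / (like1 + like0)
        # M-step: update the prior and the expected misreport rate of each entity
        pi_new = gamma.mean()
        q_new = (gamma[:, None] * (1 - R) + (1 - gamma)[:, None] * R).mean(axis=0)
        if abs(pi_new - pi) + np.abs(q_new - q).sum() < tol:
            pi, q = pi_new, q_new
            break
        pi, q = pi_new, q_new
    return pi, q, gamma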
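For concreteness, a standard leakage measure from quantitative information flow is the Shannon leakage L = H(S) - H(S | O) = I(S; O), i.e., the reduction in uncertainty about the secret S once the observable O is seen. The sketch below computes it from a joint distribution over secrets and observables; it is an illustrative example only, and the exact metric and game formulation in our study may differ.

import numpy as np

def shannon_leakage(P_joint):
    # Shannon leakage I(S; O) from a joint distribution P_joint[s, o].
    P_joint = np.asarray(P_joint, dtype=float)
    P_joint = P_joint / P_joint.sum()
    p_s = P_joint.sum(axis=1)          # marginal over secrets
    p_o = P_joint.sum(axis=0)          # marginal over observables
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = P_joint / np.outer(p_s, p_o)
        terms = np.where(P_joint > 0, P_joint * np.log2(ratio), 0.0)
    return terms.sum()

# Example: a secret bit observed through a binary symmetric channel with flip prob 0.1
P = np.array([[0.45, 0.05],
              [0.05, 0.45]])
print(shannon_leakage(P))   # about 0.531 bits of the secret leaked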