Biblio

Filters: Keyword is multisource data
2020-11-23
Wu, K., Gao, X., Liu, Y.  2018.  Web server security evaluation method based on multi-source data. 2018 International Conference on Cloud Computing, Big Data and Blockchain (ICCBB). :1–6.
Traditional web security assessments rely on a single data source, and evaluations based on different sources yield different results. Based on multi-source data, this paper uses the Analytic Hierarchy Process to construct an evaluation model, calculates the weight of each level of indicators in the web security evaluation model, analyzes and processes the data, computes the host security threat assessment values at each level, and visualizes the evaluation results using ECharts+WebGL.
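The weighting step in this kind of model is the standard Analytic Hierarchy Process computation: build a pairwise comparison matrix over the indicators at one level of the hierarchy, take its principal eigenvector as the weight vector, and check consistency. The sketch below shows that step in Python/NumPy; the comparison values and indicator grouping are illustrative assumptions, not the matrices or hierarchy used in the paper.

```python
# Minimal AHP weight calculation sketch: pairwise comparison matrix ->
# principal eigenvector -> consistency ratio. Values are illustrative only.
import numpy as np

# Saaty's random consistency index for matrix sizes 1..9.
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(comparison: np.ndarray):
    """Return (weights, consistency_ratio) for a pairwise comparison matrix."""
    n = comparison.shape[0]
    eigvals, eigvecs = np.linalg.eig(comparison)
    k = int(np.argmax(eigvals.real))          # index of principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                           # normalized indicator weights
    lambda_max = eigvals[k].real
    ci = (lambda_max - n) / (n - 1) if n > 2 else 0.0
    cr = ci / RANDOM_INDEX[n] if RANDOM_INDEX.get(n, 0.0) else 0.0
    return w, cr

# Hypothetical pairwise comparisons among three host-level indicators
# (e.g. vulnerability scan data, traffic logs, configuration audit results).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, cr = ahp_weights(A)
print("indicator weights:", weights.round(3), "consistency ratio:", round(cr, 3))
```

A consistency ratio below roughly 0.1 is usually taken to mean the pairwise judgments are acceptable; the resulting weights are then combined level by level to aggregate the host threat assessment values.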
2019-01-21
Ayoade, G., Chandra, S., Khan, L., Hamlen, K., Thuraisingham, B.  2018.  Automated Threat Report Classification over Multi-Source Data. 2018 IEEE 4th International Conference on Collaboration and Internet Computing (CIC). :236–245.

With the increase in targeted attacks such as advanced persistent threats (APTs), enterprise system defenders require comprehensive frameworks that allow them to collaborate and evaluate their defense systems against such attacks. MITRE has developed a framework which includes a database of kill-chains, tactics, techniques, and procedures that attackers employ to perform these attacks. In this work, we leverage natural language processing techniques to extract attacker actions from threat report documents generated by different organizations and automatically classify them into standardized tactics and techniques, while providing relevant mitigation advisories for each attack. A naïve way to achieve this is to train a machine learning model that predicts labels associating each report with the relevant categories. In practice, however, sufficient labeled data for model training is not always readily available, so training and test data come from different sources, resulting in bias; a naïve model typically underperforms in such a situation. We address this major challenge by incorporating an importance weighting scheme, called bias correction, that efficiently utilizes the available labeled data given the threat reports whose categories are to be automatically predicted. We empirically evaluated our approach on 18,257 real-world threat reports generated between 2000 and 2018 by various computer security organizations and demonstrated its superiority over an existing approach.
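The bias correction the abstract describes is an importance weighting scheme for the mismatch between labeled training reports and the unlabeled target reports: labeled examples are re-weighted by how representative they are of the target distribution before the tactic/technique classifier is fit. The sketch below shows one common way to realize such a scheme, using a TF-IDF representation and a logistic-regression domain classifier to estimate the weights; the feature extraction, model choices, and clipping thresholds here are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch of importance-weighted (bias-corrected) threat-report classification
# under covariate shift. A domain classifier estimates how likely each labeled
# source report is under the unlabeled target distribution, and those
# estimates become sample weights for the tactic/technique classifier.
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def bias_corrected_classifier(train_texts, train_labels, target_texts):
    """Fit a label classifier on source reports, re-weighted toward the target reports."""
    vec = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
    X_train = vec.fit_transform(train_texts)
    X_target = vec.transform(target_texts)

    # Domain classifier distinguishes source (0) from target (1) reports.
    X_dom = vstack([X_train, X_target])
    y_dom = np.r_[np.zeros(X_train.shape[0]), np.ones(X_target.shape[0])]
    dom = LogisticRegression(max_iter=1000).fit(X_dom, y_dom)

    # Importance weight w(x) ~ p_target(x) / p_source(x), estimated from the
    # domain classifier's posterior and clipped to avoid extreme weights.
    p_target = dom.predict_proba(X_train)[:, 1]
    weights = np.clip(p_target / np.clip(1.0 - p_target, 1e-6, None), 0.1, 10.0)

    # Weighted classifier over standardized tactic/technique labels.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, train_labels, sample_weight=weights)
    return vec, clf
```

In this setup the target reports never need labels: they only shape the sample weights, which is what lets the available labeled data be used efficiently when the test reports come from different organizations than the training set.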