Biblio

Filters: Keyword is scenarios
2022-06-08
Guo, Jiansheng, Qi, Liang, Suo, Jiao.  2021.  Research on Data Classification of Intelligent Connected Vehicles Based on Scenarios. 2021 International Conference on E-Commerce and E-Management (ICECEM). :153–158.
The intelligent connected vehicle industry has entered a period of opportunity: industry data is accumulating rapidly, and industry standards to regulate big data management and application are urgently needed. As the basis of data security, data classification has received unprecedented attention. By reviewing the state of data classification research and development across industries, this article re-examines the framework of industry data classification from the perspectives of information security and data assetization, seeking a balance between data security and data value. Combining the characteristics of the connected vehicle industry, the article then re-examines the data characteristics of the intelligent connected vehicle industry from these two perspectives and proposes a scenario-based hierarchical classification framework. The framework includes a complete classification process, a model, and quantifiable parameters, providing a solution and theoretical grounding for building an automatic big data classification system for the intelligent connected vehicle industry and for safe, open data applications.
2018-02-15
Hibshi, H., Breaux, T. D..  2017.  Reinforcing Security Requirements with Multifactor Quality Measurement. 2017 IEEE 25th International Requirements Engineering Conference (RE). :144–153.

Choosing how to write natural language scenarios is challenging, because stakeholders may over-generalize their descriptions or overlook or be unaware of alternate scenarios. In security, for example, this can result in weak security constraints that are too general, or missing constraints. Another challenge is that analysts are unclear on where to stop generating new scenarios. In this paper, we introduce the Multifactor Quality Method (MQM) to help requirements analysts to empirically collect system constraints in scenarios based on elicited expert preferences. The method combines quantitative statistical analysis to measure system quality with qualitative coding to extract new requirements. The method is bootstrapped with minimal analyst expertise in the domain affected by the quality area, and then guides an analyst toward selecting expert-recommended requirements to monotonically increase system quality. We report the results of applying the method to security. These include 550 requirements elicited from 69 security experts during a bootstrapping stage, and a subsequent evaluation of these results in a verification stage with 45 security experts to measure the overall improvement of the new requirements. Security experts in our studies have an average of 10 years of experience. Our results show that using our method, we detect an increase in the security quality ratings collected in the verification stage. Finally, we discuss how our proposed method helps to improve security requirements elicitation, analysis, and measurement.

2016-12-08
Hibshi, Hanan, Breaux, Travis, Wagner, Christian.  2016.  Improving Security Requirements Adequacy: An Interval Type-2 Fuzzy Logic Security Assessment System. 2016 IEEE Symposium Series on Computational Intelligence.

Organizations rely on security experts to improve the security of their systems. These professionals use background knowledge and experience to align known threats and vulnerabilities before selecting mitigation options. The substantial depth of expertise in any one area (e.g., databases, networks, operating systems) precludes the possibility that an expert would have complete knowledge about all threats and vulnerabilities. To begin addressing this problem of distributed knowledge, we investigate the challenge of developing a security requirements rule base that mimics human expert reasoning to enable new decision-support systems. In this paper, we show how to collect relevant information from cyber security experts to enable the generation of: (1) interval type-2 fuzzy sets that capture intra- and inter-expert uncertainty around vulnerability levels; and (2) fuzzy logic rules underpinning the decision-making process within the requirements analysis. The proposed method relies on comparative ratings of security requirements in the context of concrete vignettes, providing a novel, interdisciplinary approach to knowledge generation for fuzzy logic systems. The proposed approach is tested by evaluating 52 scenarios with 13 experts to compare their assessments to those of the fuzzy logic decision support system. The initial results show that the system provides reliable assessments to the security analysts, in particular, generating more conservative assessments in 19% of the test scenarios compared to the experts’ ratings.
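To make the notion of an interval type-2 fuzzy set concrete, the sketch below shows how one might represent intra- and inter-expert uncertainty as a pair of upper and lower membership functions and derive an interval firing strength for a rule. This is a minimal illustration only, not the authors' implementation; the membership parameters, the "high vulnerability" label, and the 0–10 rating scale are hypothetical assumptions for the example.

```python
# Illustrative sketch: interval type-2 (IT2) fuzzy membership and rule firing.
# All parameters below are hypothetical, not taken from the paper.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2_membership(x, upper, lower):
    """Membership of x in an IT2 set as an interval [lower MF, upper MF]."""
    lo = tri(x, *lower)
    hi = tri(x, *upper)
    return (min(lo, hi), max(lo, hi))

# Hypothetical IT2 set for a "high vulnerability" rating on a 0-10 scale.
# The gap between the upper and lower MFs (the footprint of uncertainty)
# captures disagreement between experts.
HIGH_UPPER = (4.0, 8.0, 10.0)   # wider support: more permissive experts
HIGH_LOWER = (6.0, 8.0, 9.5)    # narrower support: stricter experts

def rule_firing(ratings):
    """Interval firing strength of a rule whose antecedents are all
    'high vulnerability', using the min t-norm on each interval bound."""
    los, his = zip(*(it2_membership(r, HIGH_UPPER, HIGH_LOWER) for r in ratings))
    return (min(los), min(his))

lo, hi = rule_firing([7.5, 8.5])
print(f"firing strength interval: [{lo:.2f}, {hi:.2f}]")
```

The interval output (rather than a single number) is what lets a decision-support system of this kind report not just an assessment but the spread of expert opinion behind it; a later type-reduction step would collapse the interval when a crisp rating is needed.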