Biblio

Filters: Keyword is safety assessment
2020-12-11
Li, J., Liu, H., Wu, J., Zhu, J., Huifeng, Y., Rui, X..  2019.  Research on Nonlinear Frequency Hopping Communication Under Big Data. 2019 International Conference on Computer Network, Electronic and Automation (ICCNEA). :349–354.

To address the poor stability and low accuracy of current methods for processing communication data, this paper studies the informatization of nonlinear frequency hopping communication data under a big data security evaluation framework. A frequency hopping mediation module added to the framework processes the communication interference information discretely, and the data parameters of the nonlinear frequency hopping communication data are corrected and converted using a fast cluster analysis algorithm, completing the informatization of the data under the framework. Experiments show that the proposed approach effectively improves accuracy and stability.
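The abstract names a fast cluster analysis algorithm as the parameter-correction step but does not publish it. As a purely illustrative sketch, the Python snippet below shows how a generic clustering pass (ordinary k-means, an assumption, not the paper's method) could correct noisy hop-frequency measurements by snapping each sample to the centroid of its cluster. The function name, channel frequencies, and noise model are all hypothetical.

```python
# Hypothetical illustration: snap noisy frequency-hopping measurements
# back to an estimated channel set with k-means (NOT the paper's algorithm).
import numpy as np
from sklearn.cluster import KMeans

def correct_hop_frequencies(measured_hz: np.ndarray, n_channels: int) -> np.ndarray:
    """Cluster noisy hop-frequency samples and replace each sample with
    the centroid of its cluster, taken as the estimated true channel."""
    samples = measured_hz.reshape(-1, 1)
    km = KMeans(n_clusters=n_channels, n_init=10, random_state=0).fit(samples)
    centroids = km.cluster_centers_.ravel()
    return centroids[km.labels_]

# Example: four hop channels in the 2.4 GHz band with Gaussian measurement noise.
rng = np.random.default_rng(0)
true_channels = np.array([2.40e9, 2.42e9, 2.45e9, 2.48e9])
noisy = rng.choice(true_channels, size=200) + rng.normal(0, 1e6, size=200)
corrected = correct_hop_frequencies(noisy, n_channels=4)
print(np.unique(np.round(corrected, -6)))  # recovered channel set, rounded to MHz
```

With well-separated channels and noise much smaller than the channel spacing, the recovered centroids land on the true hop set; how the paper's algorithm handles overlapping or adversarial interference is not described in the abstract.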

2020-02-17
Papakonstantinou, Nikolaos, Linnosmaa, Joonas, Alanen, Jarmo, Bashir, Ahmed Z., O'Halloran, Bryan, Van Bossuyt, Douglas L..  2019.  Early Hybrid Safety and Security Risk Assessment Based on Interdisciplinary Dependency Models. 2019 Annual Reliability and Maintainability Symposium (RAMS). :1–7.
Safety and security of complex critical infrastructures are very important for economic, environmental and social reasons. The complexity of these systems makes it difficult to identify safety and security risks that emerge from interdisciplinary interactions and dependencies. Discovering safety and security design weaknesses late in the design process, or during system operation, can lead to increased costs, additional system complexity, delays, and potentially undesirable compromises in addressing those weaknesses.
2020-02-10
Chechik, Marsha.  2019.  Uncertain Requirements, Assurance and Machine Learning. 2019 IEEE 27th International Requirements Engineering Conference (RE). :2–3.
From financial services platforms to social networks to vehicle control, software has come to mediate many activities of daily life. Governing bodies and standards organizations have responded to this trend by creating regulations and standards to address issues such as safety, security and privacy. In this environment, the compliance of software development with standards and regulations has emerged as a key requirement. Compliance claims and arguments are often captured in assurance cases, with linked evidence of compliance. Evidence can come from test cases, verification proofs, human judgement, or a combination of these. That is, we try to build (safety-critical) systems carefully according to well-justified methods and articulate these justifications in an assurance case that is ultimately judged by a human. Yet software is deeply rooted in uncertainty, making pragmatic assurance more inductive than deductive: most complex open-world functionality is either not completely specifiable (due to uncertainty) or not cost-effective to specify, and deductive verification cannot happen without a specification. Inductive assurance, achieved by sampling or testing, is easier, but generalization from a finite set of examples cannot be formally justified. And of course, the recent popularity of constructing software via machine learning only worsens the problem: rather than being specified by predefined requirements, machine-learned components learn existing patterns from the available training data and make predictions for unseen data when deployed. On the one hand, this ability is extremely useful for hard-to-specify concepts, e.g., the definition of a pedestrian in a pedestrian detection component of a vehicle. On the other, safety assessment and assurance of such components become very challenging. In this talk, I focus on two specific approaches to arguing about the safety and security of software under uncertainty. The first is a framework for managing uncertainty in assurance cases (for "conventional" and "machine-learned" systems) by systematically identifying, assessing and addressing it. The second is recent work on supporting the development of requirements for machine-learned components in safety-critical domains.
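The claim that generalization from a finite set of passing tests cannot be formally justified can be made quantitative with a standard confidence-bound calculation; the sketch below is an illustration using textbook statistics, not material from the talk. With n independent tests and zero observed failures, the exact Clopper-Pearson upper bound on the true failure probability at confidence level 1 - alpha is 1 - alpha^(1/n).

```python
# Illustrative only (standard Clopper-Pearson bound, not from the talk):
# how little a finite set of passing tests actually certifies.
def failure_rate_upper_bound(n_tests: int, alpha: float = 0.05) -> float:
    """Exact upper confidence bound on the failure probability after
    n_tests independent tests with zero observed failures."""
    if n_tests <= 0:
        raise ValueError("need at least one test")
    # Solve (1 - p)**n_tests = alpha for p.
    return 1.0 - alpha ** (1.0 / n_tests)

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} passing tests -> failure rate may still be up to "
          f"{failure_rate_upper_bound(n):.2e} at 95% confidence")
```

Even a million passing tests only bound the per-demand failure rate near 3e-6, which is one reason purely inductive evidence is typically combined with deductive arguments and human judgement in an assurance case.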