Biblio

Filters: Keyword is assurance case
2023-02-17
Cobos, Luis-Pedro, Miao, Tianlei, Sowka, Kacper, Madzudzo, Garikayi, Ruddle, Alastair R., El Amam, Ehab.  2022.  Application of an Automotive Assurance Case Approach to Autonomous Marine Vessel Security. 2022 International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME). :1–9.
The increase of autonomy in autonomous surface vehicle development brings with it modified and new risks and potential hazards; this, in turn, introduces the need for processes and methods that ensure systems are acceptable for their intended use with respect to dependability and safety concerns. One approach to evaluating software requirements for claims of safety is to employ an assurance case. Much like a legal case, the assurance case lays out an argument and supporting evidence to provide assurance on the software requirements. This paper analyses safety and security requirements relating to autonomous vessels, along with regulations in the automotive and marine industries, before proposing a generic cybersecurity and safety assurance case that adopts the graphical approach of Goal Structuring Notation (GSN).
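
One way to picture what such a GSN-based assurance case looks like is as a tree of goals, strategies, and solutions (evidence). The following is a minimal illustrative sketch in Python, not taken from the paper; the node class, identifiers, and example claims about an autonomous vessel are assumptions made purely for illustration.

    # Minimal GSN-style goal structure: goals are decomposed via strategies
    # into sub-goals, which are ultimately supported by solutions (evidence).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        id: str
        text: str
        children: List["Node"] = field(default_factory=list)

    # Hypothetical top-level claim for an autonomous vessel, decomposed the
    # way a GSN diagram would be: Goal -> Strategy -> sub-Goals -> Solutions.
    case = Node("G1", "Goal: The autonomous vessel is acceptably safe and secure", [
        Node("S1", "Strategy: Argue over identified safety and cybersecurity hazards", [
            Node("G2", "Goal: Remote-access interfaces resist tampering", [
                Node("Sn1", "Solution: Penetration test report"),
            ]),
            Node("G3", "Goal: Navigation software failures are detected and mitigated", [
                Node("Sn2", "Solution: FMEA results and sea-trial monitoring logs"),
            ]),
        ]),
    ])

    def print_case(node: Node, depth: int = 0) -> None:
        """Print the goal structure as an indented outline."""
        print("  " * depth + f"[{node.id}] {node.text}")
        for child in node.children:
            print_case(child, depth + 1)

    print_case(case)
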
2021-11-08
Martin, Robert Alan.  2020.  Assurance for CyberPhysical Systems: Addressing Supply Chain Challenges to Trustworthy Software-Enabled Things. 2020 IEEE Systems Security Symposium (SSS). :1–5.
Software is playing a pivotal role in most enterprises, whether they realize it or not, and with the proliferation of the Industrial Internet of Things (IoT) and other cyber-physical systems across our society and critical infrastructure, together with our collective love affair with automation, optimization, and "smart" devices, the role of these types of systems is only going to increase. This talk addresses the myriad issues that underlie unsafe, insecure, and unreliable software and provides the insights of the Industrial Internet Consortium and other government and industry efforts on how to conquer them and pave the way to a marketplace of trustworthy software-enabled connected things. As the experience of several sectors has shown, the dependence on connected software needs to be met with a strong understanding of the risks to the overall trustworthiness of the software-based capabilities that we, our enterprises, and our world utilize. In many of these new connected systems, issues of safety, reliability, and resilience rival or dominate concerns for security and privacy, the long-time focus of many in the IT world. Without a scalable and efficient method for managing these risks, so that our enterprises can continue to benefit from the advancements that power our military, commercial industries, cities, and homes to new levels of efficiency, versatility, and cost-effectiveness, we face the potential for harm, death, and destruction. In such a marketplace, creating, exchanging, and integrating components that are trustworthy, as well as entering into value-chain relationships with trustworthy partners and service suppliers, will be common if we can provide a method for explicitly defining what is meant by the word trustworthy. The approach being pursued by these groups, applying software assurance to these systems and their supply chains by leveraging structured assurance cases (the focus of this paper), Software Bills of Materials, and secure development practices applied to the evolving Agile and DevSecOps methodologies, is to explicitly identify the detailed requirements "about what we need to know about something for it to be worthy of our trust" and to convey that basis of trust to others in a way that can scale, is consistent across different workflows, is flexible to differing sets of hazards and environments, and is applicable to all sectors, domains, and industries.
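
To make the supply-chain element concrete, here is a small Python sketch, not drawn from the talk, of how evidence about the components listed in a Software Bill of Materials might be tied back to trustworthiness claims; the component names, record fields, and example evidence are assumptions invented for this illustration.

    # Illustrative only: link SBOM component records to trustworthiness claims
    # so that missing or failed evidence surfaces as an open supply-chain risk.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Component:
        name: str
        version: str
        supplier: str

    @dataclass
    class Claim:
        text: str
        component: Component
        evidence: List[str]   # e.g. scan reports, code-review records
        satisfied: bool

    # Hypothetical SBOM fragment for a connected device.
    sbom = [
        Component("rtos-kernel", "4.2.1", "VendorA"),
        Component("tls-stack", "1.3.0", "VendorB"),
    ]

    claims = [
        Claim("Component is free of known critical CVEs", sbom[0],
              ["scan-report-2020-03.json"], satisfied=True),
        Claim("Component is built from reviewed sources", sbom[1],
              [], satisfied=False),
    ]

    # Any unsatisfied claim is an explicit, reviewable gap in the basis of trust.
    for claim in claims:
        if not claim.satisfied:
            print(f"Unresolved: {claim.component.name} {claim.component.version} -> {claim.text}")
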
2020-02-10
Chechik, Marsha.  2019.  Uncertain Requirements, Assurance and Machine Learning. 2019 IEEE 27th International Requirements Engineering Conference (RE). :2–3.
From financial services platforms to social networks to vehicle control, software has come to mediate many activities of daily life. Governing bodies and standards organizations have responded to this trend by creating regulations and standards to address issues such as safety, security and privacy. In this environment, the compliance of software development with standards and regulations has emerged as a key requirement. Compliance claims and arguments are often captured in assurance cases, with linked evidence of compliance. Evidence can come from test cases, verification proofs, human judgement, or a combination of these. That is, we try to build (safety-critical) systems carefully according to well-justified methods and articulate these justifications in an assurance case that is ultimately judged by a human. Yet software is deeply rooted in uncertainty, making pragmatic assurance more inductive than deductive: most complex open-world functionality is either not completely specifiable (due to uncertainty) or not cost-effective to specify, and deductive verification cannot happen without specification. Inductive assurance, achieved by sampling or testing, is easier, but generalization from a finite set of examples cannot be formally justified. And of course, the recent popularity of constructing software via machine learning only worsens the problem: rather than being specified by predefined requirements, machine-learned components learn existing patterns from the available training data and make predictions for unseen data when deployed. On the surface, this ability is extremely useful for hard-to-specify concepts, e.g., the definition of a pedestrian in a pedestrian detection component of a vehicle. On the other hand, safety assessment and assurance of such components becomes very challenging. In this talk, I focus on two specific approaches to arguing about the safety and security of software under uncertainty. The first is a framework for managing uncertainty in assurance cases (for "conventional" and "machine-learned" systems) by systematically identifying, assessing and addressing it. The second is recent work on supporting the development of requirements for machine-learned components in safety-critical domains.
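
A very simplified way to picture "managing uncertainty in an assurance case" is to attach confidence estimates to evidence and propagate them up the argument. The Python sketch below is not from the talk: the claims, the confidence values, and the naive product rule for combining them are all assumptions chosen only to illustrate the idea.

    # Naive illustration: propagate evidence confidence up an argument tree.
    # A claim supported by sub-claims is only as convincing as their combination;
    # a simple product rule is used here, which real frameworks refine considerably.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Claim:
        text: str
        confidence: float = 1.0                       # leaf confidence in [0, 1]
        subclaims: List["Claim"] = field(default_factory=list)

        def combined_confidence(self) -> float:
            if not self.subclaims:
                return self.confidence
            value = 1.0
            for sub in self.subclaims:
                value *= sub.combined_confidence()
            return value

    pedestrian_detection = Claim("Pedestrian detector is acceptably safe", subclaims=[
        Claim("Training data covers the expected operating conditions", 0.8),
        Claim("Test-set performance meets the agreed target", 0.95),
        Claim("Runtime monitors catch out-of-distribution inputs", 0.7),
    ])

    print(f"Combined confidence: {pedestrian_detection.combined_confidence():.2f}")
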
2019-08-12
Diskin, Zinovy, Maibaum, Tom, Wassyng, Alan, Wynn-Williams, Stephen, Lawford, Mark.  2018.  Assurance via Model Transformations and Their Hierarchical Refinement. Proceedings of the 21st ACM/IEEE International Conference on Model Driven Engineering Languages and Systems. :426–436.
Assurance is a demonstration that a complex system (such as a car or a communication network) possesses an important property, such as safety or security, with a high level of confidence. In contrast to currently dominant approaches to building assurance cases, which are focused on goal structuring and/or logical inference, we propose considering assurance as a model transformation (MT) enterprise: saying that a system possesses an assured property amounts to saying that a particular assurance view of the system, comprising the assurance data, satisfies acceptance criteria posed as assurance constraints. While the MT realizing this view is very complex, we show that it can be decomposed into elementary MTs via a hierarchy of refinement steps. The transformations at the bottom level are ordinary MTs that can be executed for data specifying the system, thus providing the assurance data to be checked against the assurance constraints. In this way, assurance amounts to traversing the hierarchy from top to bottom and assuring the correctness of each MT in the path. Our approach has a precise mathematical foundation (rooted in process algebra and category theory), a necessity if we are to model precisely and then analyze our assurance cases. We discuss the practical applicability of the approach and argue that it has several advantages over existing approaches.
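
The flavour of "assurance as a model transformation decomposed into elementary steps" can be sketched, very loosely and without the paper's process-algebraic and categorical formalism, as a pipeline of small functions whose composition produces the assurance data that is then checked against the assurance constraints. Everything in the Python sketch below (the system data, the transformations, and the numeric thresholds) is an assumed toy example.

    # Toy illustration: an "assurance view" computed by composing elementary
    # transformations over system data, then checked against assurance constraints.
    from typing import Callable, Dict, List

    SystemData = Dict[str, float]
    AssuranceData = Dict[str, float]

    # Elementary transformations (the bottom of the refinement hierarchy).
    def extract_braking_distance(system: SystemData) -> float:
        return system["speed_mps"] ** 2 / (2 * system["deceleration_mps2"])

    def extract_sensor_latency(system: SystemData) -> float:
        return system["sensor_latency_s"]

    # Composite transformation: the assurance view built from the elementary steps.
    def assurance_view(system: SystemData) -> AssuranceData:
        return {
            "braking_distance_m": extract_braking_distance(system),
            "sensor_latency_s": extract_sensor_latency(system),
        }

    # Assurance constraints the view must satisfy.
    CONSTRAINTS: List[Callable[[AssuranceData], bool]] = [
        lambda view: view["braking_distance_m"] <= 40.0,
        lambda view: view["sensor_latency_s"] <= 0.1,
    ]

    system = {"speed_mps": 20.0, "deceleration_mps2": 6.0, "sensor_latency_s": 0.05}
    view = assurance_view(system)
    print("assured" if all(check(view) for check in CONSTRAINTS) else "not assured")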

2018-02-02
Xu, B., Lu, M., Zhang, D..  2017.  A Software Security Case Developing Method Based on Hierarchical Argument Strategy. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :632–633.
Security cases, which document the rationale for believing that a system is adequately secure, have not been used widely enough, owing to the lack of a practical construction method. This paper presents a hierarchical software security case development method to address this issue. We first present a security concept relationship model, then propose a hierarchical asset-threat-control measure argument strategy, taking into account an asset classification and a threat classification for the software security case. Lastly, we propose 11 software security case patterns and illustrate one of them.
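
A rough way to picture the asset-threat-control-measure argument strategy, as an assumed illustration rather than the paper's notation or any of its 11 patterns, is as a nested structure in which every identified threat to every asset must be answered by at least one control measure; uncovered threats then stand out as gaps in the security case. The asset names, threats, and controls in the Python sketch below are invented for this example.

    # Illustrative asset -> threat -> control-measure structure: the security case
    # argues that every identified threat to every asset is covered by a control.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Threat:
        description: str
        controls: List[str] = field(default_factory=list)

    @dataclass
    class Asset:
        name: str
        threats: List[Threat] = field(default_factory=list)

    assets = [
        Asset("User credential store", [
            Threat("Credential theft via SQL injection",
                   ["parameterized queries", "input validation"]),
            Threat("Offline cracking of stolen hashes", ["salted password hashing"]),
        ]),
        Asset("Update channel", [
            Threat("Tampered update packages"),   # no control yet: an uncovered threat
        ]),
    ]

    for asset in assets:
        for threat in asset.threats:
            status = "covered" if threat.controls else "UNCOVERED"
            print(f"{asset.name}: {threat.description} -> {status}")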