Biblio

Filters: Author is Chechik, Marsha
2020-02-10
Chechik, Marsha.  2019.  Uncertain Requirements, Assurance and Machine Learning. 2019 IEEE 27th International Requirements Engineering Conference (RE). :2–3.
From financial services platforms to social networks to vehicle control, software has come to mediate many activities of daily life. Governing bodies and standards organizations have responded to this trend by creating regulations and standards to address issues such as safety, security and privacy. In this environment, the compliance of software development to standards and regulations has emerged as a key requirement. Compliance claims and arguments are often captured in assurance cases, with linked evidence of compliance. Evidence can come from test cases, verification proofs, human judgement, or a combination of these. That is, we try to build (safety-critical) systems carefully according to well-justified methods and articulate these justifications in an assurance case that is ultimately judged by a human. Yet software is deeply rooted in uncertainty, making pragmatic assurance more inductive than deductive: most complex open-world functionality is either not completely specifiable (due to uncertainty) or not cost-effective to specify completely, and deductive verification cannot happen without specification. Inductive assurance, achieved by sampling or testing, is easier, but generalization from a finite set of examples cannot be formally justified. And of course, the recent popularity of constructing software via machine learning only worsens the problem: rather than being specified by predefined requirements, machine-learned components learn existing patterns from the available training data and make predictions for unseen data when deployed. On the surface, this ability is extremely useful for hard-to-specify concepts, e.g., the definition of a pedestrian in a pedestrian detection component of a vehicle. On the other hand, safety assessment and assurance of such components become very challenging. In this talk, I focus on two specific approaches to arguing about safety and security of software under uncertainty. The first one is a framework for managing uncertainty in assurance cases (for "conventional" and "machine-learned" systems) by systematically identifying, assessing and addressing it. The second is recent work on supporting development of requirements for machine-learned components in safety-critical domains.
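To make the abstract's uncertainty point concrete, here is a minimal sketch, not taken from the talk, of representing an assurance case as a claim tree whose leaf evidence carries a qualitative confidence; the class names, the three-level confidence scale, and the weakest-link propagation rule are all assumptions for illustration only.

```python
# Toy sketch (assumptions, not the framework from the talk): an assurance-case
# claim tree whose leaf claims carry evidence with a qualitative confidence,
# so that uncertainty can be surfaced and propagated to the top-level claim.
from dataclasses import dataclass, field
from typing import List, Optional

CONFIDENCE_ORDER = ["low", "medium", "high"]  # assumed qualitative scale

@dataclass
class Claim:
    text: str
    evidence_confidence: Optional[str] = None      # set on leaf claims only
    subclaims: List["Claim"] = field(default_factory=list)

    def confidence(self) -> str:
        # A claim is treated as only as strong as its weakest support.
        if not self.subclaims:
            return self.evidence_confidence or "low"
        levels = [CONFIDENCE_ORDER.index(c.confidence()) for c in self.subclaims]
        return CONFIDENCE_ORDER[min(levels)]

# Illustrative example: a pedestrian-detection safety claim with mixed evidence.
case = Claim("Pedestrian detection is acceptably safe", subclaims=[
    Claim("Detector meets accuracy target on the test set",
          evidence_confidence="medium"),   # inductive evidence from testing
    Claim("Runtime monitor flags out-of-distribution inputs",
          evidence_confidence="high"),
])
print(case.confidence())  # -> "medium": the testing evidence caps overall confidence
```

Under this toy weakest-link rule, inductive evidence (sampling or testing) limits the confidence of the top-level claim, which is one simple way to make the residual uncertainty in the argument visible to the human judging the case.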
2017-06-05
Kokaly, Sahar, Salay, Rick, Cassano, Valentin, Maibaum, Tom, Chechik, Marsha.  2016.  A Model Management Approach for Assurance Case Reuse Due to System Evolution. Proceedings of the ACM/IEEE 19th International Conference on Model Driven Engineering Languages and Systems. :196–206.

Evolution in software systems is a necessary activity that occurs when fixing bugs, adding functionality, or improving system quality. Systems often need to be shown to comply with regulatory standards. To demonstrate compliance, an artifact called an assurance case is often produced to show that the system indeed satisfies the property imposed by the standard (e.g., safety, privacy, or security). Since the system, the standard, and the assurance case can each be represented as a model, we propose the extension and use of traditional model management operators to aid in the reuse of parts of the assurance case when the system undergoes an evolution. Specifically, we present a model management approach that eventually produces a partial evolved assurance case and guidelines to help the assurance engineer in completing it. We demonstrate how our approach works on an automotive subsystem regulated by the ISO 26262 standard.
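As a rough illustration of the idea (not the model management operators defined in the paper), the following sketch computes change impact over an assumed traceability relation from system elements to assurance-case elements, splitting the case into a reusable part and a part the assurance engineer must revisit; all identifiers and the ISO 26262-style example data are hypothetical.

```python
# Toy sketch (hypothetical names and data, not the paper's operators): given a
# traceability map from system elements to assurance-case elements, a change
# impact computation separates assurance-case elements that can be reused
# as-is from those that must be revisited after the system evolves.
from typing import Dict, Set, Tuple

def impact_of_change(
    trace: Dict[str, Set[str]],   # system element -> assurance-case elements it supports
    changed: Set[str],            # system elements modified by the evolution
) -> Tuple[Set[str], Set[str]]:
    all_ac = set().union(*trace.values()) if trace else set()
    to_revisit = set().union(*(trace[e] for e in changed if e in trace))
    return all_ac - to_revisit, to_revisit

# Illustrative automotive example (made-up element and claim identifiers).
trace = {
    "BrakeController": {"G1: braking hazard mitigated", "E1: HIL test results"},
    "SpeedSensor":     {"E2: sensor fault analysis"},
}
reusable, to_revisit = impact_of_change(trace, {"BrakeController"})
print("Reusable:", reusable)     # E2 survives the evolution unchanged
print("Revisit:",  to_revisit)   # G1 and E1 need re-assessment
```

The revisit set corresponds roughly to the "partial evolved assurance case" the abstract mentions: elements the approach cannot carry over automatically and instead hands back to the assurance engineer with guidance for completion.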