Biblio
Filters: Keyword is system assurance
Assurance levels for decision making in autonomous intelligent systems and their safety. 2020 IEEE 11th International Conference on Dependable Systems, Services and Technologies (DESSERT). :475–483.
2020. The autonomy of intelligent systems and their safety rely on their ability for local decision making based on collected environmental information. This is even more the case for cyber-physical systems running safety-critical activities. While this intelligence is partial and fragmented, and cognitive techniques are of limited maturity, the decision function must produce results whose validity and scope must be weighed in light of the underlying assumptions, unavoidable uncertainty, and hypothetical safety limitations. Besides the dependability of the cognitive techniques themselves, what is at stake is the assurance level of autonomous decision making. Beyond the pure decision-making capabilities of the autonomous intelligent system, we need techniques that guarantee the system assurance required for the intended use. Security mechanisms for cognitive systems may consequently be tightly intertwined with them. We propose a trustworthiness module that is part of the system and contributes to its resulting safety. In this paper, we briefly review the state of the art regarding the dependability of cognitive techniques, the definition of assurance levels in this context, and related engineering practices. We elaborate on the safety design of autonomous intelligent systems, then discuss their security design and approaches for the mitigation of safety violations by the cognitive functions.
A Rigorous System Engineering Process for Resilient Cyber-Physical Systems Design. 2019 International Symposium on Systems Engineering (ISSE). :1–8.
2019. System assurance is the justified confidence that a system functions as intended and is free of exploitable vulnerabilities, either intentionally or unintentionally designed or inserted as part of the system at any time during the life cycle. The computation and communication backbone of Internet of Things (IoT) devices and other cyber-physical systems (CPS) makes them vulnerable to classes of threats previously not relevant for many physical control and computational systems. The design of resilient IoT systems encompasses vulnerabilities to adversarial disruption (Security), behavior in operational environments (Function), and increasing interdependencies (Connectedness). System assurance can be met only through a comprehensive and aggressive systems engineering approach. Engineering methods to "design in" security have been explored in the United States through two separate research programs, one through the Systems Engineering Research Center (SERC) and one through the Defense Advanced Research Projects Agency (DARPA). This paper integrates these two programs and discusses how assurance practices can be improved using new system engineering and system design strategies that rely on both functional and formal design methods.