Assurance levels for decision making in autonomous intelligent systems and their safety

Title: Assurance levels for decision making in autonomous intelligent systems and their safety
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Fourastier, Y., Baron, C., Thomas, C., Esteban, P.
Conference Name: 2020 IEEE 11th International Conference on Dependable Systems, Services and Technologies (DESSERT)
Date Published: May 2020
Publisher: IEEE
ISBN Number: 978-1-7281-9957-3
Keywords: artificial intelligence, assurance level, assurance level definition, autonomous intelligent system, autonomous intelligent systems safety, autonomous system decision making, Cognition, cognitive functions, cognitive systems, cognitive techniques dependability, Collaboration, composability, Cyber-physical systems, decision function, decision making, decision self-making, environmental information, Human Behavior, hypothetical safety limitation, information assurance, Metrics, policy-based governance, pubcrawl, pure decision-making capabilities, resilience, Resiliency, Safety, safety critical activities, safety monitoring design, safety violations, safety-critical software, Scalability, security of data, system assurance, unavoidable uncertainty
Abstract: The autonomy of intelligent systems and their safety rely on their ability to make local decisions based on collected environmental information. This is even more true for cyber-physical systems performing safety-critical activities. While this intelligence is partial and fragmented, and cognitive techniques are of limited maturity, the decision function must produce results whose validity and scope must be weighed in light of the underlying assumptions, unavoidable uncertainty, and hypothetical safety limitations. Beyond the dependability of cognitive techniques, what is at stake is the assurance level of the decision-making itself. Beyond the pure decision-making capabilities of the autonomous intelligent system, we need techniques that guarantee the system assurance required for the intended use. Security mechanisms for cognitive systems may consequently be tightly intertwined with them. We propose a trustworthiness module which is part of the system and contributes to its resulting safety. In this paper, we briefly review the state of the art regarding the dependability of cognitive techniques, the definition of assurance levels in this context, and related engineering practices. We elaborate on the safety design of autonomous intelligent systems, then discuss its security design and approaches for mitigating safety violations by the cognitive functions.
URL: https://ieeexplore.ieee.org/document/9125079
DOI: 10.1109/DESSERT50317.2020.9125079
Citation Key: fourastier_assurance_2020