Biblio
Many self-adaptive systems benefit from human involvement and oversight, where a human operator can provide expertise not available to the system and detect problems that the system is unaware of. One way of achieving this synergy is by placing the human operator on the loop, i.e., providing supervisory oversight and intervening in the case of questionable adaptation decisions. To make such interaction effective, an explanation can play an important role in helping the human operator understand why the system is making certain decisions and in improving the operator’s knowledge of the system. This, in turn, may improve the operator’s ability to intervene and, if necessary, override the decisions being made by the system. However, explanations may incur costs, such as delayed actions and the possibility that the operator makes a poor judgment. Hence, it is not always obvious whether an explanation will improve overall utility and, if so, what kind of explanation should be provided to the operator. In this work, we define a formal framework for reasoning about explanations of adaptive system behaviors and the conditions under which they are warranted. Specifically, we characterize explanations in terms of their content, effect, and cost. We then present a dynamic system adaptation approach that leverages a probabilistic reasoning technique to determine when an explanation should be used to improve overall system utility. We evaluate our explanation framework in the context of a realistic industrial control system with adaptive behaviors.
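To make the trade-off concrete, here is a minimal sketch in Python, with all probabilities, utilities, and names invented for illustration rather than taken from the paper. It compares the expected utility of adapting immediately against explaining first, where the explanation delays the action but raises the chance that the operator judges the situation correctly:

    # Sketch of the explanation trade-off; all values are illustrative assumptions.
    def expected_utility(p_correct, utility_good, utility_bad, explanation_cost=0.0):
        """Expected utility of a decision, net of any explanation cost."""
        return (p_correct * utility_good
                + (1.0 - p_correct) * utility_bad
                - explanation_cost)

    def should_explain(p_correct_without, p_correct_with,
                       utility_good, utility_bad, delay_cost):
        """Explain only if the gain in expected utility outweighs the delay cost."""
        eu_without = expected_utility(p_correct_without, utility_good, utility_bad)
        eu_with = expected_utility(p_correct_with, utility_good, utility_bad,
                                   explanation_cost=delay_cost)
        return eu_with > eu_without

    # Assumed values: an explanation raises the chance of a correct operator
    # judgment from 0.6 to 0.9 but costs 5 utility units of delay.
    print(should_explain(0.6, 0.9, utility_good=100.0, utility_bad=20.0,
                         delay_cost=5.0))  # True: explaining pays off here

In this toy setting, explaining is warranted because the expected utility with the explanation (87) exceeds the expected utility without it (68), despite the delay cost.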
Security attacks present unique challenges to self-adaptive system design due to the adversarial nature of the environment. Game-theoretic approaches have been explored in security to model malicious behaviors and to design reliable defenses in a mathematically grounded manner. However, modeling the system as a single player, as done in prior work, is insufficient when the system is partially compromised and when fine-grained defensive strategies are needed, in which the remaining autonomous parts of the system can cooperate to mitigate the impact of attacks. To address these issues, we propose a new self-adaptive framework that incorporates Bayesian game theory and models the defender (i.e., the system) at the granularity of components. Under security attacks, the architectural model of the system is translated into a Bayesian multi-player game in which each component is explicitly modeled as an independent player and security attacks are encoded as variant types of the components. The optimal defensive strategy (i.e., the adaptation response) is computed dynamically by solving for a pure-strategy equilibrium that achieves the best possible system utility, improving the resiliency of the system against security attacks. We illustrate our approach using an example involving load balancing and a case study on inter-domain routing.
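As a rough illustration of this encoding, the following sketch (with made-up payoffs and priors, not the paper’s actual model) casts a load balancer and a possibly compromised server as a two-player Bayesian game, encodes the attack as a hidden “compromised” type, and enumerates pure-strategy equilibria by brute force:

    # Illustrative Bayesian game: a load balancer vs. a server whose type
    # ("honest" or "compromised") is hidden and drawn from a known prior.
    # All payoffs and probabilities are invented for this sketch.
    from itertools import product

    # payoffs[server_type][(lb_action, server_action)] = (lb_utility, server_utility)
    payoffs = {
        "honest": {
            ("route", "serve"): (10, 10), ("route", "drop"): (-5, 0),
            ("isolate", "serve"): (2, 1), ("isolate", "drop"): (2, 0),
        },
        "compromised": {
            ("route", "serve"): (-20, 15), ("route", "drop"): (-5, 5),
            ("isolate", "serve"): (1, -5), ("isolate", "drop"): (1, 0),
        },
    }
    # Belief after an intrusion alert: the server is likely compromised.
    prior = {"honest": 0.4, "compromised": 0.6}
    lb_actions = ["route", "isolate"]
    server_actions = ["serve", "drop"]

    def is_equilibrium(lb_action, server_strategy):
        """Pure-strategy Bayesian Nash check: the load balancer best-responds
        to its belief over types, and each server type best-responds to the
        load balancer."""
        def lb_eu(a):
            return sum(prior[t] * payoffs[t][(a, server_strategy[t])][0]
                       for t in prior)
        if any(lb_eu(a) > lb_eu(lb_action) for a in lb_actions):
            return False
        for t in prior:
            current = payoffs[t][(lb_action, server_strategy[t])][1]
            if any(payoffs[t][(lb_action, sa)][1] > current
                   for sa in server_actions):
                return False
        return True

    for lb_a in lb_actions:
        for honest_a, comp_a in product(server_actions, repeat=2):
            strategy = {"honest": honest_a, "compromised": comp_a}
            if is_equilibrium(lb_a, strategy):
                print("pure-strategy equilibrium:", lb_a, strategy)

With these assumed numbers, the only pure-strategy equilibrium has the load balancer isolating the suspect server, which is the kind of component-level adaptation response the approach is meant to compute.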
Many self-adaptive systems benefit from human involvement and oversight, where a human operator can provide expertise not available to the system and can detect problems that the system is unaware of. One way of achieving this is by placing the human operator on the loop, i.e., providing supervisory oversight and intervening in the case of questionable adaptation decisions. To make such interaction effective, an explanation is sometimes helpful, allowing the human to understand why the system is making certain decisions and to calibrate their confidence in it. However, explanations come with costs in terms of delayed actions and the possibility that a human may make a bad judgment. Hence, it is not always obvious whether explanations will improve overall utility and, if so, what kinds of explanation to provide to the operator. In this work, we define a formal framework for reasoning about explanations of adaptive system behaviors and the conditions under which they are warranted. Specifically, we characterize explanations in terms of their content, effect, and cost. We then present a dynamic adaptation approach that leverages a probabilistic reasoning technique to determine when an explanation should be used to improve overall system utility.
Many self-adaptive systems benefit from human involvement, where a human operator can provide expertise not available to the system and perform adaptations involving physical changes that cannot be automated. However, a lack of transparency and intelligibility of system goals, and of the autonomous behaviors enacted to achieve them, may hinder a human operator’s effort to make such involvement effective. An explanation is sometimes helpful, allowing the human to understand why the system is making certain decisions. However, explanations come with costs, e.g., delayed actions. Hence, it is not always obvious whether explanations will improve the satisfaction of system goals and, if so, when to provide them to the operator. In this work, we define a formal framework for reasoning about explanations of adaptive system behaviors and the conditions under which they are warranted. Specifically, we characterize explanations in terms of their impact on a human operator’s ability to effectively engage in adaptive actions. We then present a decision-making approach for planning in self-adaptation that leverages a probabilistic reasoning tool to determine when an explanation should be used in an adaptation strategy in order to improve overall system utility. We illustrate our approach in a representative scenario involving an adaptive news website under potential denial-of-service attacks.
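As an illustration of the planning decision in this scenario (with made-up states, probabilities, and costs; the approach itself relies on a probabilistic reasoning tool rather than this hand-rolled computation), the sketch below compares two adaptation strategies under a suspected denial-of-service attack, adapting autonomously versus explaining first so the operator can intervene, by their expected utility:

    # Illustrative one-step decision for the news-website scenario.
    # States, transition probabilities, rewards, and costs are assumptions.
    UNDER_ATTACK, MITIGATED, DEGRADED = "under_attack", "mitigated", "degraded"

    # Outcome distribution of each strategy when applied in UNDER_ATTACK.
    transitions = {
        # Autonomous adaptation (e.g., add capacity) acts immediately but may
        # fail to stop the attack.
        "auto_adapt": {MITIGATED: 0.55, DEGRADED: 0.45},
        # Explaining first delays the response (captured as an action cost)
        # but makes an effective operator intervention more likely.
        "explain_then_operator": {MITIGATED: 0.85, DEGRADED: 0.15},
    }
    reward = {MITIGATED: 100.0, DEGRADED: 10.0}
    action_cost = {"auto_adapt": 0.0, "explain_then_operator": 12.0}

    def expected_value(strategy):
        """Expected utility of a strategy applied in the UNDER_ATTACK state."""
        return (sum(p * reward[s] for s, p in transitions[strategy].items())
                - action_cost[strategy])

    for s in transitions:
        print(f"{s}: {expected_value(s):.1f}")
    print("chosen strategy:", max(transitions, key=expected_value))

Under these assumptions the explanation-based strategy wins (74.5 vs. 59.5), but shifting the probabilities or raising the delay cost flips the choice, which is exactly the sensitivity the formal framework is meant to capture.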