Biblio

Filters: Author is Cámara, Javier
2023-01-30
Wohlrab, Rebekka, Cámara, Javier, Garlan, David, Schmerl, Bradley.  2022.  Explaining quality attribute tradeoffs in automated planning for self-adaptive systems. Journal of Systems and Software. 198

Self-adaptive systems commonly operate in heterogeneous contexts and need to consider multiple quality attributes. Human stakeholders often express their quality preferences by defining utility functions, which are used by self-adaptive systems to automatically generate adaptation plans. However, the adaptation space of realistic systems is large, and it is often unclear how utility functions impact the generated adaptation behavior, as well as structural, behavioral, and quality constraints. Moreover, human stakeholders are often not aware of the underlying tradeoffs between quality attributes. To address this issue, we present an approach that uses machine learning techniques (dimensionality reduction, clustering, and decision tree learning) to explain the reasoning behind automated planning. Our approach focuses on the tradeoffs between quality attributes and how the choice of weights in utility functions results in different plans being generated. We help humans understand quality attribute tradeoffs, identify key decisions in adaptation behavior, and explore how differences in utility functions result in different adaptation alternatives. We present two systems to demonstrate the approach's applicability and consider its potential application to 24 exemplar self-adaptive systems. Moreover, we assess the tradeoff between information reduction and the amount of explained variance retained in the results of our approach.
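
To make the pipeline concrete, the following is a minimal Python sketch of the three techniques the abstract names (dimensionality reduction, clustering, and decision tree learning) run on hypothetical plan-quality data; the quality names, the random data, and the scikit-learn calls are illustrative assumptions, not the paper's implementation.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical data: each row is one generated adaptation plan, described by
# its quality-attribute values (names are placeholders).
qualities = ["cost", "latency", "safety"]
plans = rng.random((200, len(qualities)))

# 1. Dimensionality reduction: project plans onto principal components and
#    report how much of the variance the projection retains.
pca = PCA(n_components=2)
projected = pca.fit_transform(plans)
print("explained variance ratio:", pca.explained_variance_ratio_)

# 2. Clustering: group plans that make similar quality-attribute tradeoffs.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(projected)

# 3. Decision tree learning: a human-readable model of which quality values
#    separate the plan clusters, i.e., the key tradeoff decisions.
tree = DecisionTreeClassifier(max_depth=3).fit(plans, labels)
print(export_text(tree, feature_names=qualities))

The explained-variance printout corresponds to the information-reduction tradeoff the abstract mentions: fewer components give simpler explanations at the price of retained variance.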

Cámara, Javier, Wohlrab, Rebekka, Garlan, David, Schmerl, Bradley.  2022.  ExTrA: Explaining architectural design tradeoff spaces via dimensionality reduction. Journal of Systems and Software. 198

In software design, guaranteeing the correctness of run-time system behavior while achieving an acceptable balance among multiple quality attributes remains a challenging problem. Moreover, providing guarantees about the satisfaction of those requirements when systems are subject to uncertain environments is even more challenging. While recent developments in architectural analysis techniques can assist architects in exploring the satisfaction of quantitative guarantees across the design space, existing approaches are still limited because they do not explicitly link design decisions to satisfaction of quality requirements. Furthermore, the amount of information they yield can be overwhelming to a human designer, making it difficult to see the forest for the trees. In this paper we present ExTrA (Explaining Tradeoffs of software Architecture design spaces), an approach to analyzing architectural design spaces that addresses these limitations and provides a basis for explaining design tradeoffs. Our approach uses dimensionality reduction techniques employed in machine learning pipelines, such as Principal Component Analysis (PCA) and Decision Tree Learning (DTL), to enable architects to understand how design decisions contribute to the satisfaction of extra-functional properties across the design space. Our results show the feasibility of the approach in two case studies and provide evidence that combining complementary techniques like PCA and DTL is a viable approach to facilitate comprehension of tradeoffs in poorly-understood design spaces.
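
As a complement to the PCA sketch above, here is a minimal sketch of the DTL half of such a pipeline: learning which design decisions separate configurations that satisfy an extra-functional requirement from those that do not. The decision names, the random design space, and the satisfaction rule are invented for illustration.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
decisions = ["replicated_db", "cache_layer", "async_messaging", "tls_everywhere"]

# Hypothetical design space: each row is one configuration, each column a
# binary design decision (1 = decision adopted).
X = rng.integers(0, 2, size=(300, len(decisions)))

# Hypothetical analysis verdict: a configuration meets its latency budget iff
# it has a cache layer and either database replication or async messaging.
y = ((X[:, 1] == 1) & ((X[:, 0] == 1) | (X[:, 2] == 1))).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=decisions))

The printed rules are the kind of human-readable linkage between design decisions and requirement satisfaction that the approach aims to surface.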

2022-01-12
Li, Nianyu, Cámara, Javier, Garlan, David, Schmerl, Bradley, Jin, Zhi.  2021.  Hey! Preparing Humans to do Tasks in Self-adaptive Systems. Proceedings of the 16th Symposium on Software Engineering for Adaptive and Self-Managing Systems, Virtual.
Many self-adaptive systems benefit from human involvement, where human operators can complement the capabilities of systems (e.g., by supervising decisions, or performing adaptations and tasks involving physical changes that cannot be automated). However, insufficient preparation (e.g., lack of task context comprehension) may hinder the effectiveness of human involvement, especially when operators are unexpectedly interrupted to perform a new task. Preparatory notification of a task provided in advance can sometimes help human operators focus their attention on the forthcoming task and understand its context before task execution, hence improving effectiveness. Nevertheless, deciding when to use preparatory notification as a tactic is not obvious and entails considering different factors that include uncertainties induced by human operator behavior (who might ignore the notice message), human attributes (e.g., operator training level), and other information that refers to the state of the system and its environment. In this paper, informed by work in cognitive science on human attention and context management, we introduce a formal framework to reason about the usage of preparatory notifications in self-adaptive systems involving human operators. Our framework characterizes the effects of managing attention via task notification in terms of task context comprehension. We also build on our framework to develop an automated probabilistic reasoning technique able to determine when and in what form a preparatory notification tactic should be used to optimize system goals. We illustrate our approach in a representative scenario of human-robot collaborative goods delivery.
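
A back-of-the-envelope rendition of the decision the paper automates with probabilistic reasoning, assuming invented utilities and probabilities: send a preparatory notification only when the expected gain in task-context comprehension outweighs its cost and the chance that the operator ignores it.

U_PREPARED = 0.90      # task utility when the operator has full context
U_UNPREPARED = 0.55    # task utility when interrupted without preparation
C_NOTICE = 0.25        # cost of issuing and attending to the notification

def eu_with_notice(p_read: float) -> float:
    """Expected utility if a preparatory notification is sent: with
    probability p_read the operator reads it and gains the task context."""
    return p_read * U_PREPARED + (1.0 - p_read) * U_UNPREPARED - C_NOTICE

# p_read as a (hypothetical) function of operator attributes and state.
for label, p_read in [("distracted operator", 0.6), ("attentive operator", 0.9)]:
    decision = "notify" if eu_with_notice(p_read) > U_UNPREPARED else "do not notify"
    print(f"{label}: {decision} (EU with notice = {eu_with_notice(p_read):.3f})")

With these invented numbers, notification only pays off for the attentive operator, mirroring the paper's point that the tactic's value depends on operator attributes and state.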
Cámara, Javier, Silva, Mariana, Garlan, David, Schmerl, Bradley.  2021.  Explaining Architectural Design Tradeoff Spaces: a Machine Learning Approach. Proceedings of the 15th European Conference on Software Architecture, Virtual (Originally, Vaxjo Sweden).
In software design, guaranteeing the correctness of run-time system behavior while achieving an acceptable balance among multiple quality attributes remains a challenging problem. Moreover, providing guarantees about the satisfaction of those requirements when systems are subject to uncertain environments is even more challenging. While recent developments in architectural analysis techniques can assist architects in exploring the satisfaction of quantitative guarantees across the design space, existing approaches are still limited because they do not explicitly link design decisions to satisfaction of quality requirements. Furthermore, the amount of information they yield can be overwhelming to a human designer, making it difficult to see the forest for the trees. In this paper, we present an approach to analyzing architectural design spaces that addresses these limitations and provides a basis for explaining design tradeoffs. Our approach combines dimensionality reduction techniques employed in machine learning pipelines with quantitative verification to enable architects to understand how design decisions contribute to the satisfaction of strict quantitative guarantees under uncertainty across the design space. Our results show the feasibility of the approach in two case studies and provide evidence that dimensionality reduction is a viable approach to facilitate comprehension of tradeoffs in poorly-understood design spaces.
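
The abstract pairs dimensionality reduction with quantitative verification; the sketch below illustrates the verification half in miniature, computing the probability of reaching a success state in a small discrete-time Markov chain standing in for one design configuration. The chain and its probabilities are invented; in the paper's setting such values would be produced by a probabilistic model checker and then fed, per configuration, into the dimensionality-reduction step.

import numpy as np

# Transition matrix of a four-state DTMC; rows sum to 1. States:
# 0 = working, 1 = degraded, 2 = success (absorbing), 3 = failure (absorbing).
P = np.array([
    [0.70, 0.20, 0.08, 0.02],
    [0.30, 0.40, 0.10, 0.20],
    [0.00, 0.00, 1.00, 0.00],
    [0.00, 0.00, 0.00, 1.00],
])

# Standard absorption-probability system for the transient states {0, 1}:
# solve (I - Q) x = r, where Q is the transient block and r holds the
# one-step probabilities of reaching the success state.
Q = P[:2, :2]
r = P[:2, 2]
x = np.linalg.solve(np.eye(2) - Q, r)
print("P(eventually success | start in working) =", round(float(x[0]), 4))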
2021-03-09
Cámara, Javier, Moreno, Gabriel A., Garlan, David.  2015.  Reasoning about Human Participation in Self-Adaptive Systems. Proceedings of the 10th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS).

Self-adaptive systems overcome many of the limitations of human supervision in complex software-intensive systems by endowing them with the ability to automatically adapt their structure and behavior in the presence of runtime changes. However, adaptation in some classes of systems (e.g., safety-critical) can benefit by receiving information from humans (e.g., acting as sophisticated sensors, decision-makers), or by involving them as system-level effectors to execute adaptations (e.g., when automation is not possible, or as a fallback mechanism). However, human participants are influenced by factors external to the system (e.g., training level, fatigue) that affect the likelihood of success when they perform a task, its duration, or even whether they are willing to perform it in the first place. Without careful consideration of these factors, it is unclear how to decide when to involve humans in adaptation, and in which way. In this paper, we investigate how the explicit modeling of human participants can provide better insight into the trade-offs of involving humans in adaptation. We contribute a formal framework to reason about human involvement in self-adaptation, focusing on the role of human participants as actors (i.e., effectors) during the execution stage of adaptation. The approach consists of: (i) a language to express adaptation models that capture factors affecting human behavior and its interactions with the system, and (ii) a formalization of these adaptation models as stochastic multiplayer games (SMGs) that can be used to analyze human-system-environment interactions. We illustrate our approach in an adaptive industrial middleware used to monitor and manage sensor networks in renewable energy production plants.
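A toy rendition of the tradeoff the paper analyzes with stochastic multiplayer games, under an invented success model: route an adaptation through a human effector, whose success probability depends on training and fatigue, or fall back to a less effective automated one.

def p_human_success(training: float, fatigue: float) -> float:
    """Hypothetical model: training helps, fatigue hurts, clamped to [0, 1]."""
    return max(0.0, min(1.0, 0.5 + 0.5 * training - 0.4 * fatigue))

U_SUCCESS, U_FAILURE = 1.0, -0.5   # payoffs of the adaptation task
U_AUTOMATED = 0.45                 # guaranteed but lower-value automated fallback

for training, fatigue in [(0.9, 0.1), (0.3, 0.8)]:
    p = p_human_success(training, fatigue)
    eu_human = p * U_SUCCESS + (1.0 - p) * U_FAILURE
    choice = "human" if eu_human > U_AUTOMATED else "automated"
    print(f"training={training}, fatigue={fatigue}: "
          f"EU(human)={eu_human:.2f} -> use {choice} effector")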

2023-01-30
Li, Nianyu, Cámara, Javier, Garlan, David, Schmerl, Bradley.  2020.  Reasoning about When to Provide Explanation for Human-in-the-loop Self-Adaptive Systems. Proceedings of the 2020 IEEE Conference on Autonomic Computing and Self-organizing Systems (ACSOS).

Many self-adaptive systems benefit from human involvement, where a human operator can provide expertise not available to the system and perform adaptations involving physical changes that cannot be automated. However, a lack of transparency and intelligibility of system goals and the autonomous behaviors enacted to achieve them may hinder a human operator's effort to make such involvement effective. Explanation is sometimes helpful to allow the human to understand why the system is making certain decisions. However, explanations come with costs in terms of, e.g., delayed actions. Hence, it is not always obvious whether explanations will improve the satisfaction of system goals and, if so, when to provide them to the operator. In this work, we define a formal framework for reasoning about explanations of adaptive system behaviors and the conditions under which they are warranted. Specifically, we characterize explanations in terms of their impact on a human operator's ability to effectively engage in adaptive actions. We then present a decision-making approach for planning in self-adaptation that leverages a probabilistic reasoning tool to determine when the explanation should be used in an adaptation strategy in order to improve overall system utility. We illustrate our approach in a representative scenario for the application of an adaptive news website in the context of potential denial-of-service attacks.
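
A minimal numeric sketch of the paper's core question, with hypothetical probabilities and costs: does explaining a decision pay off once the cost of the delayed action is counted?

U_TASK = 1.0          # utility of an effectively executed adaptation
P_BASELINE = 0.5      # chance the operator acts effectively unaided
P_EXPLAINED = 0.85    # chance once the decision has been explained

def eu_with_explanation(delay_cost: float) -> float:
    return P_EXPLAINED * U_TASK - delay_cost

def eu_without_explanation() -> float:
    return P_BASELINE * U_TASK

for delay_cost in (0.1, 0.5):   # mild vs. severe cost of the delayed action
    explain = eu_with_explanation(delay_cost) > eu_without_explanation()
    print(f"delay cost {delay_cost}: "
          f"{'explain first' if explain else 'act immediately'}")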

2021-08-11
Sowe, Sulayman K., Fränzle, Martin, Osterloh, Jan-Patrick, Trende, Alexander, Weber, Lars, Lüdtke, Andreas.  2020.  Challenges for Integrating Humans into Vehicular Cyber-Physical Systems. Software Engineering and Formal Methods. 12226:20–26.
Advances in Vehicular Cyber-Physical Systems (VCPS) are the primary enablers of the shift from no automation to fully autonomous vehicles (AVs). One of the impacts of this shift is to develop safe AVs in which most or all of the functions of the human driver are replaced with an intelligent system. However, while some progress has been made in equipping AVs with advanced AI capabilities, VCPS designers are still faced with the challenge of designing trustworthy AVs that are in sync with the unpredictable behaviours of humans. In order to address this challenge, we present a model that describes how a Human Ambassador component can be integrated into the overall design of a new generation of VCPS. A scenario is presented to demonstrate how the model can work in practice. Formalisation and co-simulation challenges associated with integrating the Human Ambassador component and future work we are undertaking are also discussed.
2018-10-16
Cámara, Javier, Peng, Wenxin, Garlan, David, Schmerl, Bradley.  2018.  Reasoning about sensing uncertainty and its reduction in decision-making for self-adaptation. Science of Computer Programming. 167

Adaptive systems are expected to adapt to unanticipated run-time events using imperfect information about themselves, their environment, and goals. This entails handling the effects of uncertainties in decision-making, which are not always treated as a first-class concern. This paper contributes a formal analysis technique that explicitly considers uncertainty in sensing when reasoning about the best way to adapt, together with uncertainty reduction mechanisms to improve system utility. We illustrate our approach on a Denial of Service (DoS) attack scenario and present results that demonstrate the benefits of uncertainty-aware decision-making in comparison to an uncertainty-ignorant approach, both in the presence and absence of uncertainty reduction mechanisms.
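The following small sketch contrasts uncertainty-ignorant and uncertainty-aware decision-making in the abstract's sense, using an invented sensor model and tactics: rather than trusting a single noisy reading, the aware planner averages each tactic's utility over the distribution of plausible true values.

import numpy as np

rng = np.random.default_rng(2)
reading = 95.0         # noisy observed request rate (requests/s)
sensor_sigma = 15.0    # assumed standard deviation of the sensor noise
capacity = 100.0       # load one server can absorb

def utility(tactic: str, true_rate: float) -> float:
    if tactic == "add_server":
        return 1.0 - 0.3                          # handles the load, costs 0.3
    return 1.0 if true_rate <= capacity else 0.0  # "do_nothing" is a gamble

# Uncertainty-ignorant: act as if the reading were the ground truth.
naive = max(("add_server", "do_nothing"), key=lambda t: utility(t, reading))

# Uncertainty-aware: average each tactic's utility over the sensor noise model.
samples = rng.normal(reading, sensor_sigma, size=10_000)
aware = max(("add_server", "do_nothing"),
            key=lambda t: float(np.mean([utility(t, s) for s in samples])))

print("ignorant choice:", naive, "| uncertainty-aware choice:", aware)

With these numbers the two planners disagree: the point estimate looks safe, but once sensor noise is accounted for, provisioning extra capacity has the higher expected utility.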

2014-09-17
Schmerl, Bradley, Cámara, Javier, Gennari, Jeffrey, Garlan, David, Casanova, Paulo, Moreno, Gabriel A., Glazier, Thomas J., Barnes, Jeffrey M..  2014.  Architecture-based Self-protection: Composing and Reasoning About Denial-of-service Mitigations. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :2:1–2:12.

Security features are often hardwired into software applications, making it difficult to adapt security responses to reflect changes in runtime context and new attacks. In prior work, we proposed the idea of architecture-based self-protection as a way of separating adaptation logic from application logic and providing a global perspective for reasoning about security adaptations in the context of other business goals. In this paper, we present an approach, based on this idea, for combating denial-of-service (DoS) attacks. Our approach allows DoS-related tactics to be composed into more sophisticated mitigation strategies that encapsulate possible responses to a security problem. Then, utility-based reasoning can be used to consider different business contexts and qualities. We describe how this approach forms the underpinnings of a scientific approach to self-protection, allowing us to reason about how to make the best choice of mitigation at runtime. Moreover, we also show how formal analysis can be used to determine whether the mitigations cover the range of conditions the system is likely to encounter, and the effect of mitigations on other quality attributes of the system. We evaluate the approach using the Rainbow self-adaptive framework and show how Rainbow chooses DoS mitigation tactics that are sensitive to different business contexts.
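
A sketch of utility-based tactic selection in the spirit of the paper, with invented tactics, scores, and weights: each DoS mitigation is scored against several business qualities, and context-specific weights determine which mitigation wins.

# Per-tactic scores in [0, 1] for (user_experience, cost_savings,
# attack_containment) -- higher is better; all values are illustrative.
tactics = {
    "blackhole_attacker": (0.3, 0.9, 0.9),   # may also block legitimate users
    "throttle_requests":  (0.6, 0.7, 0.6),
    "add_capacity":       (1.0, 0.2, 0.3),   # keeps users happy, expensive
    "enable_captcha":     (0.4, 0.9, 0.7),
}

contexts = {
    "e-commerce site (user experience first)": (0.6, 0.1, 0.3),
    "budget-constrained site (cost first)":    (0.2, 0.6, 0.2),
}

for context, weights in contexts.items():
    best = max(tactics,
               key=lambda t: sum(w * s for w, s in zip(weights, tactics[t])))
    print(f"{context}: choose {best}")

With these numbers, the user-experience-driven context provisions extra capacity while the cost-driven one blackholes the attacker, illustrating the kind of business-context sensitivity the paper evaluates with Rainbow.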