Biblio
Consent as a Foundation for Responsible Autonomy. Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI), vol. 36.
2022. This paper focuses on a dynamic aspect of responsible autonomy, namely, making intelligent agents responsible at run time. That is, it considers settings where decision making by agents impinges upon the outcomes perceived by other agents. For an agent to act responsibly, it must accommodate the desires and other attitudes of its users and, through other agents, of their users.
The contribution of this paper is twofold. First, it provides a conceptual analysis of consent, its benefits and misuses, and how understanding consent can help achieve responsible autonomy. Second, it outlines challenges for AI (in particular, for agents and multiagent systems) that merit investigation to form a basis for modeling consent in multiagent systems and applying consent to achieve responsible autonomy.
Blue Sky Track
AlloyMax: Bringing Maximum Satisfaction to Relational Specifications. The ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE) 2021.
2021. Alloy is a declarative modeling language based on a first-order relational logic. Its constraint-based analysis has enabled a wide range of applications in software engineering, including configuration synthesis, bug finding, test-case generation, and security analysis. Certain types of analysis tasks in these domains involve finding an optimal solution. For example, in a network configuration problem, instead of finding any valid configuration, it may be desirable to find one that is most permissive (i.e., it permits a maximum number of packets). Due to its dependence on SAT, however, Alloy cannot be used to specify and analyze these types of problems.
We propose AlloyMax, an extension of Alloy with a capability to express and analyze problems with optimal solutions. AlloyMax introduces (1) a small addition of language constructs that can be used to specify a wide range of problems that involve optimality and (2) a new analysis engine that leverages a Maximum Satisfiability (MaxSAT) solver to generate optimal solutions. To enable this new type of analysis, we show how a specification in a first-order relational logic can be translated into an input format of MaxSAT solvers—namely, a Boolean formula in weighted conjunctive normal form (WCNF). We demonstrate the applicability and scalability of AlloyMax on a benchmark of problems. To our knowledge, AlloyMax is the first approach to enable analysis with optimality in a relational modeling language, and we believe that AlloyMax has the potential to bring a wide range of new applications to Alloy.
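To make the target representation concrete, the sketch below illustrates the semantics of weighted CNF (WCNF), the MaxSAT input format the abstract says AlloyMax compiles to: hard clauses must hold, and the solver maximizes the total weight of satisfied soft clauses. The clause meanings and the brute-force solver are illustrative assumptions for a toy "permit as many packets as possible" scenario, not the paper's actual translation or engine.

```python
from itertools import product

# Toy WCNF instance. Variables 1..3 might encode "packet i is permitted"
# in a network-configuration example like the one in the abstract; this
# encoding is hypothetical, not AlloyMax's actual translation.
HARD = [[-1, -2]]                       # hard clause: packets 1 and 2 conflict
SOFT = [([1], 1), ([2], 1), ([3], 1)]   # soft clauses: permit each packet, weight 1

def satisfied(clause, assignment):
    """A clause (list of signed variable ids) holds if any literal is true."""
    return any((lit > 0) == assignment[abs(lit)] for lit in clause)

def brute_force_maxsat(hard, soft, n_vars):
    """Return an assignment satisfying all hard clauses that maximizes the
    total weight of satisfied soft clauses (exponential; for demonstration
    only -- a real MaxSAT solver would be used in practice)."""
    best, best_weight = None, -1
    for bits in product([False, True], repeat=n_vars):
        assignment = dict(enumerate(bits, start=1))
        if not all(satisfied(c, assignment) for c in hard):
            continue
        weight = sum(w for c, w in soft if satisfied(c, assignment))
        if weight > best_weight:
            best, best_weight = assignment, weight
    return best, best_weight

model, weight = brute_force_maxsat(HARD, SOFT, n_vars=3)
print(model, weight)   # an optimum permits packet 3 plus one of packets 1 or 2 (weight 2)
```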
Explaining Architectural Design Tradeoff Spaces: A Machine Learning Approach. Proceedings of the 15th European Conference on Software Architecture, virtual (originally Växjö, Sweden).
2021. In software design, guaranteeing the correctness of run-time system behavior while achieving an acceptable balance among multiple quality attributes remains a challenging problem. Moreover, providing guarantees about the satisfaction of those requirements when systems are subject to uncertain environments is even more challenging. While recent developments in architectural analysis techniques can assist architects in exploring the satisfaction of quantitative guarantees across the design space, existing approaches are still limited because they do not explicitly link design decisions to satisfaction of quality requirements. Furthermore, the amount of information they yield can be overwhelming to a human designer, making it difficult to see the forest for the trees. In this paper, we present an approach to analyzing architectural design spaces that addresses these limitations and provides a basis to enable the explainability of design tradeoffs. Our approach combines dimensionality reduction techniques employed in machine learning pipelines with quantitative verification to enable architects to understand how design decisions contribute to the satisfaction of strict quantitative guarantees under uncertainty across the design space. Our results show the feasibility of the approach in two case studies and evidence that dimensionality reduction is a viable approach to facilitate comprehension of tradeoffs in poorly-understood design spaces.
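As a rough illustration of the kind of pipeline the abstract describes, the sketch below runs PCA (via SVD) over a matrix of architectural configurations so that the dominant design decisions behind the variance in the space become visible. The data, column meanings, and the utility signal are fabricated for illustration; in the paper's setting the quality values would come from quantitative verification of each configuration.

```python
import numpy as np

# Hypothetical design-space data: each row is an architectural configuration,
# each column a binary design decision (e.g., "use cache", "replicate service").
# `utility` stands in for a verification result per configuration; the values
# here are synthetic.
rng = np.random.default_rng(0)
decisions = rng.integers(0, 2, size=(200, 10)).astype(float)
utility = decisions @ rng.normal(size=10) + rng.normal(scale=0.1, size=200)

# PCA via SVD: project configurations onto the two directions of greatest
# variance so an architect can relate decisions to quality tradeoffs visually.
centered = decisions - decisions.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:2]                     # top-2 principal directions
projected = centered @ components.T     # 2-D coordinates per configuration

# Loadings hint at which decisions dominate each principal component,
# one way to link individual decisions back to the tradeoff space.
print("PC1 decision loadings:", np.round(components[0], 2))
print("utility/PC1 correlation:",
      np.round(np.corrcoef(projected[:, 0], utility)[0, 1], 2))
```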
A Probabilistic Model for Personality Trait Focused Explainability. Proceedings of the 4th International Workshop on Context-aware, Autonomous and Smart Architecture (CASA 2021), co-located with the 15th European Conference on Software Architecture.
2021. Explainability refers to the degree to which a software system’s actions or solutions can be understood by humans. Giving humans the right amount of explanation at the right time is an important factor in maximizing effective collaboration between an adaptive system and humans during interaction. However, explanations come with costs, such as the time required to explain and humans’ response time. Hence it is not always clear whether explanations will improve overall system utility and, if so, how the system should effectively provide them, particularly given that different humans may benefit from different amounts and frequencies of explanation. To provide a partial basis for making such decisions, this paper defines a formal framework that incorporates human personality traits as one of the important elements in guiding automated decision-making about the proper amount of explanation to give to the human to improve overall system utility. Specifically, we use probabilistic model analysis to determine how to utilize explanations effectively. To illustrate our approach, we developed Grid, a virtual human and system interaction game, to represent scenarios for human-systems collaboration and to demonstrate how a human’s personality traits can be used as a factor the system considers when providing appropriate explanations.
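A minimal sketch of the decision the abstract describes: should the system explain before handing a task to the human, given that explanation carries a time cost? All numbers and the trait-to-probability mapping below are illustrative assumptions, not the paper's calibrated probabilistic model.

```python
# Toy expected-utility model: explanation boosts the chance the human acts
# correctly, with the size of the boost depending on a personality trait.
# The trait ("openness"), rates, and costs are hypothetical.

def p_correct(openness: float, explained: bool) -> float:
    """Chance the human acts correctly; in this toy model, explanation
    helps high-openness users more."""
    base = 0.6
    boost = 0.3 * openness if explained else 0.0
    return min(base + boost, 0.99)

def expected_utility(openness: float, explained: bool,
                     reward: float = 10.0, explain_cost: float = 1.5) -> float:
    """Expected task reward minus the time cost of explaining."""
    eu = p_correct(openness, explained) * reward
    return eu - explain_cost if explained else eu

for openness in (0.2, 0.8):
    explain = expected_utility(openness, True) > expected_utility(openness, False)
    print(f"openness={openness}: explain={explain}")
# Prints: explain=False for openness=0.2, explain=True for openness=0.8,
# i.e., the same system would explain to one user and not another.
```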
Self-Adaptation for Machine Learning Based Systems. Proceedings of the 1st International Workshop on Software Architecture and Machine Learning (SAML).
2021. Today’s world is witnessing a shift from human-written software to machine-learned software, with the rise of systems that rely on machine learning. These systems typically operate in non-static environments, which are prone to unexpected changes, as is the case for self-driving cars and enterprise systems. In this context, machine-learned software can misbehave. Thus, it is paramount that these systems be capable of detecting problems with their machine-learned components and adapting themselves to maintain desired qualities. For instance, a fraud detection system that cannot adapt its machine-learned model to efficiently cope with emerging fraud patterns or changes in the volume of transactions is subject to losses of millions of dollars. In this paper, we take a first step towards the development of a framework aimed at self-adapting systems that rely on machine-learned components. We describe: (i) a set of causes of machine-learned component misbehavior and a set of adaptation tactics inspired by the literature on machine learning, motivating them with the aid of a running example; (ii) the required changes to the MAPE-K loop, a popular control loop for self-adaptive systems; and (iii) the challenges associated with developing this framework. We conclude the paper with a set of research questions to guide future work.
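In the spirit of the framework the abstract proposes, the sketch below wraps a MAPE-K loop around a machine-learned fraud detector, using accuracy drift as the misbehavior signal and retraining as one plausible adaptation tactic. The component names, the threshold, and the tactic choice are illustrative assumptions, not the paper's catalog.

```python
from collections import deque

class Knowledge:
    """Shared knowledge base (the K in MAPE-K): recent accuracy and policy."""
    def __init__(self):
        self.window = deque(maxlen=50)   # sliding window of per-batch accuracy
        self.threshold = 0.85            # minimum acceptable mean accuracy

def monitor(knowledge, batch_accuracy):
    """Monitor: record the ML component's accuracy on the latest batch."""
    knowledge.window.append(batch_accuracy)

def analyze(knowledge):
    """Analyze: detect misbehavior as mean accuracy drifting below threshold."""
    if not knowledge.window:
        return False
    return sum(knowledge.window) / len(knowledge.window) < knowledge.threshold

def plan(degraded):
    """Plan: pick a tactic; retraining is just one option a catalog of
    ML-inspired tactics might include."""
    return "retrain" if degraded else None

def execute(tactic, model):
    """Execute: apply the chosen tactic to the managed ML component."""
    if tactic == "retrain":
        model["version"] += 1            # stand-in for an actual retraining job

model = {"version": 1}
k = Knowledge()
for acc in [0.93, 0.91, 0.78, 0.72, 0.70]:   # an emerging fraud pattern degrades accuracy
    monitor(k, acc)
    execute(plan(analyze(k)), model)
print(model)  # the version increments while degradation persists
```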
Six Software Engineering Principles for Smarter Cyber-Physical Systems. 2021 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C), Proceedings of the Workshop on Self-Improving System Integration.
2021. Cyber-Physical Systems (CPS) integrate computational and physical components. With the digitisation of society and industry and the progressing integration of systems, CPS need to become “smarter” in the sense that they can adapt and learn to handle new and unexpected conditions, and improve over time. Smarter CPS present a combination of challenges that existing engineering methods have difficulty addressing: intertwined digital, physical and social spaces, need for heterogeneous modelling formalisms, demand for context-tied cooperation to achieve system goals, widespread uncertainty and disruptions in changing contexts, inherent human constituents, and continuous encounter with new situations. While approaches have been put forward to deal with some of these challenges, a coherent perspective on engineering smarter CPS is lacking. In this paper, we present six engineering principles for addressing the challenges of smarter CPS. As smarter CPS are software-intensive systems, we approach them from a software engineering perspective with the angle of self-adaptation, which offers an effective approach to deal with run-time change. The six principles create an integrated landscape for the engineering and operation of smarter CPS.