Biblio

Filters: Author is Alexander Pretschner
Alexander Pretschner, Julian Nida-Rümelin.  2021.  Engineering Responsibility. Talk at Regulation in AI, Bavarian Representation in Brussels, 03/24/2021.
Alexander Pretschner.  2021.  Ethics in Agile Development. Talk at TUM Executive MBA in Business and IT, 07/14/2021.
Severin Kacianka, Alexander Pretschner.  2021.  Designing Accountable Systems. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. :424–437.
Accountability is an often called for property of technical systems. It is a requirement for algorithmic decision systems, autonomous cyber-physical systems, and for software systems in general. As a concept, accountability goes back to the early history of Liberalism and is suggested as a tool to limit the use of power. This long history has also given us many, often slightly differing, definitions of accountability. The problem that software developers now face is to understand what accountability means for their systems and how to reflect it in a system's design. To enable the rigorous study of accountability in a system, we need models that are suitable for capturing such a varied concept. In this paper, we present a method to express and compare different definitions of accountability using Structural Causal Models. We show how these models can be used to evaluate a system's design and present a small use case based on an autonomous car.
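The structural causal models the paper builds on can be illustrated with a small sketch. The scenario, variable names, and equations below are hypothetical and not taken from the paper; they only show how binary endogenous variables, structural equations, and interventions fit together, assuming an acyclic model given in topological order.

```python
# Minimal sketch of a structural causal model (SCM) with binary variables.
# The logging scenario is a made-up example, not the paper's use case.

def evaluate(equations, exogenous, interventions=None):
    """Evaluate an acyclic SCM. Each endogenous variable is a function of the
    values computed so far; interventions (do-operations) override equations.
    Assumes `equations` is given in topological order."""
    interventions = interventions or {}
    values = dict(exogenous)
    for var, fn in equations.items():
        values[var] = interventions[var] if var in interventions else fn(values)
    return values

# Toy model: an event is logged if the logger was enabled and the event
# occurred; an operator can be held accountable only if the event was logged.
equations = {
    "event_logged": lambda v: v["logger_enabled"] and v["event_occurred"],
    "accountable":  lambda v: v["event_logged"],
}
context = {"logger_enabled": True, "event_occurred": True}

actual = evaluate(equations, context)
# But-for test: intervening to set event_logged to False flips accountable,
# so in this context event_logged is a (but-for) cause of accountable.
counterfactual = evaluate(equations, context,
                          interventions={"event_logged": False})
```

Comparing the actual and counterfactual worlds in this way is the basic move behind the definitions of accountability the paper compares.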
Klaus Bengler, Bianca Biebl, Werner Damm, Martin Fränzle, Willem Hagemann, Moritz Held, Klas Ihme, Severin Kacianka, Sebastian Lehnhoff, Andreas Luedtke et al.  2021.  A Metamodel of Human Cyber Physical Systems. Working Document of the PIRE Project on Assuring Individual, Social, and Cultural Embeddedness of Autonomous Cyber-Physical Systems (ISCE-ACPS). :41.
Amjad Ibrahim, Tobias Klesel, Ehsan Zibaei, Severin Kacianka, Alexander Pretschner.  2020.  Actual Causality Canvas: A General Framework for Explanation-based Socio-Technical Constructs. European Conference on Artificial Intelligence 2020.

The rapid deployment of digital systems into all aspects of daily life requires embedding social constructs into the digital world. Because of the complexity of these systems, there is a need for technical support to understand their actions. Social concepts, such as explainability, accountability, and responsibility, rely on a notion of actual causality. Encapsulated in Halpern and Pearl’s (HP) definition, actual causality conveniently integrates into the socio-technical world if operationalized in concrete applications. To the best of our knowledge, theories of actual causality such as the HP definition are either applied in correspondence with domain-specific concepts (e.g., a lineage of a database query) or demonstrated using straightforward philosophical examples. On the other hand, there is a lack of explicit, automated operationalizations of actual causality theories to help understand the actions of systems. Therefore, this paper proposes a unifying framework and an interactive platform (Actual Causality Canvas) to address the problem of operationalizing actual causality for different domains and purposes. We apply this framework in such areas as aircraft accidents, unmanned aerial vehicles, and artificial intelligence (AI) systems for purposes of forensic investigation, fault diagnosis, and explainable AI. We show that with minimal effort, using our general-purpose interactive platform, actual causality reasoning can be integrated into these domains.

Amjad Ibrahim, Simon Rehwald, Antoine Scemama, Florian Andres, Alexander Pretschner.  2020.  Causal Model Extraction from Attack Trees to Attribute Malicious Insiders Attacks. The Seventh International Workshop on Graphical Models for Security.

In the context of insiders, preventive security measures have a high likelihood of failing because insiders necessarily have sufficient privileges to perform their jobs. Instead, in this paper, we propose to treat the insider threat by a detective measure that holds an insider accountable in case of violations. However, to enable accountability, we need to create causal models that support reasoning about the causality of a violation. Current security models (e.g., attack trees) do not allow that. Still, they are a useful source for creating causal models. In this paper, we discuss the value added by causal models in the security context. Then, we capture the interaction between attack trees and causal models by proposing an automated approach to extract the latter from the former. Our approach considers insider-specific attack classes such as collusion attacks and causal-model-specific properties like preemption relations. We present an evaluation of the resulting causal models’ validity and effectiveness, in addition to the efficiency of the extraction process.
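The extraction idea can be hinted at with a toy sketch: an attack tree's AND/OR gates map naturally onto boolean equations over the tree's leaves. This is an illustration of that mapping only, not the paper's algorithm (which additionally handles collusion attacks and preemption); the tree and leaf names are hypothetical.

```python
# Illustrative only: translate an attack tree with AND/OR gates into a
# boolean predicate over its leaves (the basic attack steps). The paper's
# extraction approach is richer; names here are made up.

def tree_to_equation(node):
    """Recursively turn an attack-tree node into a predicate over leaf values."""
    if isinstance(node, str):                 # leaf: basic attack step
        return lambda leaves: leaves[node]
    gate, children = node                     # inner node: ("AND"|"OR", [...])
    subs = [tree_to_equation(c) for c in children]
    if gate == "AND":
        return lambda leaves: all(f(leaves) for f in subs)
    return lambda leaves: any(f(leaves) for f in subs)

# Hypothetical insider attack tree: exfiltrate data by either abusing one's
# own access, or colluding (stealing a colleague's badge AND their password).
tree = ("OR", ["abuse_own_access",
               ("AND", ["steal_badge", "steal_password"])])
attack_succeeds = tree_to_equation(tree)

print(attack_succeeds({"abuse_own_access": False,
                       "steal_badge": True, "steal_password": True}))  # True
```

Each inner node becomes one structural equation of a causal model, so the tree's leaves end up as the model's exogenous variables.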

Amjad Ibrahim, Alexander Pretschner.  2020.  From Checking to Inference: Actual Causality Computations as Optimization Problems. Automated Technology for Verification and Analysis - 18th International Symposium, ATVA 2020, Hanoi, Vietnam, October 19-23, 2020, Proceedings. 12302:343–359.
Amjad Ibrahim, Simon Rehwald, Alexander Pretschner.  2019.  Efficiently Checking Actual Causality with SAT Solving. Lecture Notes of the 2018 Marktoberdorf Summer School on Software Engineering. To appear.

Recent formal approaches towards causality have made the concept ready for incorporation into the technical world. However, causality reasoning is computationally hard, and no general algorithmic approach exists that efficiently infers the causes of effects. Thus, checking causality in the context of complex, multi-agent, and distributed socio-technical systems is a significant challenge. Therefore, we conceptualize an intelligent and novel algorithmic approach towards checking causality in acyclic causal models with binary variables, utilizing the optimization power of Boolean Satisfiability (SAT) solvers. We present two SAT encodings and an empirical evaluation of their efficiency and scalability. We show that causality is computed efficiently in less than 5 seconds for models that consist of more than 4000 variables.
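The encoding idea behind such approaches can be sketched in miniature: phrase a question about a causal model as a propositional formula and ask whether it is satisfiable. The toy model and query below are hypothetical, and brute-force enumeration stands in for the optimized SAT solvers the paper relies on.

```python
# Sketch of "causality check as satisfiability": a question about a binary
# causal model is posed as a propositional formula; a satisfying assignment
# is a witness. Real approaches hand a CNF encoding to a SAT solver instead
# of enumerating assignments as done here. The model and query are made up.
from itertools import product

def satisfiable(formula, variables):
    """Return a satisfying assignment of the propositional formula, or None."""
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if formula(assignment):
            return assignment
    return None

# Hypothetical binary model: the effect E holds iff A or B holds.
model = lambda v: v["E"] == (v["A"] or v["B"])
# Query: is there a context consistent with the model where A and E differ?
query = lambda v: model(v) and v["A"] != v["E"]

witness = satisfiable(query, ["A", "B", "E"])  # e.g. A=False, B=True, E=True
```

Enumeration is exponential in the number of variables, which is exactly why delegating the search to a SAT solver pays off at the scale of thousands of variables reported in the abstract.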

Amjad Ibrahim, Severin Kacianka, Alexander Pretschner, Charles Hartsell, Gabor Karsai.  2019.  Practical Causal Models for Cyber-Physical Systems. NASA Formal Methods. :211–227.

Unlike faults in classical systems, faults in Cyber-Physical Systems will often be caused by the system's interaction with its physical environment and social context, rendering these faults harder to diagnose. To complicate matters further, knowledge about the behavior and failure modes of a system are often collected in different models. We show how three of those models, namely attack trees, fault trees, and timed failure propagation graphs can be converted into Halpern-Pearl causal models, combined into a single holistic causal model, and analyzed with actual causality reasoning to detect and explain unwanted events. Halpern-Pearl models have several advantages over their source models, particularly that they allow for modeling preemption, consider the non-occurrence of events, and can incorporate additional domain knowledge. Furthermore, such holistic models allow for analysis across model boundaries, enabling detection and explanation of events that are beyond a single model. Our contribution here delineates a semi-automatic process to (1) convert different models into Halpern-Pearl causal models, (2) combine these models into a single holistic model, and (3) reason about system failures. We illustrate our approach with the help of an Unmanned Aerial Vehicle case study.

Janos Sztipanovits, Xenofon Koutsoukos, Gabor Karsai, Shankar Sastry, Claire Tomlin, Werner Damm, Martin Fränzle, Jochem Rieger, Alexander Pretschner, Frank Köster.  2019.  Science of design for societal-scale cyber-physical systems: challenges and opportunities. Cyber-Physical Systems. 5:145-172.

Emerging industrial platforms such as the Internet of Things (IoT), Industrial Internet (II) in the US and Industrie 4.0 in Europe have tremendously accelerated the development of new generations of Cyber-Physical Systems (CPS) that integrate humans and human organizations (H-CPS) with physical and computation processes and extend to societal-scale systems such as traffic networks, electric grids, or networks of autonomous systems where control is dynamically shifted between humans and machines. Although such societal-scale CPS can potentially affect many aspects of our lives, significant societal strains have emerged about the new technology trends and their impact on how we live. Emerging tensions extend to regulations, certification, insurance, and other societal constructs that are necessary for the widespread adoption of new technologies. If these systems evolve independently in different parts of the world, they will ‘hard-wire’ the social context in which they are created, making interoperation hard or impossible, decreasing reusability, and narrowing markets for products and services. While impacts of new technology trends on social policies have received attention, the other side of the coin – to make systems adaptable to social policies – is nearly absent from engineering and computer science design practice. This paper focuses on technologies that can be adapted to varying public policies and presents (1) hard problems and technical challenges and (2) some recent research approaches and opportunities. The central goal of this paper is to discuss the challenges and opportunities for constructing H-CPS that can be parameterized by social context. The focus is on three major application domains: connected vehicles, transactive energy systems, and unmanned aerial vehicles. Abbreviations: CPS: Cyber-physical systems; H-CPS: Human-cyber-physical systems; CV: Connected vehicle; II: Industrial Internet; IoT: Internet of Things

Florian Hauer, Raphael Stern, Alexander Pretschner.  2019.  Selecting Flow Optimal System Parameters for Automated Driving Systems. 22nd International Conference on Intelligent Transportation Systems.

Driver assist features such as adaptive cruise control (ACC) and highway assistants are becoming increasingly prevalent on commercially available vehicles. These systems are typically designed for safety and rider comfort. However, these systems are often not designed with the quality of the overall traffic flow in mind. For such a system to be beneficial to the traffic flow, it must be string stable and minimize the inter-vehicle spacing to maximize throughput, while still being safe. We propose a methodology to select autonomous driving system parameters that are both safe and string stable using the existing control framework already implemented on commercially available ACC vehicles. Optimal parameter values are selected via model-based optimization for an example highway assistant controller with path planning.
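The kind of closed-loop model over which such parameters would be searched can be sketched with a simple platoon simulation. The constant-time-headway controller and the gains below are hand-picked and hypothetical; they are not the paper's controller or its model-based optimization, only an illustration of how headway and gain parameters shape spacing and recovery after a lead-vehicle braking event.

```python
# Illustrative ACC platoon sketch with a constant-time-headway policy.
# Controller form, gains (kp, kd), headway h, and the braking scenario are
# all assumptions for illustration, not taken from the paper.

def simulate(n=5, h=1.5, kp=0.2, kd=0.7, length=5.0, dt=0.1, t_end=120.0):
    v = [20.0] * n                                     # speeds (m/s)
    x = [-i * (length + h * 20.0) for i in range(n)]   # equilibrium spacing
    min_gap = float("inf")
    t = 0.0
    while t < t_end:
        # Lead vehicle brakes from 20 to 15 m/s at 2 m/s^2 starting at t = 5 s.
        if t >= 5.0:
            v[0] = max(15.0, v[0] - 2.0 * dt)
        for i in range(1, n):
            gap = x[i - 1] - x[i]                      # front-to-front distance
            err = gap - length - h * v[i]              # spacing error
            a = kp * err + kd * (v[i - 1] - v[i])      # ACC acceleration command
            v[i] += a * dt
            min_gap = min(min_gap, gap)
        for i in range(n):
            x[i] += v[i] * dt
        t += dt
    return v, min_gap

speeds, min_gap = simulate()   # platoon settles near 15 m/s without collision
```

Sweeping h, kp, and kd over such a simulation while measuring spacing (throughput) and disturbance amplification is the spirit of the parameter selection the abstract describes.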

Severin Kacianka, Amjad Ibrahim, Alexander Pretschner, Alexander Trende, Andreas Lüdtke.  2019.  Extending Causal Models from Machines into Humans. 4th Causation, Responsibility, & Explanations in Science & Technology Workshop.

Causal models are increasingly suggested as a means to reason about the behavior of cyber-physical systems in socio-technical contexts. They allow us to analyze courses of events and reason about possible alternatives. Until now, however, such reasoning has been confined to the technical domain and limited to single systems or at most groups of systems. The humans that are an integral part of any such socio-technical system are usually ignored or dealt with by “expert judgment”. We show how a technical causal model can be extended with models of human behavior to cover the complexity and interplay between humans and technical systems. This integrated socio-technical causal model can then be used to reason not only about actions and decisions taken by the machine, but also about those taken by humans interacting with the system. In this paper we demonstrate the feasibility of merging causal models about machines with causal models about humans and illustrate the usefulness of this approach with a highly automated vehicle example.

Severin Kacianka, Alexander Pretschner.  2018.  Understanding and Formalizing Accountability for Cyber-Physical Systems. IEEE International Conference on Systems, Man, and Cybernetics. :3165–3170.

Accountability is the property of a system that enables the uncovering of causes for events and helps understand who or what is responsible for these events. Definitions and interpretations of accountability differ; however, they are typically expressed in natural language that obscures design decisions and the impact on the overall system. This paper presents a formal model to express the accountability properties of cyber-physical systems. To illustrate the usefulness of our approach, we demonstrate how three different interpretations of accountability can be expressed using the proposed model and describe the implementation implications through a case study. This formal model can be used to highlight context-specific elements of accountability mechanisms, define their capabilities, and express different notions of accountability. In addition, it makes design decisions explicit and facilitates discussion, analysis and comparison of different approaches.