
Filters: Keyword is Societal Design
Werner Damm, Johannes Helbig, Peter Liggesmeyer, Philipp Slusallek.  2021.  Trusted AI: Why We Need a New Major Research and Innovation Initiative for AI in Germany and Europe. White paper submitted to the German Federal Ministry of Education and Research. 41 pp.
Sulayman K. Sowe, Martin Fränzle, Jan-Patrick Osterloh, Alexander Trende, Lars Weber, Andreas Lüdtke.  2020.  Challenges for Integrating Humans into Vehicular Cyber-Physical Systems. Software Engineering and Formal Methods. 12226:20–26.
Advances in Vehicular Cyber-Physical Systems (VCPS) are the primary enablers of the shift from no automation to fully autonomous vehicles (AVs). One of the impacts of this shift is to develop safe AVs in which most or all of the functions of the human driver are replaced with an intelligent system. However, while some progress has been made in equipping AVs with advanced AI capabilities, VCPS designers are still faced with the challenge of designing trustworthy AVs that are in sync with the unpredictable behaviours of humans. In order to address this challenge, we present a model that describes how a Human Ambassador component can be integrated into the overall design of a new generation of VCPS. A scenario is presented to demonstrate how the model can work in practice. Formalisation and co-simulation challenges associated with integrating the Human Ambassador component and future work we are undertaking are also discussed.
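To make the architecture sketched in this abstract more concrete, the following is a minimal, hypothetical sketch of how a Human Ambassador component could sit in a co-simulation loop, feeding a driver-state estimate to the vehicle automation at each step. All class names, fields, and thresholds are invented for illustration and are not the interfaces proposed in the paper.

```python
# Hypothetical sketch only: a "Human Ambassador" mediating between cabin
# sensing and the vehicle automation in one co-simulation step.
from dataclasses import dataclass

@dataclass
class DriverEstimate:
    attention: float          # 0.0 = fully distracted, 1.0 = fully attentive
    intends_takeover: bool

class HumanAmbassador:
    """Wraps a human/driver model and exposes the driver's estimated state."""
    def observe(self, cabin_sensors: dict) -> DriverEstimate:
        gaze_on_road = cabin_sensors.get("gaze_on_road", 0.0)
        hands_on_wheel = cabin_sensors.get("hands_on_wheel", False)
        return DriverEstimate(
            attention=gaze_on_road,
            intends_takeover=hands_on_wheel and gaze_on_road > 0.5,
        )

class VehicleAutomation:
    """Consumes the ambassador's estimate when choosing the next action."""
    def step(self, estimate: DriverEstimate) -> str:
        if estimate.intends_takeover:
            return "hand over control"
        if estimate.attention < 0.3:
            return "issue attention warning"
        return "continue automated driving"

# One co-simulation step: cabin sensing -> ambassador -> automation decision.
ambassador, automation = HumanAmbassador(), VehicleAutomation()
estimate = ambassador.observe({"gaze_on_road": 0.8, "hands_on_wheel": False})
print(automation.step(estimate))   # continue automated driving
```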
Werner Damm, Martin Fränzle, Willem Hagemann, Paul Kröger, Astrid Rakow.  2019.  Dynamic Conflict Resolution Using Justification Based Reasoning. Proceedings of the 4th Workshop on Formal Reasoning about Causation, Responsibility, and Explanations in Science and Technology. 308:47–65.
Amjad Ibrahim, Alexander Pretschner.  2020.  From Checking to Inference: Actual Causality Computations as Optimization Problems. Automated Technology for Verification and Analysis - 18th International Symposium, ATVA 2020, Hanoi, Vietnam, October 19-23, 2020, Proceedings. 12302:343–359.
Severin Kacianka, Alexander Pretschner.  2021.  Designing Accountable Systems. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. :424–437.
Accountability is an often-called-for property of technical systems. It is a requirement for algorithmic decision systems, autonomous cyber-physical systems, and for software systems in general. As a concept, accountability goes back to the early history of Liberalism and is suggested as a tool to limit the use of power. This long history has also given us many, often slightly differing, definitions of accountability. The problem that software developers now face is to understand what accountability means for their systems and how to reflect it in a system's design. To enable the rigorous study of accountability in a system, we need models that are suitable for capturing such a varied concept. In this paper, we present a method to express and compare different definitions of accountability using Structural Causal Models. We show how these models can be used to evaluate a system's design and present a small use case based on an autonomous car.
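As an illustration of the kind of intervention-based reasoning that structural causal models support, the sketch below evaluates a toy SCM with and without an intervention (the do-operator) and performs a simple but-for check. The variables, structural equations, and scenario are invented for illustration; they are not the models or accountability definitions developed in the paper.

```python
# Minimal sketch of a structural causal model (SCM) with interventions.
# Variables and structural equations are hypothetical, not from the paper.

def solve(scm, exogenous, do=None):
    """Evaluate endogenous variables from exogenous values, applying any
    interventions (the do-operator) instead of the structural equations."""
    do = do or {}
    values = {**exogenous, **do}
    for var, equation in scm.items():     # assumed to be in topological order
        values[var] = do.get(var, equation(values))
    return values

# Toy accountability scenario for an autonomous car: a sensor fault or a
# planner bug delays braking; late braking with a pedestrian present
# results in a collision.
scm = {
    "brake_late": lambda v: v["sensor_fault"] or v["planner_bug"],
    "collision":  lambda v: v["brake_late"] and v["pedestrian_present"],
}
exogenous = {"sensor_fault": True, "planner_bug": False, "pedestrian_present": True}

factual = solve(scm, exogenous)
counterfactual = solve(scm, exogenous, do={"sensor_fault": False})

# But-for check: the collision occurs factually but not once the sensor
# fault is intervened away, so the fault is a but-for cause here.
print(factual["collision"], counterfactual["collision"])   # True False
```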
Martin Fränzle, Paul Kröger.  2020.  Guess What I'm Doing! - Rendering Formal Verification Methods Ripe for the Era of Interacting Intelligent Systems. Leveraging Applications of Formal Methods, Verification and Validation: Applications - 9th International Symposium on Leveraging Applications of Formal Methods, ISoLA 2020, Rhodes, Greece, October 20-30, 2020, Proceedings, Part III. 12478:255–272.
Erika Puiutta, Eric M. S. P. Veith.  2020.  Explainable Reinforcement Learning: A Survey. Machine Learning and Knowledge Extraction. :77–95.
Explainable Artificial Intelligence (XAI), i.e., the development of more transparent and interpretable AI models, has gained increased traction over the last few years. This is due to the fact that, in conjunction with their growth into powerful and ubiquitous tools, AI models exhibit one detrimental characteristic: a performance-transparency trade-off. This describes the fact that the more complex a model's inner workings, the less clear it is how its predictions or decisions were achieved. But, especially considering Machine Learning (ML) methods like Reinforcement Learning (RL) where the system learns autonomously, the necessity to understand the underlying reasoning for their decisions becomes apparent. Since, to the best of our knowledge, there exists no single work offering an overview of Explainable Reinforcement Learning (XRL) methods, this survey attempts to address this gap. We give a short summary of the problem, a definition of important terms, and offer a classification and assessment of current XRL methods. We found that a) the majority of XRL methods function by mimicking and simplifying a complex model instead of designing an inherently simple one, and b) XRL (and XAI) methods often neglect to consider the human side of the equation, not taking into account research from related fields like psychology or philosophy. Thus, an interdisciplinary effort is needed to adapt the generated explanations to a (non-expert) human user in order to effectively progress in the field of XRL and XAI in general.
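As a concrete instance of the "mimic and simplify" pattern the survey identifies as dominant, the sketch below distills a black-box policy into a shallow decision-tree surrogate whose rules can be read directly. The toy policy, features, and thresholds are invented for illustration and do not come from any specific surveyed method; NumPy and scikit-learn are assumed to be available.

```python
# Sketch of post-hoc explanation by surrogate modelling: query a black-box
# policy on sampled states, then fit an interpretable decision tree to it.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def black_box_policy(state):
    # Stand-in for a trained RL agent: brake (1) if an obstacle is near
    # and the closing speed is high, otherwise keep driving (0).
    distance, closing_speed = state
    return int(distance < 20.0 and closing_speed > 5.0)

rng = np.random.default_rng(0)
states = rng.uniform([0.0, 0.0], [100.0, 30.0], size=(5000, 2))
actions = np.array([black_box_policy(s) for s in states])

surrogate = DecisionTreeClassifier(max_depth=2).fit(states, actions)
print(export_text(surrogate, feature_names=["distance", "closing_speed"]))
print("fidelity to the black box:", surrogate.score(states, actions))
```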
Bianca Biebl, Klaus Bengler.  2021.  I Spy with My Mental Eye: Analyzing Compensatory Scanning in Drivers with Homonymous Visual Field Loss. Proceedings of the 21st Congress of the International Ergonomics Association (IEA 2021). :552–559.
Drivers with visual field loss show heterogeneous driving performance due to their varying ability to compensate for their perceptual deficits. This paper presents a theoretical investigation of the factors that determine the development of adaptive scanning strategies. The application of the Saliency-Effort-Expectancy-Value (SEEV) model to the use case of homonymous hemianopia at intersections indicates that a lack of guidance and a demand for increased gaze movements in the blind visual field aggravate scanning. The adaptation of scanning behavior to these challenges consequently requires the presence of adequate mental models of the driving scene and of the individual visual abilities. These factors should be considered in the development of assistance systems and training for visually impaired drivers.
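To make the role of the SEEV model in this analysis concrete, the following sketch computes a purely illustrative expected attention distribution over two areas of interest for a driver with left-sided homonymous hemianopia: expected attention grows with salience, expectancy, and value, and shrinks with the effort of moving gaze there. All weights and scores are made-up placeholders, not values from the paper.

```python
# Illustrative SEEV-style computation: higher salience, expectancy and value
# attract attention; higher (gaze-movement) effort inhibits it.
def seev_score(salience, effort, expectancy, value,
               w_s=1.0, w_ef=1.0, w_ex=1.0, w_v=1.0):
    return w_s * salience - w_ef * effort + w_ex * expectancy + w_v * value

# Hypothetical intersection scenario: the blind-side area offers no visual
# guidance (low salience) and demands large gaze/head movements (high effort),
# although scanning it is just as valuable for safety.
areas = {
    "sighted right field": seev_score(salience=0.8, effort=0.2, expectancy=0.7, value=0.9),
    "blind left field":    seev_score(salience=0.0, effort=0.8, expectancy=0.4, value=0.9),
}
total = sum(max(s, 0.0) for s in areas.values())
for area, score in areas.items():
    print(f"{area}: predicted share of attention ≈ {max(score, 0.0) / total:.2f}")
```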
Alexander Trende, Anirudh Unni, Jochem Rieger, Martin Fränzle.  2021.  Modelling Turning Intention in Unsignalized Intersections with Bayesian Networks. International Conference on Human-Computer Interaction. :289–296.
Turning through oncoming traffic at unsignalized intersections can lead to safety-critical situations, contributing to 7.4% of all non-severe vehicle crashes. One of the main reasons for these crashes is human error in the form of incorrect estimation of the gap size with respect to the Principal Other Vehicle (POV). Vehicle-to-vehicle (V2V) technology promises to increase safety in various traffic situations. V2V infrastructure combined with further integration of sensor technology and human intention prediction could help reduce the frequency of these safety-critical situations by predicting dangerous turning manoeuvres in advance, thus allowing the POV to prepare an appropriate reaction. We performed a driving simulator study to investigate turning decisions at unsignalized intersections. Over the course of the experiments, we recorded over 5000 turning decisions with respect to different gap sizes. Afterwards, the participants filled out a questionnaire featuring demographic and driving-style-related items. The behavioural and questionnaire data were then used to fit a Bayesian Network model to predict the turning intention of the subject vehicle. We evaluate the model and present the results of a feature importance analysis. The model is able to correctly predict the turning intention with an accuracy of 74%. Furthermore, the feature importance analysis indicates that user-specific information is a valuable contribution to the model. We discuss how a working turning-intention prediction could reduce the number of safety-critical situations.
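As an illustration of the modelling approach described in the abstract, the sketch below fits a small discrete Bayesian network relating gap size and driving style to a turning decision, using the pgmpy library. The network structure, variable names, and data are invented for illustration and are not the model, features, or data from the study; pgmpy is assumed to be installed (newer releases expose the same class as DiscreteBayesianNetwork).

```python
# Toy Bayesian-network sketch for turning-intention prediction (pgmpy assumed).
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Invented discrete data: gap-size category, self-reported driving style,
# and whether the driver accepted the gap and turned.
data = pd.DataFrame({
    "gap_size":      ["small", "small", "medium", "large", "large", "medium"] * 50,
    "driving_style": ["defensive", "sporty", "sporty", "defensive", "sporty", "defensive"] * 50,
    "turns":         [0, 1, 1, 1, 1, 0] * 50,
})

# Both the situational feature (gap size) and the user-specific feature
# (driving style) are parents of the turning decision.
model = BayesianNetwork([("gap_size", "turns"), ("driving_style", "turns")])
model.fit(data, estimator=MaximumLikelihoodEstimator)

inference = VariableElimination(model)
posterior = inference.query(variables=["turns"],
                            evidence={"gap_size": "small", "driving_style": "sporty"})
print(posterior)   # probability of turning given a small gap and a sporty driver
```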
Birte Kramer, Christian Neurohr, Matthias Büker, Eckard Böde, Martin Fränzle, Werner Damm.  2020.  Identification and Quantification of Hazardous Scenarios for Automated Driving. Model-Based Safety and Assessment. :163–178.
We present an integrated method for safety assessment of automated driving systems which covers the aspects of functional safety and safety of the intended functionality (SOTIF), including identification and quantification of hazardous scenarios. The proposed method uses and combines established exploration and analytical tools for hazard analysis and risk assessment in the automotive domain, while adding important enhancements to enable their applicability to the uncharted territory of safety analyses for automated driving. The method is tailored to support existing safety processes mandated by the standards ISO 26262 and ISO/PAS 21448 and complements them where necessary. It has been developed in close cooperation with major German automotive manufacturers and suppliers within the PEGASUS project (https://www.pegasusprojekt.de/en). Practical evaluation has been carried out by applying the method to the PEGASUS Highway-Chauffeur, a conceptual automated driving function considered as a common reference system within the project.
Nikolaus Poechhacker, Severin Kacianka.  2021.  Algorithmic Accountability in Context. Socio-Technical Perspectives on Structural Causal Models. Frontiers in Big Data. 3:55.
The increasing use of automated decision making (ADM) and machine learning sparked an ongoing discussion about algorithmic accountability. Within computer science, a new form of producing accountability has been discussed recently: causality as an expression of algorithmic accountability, formalized using structural causal models (SCMs). However, causality itself is a concept that needs further exploration. Therefore, in this contribution we confront ideas of SCMs with insights from social theory, more explicitly pragmatism, and argue that formal expressions of causality must always be seen in the context of the social system in which they are applied. This results in the formulation of further research questions and directions.