

2021-08-13
Moritz Held, Jelmer Borst, Anirudh Unni, Jochem Rieger.  2021.  Utilizing ACT-R to investigate interactions between working memory and visuospatial attention while driving. Proceedings of the Annual Meeting of the Cognitive Science Society. 43(1)
In an effort towards predicting mental workload while driving, previous research found interactions between working memory load and visuospatial demands, which complicate the accurate prediction of momentary mental workload. To investigate this interaction, the cognitive concepts of working memory load and visuospatial attention were integrated into a cognitive driving model using the cognitive architecture ACT-R. The model was developed to drive safely on a multi-lane highway with ongoing traffic while performing a secondary n-back task using speed signs. To manipulate visuospatial demands, the model drives through a construction site with reduced lane width in certain blocks of the experiment. Furthermore, it is able to handle complex driving situations such as overtaking traffic while adjusting its speed according to the n-back task. The behavioral results show a negative effect on driving performance with increasing difficulty of the secondary task. Additionally, the model indicates an interaction at a common, task-unspecific level.
2021-08-12
Anirudh Unni, Jochem Rieger.  2021.  Characterizing and modeling human states in human-CPS interactions at the brain-level.
presented at workshop ‘Safety Critical Human-Cyber-Physical Systems’, Oct 29, 2020
Klaus Bengler, Bianca Biebl, Christian Lehsing.  2021.  Webinar presentation: Why and How?
presented at webinar ‘Driving Despite Impairment’, organized by Prof. Dr. phil. Klaus Bengler & Bianca Biebl, Nov 11, 2020.
Eckhard Böde, Werner Damm.  2021.  Simulation of Abstract Scenarios: Towards Automated Tooling in Criticality Analysis.
invited presentation at Workshop “From autonomous driving to innovative vehicle concepts”, Swiss Academy of Engineering Sciences SATW, December 2020
Werner Damm.  2021.  Challenges for Assuring Safety for AI based Mobility Applications.
invited presentation at the award event of the Artificial Intelligence Dependability Assessment (AI-DA) Student Challenge, Siemens Mobility, July 16, 2021
Werner Damm, Andreas Hein, Mark Busse.  2021.  The Car that Cares. Patent application at the German Patent Office (DPMA). :20.
Klaus Bengler, Bianca Biebl, Werner Damm, Martin Fränzle, Willem Hagemann, Moritz Held, Klas Ihme, Severin Kacianka, Sebastian Lehnhoff, Andreas Luedtke et al..  2021.  A Metamodel of Human Cyber Physical Systems. Working Document of the PIRE Project on Assuring Individual, Social, and Cultural Embeddedness of Autonomous Cyber-Physical Systems (ISCE-ACPS). :41.
2021-08-11
Werner Damm, Johannes Helbig, Peter Liggesmeyer, Philipp Slusallek.  2021.  Trusted AI: Why We Need a New Major Research and Innovation Initiative for AI in Germany and Europe. White paper submitted to German Federal Ministry of Education and Research. :41.
Sulayman K. Sowe, Martin Fränzle, Jan-Patrick Osterloh, Alexander Trende, Lars Weber, Andreas Lüdtke.  2020.  Challenges for Integrating Humans into Vehicular Cyber-Physical Systems. Software Engineering and Formal Methods. 12226:20–26.
Advances in Vehicular Cyber-Physical Systems (VCPS) are the primary enablers of the shift from no automation to fully autonomous vehicles (AVs). One consequence of this shift is the need to develop safe AVs in which most or all of the functions of the human driver are replaced by an intelligent system. However, while some progress has been made in equipping AVs with advanced AI capabilities, VCPS designers still face the challenge of designing trustworthy AVs that are in sync with the unpredictable behaviours of humans. To address this challenge, we present a model that describes how a Human Ambassador component can be integrated into the overall design of a new generation of VCPS. A scenario is presented to demonstrate how the model can work in practice. Formalisation and co-simulation challenges associated with integrating the Human Ambassador component, as well as the future work we are undertaking, are also discussed.
Werner Damm, Martin Fränzle, Willem Hagemann, Paul Kröger, Astrid Rakow.  2019.  Dynamic Conflict Resolution Using Justification Based Reasoning. Proceedings of the 4th Workshop on Formal Reasoning about Causation, Responsibility, and Explanations in Science and Technology. 308:47–65.
Amjad Ibrahim, Alexander Pretschner.  2020.  From Checking to Inference: Actual Causality Computations as Optimization Problems. Automated Technology for Verification and Analysis - 18th International Symposium, ATVA 2020, Hanoi, Vietnam, October 19-23, 2020, Proceedings. 12302:343–359.
Severin Kacianka, Alexander Pretschner.  2021.  Designing Accountable Systems. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. :424–437.
Accountability is an often-demanded property of technical systems. It is a requirement for algorithmic decision systems, autonomous cyber-physical systems, and for software systems in general. As a concept, accountability goes back to the early history of Liberalism and was suggested as a tool to limit the use of power. This long history has also produced many, often slightly differing, definitions of accountability. The problem that software developers now face is to understand what accountability means for their systems and how to reflect it in a system's design. To enable the rigorous study of accountability in a system, we need models that are suitable for capturing such a varied concept. In this paper, we present a method to express and compare different definitions of accountability using Structural Causal Models. We show how these models can be used to evaluate a system's design and present a small use case based on an autonomous car.
Martin Fränzle, Paul Kröger.  2020.  Guess What I'm Doing! – Rendering Formal Verification Methods Ripe for the Era of Interacting Intelligent Systems. Leveraging Applications of Formal Methods, Verification and Validation: Applications - 9th International Symposium on Leveraging Applications of Formal Methods, ISoLA 2020, Rhodes, Greece, October 20-30, 2020, Proceedings, Part III. 12478:255–272.
Erika Puiutta, Eric M. S. P. Veith.  2020.  Explainable Reinforcement Learning: A Survey. Machine Learning and Knowledge Extraction. :77–95.
Explainable Artificial Intelligence (XAI), i.e., the development of more transparent and interpretable AI models, has gained increased traction over the last few years. This is because, in conjunction with their growth into powerful and ubiquitous tools, AI models exhibit one detrimental characteristic: a performance-transparency trade-off, meaning that the more complex a model's inner workings are, the less clear it is how its predictions or decisions were reached. But especially for Machine Learning (ML) methods like Reinforcement Learning (RL), where the system learns autonomously, the necessity to understand the underlying reasoning for its decisions becomes apparent. Since, to the best of our knowledge, no single work offers an overview of Explainable Reinforcement Learning (XRL) methods, this survey attempts to address this gap. We give a short summary of the problem, a definition of important terms, and offer a classification and assessment of current XRL methods. We found that a) the majority of XRL methods function by mimicking and simplifying a complex model instead of designing an inherently simple one, and b) XRL (and XAI) methods often neglect the human side of the equation, not taking into account research from related fields like psychology or philosophy. Thus, an interdisciplinary effort is needed to adapt the generated explanations to a (non-expert) human user in order to make effective progress in the field of XRL and XAI in general.
Bianca Biebl, Klaus Bengler.  2021.  I Spy with My Mental Eye: Analyzing Compensatory Scanning in Drivers with Homonymous Visual Field Loss. Proceedings of the 21st Congress of the International Ergonomics Association (IEA 2021). :552–559.
Drivers with visual field loss show heterogeneous driving performance due to their varying ability to compensate for perceptual deficits. This paper presents a theoretical investigation of the factors that determine the development of adaptive scanning strategies. The application of the Saliency-Effort-Expectancy-Value (SEEV) model to the use case of homonymous hemianopia at intersections indicates that a lack of guidance and a demand for increased gaze movements in the blind visual field aggravate scanning. Adapting scanning behavior to these challenges consequently requires adequate mental models of the driving scene and of the individual's visual abilities. These factors should be considered in the development of assistance systems and training programs for visually impaired drivers.