A Social Science-based Approach to Explanations for (Game) AI

Title: A Social Science-based Approach to Explanations for (Game) AI
Publication Type: Conference Paper
Year of Publication: 2018
Authors: Volz, V., Majchrzak, K., Preuss, M.
Conference Name: 2018 IEEE Conference on Computational Intelligence and Games (CIG)
Date Published: Aug. 2018
Publisher: IEEE
ISBN Number: 978-1-5386-4359-4
Keywords: AI revolution, artificial intelligence, baseline definition, bottom-up approach, brain modeling, collaboration, complex algorithmic systems, complexity theory, computer games, deep learning methods, explainable AI, games, GVGAI framework, human working memory limitation, image recognition, learning (artificial intelligence), object oriented modeling, resilience, scalability, social science-based approach, social sciences, super-sensors, working memory, XAI
Abstract:
The current AI revolution provides us with many new, but often very complex, algorithmic systems. This complexity limits not only the understanding but also the acceptance of, e.g., deep learning methods. In recent years, explainable AI (XAI) has been proposed as a remedy. However, this research is rarely supported by publications on explanations from the social sciences. We suggest a bottom-up approach to explanations for (game) AI, starting from a baseline definition of understandability informed by the concept of limited human working memory. We detail our approach and demonstrate its application to two games from the GVGAI framework. Finally, we discuss our vision of how additional concepts from the social sciences can be integrated into our proposed approach and how the results can be generalised.

URL: https://ieeexplore.ieee.org/document/8490361
DOI: 10.1109/CIG.2018.8490361
Citation Key: volz_social_2018