Biblio

Filters: Keyword is explainable AI
2023-03-31
Huang, Jun, Wang, Zerui, Li, Ding, Liu, Yan.  2022.  The Analysis and Development of an XAI Process on Feature Contribution Explanation. 2022 IEEE International Conference on Big Data (Big Data). :5039–5048.
Explainable Artificial Intelligence (XAI) research focuses on effective explanation techniques to understand and build AI models with trust, reliability, safety, and fairness. Feature importance explanation summarizes feature contributions so that end-users can understand model decisions. However, XAI methods may produce varied summaries, which calls for further analysis to evaluate the consistency of multiple XAI methods on the same model and data set. This paper defines metrics to measure the consistency of feature contribution explanation summaries under feature importance order and saliency map. Driven by these consistency metrics, we develop an XAI process oriented around the XAI criterion of feature importance, which performs a systematic selection of XAI techniques and evaluation of explanation consistency. We demonstrate the process on three topics involving twelve XAI methods: a search ranking system, code vulnerability detection, and image classification. Our contribution is a practical and systematic process with defined consistency metrics to produce rigorous feature contribution explanations.
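As a concrete illustration of one way such a consistency check could be implemented (an assumption for illustration, not the paper's exact metric definitions), the sketch below scores the agreement of feature-importance orderings from several XAI methods with average pairwise Kendall's tau; the method names and scores are placeholders.

```python
# Minimal sketch: rank-order consistency of feature-importance summaries
# from several XAI methods. The metric (mean pairwise Kendall's tau) and
# the example scores are illustrative assumptions, not the paper's own.
from itertools import combinations
import numpy as np
from scipy.stats import kendalltau

def order_consistency(importances):
    """Mean pairwise Kendall's tau between the feature orderings implied
    by each method's importance scores (1.0 = perfectly consistent)."""
    taus = []
    for a, b in combinations(importances, 2):
        tau, _ = kendalltau(importances[a], importances[b])
        taus.append(tau)
    return float(np.mean(taus))

# Importance scores for the same five features from three hypothetical methods.
scores = {
    "shap": np.array([0.42, 0.10, 0.31, 0.05, 0.12]),
    "lime": np.array([0.38, 0.12, 0.35, 0.04, 0.11]),
    "ig":   np.array([0.30, 0.20, 0.25, 0.10, 0.15]),
}
print(f"order consistency: {order_consistency(scores):.3f}")
```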
2022-06-10
Nguyen, Tien N., Choo, Raymond.  2021.  Human-in-the-Loop XAI-enabled Vulnerability Detection, Investigation, and Mitigation. 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE). :1210–1212.
The need for cyber resilience is increasingly important in our technology-dependent society, where computing systems, devices, and data will continue to be targeted by cyber attackers. Hence, we propose a conceptual framework called ‘Human-in-the-Loop Explainable-AI-Enabled Vulnerability Detection, Investigation, and Mitigation’ (HXAI-VDIM). Specifically, instead of resolving complex security vulnerability scenarios solely as the output of an AI/ML model, we integrate the security analyst or forensic investigator into the man-machine loop and leverage explainable AI (XAI) to combine both AI and Intelligence Assistant (IA) to amplify human intelligence in both proactive and reactive processes. Our goal is for HXAI-VDIM to integrate human and machine in an interactive and iterative loop with security visualization that utilizes human intelligence to guide the XAI-enabled system and generate refined solutions.
2022-06-06
Uchida, Hikaru, Matsubara, Masaki, Wakabayashi, Kei, Morishima, Atsuyuki.  2020.  Human-in-the-loop Approach towards Dual Process AI Decisions. 2020 IEEE International Conference on Big Data (Big Data). :3096–3098.
How to develop AI systems that can explain their decisions is one of today's important and active research topics. Inspired by the dual-process theory in psychology, this paper proposes a human-in-the-loop approach to developing System-2 AI that makes inferences logically and outputs interpretable explanations. Our proposed method first asks crowd workers to propose understandable features of objects of multiple classes and then collects training data from the Internet to generate classifiers for those features. Logical decision rules over the set of generated classifiers can explain why each object belongs to a particular class. In a preliminary experiment, we applied our method to the image classification of Asian national flags and examined the effectiveness and issues of our method. In future studies, we plan to combine the System-2 AI with System-1 AI (e.g., neural networks) to output decisions efficiently.
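A minimal sketch of the rule layer only, assuming the per-feature classifiers have already been trained on crowd-proposed features; the flag features and rules below are hypothetical, not the paper's actual rule set.

```python
# Sketch of interpretable rule-based classification over feature detectors.
# The detectors are assumed to be trained separately (e.g., on crowd-proposed
# features); the features and rules here are illustrative only.
from typing import Callable, Dict, List, Tuple

FeatureDetectors = Dict[str, Callable[[object], bool]]

RULES: List[Tuple[str, List[str]]] = [
    ("Japan",      ["white_background", "red_circle"]),
    ("Bangladesh", ["green_background", "red_circle"]),
    ("Vietnam",    ["red_background", "yellow_star"]),
]

def classify_with_explanation(image, detectors: FeatureDetectors):
    """Return (label, explanation); the satisfied rule doubles as the reason."""
    present = {name for name, detect in detectors.items() if detect(image)}
    for label, conditions in RULES:
        if all(c in present for c in conditions):
            return label, f"{label} because " + " AND ".join(conditions)
    return None, f"no rule matched; detected features: {sorted(present)}"
```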
2021-12-22
Murray, Bryce, Anderson, Derek T., Havens, Timothy C..  2021.  Actionable XAI for the Fuzzy Integral. 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–8.
The adoption of artificial intelligence (AI) into domains that impact human life (healthcare, agriculture, security and defense, etc.) has led to an increased demand for explainable AI (XAI). Herein, we focus on an underrepresented piece of the XAI puzzle: information fusion. To date, a number of low-level XAI explanation methods have been proposed for the fuzzy integral (FI). However, these explanations are tailored to experts, and it is not always clear what to do with the information they return. In this article, we first review and categorize existing FI work according to recent XAI nomenclature. Second, we identify a set of initial actions that a user can take in response to these low-level statistical, graphical, local, and linguistic XAI explanations. Third, we investigate the design of an interactive, user-friendly XAI report. Two case studies, one synthetic and one real, show the results of following the recommended actions to understand and improve tasks involving classification.
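One classic low-level importance summary for fuzzy-measure-based fusion is the Shapley index of the measure; the hedged sketch below computes it for a small illustrative measure (the measure values are assumptions, not ones learned from data, and this is only one of the indices the article surveys).

```python
# Sketch: Shapley importance indices of a fuzzy measure g, one standard
# low-level explanation for fuzzy-integral fusion. The example measure is
# illustrative, not learned from data as in the paper.
from itertools import combinations
from math import factorial

def shapley_indices(sources, g):
    """Average marginal contribution of each source under measure g."""
    n = len(sources)
    phi = {}
    for i in sources:
        others = [s for s in sources if s != i]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                A = frozenset(subset)
                weight = factorial(n - k - 1) * factorial(k) / factorial(n)
                total += weight * (g[A | {i}] - g[A])
        phi[i] = total
    return phi  # indices sum to g(full set)

g = {frozenset(): 0.0,
     frozenset({"a"}): 0.4, frozenset({"b"}): 0.3, frozenset({"c"}): 0.2,
     frozenset({"a", "b"}): 0.8, frozenset({"a", "c"}): 0.6,
     frozenset({"b", "c"}): 0.5, frozenset({"a", "b", "c"}): 1.0}
print(shapley_indices(["a", "b", "c"], g))
```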
2021-05-05
Rathod, Jash, Joshi, Chaitali, Khochare, Janavi, Kazi, Faruk.  2020.  Interpreting a Black-Box Model used for SCADA Attack detection in Gas Pipelines Control System. 2020 IEEE 17th India Council International Conference (INDICON). :1–7.
Various machine learning techniques are considered "black boxes" because of their limited interpretability and explainability. This cannot be afforded, especially in the domain of Cyber-Physical Systems, where failures can cause huge losses to industrial and government infrastructure. Supervisory Control And Data Acquisition (SCADA) systems need to detect cyber-attacks and be protected from them. Thus, we need to adopt approaches that make the system secure, can explain the predictions made by the model, and interpret the model in a human-understandable format. Recently, autoencoders have shown great success in attack detection in SCADA systems. Numerous interpretable machine learning techniques have been developed to help us explain and interpret models. The work presented here is a novel approach that uses techniques such as Local Interpretable Model-Agnostic Explanations (LIME) and Layer-wise Relevance Propagation (LRP) to interpret autoencoder networks trained on a gas pipeline control system to detect attacks in the system.
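Because LIME is model-agnostic, one way to apply it in this setting (a hedged sketch of a plausible setup, not necessarily the authors' exact pipeline) is to explain the autoencoder's per-sample reconstruction error as a regression target; the model, training matrix, and SCADA feature names below are assumed placeholders.

```python
# Sketch: explaining an autoencoder-based anomaly score with LIME.
# `autoencoder` (Keras-style .predict), X_train, X_test, and the feature
# names are assumed to exist and are illustrative only.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

def reconstruction_error(x):
    """Per-sample mean squared reconstruction error, used as the anomaly score."""
    recon = autoencoder.predict(x)
    return np.mean((x - recon) ** 2, axis=1)

explainer = LimeTabularExplainer(
    training_data=X_train,
    feature_names=["pressure", "flow_rate", "setpoint", "pump_state"],
    mode="regression",
)

# Which sensor readings pushed this suspicious sample's anomaly score up?
exp = explainer.explain_instance(X_test[0], reconstruction_error, num_features=4)
print(exp.as_list())
```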
2021-03-01
Tao, J., Xiong, Y., Zhao, S., Xu, Y., Lin, J., Wu, R., Fan, C..  2020.  XAI-Driven Explainable Multi-view Game Cheating Detection. 2020 IEEE Conference on Games (CoG). :144–151.
Online gaming is one of the most successful applications, with a large number of players interacting in a persistent online virtual world through the Internet. However, some cheating players gain improper advantages over normal players by using illegal automated plugins, which has brought huge harm to game health and player enjoyment. Game companies have devoted much effort to cheating detection with multi-view data sources and have achieved great accuracy improvements by applying artificial intelligence (AI) techniques. However, generating explanations for cheating detection from multiple views still remains a challenging task. To respond to the different purposes of explainability in AI models for different audience profiles, we propose EMGCD, the first explainable multi-view game cheating detection framework driven by explainable AI (XAI). It combines cheating explainers with cheating classifiers from different views to generate individual, local, and global explanations, which contribute to evidence generation, reason generation, model debugging, and model compression. EMGCD has been implemented and deployed in multiple game productions at NetEase Games, achieving remarkable and trustworthy performance. Our framework can also easily generalize to other related tasks in online games, such as explainable recommender systems, explainable churn prediction, etc.
D’Alterio, P., Garibaldi, J. M., John, R. I..  2020.  Constrained Interval Type-2 Fuzzy Classification Systems for Explainable AI (XAI). 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–8.
In recent year, there has been a growing need for intelligent systems that not only are able to provide reliable classifications but can also produce explanations for the decisions they make. The demand for increased explainability has led to the emergence of explainable artificial intelligence (XAI) as a specific research field. In this context, fuzzy logic systems represent a promising tool thanks to their inherently interpretable structure. The use of a rule-base and linguistic terms, in fact, have allowed researchers to create models that are able to produce explanations in natural language for each of the classifications they make. So far, however, designing systems that make use of interval type-2 (IT2) fuzzy logic and also give explanations for their outputs has been very challenging, partially due to the presence of the type-reduction step. In this paper, it will be shown how constrained interval type-2 (CIT2) fuzzy sets represent a valid alternative to conventional interval type-2 sets in order to address this issue. Through the analysis of two case studies from the medical domain, it is shown how explainable CIT2 classifiers are produced. These systems can explain which rules contributed to the creation of each of the endpoints of the output interval centroid, while showing (in these examples) the same level of accuracy as their IT2 counterpart.
2021-02-01
Wickramasinghe, C. S., Marino, D. L., Grandio, J., Manic, M..  2020.  Trustworthy AI Development Guidelines for Human System Interaction. 2020 13th International Conference on Human System Interaction (HSI). :130–136.
Artificial Intelligence (AI) is influencing almost all areas of human life. Even though these AI-based systems frequently provide state-of-the-art performance, humans still hesitate to develop, deploy, and use AI systems. The main reason for this is the lack of trust in AI systems caused by the deficiency of transparency in existing AI systems. As a solution, the “Trustworthy AI” research area emerged with the goal of defining guidelines and frameworks for improving user trust in AI systems, allowing humans to use them without fear. While trust in AI is an active area of research, very little work exists where the focus is on building human trust to improve the interactions between humans and AI systems. In this paper, we provide a concise survey of concepts of trustworthy AI. Further, we present trustworthy AI development guidelines for improving user trust and enhancing the interactions between AI systems and humans that happen during the AI system life cycle.
2021-01-15
Kobayashi, H., Kadoguchi, M., Hayashi, S., Otsuka, A., Hashimoto, M..  2020.  An Expert System for Classifying Harmful Content on the Dark Web. 2020 IEEE International Conference on Intelligence and Security Informatics (ISI). :1–6.

In this research, we examine and develop an expert system with a mechanism to automate crime category classification and threat level assessment, using information collected by crawling the dark web. We constructed a bag of words from 250 posts on the dark web and developed an expert system that takes term frequencies as input and classifies sample posts into six criminal categories (drugs, stolen credit cards, passwords, counterfeit products, child pornography, and others) and three threat levels (high, middle, low). Contrary to prior expectations, our simple and explainable expert system performs competitively with other existing systems. In short, our experiment with 1,500 posts on the dark web shows a recall rate of 76.4% for the six-category criminal classification and 83% for the three-level threat discrimination on 100 randomly sampled posts.
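A minimal sketch in the spirit of such an expert system, assuming illustrative keyword lists and thresholds rather than the authors' actual 250-post bag of words or rule base:

```python
# Sketch: term-frequency expert system for dark web post triage.
# Keyword lists and threat thresholds are illustrative placeholders.
from collections import Counter
import re

CATEGORY_KEYWORDS = {
    "drugs":        {"mdma", "cannabis", "gram", "stealth"},
    "credit_card":  {"cvv", "dumps", "fullz", "visa"},
    "passwords":    {"password", "credentials", "combo", "leak"},
    "counterfeit":  {"replica", "counterfeit", "fake", "passport"},
}

def classify_post(text):
    """Return (category, threat_level, per-category scores as the explanation)."""
    tokens = Counter(re.findall(r"[a-z]+", text.lower()))
    scores = {cat: sum(tokens[w] for w in words)
              for cat, words in CATEGORY_KEYWORDS.items()}
    category = max(scores, key=scores.get) if any(scores.values()) else "others"
    hits = scores.get(category, 0)
    threat = "high" if hits >= 5 else "middle" if hits >= 2 else "low"
    return category, threat, scores
```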

2020-10-06
Amarasinghe, Kasun, Wickramasinghe, Chathurika, Marino, Daniel, Rieger, Craig, Manic, Milos.  2018.  Framework for Data Driven Health Monitoring of Cyber-Physical Systems. 2018 Resilience Week (RWS). :25–30.

Modern infrastructure is heavily reliant on systems with interconnected computational and physical resources, named Cyber-Physical Systems (CPSs). Hence, building resilient CPSs is a prime need, and continuous monitoring of CPS operational health is essential for improving resilience. This paper presents a framework for calculating and monitoring health in CPSs using data-driven techniques. The main advantages of this data-driven methodology are the ability to leverage heterogeneous data streams available from the CPS and the ability to perform the monitoring with minimal a priori domain knowledge. The main objective of the framework is to warn operators of any degradation in the cyber, physical, or overall health of the CPS. The framework consists of four components: 1) data acquisition and feature extraction, 2) state identification and real-time state estimation, 3) cyber-physical health calculation, and 4) operator warning generation. Further, this paper presents an initial implementation of the first three phases of the framework on a CPS testbed involving a microgrid simulation and a cyber network that connects the grid with its controller. The feature extraction method and the use of unsupervised learning algorithms are discussed. Experimental results are presented for the first two phases and show that the data reflect different operating states and that visualization techniques can be used to extract the relationships among data features.
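A minimal sketch of the state-identification phase only (component 2), assuming feature vectors have already been extracted from the cyber and physical data streams; the cluster count is an illustrative choice, and the paper's own algorithmic choices may differ.

```python
# Sketch: unsupervised operating-state identification and online state
# estimation from extracted CPS features. The number of states and the
# clustering algorithm (k-means) are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def identify_states(features, n_states=4):
    """Offline phase: learn operating states from historical feature vectors."""
    scaler = StandardScaler().fit(features)
    model = KMeans(n_clusters=n_states, n_init=10, random_state=0)
    model.fit(scaler.transform(features))
    return scaler, model

def estimate_state(scaler, model, sample):
    """Online phase: map a new feature vector to its nearest operating state."""
    return int(model.predict(scaler.transform(sample.reshape(1, -1)))[0])
```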

2019-12-16
Guo, Wenbo, Mu, Dongliang, Xu, Jun, Su, Purui, Wang, Gang, Xing, Xinyu.  2018.  LEMNA: Explaining Deep Learning Based Security Applications. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :364–379.
While deep learning has shown great potential in various domains, the lack of transparency has limited its application in security or safety-critical areas. Existing research has attempted to develop explanation techniques to provide interpretable explanations for each classification decision. Unfortunately, current methods are optimized for non-security tasks (e.g., image analysis). Their key assumptions are often violated in security applications, leading to poor explanation fidelity. In this paper, we propose LEMNA, a high-fidelity explanation method dedicated to security applications. Given an input data sample, LEMNA generates a small set of interpretable features to explain how the input sample is classified. The core idea is to approximate a local area of the complex deep learning decision boundary using a simple interpretable model. The local interpretable model is specially designed to (1) handle feature dependency to better work with security applications (e.g., binary code analysis); and (2) handle nonlinear local boundaries to boost explanation fidelity. We evaluate our system using two popular deep learning applications in security (a malware classifier and a function start detector for binary reverse-engineering). Extensive evaluations show that LEMNA's explanations have a much higher fidelity level compared to existing methods. In addition, we demonstrate practical use cases of LEMNA to help machine learning developers validate model behavior, troubleshoot classification errors, and automatically patch the errors of the target models.
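To make the local-surrogate idea concrete: LEMNA itself fits a fused-lasso-regularized mixture regression model, but the simplified sketch below uses a plain sparse linear surrogate over perturbed neighbors to show the general mechanism; `predict_proba` is an assumed black-box classifier interface, not LEMNA's actual estimator.

```python
# Simplified local-surrogate sketch (LEMNA's real estimator is a mixture
# regression model with fused lasso; this plain Lasso stand-in only
# illustrates the general idea of locally approximating the boundary).
import numpy as np
from sklearn.linear_model import Lasso

def local_surrogate(x, predict_proba, n_samples=500, sigma=0.1, top_k=5):
    """Fit a sparse linear model around x and return the top-k feature indices."""
    rng = np.random.default_rng(0)
    neighbors = x + rng.normal(0.0, sigma, size=(n_samples, x.shape[0]))
    target = predict_proba(neighbors)[:, 1]      # black-box score for class 1
    surrogate = Lasso(alpha=0.01).fit(neighbors, target)
    top = np.argsort(-np.abs(surrogate.coef_))[:top_k]
    return top, surrogate.coef_[top]
```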
2018-12-10
Volz, V., Majchrzak, K., Preuss, M..  2018.  A Social Science-based Approach to Explanations for (Game) AI. 2018 IEEE Conference on Computational Intelligence and Games (CIG). :1–2.

The current AI revolution provides us with many new but often very complex algorithmic systems. This complexity limits not only understanding but also acceptance of, for example, deep learning methods. In recent years, explainable AI (XAI) has been proposed as a remedy. However, this research is rarely supported by publications on explanations from the social sciences. We suggest a bottom-up approach to explanations for (game) AI, starting from a baseline definition of understandability informed by the concept of limited human working memory. We detail our approach and demonstrate its application to two games from the GVGAI framework. Finally, we discuss our vision of how additional concepts from the social sciences can be integrated into our proposed approach and how the results can be generalised.

Murray, B., Islam, M. A., Pinar, A. J., Havens, T. C., Anderson, D. T., Scott, G..  2018.  Explainable AI for Understanding Decisions and Data-Driven Optimization of the Choquet Integral. 2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–8.

To date, numerous ways have been created to learn a fusion solution from data. However, a gap exists in terms of understanding the quality of what was learned and how trustworthy the fusion is for future (i.e., new) data. In part, the current paper is driven by the demand for so-called explainable AI (XAI). Herein, we discuss methods for XAI of the Choquet integral (ChI), a parametric nonlinear aggregation function. Specifically, we review existing indices, and we introduce new data-centric XAI tools. These various XAI-ChI methods are explored in the context of fusing a set of heterogeneous deep convolutional neural networks for remote sensing.
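For readers unfamiliar with the ChI, the short sketch below evaluates the discrete Choquet integral for three sources with respect to a hand-picked (illustrative, not learned) fuzzy measure stored on frozensets.

```python
# Sketch: discrete Choquet integral of inputs h with respect to a fuzzy
# measure g. The measure values here are illustrative, not learned from data.
def choquet(h, g):
    """Sort sources by descending input value and accumulate measure increments."""
    sources = sorted(h, key=h.get, reverse=True)
    total, prefix, g_prev = 0.0, frozenset(), 0.0
    for s in sources:
        prefix = prefix | {s}
        total += h[s] * (g[prefix] - g_prev)
        g_prev = g[prefix]
    return total

# Three-source example with a monotone illustrative measure.
g = {frozenset(): 0.0,
     frozenset({"a"}): 0.4, frozenset({"b"}): 0.3, frozenset({"c"}): 0.2,
     frozenset({"a", "b"}): 0.8, frozenset({"a", "c"}): 0.6,
     frozenset({"b", "c"}): 0.5, frozenset({"a", "b", "c"}): 1.0}
print(choquet({"a": 0.9, "b": 0.5, "c": 0.2}, g))  # 0.9*0.4 + 0.5*0.4 + 0.2*0.2 = 0.6
```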