Biblio

Filters: Keyword is explainable artificial intelligence
2022-12-01
Abeyagunasekera, Sudil Hasitha Piyath, Perera, Yuvin, Chamara, Kenneth, Kaushalya, Udari, Sumathipala, Prasanna, Senaweera, Oshada.  2022.  LISA : Enhance the explainability of medical images unifying current XAI techniques. 2022 IEEE 7th International conference for Convergence in Technology (I2CT). :1–9.
This work proposes a unified approach to increasing the explainability of predictions made by Convolutional Neural Networks (CNNs) on medical images using currently available Explainable Artificial Intelligence (XAI) techniques. The method, LISA, incorporates multiple techniques, namely Local Interpretable Model-Agnostic Explanations (LIME), Integrated Gradients, Anchors, and SHapley Additive exPlanations (SHAP), a Shapley-value-based approach, to provide explanations for the predictions of black-box models. This unified method increases confidence in the black-box model's decisions, allowing it to be employed in crucial applications under the supervision of human specialists. In this work, a chest X-ray (CXR) classification model for identifying Covid-19 patients is trained using transfer learning to illustrate the applicability of the XAI techniques and of the unified method (LISA) to explaining model predictions. An ImageNet-based Inception V2 model is used as the transfer-learning backbone to derive predictions.
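
The abstract does not specify how the individual attributions are fused; as a rough illustration of the general idea, the sketch below (not the paper's LISA implementation) combines two gradient-based attribution methods from the captum library into one heatmap for a single CNN prediction. The model, input tensor, and simple averaging fusion are placeholder assumptions.

```python
# Illustrative sketch only: fusing two gradient-based XAI attributions for a CNN
# prediction into a single normalized heatmap. Model, input, and the averaging
# fusion are placeholder assumptions, not the paper's LISA pipeline.
import torch
from torchvision.models import resnet18
from captum.attr import IntegratedGradients, GradientShap

model = resnet18(weights=None).eval()        # stand-in for the transfer-learned CXR classifier
image = torch.rand(1, 3, 224, 224)           # stand-in for a preprocessed chest X-ray
target = int(model(image).argmax(dim=1))     # class whose prediction we explain

# Attribution 1: Integrated Gradients from an all-zero baseline.
ig = IntegratedGradients(model)
ig_attr = ig.attribute(image, baselines=torch.zeros_like(image), target=target)

# Attribution 2: GradientSHAP, sampling baselines from a small reference set.
gs = GradientShap(model)
baselines = torch.rand(8, 3, 224, 224)       # stand-in background distribution
gs_attr = gs.attribute(image, baselines=baselines, n_samples=16, target=target)

def normalize(a: torch.Tensor) -> torch.Tensor:
    """Collapse channels and rescale an attribution map to [0, 1]."""
    a = a.abs().sum(dim=1, keepdim=True)
    return (a - a.min()) / (a.max() - a.min() + 1e-8)

# Naive fusion: average the normalized maps so regions flagged by both methods dominate.
fused_heatmap = (normalize(ig_attr) + normalize(gs_attr)) / 2.0
print(fused_heatmap.shape)                   # (1, 1, 224, 224), ready to overlay on the CXR
```
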
Yeo, Guo Feng Anders, Hudson, Irene, Akman, David, Chan, Jeffrey.  2022.  A Simple Framework for XAI Comparisons with a Case Study. 2022 5th International Conference on Artificial Intelligence and Big Data (ICAIBD). :501–508.
The number of publications related to Explainable Artificial Intelligence (XAI) has increased rapidly over the last decade. However, the subjective nature of explainability has led to a lack of consensus on commonly used definitions of explainability, and the differing problem statements falling under the XAI label have resulted in a lack of comparisons. This paper proposes, in broad terms, a simple comparison framework for XAI methods based on their output and what we call their practical attributes. The aim of the framework is to ensure that everything that can be held constant for the purpose of comparison is held constant, and to ignore many of the subjective elements present in the area of XAI. An example comparison along the lines of the proposed framework is performed on local, post-hoc, model-agnostic XAI algorithms designed to measure feature importance/contribution for a queried instance. These algorithms are assessed on two criteria using synthetic datasets across a range of classifiers. The first is based on selecting features which contribute to the underlying data structure, and the second is how accurately the algorithms select the features used in a decision tree path. The results from the first comparison showed that when the classifier was able to pick up the underlying pattern in the data, the LIME algorithm was the most accurate at selecting the underlying ground-truth features. The second test returned mixed results: in some instances the XAI algorithms were able to accurately return the features used to produce predictions, but this result was not consistent.
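
As a concrete (and much simplified) illustration of the ground-truth recovery criterion, the sketch below builds a synthetic dataset with known informative features, explains individual predictions with a hand-rolled LIME-style local linear surrogate, and reports how often the surrogate's top-k features match that ground truth. The dataset sizes, kernel width, and choice of surrogate are assumptions, not the paper's setup.

```python
# Minimal sketch (not the paper's framework): score a LIME-style local surrogate
# by how often its top-k features match the known informative features of a
# synthetic dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# shuffle=False keeps the 3 informative features in columns 0..2 (the ground truth).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

ground_truth, k, rng = {0, 1, 2}, 3, np.random.default_rng(0)
scale = X.std(axis=0)

def local_importances(x_query, n_samples=500, kernel_width=1.0):
    """Fit a proximity-weighted linear surrogate around x_query (LIME-style)
    and return absolute coefficients as local feature importances."""
    Z = x_query + rng.normal(0.0, scale, size=(n_samples, X.shape[1]))
    proba = clf.predict_proba(Z)[:, 1]
    dist = np.linalg.norm((Z - x_query) / scale, axis=1)
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))
    surrogate = Ridge(alpha=1.0).fit(Z, proba, sample_weight=weights)
    return np.abs(surrogate.coef_)

hits = []
for x_query in X[:50]:                       # evaluate on 50 queried instances
    top_k = set(np.argsort(local_importances(x_query))[-k:])
    hits.append(len(top_k & ground_truth) / k)
print(f"mean top-{k} ground-truth recovery: {np.mean(hits):.2f}")
```
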
2021-12-22
Nascita, Alfredo, Montieri, Antonio, Aceto, Giuseppe, Ciuonzo, Domenico, Persico, Valerio, Pescapè, Antonio.  2021.  Unveiling MIMETIC: Interpreting Deep Learning Traffic Classifiers via XAI Techniques. 2021 IEEE International Conference on Cyber Security and Resilience (CSR). :455–460.
The widespread use of powerful mobile devices has deeply affected the mix of traffic traversing both the Internet and enterprise networks (with bring-your-own-device policies). Traffic encryption has become extremely common, and the quick proliferation of mobile apps and their simple distribution and update have created a particularly challenging scenario for traffic classification and its uses, especially network-security-related ones. The recent rise of Deep Learning (DL) has responded to this challenge by providing a solution to the time-consuming and human-limited handcrafted feature design, together with better classification performance. The counterpart of these advantages is the lack of interpretability of these black-box approaches, limiting or preventing their adoption in contexts where the reliability of results or the interpretability of policies is necessary. To cope with these limitations, eXplainable Artificial Intelligence (XAI) techniques have seen intensive research in recent years. Along these lines, our work applies XAI-based techniques (namely, Deep SHAP) to interpret the behavior of a state-of-the-art multimodal DL traffic classifier. As opposed to common results seen in XAI, we aim at a global interpretation rather than sample-based ones. The results quantify the importance of each modality (payload- or header-based), and of specific subsets of inputs (e.g., TLS SNI and TCP Window Size), in determining the classification outcome, down to the per-class (viz. application) level. The analysis is based on a recent publicly released dataset focused on mobile-app traffic.
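
The global-interpretation step boils down to aggregating per-sample SHAP attributions. The sketch below illustrates that aggregation only, with random arrays standing in for Deep SHAP outputs and hypothetical payload/header shapes; it is not the MIMETIC classifier or the paper's dataset.

```python
# Rough sketch of the aggregation idea only: turning per-sample (local) Deep SHAP
# attributions into a global, per-class importance for each input modality.
# The SHAP arrays below are random stand-ins; in practice they would come from
# something like shap.DeepExplainer applied to the multimodal classifier.
import numpy as np

n_samples, n_classes = 1000, 5
# Hypothetical shapes: payload modality = 576 input bytes, header modality = 12 packets x 4 fields.
shap_payload = np.random.randn(n_classes, n_samples, 576)
shap_header = np.random.randn(n_classes, n_samples, 12, 4)

def global_importance(shap_values):
    """Mean absolute SHAP value per class, summed over that modality's inputs."""
    abs_vals = np.abs(shap_values).mean(axis=1)                   # average over samples
    return abs_vals.reshape(abs_vals.shape[0], -1).sum(axis=1)    # sum over inputs

payload_imp = global_importance(shap_payload)
header_imp = global_importance(shap_header)
share = payload_imp / (payload_imp + header_imp)                  # payload share per class
for c, s in enumerate(share):
    print(f"class {c}: payload contributes {s:.1%} of total |SHAP|")
```
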
Renda, Alessandro, Ducange, Pietro, Gallo, Gionatan, Marcelloni, Francesco.  2021.  XAI Models for Quality of Experience Prediction in Wireless Networks. 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–6.
Explainable Artificial Intelligence (XAI) is expected to play a key role in the design phase of next-generation cellular networks. As 5G is being implemented and 6G is just in the conceptualization stage, it is increasingly clear that AI will be essential to manage the ever-growing complexity of the network. However, AI models will not only be required to deliver high levels of performance, but also high levels of explainability. In this paper we show how fuzzy models may be well suited to address this challenge. We compare fuzzy and classical decision tree models with a Random Forest (RF) classifier on a Quality of Experience classification dataset. The comparison suggests that, in our setting, fuzzy decision trees are easier to interpret and perform comparably or even better than classical ones in identifying stall events in a video streaming application. The accuracy drop with respect to the RF classifier, which is considered a black-box ensemble model, is counterbalanced by a significant gain in terms of explainability.
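
The classical half of such a comparison is easy to reproduce in outline. The sketch below (with a synthetic stand-in for the QoE dataset and hypothetical feature names) contrasts a shallow, rule-readable decision tree with a random-forest black box; the fuzzy decision trees used in the paper are not reproduced here.

```python
# Illustrative sketch of the classical side of the comparison: a shallow decision
# tree whose rules can be printed, versus a random-forest "black box", on a
# synthetic stand-in for a QoE stall-detection dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["throughput_mbps", "rtt_ms", "buffer_s", "bitrate_kbps"]  # hypothetical QoE features
X, y = make_classification(n_samples=3000, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print(f"decision tree accuracy: {tree.score(X_te, y_te):.3f}")
print(f"random forest accuracy: {forest.score(X_te, y_te):.3f}")
# The tree's decision logic is directly readable, which is the explainability gain
# being traded against the ensemble's (usually small) accuracy advantage.
print(export_text(tree, feature_names=feature_names))
```
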
2021-03-01
Tan, R., Khan, N., Guan, L.  2020.  Locality Guided Neural Networks for Explainable Artificial Intelligence. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
In current deep network architectures, deeper layers tend to contain hundreds of independent neurons, which makes it hard for humans to understand how they interact with each other. By organizing the neurons by correlation, humans can observe how clusters of neighbouring neurons interact with each other. In this paper, we propose a novel backpropagation-based training algorithm, called Locality Guided Neural Network (LGNN), which preserves locality between neighbouring neurons within each layer of a deep network. Heavily motivated by the Self-Organizing Map (SOM), the goal is to enforce a local topology on each layer of a deep network such that neighbouring neurons are highly correlated with each other. This method contributes to the domain of Explainable Artificial Intelligence (XAI), which aims to alleviate the black-box nature of current AI methods and make them understandable by humans. Our method aims to achieve XAI in deep learning without changing the structure of current models or requiring any post-processing. This paper focuses on Convolutional Neural Networks (CNNs), but the method can theoretically be applied to any type of deep learning architecture. In our experiments, we train various VGG and Wide ResNet (WRN) networks for image classification on CIFAR100. In-depth analyses presenting both qualitative and quantitative results demonstrate that our method is capable of enforcing a topology on each layer while achieving a small increase in classification accuracy.
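
One loose way to read the locality idea is as an auxiliary penalty that rewards correlation between neighbouring channels of a layer. The sketch below is such an interpretation, not the paper's exact LGNN update; the layer choice, penalty weight, and training-loop names in the comments are assumptions.

```python
# A loose interpretation of the locality idea (not the paper's exact LGNN rule):
# an auxiliary penalty that pushes neighbouring channels of a conv layer to be
# correlated, added to the usual classification loss during training.
import torch

def locality_penalty(activations: torch.Tensor) -> torch.Tensor:
    """activations: (N, C, H, W). Returns the mean of (1 - correlation) between
    each pair of neighbouring channels, so minimizing it encourages a 1-D local
    topology across the channel axis."""
    n, c, h, w = activations.shape
    flat = activations.reshape(n, c, h * w)
    flat = flat - flat.mean(dim=2, keepdim=True)
    flat = flat / (flat.norm(dim=2, keepdim=True) + 1e-8)
    corr = (flat[:, :-1] * flat[:, 1:]).sum(dim=2)   # correlation of channel i with channel i+1
    return (1.0 - corr).mean()

# Usage sketch inside a training step (model, images, labels, criterion assumed):
#   feats = model.features(images)                 # activations of the layer being organized
#   logits = model.classifier(feats.flatten(1))
#   loss = criterion(logits, labels) + 0.1 * locality_penalty(feats)
#   loss.backward()
```
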
Sarathy, N., Alsawwaf, M., Chaczko, Z.  2020.  Investigation of an Innovative Approach for Identifying Human Face-Profile Using Explainable Artificial Intelligence. 2020 IEEE 18th International Symposium on Intelligent Systems and Informatics (SISY). :155–160.
Human identification is a well-researched topic that keeps evolving. Advances in technology have made it easy to train models, or to use existing ones, to detect several features of the human face. When it comes to identifying a human face from the side, there are many opportunities to advance biometric identification research further. This paper investigates human face identification based on the side profile by extracting facial features and characterizing the feature sets with geometric ratio expressions. These geometric ratio expressions are computed into feature vectors, and the last stage uses weighted means to measure similarity. The problem is addressed with an eXplainable Artificial Intelligence (XAI) approach. Findings from this research, based on a small dataset, indicate that the approach offers encouraging results, and further investigation could have a significant impact on how face profiles are identified. Performance of the proposed system is validated using metrics such as Precision, False Acceptance Rate, False Rejection Rate, and True Positive Rate. Multiple simulations indicate an Equal Error Rate of 0.89.
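
A minimal sketch of the ingredients named in the abstract follows: scale-invariant geometric ratios from (hypothetical) profile landmarks, a weighted-mean similarity score, and an Equal Error Rate estimate from genuine and impostor score distributions. The landmark layout, ratio choices, and weights are illustrative assumptions, not the paper's feature set.

```python
# Minimal sketch, not the paper's pipeline: geometric-ratio features, a
# weighted-mean similarity, and an EER estimate from score distributions.
import numpy as np

def ratio_features(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (k, 2) profile points (e.g. nose tip, chin, brow, ear canal).
    Returns ratios of pairwise distances, which are scale-invariant."""
    d = lambda i, j: np.linalg.norm(landmarks[i] - landmarks[j])
    base = d(0, 1) + 1e-8                        # reference distance
    return np.array([d(0, 2) / base, d(1, 2) / base, d(0, 3) / base, d(2, 3) / base])

def similarity(f1, f2, weights):
    """Weighted mean of per-ratio agreements in [0, 1]."""
    agreement = 1.0 - np.abs(f1 - f2) / (np.abs(f1) + np.abs(f2) + 1e-8)
    return np.average(agreement, weights=weights)

def equal_error_rate(genuine_scores, impostor_scores):
    """Sweep a threshold; EER is where false rejection meets false acceptance."""
    thresholds = np.linspace(0.0, 1.0, 1001)
    frr = np.array([(genuine_scores < t).mean() for t in thresholds])
    far = np.array([(impostor_scores >= t).mean() for t in thresholds])
    i = np.argmin(np.abs(frr - far))
    return (frr[i] + far[i]) / 2.0

# Toy usage with random stand-ins for enrolled and probe landmark sets.
rng = np.random.default_rng(0)
weights = np.array([0.4, 0.3, 0.2, 0.1])         # hypothetical per-ratio weights
genuine, impostor = [], []
for _ in range(200):
    p = rng.normal(size=(4, 2))
    genuine.append(similarity(ratio_features(p),
                              ratio_features(p + rng.normal(0, 0.02, (4, 2))), weights))
    impostor.append(similarity(ratio_features(rng.normal(size=(4, 2))),
                               ratio_features(rng.normal(size=(4, 2))), weights))
print(f"EER ≈ {equal_error_rate(np.array(genuine), np.array(impostor)):.2f}")
```
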
Tao, J., Xiong, Y., Zhao, S., Xu, Y., Lin, J., Wu, R., Fan, C.  2020.  XAI-Driven Explainable Multi-view Game Cheating Detection. 2020 IEEE Conference on Games (CoG). :144–151.
Online gaming is one of the most successful applications, with a large number of players interacting in a persistent online virtual world through the Internet. However, some cheating players gain improper advantages over normal players by using illegal automated plugins, which has brought great harm to game health and player enjoyment. The game industry has devoted much effort to cheating detection with multi-view data sources and has achieved great accuracy improvements by applying artificial intelligence (AI) techniques. However, generating explanations for cheating detection from multiple views still remains a challenging task. To respond to the different purposes of explainability in AI models for different audience profiles, we propose EMGCD, the first explainable multi-view game cheating detection framework driven by explainable AI (XAI). It couples cheating explainers with cheating classifiers from different views to generate individual, local, and global explanations, which contribute to evidence generation, reason generation, model debugging, and model compression. EMGCD has been implemented and deployed in multiple game productions at NetEase Games, achieving remarkable and trustworthy performance. Our framework can also easily generalize to other related tasks in online games, such as explainable recommender systems, explainable churn prediction, etc.
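
A loose structural sketch of the multi-view idea follows: one classifier per data view over a shared cheating label, a simple probability fusion, and a per-view global importance report standing in for the attached explainers. The view split, models, and permutation-importance explainer are assumptions, not the EMGCD components.

```python
# Loose structural sketch only (not the EMGCD framework): per-view classifiers,
# probability fusion, and a per-view importance report as the "explainer".
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# One synthetic dataset whose columns are split into two hypothetical views.
X_all, y = make_classification(n_samples=2000, n_features=16, n_informative=6,
                               random_state=0)
views = {"behaviour_view": X_all[:, :8], "social_view": X_all[:, 8:]}

classifiers, importances = {}, {}
for name, X in views.items():
    clf = GradientBoostingClassifier(random_state=0).fit(X, y)
    classifiers[name] = clf
    # Per-view global "reason generation": which inputs drive this view's verdict.
    importances[name] = permutation_importance(clf, X, y, n_repeats=5,
                                               random_state=0).importances_mean

# Fuse the per-view cheating probabilities into a single verdict per player.
fused = np.mean([classifiers[n].predict_proba(views[n])[:, 1] for n in views], axis=0)
print("players flagged as cheating:", int((fused > 0.5).sum()), "of", len(y))
for name, imp in importances.items():
    print(f"{name}: most influential feature index = {int(np.argmax(imp))}")
```
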
D’Alterio, P., Garibaldi, J. M., John, R. I.  2020.  Constrained Interval Type-2 Fuzzy Classification Systems for Explainable AI (XAI). 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–8.
In recent years, there has been a growing need for intelligent systems that not only provide reliable classifications but can also produce explanations for the decisions they make. The demand for increased explainability has led to the emergence of explainable artificial intelligence (XAI) as a specific research field. In this context, fuzzy logic systems represent a promising tool thanks to their inherently interpretable structure. The use of a rule base and linguistic terms, in fact, has allowed researchers to create models that can produce natural-language explanations for each of the classifications they make. So far, however, designing systems that make use of interval type-2 (IT2) fuzzy logic and also give explanations for their outputs has been very challenging, partially due to the presence of the type-reduction step. In this paper, it is shown how constrained interval type-2 (CIT2) fuzzy sets represent a valid alternative to conventional interval type-2 sets in order to address this issue. Through the analysis of two case studies from the medical domain, it is shown how explainable CIT2 classifiers are produced. These systems can explain which rules contributed to the creation of each endpoint of the output interval centroid, while showing (in these examples) the same level of accuracy as their IT2 counterparts.
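
A tiny illustration of the interval type-2 ingredients the paper builds on is given below; the constrained (CIT2) construction and its type-reduction are not reproduced. The membership functions, blur width, and the hypothetical medical rule are assumptions.

```python
# Tiny illustration of interval type-2 basics only: each linguistic term carries
# a lower and an upper membership function, so a rule fires over an interval
# rather than a single degree. The CIT2 construction is not reproduced here.
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def it2_membership(x, a, b, c, blur=0.5):
    """Upper/lower membership of an IT2 term: the same triangle, widened and
    narrowed by `blur`, giving the footprint of uncertainty."""
    upper = trimf(x, a - blur, b, c + blur)
    lower = trimf(x, a + blur, b, c - blur)
    return lower, upper

# Hypothetical rule: "IF glucose IS high AND bmi IS high THEN risk IS elevated".
glucose, bmi = 7.2, 31.0
lo_g, up_g = it2_membership(glucose, 6.0, 9.0, 12.0)
lo_b, up_b = it2_membership(bmi, 27.0, 35.0, 43.0)
firing_interval = (min(lo_g, lo_b), min(up_g, up_b))   # min t-norm on both bounds
print("rule firing interval:", firing_interval)
# An explainable CIT2 classifier can then report which rules' intervals shaped each
# endpoint of the output centroid, which is the explanation style described above.
```
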
Meskauskas, Z., Jasinevicius, R., Kazanavicius, E., Petrauskas, V.  2020.  XAI-Based Fuzzy SWOT Maps for Analysis of Complex Systems. 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–8.
The classical SWOT methodology and many of the tools based on it are very static: they are used for a single stable project and lack dynamics [1]. This paper proposes the idea of combining several SWOT analyses, enriched with the computing-with-words (CWW) paradigm, into a single network. In this network, each individual analysis of a situation is treated as a node. The whole structure is based on fuzzy cognitive maps (FCMs) with forward and backward chaining, and is therefore called fuzzy SWOT maps. The fuzzy SWOT maps methodology newly introduces the dynamics of interacting projects, as exists in a real dynamic environment. The whole fuzzy SWOT maps network has explainable artificial intelligence (XAI) traits, because each node in the network is a "white box": the entire reasoning chain can be tracked and checked to determine why a particular decision was made, or why and how one project affects another. To confirm the viability of the approach, a case with three interacting projects has been analyzed with a developed prototype software tool, and the results are presented.
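
The reasoning engine behind such a network can be sketched as a plain fuzzy cognitive map update. The concepts, weight matrix, and sigmoid squashing below are illustrative assumptions rather than the paper's actual fuzzy SWOT model.

```python
# Bare-bones fuzzy cognitive map update, as a sketch of the reasoning engine such
# a fuzzy SWOT map network could run on; concepts and weights are illustrative.
import numpy as np

concepts = ["project_A_strength", "project_A_threat", "project_B_opportunity"]
# W[i, j]: causal influence of concept j on concept i, in [-1, 1].
W = np.array([[0.0, -0.6,  0.4],
              [0.3,  0.0, -0.2],
              [0.5, -0.3,  0.0]])
state = np.array([0.7, 0.4, 0.5])                # initial activation of each concept

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-5.0 * x))

for step in range(20):                           # iterate until the map settles
    new_state = sigmoid(W @ state + state)       # self-memory term keeps prior activation
    if np.max(np.abs(new_state - state)) < 1e-4:
        break
    state = new_state

# Because every edge weight is explicit, the chain "why did project_B_opportunity
# rise?" can be answered by inspecting the rows of W (the white-box trait).
for name, value in zip(concepts, state):
    print(f"{name}: {value:.3f}")
```
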
Kuppa, A., Le-Khac, N.-A.  2020.  Black Box Attacks on Explainable Artificial Intelligence (XAI) methods in Cyber Security. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.

The cybersecurity community is slowly leveraging Machine Learning (ML) to combat ever-evolving threats. One of the biggest drivers for successful adoption of these models is how well domain experts and users are able to understand and trust their functionality. As these black-box models are being employed to make important predictions, the demand from stakeholders for transparency and explainability is increasing. Explanations supporting the output of ML models are crucial in cyber security, where experts require far more information from the model than a simple binary output for their analysis. Recent approaches in the literature have focused on three different areas: (a) creating and improving explainability methods which help users better understand the internal workings of ML models and their outputs; (b) attacks on interpreters in the white-box setting; and (c) defining the exact properties and metrics of the explanations generated by models. However, they have not covered the security properties and threat models relevant to the cybersecurity domain, nor attacks on explainable models in black-box settings. In this paper, we bridge this gap by proposing a taxonomy for Explainable Artificial Intelligence (XAI) methods, covering various security properties and threat models relevant to the cyber security domain. We design a novel black-box attack for analyzing the consistency, correctness, and confidence security properties of gradient-based XAI methods. We validate our proposed approach on 3 security-relevant datasets and models, and demonstrate that the method achieves the attacker's goal of misleading both the classifier and the explanation report, or of misleading only the explainability method without affecting the classifier output. Our evaluation of the proposed approach shows promising results and can help in designing secure and robust XAI methods.
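
The consistency property being attacked can be illustrated with a simple check: two inputs that receive the same prediction and differ only by a small perturbation should yield largely overlapping top-k explanations. The sketch below performs that check with plain input gradients on a random stand-in model; it is not the paper's attack or datasets.

```python
# Sketch of checking the "consistency" property the paper probes (not the attack
# itself): same prediction plus a small perturbation should keep the top-k
# explanation roughly stable. Model and data are random stand-ins.
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(30, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 2))           # stand-in security classifier
x = torch.rand(1, 30)
x_adv = (x + 0.01 * torch.randn_like(x)).clamp(0, 1)          # small black-box perturbation

def saliency(model, inp, target):
    """Plain input-gradient explanation for the given class."""
    inp = inp.clone().requires_grad_(True)
    model(inp)[0, target].backward()
    return inp.grad.abs().squeeze(0)

pred = int(model(x).argmax(dim=1))
pred_adv = int(model(x_adv).argmax(dim=1))

k = 5
top = set(torch.topk(saliency(model, x, pred), k).indices.tolist())
top_adv = set(torch.topk(saliency(model, x_adv, pred_adv), k).indices.tolist())

print(f"prediction unchanged: {pred == pred_adv}")
print(f"top-{k} explanation overlap: {len(top & top_adv) / k:.0%}")
# An attacker succeeds (in the paper's sense) if the prediction stays the same
# while this overlap collapses, i.e. the explanation report is misled.
```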

2018-12-10
Zhu, J., Liapis, A., Risi, S., Bidarra, R., Youngblood, G. M.  2018.  Explainable AI for Designers: A Human-Centered Perspective on Mixed-Initiative Co-Creation. 2018 IEEE Conference on Computational Intelligence and Games (CIG). :1–8.

Growing interest in eXplainable Artificial Intelligence (XAI) aims to make AI and machine learning more understandable to human users. However, most existing work focuses on new algorithms and not on usability, practical interpretability, or efficacy for real users. In this vision paper, we propose a new research area of eXplainable AI for Designers (XAID), specifically for game designers. By focusing on a specific user group, their needs and tasks, we propose a human-centered approach for facilitating game designers' co-creation with AI/ML techniques through XAID. We illustrate our initial XAID framework through three use cases, which require an understanding of both the innate properties of the AI techniques and users' needs, and we identify key open challenges.

Oyekanlu, E.  2018.  Distributed Osmotic Computing Approach to Implementation of Explainable Predictive Deep Learning at Industrial IoT Network Edges with Real-Time Adaptive Wavelet Graphs. 2018 IEEE First International Conference on Artificial Intelligence and Knowledge Engineering (AIKE). :179–188.
Developing analytics solutions at the edge of large-scale Industrial Internet of Things (IIoT) networks, close to where data is generated, in most cases involves building those solutions from the ground up. However, this approach increases IoT development costs and system complexity, delays time to market, and ultimately lowers the competitive advantage associated with delivering next-generation IoT designs. To overcome these challenges, existing, widely available hardware can be utilized to participate in distributed edge computing for IIoT systems. In this paper, an osmotic computing approach is used to illustrate how distributed osmotic computing and existing low-cost hardware may be utilized to solve a complex, compute-intensive Explainable Artificial Intelligence (XAI) deep learning problem from the edge, through the fog, to the network cloud layer of IIoT systems. At the edge layer, the C28x digital signal processor (DSP), an existing low-cost, embedded, real-time DSP with very wide deployment and integration in several IoT industries, is used as a case study for constructing real-time graph-based Coiflet wavelets that could be used for several analytic applications, including deep learning pre-processing at the edge and fog layers of IIoT networks. Our implementation is the first known application of the fixed-point C28x DSP to construct Coiflet wavelets. The Coiflet wavelets are constructed in the form of an osmotic microservice, using embedded low-level machine language to program the C28x at the network edge. With the graph-based approach, it is shown that an entire Coiflet wavelet distribution can be generated from only one wavelet stored on the C28x-based edge device, which can lead to significant memory savings at the edge of IoT networks. The Pearson correlation coefficient is used to select an edge-generated Coiflet wavelet, and the selected wavelet is used at the fog layer for pre-processing and denoising IIoT data to improve data quality for fog-layer deep learning. Parameters for implementing deep learning at the fog layer using LSTM networks have been determined in the cloud. For XAI, communication network noise is shown to have a significant impact on the results of predictive deep learning at the IIoT network fog layer.
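
A desktop-side sketch of the signal-processing steps named above (Coiflet selection by Pearson correlation, then soft-threshold denoising) is shown below using the standard pywt package; it is not the fixed-point C28x implementation, and the test signal and the correlation-based selection heuristic are assumptions.

```python
# Sketch only (standard pywt calls, not the fixed-point C28x implementation):
# pick a Coiflet wavelet by Pearson correlation against a noisy IIoT-style signal,
# then use it for soft-threshold denoising as a fog-layer pre-processing step.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)             # stand-in sensor trace

def correlation_score(wavelet_name, signal):
    """Pearson correlation between the signal and the wavelet's scaling function,
    resampled to the signal length (a simple selection heuristic)."""
    phi, _, x = pywt.Wavelet(wavelet_name).wavefun(level=6)
    phi_resampled = np.interp(np.linspace(x[0], x[-1], signal.size), x, phi)
    return abs(np.corrcoef(signal, phi_resampled)[0, 1])

candidates = ["coif1", "coif2", "coif3", "coif4", "coif5"]
best = max(candidates, key=lambda w: correlation_score(w, noisy))
print("selected wavelet:", best)

# Soft-threshold denoising with the selected Coiflet wavelet.
coeffs = pywt.wavedec(noisy, best, level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745                # noise estimate from finest detail
thr = sigma * np.sqrt(2 * np.log(noisy.size))                 # universal threshold
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, best)
print("rmse before:", np.sqrt(np.mean((noisy - clean) ** 2)).round(3),
      "after:", np.sqrt(np.mean((denoised[:clean.size] - clean) ** 2)).round(3))
```
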
Beaton, Brian.  2018.  Crucial Answers About Humanoid Capital. Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. :5–12.

Inside AI research and engineering communities, explainable artificial intelligence (XAI) is one of the most provocative and promising lines of AI research and development today. XAI has the potential to make expressible the context and domain-specific benefits of particular AI applications to a diverse and inclusive array of stakeholders and audiences. In addition, XAI has the potential to make AI benefit claims more deeply evidenced. Outside AI research and engineering communities, one of the most provocative and promising lines of research happening today is the work on "humanoid capital" at the edges of the social, behavioral, and economic sciences. Humanoid capital theorists renovate older discussions of "human capital" as part of trying to make calculable and provable the domain-specific capital value, value-adding potential, or relative worth (i.e., advantages and benefits) of different humanoid models over time. Bringing these two exciting streams of research into direct conversation for the first time is the larger goal of this landmark paper. The primary research contribution of the paper is to detail some of the key requirements for making humanoid robots explainable in capital terms using XAI approaches. In this regard, the paper not only brings two streams of provocative research into much-needed conversation but also advances both streams.