Biblio

Filters: Keyword is xai
2022-12-01
Srikanth, K. S., Ramesh, T. K., Palaniswamy, Suja, Srinivasan, Ranganathan.  2022.  XAI based model evaluation by applying domain knowledge. 2022 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT). :1–6.
Artificial intelligence (AI) is used in decision support systems which learn and perceive features as a function of the number of layers and the weights computed during training. Due to their inherent black-box nature, it is insufficient to consider accuracy, precision and recall as metrics for evaluating a model's performance. Domain knowledge is also essential to identify the features the model treats as significant in arriving at its decision. In this paper, we consider a use case of face mask recognition to explain the application and benefits of XAI. Eight models used to solve the face mask recognition problem were selected. Grad-CAM, an Explainable AI (XAI) technique, is used to explain the state-of-the-art models. Models that selected incorrect features were eliminated even though they had high accuracy. Domain knowledge relevant to face mask recognition, viz. facial feature importance, is applied to identify the model that picked the most appropriate features to arrive at the decision. We demonstrate that models with high accuracy do not necessarily select the right features. In applications requiring rapid deployment, this method can act as a deciding factor in shortlisting models with a guarantee that the models are looking at the right features for arriving at the classification. Furthermore, the outcomes of the model can be explained to the user, enhancing their confidence in the AI model being deployed in the field.
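For readers who want to try the technique, a minimal Grad-CAM sketch in PyTorch follows; the torchvision ResNet-18, layer choice, and random input are placeholders for illustration, not the authors' face-mask models.

```python
# Minimal Grad-CAM sketch: highlight the spatial regions that drive a
# prediction. A torchvision ResNet-18 stands in for the surveyed face-mask
# models; random weights here, pretrained weights in practice.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
acts, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(v=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)            # stand-in for a normalized image
score = model(x)[0].max()                  # top-class score
score.backward()

weights = grads["v"].mean(dim=(2, 3), keepdim=True)    # pool gradients per map
cam = F.relu((weights * acts["v"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heat map in [0, 1]
```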
Yu, Jialin, Cristea, Alexandra I., Harit, Anoushka, Sun, Zhongtian, Aduragba, Olanrewaju Tahir, Shi, Lei, Moubayed, Noura Al.  2022.  INTERACTION: A Generative XAI Framework for Natural Language Inference Explanations. 2022 International Joint Conference on Neural Networks (IJCNN). :1–8.
XAI with natural language processing aims to produce human-readable explanations as evidence for AI decision-making, addressing explainability and transparency. However, from an HCI perspective, current approaches focus only on delivering a single explanation, which fails to account for the diversity of human thoughts and experiences in language. This paper addresses this gap by proposing a generative XAI framework, INTERACTION (explain aNd predicT thEn queRy with contextuAl CondiTional varIational autO-eNcoder). Our novel framework presents explanations in two steps: (step one) Explanation and Label Prediction; and (step two) Diverse Evidence Generation. We conduct intensive experiments with the Transformer architecture on a benchmark dataset, e-SNLI [1]. Our method achieves competitive or better performance against state-of-the-art baseline models on explanation generation (up to 4.7% gain in BLEU) and prediction (up to 4.4% gain in accuracy) in step one; it can also generate multiple diverse explanations in step two.
Embarak, Ossama.  2022.  An adaptive paradigm for smart education systems in smart cities using the internet of behaviour (IoB) and explainable artificial intelligence (XAI). 2022 8th International Conference on Information Technology Trends (ITT). :74–79.
The rapid shift towards smart cities, particularly in the era of pandemics, necessitates the employment of e-learning, remote learning systems, and hybrid models. Building adaptive and personalized education becomes a requirement to mitigate the downsides of distance learning while maintaining high levels of achievement. Explainable artificial intelligence (XAI), machine learning (ML), and the internet of behaviour (IoB) are just a few of the technologies that are helping to shape the future of smart education in the age of smart cities through customization and personalization. This study presents a paradigm for smart education based on the integration of XAI and IoB technologies. The research uses data acquired on students' behaviours to determine whether or not current education systems respond appropriately to learners' requirements. Despite the existence of sophisticated education systems, they have not yet reached the degree of development that allows them to be tailored to learners' cognitive needs and to support them in the absence of face-to-face instruction. The study collected data on 41 learners' behaviours in response to academic activities and assessed whether the running systems were able to capture such behaviours and respond appropriately; the evaluation demonstrated that students' academic progression changes when IoT/IoB monitoring is used to enable a responsive intervention supporting that progression.
Abeyagunasekera, Sudil Hasitha Piyath, Perera, Yuvin, Chamara, Kenneth, Kaushalya, Udari, Sumathipala, Prasanna, Senaweera, Oshada.  2022.  LISA: Enhance the explainability of medical images unifying current XAI techniques. 2022 IEEE 7th International Conference for Convergence in Technology (I2CT). :1–9.
This work proposed a unified approach, LISA, to increase the explainability of the predictions made by Convolutional Neural Networks (CNNs) on medical images using currently available Explainable Artificial Intelligence (XAI) techniques. The method incorporates multiple techniques such as Local Interpretable Model-Agnostic Explanations (LIME), Integrated Gradients, Anchors and Shapley Additive Explanations (SHAP), a Shapley-values-based approach, to provide explanations for the predictions made by black-box models. This unified method increases confidence in the black-box model’s decision so that it can be employed in crucial applications under the supervision of human specialists. In this work, a Chest X-ray (CXR) classification model for identifying Covid-19 patients is trained using transfer learning to illustrate the applicability of XAI techniques and the unified method (LISA) to explain model predictions. To derive predictions, an ImageNet-based Inception V2 model is utilized as the transfer learning model.
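A sketch of how one of the unified techniques, LIME, is typically applied to an image classifier; the Keras MobileNetV2 (random weights here, pretrained in practice) and the random image are placeholders, not the paper's Inception V2 CXR model.

```python
# Sketch: applying LIME, one of the techniques LISA unifies, to an image
# classifier. Model and input are placeholders for illustration only.
import numpy as np
from lime import lime_image
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input

model = MobileNetV2(weights=None)

def predict_fn(images):
    # LIME passes batches of perturbed images; return class probabilities.
    return model.predict(preprocess_input(images.astype(np.float32)), verbose=0)

image = np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8)  # stand-in CXR
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, predict_fn,
                                         top_labels=1, num_samples=200)
_, mask = explanation.get_image_and_mask(          # superpixels that support
    explanation.top_labels[0], positive_only=True, num_features=5)  # top class
```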
Henriksen, Eilert, Halden, Ugur, Kuzlu, Murat, Cali, Umit.  2022.  Electrical Load Forecasting Utilizing an Explainable Artificial Intelligence (XAI) Tool on Norwegian Residential Buildings. 2022 International Conference on Smart Energy Systems and Technologies (SEST). :1–6.
Electrical load forecasting is an essential part of the smart grid, maintaining a stable and reliable grid and supporting decisions for economic planning. With the integration of more renewable energy resources, especially solar photovoltaic (PV), and the transition to a prosumer-based grid, electrical load forecasting is deemed to play a crucial role at both regional and household levels. However, most existing forecasting methods can be considered black-box models because they rely on deep digitalization enablers, such as Deep Neural Networks (DNNs), whose behaviour remains difficult for humans to interpret; this black-box character limits insights and applicability. In order to mitigate this shortcoming, eXplainable Artificial Intelligence (XAI) is introduced to provide transparency into the model’s behaviour and support human interpretation. By utilizing XAI, experienced power market and system professionals can be integrated into developing the data-driven approach, even without expertise in data science. In this study, an electrical load forecasting model utilizing an XAI tool for a Norwegian residential building was developed and presented.
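The paper's tooling is not reproduced here; a generic sketch of the kind of XAI workflow it describes, using SHAP on a gradient-boosted load forecaster (the synthetic data and feature names are assumptions):

```python
# Sketch: SHAP feature attributions for a household load forecaster.
# Synthetic data and feature names are assumptions for illustration.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "hour": rng.integers(0, 24, 1000),
    "temp_C": rng.normal(5, 8, 1000),
    "lag_24h_kWh": rng.gamma(2.0, 1.5, 1000),
})
y = 0.5 * X["lag_24h_kWh"] - 0.05 * X["temp_C"] + rng.normal(0, 0.2, 1000)

model = GradientBoostingRegressor().fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)  # (n_samples, n_features)
importance = np.abs(shap_values).mean(axis=0)           # global importance
print(dict(zip(X.columns, importance.round(3))))
```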
Lee, H., Lim, H., Lee, B..  2022.  Analysis of EV charging load impact on distribution network using XAI technique. CIRED Porto Workshop 2022: E-mobility and power distribution systems. 2022:167–170.
In order to address the problems that may arise from the negative impact of EV charging loads on the power distribution network, it is very important to predict distribution network variability according to EV charging loads. If appropriate facility reinforcement or system operation is carried out based on an evaluation of the impact of EV charging load, it will be possible to prevent facility failure in advance and maintain power quality at a certain level, enabling stable network operation. By analysing, through the load prediction model, how the predicted load changes with EV load characteristics, it is possible to evaluate the influence of EV integration on the distribution network. This paper aims to investigate the effect of EV charging load on the voltage stability, power loss, reliability index and economic loss of the distribution network. For this, we transformed the univariate time series of EV charging data into a multivariate time series using feature engineering techniques. Then, time series forecast models are trained on the multivariate dataset. Finally, XAI techniques such as LIME and SHAP are applied to the models to obtain feature importance analysis results.
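A sketch of the univariate-to-multivariate transformation the authors describe, using pandas lag and calendar features (column names and window sizes are assumptions):

```python
# Sketch: turning a univariate EV-charging load series into a multivariate
# feature set via lag and calendar features. Names and windows are assumptions.
import numpy as np
import pandas as pd

idx = pd.date_range("2021-01-01", periods=24 * 90, freq="h")
load = pd.Series(np.random.rand(len(idx)), index=idx, name="ev_load_kw")

features = pd.DataFrame({
    "ev_load_kw": load,                                  # forecast target
    "lag_1h": load.shift(1),
    "lag_24h": load.shift(24),
    "rolling_mean_24h": load.shift(1).rolling(24).mean(),
    "hour": idx.hour,
    "dayofweek": idx.dayofweek,
    "is_weekend": (idx.dayofweek >= 5).astype(int),
}).dropna()
# The non-target columns can now feed a forecast model, whose predictions
# LIME/SHAP can attribute back to these named features.
```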
Fujita, Koji, Shibahara, Toshiki, Chiba, Daiki, Akiyama, Mitsuaki, Uchida, Masato.  2022.  Objection!: Identifying Misclassified Malicious Activities with XAI. ICC 2022 - IEEE International Conference on Communications. :2065–2070.
Many studies have been conducted to detect various malicious activities in cyberspace using classifiers built by machine learning. However, it is natural for any classifier to make mistakes, and hence, human verification is necessary. One method to address this issue is eXplainable AI (XAI), which provides a reason for the classification result. However, when the number of classification results to be verified is large, it is not realistic to check the XAI output for all cases. In addition, the output of XAI is sometimes difficult to interpret. In this study, we propose a machine learning model called a classification verifier, which verifies classification results by using the output of XAI as a feature and raises objections when there is doubt about the reliability of those results. The results of experiments on malicious website detection and malware detection show that the proposed classification verifier can efficiently identify misclassified malicious activities.
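A toy sketch of the core idea, training a second model on XAI outputs (here SHAP values) to flag dubious classifications; the data, models, and single-split shortcut are assumptions, not the paper's pipeline.

```python
# Toy sketch of a "classification verifier": a second model trained on XAI
# outputs (SHAP values) to flag classifications likely to be wrong.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, flip_y=0.05,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
sv = shap.TreeExplainer(clf).shap_values(X_te)
sv = sv[1] if isinstance(sv, list) else sv[..., 1]   # class-1 attributions;
                                                     # shape varies by shap version
correct = (clf.predict(X_te) == y_te).astype(int)    # 1 = classifier was right

# For brevity the verifier is fit and applied on one split; a real pipeline
# would hold out separate data.
verifier = LogisticRegression(max_iter=1000).fit(sv, correct)
doubt = verifier.predict_proba(sv)[:, 0]             # P(classification is wrong)
print("flagged for human review:", int((doubt > 0.5).sum()))
```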
Barnard, Pieter, Macaluso, Irene, Marchetti, Nicola, DaSilva, Luiz A..  2022.  Resource Reservation in Sliced Networks: An Explainable Artificial Intelligence (XAI) Approach. ICC 2022 - IEEE International Conference on Communications. :1530–1535.
The growing complexity of wireless networks has sparked an upsurge in the use of artificial intelligence (AI) within the telecommunication industry in recent years. In network slicing, a key component of 5G that enables network operators to lease their resources to third-party tenants, AI models may be employed in complex tasks, such as short-term resource reservation (STRR). When AI is used to make complex resource management decisions with financial and service quality implications, it is important that these decisions be understood by a human-in-the-loop. In this paper, we apply state-of-the-art techniques from the field of Explainable AI (XAI) to the problem of STRR. Using real-world data to develop an AI model for STRR, we demonstrate how our XAI methodology can be used to explain the real-time decisions of the model, to reveal trends about the model’s general behaviour, and to aid in the diagnosis of potential faults during the model’s development. In addition, we quantitatively validate the faithfulness of the explanations across an extensive range of XAI metrics to ensure they remain trustworthy and actionable.
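Faithfulness validation of the kind the authors mention is often done with deletion-style tests; a generic sketch follows (the model, attribution source, and baseline values are placeholders, not the STRR setup):

```python
# Generic deletion-based faithfulness check: remove features in order of
# attributed importance and watch the prediction degrade.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=10, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

x = X[0].copy()
attribution = model.feature_importances_        # stand-in for any XAI method
order = np.argsort(-attribution)                # most important first
baseline = X.mean(axis=0)                       # "deleted" value per feature

base_pred = model.predict(x.reshape(1, -1))[0]
drops = []
for k in order:                                 # cumulative deletion
    x[k] = baseline[k]
    drops.append(abs(base_pred - model.predict(x.reshape(1, -1))[0]))
# A faithful attribution should make `drops` grow quickly at the start.
print(np.round(drops, 2))
```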
Yeo, Guo Feng Anders, Hudson, Irene, Akman, David, Chan, Jeffrey.  2022.  A Simple Framework for XAI Comparisons with a Case Study. 2022 5th International Conference on Artificial Intelligence and Big Data (ICAIBD). :501–508.
The number of publications related to Explainable Artificial Intelligence (XAI) has increased rapidly over the last decade. However, the subjective nature of explainability has led to a lack of consensus on commonly used definitions of explainability, and the differing problem statements falling under the XAI label have resulted in a lack of comparisons. This paper proposes, in broad terms, a simple comparison framework for XAI methods based on their output and what we call their practical attributes. The aim of the framework is to ensure that everything that can be held constant for the purpose of comparison is held constant, and to ignore many of the subjective elements present in the area of XAI. An example comparison along the lines of the proposed framework is performed on local, post-hoc, model-agnostic XAI algorithms designed to measure feature importance/contribution for a queried instance. These algorithms are assessed on two criteria using synthetic datasets across a range of classifiers. The first is whether they select features that contribute to the underlying data structure, and the second is how accurately they select the features used in a decision tree path. The results from the first comparison showed that when the classifier was able to pick up the underlying pattern in the data, the LIME algorithm was the most accurate at selecting the underlying ground-truth features. The second test returned mixed results: in some instances the XAI algorithms were able to accurately return the features used to produce predictions, but this result was not consistent.
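A sketch of the kind of synthetic ground-truth test described, checking whether LIME recovers features that are informative by construction (the dataset parameters and hit-rate metric are assumptions):

```python
# Sketch: build a synthetic dataset where the informative features are known,
# then check whether a local XAI method (LIME) recovers them.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# With shuffle=False the first 3 columns are the informative features.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, mode="classification")
hits = 0.0
for i in range(50):
    exp = explainer.explain_instance(X[i], clf.predict_proba,
                                     num_features=3, num_samples=500)
    picked = {feat_idx for feat_idx, _ in exp.as_map()[1]}
    hits += len(picked & {0, 1, 2}) / 3          # overlap with ground truth
print("mean ground-truth recovery:", hits / 50)
```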
2021-12-22
Ortega, Alfonso, Fierrez, Julian, Morales, Aythami, Wang, Zilong, Ribeiro, Tony.  2021.  Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Fair and Explainable Automatic Recruitment. 2021 IEEE Winter Conference on Applications of Computer Vision Workshops (WACVW). :78–87.
Machine learning methods are growing in relevance for biometrics and personal information processing in domains such as forensics, e-health, recruitment, and e-learning. In these domains, white-box (human-readable) explanations of systems built on machine learning methods can become crucial. Inductive Logic Programming (ILP) is a subfield of symbolic AI aimed at automatically learning declarative theories about the processes underlying data. Learning from Interpretation Transition (LFIT) is an ILP technique that can learn a propositional logic theory equivalent to a given black-box system (under certain conditions). The present work takes a first step toward a general methodology for incorporating accurate declarative explanations into classic machine learning, by checking the viability of LFIT in a specific AI application scenario: fair recruitment based on an automatic tool, generated with machine learning methods, for ranking Curricula Vitae that incorporates soft biometric information (gender and ethnicity). We show the expressiveness of LFIT for this specific problem and propose a scheme that can be applied to other domains.
Poli, Jean-Philippe, Ouerdane, Wassila, Pierrard, Régis.  2021.  Generation of Textual Explanations in XAI: The Case of Semantic Annotation. 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–6.
Semantic image annotation is a field of paramount importance in which deep learning excels. However, some application domains, like security or medicine, may need an explanation of this annotation. Explainable Artificial Intelligence is an answer to this need. In this work, an explanation is a sentence in natural language, addressed to human users, that provides clues about the process leading to the decision: the assignment of labels to image parts. We focus on semantic image annotation with fuzzy logic, which has proven to be a useful framework for capturing both the imprecision of image segmentation and the vagueness of human spatial knowledge and vocabulary. In this paper, we present an algorithm for generating textual explanations of the semantic annotation of image regions.
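A toy sketch of turning fuzzy relation degrees into an explanation sentence; the hedging vocabulary, thresholds, and relations are invented, not the paper's generation algorithm.

```python
# Toy sketch: verbalizing fuzzy spatial-relation truth degrees as a natural-
# language explanation. Vocabulary and thresholds are invented assumptions.
def linguistic_hedge(degree: float) -> str:
    if degree > 0.8:
        return "clearly"
    if degree > 0.5:
        return "mostly"
    if degree > 0.2:
        return "slightly"
    return "hardly"

def explain(label: str, relations: dict) -> str:
    # relations: fuzzy truth degrees, e.g. {"to the left of the heart": 0.9}
    clauses = [f"{linguistic_hedge(d)} {rel}"
               for rel, d in sorted(relations.items(), key=lambda kv: -kv[1])
               if d > 0.2]                       # drop near-false relations
    return (f'This region was labelled "{label}" because it is '
            + " and ".join(clauses) + ".")

print(explain("left lung", {"to the left of the heart": 0.9,
                            "above the diaphragm": 0.6}))
```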
Guerdan, Luke, Raymond, Alex, Gunes, Hatice.  2021.  Toward Affective XAI: Facial Affect Analysis for Understanding Explainable Human-AI Interactions. 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). :3789–3798.
As machine learning approaches are increasingly used to augment human decision-making, eXplainable Artificial Intelligence (XAI) research has explored methods for communicating system behavior to humans. However, these approaches often fail to account for the affective responses of humans as they interact with explanations. Facial affect analysis, which examines human facial expressions of emotions, is one promising lens for understanding how users engage with explanations. Therefore, in this work, we aim to (1) identify which facial affect features are pronounced when people interact with XAI interfaces, and (2) develop a multitask feature embedding for linking facial affect signals with participants' use of explanations. Our analyses and results show that the occurrence and values of facial AU1 and AU4, and Arousal are heightened when participants fail to use explanations effectively. This suggests that facial affect analysis should be incorporated into XAI to personalize explanations to individuals' interaction styles and to adapt explanations based on the difficulty of the task performed.
Nascita, Alfredo, Montieri, Antonio, Aceto, Giuseppe, Ciuonzo, Domenico, Persico, Valerio, Pescapè, Antonio.  2021.  Unveiling MIMETIC: Interpreting Deep Learning Traffic Classifiers via XAI Techniques. 2021 IEEE International Conference on Cyber Security and Resilience (CSR). :455–460.
The widespread use of powerful mobile devices has deeply affected the mix of traffic traversing both the Internet and enterprise networks (with bring-your-own-device policies). Traffic encryption has become extremely common, and the quick proliferation of mobile apps, together with their simple distribution and update, has created a particularly challenging scenario for traffic classification and its uses, especially network-security-related ones. The recent rise of Deep Learning (DL) has responded to this challenge by providing a solution to the time-consuming and human-limited handcrafted feature design, along with better classification performance. The counterpart of these advantages is the lack of interpretability of these black-box approaches, limiting or preventing their adoption in contexts where the reliability of results or the interpretability of policies is necessary. To cope with these limitations, eXplainable Artificial Intelligence (XAI) techniques have seen intensive research recently. Along these lines, our work applies XAI-based techniques (namely, Deep SHAP) to interpret the behavior of a state-of-the-art multimodal DL traffic classifier. As opposed to common results seen in XAI, we aim at a global interpretation rather than sample-based ones. The results quantify the importance of each modality (payload- or header-based) and of specific subsets of inputs (e.g., TLS SNI and TCP Window Size) in determining the classification outcome, down to the per-class (viz. application) level. The analysis is based on a recent, publicly released dataset focused on mobile app traffic.
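In essence, the global view reduces to aggregating per-sample Deep SHAP attributions; a generic sketch (the small dense model and random inputs stand in for MIMETIC, and depending on shap/TensorFlow versions GradientExplainer is a drop-in alternative):

```python
# Sketch of global (rather than per-sample) interpretation: average Deep SHAP
# attribution magnitudes over many samples, per input feature.
import numpy as np
import shap
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),                      # stand-in input features
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),   # 5 app classes
])
background = np.random.rand(100, 16).astype(np.float32)
samples = np.random.rand(500, 16).astype(np.float32)

sv = shap.DeepExplainer(model, background).shap_values(samples)
sv0 = sv[0] if isinstance(sv, list) else sv[..., 0]   # class-0 attributions;
                                                      # shape varies by version
print(np.abs(sv0).mean(axis=0).round(4))              # global importance
```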
Panda, Akash Kumar, Kosko, Bart.  2021.  Bayesian Pruned Random Rule Foams for XAI. 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–6.
A random rule foam grows and combines several independent fuzzy rule-based systems by randomly sampling input-output data from a trained deep neural classifier. The random rule foam defines an interpretable proxy system for the sampled black-box classifier. The random foam gives the complete Bayesian posterior probabilities over the foam subsystems that contribute to the proxy system's output for a given pattern input. It also gives the Bayesian posterior over the if-then fuzzy rules in each of these constituent foams. The random foam also computes a conditional variance that describes the uncertainty in its predicted output given the random foam's learned rule structure. The mixture structure leads to bootstrap confidence intervals around the output. Using the Bayesian posterior probabilities to prune or discard low-probability sub-foams improves the system's classification accuracy. Simulations used the MNIST image data set of 60,000 gray-scale images of ten hand-written digits. Dropping the lowest-probability foams per input pattern brought the pruned random foam's classification accuracy nearly to that of the neural classifier. Posterior pruning outperformed simple accuracy pruning of a random foam and outperformed a random forest trained on the same neural classifier.
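A toy numeric sketch of the posterior-pruning step, dropping the lowest-posterior sub-foams for an input and renormalizing the mixture (all numbers are invented):

```python
# Toy sketch of posterior pruning in a mixture of rule subsystems: drop the
# lowest-posterior foams for an input and renormalize the rest.
import numpy as np

posteriors = np.array([0.40, 0.25, 0.20, 0.10, 0.05])  # P(foam_k | input)
outputs = np.array([0.9, 0.8, 0.7, 0.1, 0.2])          # each foam's score

full_mix = posteriors @ outputs                         # posterior-weighted mix

keep = np.argsort(posteriors)[2:]                       # drop 2 lowest foams
w = posteriors[keep] / posteriors[keep].sum()           # renormalize weights
pruned_mix = w @ outputs[keep]

print(f"full: {full_mix:.3f}  pruned: {pruned_mix:.3f}")
```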
Renda, Alessandro, Ducange, Pietro, Gallo, Gionatan, Marcelloni, Francesco.  2021.  XAI Models for Quality of Experience Prediction in Wireless Networks. 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–6.
Explainable Artificial Intelligence (XAI) is expected to play a key role in the design phase of next-generation cellular networks. As 5G is being implemented and 6G is still in the conceptualization stage, it is increasingly clear that AI will be essential to manage the ever-growing complexity of the network. However, AI models will be required to deliver not only high levels of performance but also high levels of explainability. In this paper we show how fuzzy models may be well suited to address this challenge. We compare fuzzy and classical decision tree models with a Random Forest (RF) classifier on a Quality of Experience classification dataset. The comparison suggests that, in our setting, fuzzy decision trees are easier to interpret and perform comparably or even better than classical ones in identifying stall events in a video streaming application. The accuracy drop with respect to the RF classifier, which is considered a black-box ensemble model, is counterbalanced by a significant gain in terms of explainability.
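A sketch of the accuracy side of such a comparison in scikit-learn terms; fuzzy decision trees are not available there, so a classical tree and an RF stand in, on synthetic data rather than the QoE dataset:

```python
# Sketch: an interpretable decision tree versus a black-box Random Forest.
# Synthetic data stands in for the QoE/stall dataset; fuzzy trees are not
# part of scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("tree   acc:", tree.score(X_te, y_te))
print("forest acc:", forest.score(X_te, y_te))
print(export_text(tree))   # the tree's rules are directly human-readable
```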
Malhotra, Diksha, Srivastava, Shubham, Saini, Poonam, Singh, Awadhesh Kumar.  2021.  Blockchain Based Audit Trailing of XAI Decisions: Storing on IPFS and Ethereum Blockchain. 2021 International Conference on COMmunication Systems NETworkS (COMSNETS). :1–5.
Explainable Artificial Intelligence (XAI) generates explanations which are used by regulators to audit responsibility in case of any catastrophic failure. These explanations are currently stored in centralized systems. However, due to the lack of security and traceability in centralized systems, the respective owner may tamper with the explanations for their own convenience in order to avoid any penalty. Blockchain has emerged as one of the promising technologies that might overcome these security limitations. Hence, in this paper, we propose a novel Blockchain-based framework for proof-of-authenticity of XAI decisions. The framework stores the explanations in the InterPlanetary File System (IPFS) due to the storage limitations of the Ethereum Blockchain. Further, a Smart Contract is designed and deployed to supervise the storage and retrieval of explanations from the Ethereum Blockchain. Furthermore, to induce cryptographic security in the network, each explanation's hash is calculated and stored on the Blockchain as well. Lastly, we perform a cost and security analysis of the proposed system.
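A sketch of the hashing step: serialize the explanation, store the blob off-chain, and anchor only its digest on-chain. The explanation record is hypothetical and the storage functions are stubs; real code would go through IPFS and Ethereum client libraries.

```python
# Sketch of the proof-of-authenticity step: hash a serialized explanation so
# the digest can be anchored on-chain while the blob lives in IPFS.
import hashlib
import json

def store_off_chain(blob: bytes) -> str:
    return "ipfs://<cid>"          # stub for an IPFS add returning a content ID

def store_on_chain(digest: str) -> None:
    print("anchored digest:", digest)   # stub for a smart-contract call

explanation = {                    # hypothetical explanation record
    "model": "loan_approval_v3",
    "input_id": "case-1042",
    "feature_attributions": {"income": 0.41, "debt_ratio": -0.27},
}
blob = json.dumps(explanation, sort_keys=True).encode()  # canonical form
digest = hashlib.sha256(blob).hexdigest()                # tamper-evident hash

cid = store_off_chain(blob)
store_on_chain(digest)   # to verify later: re-fetch blob, re-hash, compare
```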
Kim, Jiha, Park, Hyunhee.  2021.  OA-GAN: Overfitting Avoidance Method of GAN Oversampling Based on xAI. 2021 Twelfth International Conference on Ubiquitous and Future Networks (ICUFN). :394–398.
The most representative methods of deep learning are data-driven, and a lack of data leads to poor learning. GANs address this data scarcity by generating plausible synthetic images. In a GAN, the discriminator judges generated images as fake or real so that the generator learns. However, overfitting problems arise when the discriminator becomes overly dependent on the training data. In this paper, we use xAI to explain the overfitting that occurs when the discriminator decides fake/real. Depending on the image regions highlighted by the explanation, the learning of the discriminator can be limited to avoid overfitting. By doing so, the generator can produce similar but more diverse images.
Zhang, Yuyi, Xu, Feiran, Zou, Jingying, Petrosian, Ovanes L., Krinkin, Kirill V..  2021.  XAI Evaluation: Evaluating Black-Box Model Explanations for Prediction. 2021 II International Conference on Neural Networks and Neurotechnologies (NeuroNT). :13–16.
The results of evaluating explanations of a black-box prediction model are presented. The XAI evaluation is realized through the differing principles and characteristics of black-box model explanations and XAI labels. In the field of high-dimensional prediction, black-box models such as neural networks and ensemble models can predict complex data sets more accurately than traditional linear regression and white-box models such as decision trees. However, this unexplainable character not only hinders developers from debugging but also causes user mistrust. In the XAI field dedicated to "opening" the black-box model, effective evaluation methods are still being developed. Within the XAI evaluation framework (MDMC) established in this paper, explanation methods for prediction can be effectively tested, and the explanation method identified as having relatively higher quality can improve the accuracy, transparency, and reliability of prediction.
Murray, Bryce, Anderson, Derek T., Havens, Timothy C..  2021.  Actionable XAI for the Fuzzy Integral. 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–8.
The adoption of artificial intelligence (AI) into domains that impact human life (healthcare, agriculture, security and defense, etc.) has led to an increased demand for explainable AI (XAI). Herein, we focus on an underrepresented piece of the XAI puzzle: information fusion. To date, a number of low-level XAI explanation methods have been proposed for the fuzzy integral (FI). However, these explanations are tailored to experts, and it is not always clear what to do with the information they return. In this article we review and categorize existing FI work according to recent XAI nomenclature. Second, we identify a set of initial actions that a user can take in response to these low-level statistical, graphical, local, and linguistic XAI explanations. Third, we investigate the design of an interactive, user-friendly XAI report. Two case studies, one synthetic and one real, show the results of following the recommended actions to understand and improve tasks involving classification.
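For readers unfamiliar with the FI, a minimal discrete Choquet fuzzy integral, the aggregation operator behind these explanations, follows; the fuzzy-measure values are invented:

```python
# Minimal discrete Choquet fuzzy integral: aggregate source supports h with
# respect to a fuzzy measure g defined on subsets of sources.
import numpy as np

def choquet(h, g):
    """h: per-source supports; g: fuzzy measure, frozenset of indices -> [0,1]."""
    order = np.argsort(-np.asarray(h))        # sort supports descending
    total, prev_g, subset = 0.0, 0.0, set()
    for i in order:
        subset.add(int(i))
        g_now = g[frozenset(subset)]
        total += h[i] * (g_now - prev_g)      # weight by measure increments
        prev_g = g_now
    return total

# Three sources; the (invented) measure rewards agreement of sources 0 and 1.
g = {frozenset({0}): 0.4, frozenset({1}): 0.3, frozenset({2}): 0.2,
     frozenset({0, 1}): 0.9, frozenset({0, 2}): 0.5, frozenset({1, 2}): 0.5,
     frozenset({0, 1, 2}): 1.0}
print(choquet([0.8, 0.6, 0.1], g))            # 0.8*0.4 + 0.6*0.5 + 0.1*0.1
```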
2021-03-01
Tan, R., Khan, N., Guan, L..  2020.  Locality Guided Neural Networks for Explainable Artificial Intelligence. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
In current deep network architectures, deeper layers tend to contain hundreds of independent neurons, which makes it hard for humans to understand how they interact with each other. By organizing the neurons by correlation, humans can observe how clusters of neighbouring neurons interact with each other. In this paper, we propose a novel algorithm for back-propagation, called Locality Guided Neural Network (LGNN), that trains networks while preserving locality between neighbouring neurons within each layer. Heavily motivated by the Self-Organizing Map (SOM), the goal is to enforce a local topology on each layer of a deep network such that neighbouring neurons are highly correlated with each other. This method contributes to the domain of Explainable Artificial Intelligence (XAI), which aims to alleviate the black-box nature of current AI methods and make them understandable by humans. Our method aims to achieve XAI in deep learning without changing the structure of current models or requiring any post-processing. This paper focuses on Convolutional Neural Networks (CNNs), but the method can theoretically be applied to any type of deep learning architecture. In our experiments, we train various VGG and Wide ResNet (WRN) networks for image classification on CIFAR100. In-depth analyses presenting both qualitative and quantitative results demonstrate that our method is capable of enforcing a topology on each layer while achieving a small increase in classification accuracy.
Sun, S. C., Guo, W..  2020.  Approximate Symbolic Explanation for Neural Network Enabled Water-Filling Power Allocation. 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring). :1–4.
Water-filling (WF) is a well-established iterative solution for optimal power allocation in parallel fading channels. Slow iterative search can be impractical for allocating power to a large number of OFDM sub-channels. Neural networks (NNs) can transform the iterative WF threshold search into a direct high-dimensional mapping from channel gains to the transmit power solution. Our results show that the NN can perform very well (error 0.05%) and can be shown to indeed be performing approximate WF power allocation. However, there is no guarantee on how the NN maps between channel states and power outputs. Here, we attempt to explain the NN power allocation solution via the Meijer G-function as a general explainable symbolic mapping. Our early results indicate that whilst the Meijer G-function has universal representation potential, its large search space means that finding the best symbolic representation is challenging.
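For reference, the iterative WF baseline that the NN approximates can be written as a bisection search for the water level; the channel gains and power budget below are invented:

```python
# Water-filling baseline: find the water level mu such that the powers
# p_i = max(0, mu - 1/g_i) exhaust the total power budget.
import numpy as np

def water_filling(gains, p_total, tol=1e-9):
    inv = 1.0 / np.asarray(gains, dtype=float)    # per-channel "floor" 1/g_i
    lo, hi = inv.min(), inv.max() + p_total       # bracket for the water level
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv, 0.0).sum() > p_total:
            hi = mu                               # level too high: uses > budget
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - inv, 0.0)

gains = np.array([2.0, 1.0, 0.5, 0.1])            # invented sub-channel gains
p = water_filling(gains, p_total=1.0)
print(p.round(4), "sum =", p.sum().round(4))      # strong channels get more power
```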
Taylor, E., Shekhar, S., Taylor, G. W..  2020.  Response Time Analysis for Explainability of Visual Processing in CNNs. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :1555–1558.
Explainable artificial intelligence (XAI) methods rely on access to model architecture and parameters that is not always feasible for most users, practitioners, and regulators. Inspired by cognitive psychology, we present a case for response times (RTs) as a technique for XAI. RTs are observable without access to the model. Moreover, dynamic inference models performing conditional computation generate variable RTs for visual learning tasks depending on hierarchical representations. We show that MSDNet, a conditional computation model with early-exit architecture, exhibits slower RT for images with more complex features in the ObjectNet test set, as well as the human phenomenon of scene grammar, where object recognition depends on intrascene object-object relationships. These results cast light on MSDNet's feature space without opening the black box and illustrate the promise of RT methods for XAI.
Sarathy, N., Alsawwaf, M., Chaczko, Z..  2020.  Investigation of an Innovative Approach for Identifying Human Face-Profile Using Explainable Artificial Intelligence. 2020 IEEE 18th International Symposium on Intelligent Systems and Informatics (SISY). :155–160.
Human identification is a well-researched topic that keeps evolving. Advancements in technology have made it easy to train models, or use existing ones, to detect several features of the human face. When it comes to identifying a human face from the side, there are many opportunities to advance biometric identification research further. This paper investigates human face identification based on the side profile by extracting facial features and diagnosing the feature sets with geometric ratio expressions. These geometric ratio expressions are computed into feature vectors, and the last stage uses weighted means to measure similarity. This research addresses the problem using an eXplainable Artificial Intelligence (XAI) approach. Findings from this research, based on a small dataset, show that the approach offers encouraging results; further investigation could have a significant impact on how face profiles are identified. Performance of the proposed system is validated using metrics such as Precision, False Acceptance Rate, False Rejection Rate and True Positive Rate. Multiple simulations indicate an Equal Error Rate of 0.89.
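A toy sketch of the pipeline's last two stages, geometric ratio features from profile landmarks and a weighted-mean similarity; the landmark points, ratio choices, and weights are invented:

```python
# Toy sketch: geometric ratios from side-profile landmarks, then a
# weighted-mean similarity between ratio vectors. All values are invented.
import numpy as np

def ratios(landmarks):
    """landmarks: dict of named (x, y) profile points -> ratio feature vector."""
    d = lambda a, b: np.linalg.norm(np.subtract(landmarks[a], landmarks[b]))
    return np.array([
        d("nose_tip", "chin") / d("forehead", "chin"),       # nose placement
        d("nose_tip", "nose_base") / d("nose_base", "chin"), # nose length
        d("ear", "nose_tip") / d("forehead", "chin"),        # profile depth
    ])

def weighted_similarity(v1, v2, weights):
    # 1.0 means identical ratio vectors; relative differences are weighted.
    return 1.0 - np.average(np.abs(v1 - v2) / (v1 + v2), weights=weights)

probe = {"forehead": (0, 10), "nose_tip": (4, 5), "nose_base": (2, 4),
         "chin": (1, 0), "ear": (-3, 6)}
gallery = {"forehead": (0, 10), "nose_tip": (4.2, 5.1), "nose_base": (2, 4),
           "chin": (1, 0.2), "ear": (-3, 6.1)}

score = weighted_similarity(ratios(probe), ratios(gallery),
                            weights=[0.5, 0.3, 0.2])
print(f"similarity: {score:.3f}")
```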
Davis, B., Glenski, M., Sealy, W., Arendt, D..  2020.  Measure Utility, Gain Trust: Practical Advice for XAI Researchers. 2020 IEEE Workshop on TRust and EXpertise in Visual Analytics (TREX). :1–8.
Research into the explanation of machine learning models, i.e., explainable AI (XAI), has seen a commensurate exponential growth alongside deep artificial neural networks throughout the past decade. For historical reasons, explanation and trust have been intertwined. However, the focus on trust is too narrow, and has led the research community astray from tried and true empirical methods that produced more defensible scientific knowledge about people and explanations. To address this, we contribute a practical path forward for researchers in the XAI field. We recommend researchers focus on the utility of machine learning explanations instead of trust. We outline five broad use cases where explanations are useful and, for each, we describe pseudo-experiments that rely on objective empirical measurements and falsifiable hypotheses. We believe that this experimental rigor is necessary to contribute to scientific knowledge in the field of XAI.
Tao, J., Xiong, Y., Zhao, S., Xu, Y., Lin, J., Wu, R., Fan, C..  2020.  XAI-Driven Explainable Multi-view Game Cheating Detection. 2020 IEEE Conference on Games (CoG). :144–151.
Online gaming is one of the most successful applications, with a large number of players interacting in an online persistent virtual world through the Internet. However, some cheating players gain improper advantages over normal players by using illegal automated plugins, which has brought huge harm to game health and player enjoyment. Game companies have been devoting much effort to cheating detection with multi-view data sources and have achieved great accuracy improvements by applying artificial intelligence (AI) techniques. However, generating explanations for cheating detection from multiple views still remains a challenging task. To respond to the different purposes of explainability in AI models for different audience profiles, we propose EMGCD, the first explainable multi-view game cheating detection framework driven by explainable AI (XAI). It combines cheating explainers with cheating classifiers from different views to generate individual, local and global explanations, which contribute to evidence generation, reason generation, model debugging and model compression. EMGCD has been implemented and deployed in multiple game productions at NetEase Games, achieving remarkable and trustworthy performance. Our framework can also easily generalize to other types of related tasks in online games, such as explainable recommender systems, explainable churn prediction, etc.