Biblio

Filters: Keyword is explanation
2021-12-22
Poli, Jean-Philippe, Ouerdane, Wassila, Pierrard, Régis.  2021.  Generation of Textual Explanations in XAI: The Case of Semantic Annotation. 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–6.
Semantic image annotation is a field of paramount importance in which deep learning excels. However, some application domains, such as security or medicine, may need an explanation of this annotation. Explainable Artificial Intelligence is an answer to this need. In this work, an explanation is a natural-language sentence addressed to human users that gives them clues about the process leading to the decision: the assignment of labels to image parts. We focus on semantic image annotation with fuzzy logic, which has proven to be a useful framework for capturing both the imprecision of image segmentation and the vagueness of human spatial knowledge and vocabulary. In this paper, we present an algorithm for generating textual explanations of the semantic annotation of image regions.
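As a rough illustration of the kind of pipeline this abstract describes, the sketch below (not the authors' algorithm; the relation set, linguistic hedges, and sentence template are assumptions) turns fuzzy degrees of spatial relations between two labelled image regions into a short natural-language explanation.

```python
# Minimal sketch: derive fuzzy degrees of two spatial relations between
# region centroids and verbalise the strongest ones with linguistic hedges.
# Relation set, hedge thresholds, and sentence template are illustrative.
import math

def relation_degrees(a, b):
    """Fuzzy degrees of 'to the right of' and 'above' for centroid a vs b."""
    dx, dy = a[0] - b[0], a[1] - b[1]      # image coordinates: y grows downward
    angle = math.atan2(dy, dx)
    return {
        "to the right of": max(0.0, math.cos(angle)),
        "above": max(0.0, -math.sin(angle)),
    }

def hedge(degree):
    if degree >= 0.8:
        return "clearly"
    if degree >= 0.5:
        return "mostly"
    if degree >= 0.2:
        return "somewhat"
    return None                            # too weak to mention

def explain(label_a, centroid_a, label_b, centroid_b):
    clauses = []
    for relation, degree in relation_degrees(centroid_a, centroid_b).items():
        h = hedge(degree)
        if h:
            clauses.append(f"{h} {relation}")
    body = " and ".join(clauses) or "near"
    return f"Region '{label_a}' was annotated because it is {body} region '{label_b}'."

print(explain("roof", (120, 30), "wall", (115, 90)))
# -> "Region 'roof' was annotated because it is clearly above region 'wall'."
```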
2021-03-04
Sejr, J. H., Zimek, A., Schneider-Kamp, P..  2020.  Explainable Detection of Zero Day Web Attacks. 2020 3rd International Conference on Data Intelligence and Security (ICDIS). :71–78.

The detection of malicious HTTP(S) requests is a pressing concern in cyber security, in particular given the proliferation of HTTP-based (micro-)service architectures. In addition to rule-based systems for known attacks, anomaly detection has been shown to be a promising approach for unknown (zero-day) attacks. This article extends existing work by integrating outlier explanations for individual requests into an end-to-end pipeline; these end-to-end explanations reflect the internal workings of the pipeline. Empirically, we show that the explanations found coincide with manually labelled explanations for the identified outliers, allowing security professionals to quickly identify and understand malicious requests.
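To give the flavour of explanation-augmented request anomaly detection, the sketch below is a minimal stand-in, not the paper's end-to-end pipeline: it swaps in scikit-learn's IsolationForest as the outlier detector, and the hand-crafted request features and z-score attribution are assumptions.

```python
# Minimal sketch: score HTTP requests with an unsupervised outlier detector
# over simple hand-crafted features and attach a per-request explanation
# based on feature z-scores (illustrative, not the authors' method).
import numpy as np
from sklearn.ensemble import IsolationForest

FEATURES = ["length", "digits", "special_chars", "percent_encodings"]

def featurize(request: str) -> list:
    return [
        len(request),
        sum(c.isdigit() for c in request),
        sum(not c.isalnum() for c in request),
        request.count("%"),
    ]

normal = ["GET /index.html", "GET /img/logo.png", "POST /login user=alice",
          "GET /search?q=shoes", "GET /about", "GET /contact"]
X = np.array([featurize(r) for r in normal], dtype=float)
detector = IsolationForest(random_state=0).fit(X)
mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-9

def explain(request: str) -> str:
    x = np.array(featurize(request), dtype=float)
    score = detector.decision_function([x])[0]     # lower means more anomalous
    z = (x - mu) / sigma
    top = max(range(len(FEATURES)), key=lambda i: abs(z[i]))
    return (f"score={score:+.3f}; most unusual feature: "
            f"{FEATURES[top]}={x[top]:.0f} (z={z[top]:+.1f})")

print(explain("GET /item?id=1%27%20OR%20%271%27=%271"))   # SQLi-style request
```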

2021-03-01
Sarathy, N., Alsawwaf, M., Chaczko, Z..  2020.  Investigation of an Innovative Approach for Identifying Human Face-Profile Using Explainable Artificial Intelligence. 2020 IEEE 18th International Symposium on Intelligent Systems and Informatics (SISY). :155–160.
Human identification is a well-researched topic that keeps evolving. Advances in technology have made it easy to train models, or to use existing ones, to detect several features of the human face. When it comes to identifying a human face from the side, there are many opportunities to advance biometric identification research further. This paper investigates human face identification based on the side profile by extracting facial features and diagnosing the feature sets with geometric ratio expressions, which are computed into feature vectors. The last stage uses weighted means to measure similarity. The research addresses this problem with an eXplainable Artificial Intelligence (XAI) approach. Findings, based on a small dataset, indicate that the approach offers encouraging results, and further investigation could have a significant impact on how face profiles are identified. The performance of the proposed system is validated using metrics such as Precision, False Acceptance Rate, False Rejection Rate and True Positive Rate. Multiple simulations indicate an Equal Error Rate of 0.89.
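The stated pipeline (geometric ratios of profile landmarks, feature vectors, weighted-mean similarity) can be sketched as follows; the landmark names, ratio set, and weights are illustrative assumptions rather than the paper's exact features.

```python
# Minimal sketch: geometric ratios of side-profile landmarks form a feature
# vector, and a weighted mean of per-ratio similarities scores a candidate.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def ratio_vector(lm):
    """lm: dict of 2-D landmarks on the face profile (hypothetical names)."""
    nose_len  = dist(lm["nose_bridge"], lm["nose_tip"])
    chin_nose = dist(lm["chin"], lm["nose_tip"])
    forehead  = dist(lm["forehead"], lm["nose_bridge"])
    return [nose_len / chin_nose, forehead / nose_len, forehead / chin_nose]

def weighted_similarity(v, w, weights=(0.5, 0.3, 0.2)):
    """Weighted mean of per-ratio similarities in [0, 1]."""
    sims = [1.0 - abs(a - b) / max(a, b) for a, b in zip(v, w)]
    return sum(s * wt for s, wt in zip(sims, weights)) / sum(weights)

enrolled = {"nose_bridge": (50, 40), "nose_tip": (70, 60),
            "chin": (48, 95), "forehead": (52, 10)}
probe    = {"nose_bridge": (51, 42), "nose_tip": (71, 61),
            "chin": (47, 96), "forehead": (53, 11)}

score = weighted_similarity(ratio_vector(enrolled), ratio_vector(probe))
print(f"similarity = {score:.3f}")   # accept if above a tuned threshold
```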
Houzé, É, Diaconescu, A., Dessalles, J.-L., Mengay, D., Schumann, M..  2020.  A Decentralized Approach to Explanatory Artificial Intelligence for Autonomic Systems. 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C). :115–120.
While Explanatory AI (XAI) is attracting increasing interest from academic research, most AI-based solutions still rely on black-box methods. This is unsuitable for certain domains, such as smart homes, where transparency is key to gaining user trust and solution adoption. Moreover, smart homes are challenging environments for XAI, as they are decentralized systems that undergo runtime changes. We aim to develop an XAI solution for addressing problems that an autonomic management system either could not resolve or resolved in a surprising manner. This implies situations where the current state of affairs is not what the user expected, hence requiring an explanation. The objective is to resolve the apparent conflict between expectation and observation through understandable logical steps, thus generating an argumentative dialogue. While focusing on the smart home domain, our approach is intended to be generic and transferable to other cyber-physical systems that pose similar challenges. This position paper proposes a decentralized algorithm, called D-CAN, and its corresponding generic decentralized architecture. The approach is particularly suited to SISSY systems, as it enables XAI functions to be extended and updated as devices join and leave the managed system dynamically. We illustrate our proposal via several representative case studies from the smart home domain.
Meskauskas, Z., Jasinevicius, R., Kazanavicius, E., Petrauskas, V..  2020.  XAI-Based Fuzzy SWOT Maps for Analysis of Complex Systems. 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–8.
The classical SWOT methodology, and many of the tools based on it, are very static: they are applied to a single stable project and lack dynamics [1]. This paper proposes combining several SWOT analyses, enriched with the computing with words (CWW) paradigm, into a single network. In this network, each individual analysis of a situation is treated as a node. The whole structure is based on fuzzy cognitive maps (FCM) with forward and backward chaining, and is therefore called fuzzy SWOT maps. The fuzzy SWOT maps methodology newly introduces dynamics by letting projects interact, as they do in a real dynamic environment. The whole fuzzy SWOT maps network has explainable artificial intelligence (XAI) traits because each node in the network is a "white box": the entire reasoning chain can be tracked and the rules checked to determine why a particular decision was made, or why and how one project affects another. To confirm the viability of the approach, a case with three interacting projects was analyzed with a prototype software tool that we developed, and the results are presented.
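The forward chaining mentioned here follows the standard fuzzy cognitive map update rule; a minimal sketch is given below, with an illustrative node set (one SWOT aspect per project) and edge weights that are not taken from the paper.

```python
# Minimal sketch of FCM forward chaining; nodes and weights are illustrative.
import numpy as np

nodes = ["Project A: strengths", "Project A: threats", "Project B: opportunities"]

# W[i, j]: signed influence of node i on node j, in [-1, 1].
W = np.array([[0.0, -0.6,  0.5],
              [0.4,  0.0, -0.3],
              [0.0,  0.2,  0.0]])

def sigmoid(x, lam=2.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def forward_chain(state, steps=20, tol=1e-4):
    """Iterate the FCM until activations stabilise; the sequence of updates
    is the traceable 'white box' reasoning chain the abstract refers to."""
    for _ in range(steps):
        new_state = sigmoid(state + state @ W)
        if np.max(np.abs(new_state - state)) < tol:
            break
        state = new_state
    return state

initial = np.array([0.8, 0.3, 0.5])          # current SWOT assessments
final = forward_chain(initial)
for name, before, after in zip(nodes, initial, final):
    print(f"{name}: {before:.2f} -> {after:.2f}")
```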
2020-08-28
Parafita, Álvaro, Vitrià, Jordi.  2019.  Explaining Visual Models by Causal Attribution. 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). :4167–4175.

Model explanations based on purely observational data cannot compute the effects of features reliably, due to their inability to estimate how altering one factor could affect the rest. We argue that explanations should be based on the causal model of the data and on the derived intervened causal models, which represent the data distribution subject to interventions. With these models, we can compute counterfactuals: new samples that tell us how the model reacts to feature changes in our input. We propose a novel explanation methodology based on Causal Counterfactuals and identify the limitations of current Image Generative Models in their application to counterfactual creation.
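To make the counterfactual idea concrete, the sketch below holds the exogenous noise of an observed sample fixed, intervenes on one factor, and regenerates the sample in a toy structural causal model; the variables and equations are illustrative assumptions, not the paper's generative model.

```python
# Minimal sketch of a counterfactual query in a toy structural causal model:
# keep the sample's exogenous noise, apply do(age := 10), and recompute the
# descendant attributes to see how a downstream model's input would change.
def forward(age, u_beard, u_glasses):
    """Toy SCM: beard depends on age; glasses depend on age and beard."""
    beard = 1 if (0.02 * age + u_beard) > 0.5 else 0
    glasses = 1 if (0.01 * age + 0.3 * beard + u_glasses) > 0.6 else 0
    return beard, glasses

# Observed (factual) sample and its exogenous noise.
age, u_beard, u_glasses = 40, 0.1, 0.2
factual = forward(age, u_beard, u_glasses)

# Intervention do(age := 10): same noise, descendants recomputed.
counterfactual = forward(10, u_beard, u_glasses)

print("factual (beard, glasses):        ", factual)        # (1, 1)
print("counterfactual under do(age=10): ", counterfactual)  # (0, 0)
# Feeding the regenerated sample to a classifier shows how it reacts to the
# intervened feature while all other exogenous factors are held fixed.
```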