Sarathy, N., Alsawwaf, M., Chaczko, Z.
2020.
Investigation of an Innovative Approach for Identifying Human Face-Profile Using Explainable Artificial Intelligence. 2020 IEEE 18th International Symposium on Intelligent Systems and Informatics (SISY). :155–160.
Human identification is a well-researched topic that keeps evolving. Advances in technology have made it easy to train models, or to reuse existing ones, to detect several features of the human face. When it comes to identifying a human face from the side, there are many opportunities to advance biometric identification research further. This paper investigates human face identification based on the side profile by extracting facial features and diagnosing the feature sets with geometric ratio expressions. These geometric ratio expressions are computed into feature vectors. The last stage involves the use of weighted means to measure similarity. This research addresses the problem using an eXplainable Artificial Intelligence (XAI) approach. Findings from this research, based on a small dataset, indicate that the proposed approach offers encouraging results. Further investigation could have a significant impact on how face profiles can be identified. Performance of the proposed system is validated using metrics such as Precision, False Acceptance Rate, False Rejection Rate and True Positive Rate. Multiple simulations indicate an Equal Error Rate of 0.89.
Houzé, É., Diaconescu, A., Dessalles, J.-L., Mengay, D., Schumann, M.
2020.
A Decentralized Approach to Explanatory Artificial Intelligence for Autonomic Systems. 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C). :115–120.
While Explanatory AI (XAI) is attracting increasing interest in academic research, most AI-based solutions still rely on black-box methods. This is unsuitable for certain domains, such as smart homes, where transparency is key to gaining user trust and solution adoption. Moreover, smart homes are challenging environments for XAI, as they are decentralized systems that undergo runtime changes. We aim to develop an XAI solution for addressing problems that an autonomic management system either could not resolve or resolved in a surprising manner. This implies situations where the current state of affairs is not what the user expected, hence requiring an explanation. The objective is to resolve the apparent conflict between expectation and observation through understandable logical steps, thus generating an argumentative dialogue. While focusing on the smart home domain, our approach is intended to be generic and transferable to other cyber-physical systems that pose similar challenges. This position paper focuses on proposing a decentralized algorithm, called D-CAN, and its corresponding generic decentralized architecture. This approach is particularly suited to SISSY systems, as it enables XAI functions to be extended and updated as devices join and leave the managed system dynamically. We illustrate our proposal via several representative case studies from the smart home domain.
Meskauskas, Z., Jasinevicius, R., Kazanavicius, E., Petrauskas, V.
2020.
XAI-Based Fuzzy SWOT Maps for Analysis of Complex Systems. 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–8.
The classical SWOT methodology and many of the tools based on it are very static: they are applied to a single stable project and lack dynamics [1]. This paper proposes the idea of combining several SWOT analyses, enriched with the computing with words (CWW) paradigm, into a single network. In this network, an individual analysis of the situation is treated as a node. The whole structure is based on fuzzy cognitive maps (FCM) with forward and backward chaining, so it is called fuzzy SWOT maps. The fuzzy SWOT maps methodology introduces dynamics by allowing projects to interact, as they do in a real dynamic environment. The whole fuzzy SWOT maps network structure has explainable artificial intelligence (XAI) traits because each node in the network is a "white box": the entire reasoning chain can be tracked and checked to determine why a particular decision was made, or why and how one project affects another, which increases explainability. To confirm the viability of the approach, a case with three interacting projects was analyzed using a prototype software tool, and the results are reported.