Malhotra, Diksha, Srivastava, Shubham, Saini, Poonam, Singh, Awadhesh Kumar.
2021.
Blockchain Based Audit Trailing of XAI Decisions: Storing on IPFS and Ethereum Blockchain. 2021 International Conference on COMmunication Systems & NETworkS (COMSNETS). :1–5.
Explainable Artificial Intelligence (XAI) generates explanations that regulators use to audit responsibility in case of a catastrophic failure. These explanations are currently stored in centralized systems. However, due to the lack of security and traceability in centralized systems, the respective owner may tamper with the explanations for their own convenience in order to avoid a penalty. Blockchain has emerged as a promising technology that might overcome these security limitations. Hence, in this paper, we propose a novel Blockchain-based framework for proof-of-authenticity of XAI decisions. The framework stores the explanations in the InterPlanetary File System (IPFS) due to the storage limitations of the Ethereum Blockchain. Further, a Smart Contract is designed and deployed to supervise the storage and retrieval of explanations from the Ethereum Blockchain. Furthermore, to add cryptographic security to the network, each explanation's hash is calculated and stored in the Blockchain as well. Lastly, we perform a cost and security analysis of the proposed system.
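The hash-and-verify flow described in this abstract can be sketched in Python. This is a hedged illustration, not the authors' implementation: the serialization format, the hash algorithm (SHA-256 here; Ethereum tooling often uses Keccak-256), and the record's field names are all assumptions.

```python
import hashlib
import json


def explanation_digest(explanation: dict) -> str:
    """Canonically serialize an XAI explanation and return its hex digest.

    The digest is the small fingerprint that would be stored on-chain,
    while the full explanation is kept off-chain (e.g. pinned in IPFS).
    """
    # sort_keys + compact separators give a deterministic byte encoding
    payload = json.dumps(explanation, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


def verify_explanation(explanation: dict, stored_digest: str) -> bool:
    """Recompute the digest and compare it against the on-chain copy."""
    return explanation_digest(explanation) == stored_digest


# Hypothetical explanation record for a single model decision
record = {"model": "credit-scoring-v2", "decision": "deny",
          "feature_attributions": {"income": -0.41, "age": 0.05}}
digest = explanation_digest(record)

# Any tampering with the off-chain explanation changes the digest
tampered = dict(record, decision="approve")
assert verify_explanation(record, digest)
assert not verify_explanation(tampered, digest)
```

Because only the fixed-size digest goes on-chain, the scheme sidesteps Ethereum's storage costs while still letting an auditor detect any post-hoc edit to the explanation.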
Murray, Bryce, Anderson, Derek T., Havens, Timothy C..
2021.
Actionable XAI for the Fuzzy Integral. 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–8.
The adoption of artificial intelligence (AI) into domains that impact human life (healthcare, agriculture, security and defense, etc.) has led to an increased demand for explainable AI (XAI). Herein, we focus on an underrepresented piece of the XAI puzzle: information fusion. To date, a number of low-level XAI explanation methods have been proposed for the fuzzy integral (FI). However, these explanations are tailored to experts, and it is not always clear what to do with the information they return. In this article, we first review and categorize existing FI work according to recent XAI nomenclature. Second, we identify a set of initial actions that a user can take in response to these low-level statistical, graphical, local, and linguistic XAI explanations. Third, we investigate the design of an interactive, user-friendly XAI report. Two case studies, one synthetic and one real, show the results of following the recommended actions to understand and improve classification tasks.
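For concreteness, the fuzzy integral this line of work explains can be sketched as a discrete Choquet integral. The sketch below is illustrative only: the source names, support values, and fuzzy measure are invented, not taken from the paper.

```python
from typing import Dict, FrozenSet


def choquet_integral(h: Dict[str, float],
                     g: Dict[FrozenSet[str], float]) -> float:
    """Discrete Choquet fuzzy integral of support values h w.r.t. measure g.

    Sources are visited in order of descending support; each contributes
    its support value weighted by the marginal gain in measure obtained
    by adding it to the growing chain of sources.
    """
    ordered = sorted(h, key=h.get, reverse=True)
    total, prev = 0.0, 0.0
    chain = set()
    for src in ordered:
        chain.add(src)
        weight = g[frozenset(chain)]
        total += h[src] * (weight - prev)
        prev = weight
    return total


# Three hypothetical information sources and a fuzzy measure over them
h = {"a": 0.9, "b": 0.6, "c": 0.3}
g = {frozenset(): 0.0,
     frozenset("a"): 0.4, frozenset("b"): 0.3, frozenset("c"): 0.2,
     frozenset("ab"): 0.8, frozenset("ac"): 0.6, frozenset("bc"): 0.5,
     frozenset("abc"): 1.0}
```

The per-source terms `h[src] * (weight - prev)` are exactly the kind of low-level quantities the article's XAI explanations decompose: they show which sources, and which coalitions of sources, drove the fused score.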