Biblio
Filters: Author is Anderson, Derek T.
Attribution Modeling for Deep Morphological Neural Networks using Saliency Maps. 2021 International Joint Conference on Neural Networks (IJCNN), pp. 1–8, 2021.
Mathematical morphology has been explored in deep learning architectures, as a substitute for convolution, for problems like pattern recognition and object detection. One major advantage of using morphology in deep learning is the utility of morphological erosion and dilation. Specifically, these operations naturally embody interpretability due to their underlying connections to the analysis of geometric structures. While the use of these operations results in explainable learned filters, morphological deep learning lacks attribution modeling, i.e., a paradigm for specifying which areas of the original observed image are important. By contrast, convolution-based deep learning has achieved attribution modeling through a variety of neural eXplainable Artificial Intelligence (XAI) paradigms (e.g., saliency maps, integrated gradients, guided backpropagation, and gradient class activation mapping). The problem for morphology-based deep learning is that these XAI methods do not have a morphological interpretation due to the differences in the underlying mathematics. Herein, we extend the neural XAI paradigm of saliency maps to morphological deep learning and, in doing so, provide an example of morphological attribution modeling. Our qualitative results highlight some advantages of using morphological attribution modeling.
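As a rough illustration of the two ingredients named in the abstract, the sketch below pairs a learnable grayscale dilation layer with a plain gradient saliency map in PyTorch. This is not the paper's implementation; the layer, the toy classifier head, the input size, and the target class are all hypothetical.

# Illustrative sketch only (not the paper's method): a grayscale dilation
# layer and a vanilla gradient saliency map, written with PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Dilation2d(nn.Module):
    """Grayscale dilation with a learnable structuring element.

    Each output pixel is the max over its neighborhood of (input + weight),
    the morphological counterpart of convolution's sum of products.
    """
    def __init__(self, in_channels, kernel_size=3):
        super().__init__()
        self.kernel_size = kernel_size
        self.weight = nn.Parameter(torch.zeros(in_channels, kernel_size * kernel_size))

    def forward(self, x):
        # x: (N, C, H, W) -> neighborhoods: (N, C, k*k, H*W)
        n, c, h, w = x.shape
        patches = F.unfold(x, self.kernel_size, padding=self.kernel_size // 2)
        patches = patches.view(n, c, self.kernel_size * self.kernel_size, h * w)
        # Dilation: max over the neighborhood of (pixel value + structuring element).
        out = (patches + self.weight.view(1, c, -1, 1)).max(dim=2).values
        return out.view(n, c, h, w)

# Toy model: one morphological layer followed by a linear classifier (hypothetical).
model = nn.Sequential(Dilation2d(1), nn.Flatten(), nn.Linear(28 * 28, 10))

# Vanilla saliency map: gradient of a class score with respect to the input image.
x = torch.randn(1, 1, 28, 28, requires_grad=True)
score = model(x)[0, 3]             # class index 3 chosen arbitrarily
score.backward()
saliency = x.grad.abs().squeeze()  # (28, 28) attribution map over input pixels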
Actionable XAI for the Fuzzy Integral. 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), pp. 1–8, 2021.
The adoption of artificial intelligence (AI) into domains that impact human life (healthcare, agriculture, security and defense, etc.) has led to an increased demand for explainable AI (XAI). Herein, we focus on an underrepresented piece of the XAI puzzle: information fusion. To date, a number of low-level XAI explanation methods have been proposed for the fuzzy integral (FI). However, these explanations are tailored to experts, and it is not always clear what to do with the information they return. In this article, we first review and categorize existing FI work according to recent XAI nomenclature. Second, we identify a set of initial actions that a user can take in response to these low-level statistical, graphical, local, and linguistic XAI explanations. Third, we investigate the design of an interactive, user-friendly XAI report. Two case studies, one synthetic and one real, show the results of following the recommended actions to understand and improve tasks involving classification.
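For readers unfamiliar with the fuzzy integral referenced in this abstract, the following is a minimal, hypothetical sketch of the discrete Choquet fuzzy integral over three sources. The fuzzy measure values are invented for illustration, and the per-source weights it records only suggest the kind of low-level information that FI explanations build on; this is not the paper's code or its XAI report.

# Illustrative sketch only: a discrete Choquet fuzzy integral over three
# sources, recording the effective weight each source receives.
# Hypothetical fuzzy measure g: maps each subset of sources {0, 1, 2} to [0, 1].
g = {
    frozenset(): 0.0,
    frozenset({0}): 0.4, frozenset({1}): 0.3, frozenset({2}): 0.2,
    frozenset({0, 1}): 0.8, frozenset({0, 2}): 0.6, frozenset({1, 2}): 0.5,
    frozenset({0, 1, 2}): 1.0,
}

def choquet(h, g):
    """Discrete Choquet integral of support values h (one per source) w.r.t. measure g."""
    order = sorted(range(len(h)), key=lambda i: h[i], reverse=True)  # sort sources by support
    total, prev_g = 0.0, 0.0
    coalition = set()
    weights = []  # effective weight each source receives for this particular input
    for i in order:
        coalition.add(i)
        g_now = g[frozenset(coalition)]
        w = g_now - prev_g
        weights.append((i, w))
        total += h[i] * w
        prev_g = g_now
    return total, weights

value, weights = choquet([0.9, 0.2, 0.6], g)
print(value)    # fused decision value (0.56 for these toy numbers)
print(weights)  # per-source weights; one starting point for low-level FI explanations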