LISA : Enhance the explainability of medical images unifying current XAI techniques

Title: LISA : Enhance the explainability of medical images unifying current XAI techniques
Publication Type: Conference Paper
Year of Publication: 2022
Authors: Abeyagunasekera, Sudil Hasitha Piyath; Perera, Yuvin; Chamara, Kenneth; Kaushalya, Udari; Sumathipala, Prasanna; Senaweera, Oshada
Conference Name: 2022 IEEE 7th International Conference for Convergence in Technology (I2CT)
Keywords: Additives, anchors, Chest X-ray, CNN, CXR, Debugging, explainable artificial intelligence, Integrated Gradients, LIME, LISA, Local Interpretable Model Agnostic Explanations, Medical diagnosis, Mission critical systems, Neural networks, Predictive models, pubcrawl, resilience, Resiliency, Scalability, SHAP, Shapley Additive Explanations, transfer learning, Unified Explanations, xai
Abstract: This work proposes a unified approach, LISA, to increase the explainability of predictions made by Convolutional Neural Networks (CNNs) on medical images using currently available Explainable Artificial Intelligence (XAI) techniques. The method incorporates multiple techniques, including Local Interpretable Model Agnostic Explanations (LIME), Integrated Gradients, Anchors, and Shapley Additive Explanations (SHAP), a Shapley-values-based approach, to provide explanations for the predictions of black-box models. This unified method increases confidence in a black-box model's decisions, allowing it to be employed in critical applications under the supervision of human specialists. In this work, a Chest X-ray (CXR) classification model for identifying Covid-19 patients is trained using transfer learning to illustrate the applicability of the individual XAI techniques and the unified method (LISA) in explaining model predictions. To derive predictions, an ImageNet-pretrained Inception V2 model is utilized as the transfer learning model.
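To give a concrete sense of one of the techniques the paper unifies, the sketch below implements Integrated Gradients for a toy differentiable model. This is an illustrative stand-in, not the paper's code: the quadratic `model` substitutes for the CNN classifier, and all names are hypothetical. The method averages the input gradient along a straight-line path from a baseline to the input and scales by the input-baseline difference, so the attributions sum to the change in the model's output (the completeness axiom).

```python
def model(x, w):
    # Toy quadratic stand-in for a CNN class score: f(x) = sum(w_i * x_i^2)
    return sum(wi * xi * xi for wi, xi in zip(w, x))

def model_grad(x, w):
    # Analytic input gradient of the toy model: df/dx_i = 2 * w_i * x_i
    return [2.0 * wi * xi for wi, xi in zip(w, x)]

def integrated_gradients(x, baseline, w, steps=100):
    # Midpoint Riemann sum of the gradient along the path from baseline to x,
    # then scaled elementwise by (x - baseline).
    n = len(x)
    avg_grad = [0.0] * n
    for k in range(steps):
        a = (k + 0.5) / steps
        point = [bi + a * (xi - bi) for bi, xi in zip(baseline, x)]
        g = model_grad(point, w)
        avg_grad = [s + gi / steps for s, gi in zip(avg_grad, g)]
    return [(xi - bi) * gi for xi, bi, gi in zip(x, baseline, avg_grad)]

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
w = [0.5, -1.0, 2.0]
attr = integrated_gradients(x, baseline, w)
# Completeness: attributions sum to f(x) - f(baseline)
print(abs(sum(attr) - (model(x, w) - model(baseline, w))) < 1e-9)  # True
```

For an image classifier like the one in the paper, `x` would be the pixel tensor, the baseline is typically an all-black image, and the gradient comes from the deep learning framework rather than an analytic formula.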
DOI: 10.1109/I2CT54291.2022.9824840
Citation Key: abeyagunasekera_lisa_2022