Title | LISA : Enhance the explainability of medical images unifying current XAI techniques |
Publication Type | Conference Paper |
Year of Publication | 2022 |
Authors | Abeyagunasekera, Sudil Hasitha Piyath, Perera, Yuvin, Chamara, Kenneth, Kaushalya, Udari, Sumathipala, Prasanna, Senaweera, Oshada |
Conference Name | 2022 IEEE 7th International conference for Convergence in Technology (I2CT) |
Keywords | Additives, anchors, Chest X-ray, CNN, CXR, Debugging, explainable artificial intelligence, Integrated Gradients, LIME, LISA, Local Interpretable Model Agnostic Explanations, Medical diagnosis, Mission critical systems, Neural networks, Predictive models, pubcrawl, resilience, Resiliency, Scalability, SHAP, Shapley Additive Explanations, transfer learning, Unified Explanations, xai |
Abstract | This work proposed a unified approach, LISA, to increase the explainability of predictions made by Convolutional Neural Networks (CNNs) on medical images using currently available Explainable Artificial Intelligence (XAI) techniques. The method incorporates multiple techniques, namely Local Interpretable Model-Agnostic Explanations (LIME), Integrated Gradients, Anchors, and Shapley Additive Explanations (SHAP), a Shapley-value-based approach, to provide explanations for the predictions of black-box models. This unified method increases confidence in a black-box model's decisions so that it can be employed in crucial applications under the supervision of human specialists. In this work, a Chest X-ray (CXR) classification model for identifying Covid-19 patients is trained using transfer learning to illustrate the applicability of XAI techniques and of the unified method (LISA) to explain model predictions. To derive predictions, an ImageNet-based Inception V2 model is utilized as the transfer-learning model. |
DOI | 10.1109/I2CT54291.2022.9824840 |
Citation Key | abeyagunasekera_lisa_2022 |