Biblio
Filters: Keyword is LIME
LISA: Enhance the explainability of medical images unifying current XAI techniques. 2022 IEEE 7th International Conference for Convergence in Technology (I2CT). :1–9.
2022. This work proposes a unified approach, LISA, to increase the explainability of predictions made by Convolutional Neural Networks (CNNs) on medical images using currently available Explainable Artificial Intelligence (XAI) techniques. The method incorporates multiple techniques, namely Local Interpretable Model-Agnostic Explanations (LIME), Integrated Gradients, Anchors, and Shapley Additive Explanations (SHAP), a Shapley-value-based approach, to provide explanations for the predictions of black-box models. This unified method increases confidence in a black-box model's decisions, allowing it to be employed in crucial applications under the supervision of human specialists. To illustrate the applicability of the XAI techniques and the unified method (LISA), a Chest X-ray (CXR) classification model for identifying Covid-19 patients is trained using transfer learning, with an ImageNet-based Inception V2 model as the backbone.
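To make the workflow concrete, the sketch below shows how LIME, one of the techniques LISA unifies, could be applied to a single chest X-ray prediction. It is illustrative only, not the paper's code: `model` (a trained Keras classifier returning class probabilities) and `cxr_image` (one input image) are hypothetical names, and the paper's exact model and preprocessing are not reproduced here.

```python
# Illustrative sketch, not the paper's code. Assumes a trained Keras
# classifier `model` (hypothetical) mapping (N, 224, 224, 3) arrays in
# [0, 1] to class probabilities, and `cxr_image`, one such image.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(images):
    # LIME calls this with a batch of perturbed copies of the image
    # and expects per-class probabilities back.
    return model.predict(np.asarray(images), verbose=0)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    cxr_image,
    predict_fn,
    top_labels=2,      # explain the two highest-scoring classes
    hide_color=0,      # "removed" superpixels are blacked out
    num_samples=1000,  # perturbed samples used to fit the local model
)

# Highlight the superpixels that most support the top predicted class.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
overlay = mark_boundaries(img, mask)
```

The resulting overlay marks the image regions the surrogate model found most supportive of the predicted class, which is the kind of per-prediction evidence a human specialist can review.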
Interpreting a Black-Box Model used for SCADA Attack detection in Gas Pipelines Control System. 2020 IEEE 17th India Council International Conference (INDICON). :1–7.
2020. Various machine learning techniques are considered "black boxes" because of their limited interpretability and explainability. This cannot be afforded in the domain of Cyber-Physical Systems, where attacks can cause huge losses to the infrastructure of industries and governments. Supervisory Control And Data Acquisition (SCADA) systems need to detect and be protected from cyber-attacks. We therefore need approaches that keep the system secure, explain the predictions made by the model, and interpret the model in a human-understandable format. Recently, autoencoders have shown great success in attack detection in SCADA systems, and numerous interpretable machine learning techniques have been developed to help explain and interpret models. The work presented here is a novel approach that uses techniques such as Local Interpretable Model-Agnostic Explanations (LIME) and Layer-wise Relevance Propagation (LRP) to interpret autoencoder networks trained on a gas pipeline control system dataset to detect attacks on the system.
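A minimal sketch of the LIME side of this approach follows: an autoencoder is trained on normal SCADA records, reconstruction error serves as the anomaly score, and LIME attributes that score to individual features. `X_train` (normal sensor records) and `suspicious_row` (one flagged record) are hypothetical placeholders, and the tiny architecture is illustrative rather than the authors' exact network; the paper's LRP component is not shown.

```python
# Illustrative sketch under the assumptions above, not the authors' code.
import numpy as np
import tensorflow as tf
from lime.lime_tabular import LimeTabularExplainer

n_features = X_train.shape[1]

# Small autoencoder trained to reconstruct normal SCADA records.
autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(n_features,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(4, activation="relu"),   # bottleneck
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(n_features),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_train, X_train, epochs=20, batch_size=64, verbose=0)

def reconstruction_error(rows):
    # One anomaly score per record: mean squared reconstruction error.
    # Records the autoencoder reconstructs poorly are flagged as attacks.
    rows = np.asarray(rows, dtype=np.float32)
    recon = autoencoder.predict(rows, verbose=0)
    return np.mean((rows - recon) ** 2, axis=1)

# Explain why one suspicious record received a high anomaly score:
# LIME fits a local surrogate over perturbed copies of the record.
explainer = LimeTabularExplainer(X_train, mode="regression")
exp = explainer.explain_instance(suspicious_row, reconstruction_error,
                                 num_features=5)
print(exp.as_list())  # features that most increased the error
```

Treating the reconstruction error as a regression target lets the standard tabular LIME explainer be reused unchanged, which is what makes the detector's decisions inspectable per record.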