Biblio
Interpreting a Black-Box Model used for SCADA Attack Detection in Gas Pipelines Control System. 2020 IEEE 17th India Council International Conference (INDICON). 2020:1-7.
Various Machine Learning techniques are considered "black-boxes" because of their limited interpretability and explainability. This cannot be afforded, especially in the domain of Cyber-Physical Systems, where failures can cause huge losses to industrial and government infrastructure. Supervisory Control And Data Acquisition (SCADA) systems need to detect cyber-attacks and be protected from them. Thus, we need to adopt approaches that secure the system, explain the predictions made by the model, and interpret the model in a human-understandable format. Recently, Autoencoders have shown great success in attack detection in SCADA systems. Numerous interpretable machine learning techniques have been developed to help explain and interpret models. The work presented here is a novel approach that uses techniques such as Local Interpretable Model-Agnostic Explanations (LIME) and Layer-wise Relevance Propagation (LRP) to interpret Autoencoder networks trained on a Gas Pipelines Control System to detect attacks in the system.
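
As a rough illustration of the kind of pipeline the abstract describes, the sketch below trains a small dense autoencoder on stand-in gas-pipeline sensor features and then applies LIME in regression mode to attribute a flagged sample's reconstruction error to individual features. The feature names, layer sizes, training data, and selection of the flagged sample are assumptions made for illustration, not the configuration or dataset used in the paper.

# Sketch: explaining an autoencoder-based SCADA attack detector with LIME.
# Feature names, layer sizes, and the synthetic data below are illustrative
# assumptions, not the setup reported in the paper.
import numpy as np
from tensorflow.keras import layers, models
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["pressure", "setpoint", "pump_state", "valve_state",
                 "flow_rate", "command_response_time"]   # hypothetical features
X_train = np.random.rand(1000, len(feature_names))       # stand-in for normal traffic
X_test = np.random.rand(50, len(feature_names))          # stand-in for unseen traffic

# Dense autoencoder trained to reconstruct normal operation only.
autoencoder = models.Sequential([
    layers.Input(shape=(X_train.shape[1],)),
    layers.Dense(4, activation="relu"),
    layers.Dense(2, activation="relu"),
    layers.Dense(4, activation="relu"),
    layers.Dense(X_train.shape[1], activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_train, X_train, epochs=20, batch_size=32, verbose=0)

def reconstruction_error(samples):
    """Per-sample mean squared reconstruction error; high values suggest attacks."""
    recon = autoencoder.predict(samples, verbose=0)
    return np.mean((samples - recon) ** 2, axis=1)

# LIME in regression mode explains the scalar anomaly score, attributing it
# to individual sensor/command features for the most anomalous test sample.
explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                 mode="regression")
flagged = X_test[np.argmax(reconstruction_error(X_test))]
explanation = explainer.explain_instance(flagged, reconstruction_error,
                                         num_features=len(feature_names))
print(explanation.as_list())   # (feature condition, contribution) pairs

Treating the reconstruction error as the quantity to be explained is one plausible way to apply a model-agnostic explainer to an autoencoder detector; the paper's exact formulation, and its use of LRP, may differ.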