Biblio

Filters: Keyword is Explainability
2023-09-18
Warmsley, Dana, Waagen, Alex, Xu, Jiejun, Liu, Zhining, Tong, Hanghang.  2022.  A Survey of Explainable Graph Neural Networks for Cyber Malware Analysis. 2022 IEEE International Conference on Big Data (Big Data). :2932–2939.
Malicious cybersecurity activities have become increasingly worrisome for individuals and companies alike. While machine learning methods like Graph Neural Networks (GNNs) have proven successful on the malware detection task, their output is often difficult to understand. Explainable malware detection methods are needed to automatically identify malicious programs and present results to malware analysts in a way that is human interpretable. In this survey, we outline a number of GNN explainability methods and compare their performance on a real-world malware detection dataset. Specifically, we formulate the detection problem as a graph classification problem on malware Control Flow Graphs (CFGs). We find that gradient-based methods outperform perturbation-based methods both in computational expense and on explainer-specific metrics (e.g., Fidelity and Sparsity). Our results provide insights into designing new GNN-based models for cyber malware detection and attribution.
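As a rough aid to the explainer-specific metrics the survey compares on, the sketch below shows one common reading of Fidelity+ and Sparsity for a graph-level explanation. The `predict_proba` interface and the node-set masking are assumptions made for illustration only; they are not the surveyed models' code.

```python
# Hypothetical sketch of the two explainer metrics mentioned in the abstract.
# `predict_proba` stands in for any GNN graph classifier that returns class
# probabilities for a given node subset of the control-flow graph; it is an
# assumed interface, not part of the surveyed models.
from typing import Callable, Sequence, Set

def fidelity_plus(predict_proba: Callable[[Set[int]], Sequence[float]],
                  all_nodes: Set[int],
                  explanation_nodes: Set[int],
                  target_class: int) -> float:
    """Drop in the target-class probability when the nodes the explainer
    marks as important are removed from the graph."""
    p_full = predict_proba(all_nodes)[target_class]
    p_masked = predict_proba(all_nodes - explanation_nodes)[target_class]
    return p_full - p_masked

def sparsity(all_nodes: Set[int], explanation_nodes: Set[int]) -> float:
    """Fraction of the graph left out of the explanation; higher values
    mean a more compact explanation."""
    return 1.0 - len(explanation_nodes) / max(len(all_nodes), 1)
```

Under this reading, a gradient-based explainer can score nodes from a single backward pass, whereas a perturbation-based explainer must re-run the classifier on many masked graphs, which is consistent with the cost gap the survey reports.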
2022-03-23
Matellán, Vicente, Rodríguez-Lera, Francisco-J., Guerrero-Higueras, Ángel-M., Rico, Francisco-Martín, Ginés, Jonatan.  2021.  The Role of Cybersecurity and HPC in the Explainability of Autonomous Robots Behavior. 2021 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO). :1–5.
Autonomous robots are increasingly widespread in our society. These robots need to be safe, reliable, respectful of privacy, not manipulable by external agents, and capable of explaining their behavior in order to be accountable and acceptable in our societies. Companies offering robotic services will need mechanisms that address these issues using High Performance Computing (HPC) facilities, where logs can be stored and off-line forensic analysis performed when required, but such solutions are not yet available in software development frameworks for robots. The aim of this paper is to discuss the implications of and interactions among cybersecurity, safety, and explainability, with the goal of making autonomous robots more trustworthy.
2021-02-16
Kowalski, P., Zocholl, M., Jousselme, A.-L.  2020.  Explainability in threat assessment with evidential networks and sensitivity spaces. 2020 IEEE 23rd International Conference on Information Fusion (FUSION). :1–8.
One of the main threats to underwater communication cables identified in recent years is possible tampering or damage by malicious actors. This paper proposes a solution with explanation abilities to detect and investigate this kind of threat within the evidence theory framework. The reasoning scheme implements the traditional “opportunity-capability-intent” threat model to assess the degree to which a given vessel may pose a threat. The scenario discussed considers a variety of possible pieces of information available from different sources. A source quality model is used to reason with the partially reliable sources, and the impact of this meta-information on the overall assessment is illustrated. Examples of uncertain relationships between the relevant variables are modelled, and the constructed model is used to investigate the probability of threat of four vessels of different types. One of these cases is discussed in more detail to demonstrate the explanation abilities. Explanations about the inference are provided thanks to sensitivity spaces, in which the impact of the different pieces of information on the reasoning is compared.
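For readers unfamiliar with the evidence-theory machinery the abstract leans on, here is a minimal sketch of reliability discounting followed by Dempster's rule of combination. The two-hypothesis frame, the source names, and all mass and reliability values are invented for illustration; this is not the paper's evidential network or its opportunity-capability-intent model.

```python
# Minimal Dempster-Shafer sketch: reliability discounting of two partially
# reliable sources, then Dempster's rule of combination. All numbers and
# source names are made up; they are not taken from the paper's scenario.
from itertools import product

THETA = frozenset({"threat", "no_threat"})  # frame of discernment

def discount(mass, alpha):
    """Shafer discounting: scale each focal element by the source
    reliability alpha and move the remaining mass to the whole frame."""
    out = {A: alpha * v for A, v in mass.items() if A != THETA}
    out[THETA] = 1.0 - alpha + alpha * mass.get(THETA, 0.0)
    return out

def combine(m1, m2):
    """Dempster's rule: conjunctive combination with conflict renormalisation."""
    combined, conflict = {}, 0.0
    for (A, vA), (B, vB) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + vA * vB
        else:
            conflict += vA * vB
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Two partially reliable sources reporting on the same vessel (illustrative values).
m_ais = discount({frozenset({"threat"}): 0.6, THETA: 0.4}, alpha=0.9)
m_radar = discount({frozenset({"no_threat"}): 0.5, THETA: 0.5}, alpha=0.7)
print(combine(m_ais, m_radar))
```

Roughly speaking, a sensitivity space then records how the combined belief in the "threat" hypothesis shifts as each source's masses or reliability are varied, which is what allows the assessment to be explained in terms of the pieces of information that drive it.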
2018-12-10
Ha, Taehyun, Lee, Sangwon, Kim, Sangyeon.  2018.  Designing Explainability of an Artificial Intelligence System. Proceedings of the Technology, Mind, and Society. :14:1–14:1.
Explainability and accuracy of machine learning algorithms usually stand in a trade-off relationship. Algorithms such as deep-learning artificial neural networks achieve high accuracy but low explainability. Because there were only limited ways to inspect the learning and prediction processes of these algorithms, researchers and users could not understand how the results were produced. However, a recent project, explainable artificial intelligence (XAI) by DARPA, showed that AI systems can be both highly explainable and accurate. Several XAI technical reports suggested ways of extracting explainable features and documented their positive effects on users; the results showed that explainability helped users understand and trust the system. However, only a few studies have addressed why explainability brings these positive effects to users. We suggest theoretical reasons drawn from attribution theory and anthropomorphism studies. Through a review, we develop three hypotheses: (1) causal attribution is part of human nature, so a system that provides causal explanations of its process will lead users to attribute the system's results; (2) based on these attributions, users will perceive the system as human-like, which will motivate anthropomorphism; (3) users will then perceive and relate to the system through this anthropomorphism. We provide a research framework for designing causal explainability of an AI system and discuss the expected results of the research.