An Assessment of the Usability of Machine Learning Based Tools for the Security Operations Center
Title | An Assessment of the Usability of Machine Learning Based Tools for the Security Operations Center |
Publication Type | Conference Paper |
Year of Publication | 2020 |
Authors | Oesch, S., Bridges, R., Smith, J., Beaver, J., Goodall, J., Huffer, K., Miles, C., Scofield, D. |
Conference Name | 2020 International Conferences on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics (Cybermatics) |
Date Published | Nov. 2020 |
Publisher | IEEE |
ISBN Number | 978-1-7281-7647-5 |
Keywords | Air gaps, Analytical models, composability, Government, Human Behavior, human factors, machine learning, Malware, Metrics, predictability, pubcrawl, resilience, Resiliency, Scalability, security, Security Heuristics, Tools, usability |
Abstract | Gartner, a large research and advisory company, anticipates that by 2024, 80% of security operations centers (SOCs) will use machine learning (ML) based solutions to enhance their operations (https://www.ciodive.com/news/how-data-science-tools-can-lighten-the-load-for-cybersecurity-teams/572209/). In light of such widespread adoption, it is vital for the research community to identify and address usability concerns. This work presents the results of the first in situ usability assessment of ML-based tools. With the support of the US Navy, we leveraged the National Cyber Range, a large, air-gapped cyber testbed equipped with state-of-the-art network and user emulation capabilities, to study six US Naval SOC analysts' usage of two tools. Our analysis identified several serious usability issues, including multiple violations of established usability heuristics for user interface design. We also discovered that analysts lacked a clear mental model of how these tools generate scores, resulting in mistrust and/or misuse of the tools themselves. Surprisingly, we found no correlation between analysts' level of education or years of experience and their performance with either tool, suggesting that other factors, such as prior background knowledge or personality, play a significant role in ML-based tool usage. Our findings demonstrate that ML-based security tool vendors must put a renewed focus on working with analysts, both experienced and inexperienced, to ensure that their systems are usable and useful in real-world security operations settings. |
URL | https://ieeexplore.ieee.org/document/9291520/ |
DOI | 10.1109/iThings-GreenCom-CPSCom-SmartData-Cybermatics50389.2020.00111 |
Citation Key | oesch_assessment_2020 |