Toward Trustworthy Deep Learning in Security

Title: Toward Trustworthy Deep Learning in Security
Publication Type: Conference Paper
Year of Publication: 2018
Authors: Go, Wooyoung; Lee, Daewoo
Conference Name: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security
Publisher: ACM
Conference Location: New York, NY, USA
ISBN Number: 978-1-4503-5693-0
Keywords: classification criteria, composability, convolutional neural networks, Cyber-physical systems, guided grad-cam, Internet of Things, pubcrawl, Resiliency, trustworthiness
Abstract:

In the security area, there has been an increasing tendency to apply deep learning, which is perceived as a black-box method because its internal functioning is poorly understood. Can we trust deep learning models when they achieve high test accuracy? Using a visual explanation method, we find that deep learning models used in security tasks can easily focus on semantically non-discriminative parts of the input data even while producing the right answers. Furthermore, when a model is re-trained without any change in the learning procedure (i.e., no change in training/validation data, initialization/optimization methods, or hyperparameters), it can focus on significantly different parts of many samples while still producing the same answers. For trustworthy deep learning in security, therefore, we argue that it is necessary to verify the classification criteria of deep learning models before deploying them, even when they achieve high test accuracy.

URL: http://doi.acm.org/10.1145/3243734.3278526
DOI: 10.1145/3243734.3278526
Citation Key: go_toward_2018
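The Guided Grad-CAM analysis mentioned in the abstract builds on the Grad-CAM class-activation map: channel weights come from globally average-pooling the gradients of the class score with respect to a convolutional layer's feature maps, and the map is the ReLU of the weighted channel sum. Below is a minimal NumPy sketch of that final map computation, assuming the activations and gradients have already been captured from a network; the function name and array shapes are illustrative, not the authors' code.

```python
import numpy as np

def grad_cam_map(activations, gradients):
    """Compute a Grad-CAM heatmap from one conv layer's outputs.

    activations: (K, H, W) feature maps A^k of the chosen layer.
    gradients:   (K, H, W) d(class score)/d(A^k) for the target class.
    Returns a (H, W) map normalized to [0, 1].
    """
    # alpha_k: global average pool of the gradients per channel
    weights = gradients.mean(axis=(1, 2))
    # weighted sum over channels: sum_k alpha_k * A^k
    cam = np.tensordot(weights, activations, axes=1)
    # ReLU keeps only features with a positive influence on the class
    cam = np.maximum(cam, 0)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

Overlaying such a map on the input is what lets the authors see whether a model's decision rests on discriminative or non-discriminative regions across re-trainings.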