Toward Trustworthy Deep Learning in Security
Title | Toward Trustworthy Deep Learning in Security |
Publication Type | Conference Paper |
Year of Publication | 2018 |
Authors | Go, Wooyoung; Lee, Daewoo |
Conference Name | Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security |
Publisher | ACM |
Conference Location | New York, NY, USA |
ISBN Number | 978-1-4503-5693-0 |
Keywords | classification criteria, composability, convolutional neural networks, Cyber-physical systems, guided grad-cam, Internet of Things, pubcrawl, Resiliency, trustworthiness |
Abstract | In the security area, there has been an increasing tendency to apply deep learning, which is perceived as a black box method because of the lack of understanding of its internal functioning. Can we trust deep learning models when they achieve high test accuracy? Using a visual explanation method, we find that deep learning models used in security tasks can easily focus on semantically non-discriminative parts of input data even though they produce the right answers. Furthermore, when a model is re-trained without any change in the learning procedure (i.e., no change in training/validation data, initialization/optimization methods and hyperparameters), it can focus on significantly different parts of many samples while producing the same answers. For trustworthy deep learning in security, therefore, we argue that it is necessary to verify the classification criteria of deep learning models before deploying them, even though they successfully achieve high test accuracy. |
URL | http://doi.acm.org/10.1145/3243734.3278526 |
DOI | 10.1145/3243734.3278526 |
Citation Key | go_toward_2018 |
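
The abstract refers to inspecting a model's classification criteria with a visual explanation method (the keywords list Guided Grad-CAM). As a rough illustration only, the following is a minimal, hypothetical Grad-CAM sketch in PyTorch — plain Grad-CAM, the saliency component of Guided Grad-CAM, and not the authors' code. The ResNet-18 model, `layer4` target layer, and random input tensor are placeholder assumptions standing in for a security classifier and its data; it assumes a recent PyTorch/torchvision install.

```python
# Hypothetical Grad-CAM sketch: highlights which input regions a CNN
# classifier's prediction depends on, the kind of check the abstract
# argues should precede deployment. Not the paper's implementation.
import torch
import torch.nn.functional as F
from torchvision import models


def grad_cam(model, x, target_layer, class_idx=None):
    """Return a normalized [H, W] heatmap for a single-sample batch x."""
    activations, gradients = [], []

    def fwd_hook(module, inputs, output):
        activations.append(output)
        # Capture the gradient flowing back into this feature map.
        output.register_hook(lambda grad: gradients.append(grad))

    handle = target_layer.register_forward_hook(fwd_hook)
    try:
        logits = model(x)
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()
        model.zero_grad()
        # Backpropagate the score of the chosen class only.
        logits[0, class_idx].backward()
    finally:
        handle.remove()

    acts = activations[0]                              # [1, C, h, w]
    grads = gradients[0]                               # [1, C, h, w]
    weights = grads.mean(dim=(2, 3), keepdim=True)     # channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0].detach(), class_idx


if __name__ == "__main__":
    # Placeholder model and input; substitute the actual classifier and sample.
    model = models.resnet18(weights=None).eval()
    x = torch.randn(1, 3, 224, 224)
    heatmap, pred = grad_cam(model, x, target_layer=model.layer4)
    print(f"predicted class {pred}, heatmap shape {tuple(heatmap.shape)}")
```

Inspecting such heatmaps across samples (and across independently re-trained runs of the same model) is one way to notice the behavior the abstract describes: correct predictions whose evidence lies on semantically non-discriminative parts of the input.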