Improving Security of Neural Networks in the Identification Module of Decision Support Systems

Title: Improving Security of Neural Networks in the Identification Module of Decision Support Systems
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Monakhov, Yuri; Monakhov, Mikhail; Telny, Andrey; Mazurok, Dmitry; Kuznetsova, Anna
Conference Name: 2020 Ural Symposium on Biomedical Engineering, Radioelectronics and Information Technology (USBEREIT)
Keywords: adversarial attacks, Artificial neural networks, Cyber-physical systems, Decision Support System, Metrics, Neural Network Security, Neural networks, policy-based governance, pubcrawl, Resiliency
Abstract: In recent years, neural networks have been applied to a wide range of tasks. Deep learning algorithms provide state-of-the-art performance in computer vision, NLP, speech recognition, speaker recognition, and many other fields. Despite this strong performance, neural networks have a significant drawback: they are vulnerable to adversarial examples produced by adding small-magnitude perturbations to their inputs. While imperceptible to the human eye, such perturbations cause a significant drop in classification accuracy, as demonstrated by many studies of neural network security. Given the strengths and weaknesses of neural networks, as well as the variety of their applications, developing methods to improve their robustness against adversarial attacks has become an urgent task. In this article, the authors propose a "minimalistic" attacker model for the identification unit of a decision support system, adaptive recommendations for enhancing its security, and a set of protective methods. The suggested methods yield a significant increase in classification accuracy under adversarial attacks, as demonstrated by the experiment described in this article.
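The small-magnitude perturbations mentioned in the abstract are commonly illustrated with the fast gradient sign method (FGSM); the paper itself does not specify which attack it uses, so the sketch below is a generic PyTorch illustration, not the authors' method, and the `model` and `epsilon` values are placeholder assumptions.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Craft an FGSM adversarial example: shift the input by an
    epsilon-bounded step in the direction of the loss gradient's sign."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Small-magnitude perturbation: imperceptible for small epsilon,
    # yet often enough to flip the classifier's prediction.
    perturbation = epsilon * x_adv.grad.sign()
    return (x_adv + perturbation).clamp(0.0, 1.0).detach()
```

For a small epsilon (e.g., 0.03 on inputs scaled to [0, 1]), the perturbed image looks unchanged to a human observer while classification accuracy can drop sharply, which is the vulnerability the paper's protective methods target.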
DOI: 10.1109/USBEREIT48449.2020.9117651
Citation Key: monakhov_improving_2020