Biblio

Filters: Author is Monakhov, Mikhail
2021-08-17
Monakhov, Yuri, Kuznetsova, Anna, Monakhov, Mikhail, Telny, Andrey, Bednyatsky, Ilya.  2020.  Performance Evaluation of the Modified HTB Algorithm. 2020 Dynamics of Systems, Mechanisms and Machines (Dynamics). :1–5.
In this article, the authors present the results of testing a modified HTB traffic control algorithm in an experimental setup. The algorithm is implemented as a Linux kernel module. An analysis of the experimental results revealed an effect of uneven packet loss across priority classes. In the second part of the article, the authors propose a solution to this problem by applying a distribution scheme for excess tokens, according to which excess class tokens are given to the leaf with the highest priority. The new modification of the algorithm was simulated in the AnyLogic environment. The results of the experimental study demonstrated that dividing the excess tokens of the parent class between daughter classes is less effective in terms of network performance than allocating the excess tokens to a high-priority class when classes compete for tokens. In general, a modification of the HTB algorithm that implements the proposed token surplus distribution scheme yields more consistent delay times for the high-priority class.
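The abstract contrasts two ways of handing out a parent class's surplus tokens. The sketch below is a minimal Python illustration of that contrast, not the authors' kernel-module code; the class names, priority encoding, and surplus value are illustrative assumptions.

```python
# Minimal sketch (assumed names and values, not the authors' implementation)
# contrasting the two surplus-token distribution schemes from the abstract.

from dataclasses import dataclass

@dataclass
class TrafficClass:
    name: str
    priority: int      # lower value = higher priority (assumption)
    tokens: int = 0    # tokens currently available to this class

def split_surplus_evenly(children, surplus):
    """Baseline scheme: divide the parent's surplus among all daughter classes."""
    share = surplus // len(children)
    for child in children:
        child.tokens += share

def give_surplus_to_highest_priority(children, surplus):
    """Proposed scheme: the whole surplus goes to the highest-priority leaf."""
    winner = min(children, key=lambda c: c.priority)
    winner.tokens += surplus

# Illustrative usage with made-up class names and a made-up surplus of 100 tokens.
classes = [TrafficClass("voice", priority=0),
           TrafficClass("video", priority=1),
           TrafficClass("bulk", priority=2)]
give_surplus_to_highest_priority(classes, surplus=100)
print([(c.name, c.tokens) for c in classes])   # "voice" receives all 100 tokens
```

Under the second scheme the high-priority class is never starved of surplus tokens during contention, which is consistent with the more stable delay times the abstract reports for that class.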
2021-05-13
Monakhov, Yuri, Monakhov, Mikhail, Telny, Andrey, Mazurok, Dmitry, Kuznetsova, Anna.  2020.  Improving Security of Neural Networks in the Identification Module of Decision Support Systems. 2020 Ural Symposium on Biomedical Engineering, Radioelectronics and Information Technology (USBEREIT). :571–574.
In recent years, neural networks have been applied to a wide variety of tasks. Deep learning algorithms provide state-of-the-art performance in computer vision, NLP, speech recognition, speaker recognition and many other fields. In spite of this good performance, neural networks have a significant drawback: they have been found to be vulnerable to adversarial examples produced by adding small-magnitude perturbations to inputs. While imperceptible to the human eye, such perturbations lead to a significant drop in classification accuracy, as demonstrated by many studies of neural network security. Considering the pros and cons of neural networks, as well as the variety of their applications, developing methods to improve the robustness of neural networks against adversarial attacks becomes an urgent task. In this article the authors propose a “minimalistic” attacker model of the decision support system identification unit, adaptive recommendations for enhancing security, and a set of protective methods. The suggested methods allow for a significant increase in classification accuracy under adversarial attacks, as demonstrated by the experiment outlined in the article.
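The abstract does not name a specific attack, but the fast gradient sign method (FGSM) is one common way to craft the small-magnitude perturbations it describes. The PyTorch sketch below is an assumed illustration of that mechanism, not the attack or defence used by the authors; the model, labels, and epsilon value are hypothetical.

```python
# Minimal FGSM-style sketch (a common adversarial-example construction, not
# necessarily the attack the authors studied) showing how a small-magnitude
# perturbation of the input can change a classifier's prediction.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return x plus an epsilon-bounded perturbation in the gradient-sign direction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss w.r.t. the true labels
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage: `model` is any trained image classifier, `x` a batch of
# images scaled to [0, 1], and `y` the corresponding true labels.
# x_adv = fgsm_perturb(model, x, y, epsilon=0.03)
# pred_clean = model(x).argmax(dim=1)
# pred_adv = model(x_adv).argmax(dim=1)   # often differs from pred_clean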