Biblio

Found 249 results

Filters: Keyword is neural nets
2020-05-08
Katasev, Alexey S., Emaletdinova, Lilia Yu., Kataseva, Dina V..  2018.  Neural Network Spam Filtering Technology. 2018 International Conference on Industrial Engineering, Applications and Manufacturing (ICIEAM). :1—5.

In this paper we address the development of neural network technology for e-mail message classification. We analyze basic spam-filtering methods such as sender IP address analysis, detection of repeated spam messages, and word-based Bayesian filtering. We propose a neural network approach to this problem because neural networks are universal approximators and effective classifiers, and we present a scheme of this technology for classifying e-mail messages as “spam” or “not spam”. An effective neural network spam-filtering model is created within the knowledge discovery in databases framework: a training set is formed, the neural network model is trained, and its adequacy and classification ability are estimated. Experimental studies show that the developed artificial neural network model is adequate and can be used effectively for e-mail message classification. Thus, we demonstrate that an effective neural network model can be used for e-mail filtering and present a scheme for employing such a model as part of an intelligent e-mail spam-filtering system.
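As an editorial illustration of the kind of classifier this abstract describes, the sketch below trains a single-neuron (perceptron) spam filter over bag-of-words features. The vocabulary, training messages, and topology are invented for the sketch; the paper's actual network and data differ.

```python
# Hypothetical vocabulary for the sketch; real filters use thousands of tokens.
VOCAB = ["free", "winner", "meeting", "report", "prize", "agenda"]

def featurize(text):
    # Bag-of-words counts over the fixed vocabulary.
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

def train_perceptron(samples, epochs=20, lr=1.0):
    # Classic perceptron rule: nudge weights toward misclassified samples.
    w = [0.0] * len(VOCAB)
    b = 0.0
    for _ in range(epochs):
        for text, label in samples:  # label: 1 = spam, 0 = not spam
            x = featurize(text)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, text):
    x = featurize(text)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

TRAIN = [
    ("free prize winner", 1),
    ("winner claim free prize", 1),
    ("meeting agenda report", 0),
    ("quarterly report meeting", 0),
]
```

On this linearly separable toy set the perceptron converges in a few epochs; the paper's multi-layer model plays the same role with a far richer feature space.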

Su, Chunmei, Li, Yonggang, Mao, Wen, Hu, Shangcheng.  2018.  Information Network Risk Assessment Based on AHP and Neural Network. 2018 10th International Conference on Communication Software and Networks (ICCSN). :227—231.
This paper analyzes information network security risk assessment methods and models. First, an improved AHP method is proposed for assigning asset values, which effectively solves the problem of risk judgment matrix consistency. Then, neural network technology is used to construct a neural network model corresponding to the risk judgment matrix so as to evaluate the individual risk of assets objectively, and methods for calculating the asset risk value and the system risk value are given. Finally, some application results are presented. Practice proves that the methods are correct and effective; they have been used in information network security risk assessment and provide a good foundation for implementing automatic assessment.
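For context on the consistency problem the abstract mentions, here is a sketch of classical (not the paper's improved) AHP: priority weights are extracted from a pairwise-comparison matrix by power iteration, and the consistency ratio CR = CI / RI decides whether the judgment matrix is acceptable (CR < 0.1 by convention). The example matrix is invented.

```python
def ahp_weights(matrix, iters=200):
    # Power iteration toward the principal eigenvector of the judgment matrix.
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    # Estimate the principal eigenvalue lambda_max from A*w / w.
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)                 # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # random index (Saaty's table)
    return w, ci / ri                        # weights and consistency ratio

# A consistent 3x3 judgment matrix: asset 1 is valued twice asset 2,
# four times asset 3.
M = [[1, 2, 4],
     [0.5, 1, 2],
     [0.25, 0.5, 1]]
```

The paper's contribution, per the abstract, is adjusting the judgment matrix so that this consistency check is satisfied more reliably.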
Zhang, Xu, Ye, Zhiwei, Yan, Lingyu, Wang, Chunzhi, Wang, Ruoxi.  2018.  Security Situation Prediction based on Hybrid Rice Optimization Algorithm and Back Propagation Neural Network. 2018 IEEE 4th International Symposium on Wireless Systems within the International Conferences on Intelligent Data Acquisition and Advanced Computing Systems (IDAACS-SWS). :73—77.
Research on network security situation awareness is currently a hotspot in the field of network security, and using a BP neural network is one of the simplest and most effective methods for security situation prediction. However, the BP neural network still has problems such as a slow convergence rate and a tendency to fall into local extrema. On the other hand, commonly used evolutionary algorithms, such as the genetic algorithm (GA) and particle swarm optimization (PSO), also easily fall into local optima. The hybrid rice optimization (HRO) algorithm is a newly proposed algorithm with strong search ability, which motivates the method proposed in this paper. This article describes in detail a BP-network-based security situation prediction method in which HRO is used to train the connection weights of the BP network. Owing to HRO's global search and fast convergence, the future security situation of the network is predicted and the accuracy of the situation prediction is effectively improved.
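A toy illustration of the motivation, not of HRO itself: on a multimodal surrogate "loss", plain gradient descent (the failure mode attributed to BP training) stalls in a local minimum, while a simple population-based random search, standing in for global optimizers such as HRO, GA, or PSO, finds the better basin.

```python
import random

def loss(w):
    # Multimodal surrogate loss: local minimum near w = +1,
    # global minimum near w = -1 (the +0.3*w term tilts the landscape).
    return (w * w - 1) ** 2 + 0.3 * w

def gradient_descent(w=1.5, lr=0.01, steps=500):
    # Follows the analytic gradient; gets trapped in the nearer basin.
    for _ in range(steps):
        grad = 4 * w * (w * w - 1) + 0.3
        w -= lr * grad
    return w

def population_search(pop_size=30, gens=50, seed=0):
    # Select the best half, mutate, repeat: a generic evolutionary loop.
    rng = random.Random(seed)
    pop = [rng.uniform(-2, 2) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=loss)
        keep = pop[: pop_size // 2]                          # selection
        pop = keep + [w + rng.gauss(0, 0.1) for w in keep]   # mutation
    return min(pop, key=loss)
```

In the paper's setting the scalar `w` would be the full BP weight vector, and HRO's specific selection/mutation operators would replace the generic ones assumed here.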
Katasev, Alexey S., Emaletdinova, Lilia Yu., Kataseva, Dina V..  2018.  Neural Network Model for Information Security Incident Forecasting. 2018 International Conference on Industrial Engineering, Applications and Manufacturing (ICIEAM). :1—5.

This paper describes the application of neural network technology to the problem of forecasting information security incidents. We describe the general problem of analyzing and predicting time series in graphical and mathematical terms, and propose a neural network model to solve it. To forecast a time series of information security incidents, data are generated and described, on the basis of which the neural network is trained. We propose a neural network structure, train the network, and estimate its adequacy and forecasting ability. We show that a neural network model can be used effectively as part of an intelligent forecasting system.
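A minimal sketch of the forecasting setup such abstracts describe: a time series (here, invented incident counts) is converted into sliding (window, next value) training pairs, and a trivial AR(1)-style least-squares model stands in for the paper's neural network.

```python
def make_training_pairs(series, window):
    # Each training example maps `window` past values to the next value,
    # the standard way a time series is fed to a feed-forward network.
    return [(series[i:i + window], series[i + window])
            for i in range(len(series) - window)]

def fit_ar1(series):
    # Closed-form least-squares slope for the model x[t] ~ a * x[t-1].
    num = sum(series[t - 1] * series[t] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

# Invented example: geometrically decaying weekly incident counts.
series = [100 * 0.8 ** t for t in range(10)]
```

The real model replaces `fit_ar1` with a trained network over the same windowed pairs; the preprocessing step is the transferable part.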

Zhang, Shaobo, Shen, Yongjun, Zhang, Guidong.  2018.  Network Security Situation Prediction Model Based on Multi-Swarm Chaotic Particle Optimization and Optimized Grey Neural Network. 2018 IEEE 9th International Conference on Software Engineering and Service Science (ICSESS). :426—429.
The network situation value is an important index for measuring network security. Establishing an effective network situation prediction model can prevent network security incidents and plays an important role in network security protection. Analysis of the network security situation shows that many factors affect it and that the relationships between these factors are complex, so it is difficult to establish accurate mathematical expressions to describe the network situation. Therefore, this paper uses a grey neural network as the prediction model. However, because the grey neural network converges very quickly, it easily falls into local optima and its parameters cannot be further modified, so Multi-Swarm Chaotic Particle Optimization (MSCPO) is used to optimize the key parameters of the grey neural network. By establishing the nonlinear mapping between the influencing factors and the network security situation, the network situation can be predicted and protected.
Hafeez, Azeem, Topolovec, Kenneth, Awad, Selim.  2019.  ECU Fingerprinting through Parametric Signal Modeling and Artificial Neural Networks for In-vehicle Security against Spoofing Attacks. 2019 15th International Computer Engineering Conference (ICENCO). :29—38.
Fully connected autonomous vehicles are more vulnerable than ever to hacking and data theft. The controller area network (CAN) protocol is used for communication between in-vehicle control networks (IVNs). The absence of basic security features in this protocol, such as message authentication, makes it vulnerable to a wide range of attacks, including spoofing attacks. As traditional cybersecurity methods are limited in ensuring the confidentiality and integrity of messages transmitted via CAN, a new technique has emerged, among others, that has proved reliable in fully authenticating CAN messages: at the physical layer of the communication system, messages are fingerprinted to link the received signal to the transmitting electronic control unit (ECU). This paper introduces a new method to implement security in modern electric vehicles. A lumped element model is used to characterize the channel-specific step response; ECU and channel imperfections lead to a unique transfer function, and hence a unique step response, for each transmitter. We use control-system parameters as a feature set, after which a neural network is used for transmitting-node identification for message authentication. A dataset was collected from a CAN network with eight channel lengths and eight ECUs to evaluate the performance of the suggested method. Detection results show that the proposed method achieves a transmitter detection accuracy of 97.4%.
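A hypothetical sketch of the fingerprinting idea (the paper uses a lumped element channel model and a neural network; here a first-order step response and nearest-centroid matching on a single rise-time feature stand in for both, and the time constants are invented):

```python
import math

def step_response(tau, n=50, dt=0.01):
    # Unit-step response of a first-order system; tau models the combined
    # ECU output stage and channel imperfections.
    return [1 - math.exp(-k * dt / tau) for k in range(n)]

def rise_time(resp, dt=0.01):
    # Time to reach 90% of the final value: a simple control-system feature.
    for k, v in enumerate(resp):
        if v >= 0.9:
            return k * dt
    return len(resp) * dt

def identify(resp, known):
    # Match the observed feature against enrolled ECU fingerprints.
    rt = rise_time(resp)
    return min(known, key=lambda ecu: abs(known[ecu] - rt))

ECUS = {"ECU_A": 0.02, "ECU_B": 0.05, "ECU_C": 0.10}  # distinct time constants
FINGERPRINTS = {name: rise_time(step_response(tau)) for name, tau in ECUS.items()}
```

A spoofed frame transmitted by the wrong node would carry the wrong physical fingerprint even if its CAN ID is forged, which is what makes this usable for authentication.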
Lavrova, Daria, Zegzhda, Dmitry, Yarmak, Anastasiia.  2019.  Using GRU neural network for cyber-attack detection in automated process control systems. 2019 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom). :1—3.
This paper presents an approach to detecting information security breaches in automated process control systems (APCS) that consists of forecasting multivariate time series formed from the values of the operating parameters of end-system devices. Using an experimental water treatment model, we compared the forecasting results for the parameters characterizing the operation of the entire model with those for the parameters characterizing the individual subprocesses implemented by the model. A GRU neural network was trained to perform the forecasting.
Saraswat, Pavi, Garg, Kanika, Tripathi, Rajan, Agarwal, Ayush.  2019.  Encryption Algorithm Based on Neural Network. 2019 4th International Conference on Internet of Things: Smart Innovation and Usages (IoT-SIU). :1—5.
Security is one of the most important needs in network communication. Cryptography is a science involving two techniques, encryption and decryption, which enables sensitive and confidential data to be sent over an insecure network. The basic idea of cryptography is to conceal data from unauthenticated users, who could otherwise misuse it. In this paper we combine the auto-associative neural network concept from soft computing with an encryption technique to send data securely over a communication network.
Chaudhary, Anshika, Mittal, Himangi, Arora, Anuja.  2019.  Anomaly Detection using Graph Neural Networks. 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon). :346—350.

Conventional methods for anomaly detection include techniques based on clustering, proximity, or classification. In rapidly growing social networks, outliers or anomalies find ingenious ways to obscure themselves, making conventional techniques inefficient. In this paper, we utilize the ability of deep learning to exploit the topological characteristics of a social network in order to detect anomalies in an email network and a Twitter network. We present a model, a Graph Neural Network, which is applied to social connection graphs to detect anomalies. Combinations of various social network statistical measures are taken into account to study the graph structure and the functioning of the anomalous nodes by employing deep neural networks on them. The hidden layer of the neural network plays an important role in determining the impact of each combination of statistical measures on anomaly detection.

Fu, Tian, Lu, Yiqin, Zhen, Wang.  2019.  APT Attack Situation Assessment Model Based on optimized BP Neural Network. 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). :2108—2111.
In this paper, we first analyze the characteristics of Advanced Persistent Threats (APTs). Based on these characteristics, we establish a BP neural network, optimized by an improved adaptive genetic algorithm, to predict the security risk of nodes in the network, and we calculate the most probable APT attack path. Finally, experiments simulating attacks verify the effectiveness and correctness of the algorithm. The experiments show that this model can effectively evaluate the security situation of the network so that defenders can adopt effective measures against APT attacks, thus improving network security.
Wang, Dongqi, Shuai, Xuanyue, Hu, Xueqiong, Zhu, Li.  2019.  Research on Computer Network Security Evaluation Method Based on Levenberg-Marquardt Algorithms. 2019 International Conference on Communications, Information System and Computer Engineering (CISCE). :399—402.
As is well known, computer network security evaluation is an important link in the field of network security. Traditional computer network security evaluation methods use a BP neural network combined with network security standards for training and simulation. However, because a BP neural network easily falls into local minima during training, the evaluation results are often inaccurate. In this paper, the Levenberg-Marquardt (LM) algorithm is used to optimize the BP neural network: an LM-BP algorithm is constructed and applied to computer network security evaluation. The results show that, compared with the traditional evaluation algorithm, the optimized neural network runs faster and evaluates more accurately.
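For readers unfamiliar with the optimizer named in the title, here is a hedged sketch of the Levenberg-Marquardt update, step = (JᵀJ + μI)⁻¹Jᵀr with μ adapted per iteration, shown on a tiny two-parameter curve fit rather than the paper's BP security-evaluation network (the model, data, and starting point are invented):

```python
import math

def lm_fit(xs, ys, a=1.5, b=0.5, mu=1e-3, iters=100):
    # Fit y = a * exp(b * x) by Levenberg-Marquardt.
    for _ in range(iters):
        r = [y - a * math.exp(b * x) for x, y in zip(xs, ys)]  # residuals
        # Jacobian columns: d(model)/da and d(model)/db.
        ja = [math.exp(b * x) for x in xs]
        jb = [a * x * math.exp(b * x) for x in xs]
        # Damped normal equations (J^T J + mu*I) delta = J^T r, 2x2 by hand.
        h11 = sum(v * v for v in ja) + mu
        h22 = sum(v * v for v in jb) + mu
        h12 = sum(u * v for u, v in zip(ja, jb))
        g1 = sum(u * v for u, v in zip(ja, r))
        g2 = sum(u * v for u, v in zip(jb, r))
        det = h11 * h22 - h12 * h12
        da = (h22 * g1 - h12 * g2) / det
        db = (h11 * g2 - h12 * g1) / det
        new_err = sum((y - (a + da) * math.exp((b + db) * x)) ** 2
                      for x, y in zip(xs, ys))
        if new_err < sum(v * v for v in r):
            a, b, mu = a + da, b + db, mu * 0.5  # accept: act like Gauss-Newton
        else:
            mu *= 2.0                            # reject: act like gradient descent
    return a, b

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.7 * x) for x in xs]  # synthetic data: a = 2, b = 0.7
```

The damping term μ is what lets LM escape the slow, ill-conditioned steps that trap plain backpropagation, which is the abstract's stated reason for adopting it.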
Guan, Chengli, Yang, Yue.  2019.  Research of Computer Network Security Evaluation Based on Backpropagation Neural Network. 2019 IEEE International Conference on Power, Intelligent Computing and Systems (ICPICS). :181—184.
In recent years, virus intrusions and security loopholes in the computer networks of colleges and universities have had serious adverse effects on schools, teachers, and students. To improve the accuracy of computer network security evaluation, a Back Propagation (BP) neural network was built and trained. The evaluation indices and target expectations were determined based on an expert system, with 15 secondary evaluation index values taken as input-layer parameters and the computer network security evaluation level value taken as the output-layer parameter. All data were divided into a learning sample set and a forecasting sample set. The results showed that the designed BP neural network exhibited fast convergence, with a system error of 0.000999654. Furthermore, the network's predicted values were in good agreement with the experimental results, with a correlation coefficient of 0.98723. These results indicate that the network has excellent training accuracy and generalization ability and effectively reflects the performance of the system for computer network security evaluation.
2020-04-20
Lecuyer, Mathias, Atlidakis, Vaggelis, Geambasu, Roxana, Hsu, Daniel, Jana, Suman.  2019.  Certified Robustness to Adversarial Examples with Differential Privacy. 2019 IEEE Symposium on Security and Privacy (SP). :656–672.
Adversarial examples that fool machine learning models, particularly deep neural networks, have been a topic of intense research interest, with attacks and defenses being developed in a tight back-and-forth. Most past defenses are best effort and have been shown to be vulnerable to sophisticated attacks. Recently a set of certified defenses have been introduced, which provide guarantees of robustness to norm-bounded attacks. However these defenses either do not scale to large datasets or are limited in the types of models they can support. This paper presents the first certified defense that both scales to large networks and datasets (such as Google's Inception network for ImageNet) and applies broadly to arbitrary model types. Our defense, called PixelDP, is based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired privacy formalism, that provides a rigorous, generic, and flexible foundation for defense.
2020-04-17
Jang, Yunseok, Zhao, Tianchen, Hong, Seunghoon, Lee, Honglak.  2019.  Adversarial Defense via Learning to Generate Diverse Attacks. 2019 IEEE/CVF International Conference on Computer Vision (ICCV). :2740—2749.

With the remarkable success of deep learning, Deep Neural Networks (DNNs) have become dominant tools in various machine learning domains. Despite this success, however, DNNs have been found to be surprisingly vulnerable to malicious attacks: adding small, perceptually indistinguishable perturbations to the data can easily degrade classification performance. Adversarial training is an effective defense strategy for training a robust classifier. In this work, we propose to utilize a generator to learn how to create adversarial examples. Unlike existing approaches that create a one-shot perturbation with a deterministic generator, we propose a recursive and stochastic generator that produces much stronger and more diverse perturbations, comprehensively revealing the vulnerability of the target classifier. Our experimental results on the MNIST and CIFAR-10 datasets show that a classifier adversarially trained with our method is more robust against various white-box and black-box attacks.

2020-04-13
Wu, Qiong, Zhang, Haitao, Du, Peilun, Li, Ye, Guo, Jianli, He, Chenze.  2019.  Enabling Adaptive Deep Neural Networks for Video Surveillance in Distributed Edge Clouds. 2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS). :525–528.
In the field of video surveillance, the demands of intelligent video analysis services based on Deep Neural Networks (DNNs) have grown rapidly. Although most existing studies focus on the performance of DNNs pre-deployed at remote clouds, the network delay caused by computation offloading from network cameras to remote clouds is usually long and sometimes unbearable. Edge computing can enable rich services and applications in close proximity to the network cameras. However, owing to the limited computing resources of distributed edge clouds, it is challenging to satisfy low latency and high accuracy requirements for all users, especially when the number of users surges. To address this challenge, we first formulate the intelligent video surveillance task scheduling problem that minimizes the average response time while meeting the performance requirements of tasks and prove that it is NP-hard. Second, we present an adaptive DNN model selection method to identify the most effective DNN model for each task by comparing the feature similarity between the input video segment and pre-stored training videos. Third, we propose a two-stage delay-aware graph searching approach that presents a beneficial trade-off between network delay and computing delay. Experimental results demonstrate the efficiency of our approach.
2020-04-10
Newaz, AKM Iqtidar, Sikder, Amit Kumar, Rahman, Mohammad Ashiqur, Uluagac, A. Selcuk.  2019.  HealthGuard: A Machine Learning-Based Security Framework for Smart Healthcare Systems. 2019 Sixth International Conference on Social Networks Analysis, Management and Security (SNAMS). :389—396.
The integration of the Internet of Things and pervasive computing into medical devices has made the modern healthcare system “smart.” Today, the function of the healthcare system is not limited to treating patients. With the help of implantable medical devices and wearables, a Smart Healthcare System (SHS) can continuously monitor a patient's vital signs and automatically detect and prevent critical medical conditions. However, these increasing functionalities raise several security concerns, and attackers can exploit the SHS in numerous ways: they can impede its normal function, inject false data to change vital signs, and tamper with a medical device to change the outcome of a medical emergency. In this paper, we propose HealthGuard, a novel machine-learning-based security framework for detecting malicious activities in an SHS. HealthGuard observes the vital signs of the different connected devices of an SHS and correlates them to understand changes in the patient's body functions, distinguishing benign from malicious activities. HealthGuard utilizes four different machine-learning-based detection techniques (Artificial Neural Network, Decision Tree, Random Forest, k-Nearest Neighbor) to detect malicious activities. We trained HealthGuard with data collected from eight different smart medical devices for twelve benign events, including seven normal user activities and five disease-affected events. Furthermore, we evaluated the performance of HealthGuard against three different malicious threats. Our extensive evaluation shows that HealthGuard is an effective security framework for SHSs, with an accuracy of 91% and an F1 score of 90%.
Robic-Butez, Pierrick, Win, Thu Yein.  2019.  Detection of Phishing websites using Generative Adversarial Network. 2019 IEEE International Conference on Big Data (Big Data). :3216—3221.

Phishing is typically deployed as an attack vector in the initial stages of a hacking endeavour. Due to its low-risk, high-reward nature it has seen widespread adoption, and detecting it has become a challenge in recent times. This paper proposes a novel means of detecting phishing websites using a Generative Adversarial Network. Taking into account the internal structure and external metadata of a website, the proposed approach uses a generator network that generates both legitimate and synthetic phishing features to train a discriminator network. The latter determines whether the features belong to normal or phishing websites, and improves its detection accuracy based on the classification error. The proposed approach is evaluated on two different phishing datasets and achieves a detection accuracy of up to 94%.

2020-04-03
Song, Liwei, Shokri, Reza, Mittal, Prateek.  2019.  Membership Inference Attacks Against Adversarially Robust Deep Learning Models. 2019 IEEE Security and Privacy Workshops (SPW). :50—56.
In recent years, the research community has increasingly focused on understanding the security and privacy challenges posed by deep learning models. However, the security domain and the privacy domain have typically been considered separately. It is thus unclear whether the defense methods in one domain will have any unexpected impact on the other domain. In this paper, we take a step towards enhancing our understanding of deep learning models when the two domains are combined together. We do this by measuring the success of membership inference attacks against two state-of-the-art adversarial defense methods that mitigate evasion attacks: adversarial training and provable defense. On the one hand, membership inference attacks aim to infer an individual's participation in the target model's training dataset and are known to be correlated with target model's overfitting. On the other hand, adversarial defense methods aim to enhance the robustness of target models by ensuring that model predictions are unchanged for a small area around each sample in the training dataset. Intuitively, adversarial defenses may rely more on the training dataset and be more vulnerable to membership inference attacks. By performing empirical membership inference attacks on both adversarially robust models and corresponding undefended models, we find that the adversarial training method is indeed more susceptible to membership inference attacks, and the privacy leakage is directly correlated with model robustness. We also find that the provable defense approach does not lead to enhanced success of membership inference attacks. However, this is achieved by significantly sacrificing the accuracy of the model on benign data points, indicating that privacy, security, and prediction accuracy are not jointly achieved in these two approaches.
2020-03-27
Tamura, Keiichi, Omagari, Akitada, Hashida, Shuichi.  2019.  Novel Defense Method against Audio Adversarial Example for Speech-to-Text Transcription Neural Networks. 2019 IEEE 11th International Workshop on Computational Intelligence and Applications (IWCIA). :115–120.
With the development of deep learning, securing neural networks against vulnerabilities has become one of the most urgent research topics in the field. There are many types of security countermeasures; adversarial examples and their defense methods, in particular, have been well studied in recent years. An adversarial example is designed to make a neural network misclassify or produce inaccurate output. Audio adversarial examples are a type of adversarial example whose main target of attack is a speech-to-text transcription neural network. In this study, we propose a new defense method against audio adversarial examples for speech-to-text transcription neural networks. Because it is difficult to determine whether input waveform data representing a voice is an audio adversarial example, the main framework of the proposed defense method is based on a sandbox approach. To evaluate the proposed defense method, we used actual audio adversarial examples created for Deep Speech, a speech-to-text transcription neural network. We confirmed that our defense method can identify audio adversarial examples and thereby protect speech-to-text systems.
Liu, Yingying, Wang, Yiwei.  2019.  A Robust Malware Detection System Using Deep Learning on API Calls. 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). :1456–1460.
With the development of technology, massive amounts of malware have become a major challenge to computer security. In our work, we implemented a malware detection system using deep learning on API calls. By means of the Cuckoo sandbox, we extracted the API call sequences of malicious programs, and by filtering and ordering redundant API calls we obtained the valid API sequences. We evaluated BLSTM against GRU, BGRU, LSTM, and SimpleRNN on a large dataset of 21,378 samples. The experimental results demonstrate that BLSTM has the best performance for malware detection, reaching an accuracy of 97.85%.
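A sketch of the preprocessing step the abstract mentions ("filtering and ordering the redundant API calls"), as commonly done before feeding API-call traces to a recurrent model. The paper does not give its exact filtering rules, so consecutive-duplicate removal and on-the-fly integer encoding are assumptions here, and the trace is invented.

```python
def squash_repeats(calls):
    # Collapse runs of identical consecutive API calls; sandboxed malware
    # often loops on one call, which bloats the sequence without adding signal.
    out = []
    for c in calls:
        if not out or out[-1] != c:
            out.append(c)
    return out

def encode(calls, vocab):
    # Map API names to integer ids, growing the vocabulary on the fly,
    # the usual input format for an embedding layer.
    return [vocab.setdefault(c, len(vocab)) for c in calls]
```

The cleaned, encoded sequences are what a BLSTM (or GRU/LSTM baseline) would then consume.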
2020-03-16
Ablaev, Farid, Andrianov, Sergey, Soloviev, Aleksey.  2019.  Quantum Electronic Generator of Random Numbers for Information Security in Automatic Control Systems. 2019 International Russian Automation Conference (RusAutoCon). :1–5.

The application of random numbers to the information security of data, communication lines, computer units, and automated driving systems is considered. The possibilities for building quantum random number generators and existing solutions for acquiring sufficiently random sequences are analyzed. The authors devised a method for creating quantum generators on the basis of semiconductor electronic components. An electron-quantum generator based on electron tunneling is experimentally demonstrated, and it is shown to create random sequences of a high security level that satisfy the well-known NIST statistical tests (P-Value > 0.9). The generator can be used to form both closed and open cryptographic keys in computer systems and other platforms, and has great potential for realizing random walks and probabilistic computing on the basis of neural nets, as well as for other IT problems.
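For illustration only, here is the first test of the NIST SP 800-22 suite (the monobit frequency test), the kind of statistical check behind the abstract's "P-Value > 0.9" claim; the bit sequences are invented.

```python
import math

def monobit_p_value(bits):
    # NIST SP 800-22 frequency (monobit) test: count +1 for each 1-bit and
    # -1 for each 0-bit, then compare the normalized sum to a half-normal
    # distribution. A p-value >= 0.01 passes at NIST's default significance.
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

balanced = [0, 1] * 500          # perfectly balanced 1000-bit sequence
biased = [1] * 900 + [0] * 100   # heavily biased sequence
```

Passing this single test is necessary but far from sufficient; the full suite applies a battery of such tests to the generator's output.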

2020-03-09
ELMAARADI, Ayoub, LYHYAOUI, Abdelouahid, CHAIRI, IKRAM.  2019.  New security architecture using hybrid IDS for virtual private clouds. 2019 Third International Conference on Intelligent Computing in Data Sciences (ICDS). :1–5.

We are currently witnessing a real digital revolution in which companies prefer to use cloud computing because of its ability to offer the simplest way to deploy needed services. However, this digital transformation has generated different security challenges, such as privacy vulnerabilities in the face of cyber-attacks. In this work we present a new architecture for a hybrid intrusion detection system (IDS) for virtual private clouds. This architecture combines network-based and host-based intrusion detection systems to overcome the limitations of each, covering the case in which an intruder bypasses the network-based IDS and gains access to a host, with the intent of enhancing security in private cloud environments. We propose to use a non-traditional mechanism in the design of the IDS (the detection engine): machine learning (ML) algorithms can be used to build the IDS in both parts, detecting malicious traffic in the network-based part as an additional layer of network security, and detecting anomalies in the host-based part to provide more privacy and confidentiality in the virtual machine. It is not within the scope of this paper to train an artificial neural network (ANN); we only propose a new scheme for an ANN-based IDS. In future work we will present all details of the architecture and parameters of the ANN, as well as the results of real experiments.

2020-03-02
Vatanparvar, Korosh, Al Faruque, Mohammad Abdullah.  2019.  Self-Secured Control with Anomaly Detection and Recovery in Automotive Cyber-Physical Systems. 2019 Design, Automation Test in Europe Conference Exhibition (DATE). :788–793.

Cyber-Physical Systems (CPS) are growing in complexity and functionality, and multidisciplinary interactions with physical systems are key to CPS. However, sensors, actuators, controllers, and wireless communications are prone to attacks that compromise the system. Machine learning models have been utilized in automotive controllers to learn, estimate, and provide the required intelligence in the control process. However, their estimates are also vulnerable to attacks from the physical or cyber domains, and they have shown unreliable predictions against unknown biases resulting from the modeling. In this paper, we propose a novel control design using conditional generative adversarial networks that enables a self-secured controller to capture the normal behavior of the control loop and the physical system, detect anomalies, and recover from them. We tested our control design on a self-secured battery management system (BMS) by driving a Nissan Leaf S on standard driving cycles while under various attacks. Compared to the state of the art, the self-secured BMS could detect the attacks with 83% accuracy and an average recovery estimation error of 21%, improvements of 28% and 8%, respectively.

2020-02-26
Sabbagh, Majid, Gongye, Cheng, Fei, Yunsi, Wang, Yanzhi.  2019.  Evaluating Fault Resiliency of Compressed Deep Neural Networks. 2019 IEEE International Conference on Embedded Software and Systems (ICESS). :1–7.

Model compression is considered an effective way to reduce the implementation cost of deep neural networks (DNNs) while maintaining inference accuracy. Many recent studies have developed efficient model compression algorithms and accelerator implementations on various devices. Protecting the integrity of DNN inference against fault attacks is important for diverse deep-learning-enabled applications, yet there has been little research investigating the fault resilience of DNNs and the impact of model compression on fault tolerance. In this work, we consider faults on different data types and develop a simulation framework for understanding the fault resilience of compressed DNN models as compared to uncompressed models. We perform our experiments on two common DNNs, LeNet-5 and VGG16, and evaluate their fault resilience under different types of compression. The results show that binary quantization can effectively increase the fault resilience of both LeNet-5 and VGG16 by 10,000x. Finally, we propose software and hardware mitigation techniques to further increase the fault resilience of DNN models.

Sokolov, S. A., Iliev, T. B., Stoyanov, I. S..  2019.  Analysis of Cybersecurity Threats in Cloud Applications Using Deep Learning Techniques. 2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). :441–446.

In this paper we present machine learning techniques operating on monitoring data for the analysis of cybersecurity threats in cloud environments that host enterprise applications from the fields of telecommunications and IoT. Cybersecurity is a term describing techniques for protecting computers, telecommunications equipment, applications, environments, and data, and in modern networks an enormous volume of generated traffic can be observed. We apply several techniques, such as Support Vector Machines, neural networks, and deep neural networks, in combination for the analysis of monitoring data, and we propose an approach for combining classifier results based on performance weights. The proposed approach delivers promising results comparable to existing algorithms and is suitable for enterprise-grade security applications.
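A minimal sketch of a performance-weighted combination like the one the abstract proposes. The weighting scheme here (normalized validation accuracy) is an assumption, and the SVM/NN/DNN base classifiers are replaced by their already-computed labels.

```python
def weighted_vote(predictions, accuracies):
    # predictions: {classifier_name: predicted label}
    # accuracies:  {classifier_name: validation accuracy, used as its weight}
    scores = {}
    total = sum(accuracies.values())
    for clf, label in predictions.items():
        scores[label] = scores.get(label, 0.0) + accuracies[clf] / total
    return max(scores, key=scores.get)
```

With such a rule, a highly accurate classifier can be outvoted only when the weaker classifiers jointly carry more validated weight, which is the point of weighting by measured performance rather than counting heads.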