Biblio

Found 431 results

Filters: Keyword is Neural networks
2022-02-07
Lakhdhar, Yosra, Rekhis, Slim.  2021.  Machine Learning Based Approach for the Automated Mapping of Discovered Vulnerabilities to Adversial Tactics. 2021 IEEE Security and Privacy Workshops (SPW). :309–317.
To defend networks against security attacks, cyber defenders have to identify vulnerabilities that could be exploited by an attacker and fix them. However, vulnerabilities are constantly evolving and their number is rising. In addition, the resources (i.e., time and cost) required to patch all identified vulnerabilities and update the affected assets are not always affordable. For these reasons, the defender needs a set of metrics that can be used to automatically map newly discovered vulnerabilities to potential attack tactics. Such a mapping allows security solutions to respond inline to vulnerability exploitation attempts by selecting and prioritizing suitable response strategies. In this work, we provide a multilabel classification approach to automatically map a detected vulnerability to the MITRE Adversarial Tactics that could be used by the attacker. The proposed approach helps cyber defenders prioritize their defense strategies, ensure a rapid and efficient investigation process, and manage newly detected vulnerabilities effectively. We evaluate a set of machine learning algorithms (BinaryRelevance, LabelPowerset, ClassifierChains, MLKNN, BRKNN, RAkELd, NLSP, and Neural Networks) and find that ClassifierChains with a RandomForest classifier is the best method in our experiments.
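As a rough illustration of the classifier-chain setup the abstract reports working best, the sketch below wires scikit-learn's ClassifierChain around a RandomForest over TF-IDF features of vulnerability descriptions. The toy descriptions, the three-column tactic matrix, and the feature choice are assumptions for illustration only, not the paper's data or pipeline.

```python
# Hypothetical sketch: map vulnerability descriptions to a small set of
# ATT&CK-style tactics with a ClassifierChain over RandomForest.
# Features, labels, and example texts are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import ClassifierChain
import numpy as np

descriptions = [
    "Buffer overflow allows remote code execution via crafted packet",
    "Improper authentication allows privilege escalation on local host",
]
# Multilabel indicator matrix over a toy tactic set, e.g.
# columns = [Initial Access, Privilege Escalation, Execution]
Y = np.array([[1, 0, 1],
              [0, 1, 0]])

vectorizer = TfidfVectorizer(max_features=5000)
X = vectorizer.fit_transform(descriptions)

# Chain of binary RandomForest classifiers; each link also sees the
# predictions of the previous links, capturing tactic co-occurrence.
chain = ClassifierChain(RandomForestClassifier(n_estimators=100, random_state=0))
chain.fit(X, Y)

new_vuln = vectorizer.transform(["SQL injection enables data exfiltration"])
print(chain.predict(new_vuln))  # one row of 0/1 tactic predictions
```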
Or-Meir, Ori, Cohen, Aviad, Elovici, Yuval, Rokach, Lior, Nissim, Nir.  2021.  Pay Attention: Improving Classification of PE Malware Using Attention Mechanisms Based on System Call Analysis. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Malware poses a threat to computing systems worldwide, and security experts work tirelessly to detect and classify malware as accurately and quickly as possible. Since malware can use evasion techniques to bypass static analysis and security mechanisms, dynamic analysis methods are more useful for accurately analyzing the behavioral patterns of malware. Previous studies showed that malware behavior can be represented by sequences of executed system calls and that machine learning algorithms can leverage such sequences for the task of malware classification (a.k.a. malware categorization). Accurate malware classification is helpful for malware signature generation and is thus beneficial to antivirus vendors; this capability is also valuable to organizational security experts, enabling them to mitigate malware attacks and respond to security incidents. In this paper, we propose an improved methodology for malware classification based on analyzing sequences of system calls invoked by malware in a dynamic analysis environment. We show that adding an attention mechanism to an LSTM model improves accuracy for the task of malware classification, outperforming the state-of-the-art algorithm by up to 6%. We also show that the transformer architecture can be used to analyze very long sequences with significantly lower time complexity for training and prediction. Our proposed method can serve as the basis for a decision support system that helps security experts with the task of malware categorization.
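A minimal PyTorch sketch of the general idea (an attention layer pooling LSTM states over a system-call sequence before classification); the vocabulary size, layer dimensions, and number of malware families are illustrative assumptions, not the authors' configuration.

```python
# Illustrative sketch: attention-pooled LSTM over system-call ID sequences
# for malware family classification. All dimensions are assumptions.
import torch
import torch.nn as nn

class AttnLSTMClassifier(nn.Module):
    def __init__(self, vocab_size=400, embed_dim=64, hidden_dim=128, n_classes=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)       # scores each time step
        self.fc = nn.Linear(hidden_dim, n_classes)

    def forward(self, syscall_ids):                # (batch, seq_len)
        h, _ = self.lstm(self.embed(syscall_ids))  # (batch, seq_len, hidden)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # (batch, seq_len)
        context = (weights.unsqueeze(-1) * h).sum(dim=1)          # attention-weighted sum
        return self.fc(context)

model = AttnLSTMClassifier()
dummy_batch = torch.randint(0, 400, (4, 200))      # 4 traces of 200 system calls
logits = model(dummy_batch)                         # (4, 8) class scores
```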
2022-02-03
Huang, Chao, Luo, Wenhao, Liu, Rui.  2021.  Meta Preference Learning for Fast User Adaptation in Human-Supervisory Multi-Robot Deployments. 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). :5851—5856.
As multi-robot systems (MRS) are widely used in tasks such as natural disaster response and social security, people expect an MRS to be ubiquitous and easy for a general user to operate without heavy training. However, humans have varying preferences for balancing task performance against safety, imposing different requirements on MRS control. Failing to comply with these preferences makes an MRS difficult to operate and decreases human willingness to use it. Therefore, to improve social acceptance as well as performance, there is an urgent need to adjust MRS behaviors according to human preferences before triggering human corrections, which increase cognitive load. In this paper, a novel Meta Preference Learning (MPL) method was developed to enable an MRS to quickly adapt to user preferences. MPL, based on a meta learning mechanism, can quickly assess human preferences from limited instructions; then, a neural network based preference model adjusts MRS behaviors for preference adaptation. To validate the method's effectiveness, a task scenario "An MRS searches victims in an earthquake disaster site" was designed; 20 human users were involved to identify preferences as "aggressive", "medium", or "reserved"; based on user guidance and domain knowledge, about 20,000 preferences were simulated to cover different operations related to "task quality", "task progress", and "robot safety". The effectiveness of MPL in preference adaptation was validated by the reduced duration and frequency of human interventions.
2022-01-31
Kumová, Věra, Pilát, Martin.  2021.  Beating White-Box Defenses with Black-Box Attacks. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Deep learning has achieved great results in the last decade; however, it is sensitive to so-called adversarial attacks: small perturbations of the input that cause the network to classify incorrectly. In recent years, a number of attacks and defenses against such attacks have been described. Most of the defenses, however, focus on defending against gradient-based attacks. In this paper, we describe an evolutionary attack and show that the adversarial examples produced by the attack have different features than those from gradient-based attacks. We also show that, because of these features, one of the state-of-the-art defenses fails to detect such attacks.
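A toy sketch of the gradient-free, black-box flavor of attack the abstract refers to: a simple (1+1) evolutionary loop that mutates a perturbation and keeps it when it lowers the model's confidence in the true class. The `model_predict` function, the perturbation budget, and the mutation scale are hypothetical placeholders, not the paper's algorithm.

```python
# Toy (1+1) evolutionary black-box attack sketch. The attacker only queries
# model_predict (a hypothetical function returning a probability vector for a
# single input in [0, 1]) and never uses gradients.
import numpy as np

def evolutionary_attack(model_predict, x, true_label, eps=0.05,
                        sigma=0.01, iterations=1000, seed=0):
    rng = np.random.default_rng(seed)
    delta = np.zeros_like(x)
    best_probs = model_predict(np.clip(x + delta, 0.0, 1.0))
    for _ in range(iterations):
        if best_probs.argmax() != true_label:       # misclassified: stop early
            break
        candidate = np.clip(delta + sigma * rng.standard_normal(x.shape), -eps, eps)
        probs = model_predict(np.clip(x + candidate, 0.0, 1.0))
        if probs[true_label] < best_probs[true_label]:  # keep harmful mutations
            delta, best_probs = candidate, probs
    return np.clip(x + delta, 0.0, 1.0)
```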
Zhao, Rui.  2021.  The Vulnerability of the Neural Networks Against Adversarial Examples in Deep Learning Algorithms. 2021 2nd International Conference on Computing and Data Science (CDS). :287–295.
With further development in fields such as computer vision, network security, and natural language processing, deep learning technology has gradually exposed certain security risks. Existing deep learning algorithms cannot effectively describe the essential characteristics of data, making them unable to give correct results in the face of malicious input. Based on the current security threats faced by deep learning, this paper introduces the problem of adversarial examples in deep learning, surveys the existing black-box and white-box attack and defense methods, and classifies them. It briefly describes recent applications of adversarial examples in different scenarios, compares several defense technologies against adversarial examples, and finally summarizes the open problems in this research field and prospects for its future development. The paper introduces the common white-box attack methods in detail and further compares the similarities and differences between black-box and white-box attacks. Correspondingly, the authors also introduce the defense methods and analyze their performance against black-box and white-box attacks.
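To make the white-box side of this distinction concrete, here is a minimal FGSM sketch (a canonical white-box, gradient-sign attack of the kind such surveys cover); the epsilon value and the cross-entropy loss are illustrative choices, and the model and label tensors are assumed inputs.

```python
# Minimal FGSM sketch: perturb each input in a batch by epsilon in the
# direction of the sign of the loss gradient. Epsilon is an assumption.
import torch
import torch.nn.functional as F

def fgsm(model, x, labels, eps=0.03):
    x = x.clone().detach().requires_grad_(True)   # (batch, C, H, W) in [0, 1]
    loss = F.cross_entropy(model(x), labels)
    loss.backward()
    return torch.clamp(x + eps * x.grad.sign(), 0.0, 1.0).detach()
```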
2022-01-25
Islam, Muhammad Aminul, Veal, Charlie, Gouru, Yashaswini, Anderson, Derek T..  2021.  Attribution Modeling for Deep Morphological Neural Networks using Saliency Maps. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Mathematical morphology has been explored in deep learning architectures, as a substitute for convolution, for problems like pattern recognition and object detection. One major advantage of using morphology in deep learning is the utility of morphological erosion and dilation. Specifically, these operations naturally embody interpretability due to their underlying connections to the analysis of geometric structures. While the use of these operations results in explainable learned filters, morphological deep learning lacks attribution modeling, i.e., a paradigm to specify what areas of the original observed image are important. Furthermore, convolution-based deep learning has achieved attribution modeling through a variety of neural eXplainable Artificial Intelligence (XAI) paradigms (e.g., saliency maps, integrated gradients, guided backpropagation, and gradient class activation mapping). Thus, a problem for morphology-based deep learning is that these XAI methods do not have a morphological interpretation due to the differences in the underlying mathematics. Herein, we extend the neural XAI paradigm of saliency maps to morphological deep learning, and by doing so, provide an example of morphological attribution modeling. Furthermore, our qualitative results highlight some advantages of using morphological attribution modeling.
Marulli, Fiammetta, Balzanella, Antonio, Campanile, Lelio, Iacono, Mauro, Mastroianni, Michele.  2021.  Exploring a Federated Learning Approach to Enhance Authorship Attribution of Misleading Information from Heterogeneous Sources. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Authorship Attribution (AA) is currently applied in several applications, including fraud detection and anti-plagiarism checks; this task can leverage stylometry and Natural Language Processing techniques. In this work, we explored some strategies to enhance the performance of an AA task for the automatic detection of false and misleading information (e.g., fake news). We set up a text classification model for AA based on stylometry, exploiting recurrent deep neural networks, and implemented two learning tasks trained on the same collection of fake and real news, comparing their performances: one is based on a Federated Learning architecture, the other on a centralized architecture. The goal was to discriminate potential fake information from true information when the fake news comes from heterogeneous sources with different styles. Preliminary experiments show that the distributed approach significantly improves recall with respect to the centralized model. As expected, precision was lower in the distributed model. This aspect, coupled with the statistical heterogeneity of the data, represents an open issue that will be further investigated in future work.
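A bare-bones sketch of the federated averaging pattern behind this kind of setup: each simulated source trains a local copy on its own data, and only the model weights are averaged centrally. This is a generic FedAvg-style sketch under assumed placeholders (a tiny linear model and random tensors), not the authors' recurrent AA model or federated configuration.

```python
# Generic FedAvg-style round: clients train locally, the server averages
# weights. The linear model and random data are placeholders only.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data, labels, epochs=1, lr=0.01):
    model = copy.deepcopy(global_model)            # local copy of global weights
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), labels).backward()
        opt.step()
    return model.state_dict()

def federated_average(state_dicts):
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

global_model = nn.Linear(300, 2)                   # stand-in for the AA classifier
clients = [(torch.randn(32, 300), torch.randint(0, 2, (32,))) for _ in range(3)]
states = [local_update(global_model, x, y) for x, y in clients]
global_model.load_state_dict(federated_average(states))
```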
Goh, Gary S. W., Lapuschkin, Sebastian, Weber, Leander, Samek, Wojciech, Binder, Alexander.  2021.  Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution. 2020 25th International Conference on Pattern Recognition (ICPR). :4949–4956.
Integrated Gradients, as an attribution method for deep neural network models, is simple to implement. However, it suffers from noisy explanations, which affects the ease of interpretability. The SmoothGrad technique was proposed to solve the noisiness issue and smoothen the attribution maps of any gradient-based attribution method. In this paper, we present SmoothTaylor as a novel theoretical concept bridging Integrated Gradients and SmoothGrad from the perspective of Taylor's theorem. We apply the methods to the image classification problem, using the ILSVRC2012 ImageNet object recognition dataset and a couple of pretrained image models to generate attribution maps. These attribution maps are empirically evaluated using quantitative measures for sensitivity and noise level. We further propose adaptive noising to optimize the noise scale hyperparameter value. From our experiments, we find that the SmoothTaylor approach, together with adaptive noising, is able to generate better quality saliency maps with less noise and higher sensitivity to the relevant points in the input space, as compared to Integrated Gradients.
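The SmoothGrad idea referenced here is simple to sketch: average gradient-based attributions over several noise-perturbed copies of the input. The snippet below is a generic SmoothGrad-style illustration in PyTorch under assumed noise settings, not the paper's SmoothTaylor formulation or adaptive-noising scheme.

```python
# Generic SmoothGrad-style attribution: average input gradients over noisy
# copies of the input. Sample count and noise scale are assumptions.
import torch

def smooth_grad(model, x, target_class, n_samples=25, noise_std=0.15):
    model.eval()
    total = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = (x + noise_std * torch.randn_like(x)).detach().requires_grad_(True)
        score = model(noisy.unsqueeze(0))[0, target_class]   # class score
        score.backward()
        total += noisy.grad.detach()
    return (total / n_samples).abs()   # smoothed saliency map
```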
2022-01-10
Freas, Christopher B., Shah, Dhara, Harrison, Robert W..  2021.  Accuracy and Generalization of Deep Learning Applied to Large Scale Attacks. 2021 IEEE International Conference on Communications Workshops (ICC Workshops). :1–6.
Distributed denial of service attacks threaten the security and health of the Internet. Remediation relies on up-to-date and accurate attack signatures. Signature-based detection is relatively inexpensive computationally. Yet, signatures are inflexible when small variations exist in the attack vector. Attackers exploit this rigidity by altering their attacks to bypass the signatures. Our previous work revealed a critical problem with conventional machine learning models. Conventional models are unable to generalize on the temporal nature of network flow data to classify attacks. We thus explored the use of deep learning techniques on real flow data. We found that a variety of attacks could be identified with high accuracy compared to previous approaches. We show that a convolutional neural network can be implemented for this problem that is suitable for large volumes of data while maintaining useful levels of accuracy.
Viktoriia, Hrechko, Hnatienko, Hrygorii, Babenko, Tetiana.  2021.  An Intelligent Model to Assess Information Systems Security Level. 2021 Fifth World Conference on Smart Trends in Systems Security and Sustainability (WorldS4). :128–133.
This research presents a model for assessing the cybersecurity maturity level of information systems. The main purpose of the model is to provide comprehensive support for information security specialists and auditors in checking an information system's security level, verifying security policy implementation, and checking compliance with security standards. The model is synthesized from the controls and practices present in ISO 27001 and ISO 27002 and a feed-forward neural network. The methodology described in this paper can also be extended to synthesize a model for a different set of security controls and, consequently, to verify compliance with another security standard or policy. The resulting model describes a real, non-automated process of assessing the maturity of an information system at an acceptable level, and it can be recommended for use in real audits of Information Security Management Systems.
Sallam, Youssef F., Ahmed, Hossam El-din H., Saleeb, Adel, El-Bahnasawy, Nirmeen A., El-Samie, Fathi E. Abd.  2021.  Implementation of Network Attack Detection Using Convolutional Neural Network. 2021 International Conference on Electronic Engineering (ICEEM). :1–6.
The Internet obviously has a major impact on the global economy and on human life every day. This boundless use pushes attackers to attack the data frameworks on the Internet. Web attacks affect the reliability of the Internet and its services. These attacks are classified as User-to-Root (U2R), Remote-to-Local (R2L), Denial-of-Service (DoS), and Probing (Probe). Consequently, securing web frameworks and protecting data are pivotal. The conventional layers of safeguards, such as antivirus scanners, firewalls, and proxies, that are applied to treat security weaknesses are insufficient. So, Intrusion Detection Systems (IDSs) are utilized to monitor computers and data frameworks for security shortcomings. An IDS adds more effectiveness in securing networks against attacks. This paper presents an IDS model based on Deep Learning (DL) with a Convolutional Neural Network (CNN). The model has been evaluated on the NSL-KDD dataset. It has been trained on KDDTrain+ and tested twice, once using KDDTrain+ and once using KDDTest+. The achieved test accuracies are 99.7% and 98.43%, with false alarm rates of 0.002 and 0.02 for the two test scenarios, respectively.
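As a rough illustration of the kind of model described (a CNN over preprocessed NSL-KDD records), a minimal PyTorch sketch follows; treating the 41 NSL-KDD features as a 1-D signal, and the layer configuration, are assumptions rather than the authors' architecture.

```python
# Illustrative 1-D CNN over preprocessed NSL-KDD feature vectors
# (41 features after encoding/normalization). Details are assumptions.
import torch
import torch.nn as nn

class FlowCNN(nn.Module):
    def __init__(self, n_features=41, n_classes=5):   # normal + U2R, R2L, DoS, Probe
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(8),
        )
        self.fc = nn.Linear(32 * 8, n_classes)

    def forward(self, x):                # x: (batch, n_features)
        x = x.unsqueeze(1)               # -> (batch, 1, n_features)
        return self.fc(self.conv(x).flatten(1))

model = FlowCNN()
logits = model(torch.randn(16, 41))      # 16 preprocessed flow records
```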
Jianhua, Xing, Jing, Si, Yongjing, Zhang, Wei, Li, Yuning, Zheng.  2021.  Research on Malware Variant Detection Method Based on Deep Neural Network. 2021 IEEE 5th International Conference on Cryptography, Security and Privacy (CSP). :144–147.
To deal with the increasingly serious threat of malicious code targeting industrial information systems, a domestic secure and controllable operating system and office software were simulated and characterized in a virtual sandbox environment based on virtualization technology in this study. First, a serialization detection scheme based on a convolutional neural network algorithm was improved. Then, the API sequence was modeled and analyzed by the improved convolutional neural network algorithm to mine more locally related information from variant sequences. Finally, variant detection of malicious code was realized. Results showed that this improved method had higher efficiency and accuracy for large-scale malicious code detection and could be applied to malicious code detection in secure and controllable operating systems.
Gong, Jianhu.  2021.  Network Information Security Pipeline Based on Grey Relational Cluster and Neural Networks. 2021 5th International Conference on Computing Methodologies and Communication (ICCMC). :971–975.
A network information security pipeline based on grey relational clustering and neural networks is designed and implemented in this paper. The method is based on the principle that the optimal selected feature set must contain the feature with the highest information entropy gain with respect to the dataset category. First, the feature with the largest information gain is selected from all features as the search starting point, and then the class labels of the sample dataset are fully considered. For better performance, neural networks are employed. A network's learning ability is directly determined by its complexity, and learning general complex problems on large sample data brings a dramatic increase in network scale. The proposed model is validated through simulation.
Agarwal, Shivam, Khatter, Kiran, Relan, Devanjali.  2021.  Security Threat Sounds Classification Using Neural Network. 2021 8th International Conference on Computing for Sustainable Global Development (INDIACom). :690–694.
Sound plays a key role in human life, and therefore sound recognition systems have a great future ahead. Sound classification and identification systems have many applications, such as personal security and critical surveillance. The main aim of this paper is to detect and classify security sound events using surveillance camera systems with an integrated microphone, based on spectrograms generated from the sounds. This enables tracking of security events in cases of emergency. The goal is to propose a security system that accurately detects sound events and thus makes a better security sound event detection system. We propose a convolutional neural network (CNN) to design the security sound detection system so that it can detect a security event from minimal sound. We used spectrogram images to train the CNN. The neural network was trained on different security sound data, which was then used to detect security sound events during the testing phase. We used two datasets in our experiments: a training dataset and a testing dataset. Both datasets contain 3 different sound events (glass breaks, gunshots, and smoke alarms) to train and test the model, respectively. The proposed system yields good accuracy for sound event detection even with minimal available sound data. The designed system achieved accuracies of 92% and 90% using the CNN on the training and testing datasets, respectively. We conclude that the proposed sound classification framework, which uses spectrogram images of sounds, can be used efficiently to develop sound classification and recognition systems.
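A minimal sketch of the spectrogram-then-CNN pipeline the abstract describes, assuming librosa for log-mel spectrogram extraction; the file path, the three class names, and the network shape are illustrative assumptions, not the authors' implementation.

```python
# Illustrative pipeline: audio file -> log-mel spectrogram -> small CNN
# classifier for security sound events. Paths, classes, and layer sizes
# are assumptions for the sketch.
import librosa
import numpy as np
import torch
import torch.nn as nn

def to_log_mel(path, sr=22050, n_mels=64):
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)    # (n_mels, time)

class SoundCNN(nn.Module):
    def __init__(self, n_classes=3):               # glass break, gunshot, smoke alarm
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.fc = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, spec):                        # (batch, 1, n_mels, time)
        return self.fc(self.features(spec).flatten(1))

spec = to_log_mel("example_glass_break.wav")        # hypothetical audio file
batch = torch.tensor(spec, dtype=torch.float32)[None, None]
logits = SoundCNN()(batch)
```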
Zheng, Shiji.  2021.  Network Intrusion Detection Model Based on Convolutional Neural Network. 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). 5:634–637.
Network intrusion detection is an important research direction in network security. The diversification of network intrusion modes and the increasing amount of network data mean that traditional detection methods cannot meet the requirements of the current network environment. The development of deep learning technology and its successful application in the field of artificial intelligence provide a new solution for network intrusion detection. In this paper, a convolutional neural network from deep learning is applied to network intrusion detection, and an intelligent detection model that can actively learn is established. Experiments on the KDD99 dataset show that the model can effectively improve the accuracy and adaptive ability of intrusion detection and offers clear effectiveness and advances over previous methods.
Wang, Xiaoyu, Han, Zhongshou, Yu, Rui.  2021.  Security Situation Prediction Method of Industrial Control Network Based on Ant Colony-RBF Neural Network. 2021 IEEE 2nd International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE). :834–837.
To understand future trends in network security, the field has introduced the concept of NSSA (Network Security Situation Awareness). This paper implements a situation assessment model that uses game theory algorithms to calculate the situation value of attack and defense behavior. After analyzing the ant colony algorithm and the RBF neural network, the defects of the RBF neural network are mitigated using the advantages of the ant colony algorithm, and a situation prediction model based on the ant colony-RBF neural network is realized. Finally, the model was verified experimentally.
2021-12-22
Ortega, Alfonso, Fierrez, Julian, Morales, Aythami, Wang, Zilong, Ribeiro, Tony.  2021.  Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Fair and Explainable Automatic Recruitment. 2021 IEEE Winter Conference on Applications of Computer Vision Workshops (WACVW). :78–87.
Machine learning methods are growing in relevance for biometrics and personal information processing in domains such as forensics, e-health, recruitment, and e-learning. In these domains, white-box (human-readable) explanations of systems built on machine learning methods can become crucial. Inductive Logic Programming (ILP) is a subfield of symbolic AI aimed at automatically learning declarative theories about the processes underlying data. Learning from Interpretation Transition (LFIT) is an ILP technique that can learn a propositional logic theory equivalent to a given black-box system (under certain conditions). The present work takes a first step toward a general methodology for incorporating accurate declarative explanations into classic machine learning by checking the viability of LFIT in a specific AI application scenario: fair recruitment based on an automatic tool generated with machine learning methods for ranking Curricula Vitae that incorporates soft biometric information (gender and ethnicity). We show the expressiveness of LFIT for this specific problem and propose a scheme that can be applicable to other domains.
2021-12-20
Mikhailova, Vasilisa D., Shulika, Maria G., Basan, Elena S., Peskova, Olga Yu..  2021.  Security architecture for UAV. 2021 Ural Symposium on Biomedical Engineering, Radioelectronics and Information Technology (USBEREIT). :0431–0434.
Cyber-physical systems are used in many areas of human life, but people do not pay enough attention to ensuring the security of these systems. Through the resulting security gaps, an attacker can launch an attack that not only shuts down the system but also has a negative impact on the environment. The article examines denial-of-service attacks in ad-hoc networks, conducts experiments, and considers the consequences of their successful execution. As a result of the research, it was determined that an attack can be detected by changes in transmitted traffic and processor load. The cyber-physical system operates on stable algorithms, and even if legitimate changes occur, they can easily be distinguished from those caused by an attack. The article shows that the use of statistical methods for analyzing traffic and other parameters can be justified for detecting an attack. This study shows that each attack affects traffic in its own way and creates unique patterns of behavior change. The experiments were carried out according to a methodology that varied the intensity of the attacks and the normal behavior. The results of this study can further be used to implement a system for detecting attacks on cyber-physical systems. The collected datasets can be used to train the neural network.
2021-11-30
Cultice, Tyler, Ionel, Dan, Thapliyal, Himanshu.  2020.  Smart Home Sensor Anomaly Detection Using Convolutional Autoencoder Neural Network. 2020 IEEE International Symposium on Smart Electronic Systems (iSES) (Formerly iNiS). :67–70.
We propose an autoencoder based approach to anomaly detection in smart grid systems. Data collecting sensors within smart home systems are susceptible to many data corruption issues, such as malicious attacks or physical malfunctions. By applying machine learning to a smart home or grid, sensor anomalies can be detected automatically for secure data collection and sensor-based system functionality. In addition, we tested the effectiveness of this approach on real smart home sensor data collected for multiple years. An early detection of such data corruption issues is essential to the security and functionality of the various sensors and devices within a smart home.
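A compact sketch of the reconstruction-error idea behind this kind of autoencoder detector, here over fixed-length windows of a 1-D sensor signal: the model is trained on normal data, and a window is flagged when its reconstruction error exceeds a threshold. The window length, threshold rule, and architecture are assumptions, not the authors' design.

```python
# Sketch: 1-D convolutional autoencoder over sensor windows; anomalies are
# flagged when reconstruction error exceeds a threshold derived from normal
# data. Window size, threshold, and layers are assumptions.
import torch
import torch.nn as nn

class SensorAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(8, 16, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(8, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):                  # x: (batch, 1, window_len)
        return self.decoder(self.encoder(x))

model = SensorAutoencoder()
window = torch.randn(1, 1, 128)            # one 128-sample sensor window
error = ((model(window) - window) ** 2).mean().item()
threshold = 0.05                           # e.g. a high percentile of errors on normal data
is_anomaly = error > threshold
```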
Li, Gangqiang, Wu, Sissi Xiaoxiao, Zhang, Shengli, Li, Qiang.  2020.  Detect Insider Attacks Using CNN in Decentralized Optimization. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :8758–8762.
This paper studies the security issue of a gossip-based distributed projected gradient (DPG) algorithm, when it is applied for solving a decentralized multi-agent optimization. It is known that the gossip-based DPG algorithm is vulnerable to insider attacks because each agent locally estimates its (sub)gradient without any supervision. This work leverages the convolutional neural network (CNN) to perform the detection and localization of the insider attackers. Compared to the previous work, CNN can learn appropriate decision functions from the original state information without preprocessing through artificially designed rules, thereby alleviating the dependence on complex pre-designed models. Simulation results demonstrate that the proposed CNN-based approach can effectively improve the performance of detecting and localizing malicious agents, as compared with the conventional pre-designed score-based model.
2021-11-29
Yin, Yifei, Zulkernine, Farhana, Dahan, Samuel.  2020.  Determining Worker Type from Legal Text Data Using Machine Learning. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :444–450.
This project addresses a classic employment law question in Canada and elsewhere using a machine learning approach: how do we know whether a worker is an employee or an independent contractor? This is a central issue for self-represented litigants insofar as these two legal categories entail very different rights and employment protections. In this interdisciplinary research study, we collaborated with the Conflict Analytics Lab to develop machine learning models aimed at determining whether a worker is an employee or an independent contractor. We present a number of supervised learning models, including a neural network model, that we implemented using data labeled by law researchers, and we compare the accuracy of the models. Our neural network model achieved an accuracy rate of 91.5%. A critical discussion follows to identify the key features in the data that influence the accuracy of our models and to provide insights about the case outcomes.
Hu, Shengze, He, Chunhui, Ge, Bin, Liu, Fang.  2020.  Enhanced Word Embedding Method in Text Classification. 2020 6th International Conference on Big Data and Information Analytics (BigDIA). :18–22.
For natural language processing (NLP) tasks, word embedding technology affects the accuracy of deep neural network algorithms. Current word embedding methods cannot realize the coexistence of words and phrases in the same vector space; therefore, we propose an enhanced word embedding (EWE) method. Before completing the word embedding, this method introduces a unique sentence reorganization technique to rewrite all the sentences in the original training corpus. Then, the original corpus and the reorganized corpus are merged together as the training corpus of the distributed word embedding model, so as to realize the coexistence of words and phrases in the same vector space. We carried out experiments to demonstrate the effectiveness of the EWE algorithm on three classic benchmark datasets. The results show that the EWE method can significantly improve the classification performance of the CNN model.
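A sketch of the merged-corpus idea only: train one embedding space over both the original sentences and a phrase-rewritten copy so that single words and phrases coexist as vectors. The gensim Phrases pass below is a stand-in for the paper's own sentence-reorganization step, and the toy sentences are assumptions.

```python
# Sketch: merge an original corpus with a phrase-rewritten copy and train one
# Word2Vec model, so words and phrase tokens share the same vector space.
# The Phrases step is a stand-in for the paper's sentence reorganization.
from gensim.models import Word2Vec
from gensim.models.phrases import Phrases, Phraser

original = [
    ["the", "neural", "network", "raised", "an", "intrusion", "alert"],
    ["deep", "neural", "network", "models", "need", "large", "training", "corpora"],
]
phrase_model = Phraser(Phrases(original, min_count=1, threshold=1))
reorganized = [phrase_model[s] for s in original]   # e.g. "neural_network" tokens

merged_corpus = original + reorganized
model = Word2Vec(sentences=merged_corpus, vector_size=100, window=5, min_count=1)
# Both the single words and (if detected) the joined phrase token now live in
# the same vector space and can be looked up via model.wv.
```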
Hou, Xiaolu, Breier, Jakub, Jap, Dirmanto, Ma, Lei, Bhasin, Shivam, Liu, Yang.  2020.  Security Evaluation of Deep Neural Network Resistance Against Laser Fault Injection. 2020 IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA). :1–6.
Deep learning is becoming a basis of decision-making systems in many application domains, such as autonomous vehicles and health systems, where the risk of misclassification can lead to serious consequences. It is necessary to know to what extent Deep Neural Networks (DNNs) are robust against various types of adversarial conditions. In this paper, we experimentally evaluate DNNs implemented on an embedded device by using laser fault injection, a physical attack technique that is mostly used in the security and reliability communities to test the robustness of various systems. We show practical results on four activation functions: ReLU, softmax, sigmoid, and tanh. Our results point out the misclassification possibilities for DNNs achieved by injecting faults into the hidden layers of the network. We evaluate DNNs by using several different attack strategies to show which are the most efficient in terms of misclassification success rates. The outcomes of this work should be taken into account when deploying devices running DNNs in environments where a malicious attacker could tamper with the environmental parameters and bring the device into unstable conditions, resulting in faults.
She, Dongdong, Chen, Yizheng, Shah, Abhishek, Ray, Baishakhi, Jana, Suman.  2020.  Neutaint: Efficient Dynamic Taint Analysis with Neural Networks. 2020 IEEE Symposium on Security and Privacy (SP). :1527–1543.
Dynamic taint analysis (DTA) is widely used by various applications to track information flow during runtime execution. Existing DTA techniques use rule-based taint propagation, which is neither accurate (i.e., it has a high false positive rate) nor efficient (i.e., it incurs large runtime overhead). It is hard to specify taint rules for each operation while covering all corner cases correctly. Moreover, overtaint and undertaint errors can accumulate during the propagation of taint information across multiple operations. Finally, rule-based propagation requires each operation to be inspected before applying the appropriate rules, resulting in prohibitive performance overhead on large real-world applications. In this work, we propose Neutaint, a novel end-to-end approach to track information flow using neural program embeddings. The neural program embeddings model the target program's computations taking place between taint sources and sinks, automatically learning the information flow by observing a diverse set of execution traces. To perform lightweight and precise information flow analysis, we utilize saliency maps to reason about the most influential sources for different sinks. Neutaint constructs two saliency maps, a popular machine learning approach to influence analysis, to summarize both coarse-grained and fine-grained information flow in the neural program embeddings. We compare Neutaint with 3 state-of-the-art dynamic taint analysis tools. The evaluation results show that Neutaint can achieve 68% accuracy, on average, which is a 10% improvement, while reducing runtime overhead by 40× over the second-best taint tool, Libdft, on 6 real-world programs. Neutaint also achieves 61% more edge coverage when used for taint-guided fuzzing, indicating the effectiveness of the identified influential bytes. We also evaluate Neutaint's ability to detect real-world software attacks. The results show that Neutaint can successfully detect different types of vulnerabilities, including buffer/heap/integer overflows, division by zero, etc. Lastly, Neutaint can detect 98.7% of total flows, the highest among all taint analysis tools.
Ma, Chuang, You, Haisheng, Wang, Li, Zhang, Jiajun.  2020.  Intelligent Cybersecurity Situational Awareness Model Based on Deep Neural Network. 2020 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC). :76–83.
In recent years, we have faced a series of online threats. Continuous malicious attacks on networks have directly posed a huge threat to users' well-being and property. In order to deal with the complex security situation in today's network environment, an intelligent network situational awareness model based on deep neural networks is proposed. It uses the nonlinear characteristics of the deep neural network to solve the nonlinear fitting problem and establishes a network security situation assessment system. Guided by the situation indicators output by the assessment system, the main data features are collected according to the characteristics of the network attack methods, and the data is preprocessed. This model designs and trains a 4-layer neural network and then uses the trained deep neural network to understand and analyze network situation data, so as to build a network situation perception model based on a deep neural network. The deep neural network situational awareness model designed in this paper is used in a simulated attack prediction experiment for network situational awareness. At the same time, it is compared with perception models using grey theory and a Support Vector Machine (SVM). The experiments show that this model can perceive changes in the state characteristics of network situation data, establish understanding through learning, and finally achieve accurate prediction of network attacks. Through comparison experiments, the deep neural network situation perception model is shown to be effective, accurate, and superior.