Biblio

Found 789 results

Filters: Keyword is learning (artificial intelligence)
2018-05-30
Howard, M., Pfeffer, A., Dalai, M., Reposa, M..  2017.  Predicting Signatures of Future Malware Variants. 2017 12th International Conference on Malicious and Unwanted Software (MALWARE). :126–132.
One of the challenges of malware defense is that the attacker has the advantage over the defender. In many cases, an attack is successful and causes damage before the defender can even begin to prepare a defense. The ability to anticipate attacks and prepare defenses before they occur would be a significant scientific and technological development with practical applications in cybersecurity. In this paper, we present a method to augment machine learning-based malware detection systems by predicting signatures of future malware variants and injecting these variants into the defensive system as a vaccine. Our method uses deep learning to learn patterns of malware evolution from family histories. These evolution patterns are then used to predict future family developments. Our experiments show that a detection system augmented with these future malware signatures is able to detect future malware variants that could not be detected by the detection system alone. In particular, it detected 11 new malware variants without increasing false positives, while providing up to 5 months of lead time between prediction and attack.
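
A minimal, hypothetical sketch of the core idea, not the authors' system: treat a malware family as a time-ordered sequence of binary signature-feature vectors and train a recurrent model to predict the next variant's features. All shapes, hyperparameters, and the random stand-in data below are assumptions.

```python
# Hypothetical sketch: predict the feature vector of the next malware variant in a family
# from the preceding variants. Data, shapes and hyperparameters are illustrative only.
import numpy as np
from tensorflow.keras import layers, models

n_features, seq_len = 64, 4          # assumed signature-feature size and history length

model = models.Sequential([
    layers.Input(shape=(seq_len, n_features)),
    layers.LSTM(128),
    layers.Dense(n_features, activation="sigmoid"),   # probability each feature appears in the next variant
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# X: per-family histories of past variants, y: the feature vector of the following variant
X = np.random.randint(0, 2, size=(256, seq_len, n_features)).astype("float32")
y = np.random.randint(0, 2, size=(256, n_features)).astype("float32")
model.fit(X, y, epochs=3, batch_size=32, verbose=0)

# Thresholded predictions play the role of "future signatures" injected into the detector's training set.
future_signatures = (model.predict(X[:5], verbose=0) > 0.5).astype(int)
```
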
2018-05-09
Zeng, Y. G..  2017.  Identifying Email Threats Using Predictive Analysis. 2017 International Conference on Cyber Security And Protection Of Digital Services (Cyber Security). :1–2.

Malicious emails pose substantial threats to businesses. Whether it is a malware attachment or a URL leading to malware, exploitation or phishing, attackers have been employing emails as an effective way to gain a foothold inside organizations of all kinds. To combat email threats, especially targeted attacks, traditional signature- and rule-based email filtering as well as advanced sandboxing technology both have their own weaknesses. In this paper, we propose a predictive analysis approach that learns the differences between legitimate and malicious emails through static analysis, creates a machine learning model and makes detection and prediction on unseen emails effectively and efficiently. By comparing three different machine learning algorithms, our preliminary evaluation reveals that a Random Forests model performs the best.
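
A minimal sketch, on synthetic stand-in data, of the kind of comparison the abstract describes: static per-email features feeding several classifiers, with Random Forest among them. The features hinted at in the comments are assumptions, not the paper's feature set.

```python
# Illustrative only: compare Random Forest against two other classifiers on static email features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

X = np.random.rand(1000, 20)              # hypothetical static features (URL count, header flags, ...)
y = np.random.randint(0, 2, size=1000)    # 0 = legitimate email, 1 = malicious email

for name, clf in [("random_forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("logistic_regression", LogisticRegression(max_iter=1000)),
                  ("linear_svm", LinearSVC(dual=False))]:
    f1 = cross_val_score(clf, X, y, cv=5, scoring="f1").mean()
    print(f"{name}: F1 = {f1:.3f}")
```
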

Hasan, M. M., Rahman, M. M..  2017.  RansHunt: A Support Vector Machines Based Ransomware Analysis Framework with Integrated Feature Set. 2017 20th International Conference of Computer and Information Technology (ICCIT). :1–7.

Ransomware is one of the fastest-growing types of malware used by cyber-criminals in recent years. This type of malware uses cryptographic technology to encrypt a user's important files and folders, making the computer system unusable, and withholds the decryption key while demanding a ransom from the victim for recovery. Recent ransomware families are very sophisticated and difficult to analyze and detect using static features only; moreover, the latest crypto-ransomware has sandbox- and IDS-evasion capabilities, so static or dynamic analysis alone cannot provide a good solution. In this paper, we present a machine learning based approach that uses an integrated method, a combination of static and dynamic analysis, to detect ransomware. The experimental test samples were taken from almost all ransomware families, including the most recent "WannaCry". The results suggest that the combined analysis detects ransomware with better accuracy than either individual analysis approach. Since ransomware samples share certain "run-time" and "static code" features, this also helps in the early detection of new and similar ransomware variants.
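
A brief sketch of the integrated-feature idea under assumed feature meanings: concatenate static and dynamic feature vectors for each sample and train an SVM on the combined set.

```python
# Sketch of an "integrated feature set": concatenate static and dynamic feature vectors
# per sample and train an SVM. Feature meanings and data below are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

n = 500
X_static  = np.random.rand(n, 30)        # e.g. opcode / imported-API histograms (hypothetical)
X_dynamic = np.random.rand(n, 20)        # e.g. file, registry and network activity counts (hypothetical)
y = np.random.randint(0, 2, size=n)      # 0 = benign, 1 = ransomware

X = np.hstack([X_static, X_dynamic])     # the combined (integrated) feature set
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```
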

Dering, M. L., Tucker, C. S..  2017.  Generative Adversarial Networks for Increasing the Veracity of Big Data. 2017 IEEE International Conference on Big Data (Big Data). :2595–2602.

This work describes how automated data generation integrates into a big data pipeline. A lack of veracity in big data can produce models that are inaccurate or biased by trends in the training data, which can lead to issues that are difficult to overcome as a pipeline matures. This work describes the use of a Generative Adversarial Network to generate sketch data, such as might be used in a human verification task. The generated sketches were verified as recognizable using a crowd-sourcing methodology, which found that they were correctly recognized 43.8% of the time, compared with 87.7% for human-drawn sketches. This method is scalable and can be used to generate realistic data in many domains and to bootstrap a dataset used for training a model prior to deployment.
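
A compact, generic GAN training loop, written here as an illustration of the adversarial setup rather than the paper's network; the architectures, image size, and placeholder data are assumptions.

```python
# Generic GAN sketch: a generator learns to produce sketch-like images that a discriminator
# cannot distinguish from real ones. Shapes and data are placeholders, not the paper's setup.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

latent_dim, img_shape = 64, (28, 28, 1)

generator = models.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(int(np.prod(img_shape)), activation="sigmoid"),
    layers.Reshape(img_shape),
])
discriminator = models.Sequential([
    layers.Input(shape=img_shape),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
g_opt, d_opt = optimizers.Adam(1e-4), optimizers.Adam(1e-4)
bce = tf.keras.losses.BinaryCrossentropy()

real_sketches = np.random.rand(512, *img_shape).astype("float32")  # stand-in for real sketch images
batch = 64

for step in range(200):
    idx = np.random.randint(0, len(real_sketches), batch)
    real = real_sketches[idx]
    noise = tf.random.normal((batch, latent_dim))
    # Discriminator step: real images labelled 1, generated images labelled 0.
    with tf.GradientTape() as tape:
        fake = generator(noise, training=True)
        d_loss = bce(tf.ones((batch, 1)), discriminator(real, training=True)) + \
                 bce(tf.zeros((batch, 1)), discriminator(fake, training=True))
    d_opt.apply_gradients(zip(tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    # Generator step: try to make the discriminator output 1 on generated images.
    with tf.GradientTape() as tape:
        fake = generator(tf.random.normal((batch, latent_dim)), training=True)
        g_loss = bce(tf.ones((batch, 1)), discriminator(fake, training=True))
    g_opt.apply_gradients(zip(tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
```
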

Dali, L., Mivule, K., El-Sayed, H..  2017.  A heuristic attack detection approach using the "least weighted" attributes for cyber security data. 2017 Intelligent Systems Conference (IntelliSys). :1067–1073.

The continuous advance in recent cloud-based computer networks has generated a number of security challenges associated with intrusions in network systems. With the exponential increase in the volume of network traffic data, involving humans in such detection systems is time consuming and a non-trivial problem. Moreover, network traffic data tends to be highly dimensional, comprising numerous features and attributes, making classification challenging and thus susceptible to the curse of dimensionality. Given such scenarios, the need arises for dimensionality reduction and feature selection, combined with machine-learning techniques, in the classification of such data. As a contribution, this paper employs data mining techniques in a cloud-based environment by selecting the attributes and features with the least importance, in terms of weight, for classification. The standard practice is to select features with higher weights while ignoring those with the lowest weights; in this study, we seek to find out whether predictions can be made using the features with the least weight. The motivation is that adversaries use stealth to hide their activities from the obvious. The question then is: can we predict any stealth activity of an adversary using the least-observed attributes? In this particular study, we employ information gain to select the attributes with the lowest weights and then apply machine learning to classify whether a combination of source and destination ports is attacked or not. The motivation of this investigation is to determine whether attributes of least importance can be used to predict whether an attack will occur. Our preliminary results show that even when the source and destination port attributes are used in combination with the least-weighted features, it is possible to classify such network traffic data and predict whether an attack will occur.
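
A short sketch of the "least weighted attributes" experiment, using mutual information as a stand-in for information gain: rank attributes, keep only the lowest-ranked ones, and classify on those. The data and the choice of classifier are illustrative assumptions.

```python
# Rank features by (approximate) information gain and train only on the *least* informative ones.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(2000, 40)              # hypothetical network-traffic features
y = np.random.randint(0, 2, size=2000)    # 1 = attack on the source/destination ports, 0 = normal

scores = mutual_info_classif(X, y, random_state=0)
least_k = np.argsort(scores)[:10]         # indices of the 10 least-weighted attributes
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X[:, least_k], y, cv=5, scoring="accuracy").mean())
```
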

2018-05-02
Gu, P., Khatoun, R., Begriche, Y., Serhrouchni, A..  2017.  Support Vector Machine (SVM) Based Sybil Attack Detection in Vehicular Networks. 2017 IEEE Wireless Communications and Networking Conference (WCNC). :1–6.

Vehicular networks have been drawing special attention in recent years due to their importance in enhancing the driving experience and improving road safety in future smart cities. In the past few years, several security services based on cryptography, PKI and pseudonyms have been standardized by IEEE and ETSI. However, vehicular networks are still vulnerable to various attacks, especially the Sybil attack. In this paper, a Support Vector Machine (SVM) based Sybil attack detection method is proposed. We present three classifiers, based on different SVM kernel functions, that distinguish malicious nodes from benign ones by evaluating the variance in their Driving Pattern Matrices (DPMs). The effectiveness of our proposed solution is evaluated through extensive simulations based on the SUMO simulator and MATLAB. The results show that the proposed detection method can achieve a high detection rate with a low error rate, even under a dynamic traffic environment.
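
A minimal sketch of the kernel comparison on hypothetical per-node features (e.g. variance statistics of the Driving Pattern Matrix); the data and feature construction are invented for illustration.

```python
# Compare SVM classifiers with different kernel functions on per-node features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X = np.random.rand(600, 12)             # hypothetical per-node DPM variance features
y = np.random.randint(0, 2, size=600)   # 1 = Sybil node, 0 = benign node

for kernel in ("linear", "rbf", "poly"):
    acc = cross_val_score(SVC(kernel=kernel, gamma="scale"), X, y, cv=5).mean()
    print(kernel, round(acc, 3))
```
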

Clifford, J., Garfield, K., Towhidnejad, M., Neighbors, J., Miller, M., Verenich, E., Staskevich, G..  2017.  Multi-layer model of swarm intelligence for resilient autonomous systems. 2017 IEEE/AIAA 36th Digital Avionics Systems Conference (DASC). :1–4.

Embry-Riddle Aeronautical University (ERAU) is working with the Air Force Research Lab (AFRL) to develop a distributed multi-layer autonomous UAS planning and control technology for gathering intelligence in Anti-Access Area Denial (A2/AD) environments populated by intelligent adaptive adversaries. These resilient autonomous systems are able to navigate through hostile environments while performing Intelligence, Surveillance, and Reconnaissance (ISR) tasks, and minimizing the loss of assets. Our approach incorporates artificial life concepts, with a high-level architecture divided into three biologically inspired layers: cyber-physical, reactive, and deliberative. Each layer has a dynamic level of influence over the behavior of the agent. Algorithms within the layers act on a filtered view of reality, abstracted in the layer immediately below. Each layer takes input from the layer below, provides output to the layer above, and provides direction to the layer below. Fast-reactive control systems in lower layers ensure a stable environment supporting cognitive function on higher layers. The cyber-physical layer represents the central nervous system of the individual, consisting of elements of the vehicle that cannot be changed such as sensors, power plant, and physical configuration. On the reactive layer, the system uses an artificial life paradigm, where each agent interacts with the environment using a set of simple rules regarding wants and needs. Information is communicated explicitly via message passing and implicitly via observation and recognition of behavior. In the deliberative layer, individual agents look outward to the group, deliberating on efficient resource management and cooperation with other agents. Strategies at all layers are developed using machine learning techniques such as Genetic Algorithm (GA) or NN applied to system training that takes place prior to the mission.

Rjoub, G., Bentahar, J..  2017.  Cloud Task Scheduling Based on Swarm Intelligence and Machine Learning. 2017 IEEE 5th International Conference on Future Internet of Things and Cloud (FiCloud). :272–279.

Cloud computing is an expansion of parallel and distributed computing. The technology of cloud computing is becoming more and more widely used, and one of the fundamental issues in the cloud environment is task scheduling. However, scheduling in cloud environments is a difficult problem since it is basically NP-complete; thus, many variants based on approximation techniques, especially those inspired by Swarm Intelligence (SI), have been proposed. This paper proposes a machine learning algorithm that guides the cloud in choosing a scheduling technique, using multi-criteria decision making to optimize performance. The main contribution of our work is to minimize the makespan of a given task set. The new strategy is simulated using the CloudSim toolkit package, where the impact of the algorithm is checked with different numbers of VMs, varying from 2 to 50, and different task sizes, between 30 bytes and 2700 bytes. Experimental results show that the proposed algorithm reduces the execution time and the makespan by between 7% and 75%, and improves the performance of load-balancing scheduling.
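
A toy sketch of the idea of learning which scheduling heuristic to apply: compute the makespan that two simple heuristics would give for a task batch and train a model to pick the better one from coarse batch features. The heuristics and features are assumptions, not the paper's SI-based schedulers.

```python
# Learn to choose between two placement heuristics by the makespan they would produce.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def round_robin_makespan(task_sizes, n_vms):
    loads = np.zeros(n_vms)                 # identical VMs assumed for simplicity
    for i, t in enumerate(task_sizes):
        loads[i % n_vms] += t
    return loads.max()

def greedy_makespan(task_sizes, n_vms):
    loads = np.zeros(n_vms)
    for t in sorted(task_sizes, reverse=True):
        loads[np.argmin(loads)] += t        # always place on the least-loaded VM
    return loads.max()

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(500):
    n_vms = int(rng.integers(2, 51))                                  # 2..50 VMs, as in the experiments
    tasks = rng.integers(30, 2701, size=int(rng.integers(10, 200)))   # task sizes 30..2700 bytes
    X.append([n_vms, len(tasks), tasks.mean(), tasks.std()])
    y.append(int(greedy_makespan(tasks, n_vms) < round_robin_makespan(tasks, n_vms)))

selector = DecisionTreeClassifier(max_depth=4).fit(X, y)              # 1 = choose the greedy heuristic
```
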

Li, F., Jiang, M., Zhang, Z..  2017.  An adaptive sparse representation model by block dictionary and swarm intelligence. 2017 2nd IEEE International Conference on Computational Intelligence and Applications (ICCIA). :200–203.

Pattern recognition in the sparse representation (SR) framework has been very successful. In this model, a test sample can be represented as a sparse linear combination of training samples by solving a norm-regularized least squares problem. However, the regularization parameter is applied indiscriminately across the whole dictionary. To enhance the group concentration of the coefficients and also to improve sparsity, we propose a new SR model called the adaptive sparse representation classifier (ASRC). In ASRC, a term that strengthens sparse coefficients is added to the objective function. The model is solved by the artificial bee colony (ABC) algorithm with a variable step size to speed up convergence. In addition, a partition strategy for large-scale dictionaries is adopted to lighten each bee's load and remove irrelevant groups. Using different data sets, we empirically demonstrate the properties of the new model and its recognition performance.
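
For orientation, a sketch of a plain sparse-representation classifier: code the test sample over the training dictionary with an l1-regularized solver, then assign the class whose atoms give the smallest reconstruction residual. The paper's adaptive, block-partitioned, ABC-optimized variant is not reproduced here; Lasso stands in for the norm-regularized solver, and the data is random.

```python
# Baseline sparse-representation classifier (SRC), not the paper's ASRC.
import numpy as np
from sklearn.linear_model import Lasso

def src_predict(D, labels, x, alpha=0.01):
    """D: (signal_dim, n_train) dictionary whose columns are training samples,
    labels: class label of each column, x: test sample of length signal_dim."""
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, x)                                      # solves x ~ D @ coef with an l1 penalty on coef
    coef = coder.coef_
    best_class, best_residual = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        residual = np.linalg.norm(x - D[:, mask] @ coef[mask])   # reconstruct with class-c atoms only
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class

# Tiny usage example with random data (illustrative only).
rng = np.random.default_rng(0)
D = rng.normal(size=(50, 40))             # 40 training samples of dimension 50
labels = np.repeat(np.arange(4), 10)      # 4 classes, 10 samples each
print(src_predict(D, labels, D[:, 3] + 0.01 * rng.normal(size=50)))
```
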

2018-05-01
Wang, X., Zhou, S..  2017.  Accelerated Stochastic Gradient Method for Support Vector Machines Classification with Additive Kernel. 2017 First International Conference on Electronics Instrumentation Information Systems (EIIS). :1–6.

Support vector machines (SVMs) have been widely used for classification in machine learning and data mining. However, SVMs face a huge challenge in large-scale classification tasks. Recent progress has enabled the additive-kernel version of SVM to solve such large-scale problems nearly as fast as a linear classifier. This paper proposes a new accelerated mini-batch stochastic gradient descent algorithm for SVM classification with an additive kernel (AK-ASGD). On the one hand, the gradient is approximated by the sum of a scalar polynomial function for each feature dimension; on the other hand, Nesterov's acceleration strategy is used. Experimental results on benchmark large-scale classification data sets show that our proposed algorithm achieves higher testing accuracy and has a faster convergence rate.
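
A minimal sketch of mini-batch SGD with Nesterov acceleration on the linear hinge loss; the additive-kernel feature approximation described in the paper is omitted, and the data in the usage example is synthetic.

```python
# Mini-batch SGD with Nesterov momentum for a regularized linear SVM (hinge loss).
import numpy as np

def nesterov_sgd_svm(X, y, lam=1e-3, lr=0.1, momentum=0.9, epochs=10, batch=32):
    """y in {-1, +1}; returns the weight vector w."""
    n, d = X.shape
    w, v = np.zeros(d), np.zeros(d)
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), n // batch):
            w_look = w + momentum * v                       # Nesterov look-ahead point
            margins = y[idx] * (X[idx] @ w_look)
            active = margins < 1                            # samples violating the margin
            grad = lam * w_look - (y[idx][active, None] * X[idx][active]).sum(0) / len(idx)
            v = momentum * v - lr * grad
            w = w + v
    return w

# Quick usage on synthetic data (labels in {-1, +1}).
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))
y = np.sign(X @ rng.normal(size=10) + 0.1 * rng.normal(size=1000))
w = nesterov_sgd_svm(X, y)
print("training accuracy:", (np.sign(X @ w) == y).mean())
```
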

Kaur, A., Jain, S., Goel, S..  2017.  A Support Vector Machine Based Approach for Code Smell Detection. 2017 International Conference on Machine Learning and Data Science (MLDS). :9–14.

Code smells may be introduced in software due to market rivalry, work pressure and deadlines, improper functioning, or the limited skills and inexperience of software developers. Code smells indicate problems in design or code that make software hard to change and maintain. Detecting code smells can reduce developer effort, resources and the cost of the software. Many researchers have proposed techniques such as DETEX for detecting code smells, but these have limited precision and recall. To overcome these limitations, a new technique named SVMCSD is proposed for the detection of code smells, based on the support vector machine learning technique. Four code smells are targeted, namely God Class, Feature Envy, Data Class and Long Method, and the proposed technique is validated on two open source systems, ArgoUML and Xerces. The accuracy of SVMCSD is found to be better than DETEX in terms of two metrics, precision and recall, when applied to a subset of a system, while on the entire system SVMCSD detects more occurrences of code smells than DETEX.

2018-04-11
Mayadunna, H., Silva, S. L. De, Wedage, I., Pabasara, S., Rupasinghe, L., Liyanapathirana, C., Kesavan, K., Nawarathna, C., Sampath, K. K..  2017.  Improving Trusted Routing by Identifying Malicious Nodes in a MANET Using Reinforcement Learning. 2017 Seventeenth International Conference on Advances in ICT for Emerging Regions (ICTer). :1–8.

Mobile ad-hoc networks (MANETs) are decentralized and self-organizing communication systems that have become pervasive in the current technological landscape. MANETs have become a vital solution for services that need flexible establishment and dynamic, wireless connections, such as military operations, healthcare systems, vehicular networks, mobile conferences, etc. Hence, it is important to estimate the trustworthiness of moving devices. In this research, we propose a model to improve trusted routing in mobile ad-hoc networks by identifying malicious nodes. The proposed system uses a Reinforcement Learning (RL) agent that learns to detect malicious nodes. The work focuses on a MANET running the Ad-hoc On-demand Distance Vector (AODV) protocol. Most existing systems were developed under the assumption of a small network with a limited number of neighbours; by introducing reinforcement learning concepts, this work tries to minimize those limitations. The main objective of the research is to introduce a new model capable of detecting malicious nodes that significantly decrease the performance of a MANET. The malicious behaviour is simulated with black holes that move randomly across the network. After identifying the technology stack and the concepts of RL, the system was designed and implemented, tests were performed, and defects and further improvements were identified. The research concluded that the proposed model provides highly accurate and reliable trust improvement by detecting malicious nodes in a dynamic MANET environment.
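
A heavily simplified, hypothetical sketch of the RL ingredient: a tabular Q-learning agent that learns whether to trust or flag a neighbour from its observed packet-forwarding behaviour. The states, rewards, and black-hole simulation below are invented for illustration and are not the paper's design.

```python
# Tabular Q-learning: decide "trust" vs "flag" from a node's discretized forwarding ratio.
import numpy as np

n_states, actions = 10, ("trust", "flag")          # states = forwarding-ratio buckets
Q = np.zeros((n_states, len(actions)))
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def observe_node(is_black_hole):
    """Return a state: black holes drop most packets, so their forwarding ratio is low."""
    ratio = rng.uniform(0.0, 0.3) if is_black_hole else rng.uniform(0.7, 1.0)
    return min(int(ratio * n_states), n_states - 1)

for episode in range(5000):
    is_black_hole = rng.random() < 0.3
    s = observe_node(is_black_hole)
    a = rng.integers(len(actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
    # Reward: +1 for flagging a black hole or trusting a benign node, -1 otherwise.
    correct = (actions[a] == "flag") == is_black_hole
    r = 1.0 if correct else -1.0
    s_next = observe_node(is_black_hole)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

print(np.argmax(Q, axis=1))    # learned action per forwarding-ratio bucket (1 = flag)
```
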

Hasegawa, K., Yanagisawa, M., Togawa, N..  2017.  Trojan-Feature Extraction at Gate-Level Netlists and Its Application to Hardware-Trojan Detection Using Random Forest Classifier. 2017 IEEE International Symposium on Circuits and Systems (ISCAS). :1–4.

Recently, due to the increase of outsourcing in IC design, it has been reported that malicious third-party vendors often insert hardware Trojans into their ICs, and how to detect them is a strong concern in the IC design process. The features of hardware-Trojan infected nets (or Trojan nets) in ICs often differ from those of normal nets. To classify all the nets in netlists designed by third-party vendors into Trojan ones and normal ones, we have to extract effective Trojan features from Trojan nets. In this paper, we first propose 51 Trojan features which describe Trojan nets in netlists. Based on the importance values obtained from a random forest classifier, we extract the best set of 11 Trojan features out of the 51 which can effectively detect Trojan nets, maximizing the F-measure. Using the 11 extracted Trojan features, the machine-learning based hardware-Trojan classifier achieves up to 100% true positive rate as well as 100% true negative rate on several TrustHUB benchmarks, and obtains an average F-measure of 74.6%, the best value among existing machine-learning-based hardware-Trojan detection methods.
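
A short sketch of the selection step on synthetic data: rank candidate features by random-forest importance and keep the best 11. The 51 gate-level features themselves, and the paper's F-measure-maximizing search, are not reproduced.

```python
# Rank 51 candidate per-net features by random-forest importance, keep the top 11, and re-evaluate.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(2000, 51)             # 51 candidate per-net features (values synthetic)
y = np.random.randint(0, 2, size=2000)   # 1 = Trojan net, 0 = normal net

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
top11 = np.argsort(rf.feature_importances_)[::-1][:11]     # indices of the 11 most important features

f1 = cross_val_score(RandomForestClassifier(n_estimators=300, random_state=0),
                     X[:, top11], y, cv=5, scoring="f1").mean()
print("selected features:", top11, "F-measure:", round(f1, 3))
```
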

Ghanem, K., Aparicio-Navarro, F. J., Kyriakopoulos, K. G., Lambotharan, S., Chambers, J. A..  2017.  Support Vector Machine for Network Intrusion and Cyber-Attack Detection. 2017 Sensor Signal Processing for Defence Conference (SSPD). :1–5.

Cyber-security threats are a growing concern in networked environments, and the development of Intrusion Detection Systems (IDSs) is fundamental to providing an extra level of security. We have developed an unsupervised anomaly-based IDS that uses statistical techniques to conduct the detection process. Despite providing many advantages, anomaly-based IDSs tend to generate a high number of false alarms. Machine Learning (ML) techniques have gained wide interest in intrusion detection tasks. In this work, Support Vector Machine (SVM) is considered an ML technique that could complement the performance of our IDS, providing a second line of detection to reduce the number of false alarms, or serve as an alternative detection technique. We assess the performance of our IDS against one-class and two-class SVMs, using linear and non-linear forms. The results show that the linear two-class SVM generates highly accurate results, and the accuracy of the linear one-class SVM is very comparable while not needing training datasets associated with malicious data. Similarly, the results show that our IDS could benefit from the use of ML techniques to increase its accuracy when analysing datasets comprising non-homogeneous features.
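
A minimal sketch, on synthetic data, contrasting the two settings compared in the paper: a two-class SVM trained on labelled normal and attack samples versus a one-class SVM trained on normal samples only.

```python
# Two-class SVM (needs labelled attack data) vs. one-class SVM (trained on normal data only).
import numpy as np
from sklearn.svm import SVC, OneClassSVM
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(1000, 6))            # hypothetical per-frame features
attack = rng.normal(3, 1, size=(200, 6))
X = np.vstack([normal, attack])
y = np.hstack([np.zeros(1000), np.ones(200)])        # 1 = attack

two_class = SVC(kernel="linear").fit(X, y)
print("two-class accuracy:", accuracy_score(y, two_class.predict(X)))

one_class = OneClassSVM(kernel="rbf", nu=0.05).fit(normal)
pred = (one_class.predict(X) == -1).astype(int)      # map outliers (-1) to the "attack" label
print("one-class accuracy:", accuracy_score(y, pred))
```
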

Gebhardt, D., Parikh, K., Dzieciuch, I., Walton, M., Hoang, N. A. V..  2017.  Hunting for Naval Mines with Deep Neural Networks. OCEANS 2017 - Anchorage. :1–5.

Explosive naval mines pose a threat to ocean and sea faring vessels, both military and civilian. This work applies deep neural network (DNN) methods to the problem of detecting mine-like objects (MLOs) on the seafloor in side-scan sonar imagery. We explored how DNN depth, memory requirements, calculation requirements, and training data distribution affect detection efficacy. A visualization technique (class activation maps) was incorporated to aid a user in interpreting the model's behavior. We found that modest DNN model sizes yielded better accuracy (98%) than very simple DNN models (93%) and a support vector machine (78%). The largest DNN models achieved <1% efficacy increase at the cost of a 17x increase in trainable parameter count and computation requirements. In contrast to DNNs popularized for many-class image recognition tasks, the models for this task require far fewer computational resources (0.3% of the parameters) and are suitable for embedded use within an autonomous unmanned underwater vehicle.
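
A small, assumed CNN for patch-level MLO-versus-background classification; the architecture, input size, and placeholder data are illustrative, not the models evaluated in the paper.

```python
# Tiny binary CNN for sonar patches; global average pooling is also the basis for class activation maps.
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),                 # assumed sonar patch size
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),           # MLO vs. background
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.rand(128, 64, 64, 1).astype("float32") # placeholder patches
y = np.random.randint(0, 2, size=128).astype("float32")
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
print(model.count_params(), "trainable parameters")
```
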

Deliu, I., Leichter, C., Franke, K..  2017.  Extracting Cyber Threat Intelligence from Hacker Forums: Support Vector Machines versus Convolutional Neural Networks. 2017 IEEE International Conference on Big Data (Big Data). :3648–3656.

Hacker forums and other social platforms may contain vital information about cyber security threats. But using manual analysis to extract relevant threat information from these sources is a time consuming and error-prone process that requires a significant allocation of resources. In this paper, we explore the potential of Machine Learning methods to rapidly sift through hacker forums for relevant threat intelligence. Utilizing text data from a real hacker forum, we compared the text classification performance of Convolutional Neural Network methods against more traditional Machine Learning approaches. We found that traditional machine learning methods, such as Support Vector Machines, can yield high levels of performance that are on par with Convolutional Neural Network algorithms.
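
A sketch of the traditional-ML baseline: TF-IDF features with a linear SVM for labelling forum posts as threat-relevant or not. The example posts and labels are invented.

```python
# TF-IDF + linear SVM text classifier for threat-intelligence relevance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

posts = ["selling fresh botnet source, dm me",
         "how do I configure my router firewall",
         "new 0day exploit for sale, bitcoin only",
         "looking for a good linux distro for beginners"]
labels = [1, 0, 1, 0]                         # 1 = relevant threat intelligence (invented labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(posts, labels)
print(clf.predict(["anyone selling exploit kits?"]))
```
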

2018-04-04
Ullah, I., Mahmoud, Q. H..  2017.  A hybrid model for anomaly-based intrusion detection in SCADA networks. 2017 IEEE International Conference on Big Data (Big Data). :2160–2167.

The increase in the complexity and interconnectivity of Supervisory Control and Data Acquisition (SCADA) systems in recent years has exposed SCADA networks to numerous potential vulnerabilities. Several studies have shown that anomaly-based Intrusion Detection Systems (IDS) achieve improved performance in identifying unknown or zero-day attacks. In this paper, we propose a hybrid model for anomaly-based intrusion detection in SCADA networks using a machine learning approach. We first present a robust hybrid model for anomaly-based intrusion detection in SCADA networks, and then present a feature selection model that removes redundant and irrelevant features, since irrelevant features in the dataset can affect modeling power and reduce predictive accuracy. These models were evaluated using an industrial control system dataset developed at the Distributed Analytics and Security Institute, Mississippi State University, Starkville, MS, USA. The experimental results show that our proposed model substantially reduces time and computational complexity and achieves improved accuracy and detection rate. The accuracy of our proposed model was measured as 99.5% on the specific-attack-labeled data.

Zekri, M., Kafhali, S. E., Aboutabit, N., Saadi, Y..  2017.  DDoS attack detection using machine learning techniques in cloud computing environments. 2017 3rd International Conference of Cloud Computing Technologies and Applications (CloudTech). :1–7.

Cloud computing is a revolution in IT technology that provides scalable, virtualized, on-demand resources to end users with greater flexibility, less maintenance and reduced infrastructure cost. These resources are supervised by different management organizations and provided over the Internet using known networking protocols, standards and formats. The underlying technologies and legacy protocols contain bugs and vulnerabilities that can open doors for intrusion by attackers. Attacks such as DDoS (Distributed Denial of Service) are among the most frequent, inflicting serious damage and affecting cloud performance. In a DDoS attack, the attacker usually uses innocent compromised computers (called zombies), taking advantage of known or unknown bugs and vulnerabilities, to send a large number of packets from these already-captured zombies to a server. This may occupy a major portion of the network bandwidth of the victim cloud infrastructure or consume much of the server's time. Thus, in this work, we designed a DDoS detection system based on the C4.5 algorithm to mitigate the DDoS threat. This algorithm, coupled with signature detection techniques, generates a decision tree to perform automatic, effective detection of signature attacks for DDoS flooding attacks. To validate our system, we selected other machine learning techniques and compared the obtained results.
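
A brief sketch using scikit-learn's entropy-criterion decision tree as a stand-in for C4.5 (which additionally uses gain ratio and its own pruning), compared against another learner on hypothetical flow features.

```python
# Entropy-based decision tree (C4.5-like stand-in) vs. a comparison model for DDoS flow detection.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

X = np.random.rand(3000, 10)               # hypothetical per-flow features (packet rate, source entropy, ...)
y = np.random.randint(0, 2, size=3000)     # 1 = DDoS flooding traffic

tree = DecisionTreeClassifier(criterion="entropy", max_depth=8, random_state=0)
print("decision tree:", cross_val_score(tree, X, y, cv=5, scoring="accuracy").mean())
print("naive Bayes  :", cross_val_score(GaussianNB(), X, y, cv=5).mean())
```
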

Majumder, R., Som, S., Gupta, R..  2017.  Vulnerability prediction through self-learning model. 2017 International Conference on Infocom Technologies and Unmanned Systems (Trends and Future Directions) (ICTUS). :400–402.

Vulnerability is one of the most important concerns related to software and operating systems. Whenever software is developed, loopholes and incompleteness remain from the development phase, so a vulnerability always exists that can surface at any time. Detecting a vulnerability is one thing; predicting its occurrence over the course of time is another. If we can learn about the vulnerabilities of a piece of software ahead of time, it acts as an early alarm for developers to produce sounder, improved software the next time. This proposal discusses an implementation of the idea using an artificial neural network, where different data sets are given as input for further analysis toward successful results. Models already exist for studying vulnerabilities in software and networks; this proposal, in addition to the current work, throws light on the predictability of vulnerabilities over the course of time.

Rupasinghe, R. A. A., Padmasiri, D. A., Senanayake, S. G. M. P., Godaliyadda, G. M. R. I., Ekanayake, M. P. B., Wijayakulasooriya, J. V..  2017.  Dynamic clustering for event detection and anomaly identification in video surveillance. 2017 IEEE International Conference on Industrial and Information Systems (ICIIS). :1–6.

This work introduces concepts and algorithms, along with a case study validating them, to enhance event detection, pattern recognition and anomaly identification in real-life video surveillance. The motivation for the work lies in the observation that human behavioral patterns in general continuously evolve and adapt with time, rather than being static. First, limitations in existing work with respect to this phenomenon are identified. Accordingly, the notion and algorithms of Dynamic Clustering are introduced to overcome these drawbacks. Correspondingly, we propose maintaining two separate sets of data in parallel, namely the Normal Plane and the Anomaly Plane, to successfully achieve the task of continuous learning. The practicability of the proposed algorithms in a real-life scenario is demonstrated through a case study. From the analysis presented in this work, it is evident that a more comprehensive analysis, closely following human perception, can be accomplished by incorporating the proposed notions and algorithms in video surveillance.

2018-04-02
Odesile, A., Thamilarasu, G..  2017.  Distributed Intrusion Detection Using Mobile Agents in Wireless Body Area Networks. 2017 Seventh International Conference on Emerging Security Technologies (EST). :144–149.

Technological advances in wearable and implanted medical devices are enabling wireless body area networks to alter the current landscape of medical and healthcare applications. These systems have the potential to significantly improve real time patient monitoring, provide accurate diagnosis and deliver faster treatment. In spite of their growth, securing the sensitive medical and patient data relayed in these networks to protect patients' privacy and safety still remains an open challenge. The resource constraints of wireless medical sensors limit the adoption of traditional security measures in this domain. In this work, we propose a distributed mobile agent based intrusion detection system to secure these networks. Specifically, our autonomous mobile agents use machine learning algorithms to perform local and network level anomaly detection to detect various security attacks targeted on healthcare systems. Simulation results show that our system performs efficiently with high detection accuracy and low energy consumption.

Al-Zewairi, M., Almajali, S., Awajan, A..  2017.  Experimental Evaluation of a Multi-Layer Feed-Forward Artificial Neural Network Classifier for Network Intrusion Detection System. 2017 International Conference on New Trends in Computing Sciences (ICTCS). :167–172.

Deep learning has been proven more effective than conventional machine-learning algorithms in solving classification problems with high dimensionality and complex features, especially when trained with big data. In this paper, a deep learning binomial classifier for a Network Intrusion Detection System is proposed and experimentally evaluated using the UNSW-NB15 dataset. Three different experiments were executed in order to determine the optimal activation function, then to select the most important features and finally to test the proposed model on unseen data. The evaluation results demonstrate that the proposed classifier outperforms other models in the literature, with 98.99% accuracy and a 0.56% false alarm rate on unseen data.
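
A minimal sketch of the first experiment, sweeping candidate activation functions for a feed-forward binomial classifier; the layer sizes and synthetic data are assumptions rather than the UNSW-NB15 setup.

```python
# Compare activation functions for a small feed-forward binary classifier.
import numpy as np
from tensorflow.keras import layers, models

X = np.random.rand(2000, 42).astype("float32")      # placeholder for preprocessed flow features
y = np.random.randint(0, 2, size=2000).astype("float32")

for act in ("relu", "tanh", "sigmoid"):
    model = models.Sequential([
        layers.Input(shape=(42,)),
        layers.Dense(64, activation=act),
        layers.Dense(64, activation=act),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    hist = model.fit(X, y, epochs=3, batch_size=64, validation_split=0.2, verbose=0)
    print(act, round(hist.history["val_accuracy"][-1], 3))
```
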

Yousefi-Azar, M., Varadharajan, V., Hamey, L., Tupakula, U..  2017.  Autoencoder-Based Feature Learning for Cyber Security Applications. 2017 International Joint Conference on Neural Networks (IJCNN). :3854–3861.

This paper presents a novel feature learning model for cyber security tasks. We propose to use auto-encoders (AEs), as a generative model, to learn latent representations of different feature sets. We show how well the AE is capable of automatically learning a reasonable notion of semantic similarity among input features. Specifically, the AE accepts a feature vector obtained from cyber security phenomena and extracts a code vector that captures the semantic similarity between the feature vectors. This similarity is embedded in an abstract latent representation. Because the AE is trained in an unsupervised fashion, the main part of this success comes from the appropriate original feature set used in this paper. The approach can also provide more discriminative features than other feature engineering approaches. Furthermore, the scheme can reduce the dimensionality of the features, thereby significantly minimising the memory requirements. We selected two different cyber security tasks: network-based anomaly intrusion detection and malware classification. We have analysed the proposed scheme with various classifiers using publicly available datasets for network anomaly intrusion detection and malware classification. Several appropriate evaluation metrics show improvement compared to prior results.
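
A short sketch of the AE-as-feature-learner scheme: train an autoencoder without labels, then hand its code-layer activations to an ordinary classifier. Dimensions and data below are placeholders.

```python
# Unsupervised autoencoder feature learning followed by a downstream classifier on the code vectors.
import numpy as np
from tensorflow.keras import layers, models
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

dim_in, dim_code = 100, 16
X = np.random.rand(2000, dim_in).astype("float32")    # placeholder feature vectors
y = np.random.randint(0, 2, size=2000)                # labels used only by the downstream classifier

inp = layers.Input(shape=(dim_in,))
code = layers.Dense(dim_code, activation="relu", name="code")(layers.Dense(64, activation="relu")(inp))
out = layers.Dense(dim_in, activation="sigmoid")(layers.Dense(64, activation="relu")(code))
ae = models.Model(inp, out)
ae.compile(optimizer="adam", loss="mse")
ae.fit(X, X, epochs=5, batch_size=64, verbose=0)       # unsupervised: reconstruct the input

encoder = models.Model(inp, code)                      # reduced, learned representation
Z = encoder.predict(X, verbose=0)
print(cross_val_score(LogisticRegression(max_iter=1000), Z, y, cv=5).mean())
```
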

Alom, M. Z., Taha, T. M..  2017.  Network Intrusion Detection for Cyber Security on Neuromorphic Computing System. 2017 International Joint Conference on Neural Networks (IJCNN). :3830–3837.

In this paper, we demonstrate a neuromorphic cognitive computing approach to a Network Intrusion Detection System (IDS) for cyber security using Deep Learning (DL). The algorithmic power of DL has been merged with fast and extremely power-efficient neuromorphic processors for cyber security. In this implementation, the data is numerically encoded and trained with an unsupervised deep learning technique called an Auto Encoder (AE) in the training phase. The generated weights of the AE are used as initial weights for the supervised training phase using neural networks. The final weights are converted to discrete values using Discrete Vector Factorization (DVF) to generate crossbar weights, synaptic weights, and thresholds for neurons. Finally, the generated crossbar weights, synaptic weights, thresholds, and leak values are mapped to crossbars and neurons. In the testing phase, the encoded test samples are converted to spiking form using a hybrid encoding technique. The model has been deployed and tested on the IBM Neurosynaptic Core Simulator (NSCS) and on an actual IBM TrueNorth neurosynaptic chip. The experimental results show around 90.12% accuracy for network intrusion detection on the physical neuromorphic chip. Furthermore, we have investigated the proposed system not only for the detection of malicious packets but also for classifying specific types of attacks, achieving 81.31% recognition accuracy. The neuromorphic implementation provides remarkable detection and classification accuracy for network intrusion detection at extremely low power.

He, X., Islam, M. M., Jin, R., Dai, H..  2017.  Foresighted Deception in Dynamic Security Games. 2017 IEEE International Conference on Communications (ICC). :1–6.

Deception has been widely considered in the literature as an effective means of enhancing security protection when the defender holds some private information about the ongoing rivalry that is unknown to the attacker. However, most existing works on deception assume static environments and thus consider only myopic deception, while practical security games between the defender and the attacker may take place in dynamic scenarios. To better exploit the defender's private information in dynamic environments and improve security performance, a stochastic deception game (SDG) framework is developed in this work to enable the defender to conduct foresighted deception. To solve the proposed SDG, a new iterative algorithm that is provably convergent is developed. A corresponding learning algorithm is developed as well to facilitate the defender in conducting foresighted deception in unknown dynamic environments. Numerical results show that the proposed foresighted deception can offer a substantial performance improvement compared to conventional myopic deception.