Biblio
Software-Defined Networking (SDN) provides highly programmable functionality for dynamic network configuration and management. Moreover, SDN introduces a centralized management approach by separating the network into control and data planes. In this paper, we introduce a deep learning enabled intrusion detection and prevention system (DL-IDPS) to prevent secure shell (SSH) brute-force attacks and distributed denial-of-service (DDoS) attacks in SDN. Packet lengths at the SDN switch are collected as sequences for deep learning models to identify anomalous and malicious packets. Four deep learning models, namely Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM) and Stacked Auto-encoder (SAE), are implemented and compared for the proposed DL-IDPS. The experimental results show that the MLP-based DL-IDPS achieves the highest accuracy, reaching nearly 99% and 100% for preventing SSH brute-force and DDoS attacks, respectively.
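To make the packet-length-sequence idea concrete, below is a minimal sketch of the MLP variant of such a DL-IDPS: fixed-length windows of packet lengths are classified as benign or malicious. The window size, synthetic traffic, and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Sketch only: classify windows of packet lengths with an MLP.
# Data, window size, and model settings are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

WINDOW = 10  # assumed number of consecutive packet lengths per sample

rng = np.random.default_rng(0)
# Hypothetical traffic: benign flows show varied packet lengths, while
# DDoS/brute-force flows tend to repeat small, uniform lengths.
benign = rng.integers(60, 1500, size=(500, WINDOW))
attack = rng.integers(60, 120, size=(500, WINDOW))
X = np.vstack([benign, attack]).astype(float) / 1500.0  # scale to [0, 1]
y = np.array([0] * 500 + [1] * 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```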
Network traffic anomaly detection is of critical importance in cybersecurity due to the massive and rapid growth of sophisticated computer network attacks. Indeed, the more new Internet-related technologies are created, the more elaborate the attacks become. Among contemporary high-level attacks, dictionary-based brute-force attacks (BFA) present a particularly difficult challenge, and effective methods are needed to detect and mitigate them in real time. In this paper, we investigate SSH and FTP brute-force attack detection using the Long Short-Term Memory (LSTM) deep learning approach. Additionally, we employ machine learning (ML) classifiers, namely J48, naive Bayes (NB), decision table (DT), random forest (RF) and k-nearest-neighbor (k-NN), for additional detection purposes. We used the well-known labelled CICIDS2017 dataset, evaluated the effectiveness of the LSTM and ML algorithms, and compared their performance. Our results show that the LSTM model outperforms the ML algorithms, with an accuracy of 99.88%.
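As a rough illustration of the setup, the sketch below trains an LSTM on sequences of flow features and a random-forest baseline on the flattened sequences. The feature count, sequence length, and synthetic data stand in for CICIDS2017 and are assumptions for illustration only.

```python
# Sketch only: LSTM brute-force detector vs. a random-forest baseline.
# Shapes, labels, and hyperparameters are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

TIMESTEPS, FEATURES = 20, 8  # assumed window of 20 flow records, 8 features each

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, TIMESTEPS, FEATURES)).astype("float32")
y = rng.integers(0, 2, size=1000)  # 0 = benign, 1 = brute-force (synthetic labels)

model = Sequential([
    LSTM(32, input_shape=(TIMESTEPS, FEATURES)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, verbose=0)

# Classical-ML baseline on flattened sequences, analogous to the RF classifier.
rf = RandomForestClassifier(n_estimators=100, random_state=1)
rf.fit(X.reshape(len(X), -1), y)
```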
Security of the Internet of Vehicles (IoV) is critical, as IoV promises safer and more secure driving. IoV relies on VANETs, which are based on vehicle-to-vehicle (V2V) communication. Vehicles are integrated with various sensors and embedded systems that allow them to gather data about the situation on the road, such as information associated with a car accident, a congested highway ahead, or a parked car. This information is exchanged with neighboring vehicles on the road to promote safe driving. IoV networks are vulnerable to various security attacks, and V2V communication has specific vulnerabilities that attackers can exploit to compromise the whole network. In this paper, we concentrate on intrusion detection in IoV and propose a multilayer perceptron (MLP) neural network to detect intruders or attackers on an IoV network. Results are reported in the form of predictions, classification reports, and confusion matrices. A thorough simulation study demonstrates the effectiveness of the proposed MLP-based intrusion detection system.
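The following minimal sketch mirrors the reported outputs (predictions, a classification report, and a confusion matrix) using an MLP on a synthetic stand-in for V2V traffic features; the feature set and labels are assumptions, not the paper's dataset.

```python
# Sketch only: MLP-based intrusion detection with the same evaluation
# artifacts the abstract mentions. Data and features are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report, confusion_matrix

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 12))            # assumed 12 V2V traffic features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # synthetic "intrusion" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=2)
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=2)
mlp.fit(X_tr, y_tr)

pred = mlp.predict(X_te)
print(classification_report(y_te, pred, target_names=["normal", "intruder"]))
print(confusion_matrix(y_te, pred))
```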
A robust and reliable system for detecting spam reviews is urgently needed so that customers can purchase products from online sites without being cheated. Many online sites offer options for posting reviews, creating scope for fake paid reviews or otherwise untruthful reviews. These concocted reviews can mislead the general public and leave them uncertain whether to believe a review. Prominent machine learning techniques have been introduced to address spam review detection, but the majority of current research has concentrated on supervised learning methods, which require labeled data, a scarce resource for online reviews. Our focus in this article is to detect deceptive text reviews. To achieve that, we work with both labeled and unlabeled data and propose deep learning methods for spam review detection, including Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), and a variant of Recurrent Neural Network (RNN), the Long Short-Term Memory (LSTM) network. We also apply traditional machine learning classifiers such as Naive Bayes (NB), K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) to detect spam reviews and, finally, we present a performance comparison of both traditional and deep learning classifiers.
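As a toy illustration of the traditional-versus-neural comparison, the sketch below runs a Naive Bayes classifier and an MLP on TF-IDF features of a handful of made-up reviews; the corpus, labels, and settings are placeholders, and the paper's CNN and LSTM models are not reproduced here.

```python
# Sketch only: compare a traditional classifier (Naive Bayes) with an MLP
# for spam-review detection on TF-IDF features. Reviews and labels are
# illustrative, not a real dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier

reviews = [
    "Great product, works exactly as described",
    "Terrible quality, broke after one day",
    "Best item ever buy now amazing deal click link",
    "Five stars perfect perfect perfect must buy now",
]
labels = [0, 0, 1, 1]  # 0 = genuine, 1 = spam (illustrative labels)

X = TfidfVectorizer().fit_transform(reviews)

nb = MultinomialNB().fit(X, labels)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=3).fit(X, labels)

print("NB :", nb.predict(X))
print("MLP:", mlp.predict(X))
```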
As a problem-solving method, neural networks have shown broad applicability across medical applications, speech recognition, and natural language processing. This success has even led to implementations of neural network algorithms in hardware. In this paper, we explore two questions: (a) to what extent do microelectronic variations affect the quality of results produced by neural networks; and (b) does the answer to the first question represent an opportunity to optimize the implementation of neural network algorithms? Regarding the first question, variations are now increasingly common in aggressive process nodes and typically manifest as an increased frequency of timing errors. Combating variations (due to process and/or operating conditions) usually results in increased guardbands in circuit and architectural design, thus reducing the gains from process technology advances. Given the inherent resilience of neural networks due to the adaptation of their learning parameters, one would expect the quality of results produced by neural networks to be relatively insensitive to the rising timing error rates caused by increased variations. On the contrary, using two frequently used neural networks (MLP and CNN), our results show that variations can significantly affect inference accuracy. This paper outlines our assessment methodology and our use of a cross-layer evaluation approach that extracts hardware-level errors from twenty different operating conditions and then injects those errors back into the software layer in an attempt to answer the second question posed above.
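The cross-layer idea can be sketched in software alone: train a network, inject hardware-style corruption into its parameters, and measure the accuracy drop. In the sketch below, random sign flips of weights stand in for timing-error effects; the error model, rates, and dataset are illustrative assumptions, not the paper's extracted hardware errors.

```python
# Sketch only: inject weight-level faults into a trained MLP and measure
# the change in inference accuracy. The fault model is a crude stand-in
# for variation-induced timing errors.
import copy
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=4)
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=4).fit(X_tr, y_tr)
print("clean accuracy:", round(mlp.score(X_te, y_te), 3))

def inject_errors(model, rate, rng):
    # Flip the sign of a fraction of weights to emulate corrupted
    # computations at the hardware layer.
    for W in model.coefs_:
        mask = rng.random(W.shape) < rate
        W[mask] *= -1.0

rng = np.random.default_rng(4)
for rate in (0.001, 0.01, 0.05):
    faulty = copy.deepcopy(mlp)
    inject_errors(faulty, rate, rng)
    print(f"error rate {rate}: accuracy {faulty.score(X_te, y_te):.3f}")
```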
Mathematical formulae are essential in science, but face challenges of ambiguity, due to the use of a small number of identifiers to represent an immense number of concepts. Corresponding to word sense disambiguation in Natural Language Processing, we disambiguate mathematical identifiers. By regarding formulae and natural text as one monolithic information source, we are able to extract the semantics of identifiers in a process we term Mathematical Language Processing (MLP). As scientific communities tend to establish standard (identifier) notations, we use the document domain to infer the actual meaning of an identifier. Therefore, we adapt the software development concept of namespaces to mathematical notation. Thus, we learn namespace definitions by clustering the MLP results and mapping those clusters to subject classification schemata. In addition, this gives fundamental insights into the usage of mathematical notations in science, technology, engineering and mathematics. Our gold standard based evaluation shows that MLP extracts relevant identifier-definitions. Moreover, we discover that identifier namespaces improve the performance of automated identifier-definition extraction, and elevate it to a level that cannot be achieved within the document context alone.
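To illustrate the namespace-learning step in miniature, the sketch below clusters documents by the identifier-definition pairs extracted from them, so that each cluster approximates a notation namespace (for example, "E = energy" in physics versus "E = expectation" in statistics). The toy extraction output and clustering settings are assumptions for illustration, not the paper's pipeline.

```python
# Sketch only: cluster documents by extracted identifier-definition pairs
# to approximate identifier namespaces. Input strings are hypothetical
# MLP (Mathematical Language Processing) extraction results.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "E:energy m:mass c:speed_of_light",
    "E:energy p:momentum m:mass",
    "E:expectation X:random_variable sigma:standard_deviation",
    "sigma:standard_deviation mu:mean X:random_variable",
]

# Treat each "identifier:definition" token as a feature.
X = TfidfVectorizer(token_pattern=r"\S+").fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=5).fit(X)
print("namespace assignments:", km.labels_)
```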