Biblio
Mobile ad hoc networks (MANETs) are sets of mobile wireless nodes that can communicate without the need for any infrastructure. The inherent features of MANETs make them vulnerable to many security attacks, including the wormhole attack. In the past few years, different methods have been introduced for detecting, mitigating, and preventing wormhole attacks in MANETs. In this paper, we introduce a new decentralized scheme based on statistical metrics for detecting wormholes that employs the “number of new neighbors” along with the “number of neighbors” of each node as its parameters. The proposed scheme has considerably low detection delay and creates no traffic overhead for routing protocols that include a neighbor discovery mechanism. It also has modest processing-power and memory requirements. Our simulation results using the NS-3 simulator show that the proposed scheme performs well in terms of detection accuracy, false positive rate, and mean detection delay.
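As an illustration only, here is a minimal Python sketch of the kind of per-node check the abstract describes: each node tracks its number of neighbors and number of new neighbors per HELLO interval and raises an alarm when either count deviates sharply from its running statistics. The windowed mean/standard-deviation test, window size, and threshold are assumptions, not the paper's actual detector.

```python
from collections import deque
import statistics

class NeighborMonitor:
    """Tracks per-interval neighbor statistics for one node."""
    def __init__(self, window=20, k=3.0):
        self.known = set()                     # every neighbor ever seen
        self.hist_n = deque(maxlen=window)     # history of neighbor counts
        self.hist_new = deque(maxlen=window)   # history of new-neighbor counts
        self.k = k                             # alarm threshold in std devs

    def _anomalous(self, hist, value):
        if len(hist) < 5:                      # not enough history yet
            return False
        mu = statistics.mean(hist)
        sigma = statistics.pstdev(hist) or 1e-9
        return (value - mu) / sigma > self.k

    def update(self, current_neighbors):
        """Call once per HELLO interval with the set of heard neighbor IDs."""
        n = len(current_neighbors)
        new = len(current_neighbors - self.known)
        alarm = self._anomalous(self.hist_n, n) or self._anomalous(self.hist_new, new)
        self.hist_n.append(n)
        self.hist_new.append(new)
        self.known |= current_neighbors
        return alarm    # True: counts jumped as if a wormhole shortened paths

mon = NeighborMonitor()
for _ in range(10):
    mon.update({1, 2, 3})                      # stable neighborhood
print(mon.update({1, 2, 3, 7, 8, 9, 10}))      # sudden influx of "neighbors" -> True
```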
An accurate model is essential for controlling a nonlinear system. Traditional identification methods based on shallow BP networks easily fall into local optima. In this paper, a modeling method for nonlinear systems based on an improved Deep Belief Network (DBN) is proposed. A Continuous Restricted Boltzmann Machine (CRBM) is used as the first layer of the DBN, so that the network can deal more effectively with the real-valued data collected from actual systems. Unsupervised pre-training and supervised fine-tuning are then combined to improve identification accuracy. The simulation results show that the proposed method achieves higher identification accuracy. Finally, the improved algorithm is applied to the identification of the diameter model of a silicon single crystal, and the simulation results demonstrate its excellent parameter-identification ability.
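A hedged sketch of the two-phase training the abstract describes, using a Gaussian-visible RBM trained with one step of contrastive divergence as a stand-in for the paper's CRBM first layer; the exact CRBM update rule, data, and all hyperparameters here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GaussianRBM:
    """RBM with real-valued (Gaussian) visible units, stand-in for a CRBM."""
    def __init__(self, n_vis, n_hid, lr=1e-3):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.bv = np.zeros(n_vis)
        self.bh = np.zeros(n_hid)
        self.lr = lr

    def cd1(self, v0):
        """One contrastive-divergence step on a batch of real-valued inputs."""
        h0 = sigmoid(v0 @ self.W + self.bh)
        h0s = (rng.random(h0.shape) < h0).astype(float)
        v1 = h0s @ self.W.T + self.bv          # Gaussian visibles: mean reconstruction
        h1 = sigmoid(v1 @ self.W + self.bh)
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.bv += self.lr * (v0 - v1).mean(0)
        self.bh += self.lr * (h0 - h1).mean(0)

# Phase 1: unsupervised pre-training on raw system measurements (placeholder data).
X = rng.standard_normal((512, 10))
rbm = GaussianRBM(10, 32)
for _ in range(100):
    rbm.cd1(X)

# Phase 2: the learned weights would initialize a feed-forward network that is
# then fine-tuned with supervised regression (e.g., gradient descent on MSE).
H = sigmoid(X @ rbm.W + rbm.bh)                # features passed to the next layer
print(H.shape)
```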
Training the future cybersecurity workforce to respond to emerging threats requires the introduction of novel educational interventions into the cybersecurity curriculum. To be effective, these interventions have to incorporate trending knowledge from cybersecurity and related domains while allowing for experiential learning through hands-on experimentation. To date, the traditional interdisciplinary approach to cybersecurity training has infused political science, law, economics, or linguistics knowledge into the cybersecurity curriculum, allowing for only limited experimentation. Cybersecurity students were left with little opportunity to acquire knowledge, skills, and abilities in domains outside of these, and students in other majors had few avenues into cybersecurity. With this in mind, we developed an interdisciplinary course for experiential learning in the fields of cybersecurity and interaction design. The inaugural course teaches students from cybersecurity, user interaction design, and visual design the principles of designing for secure use, or secure design, and allows them to apply these principles to prototype Internet-of-Things (IoT) products for smart homes. This paper elaborates on the concepts of secure design and on how our approach enhances the training of the future cybersecurity workforce.
The security of image steganography is an important basis for evaluating steganography algorithms. Steganography has recently made great progress in its long-term confrontation with steganalysis. To be secure, image steganography must be able to resist detection by steganalysis algorithms. Traditional embedding-based steganography embeds the secret information into the content of an image, which unavoidably leaves a trace of the modification that can be detected by increasingly advanced machine-learning-based steganalysis algorithms. The concept of steganography without embedding (SWE), which does not modify the data of the carrier image at all, emerged to evade detection by machine-learning-based steganalysis algorithms. In this paper, we propose a novel image SWE method based on deep convolutional generative adversarial networks. We map the secret information to a noise vector and use the trained generator network to produce the carrier image from that noise vector. No modification or embedding operations are required during image generation, and the information contained in the image can be extracted successfully by another neural network, called the extractor, after training. The experimental results show that this method offers highly accurate information extraction and a strong ability to resist detection by state-of-the-art image steganalysis algorithms.
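To make the "mapping secret information into a noise vector" step concrete, here is a small sketch of one common SWE-style mapping: each secret bit selects a sub-interval of the noise range, and extraction inverts that mapping. The generator and extractor networks are left as commented placeholders, since the paper's trained DCGAN cannot be reproduced from the abstract, and the interval scheme below is an assumption.

```python
import numpy as np

def bits_to_noise(bits, delta=0.5):
    """Map each bit to a sub-interval of [-1, 1]: 0 -> negative, 1 -> positive."""
    bits = np.asarray(bits, dtype=float)
    base = np.where(bits > 0.5, 0.5, -0.5)            # interval centres
    jitter = (np.random.rand(len(bits)) - 0.5) * delta  # diversity within interval
    return base + jitter

def noise_to_bits(noise):
    """Inverse mapping, as the extractor's output layer would learn to do."""
    return (np.asarray(noise) >= 0).astype(int)

secret = [1, 0, 1, 1, 0, 0, 1, 0]
z = bits_to_noise(secret)        # noise vector fed to the trained generator
# stego = generator(z)           # image generation, no embedding involved
# z_hat = extractor(stego)       # network trained to invert the generator
assert (noise_to_bits(z) == np.array(secret)).all()
print(z.round(2))
```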
Urban traffic is complex, and traditional logistics-distribution path-optimization algorithms are insensitive to changing road conditions, which limits their usefulness in actual logistics distribution. To address this, we study a logistics distribution path optimization algorithm based on a deep belief network. First, a traffic forecasting model based on the deep belief network is built, trained, and validated on a large volume of traffic data. On this basis, the predicted road conditions are combined with the traffic network to build a time-dependent traffic network; the accessible-node set and the pheromone variable of the ant colony algorithm are then amended according to this time-dependent network, yielding a logistics distribution path optimization algorithm based on traffic forecasting. Finally, comparative experiments against other logistics distribution path optimization algorithms verify the superiority and practical value of the algorithm in actual distribution.
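The amended ant colony step can be sketched as follows: edge desirability combines pheromone with the reciprocal of the travel time predicted for the departure time slot, so routes adapt to forecast congestion. The DBN forecaster is stubbed out, and the alpha/beta parameters are conventional ACO defaults rather than values from the paper.

```python
import numpy as np

def predicted_travel_time(edge, t_slot):
    """Placeholder for the DBN road-condition forecast per time slot."""
    base, congestion = edge
    return base * congestion[t_slot % len(congestion)]

def choose_next(current, candidates, pheromone, t_slot, alpha=1.0, beta=2.0):
    """Pick the next node by pheromone and time-dependent travel-time heuristic."""
    taus = np.array([pheromone[(current, c)] for c, _ in candidates])
    etas = np.array([1.0 / predicted_travel_time(e, t_slot) for _, e in candidates])
    p = (taus ** alpha) * (etas ** beta)
    p /= p.sum()
    idx = np.random.choice(len(candidates), p=p)
    return candidates[idx][0]

# Toy example: two outgoing edges from node 0; each edge carries a base travel
# time and a congestion multiplier per time slot.
pheromone = {(0, 1): 1.0, (0, 2): 1.0}
candidates = [(1, (5.0, [1.0, 2.5])), (2, (7.0, [1.0, 1.0]))]
print(choose_next(0, candidates, pheromone, t_slot=1))  # slot 1: node 2 favored
```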
Modern industrial control systems (ICS) have increasingly become victims of cyber attacks in recent years. These attacks are hard to detect, and their consequences can be catastrophic. Cyber attacks can cause anomalies in the operation of an ICS and its technological equipment. The presence of mutual interference and noise in this equipment significantly complicates anomaly detection. Moreover, the traditional means of protection used in corporate solutions require updating with each change in the structure of the industrial process. An approach based on machine learning was used to overcome these problems; it complements traditional methods and makes it possible to detect signal correlations and use them for anomaly detection. The Additional Tennessee Eastman Process Simulation Data for Anomaly Detection Evaluation dataset was analyzed as an example of an industrial process. In the course of the research, correlations between sensor signals were detected and preliminary data processing was carried out. Algorithms from the most common machine learning techniques (decision trees, linear algorithms, support vector machines) and deep learning models (neural networks) were investigated for the industrial process anomaly detection task. It is shown that linear algorithms are the least demanding on computational resources but do not achieve an acceptable result and produce a significant number of errors. Decision-tree-based algorithms provided acceptable accuracy, but the RAM required for their operation grows polynomially with the training sample volume. Deep neural networks provided the greatest accuracy, but they require considerable computing power for internal calculations.
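For orientation, here is a compact scikit-learn comparison in the spirit of the study, with synthetic stand-in data (52 features, roughly matching the TEP sensor/actuator count) in place of the actual Tennessee Eastman dataset; the model choices and hyperparameters are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

# Imbalanced synthetic data: ~10% "anomalous" samples.
X, y = make_classification(n_samples=2000, n_features=52,
                           weights=[0.9, 0.1], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "linear": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(max_depth=10),
    "svm": SVC(),
    "dnn": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),
}
for name, m in models.items():
    m.fit(Xtr, ytr)
    print(f"{name}: F1 = {f1_score(yte, m.predict(Xte)):.3f}")
```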
The storage efficiency of hash codes and their application to fast approximate nearest neighbor search, along with the explosion in the size of available labeled image datasets, have recently spurred intense interest in developing learning-based hashing algorithms. In this paper, we present a learning-based hashing algorithm that utilizes ordinal information in feature vectors. We propose a novel, mathematically differentiable approximation of the argmax function for this hashing algorithm. It enables seamless integration of the hash function with deep neural network architectures, which can exploit the rich feature vectors generated by convolutional neural networks. We also propose a loss function for the case where the hash code is not binary and its entries are digits in an arbitrary k-ary base. The resulting model, comprising feature-vector generation and a hashing layer, is amenable to end-to-end training using gradient descent methods. In contrast to the majority of current hashing algorithms, which are either not learning-based or use hand-crafted feature vectors as input, simultaneous training of the components of our system results in better optimization. Extensive evaluations on the NUS-WIDE, CIFAR-10, and MIRFlickr benchmarks show that the proposed algorithm outperforms state-of-the-art and classical data-agnostic, unsupervised, and supervised hashing methods by 2.6% to 19.8% mean average precision under various settings.
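The standard construction of such a differentiable argmax surrogate is a temperature-controlled softmax whose expected index approaches the hard argmax as the temperature shrinks; the paper's exact approximation may differ, so the sketch below is only illustrative.

```python
import numpy as np

def soft_argmax(logits, temperature=0.1):
    """Differentiable surrogate for argmax over one k-ary hash digit's logits."""
    z = logits / temperature
    p = np.exp(z - z.max())          # softmax, numerically stabilized
    p /= p.sum()
    return (p * np.arange(len(logits))).sum()   # expected digit index

logits = np.array([0.1, 2.0, 0.3, 0.2])         # k = 4: one base-4 hash digit
print(soft_argmax(logits, temperature=1.0))     # smooth, far from hard argmax
print(soft_argmax(logits, temperature=0.01))    # ~1.0, approaches argmax
```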
In practice, defenders need a more efficient network detection approach with the quick-responding capability to learn new network behavioural features for network intrusion detection. Deep learning techniques have been confirmed to outperform classic approaches in many applications. Accordingly, this study focuses on network intrusion detection using convolutional neural networks (CNNs) based on LeNet-5 to classify network threats. The experimental results show that the prediction accuracy of intrusion detection reaches 99.65% with more than 10,000 samples. The overall accuracy rate is 97.53%.
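A LeNet-5-style classifier of the kind the study describes can be sketched in PyTorch as follows; the 32x32 single-channel input (network features reshaped into an "image") and the number of output classes are assumptions the abstract does not specify.

```python
import torch
import torch.nn as nn

class LeNet5IDS(nn.Module):
    """Classic LeNet-5 layout repurposed for intrusion-detection inputs."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, n_classes),
        )

    def forward(self, x):            # x: (batch, 1, 32, 32)
        return self.classifier(self.features(x))

model = LeNet5IDS()
print(model(torch.randn(4, 1, 32, 32)).shape)   # torch.Size([4, 2])
```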
Early detection of new kinds of malware always plays an important role in defending network systems. In particular, if intelligent protection systems could themselves detect the existence of new malware types in their systems, even from a very small number of malware samples, it would be a huge benefit for the organization as well as for society, since it helps prevent the spread of that kind of malware. To deal with learning from few samples, the terms ``one-shot learning'' and ``few-shot learning'' were introduced, and are mostly used in computer vision to recognize images, handwriting, etc. The approach introduced in this paper takes advantage of one-shot learning algorithms to solve the malware classification problem, using a Memory Augmented Neural Network in combination with malware API call sequences, which are a very valuable source of information for identifying malware behavior. In addition, it leverages developments in the Natural Language Processing field, such as word2vec, to convert those API sequences into numeric vectors before feeding them to the one-shot learning network. The results confirm very good accuracies compared to traditional methods.
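The API-sequence vectorization step might look like the following sketch, with gensim's Word2Vec as a stand-in embedding model and made-up API-call traces; the Memory Augmented Neural Network itself is out of scope here.

```python
from gensim.models import Word2Vec
import numpy as np

# Hypothetical API-call traces; real ones would come from sandbox execution.
sequences = [
    ["CreateFileW", "WriteFile", "CloseHandle"],
    ["RegOpenKeyExW", "RegSetValueExW", "RegCloseKey"],
    ["CreateFileW", "ReadFile", "CloseHandle"],
]
w2v = Word2Vec(sentences=sequences, vector_size=32, window=3,
               min_count=1, epochs=50, seed=0)

def embed(seq):
    """Turn one API trace into a (sequence_length, 32) array for the MANN."""
    return np.stack([w2v.wv[api] for api in seq])

print(embed(sequences[0]).shape)    # (3, 32)
```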
Healthcare is a vital component of every nation's critical infrastructure, yet it is one of the most vulnerable sectors to cyber-attacks. To reinforce knowledge of information security processes and data protection procedures, educational and training schemes should be established for information technology (IT) staff working in healthcare settings. However, training IT staff alone is not enough, as many cybersecurity threats are caused by human error or lack of awareness. Current awareness and training schemes are often implemented in silos, concentrating on one aspect of cybersecurity at a time. The Proactive Resilience Educational Framework (Prosilience EF) provides a holistic cyber resilience and security framework for developing and delivering a multilateral educational and training scheme based on a proactive approach to cybersecurity. The framework is built on the principle that education and training must be interactive, guided, meaningful, and directly relevant to the users' operational environment. It addresses capacity mapping, cyber resilience level measurement, utilizing available resources and mapping missing ones, adaptive learning technologies, and dynamic content delivery. Prosilience EF launches an iterative process of awareness and training development with relevant stakeholders (end users - hospitals, healthcare authorities, cybersecurity training providers, industry members), evaluating the framework via joint exercises/workshops and further developing it.
Integrated circuits (ICs) are becoming vulnerable to hardware Trojans. Most existing works require golden chips to provide references for hardware Trojan detection. However, a golden chip is extremely difficult to obtain. In previous work, we proposed a classification-based, golden-chip-free hardware Trojan detection technique. However, the algorithms in that work were trained on simulated ICs, without considering the shift that may occur between simulation and silicon fabrication. It is necessary to learn from actual silicon fabrication in order to obtain an accurate and effective classification model. We propose a co-training-based hardware Trojan detection technique that exploits unlabeled fabricated ICs and inaccurate simulation models, providing reliable detection capability on fabricated ICs while eliminating the need for fabricated golden chips. First, we train two classification algorithms using simulated ICs. At test time, the two algorithms identify different patterns in the unlabeled ICs and are thus able to label some of these ICs for the further training of the other algorithm. Moreover, we use a statistical examination to choose which ICs to label for the other algorithm, in order to help prevent a degradation in performance due to increased noise in the labeled ICs. We also use a statistical technique to combine the hypotheses of the two classification algorithms into the final decision. The theoretical basis of why the co-training method works is also described. Experimental results on benchmark circuits show that the proposed technique can detect unknown Trojans with high accuracy (92%-97%) and recall (88%-95%).
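A minimal co-training loop in the spirit of this description: two classifiers trained on simulated ICs label high-confidence unlabeled "fabricated" samples for each other. The confidence threshold stands in for the paper's statistical screening, and probability averaging stands in for its hypothesis combination; all data here is synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X_sim, y_sim = make_classification(n_samples=300, n_features=20, random_state=1)
X_fab, _ = make_classification(n_samples=500, n_features=20, random_state=2)

clfs = [LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0)]
for c in clfs:
    c.fit(X_sim, y_sim)             # initial training on simulated ICs

X_train = [X_sim.copy(), X_sim.copy()]
y_train = [y_sim.copy(), y_sim.copy()]
for _ in range(3):                  # a few co-training rounds
    for i, c in enumerate(clfs):
        proba = c.predict_proba(X_fab)
        sel = proba.max(axis=1) > 0.95   # screen out noisy pseudo-labels
        j = 1 - i                        # teach the *other* classifier
        # (duplicates across rounds are tolerated in this toy sketch)
        X_train[j] = np.vstack([X_train[j], X_fab[sel]])
        y_train[j] = np.concatenate([y_train[j], proba[sel].argmax(axis=1)])
    for i, c in enumerate(clfs):
        c.fit(X_train[i], y_train[i])

# Final decision: combine both hypotheses (here, by averaging probabilities).
final = (clfs[0].predict_proba(X_fab) + clfs[1].predict_proba(X_fab)) / 2
print(final.argmax(axis=1)[:10])
```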
Machine learning (ML) models are often trained on private datasets that are very expensive to collect or highly sensitive, using large amounts of computing power. The models are commonly exposed either through online APIs or used in hardware devices deployed in the field or given to end users. This provides an incentive for adversaries to steal these ML models as a proxy for gathering the underlying datasets. While API-based model exfiltration has been studied before, the theft and protection of machine learning models on hardware devices have not yet been explored. In this work, we examine this important aspect of the design and deployment of ML models. We illustrate how an attacker may acquire either the model or the model architecture through memory probing, side channels, or crafted input attacks, and propose (1) power-efficient obfuscation as an alternative to encryption, and (2) timing side-channel countermeasures.
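One of the proposed directions, timing side-channel countermeasures, can be illustrated with a simple padding scheme: every inference call is stretched to a fixed wall-clock budget so latency leaks nothing about the input or the model's internal execution path. This is a generic countermeasure sketch, not the paper's design; the budget value is an assumption and must exceed the worst-case inference time.

```python
import time

def constant_time_call(fn, x, budget_s=0.050):
    """Run fn(x), then pad to a fixed latency budget to mask timing variation."""
    start = time.perf_counter()
    result = fn(x)
    elapsed = time.perf_counter() - start
    if elapsed < budget_s:
        time.sleep(budget_s - elapsed)    # pad to the fixed budget
    return result

def toy_model(x):                         # stand-in for model inference
    time.sleep(0.001 * (x % 10))          # input-dependent timing to hide
    return x * 2

t0 = time.perf_counter()
constant_time_call(toy_model, 7)
print(f"observed latency: {time.perf_counter() - t0:.3f}s")  # ~0.050s regardless of x
```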
We investigate a deep learning model for action recognition that simultaneously extracts spatio-temporal information from raw RGB input data. The proposed multiple spatio-temporal scales recurrent neural network (MSTRNN) model is derived by combining multiple-timescale recurrent dynamics with a conventional convolutional neural network model. The architecture of the proposed model imposes both spatial and temporal constraints simultaneously on its neural activities, with the constraints varying across layers at multiple scales. As suggested by the principle of upward and downward causation, it is assumed that the network can develop a functional hierarchy through these constraints during training. To evaluate and observe the characteristics of the proposed model, we use three human action datasets consisting of different primitive actions and different compositionality levels. The performance of the MSTRNN model on these datasets is compared with that of other representative deep learning models used in the field. The results show that the MSTRNN outperforms baseline models while using fewer parameters. The characteristics of the proposed model are observed by analyzing its internal representation properties. The analysis clarifies how the spatio-temporal constraints of the MSTRNN model aid it in extracting critical spatio-temporal information relevant to its given tasks.
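The multiple-timescale ingredient can be illustrated with leaky-integrator recurrent units, where a larger time constant tau makes a layer's state change more slowly; the update shown is the generic continuous-time RNN form rather than the paper's exact equations, and the MSTRNN's convolutional spatial constraints are omitted in this sketch.

```python
import numpy as np

def leaky_rnn_step(u_prev, x, W_in, W_rec, tau):
    """Leaky-integrator update: larger tau -> more slowly changing state."""
    h_prev = np.tanh(u_prev)
    drive = W_in @ x + W_rec @ h_prev
    return (1.0 - 1.0 / tau) * u_prev + (1.0 / tau) * drive

rng = np.random.default_rng(0)
n = 8
W_in = 0.1 * rng.standard_normal((n, n))
W_rec = 0.1 * rng.standard_normal((n, n))
u_fast, u_slow = np.zeros(n), np.zeros(n)
d_fast, d_slow = [], []
for _ in range(50):
    x = rng.standard_normal(n)
    nf = leaky_rnn_step(u_fast, x, W_in, W_rec, tau=2.0)    # fast timescale
    ns = leaky_rnn_step(u_slow, x, W_in, W_rec, tau=16.0)   # slow timescale
    d_fast.append(np.linalg.norm(nf - u_fast))
    d_slow.append(np.linalg.norm(ns - u_slow))
    u_fast, u_slow = nf, ns
print(f"mean step change, fast: {np.mean(d_fast):.3f}, slow: {np.mean(d_slow):.3f}")
```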
At a time when all it takes to open a Twitter account is a mobile phone, authenticating information encountered on social media becomes very complex, especially when we lack measures to verify digital identities in the first place. Because the platform supports anonymity, fake news generated by dubious sources has been observed to travel much faster and farther than real news. Hence, we need valid measures to identify authors of misinformation to avert these consequences. Researchers have proposed different authorship attribution techniques for this kind of problem. However, because tweets are limited to 280 characters, finding a suitable authorship attribution technique is a challenge. This research aims to classify the authors of tweets by comparing machine learning methods such as logistic regression and naive Bayes. The stages of this application are tweet fetching, pre-processing, feature extraction, and the development of a machine learning model for classification. This paper illustrates the text classification process for authorship using machine learning techniques. In total, 46,895 tweets were used as training and testing data, and unique features specific to Twitter were extracted. Several steps were taken in the pre-processing phase, including removal of short texts, removal of stop words and punctuation, and tokenization and stemming of texts. This approach transforms the pre-processed data into a set of feature vectors in Python. Logistic regression and naive Bayes algorithms were applied to the feature vectors for training and testing the classifier. The logistic-regression-based classifier gave the highest accuracy, 91.1%, compared to 89.8% for the naive Bayes classifier.
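The classification stage maps naturally onto a scikit-learn pipeline, sketched below with placeholder tweets; TF-IDF stands in for the paper's Twitter-specific feature extraction, which the abstract does not fully specify.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Placeholder training data: (tweet text, author label).
tweets = ["big game tonight folks", "markets dipped again today",
          "new paper out on hashing", "who else watching the match"]
authors = ["a", "b", "c", "a"]

for clf in (LogisticRegression(max_iter=1000), MultinomialNB()):
    model = make_pipeline(TfidfVectorizer(stop_words="english"), clf)
    model.fit(tweets, authors)
    print(type(clf).__name__, model.predict(["watching the game"]))
```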
At present, mobile terminals are widely used in power systems and can easily become targets of, or springboards for, attacks on the power system. It is therefore necessary to assess the security of power mobile terminal systems to enable early warning of potential risks. In this context, this paper builds a security assessment system for power mobile terminals, drawing its features from security assessment systems for general mobile terminals and from power application scenarios. Compared with existing methods, this paper introduces machine learning into the Rank Correlation Analysis method, which otherwise relies on expert experience, and uses objective experimental data to optimize the weight parameters of the indicators. Experiments show that this weight self-learning method can be used to evaluate the security of power mobile terminal systems and improves the credibility of the result.
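As a purely speculative illustration of the weight self-learning idea, the sketch below refines expert-derived indicator weights against objective assessment scores by least squares with a term anchoring the solution near the expert prior; the paper's actual optimization procedure is not detailed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
indicators = rng.random((50, 6))          # 6 security indicators, 50 terminals
expert_w = np.array([0.3, 0.2, 0.15, 0.15, 0.1, 0.1])  # from rank correlation
true_scores = indicators @ np.array([0.35, 0.15, 0.2, 0.1, 0.1, 0.1])  # toy data

# Refine weights on objective data, anchored near the expert prior:
# minimize ||indicators w - scores||^2 + lam^2 ||w - expert_w||^2.
lam = 0.5
A = np.vstack([indicators, lam * np.eye(6)])
b = np.concatenate([true_scores, lam * expert_w])
learned_w, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(learned_w, 3))             # shifted toward the data-driven weights
```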
Machine learning (ML) algorithms provide good solutions for many security-sensitive applications; however, they themselves face the threat of adversarial attacks. A key problem in machine learning is therefore how to design feature selection algorithms that are robust against these attacks. Current research on defending against evasion attacks mainly focuses on the wrapped adversarial feature selection algorithm, WAFS, which depends on the classification algorithm and whose time cost is very high for large-scale data. The mRMR (minimum Redundancy and Maximum Relevance) algorithm, by contrast, is one of the most popular filter algorithms for feature selection, requiring no classifier during the feature selection process. In this paper, we therefore propose a novel adversary-aware feature selection algorithm under the filter model based on mRMR, named FAFS. On the one hand, the algorithm takes into account the correlation between each feature and the label as well as the redundancy between features; on the other hand, when selecting features, it considers not only generalization ability in the absence of attack but also robustness under attack. The performance of four algorithms, i.e., mRMR, TWFS (Traditional Wrapped Feature Selection), WAFS, and FAFS, is evaluated on spam filtering and malicious PDF detection in Perfect Knowledge attack scenarios. The experimental results show that FAFS performs better under evasion attacks, with lower time complexity and comparable classification accuracy.
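The mRMR backbone that FAFS builds on can be sketched as a greedy selection maximizing relevance to the label minus mean redundancy with already-selected features; FAFS's additional robustness-under-attack term is not specified in the abstract and is therefore omitted here.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=15, n_informative=5,
                           random_state=0)
Xd = (X > X.mean(axis=0)).astype(int)     # discretize for simple MI estimates

relevance = mutual_info_classif(Xd, y, discrete_features=True, random_state=0)
selected, remaining = [], list(range(Xd.shape[1]))
for _ in range(5):                        # greedily select 5 features
    def score(f):
        if not selected:
            return relevance[f]
        red = np.mean([mutual_info_score(Xd[:, f], Xd[:, s]) for s in selected])
        return relevance[f] - red         # relevance minus mean redundancy
    best = max(remaining, key=score)
    selected.append(best)
    remaining.remove(best)
print("selected features:", selected)
```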
Predicting software faults before software testing activities can help with the rational distribution of time and resources. Software metrics are used for software fault prediction because of their close relationship with software faults. Thanks to their non-linear fitting ability, neural networks are increasingly used in prediction models. We first filter the metric set of the embedded software using statistical methods to reduce the dimensionality of the model input. Then we build a back-propagation neural network with a simple structure but good performance and apply it to two practical embedded software projects. The verification results show that the model predicts software faults well.
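The two-step pipeline might be sketched as follows: a statistical significance filter over candidate metrics, then a small back-propagation network; the data is synthetic, and the paper's actual filtering test and network size are assumptions.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 12))            # 12 candidate software metrics
faults = 3 * X[:, 0] - 2 * X[:, 3] + rng.standard_normal(200)  # toy fault target

# Step 1: keep only metrics significantly correlated with the fault counts.
keep = [j for j in range(X.shape[1])
        if pearsonr(X[:, j], faults)[1] < 0.05]
print("kept metrics:", keep)

# Step 2: fit a small back-propagation network on the reduced input.
bp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
bp.fit(X[:, keep], faults)
print("train R^2:", round(bp.score(X[:, keep], faults), 3))
```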