Bibliography
Information security is becoming increasingly important in data storage and transmission. Images are widely used for many purposes, so protecting them from unauthorised users is critical, and conventional encryption techniques cannot provide a sufficiently secure framework. To overcome this, image encryption is performed through DNA encoding combined with chaotic 1D and 2D logistic maps. The key is communicated over a quantum channel using the BB84 protocol. The encrypted image is recovered through DNA decoding; since DNA encoding is invertible, decoding can be carried out efficiently through DNA subtraction. The scheme reduces complexity and provides greater strength than traditional encryption schemes. The enhanced strength of the framework is measured using metrics such as NPCR, UACI, correlation, and entropy.
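As a concrete illustration of the chaotic component, the Python sketch below generates a keystream from the 1D logistic map x_{n+1} = r * x_n * (1 - x_n) and applies an assumed two-bit DNA coding rule (00->A, 01->C, 10->G, 11->T) to masked pixel bytes; this is a minimal sketch under assumed parameters, not the paper's full scheme.

    # Minimal sketch: logistic-map keystream plus an assumed DNA coding rule.
    DNA = {0b00: 'A', 0b01: 'C', 0b10: 'G', 0b11: 'T'}   # assumed mapping

    def logistic_keystream(x0, r, n):
        """n key bytes from x_{k+1} = r * x_k * (1 - x_k); chaotic for r near 4."""
        x, out = x0, []
        for _ in range(n):
            x = r * x * (1 - x)
            out.append(int(x * 256) % 256)
        return out

    def dna_encode(byte):
        """Encode one byte as four DNA bases, two bits per base."""
        return ''.join(DNA[(byte >> s) & 0b11] for s in (6, 4, 2, 0))

    key = logistic_keystream(x0=0.3141, r=3.9999, n=4)   # assumed seed/parameter
    pixels = [17, 99, 200, 255]                          # toy image data
    print([dna_encode(p ^ k) for p, k in zip(pixels, key)])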
Data have become an important asset for analysis and behavioral prediction, especially the correlations between data. Privacy protection has aroused academic and social concern given the amount of sensitive personal information involved. However, existing works assume that records are independent of each other, which is unsuitable for correlated data: many studies either fail to achieve privacy protection or lose excessive information when data correlations are taken into account. Differential privacy, which protects privacy by injecting random noise into statistical results, effectively increases an adversary's background knowledge when the data are correlated. This paper therefore proposes an information-entropy differential privacy solution for correlated data privacy issues based on rough set theory. Under this solution, rough set theory is used to measure the degree of association between attributes, and information entropy is used to quantify each attribute's sensitivity. Information-entropy differential privacy is achieved by clustering based on the correlations and adding personalized noise to each cluster while preserving the correlations between data. Experiments show that our algorithm can effectively preserve the correlations between attributes while protecting privacy.
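The personalized-noise step can be pictured with the standard Laplace mechanism applied per cluster; in the sketch below the cluster values, entropy-derived sensitivities, and epsilon are illustrative assumptions, not the paper's parameters.

    import numpy as np

    rng = np.random.default_rng(0)

    def laplace_mechanism(value, sensitivity, epsilon):
        # Standard Laplace mechanism: noise scale b = sensitivity / epsilon.
        return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Assumed clusters: (query result, entropy-derived sensitivity) per cluster.
    clusters = {"demographics": (1250.0, 2.0), "health": (310.0, 5.0)}
    epsilon = 1.0
    print({name: laplace_mechanism(v, s, epsilon)
           for name, (v, s) in clusters.items()})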
In mobile wireless sensor networks (MWSNs), data imprecision is a common problem, and even a minor error can greatly affect decision making in real-time applications. Although many existing techniques exploit the spatio-temporal characteristics of mobile environments, few measure the trustworthiness of sensor data accuracy. We propose a unique online context-aware data cleaning method that measures trustworthiness through an initial candidate reduction based on trust parameters drawn from financial markets theory. Sensors with similar trajectory behaviors are assigned trust scores estimated by calculating "betas" to find the most accurate data to trust. Instead of placing all trust in a single candidate sensor's data to perform the cleaning, a Diversified Trust Portfolio (DTP) is generated from the selected set of spatially autocorrelated candidate sensors. Our results show that samples cleaned by the proposed method exhibit lower percent error than two well-known and effective data cleaning algorithms in both outdoor and indoor test scenarios.
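The finance-style "beta" behind the trust scores can be sketched as the covariance of a candidate sensor's readings with a reference signal, divided by the reference's variance; the reference series (e.g., a neighborhood mean) and the reading values below are assumptions for illustration.

    import numpy as np

    def beta(candidate, reference):
        # cov(candidate, reference) / var(reference), same normalization for both
        dx = candidate - candidate.mean()
        dy = reference - reference.mean()
        return np.mean(dx * dy) / np.mean(dy ** 2)

    reference = np.array([20.1, 20.4, 20.9, 21.3, 21.8])  # assumed neighborhood mean
    candidate = np.array([20.0, 20.5, 21.0, 21.2, 21.9])  # candidate sensor readings
    print(beta(candidate, reference))  # near 1 -> similar trajectory, higher DTP weight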
With advances in technology and the growing use of wireless devices, radio spectrum has become scarce, and cognitive radio is considered a solution to this problem. Cognitive radio can detect which communication channels are in use and which are free, and immediately move into free channels while avoiding occupied ones, thereby increasing the utilization of the radio frequency spectrum. Like any wireless system, it is prone to attack; the two main attacks on the physical layer of cognitive radio are the Primary User Emulation Attack (PUEA) and the replay attack. This paper focuses on mitigating these two attacks with the aid of an authentication tag and distance calculation. Mitigating these attacks yields error-free transmission, which in turn results in efficient dynamic spectrum access.
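One way to picture the tag-plus-distance defense (a hedged sketch, not the paper's exact protocol) is to verify an HMAC authentication tag on the claimed primary-user message and to check that a time-of-flight distance estimate matches the primary user's known location; the key, tolerance, and timing values below are all assumptions.

    import hmac, hashlib

    SHARED_KEY = b"pu-secret"      # assumed pre-shared key
    C = 3e8                        # speed of light, m/s

    def verify(message, tag, t_tx, t_rx, expected_dist_m, tol_m=50.0):
        good_tag = hmac.compare_digest(
            hmac.new(SHARED_KEY, message, hashlib.sha256).digest(), tag)
        dist = C * (t_rx - t_tx)   # one-way time-of-flight distance estimate
        return good_tag and abs(dist - expected_dist_m) <= tol_m

    msg = b"PU:channel=7"
    tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
    print(verify(msg, tag, t_tx=0.0, t_rx=1.0e-6, expected_dist_m=300.0))  # True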
Cyber defense can no longer be limited to intrusion detection methods, which require malicious activity to enter an internal network before an attack can be detected. Advance, predictive knowledge of future attacks allows a potential victim to heighten security and possibly prevent malicious traffic from breaching the network. This paper investigates the use of Auto-Regressive Integrated Moving Average (ARIMA) models and Bayesian networks (BNs) to predict the occurrence and intensity of future cyber attacks against two target entities. In addition to incident count forecasting, categorical and binary occurrence metrics are proposed to better represent volume forecasts to a victim. Different measurement periods are used in time series construction to better model the temporal patterns unique to each attack type and target configuration, yielding over 86% improvement over baseline forecasts. Using ground truth aggregated over different measurement periods as signals, a BN is trained and tested for each attack type, and the results provide further evidence supporting the ARIMA findings. This work highlights the complexity of cyber attack occurrences: each subset has unique characteristics and is influenced by a number of potential external factors.
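A minimal forecasting sketch with statsmodels' ARIMA is shown below; the order (1, 1, 1) and the weekly incident counts are illustrative assumptions, not the paper's fitted models.

    from statsmodels.tsa.arima.model import ARIMA

    weekly_counts = [3, 5, 4, 8, 6, 9, 7, 11, 10, 13, 12, 15]  # assumed incidents
    model = ARIMA(weekly_counts, order=(1, 1, 1)).fit()
    print(model.forecast(steps=4))   # predicted counts for the next 4 periods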
Routers are important network devices that carry the burden of transmitting information among communication devices on the Internet. A malicious adversary who wants to intercept information or paralyze a network can attack the routers directly to achieve these goals, so protecting router security is of great importance. However, router systems are notoriously difficult to understand or diagnose because of their inaccessibility and heterogeneity. The common way of gaining access to a router system and detecting anomalous behavior is to inspect router syslogs or monitor the packets flowing to the routers; these approaches diagnose routers from a single aspect rather than from multiple views. In this paper, we propose an approach that detects router anomalies and faults by learning from multiple sources of information. We use the routers' information from the user's view rather than the developer's view, which requires no expert knowledge. First, we perform offline learning to map benign or corrupted user actions to syslogs. Then we decide whether a router's condition is poor using clustering. During the detection phase, we use the distance between an event and the clusters to decide whether it is anomalous, and we can provide corresponding solutions. We applied our approach for three months in a university network containing Cisco, Huawei, and D-Link routers, using prior work as a baseline for comparison. Our approach attains 89.6% accuracy in detecting attacks, which is 5.1% higher than the prior work. The results show that our approach runs within limited time and memory usage and achieves high detection rates with low false positives.
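The distance-based decision can be sketched as follows, with k-means standing in for the clustering step and the threshold as an assumption: an event is flagged when its distance to the nearest learned cluster center is too large.

    import numpy as np
    from sklearn.cluster import KMeans

    train = np.random.RandomState(0).normal(size=(200, 4))  # benign event features
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(train)

    def is_anomaly(event, threshold=3.0):                   # threshold is assumed
        dists = np.linalg.norm(km.cluster_centers_ - event, axis=1)
        return dists.min() > threshold

    print(is_anomaly(np.array([9.0, 9.0, 9.0, 9.0])))       # far from all -> True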
Collaborative filtering (CF) recommender systems have been widely used because they perform well in personalized recommendation, but they are vulnerable to shilling attacks, in which attackers inject shilling attack profiles into the system to bias recommendations. Designing robust recommender systems and proposing attack detection methods are the main research directions for handling shilling attacks; among the detection methods, unsupervised PCA is particularly effective in experiments, but it suffers when the number of shilling attack profiles is unknown. In this paper, a new unsupervised detection method that combines PCA and data complexity is proposed to detect shilling attacks. In the proposed method, PCA is used to select suspected attack profiles, and data complexity is used to pick out the authentic profiles among them. Compared with traditional PCA, the proposed method performs well without needing to know the number of shilling attack profiles in advance.
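A hedged sketch of the PCA stage follows: because injected profiles tend to be highly correlated with one another, users with small aggregate loadings on the first principal components are shortlisted as suspects. The ranking rule and the synthetic ratings are assumptions on our part; the shortlist would then be passed to the data-complexity filter.

    import numpy as np
    from sklearn.decomposition import PCA

    ratings = np.random.RandomState(0).randint(0, 6, size=(100, 50)).astype(float)
    ratings[:10] = 5.0                         # injected profiles rating everything 5

    centered = ratings - ratings.mean(axis=0)
    pca = PCA(n_components=3).fit(centered.T)  # users as variables, items as samples
    loadings = np.abs(pca.components_).sum(axis=0)
    suspects = np.argsort(loadings)[:15]       # smallest loadings -> suspected users
    print(sorted(suspects))                    # handed to the data-complexity stage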
Code churn has been successfully used to identify defect-inducing changes in software development. Our recent analysis of cross-release code churn showed that several design metrics exhibit moderate correlation with the number of defects in complex systems. The goal of this paper is to explore whether cross-release code churn can be used to identify critical design changes and contribute to defect prediction for evolving software. In our case study, we used two types of data from consecutive releases of open-source projects, with and without cross-release code churn, to build standard prediction models. The models were trained on earlier releases and tested on the following ones, with performance evaluated in terms of AUC, GM, and the effort-aware measure Pop; the comparison of their performance was used to answer our research question. The results showed that the prediction model performs better when cross-release code churn is included. The practical implication of this research is to use cross-release code churn to aid safe planning of the next release in software development.
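As a sketch of the predictor itself, cross-release churn for a module can be approximated as added plus deleted lines between releases, normalized by size; this definition and the numbers below are illustrative assumptions rather than the paper's exact metric.

    def cross_release_churn(added, deleted, loc):
        # Relative churn of one module between release N and N+1.
        return (added + deleted) / max(loc, 1)

    modules = {"core/io.c": (120, 40, 800), "net/tcp.c": (5, 2, 1500)}  # toy data
    print({m: cross_release_churn(a, d, n) for m, (a, d, n) in modules.items()})
    # these values join the design metrics as inputs to the prediction model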
Lately, we are facing a malware crisis due to the various types of malware, malicious programs, and scripts circulating in the huge virtual world of the Internet. But what is malware? Malware is malicious software, a program, or a script that can harm a user's computer. Such programs can perform a variety of functions, including stealing, encrypting, or deleting sensitive data, altering or hijacking core computing functions, and monitoring users' computer activity without their permission. There are many entry points for these programs and scripts into the user environment, but the only way to remove them is to find them and expel them from the system, which is not easy because these small pieces of script or code can be anywhere in the user's system. This paper covers the different types of malware and how machine learning can be used to detect them.
In recent years, deep convolutional neural networks (DCNNs) have won many contests in machine learning, object detection, and pattern recognition, and deep learning techniques have achieved exceptional performance in image classification, reaching accuracy levels beyond human capability. Malware variants from similar categories often contain similarities due to code reuse, and converting malware samples into images can cause these patterns to manifest as image features that can be exploited for DCNN classification. Techniques for converting malware binaries into images for visualization and classification have been reported in the literature, and while these methods do reach high classification accuracy on training datasets, they tend to be vulnerable to overfitting and perform poorly on previously unseen samples. In this paper, we explore and document a variety of techniques for representing malware binaries as images, with the goal of discovering the format best suited for deep learning. We implement a database of malware binaries from several families, stored in hexadecimal format. These samples are converted into images using various approaches and used to train a neural network to recognize visual patterns in the input and classify malware based on the feature vectors. Each image type is assessed using a variety of learning models, such as transfer learning with existing DCNN architectures and feature extraction for support vector machine classifier training. Each technique is evaluated in terms of classification accuracy, result consistency, and time per trial. Our preliminary results indicate that improved image representation has the potential to enable more effective classification of new malware.
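A commonly reported conversion treats the raw bytes as 8-bit grayscale pixels wrapped at a fixed width; the sketch below follows that idea, with the width, file name, and target size as assumptions.

    import numpy as np
    from PIL import Image

    def bytes_to_image(path, width=256):
        data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
        rows = len(data) // width
        return Image.fromarray(data[: rows * width].reshape(rows, width))  # grayscale

    img = bytes_to_image("sample.bin")          # hypothetical malware binary
    img.resize((224, 224)).save("sample.png")   # fixed size for DCNN input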
The popularity of Android, not only in handsets but also in IoT devices, makes it a very attractive target for malware threats, which are expanding at a significant rate. The state of the art in malware mitigation mainly focuses on detecting malicious Android apps using dynamic and static analysis features to segregate malicious apps from benign ones. Nevertheless, the Internet/network dimension of malicious Android apps has received little coverage. In this paper, we present ToGather, an automatic investigation framework that takes Android malware samples as input and produces insights into the underlying malicious cyber infrastructures. ToGather leverages state-of-the-art graph theory techniques to generate actionable, relevant, and granular intelligence to mitigate the threat effects induced by the malicious Internet activity of Android malware apps. We evaluate ToGather on a large dataset of real malware samples from various Android families, and the obtained results are both interesting and promising.
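The graph step can be pictured by linking samples to the network endpoints they contact and reading off connected components as candidate shared infrastructures; the data and the use of plain connected components (rather than ToGather's full technique set) are assumptions for illustration.

    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([("sample_a", "evil.example.com"), ("sample_a", "10.0.0.5"),
                      ("sample_b", "10.0.0.5"), ("sample_c", "other.example.org")])

    for component in nx.connected_components(G):
        print(sorted(component))   # each component = one candidate infrastructure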
MANETs are vulnerable to many attacks, such as black hole, wormhole, jellyfish, and DoS. Attackers can easily launch a wormhole attack by faking a route shorter than the original one within the network. In this paper, we propose an algorithm based on the statistical measure of absolute deviation (AD) to avoid and prevent wormhole attacks. Absolute deviation covariance and correlation take less time to detect a wormhole attack than the classical ones, and the proposed algorithm needs no extra equipment such as GPS. Wormhole attackers create a fake tunnel from origin to destination, a link with a very good frequency level; this creates the false impression that the source and destination are very near each other and that the path will take less time, whereas the original path takes more. It is therefore necessary to calculate the time taken in order to avoid and prevent the attack. Simulation of the wormhole attack in MATLAB shows that the absolute deviation technique performs better than AODV. The packet drop pattern under wormholes is also measured using the absolute deviation correlation coefficient.
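One plausible formulation of an absolute deviation correlation coefficient (an assumption on our part, not necessarily the paper's exact definition) replaces standard deviations with mean absolute deviations; applied to delay-versus-hop-count series, a wormhole tunnel whose delay barely grows with the apparent route length would show a much weaker association than a normal route.

    import numpy as np

    def ad_correlation(x, y):
        # Correlation normalized by mean absolute deviations instead of std devs.
        dx, dy = x - np.mean(x), y - np.mean(y)
        return np.mean(dx * dy) / (np.mean(np.abs(dx)) * np.mean(np.abs(dy)))

    hops = np.array([1, 2, 3, 4, 5], dtype=float)   # apparent route lengths
    delay = np.array([3.1, 6.2, 9.0, 12.2, 15.1])   # ms; grows with hops: normal
    print(ad_correlation(delay, hops))              # clearly positive for a normal
    # route; a tunnel's near-flat delays would give a much weaker value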
To combat cyber crime, electronic evidence has played an increasing role, but in judicial practice electronic evidence is not widely applied because of the natural contradiction between the epistemic uncertainty of electronic evidence and the judge's principle of discretionary evidence in court. In this paper, we put forward a layered method to analyze the relevancy of electronic evidence and discuss the analytical process through a case study. Initial practice shows that the model is feasible and has consulting value in analyzing the relevancy of electronic evidence.
The Internet of Things (IoT) is a distributed, networked system composed of many embedded sensor devices. Unfortunately, these devices are resource constrained and susceptible to malicious data-integrity attacks and failures, leading to unreliability and sometimes to major failure of parts of the entire system. Intrusion detection and failure handling are essential requirements for IoT security. Nevertheless, as far as we know, the area of data-integrity detection for IoT has yet to receive much attention. Most previous intrusion-detection methods proposed for IoT, particularly for wireless sensor networks (WSNs), focus only on specific types of network attacks. Moreover, these approaches usually rely on precise values to specify abnormality thresholds; however, sensor readings are often imprecise, and crisp threshold values are inappropriate. To guarantee a lightweight, dependable monitoring system, we propose a novel hierarchical framework for detecting abnormal nodes in WSNs. The proposed approach uses fuzzy logic in event-condition-action (ECA) rule-based WSNs to detect malicious nodes, while also considering failed nodes. The spatiotemporal semantics of heterogeneous sensor readings are considered in the decision process to distinguish malicious data from other anomalies. Following our experiments with the proposed framework, we stress the significance of considering sensor correlations to achieve detection accuracy, which has been neglected in previous studies. Our experiments using real-world sensor data demonstrate that our approach can provide high detection accuracy with low false-alarm rates. We also show that our approach performs well when compared to two well-known classification algorithms.
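The fuzzy step can be pictured with a triangular membership function grading how abnormal a reading is, so that an ECA rule fires on a degree rather than a crisp threshold; all breakpoints and the activation level below are illustrative assumptions.

    def triangular(x, a, b, c):
        # Membership rising on [a, b] and falling on [b, c].
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def abnormal_degree(temp_c):
        return triangular(temp_c, 35.0, 50.0, 65.0)   # assumed "too hot" fuzzy set

    if abnormal_degree(47.0) > 0.6:                   # ECA rule with fuzzy condition
        print("action: flag node and cross-check spatially correlated neighbors")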
The security level is very important in Bluetooth, because networks and devices relying on secure communication are susceptible to many attacks against the transmitted data through eavesdropping. Cryptosystem designers need to know the complexity of the Bluetooth E0 design and the advantages offered by any development of a known Bluetooth E0 encryption method; choosing the most important criteria for the evaluation method is therefore an important aspect. This paper introduces a fuzzy logic technique for evaluating the complexity of the Bluetooth E0 encryption system by choosing two parameters, the entropy and the correlation rate, as inputs to the proposed fuzzy-logic-based evaluator, which can be implemented in MATLAB.
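The two evaluator inputs can be sketched as the Shannon entropy of the keystream bytes and a simple lag-1 correlation rate of the bit sequence; the sample keystream is made up, and the paper's exact correlation measure may differ.

    import math
    from collections import Counter

    def shannon_entropy(data):
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

    def lag1_correlation(bits):
        # fraction of adjacent bit pairs that agree (0.5 is ideal for a keystream)
        return sum(a == b for a, b in zip(bits, bits[1:])) / (len(bits) - 1)

    keystream = bytes([0x3a, 0x91, 0x5c, 0x3a, 0xe7, 0x08, 0x91, 0x44])  # made up
    bits = [b >> i & 1 for b in keystream for i in range(8)]
    print(shannon_entropy(keystream), lag1_correlation(bits))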
Malicious traffic has garnered more attention in recent years owing to the rapid growth of information technology. In 2007 alone, malware attacks caused an estimated loss of 13 billion dollars. Malware data in today's context is massive, and understanding such information using primitive methods would be a tedious task. In this publication we apply two established machine learning techniques, the multilayer perceptron (MLP) and J48 (an implementation of the C4.5 decision tree, a successor of ID3), to our selected dataset, Advanced Security Network Metrics & Non-Payload-Based Obfuscations (ASNM-NPBO), to show that the answer to managing cyber security threats lies in these methodologies.
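The two models can be reproduced in outline with scikit-learn, with synthetic data standing in for the ASNM-NPBO features and sklearn's CART decision tree standing in for Weka's J48; this is a hedged sketch, not the study's exact setup.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=600, n_features=20, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    for model in (MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                                random_state=0),
                  DecisionTreeClassifier(random_state=0)):   # stand-in for J48
        print(type(model).__name__, model.fit(Xtr, ytr).score(Xte, yte))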
Moving Target Defence (MTD) is a recently proposed, emerging proactive approach that provides asynchronous defensive strategies. Unlike traditional security solutions that focus on removing vulnerabilities, MTD makes a system dynamic and unpredictable by continuously changing the attack surface to confuse attackers, and it can be utilized in cloud computing to address the cloud's security-related problems. Much of the literature proposes MTD methods in various contexts, but approaches to evaluate the effectiveness of a proposed MTD method are still lacking. In this paper, we propose a combination of Shuffle and Diversity MTD techniques and investigate the effects of deploying them from two perspectives corresponding to two groups of security metrics: (i) system risk, the cloud provider's perspective, and (ii) attack cost and return on attack, the attacker's point of view. Moreover, we utilize a scalable Graphical Security Model (GSM) to manage the complexity of the security analysis. Finally, we show that combining the MTD techniques improves both groups of security metrics, while the individual techniques cannot.
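A toy sketch of the Shuffle technique follows: the mapping of services to hosts is re-randomized each epoch so that an attacker's reconnaissance goes stale, while the differing OS variants per host hint at the Diversity dimension; the hosts, services, and epochs are made-up examples, not the paper's deployment model.

    import random

    services = ["web", "db", "auth"]
    hosts = [{"name": f"vm{i}", "os": os}                  # OS diversity per host
             for i, os in enumerate(["linux", "bsd", "windows"])]

    def shuffle_round(seed):
        rng = random.Random(seed)
        placement = dict(zip(services, rng.sample(hosts, len(services))))
        return {s: (h["name"], h["os"]) for s, h in placement.items()}

    for epoch in range(3):         # each epoch invalidates the previous mapping
        print(epoch, shuffle_round(epoch))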
To identify potential risks to system security presented by time series, we propose using wavelet analysis, an indicator of time-frequency distribution, and correlation analysis of wavelet spectra to obtain a reasonably complete range of data about the studied process. The proposed indicator of the time-frequency localization of a time series makes it possible to estimate the speed of non-stationary changes. The combined approach of wavelet analysis, time-frequency distribution of the time series, and wavelet spectra correlation analysis yields complete information about the studied phenomenon, both numerically and as visualizations, for identifying and predicting potential system security threats.
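A minimal sketch with PyWavelets shows the idea: the continuous wavelet transform of a monitored series localizes a non-stationary burst (a potential security event) in both time and scale; the signal and the Morlet wavelet choice are assumptions.

    import numpy as np
    import pywt

    t = np.linspace(0, 1, 512)
    signal = np.sin(2 * np.pi * 8 * t)
    signal[300:320] += 2.0                          # injected non-stationary burst

    coeffs, freqs = pywt.cwt(signal, scales=np.arange(1, 64), wavelet="morl")
    peak = np.unravel_index(np.abs(coeffs).argmax(), coeffs.shape)
    print("strongest response at scale index", peak[0], "time index", peak[1])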
Deep neural networks (DNNs) are effective machine learning models for solving a large class of recognition problems, including the classification of nonlinearly separable patterns. The applications of DNNs are, however, limited by the large size and high energy consumption of the networks. Recently, stochastic computation (SC) has been considered for implementing DNNs to reduce hardware cost, but it requires a large number of random number generators (RNGs), which lowers the energy efficiency of the network. To overcome these limitations, we propose the design of an energy-efficient deep belief network (DBN) based on stochastic computation. An approximate SC activation unit (A-SCAU) is designed to implement different types of activation functions in the neurons. The A-SCAU is immune to signal correlations, so the RNGs can be shared among all neurons in the same layer with no accuracy loss. The area and energy of the proposed design are 5.27% and 3.31% (or 26.55% and 29.89%) of a 32-bit floating-point (or an 8-bit fixed-point) implementation. The proposed SC-DBN design achieves higher classification accuracy than the fixed-point implementation, and its accuracy is only 0.12% lower than that of the floating-point design at a similar computation speed, with significantly lower energy consumption.
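The underlying SC principle (a textbook illustration, not the A-SCAU circuit itself) is that a value in [0, 1] is encoded as a random bitstream whose fraction of ones equals the value, and a single AND gate multiplies two independent streams; the comment on RNG independence mirrors the correlation issue the proposed design addresses.

    import random

    def to_stream(p, n, rng):
        # Unipolar SC encoding: P(bit = 1) equals the represented value p.
        return [1 if rng.random() < p else 0 for _ in range(n)]

    def sc_multiply(p1, p2, n=10_000):
        # Independent RNGs matter: correlated streams would skew the AND result.
        rng1, rng2 = random.Random(1), random.Random(2)
        s1, s2 = to_stream(p1, n, rng1), to_stream(p2, n, rng2)
        return sum(a & b for a, b in zip(s1, s2)) / n

    print(sc_multiply(0.5, 0.8))    # approaches 0.4 as the stream length grows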