Biblio
Organizations are exposed to various cyber-attacks. When a component is exploited, the overall computed damage is affected by the number of components the network includes. This work focuses on estimating the Target Distribution characteristic of an attacked network. According to existing security assessment models, Target Distribution is assessed using ordinal values based on users' intuitive knowledge. This work aims to define a formula that enables quantitative measurement of the distribution of attacked components. The proposed formula is based on the real-time configuration of the system. Using the proposed measure, firms can quantify damages, allocate appropriate budgets to actual risks, and build their configuration while taking into consideration the risks affected by components' distribution. The formula is demonstrated as part of a continuous security monitoring system.
Along with the growing popularity of cloud computing, cloud storage technology has received more and more attention as an emerging network storage technology extended and developed from cloud computing concepts. The cloud computing environment depends on user services such as high-speed storage and retrieval provided by the cloud computing system. Meanwhile, data security is an urgent problem to solve for cloud storage technology. In recent years, there have been more and more malicious attacks on cloud storage systems, and data leaks from cloud storage systems have also occurred frequently. Cloud storage security concerns the user's data security. The purpose of this paper is to achieve data security for cloud storage and to formulate a corresponding cloud storage security policy. Drawing on the results of existing academic research, we analyze the security risks to user data in cloud storage and examine the relevant security technologies, based on the structural characteristics of cloud storage systems.
With the continuous development of mobile wireless technologies, Bluetooth plays a vital role in the smartphone era. In this scenario, the security measures for Bluetooth need to be enhanced. We propose a Node Energy Based Virus propagation model (NBV) for Bluetooth. The algorithm works with the key features of node capacity and node energy in a Bluetooth network. The proposed NBV model works along with the e-mail worm propagation model. Finally, this work simulates and compares virus propagation with respect to node energy and network traffic.
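The abstract does not give the NBV model's equations, so the following Python fragment is only a hypothetical toy illustration of energy-dependent propagation: each node has an energy budget, and an infected node can only attempt to infect a contact while it has energy remaining. The topology, update rule, and all parameters are illustrative assumptions, not the authors' model.

```python
import random

# Toy sketch of energy-dependent virus propagation. NOT the authors'
# NBV model: topology, energy rule, and parameters are assumptions.
def simulate(num_nodes=50, init_energy=10, infect_prob=0.3, steps=100, seed=1):
    rng = random.Random(seed)
    energy = [init_energy] * num_nodes        # per-node energy budget
    infected = {0}                            # node 0 starts infected
    for _ in range(steps):
        for node in list(infected):
            if energy[node] <= 0:             # drained nodes stop spreading
                continue
            energy[node] -= 1                 # each contact attempt costs energy
            target = rng.randrange(num_nodes) # random contact (toy topology)
            if target not in infected and rng.random() < infect_prob:
                infected.add(target)
        yield len(infected)

for step, count in enumerate(simulate()):
    if step % 20 == 0:
        print(f"step {step}: {count} infected")
```

Because each infection attempt drains energy, the spread saturates once early-infected nodes are exhausted, which is the qualitative effect a node-energy model captures.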
Deep Neural Networks (DNNs) have recently become the “de facto” technique driving the artificial intelligence (AI) industry. However, many security issues also emerge as DNN-based intelligent systems become increasingly prevalent. Existing DNN security studies, such as adversarial attacks and poisoning attacks, are usually narrowly conducted at the software algorithm level, with misclassification as their primary goal. The more realistic system-level attacks introduced by the emerging intelligent service supply chain, e.g. third-party cloud-based machine learning as a service (MLaaS) along with portable DNN computing engines, have never been discussed. In this work, we propose a low-cost modular methodology, Stealth Infection on Neural Network (“SIN2”), to demonstrate novel and practical neural Trojan attacks triggered by the intelligent supply chain. SIN2 leverages the attack opportunities built upon the static neural network model and the underlying dynamic runtime system of the neural computing framework through a set of neural Trojaning techniques. We implement a variety of neural Trojan attacks in a Linux sandbox following the proposed SIN2 methodology. Experimental results show that our modular design can rapidly produce and trigger various Trojan attacks that can easily evade existing defenses.
The heterogeneous SIS model for virus spread in any finite-size graph characterizes the influence of the SIS model's factors and can be analyzed with the extended N-Intertwined model introduced in [1]. In this paper, we focus specifically on heterogeneous virus spread in the star network. The epidemic threshold and the average meta-stable-state fraction of infected nodes are derived for virus spread in the star network. Our results illustrate the effect of the SIS model's factors on the steady-state infection.
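For orientation, the homogeneous special case of the N-Intertwined (NIMFA) model has a simple closed form for the star: the epidemic threshold is the inverse of the largest adjacency eigenvalue, which for a star with N leaves is sqrt(N). The lines below state this standard homogeneous result, not the paper's heterogeneous derivation.

```latex
% Homogeneous NIMFA special case (standard result, not the paper's
% heterogeneous derivation); \tau = \beta/\delta is the effective
% infection rate.
\tau_c = \frac{1}{\lambda_{\max}(A)}, \qquad
\lambda_{\max}(A_{\mathrm{star}}) = \sqrt{N}
\quad\Longrightarrow\quad
\tau_c^{\mathrm{star}} = \frac{1}{\sqrt{N}}
```

For effective infection rates above this threshold a nonzero meta-stable fraction of infected nodes persists; heterogeneous per-node rates shift the threshold, which is what the paper derives.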
A mobile ad hoc network (MANET) is a type of network that consists of autonomous nodes connecting directly, without a top-down network architecture or central controller. The absence of base stations in a MANET forces the nodes to rely on their adjacent nodes for transmitting messages. The dynamic nature of a MANET makes the relationships between nodes untrusted, due to node mobility. A malicious node may launch a denial-of-service attack at the network layer by discarding packets instead of forwarding them to the destination, which is known as a black hole attack. In this paper, a secure and trust-based approach based on the ad hoc on-demand distance vector protocol (STAODV) is proposed to improve the security of the AODV routing protocol. The approach isolates malicious nodes that try to attack the network, based on their previous information. A trust level is attached to each participating node to capture that node's level of trust. Each incoming packet is examined to prevent black hole attacks.
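The abstract does not specify the trust update rule, so the Python sketch below is only a generic illustration of the idea: each node keeps a trust level per neighbor, lowers it when dropping is observed, and ignores neighbors below a threshold. The update amounts and threshold are assumptions, not the exact STAODV mechanism.

```python
# Generic sketch of per-neighbor trust for black hole mitigation.
# Update rule and threshold are illustrative assumptions.
class TrustTable:
    def __init__(self, initial=0.5, threshold=0.3):
        self.trust = {}            # neighbor id -> trust level in [0, 1]
        self.initial = initial
        self.threshold = threshold

    def observe(self, neighbor, forwarded):
        """Reward observed forwarding, penalize observed dropping."""
        t = self.trust.get(neighbor, self.initial)
        t = min(1.0, t + 0.05) if forwarded else max(0.0, t - 0.2)
        self.trust[neighbor] = t

    def is_malicious(self, neighbor):
        return self.trust.get(neighbor, self.initial) < self.threshold

table = TrustTable()
for _ in range(3):                 # neighbor 7 repeatedly drops packets
    table.observe(7, forwarded=False)
print(table.is_malicious(7))       # True: traffic from node 7 is now ignored
```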
Physically unclonable functions (PUFs) are used to uniquely identify electronic devices. Here, we introduce a hybrid silicon CMOS-nanotube PUF circuit that uses the variations of nanotube transistors to generate a random response. An analog silicon circuit subsequently converts the nanotube response to zero or one bits. We fabricate an array of nanotube transistors to study and model their device variability. The behavior of the hybrid CMOS-nanotube PUF is then simulated. The parameters of the analog circuit are tuned to achieve the desired normalized Hamming inter-distance of 0.5. The co-design of the nanotube array and the silicon CMOS is an attractive feature for increasing the immunity of the hybrid PUF against unauthorized duplication. The heterogeneous integration of nanotubes with silicon CMOS offers a new strategy for realizing security tokens that are strong, low-cost, and reliable.
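The normalized Hamming inter-distance targeted here is a standard PUF uniqueness metric: the fraction of differing response bits, averaged over all pairs of devices, with 0.5 as the ideal value. A minimal Python computation (the bit strings are placeholders, not measured responses):

```python
from itertools import combinations

# Standard PUF uniqueness metric: average pairwise fraction of
# differing response bits; the ideal value is 0.5.
def normalized_hamming_inter_distance(responses):
    pairs = list(combinations(responses, 2))
    total = 0.0
    for r1, r2 in pairs:
        total += sum(b1 != b2 for b1, b2 in zip(r1, r2)) / len(r1)
    return total / len(pairs)

# Placeholder responses from three hypothetical PUF instances.
responses = ["10110010", "01100110", "11010001"]
print(normalized_hamming_inter_distance(responses))
```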
This paper outlines the IoT Databox model as a means of making the Internet of Things (IoT) accountable to individuals. Accountability is key to building consumer trust and is mandated in data protection legislation. We briefly outline the 'external' data subject accountability requirement specified in actual legislation in Europe and proposed legislation in the US, and how meeting this requirement turns on surfacing the invisible actions and interactions of connected devices and the social arrangements in which they are embedded. The IoT Databox model is proposed as an in-principle means of enabling accountability and providing individuals with the mechanisms needed to build trust in the IoT.
Intrusion detection systems do not perform well when it comes to detecting zero-day attacks, so improving their performance in that regard is an active research topic. In this study, to detect zero-day attacks with high accuracy, we propose two deep learning based anomaly detection models, using an autoencoder and a denoising autoencoder, respectively. The key factor that directly affects the accuracy of the proposed models is the threshold value, which was determined using a stochastic approach rather than the approaches available in the current literature. The proposed models were tested using the KDDTest+ dataset contained in NSL-KDD, achieving accuracies of 88.28% and 88.65%, respectively. The obtained results show that, as singular models, our proposed anomaly detection models outperform any other singular anomaly detection method and perform almost as well as the newly suggested hybrid anomaly detection models.
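As a minimal sketch of the general technique (not the authors' architecture or their stochastic threshold procedure), the PyTorch fragment below trains an autoencoder on normal traffic only and flags records whose reconstruction error exceeds a threshold derived from the training-error distribution; the mean-plus-k-sigma rule here merely stands in for the paper's approach, and the layer sizes and data are placeholders.

```python
import torch
import torch.nn as nn

# Minimal autoencoder sketch; layer sizes and the threshold rule are
# illustrative assumptions, not the paper's exact configuration.
class AE(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 8))
        self.dec = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, dim))

    def forward(self, x):
        return self.dec(self.enc(x))

def fit_and_threshold(x_normal, epochs=50, k=3.0):
    model, loss_fn = AE(x_normal.shape[1]), nn.MSELoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):                       # train on normal data only
        opt.zero_grad()
        loss = loss_fn(model(x_normal), x_normal)
        loss.backward()
        opt.step()
    with torch.no_grad():                         # per-record training errors
        err = ((model(x_normal) - x_normal) ** 2).mean(dim=1)
    return model, (err.mean() + k * err.std()).item()

x_normal = torch.rand(256, 41)                    # stand-in for NSL-KDD features
model, thr = fit_and_threshold(x_normal)
x_test = torch.rand(10, 41)
with torch.no_grad():
    scores = ((model(x_test) - x_test) ** 2).mean(dim=1)
print(scores > thr)                               # True marks an anomaly
```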
Anomaly detection for cyber-security defence has garnered much attention in recent years, providing an orthogonal approach to traditional signature-based detection systems. Anomaly detection relies on building probability models of normal computer network behaviour and detecting deviations from the model. Most data sets used for cyber-security have a mix of user-driven events and automated network events, which most often appear as polling behaviour. Separating these automated events from those caused by human activity is essential to building good statistical models for anomaly detection. This article presents a changepoint detection framework for identifying automated network events appearing as periodic subsequences of event times. The opening event of each subsequence is interpreted as a human action which then generates an automated, periodic process. Difficulties arising from the presence of duplicate and missing data are addressed. The methodology is demonstrated using authentication data from Los Alamos National Laboratory's enterprise computer network.
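The changepoint framework itself is beyond a few lines, but the core intuition (polling produces near-constant inter-arrival times) can be shown with a simple heuristic in Python; the coefficient-of-variation test below is a simplification of, not a substitute for, the paper's methodology, and the cutoff is an assumption.

```python
import numpy as np

# Simplified heuristic: automated polling yields near-constant gaps
# between events, i.e. a low coefficient of variation.
def looks_automated(event_times, cv_cutoff=0.1):
    gaps = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    if len(gaps) < 2 or gaps.mean() == 0:
        return False
    return gaps.std() / gaps.mean() < cv_cutoff

polling = [10.0, 70.0, 130.1, 189.9, 250.0]   # ~60 s polling process
human = [10.0, 12.5, 95.0, 140.0, 141.2]      # bursty human actions
print(looks_automated(polling), looks_automated(human))  # True False
```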
Recently, Jung et al. [1] proposed a data access privilege scheme and claimed that their scheme addresses data and identity privacy as well as multi-authority, and provides data access privilege for attribute-based encryption. In this paper, we show that this scheme, and also its former and latest versions (i.e. [2] and [3], respectively), suffer from a number of weaknesses in terms of fine-grained access control, user and authority collusion attacks, user authorization, and user anonymity protection. We then propose a new scheme that overcomes these shortcomings. We also prove the security of our scheme against user collusion attacks, authority collusion attacks, and chosen plaintext attacks. Lastly, we show that the efficiency of our scheme is comparable with existing related schemes.
Blockchain technology has emerged as an attractive solution to address performance and security issues in distributed systems. Blockchain's public and distributed peer-to-peer ledger capability benefits cloud computing services that require functions such as assured data provenance, auditing, management of digital assets, and distributed consensus. Blockchain's underlying consensus mechanism makes it possible to build a tamper-proof environment, where transactions on any digital assets are verified by a set of authentic participants, or miners. With the use of strong cryptographic methods, blocks of transactions are chained together to make the records immutable. However, achieving consensus demands computational power from the miners in exchange for a handsome reward. Therefore, greedy miners always try to exploit the system by augmenting their mining power. In this paper, we first discuss blockchain's capability to provide assured data provenance in the cloud and present vulnerabilities in the blockchain cloud. We model the block withholding (BWH) attack in a blockchain cloud considering distinct pool reward mechanisms. The BWH attack provides the rogue miner ample resources in the blockchain cloud for disrupting honest miners' mining efforts, which we verified through simulations.
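As one way to see why withholding pays, the Python fragment below reproduces a standard revenue model for a BWH attacker under proportional pool rewards (in the spirit of prior BWH analyses; the paper's specific reward mechanisms may differ): the attacker mines honestly with part of its power and, with the rest, submits only shares to a victim pool while withholding full blocks.

```python
# Standard proportional-reward BWH revenue model (in the spirit of
# prior BWH analyses; the paper's pool mechanisms may differ).
# a: attacker's total mining power, p: victim pool's honest power,
# x: attacker power infiltrated into the pool (shares only, blocks withheld).
def bwh_revenue(a, p, x):
    direct = (a - x) / (1 - x)            # honest mining; withheld power
                                          # shrinks effective network power
    pool = (p / (1 - x)) * (x / (p + x))  # attacker's cut of pool rewards
    return direct + pool

a, p, x = 0.20, 0.30, 0.05
print(bwh_revenue(a, p, x), ">", a)       # ~0.203 > 0.200: withholding pays
```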
Many popular web applications incorporate end-to-end secure messaging protocols, which seek to ensure that messages sent between users are kept confidential and authenticated, even if the web application's servers are broken into or otherwise compelled into releasing all their data. Protocols that promise such strong security guarantees should be held up to rigorous analysis, since protocol flaws and implementation bugs can easily lead to real-world attacks. We propose a novel methodology that allows protocol designers, implementers, and security analysts to collaboratively verify a protocol using automated tools. The protocol is implemented in ProScript, a new domain-specific language that is designed for writing cryptographic protocol code that can both be executed within JavaScript programs and automatically translated to a readable model in the applied pi calculus. This model can then be analyzed symbolically using ProVerif to find attacks in a variety of threat models. The model can also be used as the basis of a computational proof using CryptoVerif, which reduces the security of the protocol to standard cryptographic assumptions. If ProVerif finds an attack, or if the CryptoVerif proof reveals a weakness, the protocol designer modifies the ProScript protocol code and regenerates the model to enable a new analysis. We demonstrate our methodology by implementing and analyzing a variant of the popular Signal Protocol with only minor differences. We use ProVerif and CryptoVerif to find new and previously-known weaknesses in the protocol and suggest practical countermeasures. Our ProScript protocol code is incorporated within the current release of Cryptocat, a desktop secure messenger application written in JavaScript. Our results indicate that, with disciplined programming and some verification expertise, the systematic analysis of complex cryptographic web applications is now becoming practical.
Artificial neural networks are complex, biologically inspired algorithms made up of highly distributed, adaptive and self-organizing structures, which makes them suitable for optimization problems. They consist of a group of interconnected nodes, similar to the great networks of neurons in the human brain. So far, artificial neural networks have not been applied to user modeling in multi-criteria recommender systems. This paper presents a neural network-based user modeling technique that exploits some of the characteristics of biological neurons to improve the accuracy of multi-criteria recommendations. The study was based on the aggregation-function approach, which computes the overall rating as a function of the criteria ratings. The proposed technique was evaluated using different evaluation metrics, and the empirical results of the experiments were compared with those of single-rating collaborative filtering and two other similarity-based modeling approaches. The two similarity-based techniques used are the worst-case and the average similarity techniques. The results of the comparative analysis show that the proposed technique is more efficient than the two similarity-based techniques and the single-rating collaborative filtering technique.
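A minimal scikit-learn sketch of the aggregation-function idea (the network size, synthetic data, and criteria names are placeholders, not the paper's configuration): a small neural network learns to map a user's criteria ratings to the overall rating.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Aggregation-function sketch: learn overall = f(criteria ratings).
# Network size and synthetic data are illustrative assumptions.
rng = np.random.default_rng(0)
criteria = rng.uniform(1, 5, size=(500, 4))          # e.g. story, acting, visuals, value
overall = criteria @ np.array([0.4, 0.3, 0.2, 0.1])  # hidden ground-truth aggregation

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(criteria[:400], overall[:400])
print(model.score(criteria[400:], overall[400:]))    # R^2 on held-out ratings
```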
In cloud storage systems, users can upload their data along with associated tags (authentication information) to cloud storage servers. To ensure the availability and integrity of the outsourced data, provable data possession (PDP) schemes convince verifiers (users or third parties) that the outsourced data stored in the cloud storage server is correct and unchanged. Recently, several PDP schemes with a designated verifier (DV-PDP) were proposed to provide the flexibility of an arbitrary designated verifier. A designated verifier (private verifier) is trustable and designated by a user to check the integrity of the outsourced data. However, these DV-PDP schemes are either inefficient or insecure under some circumstances. In this paper, we propose the first non-repudiable PDP scheme with a designated verifier (DV-NRPDP) to address the non-repudiation issue and resolve possible disputes between users and cloud storage servers. We define the system model, framework and adversary model of DV-NRPDP schemes. Afterward, a concrete DV-NRPDP scheme is presented. Based on the hardness of computing discrete logarithms, we formally prove that the proposed DV-NRPDP scheme is secure against several forgery attacks in the random oracle model. Comparisons with previously proposed schemes are given to demonstrate the advantages of our scheme.
The continuous advance in recent cloud-based computer networks has generated a number of security challenges associated with intrusions in network systems. With the exponential increase in the volume of network traffic data, involving humans in such detection systems is time-consuming and a non-trivial problem. Secondly, network traffic data tends to be highly dimensional, comprising numerous features and attributes, which makes classification challenging and thus susceptible to the curse of dimensionality. Given such scenarios, the need arises for dimensionality reduction and feature selection, combined with machine-learning techniques, in the classification of such data. As a contribution, this paper therefore seeks to employ data mining techniques in a cloud-based environment by selecting attributes and features with the least importance in terms of weight for the classification. The standard practice is to select features with better weights while ignoring those with the least weights. In this study, we seek to find out whether we can make predictions using those features with the least weights. The motivation is that adversaries use stealth to hide their activities from the obvious. The question then is: can we predict any stealth activity of an adversary using the least observed attributes? In this particular study, we employ information gain to select the attributes with the lowest weights and then apply machine learning to classify whether a combination, in this case of both source and destination ports, is attacked or not. Our preliminary results show that even when the source and destination port attributes are used in combination with the features with the least weights, it is possible to classify such network traffic data and predict whether an attack will occur or not.
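A minimal sketch of the selection step with scikit-learn (the dataset and classifier are placeholders, not the paper's setup): mutual information approximates information gain, and here the lowest-scoring features are deliberately kept rather than the usual highest-scoring ones.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for labeled network traffic records.
X, y = make_classification(n_samples=1000, n_features=30, random_state=0)

# Information gain (approximated by mutual information) per feature,
# then keep the k LOWEST-scoring features, inverting the usual practice.
scores = mutual_info_classif(X, y, random_state=0)
lowest = np.argsort(scores)[:10]
acc = cross_val_score(RandomForestClassifier(random_state=0), X[:, lowest], y, cv=5)
print(acc.mean())   # how predictive the "least important" features are
```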
Modeling and simulation of real-world environments has been widely used in recent times. Modeling makes it easier to examine environments that are difficult to examine directly. The parameter spaces of the modeled systems, and the values the parameters can take, are quite large; manual tuning is tedious and requires a lot of effort, and it is often almost impossible to get the desired results. For this reason, the parameter space needs to be tuned automatically. Reviewing the studies conducted in recent years, it can be observed that there are few studies on the parameter tuning problem in modeling and simulation. In this study, work has been done to solve the parameter tuning problem with the swarm intelligence optimization algorithms Particle Swarm Optimization and the Firefly algorithm. The performance of these algorithms in the parameter tuning process has been tested on two different agent-based model studies. The performance of the algorithms has been assessed by manually entering the parameters found into the models. According to the obtained results, the Firefly algorithm finds better parameter values, while the Particle Swarm Optimization algorithm runs faster. With this study, the parameter tuning problem of models in different fields was addressed.
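A minimal Particle Swarm Optimization loop for parameter tuning in Python (standard textbook form; the quadratic objective is a stand-in for "run the agent-based model and measure error", and the inertia/attraction coefficients are common defaults, not the study's settings):

```python
import numpy as np

# Textbook PSO; the objective stands in for running the agent-based
# model, and the coefficients are common defaults.
def pso(objective, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n, dim))
    vel = np.zeros((n, dim))
    pbest, pbest_val = pos.copy(), np.apply_along_axis(objective, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.apply_along_axis(objective, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Stand-in objective: distance of simulation parameters from an "ideal" point.
print(pso(lambda p: np.sum((p - 2.0) ** 2), dim=3))  # converges near [2, 2, 2]
```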
Nowadays, an increasing number of IoT vendors have compiled and deployed third-party code bases across different architectures. Therefore, to prevent firmware from being affected by the same known vulnerabilities, searching for known vulnerabilities in binary firmware across different architectures is more crucial than ever. However, most existing vulnerability search methods are limited to a single architecture; there are only a few studies on cross-architecture cases, and their accuracy is not high. In this paper, to improve the accuracy of existing cross-architecture vulnerability search methods, we propose a new approach based on Support Vector Machines (SVM) and Attributed Control Flow Graphs (ACFG) to search for known vulnerabilities in firmware across different architectures at the function level. We employ a known vulnerable function to recognize suspicious functions in other binary firmware. First, considering the internal and external characteristics of the functions, we extract the function-level and basic-block-level features of the functions to be inspected. Second, we employ SVM to recognize a small set of suspicious functions based on the function-level features. After this preliminary screening, we compute the graph similarity between the vulnerable function and the suspicious functions based on their ACFGs. We have implemented our approach, CVSSA, and employed the training samples to train the model with prior knowledge to improve the accuracy. We also search for several vulnerabilities in real-world firmware images; the experimental results show that CVSSA can be applied in realistic scenarios.
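A sketch of the SVM screening stage with scikit-learn (the feature values are synthetic placeholders, and the ACFG graph-similarity stage is not shown): an SVM trained on function-level features flags a small candidate set of suspicious functions for the more expensive graph comparison.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder function-level features (e.g. counts of instructions,
# calls, basic blocks, string/numeric constants); values are synthetic.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 6))
y_train = (X_train[:, 0] + X_train[:, 3] > 0).astype(int)  # 1 = similar to vuln

clf = SVC(kernel="rbf", probability=True, random_state=0)
clf.fit(X_train, y_train)

# Screening: keep only high-probability candidates for ACFG matching.
candidates = rng.normal(size=(50, 6))
proba = clf.predict_proba(candidates)[:, 1]
suspicious = np.flatnonzero(proba > 0.8)
print(suspicious)   # indices passed on to graph-similarity comparison
```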
Interval uncertainty can cause uncontrollable variations in the objective and constraint values, which may seriously deteriorate the performance or even change the feasibility of the optimal solutions. Robust optimization aims to obtain solutions that are optimal and minimally sensitive to uncertainty. In this paper, a sequential multi-objective robust optimization (MORO) approach based on support vector machines (SVM) is proposed. Firstly, a sequential optimization structure is adopted to ease the computational burden. Secondly, SVM is used to construct a classification model that classifies design alternatives as feasible or infeasible. The proposed approach is tested on a numerical example and an engineering case. The results illustrate that the proposed approach can reasonably approximate the solutions obtained from the existing sequential MORO approach (SMORO), while the computational costs are significantly reduced compared with those of SMORO.
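A sketch of the classification step in Python (the constraint and data are toy placeholders, not the paper's engineering case): an SVM trained on sampled designs labeled by constraint satisfaction gives a cheap feasible/infeasible predictor that can stand in for expensive constraint evaluations during optimization.

```python
import numpy as np
from sklearn.svm import SVC

# Toy constraint standing in for an expensive simulation-based check.
def feasible(x):
    return x[0] ** 2 + x[1] ** 2 <= 1.0

rng = np.random.default_rng(0)
designs = rng.uniform(-2, 2, size=(300, 2))                 # sampled design points
labels = np.array([feasible(d) for d in designs], dtype=int)

clf = SVC(kernel="rbf", random_state=0).fit(designs, labels)
print(clf.predict([[0.2, 0.3], [1.5, 1.5]]))                # expected [1 0]
```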
The Internet of Things (IoT) is characterized by heterogeneous devices that interact with each other on a collaborative basis to fulfill a common goal. In this scenario, some of the deployed devices are expected to be constrained in terms of memory usage, power consumption and processing resources. To address the specific properties and constraints of such networks, a complete stack of standardized protocols has been developed, among them the Routing Protocol for Low-Power and Lossy Networks (RPL). However, this protocol is exposed to a large variety of attacks from inside the network itself. To fill this gap, this paper focuses on the design and integration of a novel link-reliable and trust-aware model into the RPL protocol. Our approach aims to ensure trust among entities and to provide QoS guarantees during the construction and maintenance of the network routing topology. Our model targets both node and link trust and follows a multidimensional approach to enable accurate trust value computation for IoT entities. To prove the efficiency of our proposal, it has been implemented and tested successfully within an IoT environment, and a set of experiments has been conducted to show the high accuracy level of our system.
Explosive naval mines pose a threat to ocean and sea faring vessels, both military and civilian. This work applies deep neural network (DNN) methods to the problem of detecting mine-like objects (MLOs) on the seafloor in side-scan sonar imagery. We explored how the DNN depth, memory requirements, calculation requirements, and training data distribution affect detection efficacy. A visualization technique (class activation map) was incorporated that aids a user in interpreting the model's behavior. We found that modest DNN model sizes yielded better accuracy (98%) than very simple DNN models (93%) and a support vector machine (78%). The largest DNN models achieved a <1% efficacy increase at the cost of a 17x increase in trainable parameter count and computation requirements. In contrast to DNNs popularized for many-class image recognition tasks, the models for this task require far fewer computational resources (0.3% of the parameters), and are suitable for embedded use within an autonomous unmanned underwater vehicle.
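The class activation map technique referenced here has a simple closed form for networks ending in global average pooling: the map is a weighted sum of the last convolutional feature maps, with weights taken from the classifier row for the class of interest. A minimal numpy version (the feature maps and weights are random placeholders, not the paper's model):

```python
import numpy as np

# Class activation map for a GAP-based classifier:
# CAM_c(i, j) = sum_k w_k^c * A_k(i, j), where A_k are the last
# conv feature maps and w^c the classifier weights for class c.
def class_activation_map(feature_maps, class_weights):
    # feature_maps: (K, H, W); class_weights: (K,)
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))
    cam -= cam.min()                      # normalize to [0, 1] for display
    return cam / cam.max() if cam.max() > 0 else cam

# Placeholder activations for a hypothetical "mine-like object" class.
A = np.random.default_rng(0).random((64, 8, 8))   # last conv layer output
w = np.random.default_rng(1).random(64)           # classifier weights, one class
print(class_activation_map(A, w).shape)           # (8, 8) heatmap over the image
```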
The increase in complexity and interconnectivity of Supervisory Control and Data Acquisition (SCADA) systems in recent years has exposed SCADA networks to numerous potential vulnerabilities. Several studies have shown that anomaly-based Intrusion Detection Systems (IDS) achieve improved performance in identifying unknown or zero-day attacks. In this paper, we propose a hybrid model for anomaly-based intrusion detection in SCADA networks using a machine learning approach. First, we present a robust hybrid model for anomaly-based intrusion detection in SCADA networks. Then, we present a feature selection model for anomaly-based intrusion detection in SCADA networks that removes redundant and irrelevant features, since irrelevant features in the dataset can affect modeling power and reduce predictive accuracy. These models were evaluated using an industrial control system dataset developed at the Distributed Analytics and Security Institute, Mississippi State University, Starkville, MS, USA. The experimental results show that our proposed model has a key effect in reducing the time and computational complexity and achieves improved accuracy and detection rate. The accuracy of our proposed model was measured as 99.5% for specific-attack-labeled data.
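A sketch of one common way to remove redundant and irrelevant features (a simple correlation filter; the abstract does not specify the paper's exact selection model, and the thresholds here are assumptions): drop features that are nearly constant or highly correlated with an already-kept feature.

```python
import numpy as np

# Simple filter sketch: drop near-constant (irrelevant) features and
# features highly correlated with an already-kept one (redundant).
# Thresholds are illustrative assumptions.
def select_features(X, var_min=1e-3, corr_max=0.95):
    keep = []
    for j in range(X.shape[1]):
        if X[:, j].var() < var_min:               # irrelevant: ~constant
            continue
        if any(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > corr_max
               for k in keep):                    # redundant: duplicates kept info
            continue
        keep.append(j)
    return keep

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X = np.column_stack([X, X[:, 0] * 1.01, np.ones(500)])  # add redundant + constant
print(select_features(X))   # [0, 1, 2, 3]: copy and constant are dropped
```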