Biblio
The Sensor Web is evolving into a complex information space, where large volumes of sensor observation data are often consumed by complex applications. Provenance has become an important issue in the Sensor Web, since it allows applications to answer “what”, “when”, “where”, “who”, “why”, and “how” queries related to observations and consumption processes, which helps determine the usability and reliability of data products. This paper investigates characteristics and requirements of provenance in the Sensor Web and proposes an interoperable approach to building a provenance model for the Sensor Web. Our provenance model extends the W3C PROV Data Model with Sensor Web domain vocabularies. It is developed using Semantic Web technologies and thus allows provenance information of sensor observations to be exposed in the Web of Data using the Linked Data approach. A use case illustrates the applicability of the approach.
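A minimal sketch of what such PROV-based observation provenance could look like in practice, using Python's rdflib; the `sw` namespace and its `Observation`/`observedProperty` terms are hypothetical stand-ins for the paper's domain vocabulary, not its actual terms:

```python
# Hedged sketch: an observation's provenance with W3C PROV terms plus a
# hypothetical Sensor Web namespace, expressed as RDF with rdflib.
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import PROV, XSD

SW = Namespace("http://example.org/sensorweb#")  # hypothetical domain vocabulary
EX = Namespace("http://example.org/data#")

g = Graph()
g.bind("prov", PROV)
g.bind("sw", SW)

# The observation is a PROV entity produced by a sensing activity.
g.add((EX.obs1, RDF.type, PROV.Entity))
g.add((EX.obs1, RDF.type, SW.Observation))          # domain-specific subclass
g.add((EX.sensing1, RDF.type, PROV.Activity))
g.add((EX.obs1, PROV.wasGeneratedBy, EX.sensing1))

# The sensor acts as the PROV agent responsible for the observation.
g.add((EX.sensor42, RDF.type, PROV.Agent))
g.add((EX.obs1, PROV.wasAttributedTo, EX.sensor42))
g.add((EX.obs1, SW.observedProperty, Literal("air_temperature", datatype=XSD.string)))

print(g.serialize(format="turtle"))
```

Serialising the graph in a Web-friendly format such as Turtle is one way such provenance could be exposed in the Web of Data following the Linked Data approach.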
In Sybil attacks, a physical adversary takes on multiple fabricated or stolen identities to maliciously manipulate the network. These attacks are very harmful for Internet of Things (IoT) applications. In this paper we implement and evaluate the performance of the RPL (Routing Protocol for Low-Power and Lossy Networks) routing protocol under mobile Sybil attacks, named SybM, with respect to control overhead, packet delivery and energy consumption. In SybM attacks, Sybil nodes take advantage of their mobility, and of RPL's weakness in handling identity and mobility, to flood the network with fake control messages from different locations. To counter this type of attack we propose a trust-based intrusion detection system built on RPL.
Data provenance provides a way for scientists to observe how experimental data originates, conveys process history, and explains influential factors such as experimental rationale and associated environmental factors from system metrics measured at runtime. The US Department of Energy Office of Science Integrated end-to-end Performance Prediction and Diagnosis for Extreme Scientific Workflows (IPPD) project has developed a provenance harvester that is capable of collecting observations from file-based evidence typically produced by distributed applications. To achieve this, file-based evidence is extracted and transformed into an intermediate data format, inspired in part by the W3C CSV on the Web recommendations, called the Harvester Provenance Application Interface (HAPI) syntax. This syntax provides a general means to pre-stage provenance into messages that are both human readable and capable of being written to a provenance store, the Provenance Environment (ProvEn). HAPI is being applied to harvest provenance from climate ensemble runs for the Accelerated Climate Modeling for Energy (ACME) project, funded under the U.S. Department of Energy's Office of Biological and Environmental Research (BER) Earth System Modeling (ESM) program. ACME informally provides provenance in a native form through configuration files, directory structures, and log files that contain success/failure indicators, code traces, and performance measurements. Because of its generic format, HAPI is also being applied to harvest tabular job-management provenance from Belle II DIRAC scheduler relational database tables, as well as from other scientific applications that log provenance-related information.
Smart IoT applications require connecting multiple IoT devices and networks with multiple services running in fog and cloud computing platforms. One approach to connecting IoT devices with cloud and fog services is to create a federated virtual network. The main benefit of this approach is that IoT devices can then interact with multiple remote services over an application-specific federated network that carries no traffic from other applications. This federated network spans multiple cloud platforms and IoT networks, yet it can be managed as a single entity. From a security point of view, federated virtual networks can be managed centrally and secured with a coherent global network security policy. This does not mean that the same security policy applies everywhere, but that the different security policies are specified within a single coherent security policy. In this paper we propose to extend a federated cloud networking security architecture so that it can secure IoT devices and networks. The federated network is extended to the edge of IoT networks by integrating a federation agent into an IoT gateway or network controller (CAN bus, 6LoWPAN, LoRa, ...). This allows communication between the federated cloud network and the IoT network. The security architecture is based on the concepts of network function virtualisation (NFV) and service function chaining (SFC) for composing security services. The IoT network and devices can then be protected by security virtual network functions (VNFs) running at the edge of the IoT network.
In the Internet of Things (IoT), smart devices are connected using various communication protocols, such as Wi-Fi and ZigBee, and some IoT devices have multiple built-in communication modules. If an IoT device equipped with multiple communication protocols is compromised by an attacker via one protocol (e.g., Wi-Fi), it can be exploited as an entry point to the IoT network: another protocol of the device (e.g., ZigBee) can then be used to exploit vulnerabilities of other IoT devices that use that protocol. In order to find potential attacks enabled by such cross-protocol devices, we group IoT devices based on their communication protocols and construct a graphical security model for each group of devices using the same communication protocol. We combine the security models via the cross-protocol devices and compute hidden attack paths traversing different groups of devices. We use two use cases in a smart home scenario to demonstrate our approach and discuss some feasible countermeasures.
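As a rough illustration of the combined-model idea (not the paper's formalism), one can view each protocol group as a reachability graph, join the groups at the cross-protocol device, and enumerate cross-group attack paths; the device names below are hypothetical:

```python
# Hedged sketch: model each protocol group as a graph of reachable
# devices, join groups through a cross-protocol device, and enumerate
# attack paths that traverse both groups. A real model would attach
# vulnerability metrics to nodes and edges.
import networkx as nx

g = nx.DiGraph()
# Wi-Fi group: the attacker can reach the hub through the router.
g.add_edges_from([("attacker", "wifi_router"), ("wifi_router", "smart_hub")])
# ZigBee group: the hub also speaks ZigBee, bridging into the second group.
g.add_edges_from([("smart_hub", "zigbee_lock"), ("smart_hub", "zigbee_bulb"),
                  ("zigbee_bulb", "zigbee_lock")])

# Hidden cross-protocol attack paths from the entry point to a target.
for path in nx.all_simple_paths(g, "attacker", "zigbee_lock"):
    print(" -> ".join(path))
```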
Node compromise remains one of the most damaging attacks in Wireless Sensor Networks (WSNs). It affects key distribution, which is a building block for securing communications in any network. The weak point of several proposed key distribution schemes for WSNs is their lack of resilience to node compromise: when a node is compromised, all of its key material is revealed, leading to insecure communication links throughout the network. This drawback is more harmful for long-lived WSNs that are deployed in multiple phases, i.e., multi-phase WSNs (MPWSNs). In recent years, many key management schemes have been proposed to ensure security in WSNs. However, these schemes are conceived for single-phase WSNs, and their security degrades with time as an attacker captures nodes. To deal with this drawback and enhance resilience to node compromise over the whole lifetime of the network, we propose in this paper a new key pre-distribution scheme adapted to MPWSNs. Our scheme takes advantage of the resilience improvement of the q-composite key scheme and adds self-healing, the ability of the scheme to decrease the effect of node compromise over time. Self-healing is achieved by pre-distributing fresh keys to each generation. The evaluation of our scheme shows that it has good key connectivity and high resilience to node compromise compared with existing key management schemes.
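A minimal sketch of the q-composite building block the scheme extends, with illustrative pool and ring sizes; the paper's self-healing would, in addition, refresh the key pool for each deployment generation so that keys captured in one generation do not compromise later ones:

```python
# Hedged sketch of q-composite key pre-distribution: each node draws a
# random key ring from a pool; two nodes can form a secure link only if
# they share at least q keys. Sizes here are illustrative.
import hashlib
import random

POOL_SIZE, RING_SIZE, Q = 1000, 50, 2
pool = list(range(POOL_SIZE))

def key_ring():
    return set(random.sample(pool, RING_SIZE))

a, b = key_ring(), key_ring()
shared = a & b
if len(shared) >= Q:
    # In q-composite schemes the link key is derived from all shared
    # keys (e.g. by hashing them together), so an adversary must capture
    # every shared key to break the link.
    link_key = hashlib.sha256(",".join(map(str, sorted(shared))).encode()).hexdigest()
    print("link established:", link_key[:16], "...")
else:
    print("not enough shared keys; no direct link")
```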
There are vast amounts of information in our world, and accessing the most accurate information quickly is becoming ever more difficult and complicated. Much relevant information is ignored, which leads to duplication of work and effort; research therefore focuses on rapid and intelligent retrieval systems. Information retrieval (IR) is the process of searching for information related to some topic of interest. Because of the massive size of search results, users normally have difficulty identifying the relevant ones. To alleviate this problem, a recommendation system is used. A recommendation system is a kind of information filtering system that predicts the relevance of retrieved information to the user's needs according to some criteria, and can thus provide the user with the results that best fit their needs. Services provided through the web normally return massive amounts of information about any requested item or service, so an efficient recommendation system is required to classify these results. A recommendation system can be further improved if augmented with a level of trust, so that recommendations are ranked according to how trusted they are. In our research, we produced a recommendation system combined with an efficient level-of-trust system to guarantee that the posts, comments and feedback from users are trusted. We customized the concept of LoT (Level of Trust) [1], since it can cover medical, shopping and learning through social media. The proposed system, TRS\_LoT, provides trusted recommendations to users with a high percentage of accuracy. A dataset of 300 posts with more than 5,000 comments from ``Amazon'' was selected, and the experiment was conducted on this dataset based on post ratings.
This paper proposes the implementation of a chaotic cryptosystem with a new key generator that uses chaos in a digital filter for data encryption and decryption. The chaos in a second-order digital filter is produced by coefficients that are initialised in the key generator and used to produce new coefficients. The private-key system uses the initial coefficient values together with a dynamic input, a 16-character password, to generate the coefficients for the cryptosystem. In addition, we specifically target data security in lightweight cryptography based on external and internal keys, aiming for appropriate key sensitivity together with high performance. The chaos in the digital filter is the core component of the system. The experimental results illustrate that the proposed data encryption scheme with the new key generator is highly key-sensitive, with a key test accuracy of 99%, and can make data more secure with high performance.
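For intuition only, here is a heavily simplified sketch of chaos in a second-order digital filter with two's-complement overflow (a classic source of chaos in such filters), with a hypothetical password-to-coefficient mapping that is not the paper's actual key schedule:

```python
# Hedged sketch: a marginally stable second-order filter section whose
# two's-complement overflow nonlinearity produces chaotic state
# trajectories; the state is quantised into key bytes.
import hashlib

def coeffs_from_password(pw: str):
    """Derive filter coefficients from a 16-character password (illustrative)."""
    h = hashlib.sha256(pw.encode()).digest()
    a = 1.0 + h[0] / 255.0   # in (1, 2): a regime where overflow causes chaos
    b = -1.0                 # lossless second-order section
    return a, b

def overflow(v):
    """Two's-complement overflow nonlinearity, wrapping into [-1, 1)."""
    return ((v + 1.0) % 2.0) - 1.0

a, b = coeffs_from_password("0123456789abcdef")
x_prev, x = 0.3, 0.7  # initial conditions, part of the key material
keystream = []
for _ in range(16):
    x_prev, x = x, overflow(a * x + b * x_prev)
    keystream.append(int((x + 1.0) * 127.5))  # quantise state to a key byte
print(bytes(keystream).hex())
```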
In this paper a random number generation method based on piecewise-linear one-dimensional (PL1D) discrete-time chaotic maps is proposed for applications in cryptography and steganography. Appropriate parameters are determined by examining the distribution of the underlying chaotic signal, and the random number generator (RNG) is numerically verified with the four fundamental statistical tests of FIPS 140-2. The proposed design is practically realized on field programmable analog and digital arrays (FPAA/FPGA). Finally, it is experimentally verified that the presented RNG passes the NIST 800-22 randomness tests without post-processing.
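A small sketch of the PL1D idea using a skew tent map with threshold bit extraction; the map, parameters, and the monobit-style sanity check are illustrative, whereas the paper validates with the full FIPS 140-2 and NIST 800-22 suites:

```python
# Hedged sketch of a PL1D chaotic RNG: iterate a skew tent map and
# threshold the state to emit bits.
def skew_tent(x, p=0.4999):
    return x / p if x < p else (1.0 - x) / (1.0 - p)

def random_bits(seed=0.37, n=20000, burn_in=1000):
    x, bits = seed, []
    for i in range(burn_in + n):
        x = skew_tent(x)
        if i >= burn_in:
            bits.append(1 if x >= 0.5 else 0)
    return bits

bits = random_bits()
# Quick sanity check akin to the FIPS monobit test: ones should be ~50%.
print("ones fraction:", sum(bits) / len(bits))
```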
In today's information age, with the rapid development and wide application of communication and network technology, more and more information is transmitted over networks, so information security and protection are becoming increasingly important, and cryptographic theory and technology have become an important research field in information science. In recent years, many researchers have found a close relationship between chaos and cryptography. Chaotic systems are extremely sensitive to initial conditions and can produce large numbers of sequences with good cryptographic properties, such as pseudo-randomness, low correlation, high complexity, and a wide spectrum, providing a new and effective means for data encryption. However, chaotic cryptography, as a new interdisciplinary field, is still at an early stage of development: although many chaotic encryption schemes have been proposed, the methods are not yet fully mature. Against this background, this work discusses key problems such as chaotic maps for chaotic cipher systems, chaotic stream ciphers, and chaotic random number generators for key generation. Because one-dimensional chaotic encryption algorithms suffer from a small key space and low security, this paper selects a two-dimensional hyperchaotic system generated by coupled logistic maps as the research object. The research focuses on the application of hyperchaotic sequences to data encryption, makes some useful attempts at chaotic data encryption algorithms, and further explores applications of chaos in data encryption.
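To make the coupled-map idea concrete, here is a hedged sketch of a 2-D system built from two cross-coupled logistic maps whose quantised states form a keystream; the coupling form, parameters, and quantisation are illustrative, not the paper's exact construction:

```python
# Hedged sketch: two cross-coupled logistic maps drive a keystream that
# is XORed with the plaintext; decryption repeats the same keystream.
def coupled_logistic(x, y, r=3.99, eps=0.1):
    xn = (1 - eps) * r * x * (1 - x) + eps * r * y * (1 - y)
    yn = (1 - eps) * r * y * (1 - y) + eps * r * x * (1 - x)
    return xn, yn

def keystream(nbytes, x=0.123456, y=0.654321):
    out = bytearray()
    for _ in range(nbytes):
        x, y = coupled_logistic(x, y)
        out.append((int(x * 256) & 0xFF) ^ (int(y * 256) & 0xFF))
    return bytes(out)

plaintext = b"attack at dawn"
ks = keystream(len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, ks))
recovered = bytes(c ^ k for c, k in zip(ciphertext, ks))
assert recovered == plaintext
print(ciphertext.hex())
```

The initial conditions (x, y) act as the key; sensitivity to initial conditions is what makes even a tiny key change produce a completely different keystream.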
Efficient management and control of modern and next-generation networks is of paramount importance, as networks have to maintain highly reliable service quality while supporting rapid growth in traffic demand and new application services. Rapid mitigation of network service degradations is a key factor in delivering high service quality. Automation is vital to achieving rapid mitigation of issues, particularly at the network edge, where the scale and diversity are greatest. This automation involves the rapid detection, localization and (where possible) repair of service-impacting faults and performance impairments. However, the most significant challenge here is knowing what events to detect, how to correlate events to localize an issue, and what mitigation actions should be performed in response to the identified issues. These are defined as policies in systems such as ECOMP. In this paper, we present AESOP, a data-driven intelligent system that facilitates the automatic learning of policies and rules for triggering remedial actions in networks. AESOP combines best operational practices (domain knowledge) with a variety of measurement data to learn and validate operational policies that mitigate service issues in networks. AESOP's design addresses the following key challenges: (i) learning from high-dimensional noisy data, (ii) capturing multiple fault models, (iii) modeling the high service cost of false positives, and (iv) accounting for the evolving network infrastructure. We present the design of our system and show results from our ongoing experiments demonstrating the effectiveness of our policy learning framework.
Zero dynamics attacks are lethal to cyber-physical systems in the sense that they are stealthy and there is no way to detect them. Fortunately, if the given continuous-time physical system is of minimum phase, the effect of the attack is negligible even when it goes undetected. However, the situation becomes unfavorable again if one uses digital control, sampling the sensor measurements and using a zero-order hold for actuation, because of the `sampling zeros.' When the continuous-time system has relative degree greater than two and the sampling period is small, the sampled-data system must have unstable zeros (even if the continuous-time system is of minimum phase), so that the cyber-physical system becomes vulnerable to a `sampling zero dynamics attack.' In this paper, we begin by demonstrating the attack with a few examples. Then, we present an idea for protecting the system by relocating those discrete-time zeros to stable locations. This idea is realized by employing the so-called `generalized hold,' which replaces the zero-order hold.
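The phenomenon is easy to reproduce numerically. The sketch below (using SciPy, with an illustrative triple integrator and sampling period) discretises a minimum-phase system of relative degree three under zero-order hold and reveals a sampling zero outside the unit circle:

```python
# Hedged sketch: a triple integrator (relative degree 3, no finite
# zeros at all) acquires an unstable zero under zero-order-hold
# sampling, which is what enables the sampling zero dynamics attack.
import numpy as np
from scipy.signal import cont2discrete, ss2tf

# Triple integrator: x1' = x2, x2' = x3, x3' = u, y = x1.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
B = np.array([[0.], [0.], [1.]])
C = np.array([[1., 0., 0.]])
D = np.array([[0.]])

Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), dt=0.01, method="zoh")
num, den = ss2tf(Ad, Bd, Cd, Dd)
coeffs = num[0].copy()
coeffs[np.abs(coeffs) < 1e-15] = 0.0          # drop numerical residue
zeros = np.roots(np.trim_zeros(coeffs, "f"))
print(zeros)  # approx -3.732 and -0.268: one zero lies outside the unit circle
```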
Network coding is a promising method that numerous investigators have advanced because of its significant advantages in improving the efficiency of data communication. In this work, we use simulations to assess the performance of various network topologies employing network coding. Comparing results with and without network coding shows that network coding can improve throughput, end-to-end delay, Packet Delivery Rate (PDR), and reliability. This paper presents a comparative performance analysis of network coding techniques, namely XOR, LNC, and RLNC. The results demonstrate that the XOR technique performs attractively and can substantially improve the real-time performance metrics, i.e., throughput, end-to-end delay, and PDR. The analysis is carried out with respect to both packet size and the number of packets transmitted. The results also illustrate that network coding facilitates independence between networks.
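For reference, the XOR technique at its simplest works as in the classic butterfly network, where a relay forwards the XOR of two packets and each sink recovers the missing packet with one further XOR; the sketch below is illustrative, not the paper's simulation setup:

```python
# Hedged sketch of XOR network coding on the butterfly topology: the
# bottleneck node forwards p1 XOR p2, and each sink recovers the packet
# it is missing with one more XOR.
p1 = bytes([0x12, 0x34, 0x56])   # packet headed toward sink B
p2 = bytes([0xAB, 0xCD, 0xEF])   # packet headed toward sink A

coded = bytes(a ^ b for a, b in zip(p1, p2))   # sent over the shared link

# Sink A already holds p1 (received on its direct branch):
recovered_p2 = bytes(a ^ b for a, b in zip(coded, p1))
# Sink B already holds p2:
recovered_p1 = bytes(a ^ b for a, b in zip(coded, p2))

assert recovered_p1 == p1 and recovered_p2 == p2
print("both sinks decoded successfully")
```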
Bitcoin, a peer-to-peer payment system and digital currency, is often involved in illicit activities such as scamming, ransomware attacks, illegal goods trading, and thievery. At the time of writing, the Bitcoin ecosystem has not yet been mapped, and as such there is no estimate of the share of illicit activities. This paper provides the first estimate of the portion of cyber-criminal entities in the Bitcoin ecosystem. Our dataset consists of 854 observations categorised into 12 classes (of which 5 are cybercrime-related) and a total of 100,000 uncategorised observations. The dataset was obtained from a data provider who applied three types of clustering of Bitcoin transactions to categorise entities: co-spend, intelligence-based, and behaviour-based. Thirteen supervised learning classifiers were then tested, of which four prevailed with cross-validation accuracies of 77.38%, 76.47%, 78.46%, and 80.76%, respectively. From the top four classifiers, the Bagging and Gradient Boosting classifiers were selected based on their weighted average and per-class precision on the cybercrime-related categories. Both models were used to classify the 100,000 uncategorised entities, showing that the share of cybercrime-related entities is 29.81% according to Bagging and 10.95% according to Gradient Boosting, taking the number of entities as the metric. With regard to the number of addresses and current coins held by such entities, the results are: 5.79% and 10.02% according to Bagging; and 3.16% and 1.45% according to Gradient Boosting.
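A hedged sketch of the classification stage with scikit-learn, using synthetic placeholder features rather than the paper's transaction-derived dataset:

```python
# Hedged sketch: cross-validate Bagging and Gradient Boosting on
# labelled entities, then apply a fitted model to uncategorised ones.
# Features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(854, 10))        # placeholder feature matrix
y = rng.integers(0, 12, size=854)     # 12 entity classes

for clf in (BaggingClassifier(), GradientBoostingClassifier(n_estimators=50)):
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(clf).__name__, "cv accuracy: %.3f" % scores.mean())

# Classify uncategorised observations with a fitted model.
X_unlabelled = rng.normal(size=(1000, 10))
model = GradientBoostingClassifier(n_estimators=50).fit(X, y)
pred = model.predict(X_unlabelled)
# Assume (hypothetically) classes 0-4 are the cybercrime-related ones.
print("predicted cybercrime share:", np.isin(pred, [0, 1, 2, 3, 4]).mean())
```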
Collaborative Filtering (CF) is a successful technique that has been implemented in recommender systems, and Privacy-Preserving Collaborative Filtering (PPCF) has aroused increasing concern in society. Current solutions mainly focus on cryptographic, obfuscation, perturbation, and differential privacy methods, but these have shortcomings such as unnecessary computational cost, reduced data quality, and difficulty in calibrating the magnitude of noise. This paper proposes a (k, p, l)-anonymity method that improves the existing k-anonymity method in PPCF. The method works as follows. First, it applies a Latent Factor Model (LFM) to reduce matrix sparsity. Then it improves the Maximum Distance to Average Vector (MDAV) microaggregation algorithm with importance partitioning, increasing homogeneity among the records in each group, which retains better data quality; it also applies a (p, l)-diversity model, where p is the attacker's prior knowledge about users' ratings and l is the diversity among users in each group, to improve the level of privacy preservation. Theoretical and experimental analyses show that our approach ensures a higher level of privacy preservation with lower information loss.
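A simplified sketch of MDAV-style microaggregation (without the paper's importance partitioning or the (p, l)-diversity check), using placeholder rating vectors:

```python
# Hedged sketch of MDAV-style microaggregation: repeatedly take the
# record farthest from the centroid, group it with its k-1 nearest
# neighbours, and replace each group by its average.
import numpy as np

def mdav(X, k=3):
    remaining = list(range(len(X)))
    groups = []
    while len(remaining) >= 2 * k:
        centroid = X[remaining].mean(axis=0)
        far = max(remaining, key=lambda i: np.linalg.norm(X[i] - centroid))
        near = sorted(remaining, key=lambda i: np.linalg.norm(X[i] - X[far]))[:k]
        groups.append(near)
        remaining = [i for i in remaining if i not in near]
    groups.append(remaining)  # leftover records form the last group
    return groups

rng = np.random.default_rng(1)
ratings = rng.random((10, 4))             # placeholder user-rating vectors
for g in mdav(ratings, k=3):
    ratings[g] = ratings[g].mean(axis=0)  # anonymise: replace by group mean
print(ratings.round(2))
```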
Internet of Things (IoT) devices are resource-constrained in terms of power, memory, bandwidth, and processing. On the other hand, multicast communication is more efficient than unicast in group-oriented applications, since transmission uses fewer resources. That is why many IoT applications rely on multicast. This multicast traffic needs to be secured, especially for critical applications involving actuator control. Securing multicast traffic is itself cumbersome, as it requires an efficient and scalable Group Key Management (GKM) protocol, and in the case of IoT the situation is more difficult because of the dynamic nature of IoT scenarios. This paper introduces a solution based on a context-aware security server, accompanied by a group of key servers, to efficiently distribute group encryption keys to IoT devices and thereby secure multicast sessions. The proposed solution is evaluated against the Logical Key Hierarchy (LKH) protocol. The comparison shows that the proposed scheme efficiently reduces the load on the key servers. Moreover, the key storage cost on both members and key servers is reduced.
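For intuition about the baseline, the sketch below contrasts approximate rekeying message counts for a flat scheme and an LKH key tree after a member leaves; the formulas are standard back-of-the-envelope estimates, not the paper's evaluation model:

```python
# Hedged sketch: in a flat scheme the key server re-sends a new group
# key to every remaining member, while LKH updates only the keys on the
# leaving member's root path, roughly O(log n) messages.
import math

def flat_rekey_msgs(n):
    return n - 1                      # one message per remaining member

def lkh_rekey_msgs(n, arity=2):
    depth = math.ceil(math.log(n, arity))
    return arity * depth - 1          # rough count for path-key updates

for n in (16, 1024, 65536):
    print(f"n={n}: flat={flat_rekey_msgs(n)}, lkh={lkh_rekey_msgs(n)}")
```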
Interval uncertainty can cause uncontrollable variations in objective and constraint values, which can seriously degrade performance or even change the feasibility of the optimal solutions. Robust optimization seeks solutions that are optimal and minimally sensitive to uncertainty. In this paper, a sequential multi-objective robust optimization (MORO) approach based on support vector machines (SVM) is proposed. First, a sequential optimization structure is adopted to ease the computational burden. Second, an SVM is used to construct a classification model that classifies design alternatives as feasible or infeasible. The proposed approach is tested on a numerical example and an engineering case. Results illustrate that the proposed approach can reasonably approximate the solutions obtained from the existing sequential MORO approach (SMORO), while the computational costs are significantly reduced compared with those of SMORO.
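A minimal sketch of the SVM classification step under a toy constraint; in practice the feasibility labels would come from expensive robust constraint evaluations:

```python
# Hedged sketch: train an SVM on design alternatives labelled
# feasible/infeasible, then use it to screen new candidates cheaply.
# The constraint g(x) <= 0 is a toy placeholder.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(200, 2))          # sampled design alternatives
g = X[:, 0] ** 2 + X[:, 1] ** 2 - 2.0          # toy constraint function
y = (g <= 0).astype(int)                       # 1 = feasible, 0 = infeasible

clf = SVC(kernel="rbf").fit(X, y)

candidates = rng.uniform(-2, 2, size=(5, 2))
print(clf.predict(candidates))  # screen candidates without re-evaluating g
```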
This paper addresses the problem of state estimation of a linear time-invariant system when some of the sensors and/or actuators are under adversarial attack. In our set-up, the adversarial agent attacks a sensor (actuator) by manipulating its measurement (input), and we impose no constraint on how the measurements (inputs) are corrupted. We introduce the notion of ``sparse strong observability'' to characterize systems for which state estimation is possible, given bounds on the numbers of attacked sensors and actuators. Furthermore, we develop a secure state estimator based on Satisfiability Modulo Theories (SMT) solvers.
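The combinatorial core of the problem can be sketched as follows; an SMT-based estimator such as the paper's explores this search space far more efficiently than the brute-force enumeration shown here:

```python
# Hedged sketch: search for a set of at most s sensors whose removal
# makes the remaining (static, noiseless) measurements consistent with
# some state. Matrices and values are illustrative.
import numpy as np
from itertools import combinations

C = np.array([[1., 0.], [0., 1.], [1., 1.], [1., -1.]])  # 4 sensors, 2 states
x_true = np.array([2.0, -1.0])
y = C @ x_true
y[2] += 5.0                     # sensor 2 is under attack

s = 1                           # bound on the number of attacked sensors
found = False
for k in range(s + 1):
    for attacked in combinations(range(len(y)), k):
        keep = [i for i in range(len(y)) if i not in attacked]
        x_hat, *_ = np.linalg.lstsq(C[keep], y[keep], rcond=None)
        if np.allclose(C[keep] @ x_hat, y[keep]):
            print("attacked sensors:", attacked, "estimate:", x_hat)
            found = True
            break
    if found:
        break
```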
Code smells may be introduced into software by market rivalry, deadline pressure, improper functioning, or the limited skills and inexperience of software developers. Code smells indicate problems in design or code that make software hard to change and maintain. Detecting code smells can reduce developers' effort, resources, and the cost of the software. Many researchers have proposed techniques such as DETEX for detecting code smells, but these have limited precision and recall. To overcome these limitations, a new technique named SVMCSD is proposed for the detection of code smells, based on the support vector machine learning technique. Four code smells are targeted, namely God Class, Feature Envy, Data Class, and Long Method, and the proposed technique is validated on two open source systems, ArgoUML and Xerces. The accuracy of SVMCSD is found to be better than that of DETEX in terms of two metrics, precision and recall, when applied to a subset of a system; when considering the entire system, SVMCSD detects more occurrences of code smells than DETEX.
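A hedged sketch of metric-based smell detection with an SVM, using synthetic metric values (LOC, method count, coupling) rather than the ArgoUML/Xerces data, and reporting the precision/recall used for comparison:

```python
# Hedged sketch: represent each class by code metrics and train an SVM
# to flag God Classes. Data is synthetic and illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
normal = rng.normal([200, 10, 5], [80, 4, 2], size=(180, 3))    # typical classes
god = rng.normal([1500, 60, 25], [300, 15, 6], size=(20, 3))    # God Classes
X = np.vstack([normal, god])
y = np.array([0] * 180 + [1] * 20)     # 1 = God Class

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="rbf", class_weight="balanced").fit(Xtr, ytr)
pred = clf.predict(Xte)
print("precision:", precision_score(yte, pred), "recall:", recall_score(yte, pred))
```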
Named Data Networking (NDN) is a content-oriented future Internet architecture, which well suits the increasingly mobile and information-intensive applications that dominate today's Internet. NDN relies on in-network caching to facilitate content delivery. This makes it challenging to enforce access control since the content has been cached in the routers and the content producer has lost the control over it. Due to its salient advantages in content delivery, network coding has been introduced into NDN to improve content delivery effectiveness. In this paper, we design ACNC, the first Access Control solution specifically for Network Coding-based NDN. By combining a novel linear AONT (All Or Nothing Transform) and encryption, we can ensure that only the legitimate user who possesses the authorization key can successfully recover the encoding matrix for network coding, and hence can recover the content being transmitted. In addition, our design has two salient merits: 1) the linear AONT well suits the linear nature of network coding; 2) only one vector of the encoding matrix needs to be encrypted/decrypted, which only incurs small computational overhead. Security analysis and experimental evaluation in ndnSIM show that our design can successfully enforce access control on network coding-based NDN with an acceptable overhead.
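A toy sketch of the linear-AONT property over GF(257) (the paper's exact transform and field may differ): mixing the encoding matrix with an invertible all-nonzero matrix makes every mixed row depend on all original rows, so withholding (encrypting) a single mixed row denies reconstruction of the matrix:

```python
# Hedged sketch: mix the network-coding encoding matrix M with an
# invertible linear transform T; only one mixed row needs encryption,
# since without it the remaining rows underdetermine M.
import numpy as np

P = 257
rng = np.random.default_rng(4)
M = rng.integers(0, P, size=(3, 3))          # network-coding encoding matrix

J, I = np.ones((3, 3), dtype=int), np.eye(3, dtype=int)
T = (I + J) % P                              # all-nonzero mixing matrix: every
mixed = (T @ M) % P                          # mixed row depends on every row of M

# Legitimate receiver: decrypt mixed[0] with the authorization key, then
# invert T. Here (I + J)^-1 = I - J/4, computed mod P via Sherman-Morrison.
inv4 = pow(4, -1, P)
T_inv = (I - inv4 * J) % P
assert np.array_equal((T_inv @ mixed) % P, M)
print("encoding matrix recovered")
```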
Volumetric DDoS attacks continue to inflict serious damage. Many proposed defenses for mitigating such attacks assume that a monitoring system has already detected the attack. However, many proposed DDoS monitoring systems do not focus on efficiently analyzing high volume network traffic to provide important characterizations of the attack in real-time to downstream traffic filtering systems. We propose a scalable real-time framework for an effective volumetric DDoS monitoring system that leverages modern big data technologies for streaming analytics of high volume network traffic to accurately detect and characterize attacks.
Entity authentication is one of the fundamental information security properties for secure transactions and communications. The combination of biometrics with cryptography is an emerging topic for authentication protocol design. Among existing biometrics (e.g., fingerprint, face, iris, voice, heart), the heart signal carries a liveness property of the biometric sample. In this paper, a remote entity authentication protocol is proposed based on the randomness of heart biometrics combined with chaotic cryptography. To this end, initial keys for chaotic logistic maps are generated from the heart signal. Authentication parameters are derived from these initial keys, allowing claimants and verifiers to authenticate and verify each other. Because each communication session differs from the others, many session-oriented attacks are prevented. Experiments have been conducted on sample heart signals for remote authentication. The results show that the randomness of the heart signal can help implement a well-known secure encryption scheme, namely the one-time pad.
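A hedged sketch of the core mechanism, with synthetic inter-pulse intervals standing in for real heart-signal features and a hypothetical seed derivation; the paper's actual challenge/response steps are omitted:

```python
# Hedged sketch: derive a logistic-map seed from heart-signal samples,
# generate a keystream, and use it as a one-time-pad-style session key.
import hashlib

ipi_ms = [812, 795, 830, 808, 790, 821]      # placeholder inter-pulse intervals

# Map the biometric randomness to an initial condition in (0, 1).
digest = hashlib.sha256(str(ipi_ms).encode("ascii")).digest()
x = (int.from_bytes(digest[:8], "big") % 10**9) / 10**9 or 0.5

def logistic_keystream(x, nbytes, r=3.99):
    out = bytearray()
    for _ in range(nbytes):
        x = r * x * (1 - x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

msg = b"session nonce 42"
ks = logistic_keystream(x, len(msg))
cipher = bytes(m ^ k for m, k in zip(msg, ks))
print(cipher.hex())
```

Because a fresh heart signal seeds each session, the keystream changes per session, which is what gives the scheme its one-time-pad flavour.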