Bibliography
The Internet of Things (IoT) and RFID devices are essential parts of the new generation of information technology. They are mostly characterized by their limited power and computing resources. To ensure their security under these constraints, a number of lightweight cryptographic algorithms have emerged. This paper presents a performance analysis of six lightweight block ciphers with different structures - LED, PRESENT, HIGHT, LBlock, PICCOLO and TWINE - on the LEON3 open-source processor. We have implemented these ciphers in the C language on an FPGA board running the LEON3 processor. The ciphers are evaluated against various benchmark parameters such as throughput, execution time, CPU performance, AHB bandwidth, simulator performance, and speed. These metrics are tested with the different key sizes provided by each algorithm.
Using blockchain technology to solve the security problems of the Internet of Things (IoT) is a research hotspot. Although many related ideas have been proposed, few studies offer theoretical and experimental support. This paper focuses on model construction and performance evaluation. First, an IoT security model is established based on blockchain and the InterPlanetary File System (IPFS). This model avoids many security risks of traditional IoT architectures and significantly improves system performance in distributed large-capacity storage, concurrency, and query. Second, the performance of the proposed model is evaluated in terms of average latency and throughput, which is meaningful for further research and optimization in this direction. Analysis and test results demonstrate the effectiveness of the blockchain-based security model.
In cognitive radio networks (CRNs), secondary users (SUs) are vulnerable to malicious attacks because an SU node's opportunistic access cannot be protected from adversaries. How to design a channel hopping scheme that protects SU nodes from jamming attacks is thus an important issue in CRNs. Existing anti-jamming channel hopping schemes have several limitations: some require SU nodes to exchange secrets in advance; some require an SU node to be either a receiver or a sender; and some are not flexible enough. Moreover, existing schemes do not consider that different nodes may have different traffic loads. In this paper, we propose an anti-jamming channel hopping protocol, the Load Awareness Anti-jamming (LAA) channel hopping scheme. Nodes running LAA can change their channel hopping sequences based on their sending and receiving traffic. Simulation results verify that LAA outperforms existing anti-jamming schemes.
This paper studies the physical-layer security performance of a simultaneous wireless information and power transfer (SWIPT) millimeter-wave (mmWave) ultra-dense network under a stochastic geometry framework. Specifically, we first derive the energy-information coverage probability and the secrecy probability of the considered system under time switching policies. We then derive the effective secrecy throughput (EST), which characterizes the trade-off between energy coverage and secure, reliable transmission. Theoretical analyses and simulation results reveal design insights into the effects of various network parameters, such as transmit power, time switching factor, transmission rate, and confidential information rate, on the secrecy performance. In particular, the effective secrecy throughput cannot be improved simply by increasing the transmit power.
Covert, or low probability of detection, communication is crucial to protect user privacy and provide strong security. We analyze the joint impact of imperfect knowledge of the channel gain (channel uncertainty) and the noise power (noise uncertainty) on the average probability of detection error at the eavesdropper and on the covert throughput in a Rayleigh fading channel. We characterize the covert throughput gain provided by the channel uncertainty, as well as the covert throughput loss caused by channel fading, as functions of the noise uncertainty. Our results show that channel fading is essential to hiding the signal transmission, particularly when the noise uncertainty is below a threshold and/or the receive SNR is above a threshold. The impact of the channel uncertainty on the average probability of detection error and the covert throughput is more significant when the noise uncertainty is larger.
Existing research on Internet of Things (IoT) security mainly focuses on attack and defense on a single protocol layer. The increasing and ubiquitous use of IoT also makes it vulnerable to many attacks. An attacker tries to perform intelligent, stealthy attacks that reduce the risk of being detected. In such attacks, attackers do not restrict themselves to a single layer of the protocol stack; they degrade network performance and throughput through simultaneous, coordinated attacks on different layers. A new class of attacks, termed cross-layer attacks, has become prominent due to the lack of interaction between the MAC, routing, and upper layers. These attacks achieve a greater effect at reduced cost. Cross-layer attacks have been studied in other domains such as cognitive radio networks (CRNs), wireless sensor networks (WSNs), and ad hoc networks; however, to the best of our knowledge, this is the first work on cross-layer attacks in IoT. In this paper, we propose the Rank Manipulation and Drop Delay (RMDD) cross-layer attack in IoT and investigate how a low-intensity attack on the Routing Protocol for Low-Power and Lossy Networks (RPL) degrades the overall application throughput. We exploit the rank system of the RPL protocol to implement the attacks. A rank is assigned to each node in the graph and indicates its position in the network. If the rank can be manipulated, then the network topology can be modified. Simulation results demonstrate that the proposed attacks severely degrade network performance in terms of throughput, latency, and connectivity.
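The rank mechanism such an attack exploits can be illustrated with a minimal sketch (node names and rank values below are hypothetical, and real RPL derives rank from an objective function rather than a bare integer): a node prefers the parent advertising the lowest rank, so an attacker advertising an artificially low rank attracts its neighbors' traffic.

```python
def choose_parent(neighbors):
    # RPL-style parent selection: prefer the neighbor advertising the
    # lowest rank, i.e., the one claiming to be closest to the root
    return min(neighbors, key=lambda n: n[1])[0]

honest = [("A", 512), ("B", 768)]
print(choose_parent(honest))                  # A

# a malicious node advertising a falsely low rank captures the traffic
print(choose_parent(honest + [("M", 256)]))   # M
```

Once traffic is attracted this way, the second half of the attack (selective drop and delay) operates on the captured flows.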
MANETs are vulnerable to many attacks, such as black hole, wormhole, jellyfish, and DoS. Attackers can easily launch a wormhole attack by faking a route within the network. In this paper, we propose an algorithm based on the statistical measure of absolute deviation (AD) to avoid and prevent wormhole attacks. Absolute deviation covariance and correlation take less time to detect a wormhole attack than their classical counterparts. The proposed algorithm needs no extra equipment, such as GPS. Wormhole attackers create a fake tunnel from source to destination, a link with a high frequency level. This creates the false impression that the source and destination are very near each other and that the path will take less time, whereas the original path takes more time. It is therefore necessary to measure the time taken in order to avoid and prevent wormhole attacks. MATLAB simulations of the wormhole attack show that the absolute deviation technique performs better than classical AODV. The packet drop pattern for wormholes is also measured using the absolute deviation correlation coefficient.
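As a rough illustration of the statistic involved, the sketch below replaces the standard deviations in Pearson's correlation with mean absolute deviations. This is one plausible formulation, not necessarily the exact definition used in the paper, and the hop-count/RTT data are invented for illustration.

```python
def abs_dev_corr(x, y):
    # correlation coefficient with mean absolute deviations in place of
    # the standard deviations used by classical Pearson correlation
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    mad_x = sum(abs(a - mx) for a in x) / n
    mad_y = sum(abs(b - my) for b in y) / n
    return cov / (mad_x * mad_y)

# hop counts vs. observed round-trip times (ms) for several routes:
hops = [2, 3, 4, 5]
rtts = [20.1, 30.4, 39.8, 50.2]
print(abs_dev_corr(hops, rtts) > 0)   # True: delay tracks path length
```

A wormhole tunnel that makes a long path appear short would break this relationship between hop count and delay, which is the kind of anomaly a correlation-based detector can flag.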
Mobile Ad-Hoc Networks (MANETs) are prone to many security attacks. One such attack is the blackhole attack. This work proposes a simple and effective application-layer intrusion detection scheme for MANETs to detect blackholes. The proposed algorithm utilizes mobile agents (MAs) and wtracert (a modified version of Traceroute for MANETs) to detect multiple blackholes in a DSR-based MANET. The use of MAs ensures that no modifications need to be carried out in the underlying routing algorithms or other lower layers. Simulation results show successful detection of single and multiple blackhole nodes, using the proposed detection mechanism, across varying node mobility speeds.
In a Mobile Ad-hoc Network (MANET), the topology cannot be clearly predicted because of its constantly changing nature. Nodes join and leave without notice, resulting in a lack of trust relationships between nodes. In such circumstances, there is no guarantee that a path between two nodes is secure or free of malicious nodes, and the presence of a single malicious node can lead to repeatedly compromised nodes. Even after securing routes and data packets, there is still a need for a defense mechanism against compromised nodes, namely an intrusion detection system (IDS). In this paper, we implement an IDS that successfully defends against routing attacks such as the black hole and gray hole attacks. Performance measurements show a marginal increase in packet delivery ratio and throughput.
The analysis of security-related event logs is an important step in the investigation of cyber-attacks. It allows tracing malicious activities and lets a security operator find out what has happened. However, since IT landscapes are growing in size and diversity, the volume of events and their highly heterogeneous representations are becoming a Big Data challenge. Unfortunately, current solutions for the analysis of security-related events, so-called Security Information and Event Management (SIEM) systems, cannot keep up with the load. In this work, we propose a distributed SIEM platform that employs highly efficient distributed normalization and persists event data into an in-memory database. We implement the normalization on common distribution frameworks, i.e., Spark, Storm, Trident, and Heron, and compare their performance with our custom-built distribution solution. Additionally, different tuning options are introduced and their speed advantages are presented. Finally, we show how writing into an in-memory database can be tuned to achieve optimal persistence speed. Using the proposed approach, we are able not only to fully normalize, but also to persist more than 20 billion events per day with relatively modest client hardware. We are therefore confident that our approach can handle the event load of even very large IT landscapes.
Smart Grid cybersecurity is one of the key ingredients for the successful and wide-scale adoption of the Smart Grid by utilities and governments around the world. The implementation of the Smart Grid relies mainly on the highly distributed sensing and communication functionalities of its components, such as Wireless Sensor Networks (WSNs), Phasor Measurement Units (PMUs), and other protection devices. This distributed nature and the high number of connected devices are the main challenges for implementing cybersecurity in the Smart Grid. For example, the North American Electric Reliability Corporation (NERC) issued the Critical Infrastructure Protection (CIP) standards (CIP-002 through CIP-009) to define cybersecurity requirements for critical power grid infrastructure. However, the NERC CIP standards do not specify cybersecurity for different communication technologies such as WSNs, fiber networks, and other network types. Implementing security mechanisms in WSNs is a challenging task due to the limited resources of the sensor devices. WSN security mechanisms should not only reduce the power consumption of the sensor devices, but also maintain the high reliability and throughput needed by Smart Grid applications. In this paper, we present a WSN cybersecurity mechanism suitable for smart grid monitoring applications. Our mechanism can detect and isolate various attacks in a smart grid environment, such as denial-of-sleep, forge, and replay attacks, in an energy-efficient way. Simulation results show that our mechanism outperforms existing techniques while meeting the NERC CIP requirements.
Dual Connectivity (DC) is one of the key technologies standardized in Release 12 of the 3GPP specifications for the Long Term Evolution (LTE) network. It increases per-user throughput by allowing the user equipment (UE) to maintain simultaneous connections with a master eNB (MeNB) and a secondary eNB (SeNB), which are inter-connected via non-ideal backhaul. In this paper, we focus on the DC use case in which the downlink U-plane data is split at the MeNB and transmitted to the UE via the MeNB and SeNB concurrently. In this case, an out-of-order packet delivery problem may occur at the UE due to the delay over the non-ideal backhaul link and the dynamics of the channel conditions over the MeNB-UE and SeNB-UE links, which introduces extra delay for re-ordering packets. As a solution, we propose to encode the source data at the MeNB with the RaptorQ FEC code and transmit the encoded symbols separately through the MeNB and SeNB. The out-of-order problem is effectively eliminated, since the UE can decode the original data as long as it receives enough encoded symbols from either the MeNB or the SeNB. We present a detailed protocol design for the RaptorQ-based concurrent transmission scheme, and simulation results illustrate the performance of the proposed scheme.
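The fountain-coding property that makes this work - any sufficiently large set of coded symbols decodes the source, regardless of which link delivered them or in what order - can be sketched with a toy random linear code over GF(2). RaptorQ itself (RFC 6330) uses a far more elaborate, systematic construction, so this is illustrative only; symbols here are small integers and coefficient vectors are stored as bitmasks.

```python
import random

def encode(source, seed):
    # each coded symbol is the XOR of a random nonzero subset of the
    # k source symbols; the mask records which symbols were combined
    rng = random.Random(seed)
    k = len(source)
    mask = 0
    while mask == 0:
        mask = rng.getrandbits(k)
    sym = 0
    for i in range(k):
        if mask >> i & 1:
            sym ^= source[i]
    return mask, sym

def decode(k, received):
    # Gaussian elimination over GF(2); succeeds once any k linearly
    # independent coded symbols have arrived, from either link
    solved = {}  # pivot bit -> (mask, symbol)
    for m, s in received:
        while m:
            i = (m & -m).bit_length() - 1  # lowest set bit = pivot
            if i in solved:
                pm, ps = solved[i]
                m ^= pm
                s ^= ps
            else:
                solved[i] = (m, s)
                break
    if len(solved) < k:
        return None  # not enough independent symbols yet
    # back-substitute from the highest pivot down
    out = [0] * k
    for i in sorted(solved, reverse=True):
        m, s = solved[i]
        for j in range(i + 1, k):
            if m >> j & 1:
                s ^= out[j]
        out[i] = s
    return out
```

For example, with source symbols `[5, 9]`, receiving the coded pairs `(0b01, 5)` and `(0b11, 12)` in any order recovers `[5, 9]`, which is exactly why the UE needs no re-ordering buffer.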
Secure routing over VANETs is a major issue due to their high-mobility environment. Because of the dynamic topology, routes are frequently updated and also suffer from link breaks caused by obstacles such as buildings, tunnels, and bridges. Frequent link breaks cause packet drops and thus degrade network performance. In VANETs it is very difficult to identify the reason for a packet drop, since it may also be caused by a security threat. A VANET is a type of wireless ad hoc network and suffers from the common attacks that exist for mobile ad hoc networks (MANETs), such as denial of service (DoS), black hole, gray hole, and Sybil attacks. Researchers have already developed various security mechanisms for secure routing over MANETs, but these solutions are not fully compatible with the unique attributes of VANETs: vehicles can communicate with each other (V2V) as well as with an infrastructure-based network (V2I). A solution is needed that secures routing for both types of communication. In this paper, a method for secure routing is introduced that can identify as well as eliminate such security threats.
Secure routing in the field of mobile ad hoc networks (MANETs) is one of the most flourishing areas of research. Devising a trustworthy security protocol for ad hoc routing is a challenging task due to unique network characteristics such as the lack of a central authority, rapid node mobility, frequent topology changes, an insecure operational environment, and confined availability of resources. Due to their low configuration and quick deployment, MANETs are well suited for emergency situations such as natural disasters or military applications. Therefore, data transfer between two nodes should necessarily involve security. A black-hole attack in a MANET is an offense caused by malicious nodes, which attract data packets by falsely advertising a fresh route to the destination. A clustering approach in the AODV routing protocol for the detection and prevention of black-hole attacks in MANETs is put forward. Every member of the cluster pings the cluster head once, to detect the difference between the number of data packets received and forwarded by a particular node. If a fault is perceived, all nodes exclude the malicious nodes from the network. The system performance is studied in terms of packet delivery ratio (PDR), end-to-end delay (ETD), throughput, and energy; simulation inferences are recorded using the ns-2 simulator.
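The cluster-head check described above - compare packets received against packets forwarded per node - can be sketched as follows. The counter format and the tolerance threshold are assumptions for illustration; a real deployment would also have to exempt nodes that are legitimate traffic destinations.

```python
def flag_black_holes(stats, tol=10):
    # stats maps node id -> (data packets received, data packets forwarded);
    # a node that receives many packets but forwards almost none is suspect
    return [node for node, (rx, fwd) in stats.items() if rx - fwd > tol]

stats = {"n1": (100, 97),   # normal relay: small loss only
         "n2": (100, 2)}    # drops nearly everything it receives
print(flag_black_holes(stats))   # ['n2']
```

Flagged nodes would then be broadcast by the cluster head so the rest of the cluster can route around them.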
A MANET is a group of wireless mobile nodes which cooperate in forwarding packets over wireless links. Due to the lack of infrastructure and the open nature of MANETs, security has become an essential and challenging issue. The mobile nature and selfishness of malicious nodes are critical causes of security problems. MANETs are particularly defenseless against security attacks such as black hole and gray hole attacks, and detecting black hole attacks is one of the key challenges. In this paper, we propose a secure AODV protocol (SAODV) for the detection and removal of black hole and gray hole attacks in MANETs. The proposed method is simulated using NS-2, and the results indicate that it is more secure than the existing one.
We develop and validate Internet path measurement techniques to distinguish congestion experienced when a flow self-induces congestion in the path from when a flow is affected by an already congested path. One application of this technique is for speed tests, when the user is affected by congestion either in the last mile or in an interconnect link. This difference is important because in the latter case, the user is constrained by their service plan (i.e., what they are paying for), and in the former case, they are constrained by forces outside of their control. We exploit TCP congestion control dynamics to distinguish these cases for Internet paths that are predominantly TCP traffic. In TCP terms, we re-articulate the question: was a TCP flow bottlenecked by an already congested (possibly interconnect) link, or did it induce congestion in an otherwise idle (possibly a last-mile) link? TCP congestion control affects the round-trip time (RTT) of packets within the flow (i.e., the flow RTT): an endpoint sends packets at higher throughput, increasing the occupancy of the bottleneck buffer, thereby increasing the RTT of packets in the flow. We show that two simple, statistical metrics derived from the flow RTT during the slow start period—its coefficient of variation, and the normalized difference between the maximum and minimum RTT—can robustly identify which type of congestion the flow encounters. We use extensive controlled experiments to demonstrate that our technique works with up to 90% accuracy. We also evaluate our techniques using two unique real-world datasets of TCP throughput measurements using Measurement Lab data and the Ark platform. We find up to 99% accuracy in detecting self-induced congestion, and up to 85% accuracy in detecting external congestion. 
Our results can benefit regulators of interconnection markets, content providers trying to improve customer service, and users trying to understand whether poor performance is something they can fix by upgrading their service tier.
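The two slow-start RTT statistics described above can be sketched in a few lines. The normalization of the max-min difference and the decision thresholds below are assumptions for illustration, not the paper's calibrated values.

```python
def congestion_metrics(rtts):
    # two statistics over flow RTT samples collected during TCP slow start:
    # coefficient of variation, and normalized (max - min) difference
    n = len(rtts)
    mean = sum(rtts) / n
    var = sum((r - mean) ** 2 for r in rtts) / n
    cov = var ** 0.5 / mean
    norm_diff = (max(rtts) - min(rtts)) / max(rtts)  # normalization choice is an assumption
    return cov, norm_diff

def classify(rtts, cov_thresh=0.2, diff_thresh=0.5):
    # self-induced congestion inflates RTT during slow start as the sender
    # fills an idle bottleneck buffer; an already-congested path yields
    # comparatively flat RTTs (thresholds here are illustrative)
    cov, nd = congestion_metrics(rtts)
    return "self-induced" if cov > cov_thresh and nd > diff_thresh else "external"
```

For example, a flat RTT series like `[10, 10.5, 11, 10.8]` ms classifies as external congestion, while a steeply growing series like `[10, 15, 25, 40]` ms classifies as self-induced.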
Network coding is a promising method that numerous investigators have advanced due to its significant advantages in improving the efficiency of data communication. In this work, we use simulations to assess the performance of various network topologies employing network coding. Comparing the results with and without network coding confirms that network coding can improve throughput, end-to-end delay, packet delivery rate (PDR), and reliability. This paper presents a comparative performance analysis of network coding techniques, namely XOR, linear network coding (LNC), and random linear network coding (RLNC). The results demonstrate that the XOR technique yields attractive outcomes and can substantially improve real-time performance metrics, i.e., throughput, end-to-end delay, and PDR. The analysis is carried out over both the packet size and the number of packets to be transmitted. The results also illustrate the benefits network coding provides across the studied networks.
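The XOR technique can be illustrated with the classic two-flow relay (butterfly) example - a sketch of the principle, not the paper's simulation setup: the relay broadcasts the XOR of two packets instead of forwarding each one separately, and each receiver cancels the packet it already overheard.

```python
def xor_bytes(a, b):
    # bitwise XOR of two equal-length packets
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"hello"           # packet from source 1
p2 = b"world"           # packet from source 2
coded = xor_bytes(p1, p2)   # relay sends one coded packet, not two

r1 = xor_bytes(coded, p1)   # receiver 1 overheard p1, recovers p2
r2 = xor_bytes(coded, p2)   # receiver 2 overheard p2, recovers p1
```

The relay halves its transmissions on the shared link, which is the source of the throughput and delay gains the abstract reports for XOR coding.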
This paper presents a 28nm SoC with a programmable FC-DNN accelerator design that demonstrates: (1) HW support to exploit data sparsity by eliding unnecessary computations (4× energy reduction); (2) improved algorithmic error tolerance using sign-magnitude number format for weights and datapath computation; (3) improved circuit-level timing violation tolerance in datapath logic via time-borrowing; (4) combined circuit and algorithmic resilience with Razor timing violation detection to reduce energy via VDD scaling or increase throughput via FCLK scaling; and (5) high classification accuracy (98.36% for the MNIST test set) while tolerating aggregate timing violation rates >10^-1. The accelerator achieves a minimum energy of 0.36μJ/pred at 667MHz, maximum throughput at 1.2GHz and 0.57μJ/pred, or a 10%-margined operating point at 1GHz and 0.58μJ/pred.
As the number of on-chip components increases, complex communication problems arise, and Network-on-Chip (NoC) interconnection architectures have become a promising way to solve them. However, providing a suitable test base to measure and verify the functionality of any NoC is compulsory. The Universal Verification Methodology (UVM) was introduced as a standardized and reusable methodology for verifying integrated circuit designs. In this research, a scalable and reconfigurable verification and benchmark environment for NoC is proposed.
The Internet of Things (IoT) is an emerging paradigm in information technology (IT) that integrates advancements in sensing, computing, and communication to offer enhanced services in everyday life. IoT is vulnerable to Sybil attacks, wherein an adversary fabricates fictitious identities or steals the identities of legitimate nodes. In this paper, we model Sybil attacks in IoT and evaluate their impact on performance. We also develop a defense mechanism based on behavioural profiling of nodes, and build an enhanced AODV (EAODV) protocol that uses this behavioural approach to obtain optimal routes. In EAODV, routes are selected based on trust value and hop count; Sybil nodes are identified and discarded based on feedback from neighbouring nodes. Evaluation of our protocol in the ns-2 simulator demonstrates the effectiveness of our approach in identifying and detecting Sybil nodes in an IoT network.
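A toy version of trust- and hop-count-based route selection is sketched below. The abstract does not specify how EAODV combines the two criteria, so the weighted score here is a hypothetical choice purely for illustration.

```python
def route_score(trust, hops, alpha=0.7):
    # hypothetical scoring: weight average trust (in [0, 1]) against
    # path length; shorter paths and higher trust both raise the score
    return alpha * trust + (1 - alpha) / hops  # hops >= 1

def best_route(routes):
    # routes: list of (route_id, average trust of nodes on route, hop count)
    return max(routes, key=lambda r: route_score(r[1], r[2]))[0]

# a trusted 3-hop route beats a poorly trusted 1-hop route:
print(best_route([("A", 0.9, 3), ("B", 0.4, 1)]))   # A
```

A route through a suspected Sybil node would carry a low trust value and lose such a comparison even when it is shorter, which matches the selection behaviour the abstract describes.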
Explosive naval mines pose a threat to ocean- and sea-faring vessels, both military and civilian. This work applies deep neural network (DNN) methods to the problem of detecting mine-like objects (MLOs) on the seafloor in side-scan sonar imagery. We explored how DNN depth, memory requirements, computation requirements, and training data distribution affect detection efficacy. A visualization technique (class activation map) was incorporated that aids a user in interpreting the model's behavior. We found that modest DNN model sizes yielded better accuracy (98%) than very simple DNN models (93%) and a support vector machine (78%). The largest DNN models achieved <1% efficacy increase at the cost of a 17x increase in trainable parameter count and computation requirements. In contrast to DNNs popularized for many-class image recognition tasks, the models for this task require far fewer computational resources (0.3% of the parameters) and are suitable for embedded use within an autonomous unmanned underwater vehicle.