Bibliography
Mobile Ad-hoc Network (MANET) is a prominent technology in the wireless networking field, in which mobile nodes operate in a distributed manner and collaborate with each other to provide multi-hop communication between source and destination nodes. Generally, the main assumption in a MANET is that each node is a trusted node. In a real scenario, however, there are unreliable nodes that perform the black hole attack: a misbehaving node attracts all traffic towards itself by falsely advertising a minimum path to the destination with a very high destination sequence number, and then drops all data packets. In this paper, we present different categories of black hole attack mitigation techniques and summarize various techniques along with the drawbacks that need to be considered while designing an efficient protocol.
Image encryption has been used by armies and governments to enable top-secret communication. Nowadays, it is frequently used to protect information in various civilian systems. To perform secure image encryption by means of various chaotic maps, in such a system a legitimate party may decrypt the image with the support of the encryption key. This reversible chaotic encryption technique makes use of Arnold's cat map, in which pixel shuffling confuses the image pixels over a number of iterations decided by the authorized image owner. This is followed by other chaotic encryption techniques, such as the Logistic map and the Tent map, which ensure secure image encryption. Simulation results show that the proposed system achieves better NPCR, UACI, MSE and PSNR values.
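As a minimal illustration of the pixel-shuffling stage described in this abstract, the sketch below applies Arnold's cat map, (x, y) → ((x + y) mod N, (x + 2y) mod N), to an N×N image, with the iteration count playing the role of the owner-chosen key. This is an illustrative sketch, not the paper's implementation, and it omits the Logistic- and Tent-map stages.

```python
def arnold_cat(img, iterations):
    """Shuffle an N x N image with Arnold's cat map:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
    n = len(img)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
        img = out
    return img

def arnold_cat_inverse(img, iterations):
    """Undo the shuffle by reading each pixel back from its mapped position."""
    n = len(img)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[x][y] = img[(x + y) % n][(x + 2 * y) % n]
        img = out
    return img
```

Because the map is area-preserving and invertible, decryption simply applies the inverse map the same number of times; note that the map is also periodic in the image size, so iterating long enough restores the original image.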
One challenge for engineered cyber physical systems (CPSs) is the possibility for a malicious intruder to change the data transmitted across the cyber channel as a means to degrade the performance of the physical system. In this paper, we consider a data injection attack on a cyber physical system. We propose a hybrid framework for detecting the presence of an attack and operating the plant in spite of the attack. Our method uses an observer-based detection mechanism and a passivity balance defense framework in the hybrid architecture. By switching the controller, passivity and exponential stability are established under the proposed framework.
Internet Protocol version 6 (IPv6) over Low-power Wireless Personal Area Networks (6LoWPAN) is extensively used in wireless sensor networks (WSNs) due to its ability to transmit IPv6 packets with low bandwidth and limited resources. 6LoWPAN has several operations in each layer. Most existing security challenges focus on the network layer, which is represented by its routing protocol for low-power and lossy networks (RPL). RPL components include WSN nodes with constrained resources; therefore, the exposure of RPL to various attacks may lead to network damage. A sinkhole attack is a routing attack that can affect the network topology. This paper investigates the existing mechanisms for detecting sinkhole attacks on RPL-based networks. This work categorizes and presents each mechanism according to certain aspects, and then discusses and compares their advantages and drawbacks with regard to resource consumption and false positive rate.
Proactive security reviews and test efforts are a necessary component of the software development lifecycle. Resource limitations often preclude reviewing the entire code base. Making informed decisions on what code to review can improve a team's ability to find and remove vulnerabilities. Risk-based attack surface approximation (RASA) is a technique that uses crash dump stack traces to predict what code may contain exploitable vulnerabilities. The goal of this research is to help software development teams prioritize security efforts by the efficient development of a risk-based attack surface approximation. We explore the use of RASA using Mozilla Firefox and Microsoft Windows stack traces from crash dumps. We create RASA at the file level for Firefox, in which 15.8% of the files that were part of the approximation contained 73.6% of the vulnerabilities seen for the product. We also explore the effect of random sampling of crashes on the approximation, as it may be impractical for organizations to store and process every crash received. We find that 10-fold random sampling of crashes at a rate of 10% resulted in 3% fewer vulnerabilities identified than using the entire set of stack traces for Mozilla Firefox. Sampling crashes in Windows 8.1 at a rate of 40% resulted in insignificant differences in vulnerability and file coverage as compared to a rate of 100%.
Complex systems are prevalent in many fields such as finance, security and industry. A fundamental problem in system management is to perform diagnosis in case of system failure such that the causal anomalies, i.e., root causes, can be identified for system debugging and repair. Recently, the invariant network has proven to be a powerful tool for characterizing complex system behaviors. In an invariant network, a node represents a system component, and an edge indicates a stable interaction between two components. Recent approaches have shown that by modeling fault propagation in the invariant network, causal anomalies can be effectively discovered. Despite their success, the existing methods have a major limitation: they typically assume there is only a single, global fault propagation in the entire network. However, in real-world large-scale complex systems, it is more common for multiple fault propagations to grow simultaneously and locally within different node clusters and jointly define the system failure status. Inspired by this key observation, we propose a two-phase framework to identify and rank causal anomalies. In the first phase, a probabilistic clustering is performed to uncover impaired node clusters in the invariant network. Then, in the second phase, a low-rank network diffusion model is designed to backtrack causal anomalies in different impaired clusters. Extensive experimental results on real-life datasets demonstrate the effectiveness of our method.
In the Energy Internet mode, a large volume of alarm information is generated when equipment exceptions and multiple faults occur in a large power grid, which seriously affects information collection and fault analysis and delays accident handling by the monitors. To address this, this paper proposes a method for power grid monitoring that monitors and diagnoses faults in real time: it constructs an equipment fault logical model based on five-section alarm information, builds a standard fault information set, and uses a matching algorithm to realize fault information optimization, fault equipment location, fault type diagnosis, and analysis of false-report and missing-report messages. The validity and practicality of the proposed method were verified with an actual case; it can shorten the time needed to obtain and analyze fault information, accelerate accident handling, and help ensure the safe and stable operation of the power grid.
This paper presents a quantitative study of adaptive filtering to cancel the EMG artifact from ECG signals. The proposed adaptive algorithm operates in real time: it adjusts its coefficients simultaneously with signal acquisition by minimizing a cost function, the summation of weighted least square errors (LSE). The obtained results prove the success and effectiveness of the proposed algorithm. The best results were obtained for a forgetting factor of 0.99 and a regularization parameter of 0.02.
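The weighted-LSE recursion this abstract describes corresponds to the standard recursive least squares (RLS) update. The sketch below is a hypothetical reconstruction of such an adaptive canceller, using the abstract's forgetting factor (0.99) and regularization parameter (0.02) as defaults; the paper's filter order and signal model are not given, so those are assumptions.

```python
import numpy as np

def rls_cancel(primary, reference, order=4, lam=0.99, delta=0.02):
    """Adaptive noise cancellation with RLS.
    primary   : desired signal + interference (e.g. ECG + EMG artifact)
    reference : signal correlated with the interference only
    Returns the error signal, i.e. the cleaned output."""
    w = np.zeros(order)           # adaptive filter weights
    P = np.eye(order) / delta     # inverse correlation matrix, delta-regularized
    xbuf = np.zeros(order)        # sliding window of reference samples
    e = np.zeros(len(primary))
    for i in range(len(primary)):
        xbuf = np.roll(xbuf, 1)
        xbuf[0] = reference[i]
        e[i] = primary[i] - w @ xbuf        # a-priori error
        pi = P @ xbuf
        k = pi / (lam + xbuf @ pi)          # gain vector
        w = w + k * e[i]                    # weight update
        P = (P - np.outer(k, pi)) / lam     # inverse-correlation update
    return e
```

With a reference that is FIR-filtered into the interference, the weights converge to that FIR response and the error signal approaches the clean ECG component.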
To address the risk of data privacy disclosure in the classification process, a Random Forest algorithm under differential privacy, named DPRF-gini, is proposed in this paper. In the process of building each decision tree, the algorithm first perturbs feature selection and attribute partitioning by using the exponential mechanism, and then meets the requirement of differential privacy by adding Laplace noise to the leaf nodes. Empirical results show that, compared with the original algorithm, data privacy protection is further enhanced while accuracy is only slightly reduced.
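The two differential-privacy ingredients named in this abstract can be sketched as follows. The Laplace scale 1/ε for sensitivity-1 counts and the exp(ε·score/(2Δ)) weighting of the exponential mechanism are the standard forms; the privacy-budget split and quality scores used by DPRF-gini are not specified in the abstract, so this is only an illustrative sketch.

```python
import numpy as np

def noisy_leaf_counts(counts, epsilon, seed=None):
    """Laplace mechanism: per-class counts have sensitivity 1,
    so the noise scale is 1/epsilon."""
    rng = np.random.default_rng(seed)
    return np.asarray(counts, float) + rng.laplace(0.0, 1.0 / epsilon, len(counts))

def exp_mech_choice(scores, epsilon, sensitivity=1.0, seed=None):
    """Exponential mechanism: pick index i with probability
    proportional to exp(epsilon * score_i / (2 * sensitivity))."""
    rng = np.random.default_rng(seed)
    s = np.asarray(scores, float)
    logits = epsilon * s / (2.0 * sensitivity)
    logits -= logits.max()          # subtract max for numerical stability
    p = np.exp(logits)
    p /= p.sum()
    return int(rng.choice(len(s), p=p))
```

A leaf prediction is then the argmax of the noisy counts; small ε means more noise and stronger privacy, which is the accuracy trade-off the abstract reports.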
Dual Connectivity (DC) is one of the key technologies standardized in Release 12 of the 3GPP specifications for the Long Term Evolution (LTE) network. It attempts to increase the per-user throughput by allowing the user equipment (UE) to maintain connections with the MeNB (master eNB) and SeNB (secondary eNB) simultaneously, which are inter-connected via non-ideal backhaul. In this paper, we focus on one of the use cases of DC whereby the downlink U-plane data is split at the MeNB and transmitted to the UE via the associated MeNB and SeNB concurrently. In this case, an out-of-order packet delivery problem may occur at the UE due to the delay over the non-ideal backhaul link, as well as the dynamics of channel conditions over the MeNB-UE and SeNB-UE links, which introduces extra delay for re-ordering the packets. As a solution, we propose to adopt the RaptorQ FEC code to encode the source data at the MeNB; the encoded symbols are then separately transmitted through the MeNB and SeNB. The out-of-order problem can be effectively eliminated since the UE can decode the original data as long as it receives enough encoded symbols from either the MeNB or SeNB. We present a detailed protocol design for the RaptorQ-based concurrent transmission scheme, and simulation results are provided to illustrate the performance of the proposed scheme.
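RaptorQ itself is an intricate standardized code (RFC 6330), but the core property this scheme relies on, that the receiver can decode once it holds enough encoded symbols from either link regardless of arrival order, can be illustrated with a toy random linear fountain code over GF(2). The masks, values, and function names below are purely hypothetical.

```python
def encode_symbol(source, mask):
    """XOR together the source symbols selected by the bit mask."""
    val = 0
    for i, s in enumerate(source):
        if mask >> i & 1:
            val ^= s
    return mask, val

def decode(symbols, k):
    """Gaussian elimination over GF(2). `symbols` is any collection of
    (mask, value) pairs, in any order; returns the k source symbols,
    or None if the received symbols do not yet have full rank."""
    pivots = {}                       # lowest set bit -> reduced row
    for mask, val in symbols:
        while mask:
            low = mask & -mask
            if low in pivots:
                pm, pv = pivots[low]  # eliminate against existing pivot row
                mask ^= pm
                val ^= pv
            else:
                pivots[low] = (mask, val)
                break
    if len(pivots) < k:
        return None                   # keep waiting for more symbols
    x = [0] * k
    for low in sorted(pivots, reverse=True):  # back-substitute, high bit first
        mask, val = pivots[low]
        bit = low.bit_length() - 1
        for q in range(bit + 1, k):
            if mask >> q & 1:
                val ^= x[q]
        x[bit] = val
    return x
```

Any k linearly independent symbols suffice, no matter which link delivered them or in what order, which is exactly why per-packet reordering buffers become unnecessary.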
A mobile ad hoc network (MANET) is a wireless, infrastructure-less network in which every node operates in a fully self-organizing capacity; operating cooperatively in groups, however, strains link quality and other resources. Due to its unique features, various challenges arise in MANETs when routing and its security come into play. The review demonstrates that the impact of failures during information transmission has not been considered in the existing research: most strategies for ad hoc networks simply determine a path and transmit the data, which leads to packet drops in case of failures and thus low dependability. The majority of existing research has also neglected the rejoining of root nodes to the network, and while most existing techniques detect failures, path re-routing has likewise been neglected. Here, we propose a path re-routing method for managing authorized nodes and group keys in an ad hoc environment. Two securing schemes, named 2ACK and EGSR, are proposed, which can be integrated with most routing protocols. Path re-routing has the ability to reduce the ratio of dropped packets. A comparative analysis clearly shows that the proposed technique outperforms the available techniques in terms of various quality metrics.
This paper proposes a splittable feedback shift register structure, based on a study of the operating characteristics of about 70 kinds of cryptographic algorithms which shows that a "different operations, similar structure" reconfigurable design is feasible. Depending on the configuration information, the proposed structure can implement multiplication in the finite field GF(2^n), the multiply/divide linear feedback shift register, and other operations. Finally, a logic synthesis based on a 55nm CMOS standard-cell library shows that the proposed structure achieves a hardware resource saving of nearly 32% and an average power saving of nearly 55% without a significant increase in the critical delay. Therefore, the "different operations, similar structure" reconfigurable design is a new design method, and the proposed feedback shift register structure can serve as an important processing unit for a coarse-grained reconfigurable cryptologic array.
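One of the operations mentioned in this abstract, multiplication in GF(2^n), maps naturally onto a Galois-configuration LFSR: each shift of the register multiplies its contents by x modulo the field polynomial. The sketch below uses the AES field GF(2^8) with polynomial x^8 + x^4 + x^3 + x + 1 (0x11B) as an arbitrary example; the paper's actual field sizes and polynomials are not stated.

```python
def lfsr_step(state, poly=0x11B, n=8):
    """One Galois-LFSR shift = multiply the register contents by x in GF(2^n).
    The feedback taps correspond to reduction by the field polynomial."""
    state <<= 1
    if state >> n & 1:
        state ^= poly
    return state

def gf_mult(a, b, poly=0x11B, n=8):
    """Shift-and-add multiplication in GF(2^n), built on the LFSR step."""
    r = 0
    for _ in range(n):
        if b & 1:
            r ^= a          # accumulate a * x^i for each set bit of b
        b >>= 1
        a = lfsr_step(a, poly, n)
    return r
```

For example, gf_mult(0x53, 0xCA) yields 0x01 in this field (the multiplicative-inverse pair used as an example in FIPS-197), so the same register can serve both multiply and divide configurations.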
Reliability and robustness of Internet of Things (IoT)-cloud-based communication are important issues for the prospective development of the IoT concept. In this regard, a robust and unique client-to-cloud communication physical layer is required. The Physical Unclonable Function (PUF) is regarded as suitable physics-based random identification hardware, but it suffers from reliability problems. In this paper, we propose novel hardware concepts, together with an analysis method in CMOS technology, to improve the hardware-based robustness of the generated PUF word from its first point of generation to the last cloud-interfacing point in a client. Moreover, we present a spectral analysis for an inexpensive, high-yield implementation in a 65nm technology generation. We also offer robust monitoring concepts for the PUF-interfacing communication physical-layer hardware.
The new Tor network (version 6.0.5) can help domestic users easily bypass censorship ("climb over the wall"), and of course criminals may also use it to visit deep web and dark web sites. This paper analyzes the core technology of the new Tor network, its new traffic obfuscation technology based on the meek plug-in, and uses a real instance to verify the new Tor network's fast connectivity. On the basis of analyzing the traffic obfuscation mechanism and Tor-based network crime, it puts forward measures to prevent the Tor network from being used to commit network crime.
Recognizing Families In the Wild (RFIW) is a large-scale, multi-track automatic kinship recognition evaluation, supporting both kinship verification and family classification on scales much larger than ever before. It was organized as a Data Challenge Workshop hosted in conjunction with ACM Multimedia 2017, and was achieved with the largest image collection that supports kin-based vision tasks. We use this manuscript to summarize the evaluation protocols, the progress made, some technical background, the performance ratings of the algorithms used, and promising directions for both researchers and engineers to take next in this line of work.
Choosing how to write natural language scenarios is challenging, because stakeholders may over-generalize their descriptions or overlook or be unaware of alternate scenarios. In security, for example, this can result in weak security constraints that are too general, or missing constraints. Another challenge is that analysts are unclear on where to stop generating new scenarios. In this paper, we introduce the Multifactor Quality Method (MQM) to help requirements analysts empirically collect system constraints in scenarios based on elicited expert preferences. The method combines quantitative statistical analysis to measure system quality with qualitative coding to extract new requirements. The method is bootstrapped with minimal analyst expertise in the domain affected by the quality area, and then guides an analyst toward selecting expert-recommended requirements to monotonically increase system quality. We report the results of applying the method to security. These include 550 requirements elicited from 69 security experts during a bootstrapping stage, and a subsequent evaluation of these results in a verification stage with 45 security experts to measure the overall improvement of the new requirements. Security experts in our studies have an average of 10 years of experience. Our results show that, using our method, we detect an increase in the security quality ratings collected in the verification stage. Finally, we discuss how our proposed method helps to improve security requirements elicitation, analysis, and measurement.
Most social media platforms generate a massive amount of raw data at a slow pace. In contrast, the Internet Relay Chat (IRC) protocol, which has been extensively used by the hacker community to discuss and share knowledge, facilitates fast-paced, real-time text communication. Previous studies of malicious IRC behavior analysis were mostly offline or batch processing, which results in long response times for data collection, pre-processing, and threat detection. However, threats can use the latest vulnerabilities to exploit systems (e.g., zero-day attacks) and can spread fast through IRC channels, and current IRC channel monitoring techniques cannot provide the required fast detection and alerting. In this paper, we present an alternative approach that overcomes this limitation by providing real-time, autonomic threat detection in IRC channels. We demonstrate the capabilities of our approach using as an example the Shadow Brokers' leaked exploit (the exploit leveraged by the WannaCry ransomware attack), which was captured and detected by our framework.
To overcome the current cybersecurity challenges of protecting our cyberspace and applications, we present an innovative cloud-based architecture to offer resilient Dynamic Data Driven Application Systems (DDDAS) as a cloud service, which we refer to as resilient DDDAS as a Service (rDaaS). This architecture integrates Service Oriented Architecture (SOA) and DDDAS paradigms to offer the next generation of resilient and agile DDDAS-based cyber applications, particularly convenient for critical applications such as battle and crisis management. Using the cloud infrastructure to offer resilient DDDAS routines and applications, large-scale DDDAS applications can be developed by users from anywhere, using any device (mobile or stationary) with Internet connectivity. The rDaaS provides transformative capabilities to achieve superior situation awareness (i.e., assessment, visualization, and understanding), mission planning and execution, and resilient operations.
UHF Radio-Frequency Identification (RFID) technology nowadays offers a viable solution for low-level environmental monitoring of connected critical infrastructures to be protected from both physical threats and cyber attacks. An RFID sensor network was developed within the H2020 SCISSOR project, addressing the design of both the hardware components, a new family of multi-purpose wireless boards, and the control software handling the network topology. The hierarchical system is able to detect complex, potentially dangerous events such as unauthorized access to a restricted area, anomalies of the electrical equipment, or unusual variations of environmental parameters. The first real-world test-bed has been deployed inside an operational smart grid on Favignana Island. Currently, the network is fully working and remotely accessible.