Bibliography
Tactical networks are generally simple ad hoc networks by design; however, this simple design often becomes complicated when heterogeneous wireless technologies have to work together to enable seamless multi-hop communication across multiple sessions. In recent years, there have been significant advances in computational, radio, localization, and networking technologies, which motivate a clean-slate design of the control plane for multi-hop tactical wireless networks. In this paper, we develop a global network optimization framework that characterizes the control plane for multi-hop wireless tactical networks. This framework abstracts the underlying complexity of tactical wireless networks and orchestrates the control plane functions. Specifically, we develop a cross-layer optimization framework that characterizes the interaction between the physical, link, and network layers. By applying the framework to a throughput maximization problem, we show how the proposed framework can be utilized to solve a broad range of wireless multi-hop tactical networking problems.
Named Data Networking (NDN) is a clean-slate future Internet architecture. NDN provides intelligent data retrieval using the principles of name-based symmetric forwarding of Interest/Data packets and in-network caching. The continually increasing demand for rapid dissemination of large-scale scientific data is driving the use of NDN in data-intensive science experiments. In this paper, we establish an intercontinental NDN testbed. On the testbed, an NDN-based application targeting climate science as an example data-intensive science application is designed and implemented, with features that differentiate it from previous studies. We experimentally justify the use of NDN for climate science on the intercontinental network through performance comparisons between classical delivery techniques and NDN-based climate data delivery.
The A5-1 algorithm is a stream cipher used to encrypt voice data in GSM, and real-time requirements demand a high-performance realization. Traditional implementations on FPGAs or ASICs cannot achieve a good trade-off among performance, cost, and flexibility. To this end, this paper introduces CGRCA to implement A5-1 and, in order to optimize performance and resource consumption, proposes a resource-based path seeking (RPS) algorithm to develop an advanced implementation. Experimental results show that the optimal throughput of A5-1 on CGRCA is 162.87 Mbps at a frequency of 162.87 MHz, and the set-up time is merely 87 cycles, which is the best among similar works.
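As an illustration of the cipher being accelerated, the following minimal Python sketch generates A5-1 keystream bits using the publicly documented register lengths, feedback taps, and majority-clocking rule; the key/IV loading phase is omitted and the register states are assumed to be pre-loaded. This is independent of the paper's CGRCA mapping.

```python
# Minimal A5/1 keystream sketch (illustrative; key/IV setup omitted).

def majority(a, b, c):
    return (a & b) | (a & c) | (b & c)

def step(reg, length, taps):
    # Shift the LFSR left by one bit, feeding back the XOR of its taps.
    fb = 0
    for t in taps:
        fb ^= (reg >> t) & 1
    return ((reg << 1) | fb) & ((1 << length) - 1)

def a51_keystream(r1, r2, r3, nbits):
    # r1, r2, r3: pre-loaded 19-, 22-, and 23-bit register states.
    out = []
    for _ in range(nbits):
        m = majority((r1 >> 8) & 1, (r2 >> 10) & 1, (r3 >> 10) & 1)
        if ((r1 >> 8) & 1) == m:
            r1 = step(r1, 19, (13, 16, 17, 18))
        if ((r2 >> 10) & 1) == m:
            r2 = step(r2, 22, (20, 21))
        if ((r3 >> 10) & 1) == m:
            r3 = step(r3, 23, (7, 20, 21, 22))
        out.append(((r1 >> 18) ^ (r2 >> 21) ^ (r3 >> 22)) & 1)
    return out
```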
Supervisory Control and Data Acquisition (SCADA) communications are often subjected to sophisticated cyber-attacks, largely because their static system characteristics let an attacker easily profile the target system(s), thereby impacting Critical Infrastructures (CI). In this paper, a novel approach to mitigating such static vulnerabilities is proposed: a Moving Target Defense (MTD) strategy for a power grid SCADA environment that leverages the existing communication network with an end-to-end IP-hopping technique among trusted peers. The main contribution is the design and implementation of the MTD architecture on Iowa State's PowerCyber testbed under targeted cyber-attacks, without compromising the availability of the SCADA system, together with a study of delay and throughput characteristics for different hopping rates in a realistic environment. Finally, we study two cases and provide mitigations for potential weaknesses of the proposed mechanism, and we propose incorporating port mutation to further increase attack complexity as part of future work.
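For intuition, a minimal sketch of end-to-end IP hopping between trusted peers might look as follows: both sides derive the same pseudo-random address for each time slot from a pre-shared key, so no hop schedule is ever exchanged. The subnet, slot length, and HMAC construction here are illustrative assumptions, not the scheme deployed on the PowerCyber testbed.

```python
# Hypothetical synchronized IP-hopping sketch (illustrative parameters).
import hashlib
import hmac
import time

SUBNET = "10.0.0."      # assumed /24 hopping range
SLOT_SECONDS = 5        # assumed hopping interval

def current_hop_address(shared_key: bytes, slot_seconds: int = SLOT_SECONDS) -> str:
    # Both peers compute the same slot index from synchronized clocks.
    slot = int(time.time() // slot_seconds)
    digest = hmac.new(shared_key, slot.to_bytes(8, "big"), hashlib.sha256).digest()
    host = 1 + digest[0] % 254      # map into 1..254, avoiding .0 and .255
    return SUBNET + str(host)
```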
Security threats such as jamming and route manipulation can have significant consequences on the performance of modern wireless networks. To increase the efficacy and stealthiness of such threats, a number of extremely challenging, next-generation cross-layer attacks have been recently unveiled. Although existing research has thoroughly addressed many single-layer attacks, the problem of detecting and mitigating cross-layer attacks remains unsolved. For this reason, in this paper we propose a novel framework to analyze and address cross-layer attacks in wireless networks. Specifically, our framework consists of a detection component and a mitigation component. The attack detection component is based on a Bayesian learning detection scheme that constructs a model of observed evidence to identify stealthy attack activities. The mitigation component comprises a scheme that achieves the desired trade-off between security and performance. We specialize and evaluate the proposed framework by considering a specific cross-layer attack that uses jamming as an auxiliary tool to achieve route manipulation. Simulations and experimental results obtained with a testbed made up of USRP software-defined radios demonstrate the effectiveness of the proposed methodology.
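As a toy illustration of the detection component's flavor, the sketch below performs naive Bayesian updates of the belief that an attack is under way as cross-layer evidence (e.g., a packet-delivery-ratio drop, an unexpected route change) arrives; the priors and likelihoods are made-up numbers, not the paper's learned model.

```python
# Toy Bayesian evidence accumulation (illustrative numbers only).

def update_belief(prior_attack, likelihoods, observation):
    # likelihoods maps observation -> (P(obs | attack), P(obs | benign)).
    p_attack, p_benign = likelihoods[observation]
    joint_attack = prior_attack * p_attack
    joint_benign = (1 - prior_attack) * p_benign
    return joint_attack / (joint_attack + joint_benign)

likelihoods = {"pdr_drop": (0.8, 0.1), "route_change": (0.6, 0.2)}
belief = 0.05   # assumed prior probability of an attack
for obs in ("pdr_drop", "route_change", "pdr_drop"):
    belief = update_belief(belief, likelihoods, obs)
print(f"posterior attack belief: {belief:.3f}")
```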
Cognitive radio networks (CRNs) enable secondary users (SUs) to make use of licensed spectrum without interfering with the signals generated by primary users (PUs). To avoid such interference, an SU is required to sense the medium for a period of time and use it only if the band is perceived to be idle. In this context, SU requests are encrypted prior to transmission, and the strength of security in CRNs is directly proportional to the length of the encryption key. If a PU request on arrival finds an SU request being either encrypted or transmitted, the SU is preempted from service. However, excessive sensing time for the detection of free spectrum by SUs, as well as extended periods during which the CRN is in an insecure state, have an adverse impact on network performance. To this end, a generalized stochastic Petri net (GSPN) is proposed in order to investigate the sensing vs. security vs. performance trade-offs, leading to efficient use of the spectrum band. Typical numerical simulation experiments are carried out using the Möbius Petri net package, and associated interpretations are made.
This paper presents a joint optimization algorithm for coverage and capacity in heterogeneous cellular networks. A joint optimization objective related to capacity loss, considering both coverage holes and overlap areas based on the power density distribution, is proposed. The optimization problem is NP-hard because the adjustable parameters mix discrete and continuous variables, so the bacterial foraging (BF) algorithm is improved, based on network performance analysis results, to search in a more effective direction than random selection. Simulation results show that the optimization objective is feasible and achieves a better effect than the traditional method.
Denial of Service (DoS) attacks are among the major threats and hardest security problems on the Internet. Of particular concern are Distributed Denial of Service (DDoS) attacks, whose impact can be proportionally severe. With little or no advance warning, an attacker can easily exhaust the computing resources of its victim within a short period of time. In this paper, we study the impact of a UDP flood attack on TCP throughput, round-trip time, and CPU utilization for a web server on a recent Linux platform, Ubuntu 13. This paper also evaluates the impact of various defense mechanisms, including Access Control Lists (ACLs), Threshold Limit, Reverse Path Forwarding (IP Verify), and Network Load Balancing. Threshold Limit is found to be the most effective defense.
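A Threshold Limit defense is essentially a rate limiter on inbound UDP; a minimal token-bucket sketch of the idea is given below, with assumed rates rather than the thresholds evaluated in the paper.

```python
# Illustrative token-bucket rate limiter for inbound UDP packets.
import time

class TokenBucket:
    def __init__(self, rate_pps: float, burst: float):
        self.rate = rate_pps            # sustained packets per second
        self.capacity = burst           # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True                 # forward the packet
        return False                    # threshold exceeded: drop
```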
Distributed Denial of Service (DDoS) attacks are among the major threats and hardest security problems on the Internet. In this paper, we study the impact of a UDP flood attack on TCP throughput, round-trip time, and CPU utilization on the latest versions of the Windows and Linux platforms, namely Windows Server 2012 and Ubuntu 13. This paper also evaluates several defense mechanisms, including Access Control Lists (ACLs), Threshold Limit, Reverse Path Forwarding (IP Verify), and Network Load Balancing. The Threshold Limit defense gave better results than the other solutions.
We build upon the clean-slate, holistic approach to the design of secure protocols for wireless ad-hoc networks proposed in part one. We consider the case when the nodes are not synchronized, but instead have local clocks that are relatively affine. In addition, the network is open in that nodes can enter at arbitrary times. To account for this new behavior, we make substantial revisions to the protocol in part one. We define a game between protocols for open, unsynchronized nodes and the strategies of adversarial nodes. We show that the same guarantees in part one also apply in this game: the protocol not only achieves the max-min utility, but the min-max utility as well. That is, there is a saddle point in the game, and furthermore, the adversarial nodes are effectively limited to either jamming or conforming with the protocol.
Turbo codes have been an important subject in coding theory since 1993. These codes achieve a low Bit Error Rate (BER), but decoding complexity and delay are major challenges. Moreover, considering the complexity and delay of separate blocks for coding and encryption, combining these processes can guarantee both the security and the reliability of the communication system. In this paper, a secure decoding algorithm that runs in parallel on General-Purpose Graphics Processing Units (GPGPUs) is proposed. This is the first prototype of a fast, parallel Joint Channel-Security Coding (JCSC) system. Despite the added encryption process, the algorithm maintains the desired BER and increases decoding speed. We consider several techniques for parallelism: (1) distributing the decoding load of a codeword across multiple cores; (2) decoding several codewords simultaneously; (3) using protection techniques to prevent performance degradation. We also propose two optimizations to increase decoding speed: (1) improved memory access; (2) the use of new GPU features such as concurrent kernel execution and advanced atomics to compensate for buffering latency.
Traditional encryption techniques require packet overhead, introduce processing delay, and suffer severe quality-of-service deterioration due to fades and interference in wireless channels. These issues considerably reduce the effective transmission data rate (throughput) in wireless communications, where data rate under limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and bit-error probability in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but instead goes through a signaling process (a designed transformation) with the plaintext of the portion selected for encryption. We also propose to apply error correction coding solely to the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit rate. We focus on validating the signaling-encryption mechanism with Hamming and convolutional error correction coding through an end-to-end system-level simulation study. The average bit-error probability and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading channels and compared to those of the conventional Advanced Encryption Standard (AES).
Storage area networking is driving commodity data center switches to support lossless Ethernet (DCB). Unfortunately, to enable DCB for all traffic on arbitrary network topologies, we must address several problems that can arise in lossless networks, e.g., large buffering delays, unfairness, head of line blocking, and deadlock. We propose TCP-Bolt, a TCP variant that not only addresses the first three problems but reduces flow completion times by as much as 70%. We also introduce a simple, practical deadlock-free routing scheme that eliminates deadlock while achieving aggregate network throughput within 15% of ECMP routing. This small compromise in potential routing capacity is well worth the gains in flow completion time. We note that our results on deadlock-free routing are also of independent interest to the storage area networking community. Further, as our hardware testbed illustrates, these gains are achievable today, without hardware changes to switches or NICs.
As interconnect delay becomes a larger fraction of the clock cycle time, the conventional global stalling mechanism used to correct errors in general synchronous circuits is no longer feasible, because of the expensive timing cost for the stalling signal to travel across the circuit. In this paper, we propose recovery-based resilient latency-insensitive systems (RLISs) that efficiently integrate error-recovery techniques with latency-insensitive design to replace global stalling. We first demonstrate a baseline RLIS, as the motivation for our work, that uses an additional output buffer to guarantee that only correct data can enter the output channel. However, this baseline RLIS suffers performance degradation even when errors do not occur. We then propose a novel improved RLIS that allows erroneous data to propagate in the system. Equipped with improved queues that prevent the accumulation of erroneous data, the improved RLIS retains the system's performance. We provide a theoretical study that analyzes the impact of errors on system performance and the queue sizing problem, and we prove that the improved RLIS performs no worse than the global stalling mechanism. Experimental results show that the improved RLIS achieves 40.3% and 3.1% throughput improvements over the baseline RLIS and the (infeasible) global stalling mechanism, respectively, with less than 10% hardware overhead.
Mobile ad hoc networks (MANETs) are highly susceptible to security attacks due to their dynamic topology, resource- and energy-constrained operation, limited physical security, and lack of infrastructure. The misleading routing attack (MIRA) in MANETs intends to delay packets as long as possible so that timeouts are generated at the source when packets do not arrive in time. Its main objective is to increase delay and network overhead; it is a variation of the sinkhole attack. In this paper, we propose a detection scheme that detects malicious nodes both at route discovery and during packet transmission. Simulation results for the MIRA attack indicate that although delay increases by 91.30%, throughput is not affected, which suggests that the misleading routing attack is difficult to detect. When applied to the misleading routing attack, the proposed detection scheme yields a significant decrease in delay.
Hash tables form a core component of many algorithms as well as network devices. Because of their large size, they often require a combined memory model, in which some of the elements are stored in a fast memory (for example, cache or on-chip SRAM) while others are stored in much slower memory (namely, the main memory or off-chip DRAM). This makes the implementation of real-life hash tables particularly delicate, as a suboptimal choice of the hashing scheme parameters may result in a higher average query time, and therefore in a lower throughput. In this paper, we focus on multiple-choice hash tables. Given the number of choices, we study the tradeoff between the load of a hash table and its average lookup time. The problem is solved by analyzing an equivalent problem: the expected maximum matching size of a random bipartite graph with a fixed left-side vertex degree. Given two choices, we provide exact results for any finite system, and also deduce asymptotic results as the fast memory size increases. In addition, we further consider other variants of this problem and model the impact of several parameters. Finally, we evaluate the performance of our models on Internet backbone traces, and illustrate the impact of the memories speed difference on the choice of parameters. In particular, we show that the common intuition of entirely avoiding slow memory accesses by using highly efficient schemes (namely, with many fast-memory choices) is not always optimal.
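To make the setting concrete, here is a minimal two-choice table in the spirit of the analysis: each key hashes to one bucket in fast memory and one in slow memory, insertions prefer the fast bucket, and lookups that miss fast memory pay the slow-memory access. Bucket counts and sizes are illustrative assumptions, not the paper's parameters.

```python
# Minimal two-choice (fast/slow memory) hash table sketch.
import hashlib

class TwoChoiceTable:
    def __init__(self, fast_buckets: int, slow_buckets: int, bucket_size: int = 4):
        self.fast = [[] for _ in range(fast_buckets)]
        self.slow = [[] for _ in range(slow_buckets)]
        self.bucket_size = bucket_size

    def _hashes(self, key: str):
        # Split one digest into independent bucket indices for each memory.
        d = hashlib.sha256(key.encode()).digest()
        return (int.from_bytes(d[:4], "big") % len(self.fast),
                int.from_bytes(d[4:8], "big") % len(self.slow))

    def insert(self, key: str) -> str:
        h_fast, h_slow = self._hashes(key)
        if len(self.fast[h_fast]) < self.bucket_size:
            self.fast[h_fast].append(key)   # fast-memory path
            return "fast"
        if len(self.slow[h_slow]) < self.bucket_size:
            self.slow[h_slow].append(key)   # overflow to slow memory
            return "slow"
        return "overflow"                   # both candidate buckets full
```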
The emergence of new network applications, such as the network intrusion detection system and packet-level accounting, requires packet classification to report all matched rules instead of only the best matched rule. Although several schemes have been proposed recently to address the multimatch packet classification problem, most of them require either huge memory or expensive ternary content addressable memory (TCAM) to store the intermediate data structure, or they suffer from steep performance degradation under certain types of classifiers. In this paper, we decompose the operation of multimatch packet classification from the complicated multidimensional search to several single-dimensional searches, and present an asynchronous pipeline architecture based on a signature tree structure to combine the intermediate results returned from single-dimensional searches. By spreading edges of the signature tree across multiple hash tables at different stages, the pipeline can achieve a high throughput via the interstage parallel access to hash tables. To exploit further intrastage parallelism, two edge-grouping algorithms are designed to evenly divide the edges associated with each stage into multiple work-conserving hash tables. To avoid collisions involved in hash table lookup, a hybrid perfect hash table construction scheme is proposed. Extensive simulation using realistic classifiers and traffic traces shows that the proposed pipeline architecture outperforms HyperCuts and B2PC schemes in classification speed by at least one order of magnitude, while having a similar storage requirement. Particularly, with different types of classifiers of 4K rules, the proposed pipeline architecture is able to achieve a throughput between 26.8 and 93.1 Gb/s using perfect hash tables.
This paper proposes a new cross-layer packet scheduling scheme for multimedia traffic in a satellite Long Term Evolution (LTE) network that adopts MIMO technology. The satellite LTE air interface will provide global coverage and hence complement its terrestrial counterpart in delivering mobile services (especially multimedia services) to users across the globe. A dynamic packet scheduling scheme is essential to effectively utilize the limited resources available in satellite LTE networks without compromising the Quality of Service (QoS) demands of multimedia traffic. This paper proposes a new scheduling algorithm, the Cross-layer Based Queue-Aware (CBQA) Scheduler, that provides a good trade-off among QoS, fairness, and throughput. The new scheduler is compared to existing ones through simulations using various performance indices. A land-mobile dual-polarized GEO satellite system is considered for this work.
Automated server parameter tuning is crucial to the performance and availability of Internet applications hosted in cloud environments. It is challenging due to the high dynamics and burstiness of workloads, multi-tier service architectures, and virtualized server infrastructure. In this paper, we investigate automated and agile server parameter tuning for maximizing the effective throughput of multi-tier Internet applications. A recent study proposed a reinforcement learning based server parameter tuning approach for minimizing the average response time of multi-tier applications. Reinforcement learning determines the parameter tuning direction by trial and error rather than producing the quantitative values needed for agile tuning, and it relies on a predefined adjustment value for each tuning action; however, it is nontrivial or even infeasible to find an optimal value under highly dynamic and bursty workloads. We design a neural fuzzy control based approach that combines the strengths of fast online learning and the self-adaptiveness of neural networks and fuzzy control. Due to its model independence, it is robust to highly dynamic and bursty workloads, and it is agile in server parameter tuning due to its quantitative control outputs. We implemented the new approach on a testbed of a virtualized data center hosting the RUBiS and WikiBench benchmark applications. Experimental results demonstrate that the new approach significantly outperforms the reinforcement learning based approach in both improving effective system throughput and minimizing average response time.
Theft or loss of a mobile device is an information security risk, as it can result in the loss of confidential personal data. Traditional cryptographic algorithms are not suitable for resource-constrained handheld devices. In this paper, we have developed an efficient and user-friendly tool called "NCRYPT" for the Android platform. The "NCRYPT" application secures data at rest on Android, making it inaccessible to unauthorized users. It is based on a lightweight encryption scheme, Hummingbird-2. The application provides secure storage through password-based authentication, so that an adversary cannot access the confidential data stored on the mobile device. The cryptographic key is derived through the password-based key generation method PBKDF2 from the standard SUN JCE cryptographic provider. Various encryption tools based on AES or DES are available in the market; the reported tool is based on Hummingbird-2 and is faster than most other existing schemes. It is also resistant to most attacks applicable to block and stream ciphers. Hummingbird-2 is coded in C and embedded in the Android platform via JNI (Java Native Interface) for faster execution. The application offers a choice of encrypting the entire SD card or selected files on the smartphone, protecting the personal or confidential information on such devices.
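The key-derivation step described above maps directly onto the standard PBKDF2 primitive; a minimal sketch using Python's hashlib (as a stand-in for the SUN JCE provider used on Android) is shown below, with an assumed salt size and iteration count.

```python
# PBKDF2 key derivation sketch; Hummingbird-2 uses a 128-bit key.
import hashlib
import os

def derive_key(password: str, salt: bytes = None, iterations: int = 10_000):
    salt = salt if salt is not None else os.urandom(16)    # assumed 16-byte salt
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                              iterations, dklen=16)        # 128-bit key
    return key, salt
```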
In this paper we present WiMesh, a software tool we developed during the last ten years of research in the field of multi-radio wireless mesh networks. WiMesh serves two main purposes: (i) to run different algorithms for the assignment of channels, transmission rates, and power to the available network radios; (ii) to automatically set up and run ns-3 simulations based on the network configuration returned by such algorithms. WiMesh consists of three libraries and three corresponding utilities that allow researchers to easily conduct experiments. All the utilities accept as input an XML configuration file in which a number of options can be specified. WiMesh is freely available to the research community, with the purpose of easing the development of new algorithms and the verification of their performance.
Wireless mesh networks (WMNs) are attracting more and more real-time applications. Such applications are constrained in terms of Quality of Service (QoS). Existing works in this area are mostly designed for mobile ad hoc networks, which, unlike WMNs, are mainly sensitive to energy and mobility. WMNs have their own specific characteristics (e.g., static routers and heavy traffic loads), which require dedicated QoS protocols. This paper proposes a novel traffic regulation scheme for multimedia support in WMNs. The proposed scheme regulates the traffic sending rate according to the network state, based on the buffer evolution at mesh routers and on the priority of each traffic type. By monitoring the buffer evolution at mesh routers, our scheme is able to predict possible congestion, or QoS violations, early enough before they occur; each flow is then regulated according to its priority and QoS requirements. The idea behind the proposed scheme is to keep buffers lightly loaded in order to minimize queuing delays and avoid congestion. Moreover, regulation is performed smoothly in order to ensure the continuity of real-time and interactive services. We use an interval type-2 fuzzy logic system (IT2 FLS), known for its suitability to uncertain environments, to make appropriate regulation decisions. The performance of our scheme is demonstrated through extensive simulations at different network and traffic load scales.
The Wireless Mesh Network (WMN) is a promising wireless network architecture with the potential to provide last-mile connectivity. Considerable research has been carried out on various WMN issues such as design, performance, and security. Given the increasing interest in WMNs and the use of smart devices with bandwidth-hungry applications, WMNs must be designed with the objective of energy-efficient communication. The goal of this paper is to summarize the importance of energy efficiency in WMNs and to review various techniques for achieving energy-efficient solutions.
We characterize the secrecy level of communication under Uncoordinated Frequency Hopping, a spread spectrum scheme in which a transmitter and a receiver randomly hop through a set of frequencies with the goal of deceiving an adversary. In our work, the goal of the legitimate parties is to land on a given frequency without the adversarial eavesdroppers doing so, thereby being able to communicate securely in that period, which may be used for secret-key exchange. We also consider the effect on secrecy of the availability of friendly jammers that can be used to obstruct eavesdroppers by causing them interference. Our results show that tuning the number of frequencies and adding friendly jammers are effective countermeasures against eavesdroppers.
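Under a deliberately simple uniform model (our assumption, not the paper's exact formulation), the per-slot secrecy probability can be computed directly: the receiver lands on the transmitter's frequency with probability 1/N, and each of E single-channel eavesdroppers independently misses it with probability (N-1)/N.

```python
# Per-slot secrecy probability under an assumed uniform single-channel model.

def secret_slot_probability(n_freqs: int, n_eavesdroppers: int) -> float:
    p_meet = 1.0 / n_freqs                                    # tx and rx coincide
    p_unseen = ((n_freqs - 1) / n_freqs) ** n_eavesdroppers   # no eavesdropper there
    return p_meet * p_unseen

print(secret_slot_probability(50, 3))   # e.g., 50 frequencies, 3 eavesdroppers
```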
The emerging paradigm of cloud computing, e.g., Amazon Elastic Compute Cloud (EC2), promises a highly flexible yet robust environment for large-scale applications. Ideally, while multiple virtual machines (VMs) share the same physical resources (e.g., CPUs, caches, DRAM, and I/O devices), each application should be allocated an independently managed VM and isolated from the others. Unfortunately, the absence of physical isolation inevitably opens the door to a number of security threats. In this paper, we demonstrate in EC2 a new type of security vulnerability caused by competition between virtual I/O workloads: by leveraging the competition for shared resources, an adversary could intentionally slow down the execution of a targeted application in a VM that shares the same hardware. In particular, we focus on I/O resources such as hard-drive throughput and network bandwidth, which are critical for data-intensive applications. We design and implement Swiper, a framework that uses a carefully designed workload to incur significant delays on the targeted application and VM with minimum cost (i.e., resource consumption). We conduct a comprehensive set of experiments in EC2, which clearly demonstrate that Swiper is capable of significantly slowing down various server applications while consuming a small amount of resources.