Biblio
Publicly available blacklists are popular tools for capturing and spreading information about misbehaving entities on the Internet. In some cases, their straightforward utilization leads to many false positives. In this work, we propose a system that combines blacklists with network flow data while introducing automated evaluation techniques to avoid reporting unreliable alerts. The core of the system is formed by an Adaptive Filter together with an Evaluator module. The assessment of the system was performed on data obtained from a national backbone network. The results show the contribution of such a system to the reduction of unreliable alerts.
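As an illustration of the general idea of corroborating blacklist hits with flow data (a sketch of our own, not the system's actual Adaptive Filter or Evaluator logic; the thresholds and record fields are assumptions), a blacklist hit could be reported only when enough supporting flow evidence is observed:

```python
# Minimal sketch: suppress a blacklist alert unless flow records corroborate it.
# Thresholds and the flow-record fields ("src", "dst", "bytes") are illustrative.

def evaluate_alert(blacklisted_ip: str, flows: list,
                   min_flows: int = 5, min_bytes: int = 10_000) -> bool:
    """Report an alert only when flow evidence corroborates the blacklist hit."""
    related = [f for f in flows if blacklisted_ip in (f["src"], f["dst"])]
    total_bytes = sum(f["bytes"] for f in related)
    return len(related) >= min_flows and total_bytes >= min_bytes

# Six observed flows involving the blacklisted address: enough evidence to report.
flows = [{"src": "203.0.113.7", "dst": "192.0.2.10", "bytes": 4_000}] * 6
print(evaluate_alert("203.0.113.7", flows))   # True
```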
Bluetooth Classic (BT) remains the de facto connectivity technology in car stereo systems, wireless headsets, laptops, and a plethora of wearables, especially for applications that require high data rates, such as audio streaming, voice calling, tethering, etc. Unlike in Bluetooth Low Energy (BLE), where address randomization is a feature available to manufacturers, BT addresses are not randomized because they are largely believed to be immune to tracking attacks. We analyze the design of BT and devise a robust de-anonymization technique that hinges on the apparently benign information leaking from frame encoding, to infer a piconet's clock, hopping sequence, and ultimately the Upper Address Part (UAP) of the master device's physical address, which are never exchanged in clear. Used together with the Lower Address Part (LAP), which is present in all transmitted frames, this enables tracking of the piconet master, thereby debunking the privacy guarantees of BT. We validate this attack by developing the first Software-defined Radio (SDR) based sniffer that allows full BT spectrum analysis (79 MHz) and implements the proposed de-anonymization technique. We study the feasibility of privacy attacks with multiple testbeds, considering different numbers of devices, traffic regimes, and communication ranges. We demonstrate that it is possible to track BT devices up to 85 meters from the sniffer, and achieve more than 80% device identification accuracy within less than 1 second of sniffing and 100% detection within less than 4 seconds. Lastly, we study the identified privacy attack in the wild, capturing BT traffic at a road junction over 5 days, demonstrating that our system can re-identify hundreds of users and infer their commuting patterns.
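To illustrate why recovering the UAP matters (a minimal sketch of the tracking threat, not the paper's sniffer; the observation tuples are hypothetical), the 48-bit BD_ADDR splits into NAP, UAP, and LAP fields, and an inferred UAP combined with the always-visible LAP is enough to re-identify the same piconet master across captures:

```python
# Sketch of the tracking threat: group sightings by the (UAP, LAP) pair.
# BD_ADDR layout: NAP (upper 16 bits) | UAP (8 bits) | LAP (lower 24 bits).

from collections import defaultdict

def split_bd_addr(bd_addr: int):
    """Split a 48-bit BD_ADDR into its NAP, UAP, and LAP fields."""
    lap = bd_addr & 0xFFFFFF           # lower 24 bits, present in every access code
    uap = (bd_addr >> 24) & 0xFF       # upper address part, never sent in clear
    nap = (bd_addr >> 32) & 0xFFFF     # non-significant address part
    return nap, uap, lap

# Hypothetical sniffer output: (timestamp, inferred_uap, observed_lap).
observations = [
    (0.0, 0x9E, 0x21F3A4),
    (3.1, 0x9E, 0x21F3A4),
    (7.4, 0x5C, 0x0A77D2),
]

# 32 bits of the address suffice to re-identify a device across time.
sightings = defaultdict(list)
for ts, uap, lap in observations:
    sightings[(uap, lap)].append(ts)

for (uap, lap), times in sightings.items():
    print(f"device UAP=0x{uap:02X} LAP=0x{lap:06X} seen {len(times)} times")
```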
With the development of IoT and 5G networks, the demand for the next-generation intelligent transportation system has been growing at a rapid pace. Dynamic mapping has been considered one of the key technologies for reducing traffic accidents and congestion in the intelligent transportation system. However, as the number of vehicles keeps growing, a huge volume of mapping traffic may overload the central cloud, leading to serious performance degradation. In this paper, we propose and prototype a CUPS (control and user plane separation)-based edge computing architecture for dynamic mapping and quantify its benefits through prototyping. Our proposal has two merits: (i) we can mitigate the overhead on the networks and the central cloud, because only abstracted global dynamic mapping information needs to be sent from the edge servers to the central cloud; (ii) we can reduce the response latency, since the dynamic mapping traffic can be isolated from other data traffic by being generated and distributed from a local edge server that is deployed closer to the vehicles than the central server in the cloud. The experimental results show that our system achieves more than a fourfold throughput improvement and a 67.8% reduction in response latency compared to the conventional central cloud-based approach. Although these results are still preliminary evaluations obtained with our prototype system, we believe that our proposed architecture gives insight into how CUPS and edge computing can be utilized to enable efficient dynamic mapping applications.
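A toy sketch of the "abstract at the edge, aggregate in the cloud" idea described above (our illustration under assumed data formats, not the prototype's implementation): an edge server could collapse raw vehicle reports into coarse grid-cell occupancy before uploading, so the central cloud receives only compact dynamic-map updates:

```python
# Sketch: summarize raw vehicle positions at the edge before sending to the cloud.
# The 10 m grid-cell size and the (x, y) report format are illustrative assumptions.

from collections import Counter

def abstract_local_map(vehicle_reports, cell_size_m: float = 10.0):
    """Collapse raw (x, y) vehicle positions into per-grid-cell counts."""
    cells = Counter()
    for x, y in vehicle_reports:
        cells[(int(x // cell_size_m), int(y // cell_size_m))] += 1
    return dict(cells)   # compact summary sent from the edge to the central cloud

reports = [(3.2, 7.9), (4.8, 8.1), (52.0, 7.5)]   # raw reports handled at the edge
print(abstract_local_map(reports))                 # {(0, 0): 2, (5, 0): 1}
```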
Software Defined Networking (SDN) provides opportunities for flexible and dynamic traffic engineering. However, current SDN systems base their routing strategies on traditional mechanisms that lack support for real-time modification and use resources less efficiently. To overcome these limitations, this paper uses deep learning to improve routing computation in SDN. It proposes the Convolutional Deep Reinforcement Learning (CoDRL) model, a deep reinforcement learning agent for routing optimization in SDN that minimizes the mean network delay and packet loss rate. The CoDRL model consists of a Deep Deterministic Policy Gradient (DDPG) agent coupled with a convolutional layer. The proposed model automatically adapts packet routing using network data obtained through the SDN controller, and provides routing configurations that attempt to reduce network congestion and minimize the mean network delay. The proposed deep agent thus exhibits good convergence towards routing configurations that improve network performance.
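A minimal PyTorch sketch of the kind of actor network this describes (layer sizes and names are our assumptions, not the paper's exact CoDRL architecture): a DDPG-style actor reads an N x N traffic matrix through a convolutional layer and outputs per-link weights that a controller could translate into routing configurations:

```python
# Sketch of a DDPG-style actor with a convolutional front end for SDN routing.
# The critic, replay buffer, and training loop of DDPG are omitted for brevity.

import torch
import torch.nn as nn

class ConvActor(nn.Module):
    def __init__(self, num_nodes: int, num_links: int):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)   # spatial traffic features
        self.fc1 = nn.Linear(8 * num_nodes * num_nodes, 128)
        self.fc2 = nn.Linear(128, num_links)

    def forward(self, traffic_matrix: torch.Tensor) -> torch.Tensor:
        # traffic_matrix: (batch, 1, N, N) demand observed via the SDN controller
        x = torch.relu(self.conv(traffic_matrix))
        x = torch.relu(self.fc1(x.flatten(start_dim=1)))
        return torch.sigmoid(self.fc2(x))                        # link weights in [0, 1]

actor = ConvActor(num_nodes=10, num_links=30)
weights = actor(torch.rand(1, 1, 10, 10))   # one routing action for one observed state
print(weights.shape)                         # torch.Size([1, 30])
```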
In multi-tenant datacenters, the hardware may be homogeneous but the traffic often is not. For instance, customers who pay an equal amount of money can get an unequal share of the bottleneck capacity when they do not open the same number of TCP connections. To address this problem, several recent proposals try to manipulate the traffic that TCP sends from the VMs. VCC and AC/DC are two new mechanisms that let the hypervisor control traffic by influencing the TCP receiver window (rwnd). This avoids changing the guest OS, but has limitations (it is not possible to make TCP increase its rate faster than it normally would). Seawall, on the other hand, completely rewrites TCP's congestion control, achieving fairness but requiring significant changes to both the hypervisor and the guest OS. There seems to be a need for a middle ground: a method to control TCP's sending rate without requiring a complete redesign of its congestion control. We introduce a minimally-invasive solution that is flexible enough to cater for needs ranging from weighted fairness in multi-tenant datacenters to potentially offering Internet-wide benefits from reduced interflow competition.
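A toy sketch of the rwnd-clamping idea (in the spirit of VCC and AC/DC but not their implementation; function and parameter names are ours): because a TCP sender never keeps more than min(cwnd, rwnd) bytes in flight per RTT, a hypervisor that rewrites the advertised receive window in ACKs can cap a tenant's rate, though it can only ever slow a flow down:

```python
# Sketch: compute the receive window to advertise so a flow stays under a target rate.
# The hypervisor would overwrite the rwnd field of forwarded ACKs with this value.

def rwnd_for_target_rate(target_rate_bps: float, rtt_s: float,
                         receiver_rwnd: int, mss: int = 1460) -> int:
    """Receive window (bytes) that caps the flow at roughly target_rate_bps."""
    cap = int(target_rate_bps / 8 * rtt_s)   # bytes allowed in flight per RTT
    cap = max(mss, cap)                      # never starve the flow completely
    return min(receiver_rwnd, cap)           # can only shrink the window, never grow it

# Example: cap a flow at 100 Mbit/s on a 1 ms datacenter RTT.
print(rwnd_for_target_rate(100e6, 0.001, receiver_rwnd=65535))  # -> 12500 bytes
```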
The idea of using multiple paths to transport TCP traffic is very attractive due to the potential benefits it may offer for both redundancy and better utilization of available resources through load balancing. Fixed and mobile network providers frequently employ load balancers that use multiple paths on a per-flow or per-destination level, but very seldom on a per-packet level. Despite the benefits of packet-level load balancing mechanisms (e.g., low computational complexity and high bandwidth utilization), network providers cannot use them, mainly because of TCP packet reorderings that harm TCP performance. Emerging network architectures also support multiple paths, but they face the same obstacle in balancing their load across multiple paths. Indeed, packet-level load balancing research is paralyzed by the reordering vulnerability of TCP. A couple of TCP variants exist that deal with the TCP packet reordering problem, but due to their lack of end-to-end transparency they were not widely deployed or adopted. In this paper, we revisit TCP's packet reordering problem and present a transparent and lightweight algorithm, Out-of-Order Robustness for TCP with Transparent Acknowledgment (ACK) Intervention (ORTA), to deal with out-of-order deliveries. ORTA works as a transparent thin layer below TCP and hides the harmful side effects of packet-level load balancing. ORTA monitors all packets of a TCP flow and uses ACK traffic shaping, without any modifications to either the TCP sender or receiver side. Since it is transparent to TCP end-points, it can be easily deployed on TCP sender end-hosts (EHs), gateway (GW) routers, or access points (APs). ORTA opens the door for network providers to use per-packet load balancing. The proposed ORTA algorithm is implemented and tested in NS-2. The results show that ORTA can prevent the TCP performance decrease otherwise caused by per-packet load balancing.
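A simplified sketch of the general idea behind transparent ACK intervention (our illustration, not ORTA's actual algorithm; the hold timer and class names are assumptions): an in-path shim holds duplicate ACKs that are likely caused by reordering and releases them only if the gap persists, so the sender avoids spurious fast retransmits while genuine losses are still recovered:

```python
# Sketch: hold reordering-induced duplicate ACKs; release them only if the gap persists.
# A real deployment would also run flush_expired() on a timer, not only on ACK arrival.

import time

class AckShim:
    def __init__(self, hold_time_s: float = 0.05):
        self.hold_time_s = hold_time_s
        self.last_ack = 0
        self.held = []            # list of (arrival_time, ack_no) awaiting a decision

    def on_ack(self, ack_no: int, now: float, forward) -> None:
        if ack_no > self.last_ack:
            # Cumulative ACK advanced: the "hole" was just reordering, so the held
            # duplicate ACKs are obsolete and can be dropped silently.
            self.last_ack = ack_no
            self.held.clear()
            forward(ack_no)
        else:
            # Duplicate ACK: hold it instead of forwarding it immediately.
            self.held.append((now, ack_no))
            self.flush_expired(now, forward)

    def flush_expired(self, now: float, forward) -> None:
        # If the gap persists beyond hold_time_s, assume a real loss and release the
        # duplicates so the sender can still perform fast retransmit.
        while self.held and now - self.held[0][0] >= self.hold_time_s:
            forward(self.held.pop(0)[1])

shim = AckShim()
shim.on_ack(1000, time.time(), forward=lambda a: print("forward ACK", a))
```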
In recent years, the integration of Passive Optical Networks (PON) and WiMAX (Worldwide Interoperability for Microwave Access) networks has been attracting huge interest among researchers. The continuous demand for large bandwidth with a wider coverage area is the key driver of this technology. This integration has led to a high-speed and cost-efficient solution for Internet accessibility. This paper investigates issues related to traffic grooming, routing, and resource allocation in such hybrid networks. An Elastic Optical Network forms the backbone and is integrated with WiMAX. In this novel approach, traffic grooming is carried out using the light-trail technique to minimize the bandwidth blocking ratio and also reduce network resource consumption. The simulation is performed on different network topologies, wherein the traffic is routed through three modes, namely the pure wireless network, the wireless-optical/optical-wireless network, and the pure optical network, keeping network congestion in mind. The results confirm a reduction in the bandwidth blocking ratio in all the given networks, coupled with minimal network resource utilization.
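For reference, the bandwidth blocking ratio used as the main metric above is commonly defined as the share of requested bandwidth that could not be provisioned (our restatement, since the abstract does not define it):

```python
# Sketch: bandwidth blocking ratio = blocked requested bandwidth / total requested bandwidth.

def bandwidth_blocking_ratio(requests):
    """requests: list of (requested_bandwidth, accepted: bool) tuples."""
    total = sum(bw for bw, _ in requests)
    blocked = sum(bw for bw, accepted in requests if not accepted)
    return blocked / total if total else 0.0

demo = [(10, True), (40, False), (50, True)]   # one 40-unit request was blocked
print(bandwidth_blocking_ratio(demo))           # 0.4
```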
Communication between two Internet hosts using parallel connections may result in unwanted interference between the connections. In this dissertation, we propose a sender-side solution to address this problem by letting the congestion controllers of the different connections collaborate, correctly taking congestion control logic into account. Real-life experiments and simulations show that our solution works for a wide variety of congestion control mechanisms, provides great flexibility when allocating application traffic to the connections, and results in lower queuing delay and less packet loss.
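A minimal sketch of the coupling idea (our own illustration, not the dissertation's algorithm; the weights are an assumed application-level input): flows sharing a bottleneck contribute their congestion windows to one aggregate, which is then re-divided among them by weight, so opening more connections does not increase aggressiveness:

```python
# Sketch: redistribute the aggregate congestion window among coupled flows by weight.

def coupled_allocations(cwnds: dict, weights: dict) -> dict:
    """Split the aggregate congestion window among flows according to their weights."""
    aggregate = sum(cwnds.values())              # total in-flight data the group may use
    total_weight = sum(weights.values())
    return {flow: aggregate * weights[flow] / total_weight for flow in cwnds}

# Three flows of one application group; flow "a" is twice as important as the others.
cwnds = {"a": 20, "b": 30, "c": 10}          # segments, as reported by each controller
weights = {"a": 2, "b": 1, "c": 1}
print(coupled_allocations(cwnds, weights))   # {'a': 30.0, 'b': 15.0, 'c': 15.0}
```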
An increasing number of Internet-scale applications, such as video streaming, incur huge amounts of wide-area traffic. Such traffic over the unreliable Internet, without bandwidth guarantees, suffers unpredictable network performance, which is unappealing to application providers. Fortunately, Internet giants like Google and Microsoft are increasingly deploying private wide area networks (WANs) to connect their global datacenters. Such high-speed private WANs are reliable and can provide predictable network performance. In this paper, we propose a new type of service, inter-datacenter network as a service (iDaaS), where traditional application providers can reserve bandwidth from those Internet giants to guarantee their wide-area traffic. Specifically, we design a bandwidth trading market among multiple iDaaS providers and application providers, and concentrate on the essential bandwidth pricing problem. The challenging issue is that the bandwidth price of each iDaaS provider is influenced not only by other iDaaS providers but also by the application providers. To address this issue, we characterize the interaction between iDaaS providers and application providers using a Stackelberg game model, and analyze the existence and uniqueness of the equilibrium. We further present an efficient bandwidth pricing algorithm by blending the advantages of a geometrical Nash bargaining solution and the demand segmentation method. For comparison, we present two bandwidth reservation algorithms, where each iDaaS provider's bandwidth is reserved in a weighted fair manner and a max-min fair manner, respectively. Finally, we conduct comprehensive trace-driven experiments. The evaluation results show that our proposed algorithms not only ensure the revenue of iDaaS providers, but also provide bandwidth guarantees for application providers at a lower per-unit bandwidth price.
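A schematic Stackelberg formulation of the pricing interaction (our illustration; the symbols p_i, x_ij, U_j, and C_i are introduced here and are not the paper's exact model):

```latex
% Schematic leader-follower structure (requires amsmath).
% iDaaS provider i (leader) sets a unit price p_i subject to capacity C_i;
% application provider j (follower) then reserves bandwidth x_{ij} given the prices.
\[
  \text{Follower } j:\quad
  x_{\cdot j}^{*}(p) \in \arg\max_{x_{\cdot j}\ge 0}\;
  U_j\!\Big(\textstyle\sum_i x_{ij}\Big) - \sum_i p_i x_{ij}
\]
\[
  \text{Leader } i:\quad
  \max_{p_i \ge 0}\; p_i \sum_j x_{ij}^{*}(p)
  \quad \text{s.t.}\quad \sum_j x_{ij}^{*}(p) \le C_i
\]
% A Stackelberg equilibrium is a price vector p* from which no provider can
% unilaterally deviate to increase its revenue, given the followers' best
% responses x*(p*); the paper studies its existence and uniqueness.
```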
Satellite networks play an important role in combining space networks with ground networks and in achieving global coverage of the Internet. However, due to limited bandwidth resources, space backbone networks are more likely than ground networks to become victims of DDoS attacks. We therefore hypothesize an attack scenario in which DDoS attackers launch reflection amplification attacks, colluding with terminal devices that access the space backbone network, and exhaust its bandwidth resources, resulting in degraded data transmission and service delivery. Finally, we propose some straightforward countermeasures to provide starting points for future researchers.
Denial of service (DoS) is the process of injecting malicious packets into a network. An intrusion detection system (IDS) is a system used to investigate malicious packets in the network. Software-defined networking (SDN) physically separates the control plane and the data plane; the control plane is moved to a centralized controller, which makes decisions in the network from a global view. Combining an IDS with SDN allows malicious packets to be prevented more efficiently thanks to the advantage of SDN's global view. The IDS needs to communicate with switches to have access to all end-to-end traffic in the network. High traffic on the link between the switches and the IDS results in congestion, which delays the detection and prevention of malicious traffic. To address this problem, we propose the historical database (Hdb), a scheme that reduces the traffic between switches and the IDS based on the historical information of a sender. Simulations show that, on average, the traffic mirrored to the IDS is reduced by 54.1% compared to conventional schemes.
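A simplified sketch of the history-based mirroring idea (our illustration, not the paper's Hdb design; the verdict labels and function names are assumptions): the controller keeps a per-sender verdict and mirrors flows to the IDS only when no trustworthy history exists, which cuts the traffic on the switch-to-IDS link:

```python
# Sketch: mirror flows to the IDS only for senders without a known-benign history.

historical_db = {}   # sender IP -> "benign" or "malicious", filled by past IDS verdicts

def should_mirror(sender_ip: str) -> bool:
    """Mirror to the IDS only if we have no trustworthy history for this sender."""
    return historical_db.get(sender_ip) != "benign"

def record_verdict(sender_ip: str, verdict: str) -> None:
    """Store the IDS verdict so future flows from this sender can skip mirroring."""
    historical_db[sender_ip] = verdict

# The first flow from 10.0.0.5 is mirrored; once the IDS marks the sender benign,
# later flows bypass the IDS link and relieve the congestion described above.
print(should_mirror("10.0.0.5"))        # True  -> mirror and let the IDS decide
record_verdict("10.0.0.5", "benign")
print(should_mirror("10.0.0.5"))        # False -> forward without mirroring
```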
An Intrusion Detection System (IDS) is an application that monitors network or system activity and detects dangerous operations. Implementing an IDS on a Software-Defined Networking (SDN) architecture has drawbacks: it may degrade the network's Quality of Service (QoS), so that the network can no longer properly serve the existing traffic. Throughput, delay, and packet loss are important QoS measurement parameters. Snort IDS and Bro IDS are common tools for deploying an IDS in a network. They differ, among other things, in their detection methods: Snort IDS uses signature-based detection, while Bro IDS uses anomaly-based detection. This difference affects how they handle the network traffic passing through them. In this research, we compare both tools. The comparison is performed by testing parameters such as throughput, delay, packet loss, CPU usage, and memory usage. The tests show that Bro IDS outperforms Snort IDS on the throughput, delay, and packet loss parameters; however, Bro requires more CPU and memory resources than Snort.
A dynamic overlay system is presented for supporting the transport service needs of dispersed computing applications that move data and/or code between network computation points and end users in IoT or IoBT. The Network Backhaul Layered Architecture (Nebula) system combines network discovery and QoS monitoring, dynamic path optimization, online learning, and per-hop tunnel transport protocol optimization and synthesis over paths to carry application traffic flows transparently over overlay tunnels. An overview is provided of Nebula's overlay system, software architecture, API, and implementation in the NRL CORE network emulator. Experimental emulation results demonstrate the performance benefits that Nebula provides under challenging networking conditions.