Biblio
The concept of Virtualized Network Functions (VNFs) aims to move Network Functions (NFs) out of dedicated hardware devices into software that runs on commodity hardware. A single NF consists of multiple VNF instances, usually running on virtual machines in a cloud infrastructure. The elastic management of an NF refers to load management across the VNF instances and the autonomic scaling of the number of VNF instances as the load on the NF changes. In this paper, we present EL-SEC, an autonomic framework to elastically manage security NFs on a virtualized infrastructure. As a use case, we deploy the Snort Intrusion Detection System as the NF on the GENI testbed. Concepts from control theory are used to create an Elastic Manager, which implements various controllers - in this paper, Proportional Integral (PI) and Proportional Integral Derivative (PID) - to direct traffic across the VNF Snort instances by monitoring the current load. RINA (a clean-slate Recursive InterNetwork Architecture) is used to build a distributed application that monitors load and collects Snort alerts, which are processed by the Elastic Manager and an Attack Analyzer, respectively. Software Defined Networking (SDN) is used to steer traffic through the VNF instances, and to block attack traffic. Our results show that virtualized security NFs can be easily deployed using our EL-SEC framework. With the help of real-time graphs, we show that PI and PID controllers can be used to easily scale the system, which leads to quicker detection of attacks.
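The abstract does not give the controller equations; the sketch below shows what a discrete-time PID controller for scaling VNF instances toward a target per-instance load might look like (class name, gains, and setpoint are hypothetical, not from the paper; a PI controller is the special case kd = 0).

```python
# Hypothetical sketch of a discrete-time PID controller that scales the
# number of VNF (e.g., Snort) instances toward a target per-instance load.
class PIDScaler:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint      # desired load per instance, e.g., 0.7
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured_load, dt):
        """Return a suggested change in instance count for this interval."""
        error = measured_load - self.setpoint
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Positive output -> overloaded -> add instances; negative -> remove.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

scaler = PIDScaler(kp=2.0, ki=0.5, kd=0.1, setpoint=0.7)
delta = scaler.update(measured_load=0.9, dt=1.0)
instances = max(1, round(3 + delta))  # clamp: keep at least one instance
```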
Despite decades of research on Internet security, we constantly hear about mega data breaches and malware infections affecting hundreds of millions of hosts. The key reason is that the current threat model of the Internet relies on two assumptions that no longer hold true: (1) web servers hosting the content are secure, and (2) each Internet connection starts at the original content provider and terminates at the content consumer. Today, Internet security is merely patched on top of the TCP/IP protocol stack. To achieve comprehensive security for the Internet, we believe a clean-slate approach employing a content-based security model must be adopted. Named Data Networking (NDN), envisioned as a next-generation Internet architecture built on a content-centric communication model, is a step in this direction. NDN is being designed with security as a key requirement, so as to support content integrity, authenticity, confidentiality, and privacy. Meeting this requirement, however, raises several challenges, especially in large operational environments and in resource-constrained networks. In this paper, we explore the challenges in achieving comprehensive content security in NDN and propose a research agenda to address some of them.
The gap is widening between the processor clock speeds of end-system architectures and network throughput capabilities. It is now physically possible to provide single-flow throughput of up to 100 Gbps, and 400 Gbps will soon be possible. Most current research into high-speed data networking focuses on managing expanding network capabilities within datacenter Local Area Networks (LANs) or on efficiently multiplexing millions of relatively small flows through a Wide Area Network (WAN). However, datacenter hyper-convergence places high-throughput networking workloads on general-purpose hardware, and distributed High-Performance Computing (HPC) applications require time-sensitive, high-throughput end-to-end flows (also referred to as "elephant flows") over WANs. For these applications, the bottleneck is often the end-system rather than the intervening network. Since the end-system bottleneck was uncovered, many techniques have been developed that address this mismatch with varying degrees of effectiveness. In this survey, we describe the most promising techniques, beginning with network architectures and NIC design, continuing with operating system and end-system architectures, and concluding with clean-slate protocol design.
Dynamic data race detectors are valuable tools for testing and validating concurrent software, but to achieve good performance they are typically implemented using sophisticated concurrent algorithms. Thus, they are ironically prone to the exact same kind of concurrency bugs they are designed to detect. To address this problem, we have developed VerifiedFT, a clean-slate redesign of the FastTrack race detector [19]. The VerifiedFT analysis provides the same precision guarantee as FastTrack, but is simpler to implement correctly and efficiently, enabling us to mechanically verify an implementation of its core algorithm using CIVL [27]. Moreover, VerifiedFT provides these correctness guarantees without sacrificing any performance over current state-of-the-art (but complex and unverified) FastTrack implementations for Java.
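For readers unfamiliar with FastTrack's mechanism, the sketch below illustrates the epoch-based write-write check that such analyses build on. It is a heavily simplified, hypothetical illustration (real FastTrack/VerifiedFT also track reads and lock/fork/join synchronization), not the verified algorithm from the paper.

```python
# Heavily simplified sketch of a FastTrack-style write-write race check.
# Each thread carries a vector clock; a variable remembers only the epoch
# (clock@thread) of its last write. Reads and synchronization are omitted.
class ThreadState:
    def __init__(self, tid, nthreads):
        self.tid = tid
        self.vc = [0] * nthreads   # vector clock C_t
        self.vc[tid] = 1

    def epoch(self):
        return (self.tid, self.vc[self.tid])   # epoch c@t

last_write = {}  # variable name -> epoch of its last write

def on_write(t, var):
    if var in last_write:
        wtid, wclk = last_write[var]
        # Race unless the previous write happens-before this thread's state.
        if wclk > t.vc[wtid]:
            print(f"write-write race on {var}: {wclk}@{wtid} vs thread {t.tid}")
    last_write[var] = t.epoch()

t0, t1 = ThreadState(0, 2), ThreadState(1, 2)
on_write(t0, "x")
on_write(t1, "x")  # reported: nothing orders t0's write before t1's
```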
In Software Defined Infrastructure (SDI), virtualization techniques are used to decouple applications and higher-level services from their underlying physical compute, storage, and network resources. The approach offers a set of powerful new capabilities (isolation, encapsulation, portability, interposition), including the formation of a software-based, infrastructure-wide control plane for orchestrated management. In this position paper, we identify opportunities for revisiting ongoing cybersecurity challenges using SDI as a powerful new toolset. Benefits of this approach can be broadly utilized in public, private, and hybrid clouds, data centers, enterprise computing, IoT deployments, and more. The discussion motivates the research challenge underlying VMware's partnership with the National Science Foundation to fund novel and foundational research in this area. Known as the NSF/VMware Partnership on Software Defined Infrastructure as a Foundation for Clean-Slate Computing Security (SDI-CSCS), the jointly funded university research program is set to begin in the fall of 2017.
This paper advocates programming high-performance code using partial evaluation. We present a clean-slate programming system with a simple, annotation-based, online partial evaluator that operates on a CPS-style intermediate representation. Our system exposes code generation for accelerators (vectorization/parallelization for CPUs and GPUs) via compiler-known higher-order functions that can be subjected to partial evaluation. This way, generic implementations can be instantiated with target-specific code at compile time. In our experimental evaluation we present three extensive case studies from image processing, ray tracing, and genome sequence alignment. We demonstrate that using partial evaluation, we obtain high-performance implementations for CPUs and GPUs from one language and one code base in a generic way. The performance of our code is mostly within 10% of, and often closer to, that of multi-man-year, industry-grade, manually optimized expert codes that are considered among the top contenders in their fields.
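As a toy analogy for what partial evaluation does (the paper's system operates on a CPS-style IR with annotations; this hypothetical Python closure only conveys the idea), specializing a generic power function for a statically known exponent yields a residual program with the recursion fully unrolled:

```python
# Classic partial-evaluation example: specialize pow(base, n) for a known n.
# Analogous in spirit to instantiating generic code with target-specific
# parts at compile time; this is an analogy, not the paper's system.
def specialize_pow(n):
    if n == 0:
        return lambda base: 1
    inner = specialize_pow(n - 1)
    return lambda base: base * inner(base)

pow3 = specialize_pow(3)   # residual program: base * base * base * 1
print(pow3(5))             # 125
```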
The number of sensors and embedded devices in an urban area can be on the order of thousands. New low-power wide area (LPWA) wireless network technologies have been proposed to support this large number of asynchronous, low-bandwidth devices. Among them, Cooperative UltraNarrowband (C-UNB) is a clean-slate cellular network technology for connecting these devices to a remote site or data collection server. C-UNB employs small-bandwidth channels and a lightweight random access protocol. In this paper, a new application is investigated: the use of C-UNB wireless networks to support the Advanced Metering Infrastructure (AMI), in order to facilitate communication between smart meters and utilities. To this end, we adapted a mathematical model for C-UNB and implemented a network simulation module in NS-3 to represent C-UNB's physical and medium access control layers. For the application layer, we implemented the DLMS-COSEM protocol (Device Language Message Specification - Companion Specification for Energy Metering). Details of the simulation module are presented, and we conclude that its results corroborate those of the mathematical model.
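The abstract does not reproduce the mathematical model; as background, random-access analyses of this kind typically start from the classical pure-ALOHA throughput formula S = G * e^(-2G), sketched below. C-UNB's actual model in the paper differs (e.g., in its channel structure), so this is only the textbook baseline.

```python
import math

# Classical pure-ALOHA baseline often used as a starting point for
# random-access models (the paper's C-UNB model is more elaborate).
def aloha_throughput(G):
    # S = G * e^(-2G): successful transmissions per frame time at load G;
    # the factor 2 reflects pure ALOHA's two-frame vulnerable period.
    return G * math.exp(-2 * G)

peak_G = 0.5
print(f"peak throughput ~ {aloha_throughput(peak_G):.3f} at G = {peak_G}")
```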
This paper investigates the use of deep reinforcement learning (DRL) in the design of a "universal" MAC protocol referred to as Deep-reinforcement Learning Multiple Access (DLMA). The design framework is partially inspired by the vision of DARPA SC2, a 3-year competition whereby competitors are to come up with a clean-slate design that "best share spectrum with any network(s), in any environment, without prior knowledge, leveraging on machine-learning technique". While the scope of DARPA SC2 is broad and involves the redesign of PHY, MAC, and Network layers, this paper's focus is narrower and only involves the MAC design. In particular, we consider the problem of sharing time slots among multiple time-slotted networks that adopt different MAC protocols. One of the MAC protocols is DLMA. The other two are TDMA and ALOHA. The DRL agents of DLMA do not know that the other two MAC protocols are TDMA and ALOHA. Yet, by a series of observations of the environment, its own actions, and the rewards - in accordance with the DRL algorithmic framework - a DRL agent can learn the optimal MAC strategy for harmonious co-existence with TDMA and ALOHA nodes. In particular, the use of neural networks in DRL (as opposed to traditional reinforcement learning) allows for fast convergence to optimal solutions and robustness against perturbation in hyperparameter settings, two essential properties for practical deployment of DLMA in real wireless networks.
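DLMA itself uses a deep Q-network; the tabular Q-learning sketch below only illustrates the observe-act-reward loop an agent runs when deciding whether to transmit in a slot. All names, states, and parameter values are illustrative, not from the paper.

```python
import random
from collections import defaultdict

# Tabular Q-learning sketch of a DLMA-like agent (the paper uses a DQN;
# this shows only the observe-act-reward loop, not the real design).
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.1
ACTIONS = ("wait", "transmit")

def choose_action(state):
    # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def step(state, action, reward, next_state):
    # Standard Q-learning update toward reward + discounted best next value.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

s = "idle"                      # e.g., encoded recent channel observations
a = choose_action(s)
step(s, a, reward=1.0 if a == "transmit" else 0.0, next_state="idle")
```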
Mobile military networks are uniquely challenging to build and maintain because of their wireless nature and the unfriendliness of the environment, resulting in unreliable and capacity-limited performance. Currently, most tactical networks implement TCP/IP, which was designed for fairly stable, infrastructure-based environments and requires sophisticated, often application-specific extensions to address the challenges of this communication scenario. Information Centric Networking (ICN) is a clean-slate networking approach that does not depend on stable connections to retrieve information and naturally provides support for node mobility and delay/disruption-tolerant communications; as a result, it is particularly interesting for tactical applications. However, although ICN seems to offer structural benefits for tactical environments over TCP/IP, a number of challenges, including naming, security, and performance tuning, still need to be addressed for practical adoption. This document, prepared within the NATO IST-161 RTG, evaluates the effectiveness of Named Data Networking (NDN), the de facto standard implementation of ICN, in the context of tactical edge networks and its potential for adoption.
In recent years, networking scenarios have been evolving hand-in-hand with new and varied applications that have heterogeneous Quality of Service (QoS) requirements, which must be delivered efficiently and effectively. Given its static layered structure and almost complete lack of built-in QoS support, the current TCP/IP-based Internet hinders such an evolution. In contrast, the clean-slate Recursive InterNetwork Architecture (RINA) proposes a new recursive and programmable networking model capable of evolving with the network requirements, thereby solving most, if not all, of the TCP/IP protocol stack's limitations. Network providers can better deliver communication services across their networks by taking advantage of the RINA architecture and its support for QoS. This support provides complete information on the QoS needs of the supported traffic flows, making it possible to fulfil those needs. In this work, we focus on the importance of path selection for better ensuring QoS guarantees in long-haul RINA networks. We propose and evaluate a programmable strategy for path selection based on flow QoS parameters, such as the maximum allowed latency and packet losses, comparing its performance against simple shortest-path, fastest-path, and connection-oriented solutions.
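The paper's strategy is implemented within RINA's programmable routing policies; the standalone sketch below only illustrates the core idea of QoS-constrained path selection, pruning paths that would violate a flow's maximum latency or loss. The graph, names, and values are hypothetical.

```python
import heapq

# Sketch: find the least-latency path whose cumulative latency and loss
# stay within the flow's QoS bounds. Edges carry (latency_ms, loss_prob).
def qos_path(graph, src, dst, max_latency, max_loss):
    heap = [(0.0, 1.0, src, [src])]   # (latency, success prob, node, path)
    settled = {}
    while heap:
        lat, ok, node, path = heapq.heappop(heap)
        if node == dst:
            return path, lat, 1.0 - ok
        # Simplification: settle each node at its lowest latency only; a full
        # multi-constraint search would keep all non-dominated labels.
        if settled.get(node, float("inf")) <= lat:
            continue
        settled[node] = lat
        for nxt, (l, p) in graph[node].items():
            nlat, nok = lat + l, ok * (1.0 - p)
            if nlat <= max_latency and (1.0 - nok) <= max_loss:
                heapq.heappush(heap, (nlat, nok, nxt, path + [nxt]))
    return None  # no path satisfies the QoS bounds

g = {"A": {"B": (5, 0.01), "C": (2, 0.05)},
     "B": {"D": (4, 0.01)},
     "C": {"D": (9, 0.00)},
     "D": {}}
print(qos_path(g, "A", "D", max_latency=12, max_loss=0.03))
# -> (['A', 'B', 'D'], 9, ~0.02): the cheaper A-C hop is pruned by its loss.
```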
While research on Information-Centric Networking (ICN) flourishes, its adoption seems to be an elusive goal. In this paper we propose Edge-ICN: a novel approach for deploying ICN in a single large network, such as the network of an Internet Service Provider. Although Edge-ICN requires nothing beyond an SDN-based network supporting the OpenFlow protocol, with ICN-aware nodes only at the edges of the network, it still offers the same benefits as a clean-slate ICN architecture but without the deployment hassles. Moreover, by proxying legacy traffic and transparently forwarding it through the Edge-ICN nodes, all existing applications can operate smoothly, while offering significant advantages to applications such as native support for scalable anycast, multicast, and multi-source forwarding. In this context, we show how the proposed functionality at the edge of the network can specifically benefit CoAP-based IoT applications. Our measurements show that Edge-ICN induces on average the same control plane overhead for name resolution as a centralized approach, while also enabling IoT applications to build on anycast, multicast, and multi-source forwarding primitives.
Ideally, minimizing the flow completion time (FCT) requires millions of priorities supported by the underlying network so that each flow has its unique priority. However, in production datacenters, the available switch priority queues for flow scheduling are very limited (merely 2 or 3). This practical constraint seriously degrades the performance of previous approaches. In this paper, we introduce Explicit Priority Notification (EPN), a novel scheduling mechanism that emulates fine-grained priorities (i.e., desired priorities, or DP) using only two switch priority queues. EPN can support various flow scheduling disciplines with or without flow size information. We have implemented EPN on commodity switches and evaluated its performance with both testbed experiments and extensive simulations. Our results show that, with flow size information, EPN achieves FCT comparable to pFabric, which requires clean-slate switch hardware. EPN also outperforms TCP by up to 60.5% when it bins traffic into two priority queues according to flow size. In the information-agnostic setting, EPN outperforms PIAS with two priority queues by up to 37.7%. To the best of our knowledge, EPN is the first system that provides millions of priorities for flow scheduling with commodity switches.
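EPN's emulation mechanism is more involved than this, but the two-queue, size-based binning used by the TCP baseline above can be illustrated in a few lines (the threshold value is made up):

```python
# Rough illustration of binning flows into two switch priority queues by
# size, as in the TCP-with-two-queues baseline; the cutoff is hypothetical.
SIZE_THRESHOLD = 100 * 1024  # bytes

def priority_queue_for(flow_bytes_sent):
    # Small/young flows go to the high-priority queue (0) so short flows
    # finish fast; large flows are demoted to the low-priority queue (1).
    return 0 if flow_bytes_sent < SIZE_THRESHOLD else 1

print(priority_queue_for(10 * 1024))    # 0: short flow, high priority
print(priority_queue_for(500 * 1024))   # 1: long flow, low priority
```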
Clean-slate design of computing systems is an emerging topic for the continuing growth of warehouse-scale computers. A well-known custom design is rack-scale (RS) computing, which treats a single rack as one computer consisting of a number of processors, storage devices, and accelerators customized to a target application. In RS, each user is expected to occupy one or more racks. However, new users frequently appear, and users often change their application scales and parameters, which may require different numbers of processors, storage devices, and accelerators in a rack. Reconfiguration of the interconnection networks among these components is thus potentially needed to support this demand in RS. In this context, we propose the inter-rackscale (IRS) architecture, which disaggregates the various hardware resources into different racks according to resource type. The heart of IRS is the use of free-space optics (FSO) for tightly coupled connections between processors, storage, and GPUs distributed across racks, swapping the endpoints of FSO links to change network topologies. Through a large IRS system simulation, we show that by utilizing FSO links for inter-rack interconnection, the FSO-equipped IRS architecture can provide communication latency between heterogeneous resources comparable to that of the counterpart RS architecture. Using 3 FSO terminals per rack improves inter-CPU/SSD(GPU) communication by at least 87.34% over a Fat-tree and by at least 92.18% over a 2-D Torus. We also verify the advantages of IRS over RS in job scheduling performance.
While the clean-slate approach proposed by Software Defined Networking (SDN) promises radical changes in the stagnant state of network management, SDN innovation has not gone beyond the intra-domain level. For the inter-domain ecosystem to benefit from the advantages of SDN, Internet Exchange Points (IXPs) are the ideal place: a central interconnection hub through which a large share of the Internet can be affected. In this demo, we showcase the ENDEAVOUR platform: a new software-defined exchange approach readily deployable in commercial IXPs. We demonstrate our implementations of traffic engineering and Distributed Denial of Service mitigation, as well as how member networks benefit from the advanced SDN features of ENDEAVOUR that are typically not available in legacy networks.
Tactical networks are generally simple ad-hoc networks by design; however, this simple design often becomes complicated when heterogeneous wireless technologies must work together to enable seamless multi-hop communications across multiple sessions. In recent years, there have been significant advances in computational, radio, localization, and networking technologies, which motivate a clean-slate design of the control plane for multi-hop tactical wireless networks. In this paper, we develop a global network optimization framework that characterizes the control plane for multi-hop tactical wireless networks. This framework abstracts the underlying complexity of tactical wireless networks and orchestrates the control plane functions. Specifically, we develop a cross-layer optimization framework that characterizes the interaction between the physical, link, and network layers. By applying the framework to a throughput maximization problem, we show how the proposed framework can be utilized to solve a broad range of multi-hop tactical wireless networking problems.
Named Data Networking (NDN), a clean-slate approach, is one of the candidate future Internet architectures. NDN provides intelligent data retrieval using the principles of name-based symmetrical forwarding of Interest/Data packets and in-network caching. The continually increasing demand for rapid dissemination of large-scale scientific data is driving the use of NDN in data-intensive science experiments. In this paper, we establish an intercontinental NDN testbed. On the testbed, we design and implement an NDN-based application that targets climate science as an example data-intensive science application, with features that differentiate it from previous studies. We experimentally justify the use of NDN for climate science in the intercontinental network through performance comparisons between classical delivery techniques and NDN-based climate data delivery.
Named Data Networks provide a clean-slate redesign of the Future Internet for efficient content distribution. Because the Internet of Things is expected to compose a significant part of the Future Internet, most content will be managed by constrained devices. Such devices are often equipped with limited CPU, memory, bandwidth, and energy supply. However, the current Named Data Networks design neglects the specific requirements of Internet of Things scenarios, and many data structures need to be further optimized. The purpose of this research is to provide an efficient strategy for routing in Named Data Networks by constructing a Forwarding Information Base using Iterated Bloom Filters, defined as I(FIB)F. We propose the use of content names based on iterative hashes. This strategy reduces packet overhead. Moreover, the memory and the complexity required by the forwarding strategy are lower than in current solutions. We compare our proposal with solutions based on hierarchical names and standard Bloom filters, and we show how to further optimize I(FIB)F by exploiting the structure information contained in hierarchical content names. Finally, two strategies may be followed to reduce either (i) the overall memory required for routing or (ii) the probability of false positives.
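As a point of reference for the data structure involved, the sketch below shows a FIB built from one standard Bloom filter per outgoing face; it is not the paper's iterated I(FIB)F construction, and all sizes and names are illustrative.

```python
import hashlib

# Minimal Bloom-filter FIB sketch: one filter per outgoing face holds the
# name prefixes reachable through it. Standard Bloom filter, not I(FIB)F.
class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 hashes of the item.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

fib = {face: BloomFilter() for face in ("face0", "face1")}
fib["face0"].add("/sensors/temp")

def forward(name):
    # May return extra faces on (rare) false positives, never miss a face.
    return [f for f, bf in fib.items() if name in bf]

print(forward("/sensors/temp"))  # ['face0'] (barring false positives)
```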
Named-Data Networking (NDN) has emerged as a clean-slate Internet proposal on the wave of Information-Centric Networking. Although NDN's data plane seems to offer many advantages, e.g., native support for multicast communications and flow balance, it also makes the network infrastructure vulnerable to a specific DDoS attack, the Interest Flooding Attack (IFA). In an IFA, a botnet issuing unsatisfiable content requests can be set up effortlessly to exhaust routers' resources and cause a severe performance drop for legitimate users. Several countermeasures have addressed this security threat so far; however, their efficacy has been proven only under simplistic assumptions about the attack model. We therefore propose a more complete attack model and design an advanced IFA. We show the efficiency of our novel attack scheme by extensively assessing some of the state-of-the-art countermeasures. Further, we release the software to perform this attack as an open-source tool to help the design of future, more robust defense mechanisms.
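To see why an IFA is cheap to mount, consider the toy sketch below: each Interest names content under a valid prefix that no producer will ever serve, so it occupies router PIT state until timeout. The prefix and name format are made up, and this is not the released tool.

```python
import random
import string

# Toy illustration of IFA economics: unsatisfiable Interests are trivial to
# generate, yet each one pins a Pending Interest Table entry in every router
# on the path until it times out. Prefix and name format are hypothetical.
def unsatisfiable_interest(prefix="/example/videos"):
    junk = "".join(random.choices(string.ascii_lowercase, k=16))
    return f"{prefix}/{junk}"   # valid-looking name with no matching Data

print(unsatisfiable_interest())
```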