Biblio

Found 201 results

Filters: Keyword is Throughput
2023-02-03
Wang, Yingsen, Li, Yixiao, Zhao, Juanjuan, Wang, Guibin, Jiao, Weihan, Qiang, Yan, Li, Keqin.  2022.  A Fast and Secured Peer-to-Peer Energy Trading Using Blockchain Consensus. 2022 IEEE Industry Applications Society Annual Meeting (IAS). :1–8.
The architecture and functioning of the electricity markets are rapidly evolving in favour of solutions based on real-time data sharing and decentralised, distributed, renewable energy generation. Peer-to-peer (P2P) energy markets allow two individuals to transact with one another without the need for intermediaries, reducing the load on the power grid during peak hours. However, such a P2P energy market is prone to various cyber attacks. Blockchain technology has been proposed to implement P2P energy trading and support this change. One of the most crucial components of blockchain technology in energy trading is the consensus mechanism, which determines the effectiveness and security of the blockchain for energy trading. However, most of the consensus mechanisms used in energy trading today are traditional ones such as Proof-of-Work (PoW) and Practical Byzantine Fault Tolerance (PBFT). These traditional mechanisms cannot be directly adopted in P2P energy trading due to their huge computational power requirements, low throughput, and high latency. Therefore, we propose the Block Alliance Consensus (BAC) mechanism based on Hashgraph. In a massive P2P energy trading network, BAC can keep Hashgraph's throughput while resisting Sybil attacks and supporting the addition and deletion of energy participants. The high efficiency and security of BAC and the blockchain-based energy trading platform are verified through experiments: our improved BAC has an average throughput 2.56 times that of regular BFT, 5 times that of PoW, and 30% greater than that of the original BAC. The improved BAC has an average latency 41% lower than that of BAC and 81% lower than that of the original BFT. Our energy trading blockchain (ETB) achieves a peak READ throughput of 1192 tps at a workload of 1200 tps, while WRITE achieves 682 tps at a workload of 800 tps with a success rate of 95% and a latency of 0.18 seconds.
ISSN: 2576-702X
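
The READ/WRITE figures above are of the form "achieved tps under an offered workload". A minimal, hypothetical Python harness for that kind of measurement is sketched below; the submit function, rates, and latencies are placeholders, not the authors' tooling.

```python
import time
import random

def submit_read(key):
    """Placeholder for a blockchain READ query: returns (success, simulated latency in s)."""
    return random.random() < 0.95, random.uniform(0.05, 0.25)

def run_benchmark(workload_tps=1200, duration_s=10):
    interval = 1.0 / workload_tps          # inter-request gap for the offered load
    sent = succeeded = 0
    latencies = []
    start = time.time()
    next_send = start
    while time.time() - start < duration_s:
        ok, lat = submit_read(f"order-{sent}")
        sent += 1
        succeeded += ok
        latencies.append(lat)
        next_send += interval
        time.sleep(max(0.0, next_send - time.time()))   # pace requests at the target rate
    elapsed = time.time() - start
    print(f"offered {workload_tps} tps | achieved {succeeded / elapsed:.0f} tps | "
          f"success {succeeded / sent:.0%} | mean latency {sum(latencies)/len(latencies):.2f} s")

run_benchmark()
```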
2023-02-02
Debnath, Jayanta K., Xie, Derock.  2022.  CVSS-based Vulnerability and Risk Assessment for High Performance Computing Networks. 2022 IEEE International Systems Conference (SysCon). :1–8.
The Common Vulnerability Scoring System (CVSS) is intended to capture the key characteristics of a vulnerability and correspondingly produce a numerical score that indicates its severity. Significant efforts have been devoted to building CVSS stochastic models in order to provide high-level risk assessment and better support cybersecurity decision-making. However, these efforts do not consider HPC (High-Performance Computing) networks that use a Science Demilitarized Zone (Science DMZ) architecture, which follows special design principles to facilitate data transfer, analysis, and storage over a broadband backbone. In this paper, HPCvul, a CVSS-based vulnerability and risk assessment approach, is proposed for HPC networks in order to provide ongoing awareness of the HPC security situation under a dynamic cybersecurity environment. For this purpose, HPCvul advocates the standardization of the security-related data collected from the network to achieve data portability. HPCvul adopts an attack graph to model the likelihood of successful exploitation of a vulnerability, and it is able to merge attack graphs from different HPC subnets to yield a full picture of a large HPC network. Substantial results are presented to demonstrate the HPCvul design and its performance.
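
As one way to picture the "likelihood of successful exploitation" idea, the sketch below scores a toy attack path by mapping each vulnerability's CVSS base score to a probability and multiplying along the path. The p = score/10 heuristic and the CVE placeholders are illustrative assumptions, not HPCvul's actual model.

```python
# Hypothetical illustration of scoring one attack-graph path: map each vulnerability's
# CVSS base score to an exploitation probability with the common p = score / 10 heuristic
# (an assumption for illustration), then multiply along the path.

cvss_base = {
    "CVE-A (data transfer node)": 7.5,
    "CVE-B (login node)":         8.8,
    "CVE-C (scheduler)":          9.8,
}

def path_likelihood(path):
    """Probability that every step of the attack path is exploited successfully."""
    p = 1.0
    for vuln in path:
        p *= cvss_base[vuln] / 10.0
    return p

path = ["CVE-A (data transfer node)", "CVE-B (login node)", "CVE-C (scheduler)"]
print(f"likelihood of compromising the scheduler via this path: {path_likelihood(path):.2f}")
```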
2023-01-20
Yong, Li, Mu, Chen, ZaoJian, Dai, Lu, Chen.  2022.  Security situation awareness method of power mobile application based on big data architecture. 2022 5th International Conference on Data Science and Information Technology (DSIT). :1–6.

To address the security threats and the massive user base characteristic of power mobile applications, a mobile application security situation awareness method based on a big data architecture is proposed. The method uses open-source big data frameworks such as Kafka, Flink, and Elasticsearch to collect, analyse, store, and visually display massive amounts of power mobile application data, improving the throughput of data processing. The method takes the mobile terminal threat index as its core, assigns a risk level to each mobile terminal, and predicts the terminal threat index with support vector regression (SVR), thereby constructing a security profile of the terminals on which the mobile application runs. Finally, visualization services display data such as power mobile application and terminal assets, security operation statistics, security policies, and alarm analysis to guide security operations and maintenance personnel in decision-making work such as security monitoring and early warning, blocking and disposal, and traceability analysis for power mobile applications. Experimental results show that the method meets the threat assessment accuracy and response speed requirements of security situation awareness, and it has been applied successfully at a power company.
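The threat index prediction step can be pictured with scikit-learn's SVR on synthetic terminal features. The feature names, thresholds, and data below are illustrative assumptions, not the paper's actual model or dataset.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic features per terminal: [failed logins, alerts, risky API calls, abnormal traffic MB]
X = rng.random((500, 4)) * [20, 10, 50, 500]
# Synthetic "ground truth" threat index (purely illustrative)
y = 2.0 * X[:, 0] + 4.0 * X[:, 1] + 0.5 * X[:, 2] + 0.05 * X[:, 3] + rng.normal(0, 3, 500)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X[:400], y[:400])

pred = model.predict(X[400:])
# Map the predicted index to a coarse risk level for the terminal security profile
levels = np.select([pred < 30, pred < 60], ["low", "medium"], default="high")
print(list(zip(pred[:5].round(1), levels[:5])))
```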

Rahim, Usva, Siddiqui, Muhammad Faisal, Javed, Muhammad Awais, Nafi, Nazmus.  2022.  Architectural Implementation of AES based 5G Security Protocol on FPGA. 2022 32nd International Telecommunication Networks and Applications Conference (ITNAC). :1–6.
Confidentiality and integrity are key security challenges in future 5G networks. To address these challenges, various signature and key agreement protocols are being implemented in 5G systems to secure high-speed mobile-to-mobile communication. Several security ciphers, such as SNOW 3G, the Advanced Encryption Standard (AES), and ZUC, are used for 5G security. Among these, the AES algorithm has been shown in the literature to achieve higher hardware efficiency and throughput. In this paper, we implement the AES algorithm on a Field Programmable Gate Array (FPGA) and exploit its real-time performance factors to best fit the needs and requirements of 5G. In addition, several modifications such as partial pipelining and deep pipelining (partial pipelining with sub-module pipelining) are implemented on a Virtex-6 ML605 FPGA board to improve the throughput of the proposed design.
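
The throughput gain from pipelining can be sanity-checked with a back-of-the-envelope calculation: an iterative AES-128 core needs roughly one clock cycle per round (10 rounds per 128-bit block), while a fully pipelined core accepts a new block every cycle once the pipeline is filled. The numbers below are generic assumptions, not the paper's Virtex-6 results.

```python
def aes_throughput_gbps(f_clk_mhz, cycles_per_block):
    """128-bit blocks; throughput = block_size * f_clk / cycles_per_block."""
    return 128 * f_clk_mhz * 1e6 / cycles_per_block / 1e9

f_clk = 250                                    # assumed clock frequency in MHz
print(f"iterative (10 cycles/block):      {aes_throughput_gbps(f_clk, 10):.1f} Gbps")
print(f"fully pipelined (1 cycle/block):  {aes_throughput_gbps(f_clk, 1):.1f} Gbps")
# Deep (sub-module) pipelining does not change cycles/block, but shortens the critical
# path, allowing a higher f_clk and therefore higher throughput.
```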
2023-01-13
Chen, Ju, Wang, Jinghan, Song, Chengyu, Yin, Heng.  2022.  JIGSAW: Efficient and Scalable Path Constraints Fuzzing. 2022 IEEE Symposium on Security and Privacy (SP). :18—35.
Coverage-guided testing has been shown to be an effective way to find bugs. If we model coverage-guided testing as a search problem (i.e., finding inputs that can cover more branches), then its efficiency mainly depends on two factors: (1) the accuracy of the search algorithm and (2) the number of inputs that can be evaluated per unit time. Therefore, improving the search throughput is an effective way to improve the performance of coverage-guided testing. In this work, we present a novel design to improve the search throughput: evaluating newly generated inputs with JIT-compiled path constraints. This approach allows us to significantly improve the single-thread throughput as well as scale to multiple cores. We also developed several optimization techniques to eliminate major bottlenecks during this process. Evaluation of our prototype JIGSAW shows that our approach can achieve three orders of magnitude higher search throughput than existing fuzzers and can scale to multiple cores. We also find that with such high throughput, a simple gradient-guided search heuristic can solve path constraints collected from a large set of real-world programs faster than SMT solvers with much more sophisticated search heuristics. Evaluation of end-to-end coverage-guided testing also shows that our JIGSAW-powered hybrid fuzzer can outperform state-of-the-art testing tools.
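
The "simple gradient-guided search heuristic" can be illustrated on a single compiled branch constraint: treat the branch condition as a numeric distance function over the inputs and walk downhill until the distance reaches zero. This is a generic sketch of the idea, not JIGSAW's implementation.

```python
# Toy path constraint from an imaginary branch:  if (x*x + y*y == 100) { ... }
# JIGSAW's insight: compile the constraint to native code so it can be evaluated millions
# of times per second; a cheap local search on a distance function then often suffices.

def distance(x, y):                 # stands in for a JIT-compiled constraint evaluation
    return abs(x * x + y * y - 100)

def solve(x=0, y=0, max_iters=1000):
    """Greedy gradient-style descent on the constraint's distance function."""
    while max_iters > 0 and distance(x, y) != 0:
        neighbours = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        x, y = min(neighbours, key=lambda c: distance(*c))
        max_iters -= 1
    return (x, y) if distance(x, y) == 0 else None   # real solvers add random restarts here

print(solve())    # a satisfying assignment such as (-8, -6), since 64 + 36 == 100
```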
2022-12-20
Hussain, G K Jakir, Shruthe, M, Rithanyaa, S, Madasamy, Saravana Rajesh, Velu, Nandagopal S.  2022.  Visible Light Communication using Li-Fi. 2022 6th International Conference on Devices, Circuits and Systems (ICDCS). :257–262.
Over recent years of rapid technical development, the need for communication systems has risen tremendously. In recent times, public-realm communication has become a popular area, and the research community is emphasizing the need for fast and efficient broadband as well as upgraded security protocols. The main objective of this work is to combine conventional Li-Fi and VLC techniques for video communication. VLC helps deliver fast data speeds, bandwidth efficiency, and a relatively secure communication channel. Li-Fi is an inexpensive wireless communication (WC) system. Li-Fi can transmit information (text, audio, and video) to any electronic device via the LEDs that are positioned in a space to provide lighting. Li-Fi offers more advantages than Wi-Fi, such as security, high efficiency, speed, throughput, and low latency. Information is transferred by exploiting the flashing capability of the LED: communication is accomplished by turning LED lights on and off at a faster pace than the human visual system can detect.
ISSN: 2644-1802
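
A hypothetical sketch of the on-off keying idea behind such LED links: each bit maps to an LED state for a fixed symbol period far shorter than what human flicker perception can detect. The 1 Mbaud symbol rate below is an assumption for illustration only.

```python
SYMBOL_RATE_HZ = 1_000_000   # assumed: 1 Mbaud, far above the ~100 Hz flicker-fusion threshold

def encode(data: bytes):
    """Map each bit to an LED state: 1 -> ON, 0 -> OFF (simple on-off keying)."""
    return [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]

def decode(symbols):
    """Group received LED states back into bytes."""
    out = bytearray()
    for i in range(0, len(symbols) - 7, 8):
        byte = 0
        for bit in symbols[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

frame = encode(b"Li-Fi")
print(f"{len(frame)} symbols, airtime {len(frame) / SYMBOL_RATE_HZ * 1e6:.0f} us")
assert decode(frame) == b"Li-Fi"
```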
2022-12-09
Legashev, Leonid, Grishina, Luybov.  2022.  Development of an Intrusion Detection System Prototype in Mobile Ad Hoc Networks Based on Machine Learning Methods. 2022 International Russian Automation Conference (RusAutoCon). :171—175.
Wireless ad hoc networks are characterized by dynamic topology and high node mobility. Network attacks on wireless ad hoc networks can significantly reduce performance metrics such as the packet delivery ratio from the source to the destination node, overhead, and throughput. The article presents an experimental study of an intrusion detection system prototype for mobile ad hoc networks based on machine learning. The experiment is carried out in a MANET segment of 50 nodes, and the detection and prevention of DDoS and cooperative blackhole attacks are investigated. The dependence of the features on the type of network traffic and the dependence of the performance metrics on the speed of mobile nodes in the network are analysed. The conducted experiments show the effectiveness of the intrusion detection system prototype on simulated data.
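
The classification step of such an IDS can be pictured with a random forest over per-node traffic features. The feature set and the synthetic data below are illustrative assumptions, not the authors' dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
n = 1500
# Assumed per-node features: [packet delivery ratio, routing overhead, throughput kbps, fwd/recv ratio]
normal    = np.column_stack([rng.normal(0.9, 0.05, n), rng.normal(0.10, 0.03, n),
                             rng.normal(250, 40, n),   rng.normal(1.0, 0.1, n)])
blackhole = np.column_stack([rng.normal(0.4, 0.10, n), rng.normal(0.15, 0.05, n),
                             rng.normal(90, 30, n),    rng.normal(0.2, 0.1, n)])
ddos      = np.column_stack([rng.normal(0.6, 0.10, n), rng.normal(0.50, 0.10, n),
                             rng.normal(400, 60, n),   rng.normal(1.0, 0.1, n)])

X = np.vstack([normal, blackhole, ddos])
y = np.array(["normal"] * n + ["blackhole"] * n + ["ddos"] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```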
Cody, Tyler, Adams, Stephen, Beling, Peter, Freeman, Laura.  2022.  On Valuing the Impact of Machine Learning Faults to Cyber-Physical Production Systems. 2022 IEEE International Conference on Omni-layer Intelligent Systems (COINS). :1—6.
Machine learning (ML) has been applied in prognostics and health management (PHM) to monitor and predict the health of industrial machinery. The use of PHM in production systems creates a cyber-physical, omni-layer system. While ML offers statistical improvements over previous methods and brings statistical models to bear on new systems and PHM tasks, it is susceptible to performance degradation when the behavior of the systems from which ML receives its inputs changes. Natural changes such as physical wear and engineered changes such as maintenance and rebuild procedures are catalysts for performance degradation, and both are inherent to production systems. Drawing from data on the impact of maintenance procedures on ML performance in hydraulic actuators, this paper presents a simulation study that investigates how long it takes for ML performance degradation to create a difference in the throughput of a serial production system. In particular, the investigation considers the performance of an ML model learned on data collected before a rebuild procedure is conducted on a hydraulic actuator and an ML model transfer-learned on data collected after the rebuild procedure. Transfer learning is able to mitigate performance degradation, but there is still a significant impact on throughput. The conclusion is drawn that ML faults can have drastic, non-linear effects on the throughput of production systems.
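
The throughput question can be pictured with a small simulation of a serial line in which a degraded PHM classifier raises the false-alarm rate and each false alarm stops the line. All rates and times below are invented for illustration; they are not the paper's simulation parameters.

```python
import random

def line_throughput(false_alarm_rate, cycle_time_s=30.0, stoppage_s=600.0,
                    parts=10_000, seed=0):
    """Parts per hour of a serial line where each spurious PHM alarm halts the line."""
    random.seed(seed)
    elapsed = 0.0
    for _ in range(parts):
        elapsed += cycle_time_s
        if random.random() < false_alarm_rate:      # spurious "failure" prediction
            elapsed += stoppage_s                    # inspection / restart downtime
    return parts / (elapsed / 3600.0)

print(f"fresh model      (0.1% false alarms): {line_throughput(0.001):.0f} parts/h")
print(f"after rebuild    (3% false alarms):   {line_throughput(0.03):.0f} parts/h")
print(f"transfer-learned (0.5% false alarms): {line_throughput(0.005):.0f} parts/h")
```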
2022-12-06
Hkiri, Amal, Karmani, Mouna, Machhout, Mohsen.  2022.  The Routing Protocol for low power and lossy networks (RPL) under Attack: Simulation and Analysis. 2022 5th International Conference on Advanced Systems and Emergent Technologies (IC_ASET). :143-148.

The Routing Protocol for Low Power and Lossy Networks (RPL) is the underlying routing protocol of 6LoWPAN, a core communication standard for the Internet of Things. In terms of quality of service (QoS), device management, and energy efficiency, RPL beats competing wireless sensor and ad hoc routing protocols. However, several attacks can threaten the network due to unauthenticated or unencrypted control frames, centralized root controllers, and compromised or unauthenticated devices. In this paper, we investigate the effect of topology and resource attacks on RPL's efficiency. The Hello Flooding attack, the Increase Number attack, and the Decrease Rank attack are the forms of resource and topology attacks chosen for this study. Simulations were run to understand the impact of the three attacks on RPL performance metrics, including end-to-end delay (E2ED), throughput, packet delivery ratio (PDR), and average power consumption. The findings show that the three attacks increased the E2ED, decreased the PDR and the network throughput, and degraded the network, which further raises the power consumption of the network nodes.
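
The four metrics reported in such simulations are simple functions of the packet trace. The sketch below is generic; the record fields are assumed and not tied to any particular simulator's log format.

```python
# Each record: (packet_id, bytes, sent_time_s, recv_time_s or None, tx_energy_mJ)
trace = [
    (1, 100, 0.00, 0.12, 1.8),
    (2, 100, 0.10, 0.31, 1.8),
    (3, 100, 0.20, None, 1.8),     # dropped, e.g. by a Decrease Rank attacker
    (4, 100, 0.30, 0.47, 1.8),
]

delivered = [r for r in trace if r[3] is not None]
duration  = max(r[3] or r[2] for r in trace) - min(r[2] for r in trace)

pdr        = len(delivered) / len(trace)
throughput = sum(r[1] * 8 for r in delivered) / duration          # bits per second
e2ed       = sum(r[3] - r[2] for r in delivered) / len(delivered) # average end-to-end delay
avg_power  = sum(r[4] for r in trace) / 1000 / duration           # joules per second = watts

print(f"PDR {pdr:.0%} | throughput {throughput:.0f} bps | E2ED {e2ed*1000:.0f} ms | power {avg_power:.2f} W")
```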

Aneja, Sakshi, Mittal, Sumit, Sharma, Dhirendra.  2022.  An Optimized Mobility Management Framework for Routing Protocol Lossy Networks using Optimization Algorithm. 2022 International Conference on Communication, Computing and Internet of Things (IC3IoT). :1-8.

Low power and lossy networks (LLNs) consist of a large number of sensor nodes with limited resources such as energy, memory, computing power, and bandwidth, interconnected by lossy links. In early 2008, an IETF working group looked into using existing routing protocols for LLNs. Because of the importance of LLNs in the IoT, the Routing Over Low power and Lossy networks (ROLL) group standardized an IPv6 routing solution for LLNs: the IPv6 Routing Protocol for LLNs (RPL), which builds on the 6LoWPAN standard. RPL has matured significantly, and the research community is becoming increasingly interested in it. The topology of RPL can be built in a variety of ways; it creates its topology in advance. Given the lack of a complete review of RPL, this paper proposes a mobility management framework and evaluates it experimentally using parameters such as the packet delivery ratio, throughput, end-to-end delay, and consumed energy, with an accurate analysis of the results. Finally, this paper can help researchers better understand RPL and engage in future research projects to improve it.

2022-12-02
Choi, Jong-Young, Park, Jiwoong, Lim, Sung-Hwa, Ko, Young-Bae.  2022.  A RSSI-Based Mesh Routing Protocol based IEEE 802.11p/WAVE for Smart Pole Networks. 2022 24th International Conference on Advanced Communication Technology (ICACT). :1—5.
This paper proposes an RSSI-based routing protocol for smart pole mesh networks equipped with multiple IEEE 802.11p/WAVE radios. In IEEE 802.11p-based multi-radio, multi-channel environments, the performance of traditional mesh routing protocols is severely degraded by metric measurement overhead: the periodic probe messages used to measure the quality of each channel incur a large overhead due to the channel switching delay. To solve this overhead problem, we introduce a routing metric that estimates the expected transmission time and propose a lightweight channel allocation algorithm based only on the RSSI value. We evaluate the performance of the proposed solution through simulation experiments with NS-3. Simulation results show that it can improve network performance in terms of latency and throughput, compared to the legacy WCETT routing scheme.
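
The core of an RSSI-only metric is a mapping from the measured RSSI to an expected transmission time (ETT) without sending probe frames on every channel. The sketch below assumes a toy RSSI-to-rate table and a loss estimate derived from RSSI; both tables are illustrative assumptions, not the paper's calibration.

```python
# Assumed mapping from RSSI (dBm) to usable 802.11p data rate (Mbps) and frame loss estimate.
RSSI_TABLE = [   # (min_rssi_dbm, rate_mbps, loss_prob)
    (-65, 27.0, 0.02),
    (-72, 18.0, 0.05),
    (-80, 12.0, 0.10),
    (-87,  6.0, 0.25),
    (-92,  3.0, 0.50),
]

def expected_tx_time(rssi_dbm, frame_bytes=1500):
    """Estimate ETT (seconds) for one frame from RSSI only, with no per-channel probing."""
    for min_rssi, rate_mbps, loss in RSSI_TABLE:
        if rssi_dbm >= min_rssi:
            airtime = frame_bytes * 8 / (rate_mbps * 1e6)
            return airtime / (1.0 - loss)          # expected retransmissions folded in
    return float("inf")                            # link considered unusable

# Channel allocation idea: pick, per neighbour, the radio/channel with the lowest ETT.
links = {"ch172": -70, "ch176": -84, "ch180": -90}
best = min(links, key=lambda ch: expected_tx_time(links[ch]))
print({ch: round(expected_tx_time(r) * 1e3, 3) for ch, r in links.items()}, "->", best)
```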
Rethfeldt, Michael, Brockmann, Tim, Eckhardt, Richard, Beichler, Benjamin, Steffen, Lukas, Haubelt, Christian, Timmermann, Dirk.  2022.  Extending the FLExible Network Tester (Flent) for IEEE 802.11s WLAN Mesh Networks. 2022 IEEE International Symposium on Measurements & Networking (M&N). :1—6.
Mesh networks based on the wireless local area network (WLAN) technology, as specified by the standards amendment IEEE 802.11s, provide for a flexible and low-cost interconnection of devices and embedded systems for various use cases. To assess the real-world performance of WLAN mesh networks and potential optimization strategies, suitable testbeds and measurement tools are required. Designed for highly automated transport-layer throughput and latency measurements, the software FLExible Network Tester (Flent) is a promising candidate. However, so far Flent does not integrate information specific to IEEE 802.11s networks, such as peer link status data or mesh routing metrics. Consequently, we propose Flent extensions that additionally capture IEEE 802.11s information as part of the automated performance tests. For the functional validation of our extensions, we conduct Flent measurements in a mesh mobility scenario using the network emulation framework Mininet-WiFi.
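
Conceptually, such an extension has to sample 802.11s state alongside each test run; on a Linux mesh node this state is exposed through the standard iw utility (station dump for peer links, mpath dump for mesh paths and their metrics). The polling sketch below is a minimal, hypothetical illustration, not the actual Flent patch.

```python
import subprocess
import time

def mesh_snapshot(iface="mesh0"):
    """Collect raw IEEE 802.11s peer-link and mesh-path state for one sample point."""
    stations = subprocess.run(["iw", "dev", iface, "station", "dump"],
                              capture_output=True, text=True).stdout
    mpaths   = subprocess.run(["iw", "dev", iface, "mpath", "dump"],
                              capture_output=True, text=True).stdout
    return {"t": time.time(), "stations": stations, "mpaths": mpaths}

samples = []
for _ in range(5):                 # sample alongside a running throughput/latency test
    samples.append(mesh_snapshot())
    time.sleep(1.0)
print(f"captured {len(samples)} mesh snapshots")
```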
Macabale, Nemesio A..  2022.  On the Stability of Load Adaptive Routing Over Wireless Community Mesh and Sensor Networks. 2022 24th International Conference on Advanced Communication Technology (ICACT). :21—26.
Wireless mesh networks are increasingly deployed as a flexible and low-cost alternative for providing wireless services for a variety of applications, including community mesh networking, medical applications, disaster ad hoc communications, and sensor and IoT applications. However, challenges remain, such as interference, contention, load imbalance, and congestion. To address these issues, previous work employs load-adaptive routing based on load-sensitive routing metrics. However, this approach does not immediately improve network performance, because the load estimates used to choose routes are themselves affected by the resulting routing changes in a cyclical manner, leading to oscillation. Although this is not a new phenomenon and has been studied in wired networks, it has not been investigated extensively in wireless mesh and/or sensor networks. We present these instabilities and show how they pose performance, security, and energy issues for these networks. Accordingly, we present a feedback-aware mapping system called FARM that handles these instabilities in a manner analogous to a control system with feedback control. Results show that FARM stabilizes routes and improves network performance in terms of throughput, delay, energy efficiency, and security.
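
One standard way to damp the feedback loop described above is to smooth the load estimates feeding the routing metric so that routes react to trends rather than to instantaneous load, and to require a clear margin before switching. The sketch below shows plain exponential smoothing plus hysteresis as a generic stabilization idea, not FARM's actual controller.

```python
class DampedLinkLoad:
    """Smoothed link-load estimate intended to suppress route flapping."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha                  # smoothing factor (lower = more damping)
        self.estimate = None

    def update(self, measured_load):
        if self.estimate is None:
            self.estimate = measured_load
        else:
            self.estimate = (1 - self.alpha) * self.estimate + self.alpha * measured_load
        return self.estimate

def should_switch(current_route_load, candidate_route_load, margin=0.15):
    """Only re-route if the candidate is clearly better, not marginally better."""
    return candidate_route_load < (1 - margin) * current_route_load

link = DampedLinkLoad()
for raw in [0.2, 0.9, 0.3, 0.85, 0.35]:          # oscillating raw measurements
    print(f"raw {raw:.2f} -> smoothed {link.update(raw):.2f}")
print("switch from a 0.55-load route to a 0.50-load route?",
      should_switch(current_route_load=0.55, candidate_route_load=0.50))
```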
2022-12-01
Fang, Xiaojie, Yin, Xinyu, Zhang, Ning, Sha, Xuejun, Zhang, Hongli, Han, Zhu.  2021.  Demonstrating Physical Layer Security Via Weighted Fractional Fourier Transform. IEEE INFOCOM 2021 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :1–2.
Recently, there has been significant enthusiasm for exploiting physical (PHY-) layer characteristics for secure wireless communication. However, most existing PHY-layer security paradigms are information-theoretical methodologies, which are infeasible for real, practical systems. In this paper, we propose a weighted fractional Fourier transform (WFRFT) pre-coding scheme to enhance the security of wireless transmissions against eavesdropping. By leveraging the concept of WFRFT, the proposed scheme can easily change the characteristics of the underlying radio signals to complement and secure upper-layer cryptographic protocols. We demonstrate a running prototype based on the LTE framework. First, the compatibility between the WFRFT pre-coding scheme and the conventional LTE architecture is presented. Then, the security mechanism of the WFRFT pre-coding scheme is demonstrated. Experimental results validate the practicability and security performance superiority of the proposed scheme.
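
The 4-term weighted fractional Fourier transform underlying such schemes can be written as F^a = sum_l w_l(a) F^l, where F is the unitary DFT (F^4 = I) and the weights w_l(a) = (1/4) sum_k exp(j*pi*k*(a-l)/2) follow from its spectral decomposition. The numpy sketch below illustrates this transform and its parameter-keyed invertibility; it is not the authors' LTE prototype.

```python
import numpy as np

def wfrft(x, alpha):
    """4-term weighted fractional Fourier transform: F^alpha = sum_l w_l(alpha) F^l."""
    x = np.asarray(x, dtype=complex)
    # Powers of the unitary DFT: F^0 = I, F^1 = DFT, F^2 = parity, F^3 = inverse DFT.
    powers = [x]
    for _ in range(3):
        powers.append(np.fft.fft(powers[-1], norm="ortho"))
    # Weights from the spectral decomposition of F (eigenvalues exp(j*pi*k/2), k = 0..3).
    k = np.arange(4)
    weights = [np.mean(np.exp(1j * np.pi * k * (alpha - l) / 2)) for l in range(4)]
    return sum(w * p for w, p in zip(weights, powers))

sig = np.random.default_rng(0).standard_normal(64)
scrambled = wfrft(sig, alpha=0.7)          # "pre-coded" signal, noise-like without the parameter
recovered = wfrft(scrambled, alpha=-0.7)   # receiver inverts with the shared parameter
print(np.allclose(recovered, sig))         # True: F^a followed by F^{-a} is the identity
```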
2022-11-08
Boo, Yoonho, Shin, Sungho, Sung, Wonyong.  2020.  Quantized Neural Networks: Characterization and Holistic Optimization. 2020 IEEE Workshop on Signal Processing Systems (SiPS). :1–6.
Quantized deep neural networks (QDNNs) are necessary for low-power, high-throughput, and embedded applications. Previous studies mostly focused on developing optimization methods for the quantization of given models. However, quantization sensitivity depends on the model architecture, and the characteristics of weight and activation quantization are quite different. This study proposes a holistic approach to the optimization of QDNNs, which contains QDNN training methods as well as quantization-friendly architecture design. Synthesized data is used to visualize the effects of weight and activation quantization. The results indicate that deeper models are more sensitive to activation quantization, while wider models improve the resiliency to both weight and activation quantization.
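
Weight and activation quantization as studied in such work reduces to mapping float tensors onto a small uniform grid. Below is a minimal numpy sketch of symmetric k-bit quantization, shown as an illustrative baseline rather than the paper's training method.

```python
import numpy as np

def quantize_symmetric(x, n_bits):
    """Uniform symmetric quantization to n_bits (e.g. weights or activations of a QDNN)."""
    qmax = 2 ** (n_bits - 1) - 1                  # e.g. 127 for 8 bits, 7 for 4 bits
    scale = np.max(np.abs(x)) / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale                              # dequantized values used in the forward pass

rng = np.random.default_rng(0)
w = rng.normal(0, 0.5, size=(256, 256))           # stand-in for a layer's weights
for bits in (8, 4, 2):
    err = np.mean((w - quantize_symmetric(w, bits)) ** 2)
    print(f"{bits}-bit weights: MSE {err:.2e}")
```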
2022-10-06
Fahrianto, Feri, Kamiyama, Noriaki.  2021.  The Dual-Channel IP-to-NDN Translation Gateway. 2021 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN). :1–2.
The co-existence of the Internet Protocol (IP) and the Named-Data Networking (NDN) protocol is inevitable during the transition period. We propose a privacy-preserving translation method between IP and NDN called the dual-channel translation gateway. The gateway provides two different channels, dedicated to the interest and the data packet respectively, to translate IP to the NDN protocol and vice versa. Additionally, a name resolution table at the gateway securely binds an IP packet to a prefix name. Moreover, we compare the performance of the dual-channel gateway with that of an encapsulation gateway.
2022-09-09
Wilke, Luca, Wichelmann, Jan, Sieck, Florian, Eisenbarth, Thomas.  2021.  undeSErVed trust: Exploiting Permutation-Agnostic Remote Attestation. 2021 IEEE Security and Privacy Workshops (SPW). :456—466.

The ongoing trend of moving data and computation to the cloud is met with concerns regarding privacy and protection of intellectual property. Cloud Service Providers (CSP) must be fully trusted to not tamper with or disclose processed data, hampering adoption of cloud services for many sensitive or critical applications. As a result, CSPs and CPU manufacturers are rushing to find solutions for secure and trustworthy outsourced computation in the Cloud. While enclaves, like Intel SGX, are strongly limited in terms of throughput and size, AMD’s Secure Encrypted Virtualization (SEV) offers hardware support for transparently protecting code and data of entire VMs, thus removing the performance, memory and software adaption barriers of enclaves. Through attestation of boot code integrity and means for securely transferring secrets into an encrypted VM, CSPs are effectively removed from the list of trusted entities. There have been several attacks on the security of SEV, by abusing I/O channels to encrypt and decrypt data, or by moving encrypted code blocks at runtime. Yet, none of these attacks have targeted the attestation protocol, the core of the secure computing environment created by SEV. We show that the current attestation mechanism of Zen 1 and Zen 2 architectures has a significant flaw, allowing us to manipulate the loaded code without affecting the attestation outcome. An attacker may abuse this weakness to inject arbitrary code at startup–and thus take control over the entire VM execution, without any indication to the VM’s owner. Our attack primitives allow the attacker to do extensive modifications to the bootloader and the operating system, like injecting spy code or extracting secret data. We present a full end-to-end attack, from the initial exploit to leaking the key of the encrypted disk image during boot, giving the attacker unthrottled access to all of the VM’s persistent data.

2022-08-26
Zhang, Yuan, Li, Jian, Yang, Jiayu, Xing, Yitao, Zhuang, Rui, Xue, Kaiping.  2021.  Low Priority Congestion Control for Multipath TCP. 2021 IEEE Global Communications Conference (GLOBECOM). :1–6.

Many applications are bandwidth-consuming but may tolerate longer flow completion times. Multipath protocols, such as Multipath TCP (MPTCP), can offer bandwidth aggregation and resilience to link failures for such applications, and low priority congestion control (LPCC) mechanisms can make these applications yield to other, time-sensitive ones. Properly combining the two can improve the overall user experience. However, the existing LPCC mechanisms are not adequate for MPTCP: they do not take into account the characteristics of multiple network paths and cannot ensure fairness among flows of the same priority. Therefore, we propose a multipath LPCC mechanism, Dynamic Coupled Low Extra Delay Background Transport, named DC-LEDBAT. Our scheme is designed on top of the standardized LPCC mechanism LEDBAT. To avoid unfairness among flows of the same priority, DC-LEDBAT trades a little throughput for precise measurement of the minimum delay. Moreover, to be friendly to single-path LEDBAT, our scheme leverages the correlation of the queuing delay to detect whether multiple paths go through a shared bottleneck. DC-LEDBAT then couples the congestion windows at shared bottlenecks to control the sending rate. We implement DC-LEDBAT in the Linux kernel, and experimental results show that DC-LEDBAT can not only utilize the excess bandwidth of MPTCP but also ensure fairness among flows of the same priority.
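
The LEDBAT building block that DC-LEDBAT starts from adjusts the congestion window in proportion to how far the measured one-way queuing delay is from a fixed target (RFC 6817 uses a 100 ms target). The per-ACK update below is a simplified generic sketch of that rule, not the DC-LEDBAT kernel code; the multipath coupling and shared-bottleneck detection described above sit on top of it.

```python
TARGET_S = 0.100     # LEDBAT target queuing delay (RFC 6817)
GAIN     = 1.0       # at most one MSS of growth per RTT
MSS      = 1448      # bytes

def ledbat_on_ack(cwnd, bytes_acked, current_owd, base_owd):
    """Grow cwnd when queuing delay is below target, shrink it when above (yield to others)."""
    queuing_delay = current_owd - base_owd           # base_owd: minimum observed one-way delay
    off_target = (TARGET_S - queuing_delay) / TARGET_S
    cwnd += GAIN * off_target * bytes_acked * MSS / cwnd
    return max(cwnd, 2 * MSS)                        # keep a minimal window

cwnd = 10 * MSS
for owd in (0.02, 0.05, 0.12, 0.18):                 # rising one-way delay => back off
    cwnd = ledbat_on_ack(cwnd, bytes_acked=MSS, current_owd=owd, base_owd=0.02)
    print(f"owd {owd*1000:.0f} ms -> cwnd {cwnd/MSS:.2f} MSS")
```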

Flohr, Julius, Rathgeb, Erwin P..  2021.  Reducing End-to-End Delays in WebRTC using the FSE-NG Algorithm for SCReAM Congestion Control. 2021 IEEE 18th Annual Consumer Communications & Networking Conference (CCNC). :1–4.
The 2020 coronavirus pandemic has shown that online real-time multimedia communication is of vital importance when regular face-to-face meetings are not possible. One popular choice for conducting these meetings is the open standard WebRTC, which is implemented in every major web browser. Even though this technology has found widespread use, there are still open issues with how the different congestion control (CC) algorithms of Media- and DataChannels interact. In 2018 we showed that the issue of self-inflicted queuing delay can be mitigated by introducing a CC coupling mechanism called FSE-NG. Originally, this solution was only capable of linking DataChannel flows controlled by TCP-style CCs with MediaChannels controlled by the NADA CC. Standardization has progressed, and along with NADA, the IETF has also standardized the RTP CC SCReAM. This work extends the FSE-NG algorithm to also incorporate flows controlled by the latter algorithm. By means of simulation, we show that our approach is capable of drastically reducing end-to-end delays while also increasing RTP throughput, thus enabling WebRTC communication in scenarios where it was not applicable before.
Muchhala, Yash, Singhania, Harshit, Sheth, Sahil, Devadkar, Kailas.  2021.  Enabling MapReduce based Parallel Computation in Smart Contracts. 2021 6th International Conference on Inventive Computation Technologies (ICICT). :537—543.
Smart Contract-based cryptocurrencies such as Ethereum are becoming increasingly popular in various domains, but with this increase in popularity comes a significant decrease in throughput and efficiency. Smart Contracts are executed by every miner in the system serially, without any parallelism either between or within Smart Contracts. Such serial execution inhibits the scalability required to obtain extremely high throughput for computationally intensive tasks deployed with such Smart Contracts. While significant advancements have been made in the field of concurrency, from GPU architectures that enable massively parallel computation to tools such as MapReduce that distribute computation over several connected nodes to achieve higher performance in distributed systems, none are incorporated in blockchain-based distributed computing. The team proposes a novel blockchain that allows public nodes in a permission-independent blockchain to deploy and run Smart Contracts that provide concurrency-related functionalities within the Smart Contract framework. In this paper, the researchers present "ConCurrency," a blockchain network capable of handling big data-based computations. The technique is based on currently used distributed system paradigms, such as MapReduce, while also allowing for fundamental parallelly computable problems. Concurrency is achieved using a sharding protocol incorporated with consensus mechanisms to ensure high scalability, high reliability, and better efficiency. A detailed methodology and a comprehensive analysis of the proposed blockchain further indicate a significant increase in throughput for parallelly computable tasks, as detailed in this paper.
Ganguli, Mrittika, Ranganath, Sunku, Ravisundar, Subhiksha, Layek, Abhirupa, Ilangovan, Dakshina, Verplanke, Edwin.  2021.  Challenges and Opportunities in Performance Benchmarking of Service Mesh for the Edge. 2021 IEEE International Conference on Edge Computing (EDGE). :78—85.
As Edge deployments move closer towards the end devices, low latency communication among Edge aware applications is one of the key tenants of Edge service offerings. In order to simplify application development, service mesh architectures have emerged as the evolutionary architectural paradigms for taking care of bulk of application communication logic such as health checks, circuit breaking, secure communication, resiliency (among others), thereby decoupling application logic with communication infrastructure. The latency to throughput ratio needs to be measurable for high performant deployments at the Edge. Providing benchmark data for various edge deployments with Bare Metal and virtual machine-based scenarios, this paper digs into architectural complexities of deploying service mesh at edge environment, performance impact across north-south and east-west communications in and out of a service mesh leveraging popular open-source service mesh Istio/Envoy using a simple on-prem Kubernetes cluster. The performance results shared indicate performance impact of Kubernetes network stack with Envoy data plane. Microarchitecture analyses indicate bottlenecks in Linux based stacks from a CPU micro-architecture perspective and quantify the high impact of Linux's Iptables rule matching at scale. We conclude with the challenges in multiple areas of profiling and benchmarking requirement and a call to action for deploying a service mesh, in latency sensitive environments at Edge.
2022-07-01
Matri, Pierre, Ross, Robert.  2021.  Neon: Low-Latency Streaming Pipelines for HPC. 2021 IEEE 14th International Conference on Cloud Computing (CLOUD). :698—707.
Real-time data analysis in the context of, e.g., real-time monitoring or computational steering is an important tool in many fields of science, allowing scientists to make the best use of limited resources such as sensors and HPC platforms. These tools typically rely on large amounts of continuously collected data that need to be processed in near-real time to avoid wasting compute, storage, and networking resources. Streaming pipelines are a natural fit for this use case but are inconvenient to use on high-performance computing (HPC) systems because their system software environment diverges from that of big data platforms, increasing both the cost and the complexity of the solution. In this paper we propose Neon, a clean-slate design of a streaming data processing framework for HPC systems that enables users to create arbitrarily large streaming pipelines. Experimental results on the Bebop supercomputer show significant performance improvements compared with Apache Storm, with up to 2x higher throughput and reduced latency.
Kawashima, Ryota.  2021.  A Vision to Software-Centric Cloud Native Network Functions: Achievements and Challenges. 2021 IEEE 22nd International Conference on High Performance Switching and Routing (HPSR). :1—7.
Network slicing qualitatively transforms network infrastructures such that they have maximum flexibility in the context of ever-changing service requirements. While the agility of cloud native network functions (CNFs) demonstrates significant promise, virtualization and softwarization severely degrade the performance of such network functions. Considerable efforts have been expended to improve the performance of virtualized systems, and at this stage 10 Gbps throughput is a realistic target even for container/VM-based applications. Nonetheless, the current performance of CNFs with state-of-the-art enhancements does not meet the performance requirements of next-generation 6G networks that aim for terabit-class throughput. The present pace of performance enhancements in hardware indicates that straightforward optimization of existing system components has limited potential to fill the performance gap. As it would not be reasonable to expect a single silver-bullet technology to dramatically enhance the capability of CNFs, an organic integration of various data-plane technologies under a comprehensive vision is a potential approach. In this paper, we present a future vision of a system architecture for terabit-class CNFs based on effective harmonization of technologies within the wide range of network systems built from commodity hardware devices. We focus not only on the performance aspect of CNFs but also on other pragmatic aspects such as interoperability with the current environment (not a clean slate). We also highlight the remaining missing-link technologies revealed by this goal-oriented approach.
Yin, Jinyu, Jiang, Li, Zhang, Xinggong, Liu, Bin.  2021.  INTCP: Information-centric TCP for Satellite Network. 2021 4th International Conference on Hot Information-Centric Networking (HotICN). :86—91.
Satellite networks are booming as a way to provide high-speed, low-latency Internet access, but the transport layer has become one of the main obstacles. Legacy end-to-end TCP was designed for terrestrial networks and is not suitable for error-prone satellite links with varying propagation delay and intermittent connectivity. It is therefore necessary to make a clean-slate design for the satellite transport layer. This paper introduces INTCP, a novel information-centric, hop-by-hop transport layer design. It carries out hop-by-hop packet retransmission and hop-by-hop congestion control with the help of caching and a request-response model. Hop-by-hop retransmission recovers lost packets at each hop, reducing retransmission delay. INTCP also controls traffic and congestion hop by hop; each hop tries its best to maximize its bandwidth utilization, improving end-to-end throughput. The caching capability enables asynchronous multicast at the transport layer, which would save precious spectrum resources in the satellite network. The performance of INTCP is evaluated with a simulated Starlink constellation, with long-distance communication over more than 1000 km. The results demonstrate that, in the unicast scenario, INTCP reduces one-way delay by 42% and delay jitter by 53%, and improves throughput by 60% compared with legacy TCP. In the multicast scenario, INTCP achieves more than 6X the throughput.
Ciko, Kristjon, Welzl, Michael, Teymoori, Peyman.  2021.  PEP-DNA: A Performance Enhancing Proxy for Deploying Network Architectures. 2021 IEEE 29th International Conference on Network Protocols (ICNP). :1—6.
Deploying a new network architecture in the Internet requires changing some, but not necessarily all elements between communicating applications. One way to achieve gradual deployment is a proxy or gateway which "translates" between the new architecture and TCP/IP. We present such a proxy, called "Performance Enhancing Proxy for Deploying Network Architectures (PEP-DNA)", which allows TCP/IP applications to benefit from advanced features of a new network architecture without having to be redeveloped. Our proxy is a kernel-based Linux implementation which can be installed wherever a translation needs to occur between a new architecture and TCP/IP domains. We discuss the proxy operation in detail and evaluate its efficiency and performance in a local testbed, demonstrating that it achieves high throughput with low additional latency overhead. In our experiments, we use the Recursive InterNetwork Architecture (RINA) and Information-Centric Networking (ICN) as examples, but our proxy is modular and flexible, and hence enables realistic gradual deployment of any new "clean-slate" approaches.