Biblio

Filters: Keyword is telecommunication scheduling
2021-04-08
Lin, X., Zhang, Z., Chen, M., Sun, Y., Li, Y., Liu, M., Wang, Y., Liu, M..  2020.  GDGCA: A Gene Driven Cache Scheduling Algorithm in Information-Centric Network. 2020 IEEE 3rd International Conference on Information Systems and Computer Aided Education (ICISCAE). :167–172.
The disadvantages and inextensibility of the traditional network call for novel thinking about the future network architecture. ICN (Information-Centric Network) is an information-centered, self-caching network deeply rooted in the 5G era, whose concept is user-centered and content-centered. Although ICN enables cache replacement of content, an information distribution scheduling algorithm is still needed to allocate resources properly because of its limited cache capacity. This paper starts with data popularity, information epilepsy, and other data-related attributes in the ICN environment, analyzes the factors affecting the cache, and proposes the concept and calculation method of the Gene value. Since ICN is still in a theoretical state, this paper describes an ICN scenario close to reality and proposes a greedy caching algorithm named GDGCA (Gene Driven Greedy Caching Algorithm). GDGCA is designed against an optimal simulation model based on the ideas of throughput balance and satisfaction degree (SSD), and is compared with regular distributed scheduling algorithms from related research fields on QoE indexes and satisfaction degree under different Poisson data volumes and cycles. The final simulation results prove that GDGCA performs better in cache scheduling at the ICN edge router, especially with the aid of the Information Gene value.
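The abstract does not spell out how the Gene value is computed; as a hypothetical sketch, suppose each content item's gene value is a weighted mix of popularity and freshness (the weights, attributes, and names below are assumptions, not the paper's formula), and the edge router greedily caches the highest-valued items that fit:

```python
from dataclasses import dataclass

@dataclass
class Content:
    name: str
    size: int          # cache units consumed
    popularity: float  # estimated request frequency
    freshness: float   # 0..1, recent content scores higher

def gene_value(c: Content, w_pop: float = 0.7, w_fresh: float = 0.3) -> float:
    """Hypothetical 'gene value': a weighted mix of popularity and freshness."""
    return w_pop * c.popularity + w_fresh * c.freshness

def greedy_cache(items: list[Content], capacity: int) -> list[str]:
    """Greedily admit items in descending gene value until the cache is full."""
    cached, used = [], 0
    for c in sorted(items, key=gene_value, reverse=True):
        if used + c.size <= capacity:
            cached.append(c.name)
            used += c.size
    return cached
```

With a 10-unit cache, the two highest-valued items that fit are admitted and the rest are evicted candidates.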
2021-03-16
Sharma, P., Nair, J., Singh, R..  2020.  Adaptive Flow-Level Scheduling for the IoT MAC. 2020 International Conference on COMmunication Systems & NETworkS (COMSNETS). :515–518.

Over the past decade, distributed CSMA, which forms the basis for WiFi, has been deployed ubiquitously to provide seamless and high-speed mobile internet access. However, distributed CSMA might not be ideal for future IoT/M2M applications, where the density of connected devices/sensors/controllers is expected to be orders of magnitude higher than that in present wireless networks. In such high-density networks, the overhead associated with completely distributed MAC protocols will become a bottleneck. Moreover, IoT communications are likely to have strict QoS requirements, for which the "best-effort" scheduling by present WiFi networks may be unsuitable. This calls for a clean-slate redesign of the wireless MAC taking into account the requirements for future IoT/M2M networks. In this paper, we propose a reservation-based (for minimal overhead) wireless MAC designed specifically with IoT/M2M applications in mind.
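As an illustrative sketch of the reservation-based idea (the flow demands, frame length, and grant order here are assumptions, not the paper's design), a coordinator could grant each flow a fixed set of slots per frame once at flow setup, so no per-packet contention overhead is paid afterwards:

```python
def reserve_slots(flows: dict[str, int], frame_len: int) -> dict[str, list[int]]:
    """One-shot reservation: flows maps flow id -> slots requested per frame.
    Slots are granted flow by flow; a real design would interleave grants
    across the frame to control jitter."""
    total = sum(flows.values())
    if total > frame_len:
        raise ValueError("frame cannot satisfy all reservations")
    schedule: dict[str, list[int]] = {f: [] for f in flows}
    slot = 0
    for f, demand in flows.items():
        for _ in range(demand):
            schedule[f].append(slot)
            slot += 1
    return schedule
```

Admission control is implicit: a new flow is rejected when the frame has no spare slots.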

2020-12-21
Portaluri, G., Giordano, S..  2020.  Gambling on fairness: a fair scheduler for IIoT communications based on the shell game. 2020 IEEE 25th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD). :1–6.
The Industrial Internet of Things (IIoT) paradigm is nowadays a cornerstone of industrial automation, since it has introduced new features and services for different environments and has enabled the connection of industrial machine sensors and actuators both to local processing and to the Internet. One of the most advanced network protocol stacks developed for IoT-IIoT networks is 6LoWPAN, which supports IPv6 on top of Low-power Wireless Personal Area Networks (LoWPANs). 6LoWPAN is usually coupled with the IEEE 802.15.4 low-bitrate, low-energy MAC protocol, which relies on the time-slotted channel hopping (TSCH) technique. In TSCH networks, a coordinator node synchronizes all end-devices and specifies whether (and when) they can transmit, in order to improve their energy efficiency. In this scenario, the scheduling strategy adopted by the coordinator plays a crucial role and dramatically impacts network performance. In this paper, we present a novel scheduling strategy for time-slot allocation in IIoT communications which aims at improving overall network fairness. The proposed strategy mimics the well-known shell game, turning the totally unfair mechanics of this game into a fair scheduling strategy. We compare our proposal with three allocation strategies and evaluate the fairness of each scheduler, showing that our allocator outperforms the others.
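A minimal sketch of the idea, under the assumption that each frame offers fewer slots than there are contending nodes and that the "shell game" amounts to randomly shuffling which nodes win the slots each frame (the paper's actual mechanics may differ):

```python
import random

def shell_game_schedule(nodes: list[str], slots_per_frame: int,
                        n_frames: int, seed: int = 0) -> dict[str, int]:
    """Each frame, shuffle the nodes (the 'shells') and grant the frame's
    slots to the first nodes in the shuffled order; over many frames the
    random shuffle treats every node alike, giving long-run fairness."""
    rng = random.Random(seed)
    grants = {n: 0 for n in nodes}
    for _ in range(n_frames):
        order = nodes[:]
        rng.shuffle(order)
        for n in order[:slots_per_frame]:
            grants[n] += 1
    return grants

def jains_index(grants: dict[str, int]) -> float:
    """Jain's fairness index: 1.0 means a perfectly even allocation."""
    xs = list(grants.values())
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))
```

Jain's index is a standard way to score such an allocator; a round-robin baseline scores 1.0 exactly, while the shuffled allocator approaches it as the number of frames grows.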
2020-12-02
Lübben, R., Morgenroth, J..  2019.  An Odd Couple: Loss-Based Congestion Control and Minimum RTT Scheduling in MPTCP. 2019 IEEE 44th Conference on Local Computer Networks (LCN). :300–307.

Selecting the best path in multi-path heterogeneous networks is challenging. Multipath TCP by default uses a scheduler that selects the path with the minimum round-trip time (minRTT). A well-known problem is head-of-line blocking at the receiver when packets arrive out of order on different paths. We shed light on another issue that occurs when the scheduler has to deal with deep queues in the network. First, we highlight its relevance with a real-world experiment in cellular networks, which often deploy deep queues. Second, we elaborate on the issues with minRTT scheduling and deep queues in a simplified network to illustrate the root cause: the interaction of the minRTT scheduler and loss-based congestion control, which causes extensive bufferbloat at network elements and distorts RTT measurements. This results in extraordinarily large buffer sizes being required for full utilization. Finally, we discuss mitigation techniques and show how alternative congestion control algorithms mitigate the effect.
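The default scheduler the abstract refers to can be sketched in a few lines; the queue-inflation helper below illustrates, under a simple fluid assumption (not the paper's model), why a deep loss-free queue distorts the RTT signal the scheduler relies on:

```python
def minrtt_pick(srtt: dict[str, float]) -> str:
    """MPTCP's default scheduler: send on the subflow with the smallest
    smoothed RTT. srtt maps path id -> smoothed RTT in ms."""
    return min(srtt, key=srtt.get)

def queue_inflated_rtt(base_rtt_ms: float, queue_pkts: int,
                       rate_pps: float) -> float:
    """With a deep queue and loss-based congestion control, the queue fills
    before any loss occurs, so the measured RTT grows with queue occupancy;
    the scheduler then reacts to bufferbloat, not to the true path delay."""
    return base_rtt_ms + 1000.0 * queue_pkts / rate_pps
```

A 100-packet standing queue on a 1000 pkt/s link adds 100 ms to a 30 ms base RTT, enough to flip the scheduler's path preference even though the underlying path has not changed.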

2020-10-05
Chen, Jen-Jee, Tsai, Meng-Hsun, Zhao, Liqiang, Chang, Wei-Chiao, Lin, Yu-Hsiang, Zhou, Qianwen, Lu, Yu-Zhang, Tsai, Jia-Ling, Cai, Yun-Zhan.  2019.  Realizing Dynamic Network Slice Resource Management based on SDN networks. 2019 International Conference on Intelligent Computing and its Emerging Applications (ICEA). :120–125.
It is expected that the concept of the Internet of Everything will be realized in 2020 with the coming of 5G wireless communication technology. Internet of Things (IoT) services in various fields require different types of network service features, such as mobility, security, bandwidth, latency, reliability and control strategies. To satisfy these complex requirements and provide customized services, a new network architecture is needed. To change the control mode used in the traditional network architecture, the Software-Defined Network (SDN) has been proposed. First, SDN divides the network into the control plane and the data plane, and delegates network management authority to the controller of the control layer; this allows centralized control of connections for a large number of devices. Second, SDN can help realize network slicing at the network layer. With the network slicing technology proposed for 5G, the 5G network can be partitioned into multiple virtual networks, each supporting the needs of diverse users. In this work, we design and develop a network slicing framework. The contributions of this article are twofold. First, through SDN technology, we provide corresponding end-to-end (E2E) network slices for IoT applications with different requirements. Second, we develop a dynamic network slice resource scheduling and management method based on SDN to meet service requirements with time-varying characteristics, as is usually observed in streaming and services with bursty traffic. A prototype system is completed, and its effectiveness is demonstrated using an electronic fence application as a use case.
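A sketch of what dynamic slice resource scheduling could look like, assuming a single shared capacity, per-slice guaranteed minimums, and proportional sharing of the remainder by excess demand (all assumptions for illustration; the paper's method is not reproduced here):

```python
def reschedule_slices(capacity: float, demand: dict[str, float],
                      min_share: dict[str, float]) -> dict[str, float]:
    """Give every slice its guaranteed minimum, then split the remaining
    capacity in proportion to each slice's current excess demand. Re-running
    this each interval tracks time-varying (e.g. bursty streaming) demand."""
    alloc = dict(min_share)
    spare = capacity - sum(min_share.values())
    excess = {s: max(0.0, demand[s] - min_share[s]) for s in demand}
    total_excess = sum(excess.values())
    if total_excess > 0:
        for s in alloc:
            alloc[s] += spare * excess[s] / total_excess
    return alloc
```

When a video slice's demand bursts, it absorbs the spare capacity while the other slices keep their guaranteed floor.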
2020-09-08
Perello, Jordi, Lopez, Albert, Careglio, Davide.  2019.  Experimenting with Real Application-specific QoS Guarantees in a Large-scale RINA Demonstrator. 2019 22nd Conference on Innovation in Clouds, Internet and Networks and Workshops (ICIN). :31–36.
This paper reports the definition, setup and obtained results of the Fed4FIRE+ medium experiment ERASER, aimed at evaluating the actual Quality of Service (QoS) guarantees that the clean-slate Recursive InterNetwork Architecture (RINA) can deliver to heterogeneous applications at large scale. To this goal, a 37-node 5G metro/regional RINA network scenario, spanning from the end-user to the server where applications run in a datacenter, has been configured in the Virtual Wall experimentation facility. This scenario has initially been loaded with synthetic application traffic flows with diverse QoS requirements, thus reproducing different network load conditions. Next, their experienced end-to-end QoS metrics have been measured with two different deployment scenarios of QTA-Mux (i.e., the most accepted candidate scheduling policy for providing RINA with its QoS support). Moreover, on this RINA network scenario loaded with synthetic application traffic flows, a real HD (1080p) video streaming demonstration has also been conducted, setting up video streaming sessions to end-users at different network locations and illustrating the perceived Quality of Experience (QoE). The results obtained in ERASER disclose that, by appropriately deploying and configuring QTA-Mux, RINA can yield effective QoS support, which provided perfect QoE at almost all locations in our demo when video traffic flows were assigned the highest (i.e., Gold) QoS Cube.
2020-08-28
Li, Peng, Min, Xiao-Cui.  2019.  Accurate Marking Method of Network Attacking Information Based on Big Data Analysis. 2019 International Conference on Intelligent Transportation, Big Data Smart City (ICITBS). :228–231.

In the open network environment, offensive information is implanted in the big data environment, so it is necessary to mark the location of network offensive information accurately in order to realize network attack detection. Combined with big data analysis methods, the location of network attack nodes can be realized, but when network attacks cross in series, the performance of attack information tagging is poor. An accurate marking technique for network attack information is therefore proposed based on big data fusion tracking recognition. An adaptive learning model combined with big data is used to mark and sample network attack information, and a feature analysis model of the attack information chain is designed by extracting association rules. The method classifies the data types of network attack nodes, improves network attack detection through task scheduling of the attack information nodes, and realizes accurate marking of network attack information. Simulation results show that the proposed algorithm effectively improves the accuracy of marking offensive information in an open network environment, improves the efficiency of attack detection and the ability of intrusion prevention, and has good application value in the field of network security defense.

2020-03-16
Zhou, Yaqiu, Ren, Yongmao, Zhou, Xu, Yang, Wanghong, Qin, Yifang.  2019.  A Scientific Data Traffic Scheduling Algorithm Based on Software-Defined Networking. 2019 IEEE 21st International Conference on High Performance Computing and Communications; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS). :62–67.
Compared to ordinary Internet applications, the transfer of scientific data flows often has higher requirements for network performance, and network security devices and systems often reduce the efficiency of scientific data transfer. As a new type of network architecture, Software-Defined Networking (SDN) decouples the data plane from the control plane; its programmability allows users to customize the network transfer path and makes the network more intelligent. The Science DMZ model is a private network for scientific data flow transfer which can improve performance under the premise of ensuring network security. This paper combines SDN with the Science DMZ, and designs and implements an SDN-based traffic scheduling algorithm that takes link load into account. In addition to distinguishing scientific data flows from common data flows, the algorithm further distinguishes the scientific data flows of different applications and performs different traffic scheduling of scientific data for specific link states. Experimental results prove that the algorithm can effectively improve the transmission performance of scientific data flows.
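As an illustrative sketch (the path names and the utilization metric are assumptions, not the paper's algorithm), an SDN controller could keep common flows on the default path while steering scientific flows to the currently least-loaded link:

```python
def pick_path(paths: dict[str, float], is_scientific: bool,
              default_path: str) -> str:
    """Common flows stay on the default (security-inspected) path; scientific
    flows are steered to the least-loaded path. paths maps path id -> current
    link utilization in [0, 1], as reported by the controller."""
    if not is_scientific:
        return default_path
    return min(paths, key=paths.get)
```

Per-application differentiation, as in the paper, would amount to calling this with different candidate path sets per application class.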
2020-03-02
Tootaghaj, Diman Zad, La Porta, Thomas, He, Ting.  2019.  Modeling, Monitoring and Scheduling Techniques for Network Recovery from Massive Failures. 2019 IFIP/IEEE Symposium on Integrated Network and Service Management (IM). :695–700.

Large-scale failures in communication networks due to natural disasters or malicious attacks can severely affect critical communications and threaten the lives of people in the affected area. In the absence of a proper communication infrastructure, rescue operations become extremely difficult. Progressive and timely network recovery is therefore key to minimizing losses and facilitating rescue missions. To this end, we focus on network recovery assuming partial and uncertain knowledge of the failure locations. We propose a progressive multi-stage recovery approach that uses incomplete knowledge of failures to find a feasible recovery schedule. Next, we focus on failure recovery of multiple interconnected networks, in particular the interaction between a power grid and a communication network. We then consider network monitoring techniques that can be used to diagnose the performance of individual links and localize soft failures (e.g., highly congested links) in a communication network, and study the optimal selection of monitoring paths to balance identifiability and probing cost. Finally, we address a minimum-disruptive routing framework in software-defined networks. Extensive experimental and simulation results show that our proposed recovery approaches have a lower disruption cost than the state of the art, while allowing a configurable trade-off between identifiability, execution time, repair/probing cost, congestion, and demand loss.

2020-01-21
Zhang, Chiyu, Hwang, Inseok.  2019.  Decentralized Multi-Sensor Scheduling for Multi-Target Tracking and Identity Management. 2019 18th European Control Conference (ECC). :1804–1809.
This paper proposes a multi-target tracking and identity management method with multiple sensors: a primary sensor with a large detection range that provides the targets' state estimates, and multiple secondary sensors capable of recognizing the targets' identities. Each secondary sensor is assigned to a sector of the operation area; it decides which target in its assigned sector to identify and controls itself to identify that target. We formulate the decision-making process as an optimization problem that minimizes the uncertainty of the targets' identities subject to the sensor dynamic constraints. The proposed algorithm is decentralized, since the secondary sensors communicate only with the primary sensor for target information and need not synchronize with each other. By integrating the proposed algorithm with existing multi-target tracking algorithms, we develop a closed-loop multi-target tracking and identity management algorithm. The effectiveness of the proposed algorithm is demonstrated with illustrative numerical examples.
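One plausible reading of "minimize the uncertainty of the targets' identities" is entropy minimization; the toy selector below (an assumption for illustration, ignoring the sensor dynamic constraints the paper models) has each secondary sensor pick the target in its sector whose identity belief is most uncertain:

```python
import math

def identity_entropy(belief: list[float]) -> float:
    """Shannon entropy (bits) of a target's identity belief distribution."""
    return -sum(p * math.log2(p) for p in belief if p > 0)

def pick_target(sector_beliefs: dict[str, list[float]]) -> str:
    """A secondary sensor identifies the target in its sector with maximum
    identity entropy: identifying it reduces uncertainty the most."""
    return max(sector_beliefs, key=lambda t: identity_entropy(sector_beliefs[t]))
```

A target already identified with certainty has zero entropy and is never selected while an ambiguous target remains.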
2018-08-23
Chaturvedi, P., Daniel, A. K..  2017.  Trust aware node scheduling protocol for target coverage using rough set theory. 2017 International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT). :511–514.

Wireless sensor networks have attracted substantial research interest in recent years because of their unique features such as fault tolerance and autonomous operation. Maximizing coverage while accounting for resource scarcity is a crucial problem in wireless sensor networks, and approaches that address this problem while maximizing network lifetime are considered prominent. Node scheduling is one such mechanism. A scheduling strategy that addresses the target coverage problem based on coverage probability and trust values was proposed in the Energy Efficient Coverage Protocol (EECP). In this paper, optimized decision rules for determining the number of active nodes are obtained using rough set theory. The results show that the proposed extension yields fewer decision rules to consider when determining node states in the network, improving network efficiency by reducing the number of packets transmitted and the overhead.
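A toy decision rule in the spirit of EECP's coverage-probability and trust criteria (the thresholds and the two-attribute form are assumptions for illustration; the paper derives its reduced rule set via rough set theory rather than fixed cutoffs):

```python
def node_state(coverage_prob: float, trust: float,
               cov_thresh: float = 0.8, trust_thresh: float = 0.6) -> str:
    """Keep a node ACTIVE only if it is both trusted and contributes enough
    target coverage; otherwise let it SLEEP to conserve energy."""
    if coverage_prob >= cov_thresh and trust >= trust_thresh:
        return "ACTIVE"
    return "SLEEP"
```

Rough set reduction would, in effect, prune attribute combinations like these down to the minimal set of rules that still separates ACTIVE from SLEEP decisions.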

2017-03-08
Farayev, B., Sadi, Y., Ergen, S. C..  2015.  Optimal Power Control and Rate Adaptation for Ultra-Reliable M2M Control Applications. 2015 IEEE Globecom Workshops (GC Wkshps). :1–6.

The main challenge of ultra-reliable machine-to-machine (M2M) control applications is to meet the stringent timing and reliability requirements of control systems despite the adverse properties of wireless communication for delay and packet errors, and the limited battery resources of the sensor nodes. Since the transmission delay and energy consumption of a sensor node are determined by the transmission power and rate of that node and of the concurrently transmitting nodes, the transmission schedule should be optimized jointly with the transmission power and rate of the sensor nodes. It has previously been shown that, in the optimal solution of the joint power control, rate adaptation and scheduling problem, the optimization of power control and rate adaptation for each node subset can be separately formulated, solved, and then used in the scheduling algorithm. However, the power control and rate adaptation problem has only been formulated and solved for the continuous-rate transmission model, in which Shannon's capacity formulation for an Additive White Gaussian Noise (AWGN) wireless channel is used to calculate the maximum achievable rate as a function of the Signal-to-Interference-plus-Noise Ratio (SINR). In this paper, we formulate the power control and rate adaptation problem with the objective of minimizing the time required for the concurrent transmission of a set of sensor nodes while satisfying their transmission delay, reliability and energy consumption requirements, based on the more realistic discrete-rate transmission model, in which only a finite set of transmit rates is supported. We propose a polynomial-time algorithm to solve this problem and prove its optimality. We then combine it with the previously proposed scheduling algorithms and demonstrate its close-to-optimal performance via extensive simulations.
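The discrete-rate model the paper adopts can be sketched as a lookup from SINR to the highest supported rate in a finite rate set (the thresholds and rates below are made-up values for illustration, not the paper's parameters):

```python
def pick_rate(sinr_db: float, rates: list[tuple[float, float]]) -> float:
    """Discrete-rate model: from a finite rate set, pick the highest rate
    whose SINR threshold is met. rates is a list of
    (sinr_threshold_db, rate_bps) pairs."""
    feasible = [r for th, r in rates if sinr_db >= th]
    if not feasible:
        raise ValueError("no supported rate at this SINR")
    return max(feasible)

def tx_time(bits: int, rate_bps: float) -> float:
    """Transmission delay of a packet at the chosen discrete rate (seconds);
    minimizing the total of these times is the paper's objective."""
    return bits / rate_bps
```

Unlike the continuous Shannon model, the achievable rate here jumps in steps, which is what makes the joint power/rate problem combinatorial rather than smooth.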