Bibliography
Industrial robots play an important role in today's industrial production. However, due to the growing number of robot hardware modules and the rapid expansion of software modules, the reliability of operating systems for industrial robots faces severe challenges, especially on lightweight edge computing platforms. Building on current technologies for secure resource isolation and access control, a novel resource management model for a real-time edge system of multiple robot arms is proposed for lightweight edge devices. This resource management model provides the following functions: mission-critical resource classification, secure resource access control, and multi-level secure data isolation and transmission. We also propose a fault location and isolation model for each lightweight edge device, which ensures the reliability of the entire system. Experimental results show that the robot operating system can meet the requirements of hierarchical management and resource access control. Compared with existing methods, the fault location and isolation model can effectively locate and handle the faults generated by the system.
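The abstract names secure resource access control and multi-level data isolation but gives no mechanism. The following is a minimal, hypothetical sketch of what a multi-level security check could look like, assuming Bell-LaPadula-style "no read up, no write down" rules and made-up sensitivity levels; it is not the paper's actual model.

```python
# Toy sketch of one function the abstract names, multi-level security access control:
# a Bell-LaPadula-style "no read up / no write down" check between a subject
# (e.g., a software module) and a classified resource. The levels and the rule
# choice are assumptions for illustration, not the paper's model.
from enum import IntEnum

class Level(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    MISSION_CRITICAL = 2

def can_access(subject: Level, resource: Level, op: str) -> bool:
    if op == "read":    # no read up: the subject must dominate the resource level
        return subject >= resource
    if op == "write":   # no write down: data must not leak to a lower level
        return subject <= resource
    return False

# A normal control module may not read mission-critical trajectory data...
print(can_access(Level.INTERNAL, Level.MISSION_CRITICAL, "read"))    # False
# ...but may write logs upward into a mission-critical audit store.
print(can_access(Level.INTERNAL, Level.MISSION_CRITICAL, "write"))   # True
```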
Software Defined Networking (SDN) provides opportunities for flexible and dynamic traffic engineering. However, in current SDN systems, routing strategies are based on traditional mechanisms that lack real-time adaptability and use resources less efficiently. To overcome these limitations, this paper applies deep learning to improve routing computation in SDN. It proposes the Convolutional Deep Reinforcement Learning (CoDRL) model, a deep reinforcement learning agent for routing optimization in SDN that minimizes mean network delay and packet loss rate. The CoDRL model consists of a Deep Deterministic Policy Gradient (DDPG) agent coupled with a convolutional layer. The proposed model automatically adapts packet routing using network data obtained through the SDN controller and provides routing configurations that reduce network congestion and minimize mean network delay. The proposed deep agent thus exhibits good convergence towards routing configurations that improve network performance.
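The abstract gives no implementation details; the sketch below illustrates, under assumed tensor shapes and a PyTorch setup, how a DDPG actor with a convolutional front end over an SDN traffic matrix could emit per-link routing weights. The layer sizes, the action encoding, and the 8-node example are assumptions, not the paper's code.

```python
# Hypothetical sketch of a CoDRL-style actor: a convolutional front end over the
# SDN traffic matrix feeding a policy head that emits per-link weights.
import torch
import torch.nn as nn

class ConvActor(nn.Module):
    def __init__(self, num_nodes: int, num_links: int):
        super().__init__()
        # Treat the N x N traffic matrix as a 1-channel "image".
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.policy = nn.Sequential(
            nn.Linear(32 * num_nodes * num_nodes, 256), nn.ReLU(),
            nn.Linear(256, num_links),
            nn.Sigmoid(),  # link weights in (0, 1), later fed to the routing computation
        )

    def forward(self, traffic_matrix: torch.Tensor) -> torch.Tensor:
        # traffic_matrix: (batch, 1, N, N) demands measured via the SDN controller
        return self.policy(self.features(traffic_matrix))

# Example: an assumed 8-node topology with 20 directed links.
actor = ConvActor(num_nodes=8, num_links=20)
state = torch.rand(1, 1, 8, 8)   # one observed traffic matrix
link_weights = actor(state)      # DDPG exploration noise would be added here
print(link_weights.shape)        # torch.Size([1, 20])
```

A full DDPG agent would add a critic network, replay buffer, and target networks; in the abstract's framing, the reward would combine measured mean delay and packet loss rate.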
The idea of using multiple paths to transport TCP traffic is attractive because of the benefits it may offer for both redundancy and better utilization of available resources through load balancing. Fixed and mobile network providers frequently employ load balancers that use multiple paths at the per-flow or per-destination level, but very seldom at the per-packet level. Despite the benefits of packet-level load balancing mechanisms (e.g., low computational complexity and high bandwidth utilization), network providers cannot use them, mainly because TCP packet reordering harms TCP performance. Emerging network architectures also support multiple paths, but they face the same obstacle when balancing their load across multiple paths. Indeed, packet-level load balancing research is paralyzed by TCP's vulnerability to reordering. A few TCP variants exist that deal with the TCP packet reordering problem, but due to a lack of end-to-end transparency they have not been widely deployed or adopted. In this paper, we revisit TCP's packet reordering problem and present a transparent and lightweight algorithm, Out-of-Order Robustness for TCP with Transparent Acknowledgment (ACK) Intervention (ORTA), to deal with out-of-order deliveries. ORTA works as a transparent thin layer below TCP and hides the harmful side effects of packet-level load balancing. ORTA monitors all TCP flow packets and shapes ACK traffic, without any modifications to either the TCP sender or receiver side. Since it is transparent to TCP end-points, it can easily be deployed on TCP sender end-hosts (EHs), gateway (GW) routers, or access points (APs). ORTA opens the door for network providers to use per-packet load balancing. The proposed ORTA algorithm is implemented and tested in ns-2. The results show that ORTA prevents TCP performance degradation when per-packet load balancing is used.
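The paper's exact algorithm is not reproduced here; the toy sketch below only illustrates the general idea of a transparent shim that briefly withholds duplicate ACKs caused by reordering so the sender does not trigger fast retransmit. The hold time and duplicate-ACK threshold are assumptions.

```python
# Toy illustration (not ORTA itself) of an ACK-shaping shim placed between a TCP
# receiver and sender: duplicate ACKs caused by out-of-order delivery are briefly
# withheld instead of being forwarded immediately, so reordering is not mistaken
# for loss. Real loss still surfaces once duplicates persist or the timer expires.
import time

class AckShaper:
    def __init__(self, hold_time: float = 0.05, dupack_limit: int = 3):
        self.hold_time = hold_time        # how long a duplicate ACK may be delayed
        self.dupack_limit = dupack_limit  # forward anyway after this many duplicates
        self.last_ack = -1
        self.dup_count = 0
        self.held_since = None

    def on_ack(self, ack_no: int, now: float | None = None) -> bool:
        """Return True if the ACK should be forwarded to the sender now."""
        now = time.monotonic() if now is None else now
        if ack_no > self.last_ack:        # new data acknowledged: forward, reset state
            self.last_ack, self.dup_count, self.held_since = ack_no, 0, None
            return True
        self.dup_count += 1               # duplicate ACK: likely reordering, hold it
        if self.held_since is None:
            self.held_since = now
        # Forward if the hold timer expired or duplicates keep piling up (real loss).
        return (now - self.held_since >= self.hold_time
                or self.dup_count >= self.dupack_limit)

shaper = AckShaper()
print([shaper.on_ack(a) for a in (1000, 1000, 1000, 2000)])  # [True, False, False, True]
```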
In recent years, the integration of Passive Optical Networks (PON) and WiMAX (Worldwide Interoperability for Microwave Access) has attracted great interest among researchers. The continuous demand for large bandwidth with wider coverage is the key driver of this technology. This integration offers a high-speed, cost-efficient solution for Internet access. This paper investigates issues related to traffic grooming, routing, and resource allocation in such hybrid networks. An Elastic Optical Network forms the backbone and is integrated with WiMAX. In this novel approach, traffic grooming is carried out using the light-trail technique to minimize the bandwidth blocking ratio and reduce network resource consumption. Simulations are performed on different network topologies, where traffic is routed through three modes, namely the pure wireless network, the wireless-optical/optical-wireless network, and the pure optical network, taking network congestion into account. The results confirm a reduction in bandwidth blocking ratio in all the given networks, coupled with minimal network resource utilization.
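For illustration only, the snippet below shows how the bandwidth blocking ratio used as the evaluation metric could be computed, and how a request might be steered to the least congested of the three routing modes; the mode names and load values are assumptions, not the paper's simulation setup.

```python
# Illustrative helpers: a bandwidth blocking ratio and a congestion-aware choice
# among the three routing modes mentioned in the abstract. Values are made up.
def bandwidth_blocking_ratio(blocked_bw: float, requested_bw: float) -> float:
    return blocked_bw / requested_bw if requested_bw else 0.0

def pick_mode(loads: dict[str, float]) -> str:
    # Route the next request over the mode with the lowest current utilization.
    return min(loads, key=loads.get)

loads = {"pure-wireless": 0.72, "wireless-optical": 0.41, "pure-optical": 0.58}
print(pick_mode(loads))                        # wireless-optical
print(bandwidth_blocking_ratio(12.0, 300.0))   # 0.04
```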
Resource scheduling in a computing system addresses the problem of packing tasks with multi-dimensional resource requirements and non-functional constraints. The heterogeneity of workload and server characteristics in cloud-scale or Internet-scale systems adds further complexity and new challenges to the problem. Compared with existing solutions based on ad-hoc heuristics, Machine Learning (ML) has the potential to further improve the efficiency of resource management in large-scale systems. In this paper we describe and discuss how ML can be used to automatically understand both workloads and environments, and to help cope with scheduling-related challenges such as consolidating co-located workloads, handling resource requests, guaranteeing applications' QoS, and mitigating tail-latency stragglers. We introduce a generalized ML-based solution to large-scale resource scheduling and demonstrate its effectiveness through a case study on performance-centric node classification and straggler mitigation. We believe that an ML-based method will help achieve architectural optimization and efficiency improvements.
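As a hypothetical illustration of performance-centric node classification, the sketch below clusters nodes by observed runtime metrics and flags the slowest cluster as straggler-prone, so a scheduler could steer latency-sensitive tasks away from it. The features, the number of classes, and the use of k-means are assumptions rather than the case study's actual pipeline.

```python
# Hypothetical sketch: cluster nodes by runtime metrics and label the slowest
# cluster as straggler-prone. Features and thresholds are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

# Per-node features: [CPU utilization, memory pressure, mean task slowdown]
nodes = np.array([
    [0.35, 0.40, 1.1],
    [0.80, 0.75, 2.9],   # heavily loaded, slow
    [0.30, 0.35, 1.0],
    [0.85, 0.90, 3.4],   # heavily loaded, slow
    [0.50, 0.45, 1.3],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(nodes)
# The cluster whose centroid has the highest mean slowdown is labelled straggler-prone.
slow_cluster = int(np.argmax(kmeans.cluster_centers_[:, 2]))
straggler_prone = [i for i, c in enumerate(kmeans.labels_) if c == slow_cluster]
print("straggler-prone nodes:", straggler_prone)   # e.g., [1, 3]
```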
We consider the scenario where a cloud service provider (CSP) operates multiple geo-distributed datacenters to provide Internet-scale service. Our objective is to minimize the total electricity and bandwidth cost by jointly optimizing electricity procurement from wholesale markets and geographical load balancing (GLB), i.e., dynamically routing workloads to locations with cheaper electricity. Under the ideal setting where exact values of market prices and workloads are given, this problem reduces to a simple linear program and is easy to solve. However, under the realistic setting where only distributions of these variables are available, the problem unfolds into a non-convex, infinite-dimensional one and is challenging to solve. One of our main contributions is to develop an algorithm that is proven to solve this challenging problem optimally by exploring the full design space of strategic bidding. Trace-driven evaluations corroborate our theoretical results, demonstrate fast convergence of our algorithm, and show that it can reduce the cost for the CSP by up to 20% compared with baseline alternatives. This paper highlights the intriguing role of uncertainty in workloads and market prices, measured by their variances. While uncertainty in workloads deteriorates the cost-saving performance of joint electricity procurement and GLB, counter-intuitively, uncertainty in market prices can be exploited to achieve a cost reduction even larger than in the setting without price uncertainty.
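Under the ideal setting described above, where prices and workloads are known exactly, the problem reduces to a linear program; the sketch below sets up such an LP with illustrative (made-up) prices, demands, and capacities using SciPy's linprog. It is a minimal illustration of the structure, not the paper's formulation.

```python
# Minimal LP sketch of joint electricity cost and geographical load balancing
# under exactly known prices and workloads. All numbers are illustrative.
import numpy as np
from scipy.optimize import linprog

regions, dcs = 2, 3
demand = np.array([100.0, 150.0])            # workload per front-end region
capacity = np.array([120.0, 120.0, 120.0])   # datacenter capacities
elec = np.array([0.04, 0.06, 0.05])          # $/unit of workload at each datacenter
bw = np.array([[0.010, 0.020, 0.030],        # $/unit bandwidth, region -> datacenter
               [0.030, 0.015, 0.010]])

# Decision variable x[i, j]: workload of region i served at datacenter j (flattened row-major).
cost = (elec[None, :] + bw).ravel()

# Equality constraints: each region's demand is fully served.
A_eq = np.zeros((regions, regions * dcs))
for i in range(regions):
    A_eq[i, i * dcs:(i + 1) * dcs] = 1.0

# Inequality constraints: each datacenter stays within its capacity.
A_ub = np.zeros((dcs, regions * dcs))
for j in range(dcs):
    A_ub[j, j::dcs] = 1.0

res = linprog(cost, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=demand, bounds=(0, None))
print(res.x.reshape(regions, dcs))   # optimal routing matrix
print(res.fun)                       # minimum total electricity + bandwidth cost
```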
A recent trend in the military is to incorporate Internet of Things (IoT) technology into the field to enhance its impact on the battlefield; hence our focus on the Internet of Battlefield Things (IoBT). This paper discusses how a Fog Radio Access Network (F-RAN) can support local computing in the Industrial IoT and IoBT. F-RAN can play a vital role because IoT devices are becoming popular and fifth-generation (5G) communication is emerging, with requirements for ultra-low latency, low energy consumption, bandwidth efficiency, and wide coverage. To overcome the disadvantages of cloud radio access networks (C-RAN), F-RAN can be introduced, in which a large number of F-RAN nodes take part in a joint distributed computing and content sharing scheme. F-RAN in the IoBT is effective for enhancing computing ability with fog computing and edge computing at the network edge. Since the computing capability of fog equipment is weak, this paper illustrates some challenging issues and solutions for overcoming the difficulties of fog computing in the IoBT and improving battlefield efficiency. Accordingly, the distributed computing load balancing problem of the F-RAN is investigated. Simulation results indicate that the load balancing strategy performs well for the F-RAN architecture on the battlefield.
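As an illustration only (not the paper's strategy), the sketch below distributes incoming computing tasks across heterogeneous fog nodes by always assigning the next task to the node with the lowest relative load; the task sizes and node capacities are made up.

```python
# Illustrative least-loaded assignment of computing tasks to fog nodes with
# weak, uneven capacities; a stand-in for a distributed load balancing strategy.
import heapq

def balance(tasks, node_capacities):
    """Greedy least-loaded assignment; returns the node chosen per task and final loads."""
    heap = [(0.0, n) for n in range(len(node_capacities))]   # (relative load, node index)
    heapq.heapify(heap)
    loads = [0.0] * len(node_capacities)
    assignment = []
    for t in tasks:
        _, n = heapq.heappop(heap)          # node with the lowest relative load
        loads[n] += t
        assignment.append(n)
        heapq.heappush(heap, (loads[n] / node_capacities[n], n))
    return assignment, loads

tasks = [4, 7, 3, 9, 2, 6, 5]               # compute demand of incoming IoBT tasks
capacities = [10, 20, 15]                   # heterogeneous fog-node capacities
assignment, loads = balance(tasks, capacities)
print(assignment, loads)
```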