Biblio

Filters: Keyword is resource allocation
2023-05-19
Guo, Yihao, Guo, Chuangxin, Yang, Jie.  2022.  A Resource Allocation Method for Attacks on Power Systems Under Extreme Weather. 2022 IEEE/IAS Industrial and Commercial Power System Asia (I&CPS Asia). :165–169.
This paper addresses the allocation of offensive resources for man-made attacks on power systems under extreme weather conditions, which can help the defender identify the most vulnerable components to protect in this adverse situation. The problem is formulated as an attacker-defender model. The attacker at the upper level intends to maximize the expected damage over all possible line-failure scenarios, characterized by the combinations of transmission lines that fail under extreme weather. Once the disruption is detected, the defender at the lower level alters generation and consumption in the power grid using the DC optimal power flow technique to minimize the damage. The original bi-level problem is then transformed into an equivalent single-level mixed-integer linear program through the strong duality theorem and the Big-M method. The proposed attack resource allocation method is applied to the IEEE 39-bus system and its effectiveness is demonstrated through comparative case studies.
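The reformulation step above hinges on a standard trick: a bilinear term z = x·y between a binary attack variable x and a bounded continuous variable y in [0, M] can be replaced by linear Big-M constraints. A minimal, hypothetical sketch (PuLP assumed available; the bound M, variable names, and toy objective are invented for illustration and are not the paper's model):

```python
# Hypothetical sketch of the Big-M linearization used when collapsing a
# bi-level attacker-defender model into a single-level MILP: enforce
# z = x * y (x binary, y in [0, M]) with linear constraints only.
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, value

M = 100.0                        # assumed upper bound on y
prob = LpProblem("big_m_demo", LpMaximize)
x = LpVariable("x", cat=LpBinary)
y = LpVariable("y", lowBound=0, upBound=M)
z = LpVariable("z", lowBound=0)

prob += z <= M * x               # z forced to 0 when x = 0
prob += z <= y                   # z never exceeds y
prob += z >= y - M * (1 - x)     # z forced up to y when x = 1

prob += z - 0.5 * y              # toy objective: maximize z - 0.5 y
prob.solve()
print(value(x), value(y), value(z))   # -> 1.0 100.0 100.0
```

At the optimum the three constraints pin z to x·y exactly, which is what lets the dualized lower-level terms stay linear.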
2023-01-05
Kumar, Ravula Arun, Konda, Srikar Goud, Karnati, Ramesh, Kumar E., Ravi, Ravula, Narender.  2022.  A Diagnostic survey on Sybil attack on cloud and assert possibilities in risk mitigation. 2022 First International Conference on Artificial Intelligence Trends and Pattern Recognition (ICAITPR). :1–6.
Any decentralized, distributed network is susceptible to the Sybil attack, in which a malicious node masquerades as numerous different nodes, collectively referred to as Sybil nodes, disrupting the proper functioning of the network. Cloud computing environments are loosely coupled, meaning that no node has comprehensive knowledge of the entire system, so preventing Sybil attacks in cloud computing systems requires detecting them as soon as they occur. A Sybil attacker can create multiple identities on a single physical device in order to mount a concerted attack on the network, or can switch among identities to make detection more difficult, thereby promoting a lack of accountability throughout the network. This study surveys the varieties of Sybil attack that have been documented, including those that occur in peer-to-peer reputation systems, self-organizing networks, and social network systems. Approaches that have been proposed over time to mitigate or eliminate such attacks are reviewed, and their potential risks are also thoroughly investigated.
2022-12-09
Fakhartousi, Amin, Meacham, Sofia, Phalp, Keith.  2022.  Autonomic Dominant Resource Fairness (A-DRF) in Cloud Computing. 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC). :1626–1631.
In the world of information technology and the Internet, which has become part of everyday life and is constantly expanding, attention to users' requirements such as information security, fast processing, dynamic and instant access, and cost savings has become essential. The technology proposed to address these needs is cloud computing, today considered one of the most essential distributed tools for processing and storing data on the Internet. With its increasing adoption, scheduling tasks to make the best use of resources and respond appropriately to requests has received much attention, and many efforts have been and are being made in this regard. To this end, various algorithms have been proposed for computing resource allocations, each attempting to achieve equitable distribution while maximizing resource utilization. One such method is the Dominant Resource Fairness (DRF) algorithm. Although DRF offers a better approach than previous algorithms, it faces challenges, especially the time-consuming computation of resource allocations; these challenges grow when a small number of requests demand high resource capacity, as well as under a high number of simultaneous requests. This study seeks to reduce the computational costs associated with DRF-based resource allocation by introducing a new approach that automates the calculations with machine learning and artificial intelligence algorithms (Autonomic Dominant Resource Fairness, or A-DRF).
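For background, the classic DRF loop that A-DRF sets out to accelerate repeatedly grants one task to the user with the smallest dominant share. A compact sketch on the canonical two-user example (plain DRF only; the A-DRF automation is not reproduced here):

```python
# Classic Dominant Resource Fairness (DRF): always serve the user whose
# dominant share (max share over any resource) is currently smallest.
capacity = {"cpu": 9.0, "mem": 18.0}
demands = {                      # per-task demand vectors (illustrative)
    "A": {"cpu": 1.0, "mem": 4.0},
    "B": {"cpu": 3.0, "mem": 1.0},
}
used = {r: 0.0 for r in capacity}
alloc = {u: {r: 0.0 for r in capacity} for u in demands}

def dominant_share(user):
    return max(alloc[user][r] / capacity[r] for r in capacity)

while True:
    u = min(demands, key=dominant_share)   # lowest dominant share goes first
    d = demands[u]
    if any(used[r] + d[r] > capacity[r] for r in capacity):
        break                              # simplified: stop at first misfit
    for r in capacity:
        used[r] += d[r]
        alloc[u][r] += d[r]

print(alloc)   # A ends with 3 tasks, B with 2: both reach dominant share 2/3
```

Each iteration recomputes every user's dominant share, which is exactly the per-decision cost the paper's machine-learning layer is meant to cut down.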
2022-09-16
Massey, Keith, Moazen, Nadia, Halabi, Talal.  2021.  Optimizing the Allocation of Secure Fog Resources based on QoS Requirements. 2021 8th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/2021 7th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). :143–148.
Fog computing plays a critical role in the provisioning of computing tasks in the context of Internet of Things (IoT) services. However, the security of IoT services against breaches and attacks relies heavily on the security of fog resources, which must be properly implemented and managed. Increasing security investments and integrating the security aspect into the core processes and operations of fog computing, including resource management, will increase IoT service protection as well as the trustworthiness of fog service providers. However, this requires careful modeling of the security requirements of IoT services as well as theoretical and experimental evaluation of the tradeoff between security and performance in fog infrastructures. To this end, this paper explores a new model for fog resource allocation according to security and Quality of Service (QoS). The problem is modeled as a multi-objective linear optimization problem and solved using conventional, off-the-shelf optimizers by applying the preemptive method. Specifically, two objective functions are defined: one representing the satisfaction of the security design requirements of IoT services, and another modeling the communication delay among the different virtual machines belonging to the same service request, which might be deployed on different intermediary fog nodes. The simulation results show that the optimization is efficient and achieves the required level of scalability in fog computing. Moreover, a tradeoff between the two criteria must be considered during the resource allocation process.
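The preemptive method referenced above solves the prioritized objectives in sequence, constraining each later stage to preserve the earlier optimum. A hedged sketch with SciPy on invented data (security scores, delay costs, and bounds are placeholders, not the paper's formulation):

```python
# Lexicographic (preemptive) two-stage LP: maximize security satisfaction,
# then minimize delay without degrading the stage-1 optimum.
import numpy as np
from scipy.optimize import linprog

security = np.array([0.9, 0.6, 0.3])   # hypothetical per-node security scores
delay    = np.array([5.0, 2.0, 1.0])   # hypothetical inter-VM delay costs
A_eq, b_eq = np.ones((1, 3)), [1.0]    # node fractions must cover the request

# Stage 1: maximize security @ x  (linprog minimizes, so negate)
s1 = linprog(-security, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 0.5)] * 3)
best_security = -s1.fun

# Stage 2: minimize delay @ x subject to security @ x >= best_security
s2 = linprog(delay, A_ub=[-security], b_ub=[-best_security + 1e-9],
             A_eq=A_eq, b_eq=b_eq, bounds=[(0, 0.5)] * 3)
print(s2.x)   # -> [0.5, 0.5, 0.0] with these numbers
```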
2022-07-01
He, Xufeng, Li, Xi, Ji, Hong, Zhang, Heli.  2021.  Resource Allocation for Secrecy Rate Optimization in UAV-assisted Cognitive Radio Network. 2021 IEEE Wireless Communications and Networking Conference (WCNC). :1–6.
Cognitive radio (CR), as a key technology for solving the problem of low spectrum utilization, has attracted wide attention in recent years. However, due to the open nature of radio, communication links can be eavesdropped on by illegal users, posing severe security threats. An unmanned aerial vehicle (UAV) equipped with signal-sensing and data-transmission modules can access unoccupied channels to improve network security by transmitting artificial noise (AN) in CR networks. In this paper, we propose a resource allocation scheme for a UAV-assisted overlay CR network. Based on the result of spectrum sensing, the UAV decides whether to play the role of jammer or secondary transmitter. A power-splitting ratio between the secondary signal and the AN is introduced to allocate the UAV's transmission power. In particular, we jointly optimize the spectrum sensing time, the power-splitting ratio, and the hovering position of the UAV to maximize the total secrecy rate of primary and secondary users. The optimization problem is highly intractable, and we adopt an adaptive inertia coefficient particle swarm optimization (A-PSO) algorithm to solve it. Simulation results show that the proposed scheme significantly improves the total secrecy rate of the CR network.
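To give a concrete flavor of the solver, the sketch below implements particle swarm optimization with an inertia coefficient that adapts over iterations (here, a simple linear decay); the three search dimensions loosely mirror the paper's sensing time, power-splitting ratio, and position variables, but the objective is a stand-in rather than the secrecy-rate expression:

```python
# PSO with an adaptive inertia coefficient; objective() is a placeholder
# for the negated total secrecy rate of the UAV-assisted CR network.
import numpy as np
rng = np.random.default_rng(0)

def objective(p):
    return np.sum((p - 0.3) ** 2, axis=1)   # stand-in; minimum at 0.3

n, dim, iters = 30, 3, 100
x = rng.uniform(0, 1, (n, dim)); v = np.zeros((n, dim))
pbest, pval = x.copy(), objective(x)
gbest = pbest[np.argmin(pval)]

for t in range(iters):
    w = 0.9 - 0.5 * t / iters               # inertia adapts each iteration
    r1, r2 = rng.uniform(size=(2, n, dim))
    v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 1)                # keep variables in their box
    f = objective(x)
    better = f < pval
    pbest[better], pval[better] = x[better], f[better]
    gbest = pbest[np.argmin(pval)]

print(gbest)   # converges near [0.3, 0.3, 0.3]
```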
2022-05-06
Saravanan, M., Sircar, Rana Pratap.  2021.  Quantum Evolutionary Algorithm for Scheduling Resources in Virtualized 5G RAN Environment. 2021 IEEE 4th 5G World Forum (5GWF). :111–116.
Radio is the most important part of any wireless network. The Radio Access Network (RAN) has been virtualized and disaggregated into different functions whose placement is best defined by the requirements and economics of the use case. This virtualized RAN (vRAN) architecture separates network functions from the underlying hardware, so 5G can leverage virtualization of the RAN to implement these functions. The easy expandability and manageability of the vRAN support expansion of network capacity and deployment of new features and algorithms for streamlining resource usage. In this paper, we address the problem of scheduling a 5G vRAN under mid-haul network capacity constraints as a combinatorial optimization problem. We transform it into a Quadratic Unconstrained Binary Optimization (QUBO) problem, solve it using a newly proposed quantum-based algorithm, and compare our implementation with existing classical algorithms. This work demonstrates the advantage of quantum computers in solving a particular optimization problem in the telecommunication domain and paves the way for using quantum computers to solve critical real-world problems faster and better.
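To make the QUBO step concrete, the sketch below encodes a toy two-server load-partitioning choice (a stand-in for vRAN function placement) as minimizing x^T Q x over binary x, solved by brute force where a quantum or annealing solver would take over; the loads and the encoding are illustrative only:

```python
# Balanced two-way partitioning of per-function loads as a QUBO:
# minimize (2 * load @ x - sum(load))^2, expanded into x^T Q x + const.
import itertools
import numpy as np

load = np.array([4.0, 3.0, 2.0, 1.0])          # hypothetical per-function loads
c = load.sum()
Q = 4.0 * np.outer(load, load)                 # pairwise couplings
np.fill_diagonal(Q, 4.0 * load**2 - 4.0 * c * load)

def energy(x):
    x = np.array(x, dtype=float)
    return x @ Q @ x                           # constant c**2 omitted

best = min(itertools.product([0, 1], repeat=len(load)), key=energy)
print(best, energy(best) + c**2)               # perfect split -> 0
```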
2021-11-29
AlShiab, Ismael, Leivadeas, Aris, Ibnkahla, Mohamed.  2021.  Virtual Sensing Networks and Dynamic RPL-Based Routing for IoT Sensing Services. ICC 2021 - IEEE International Conference on Communications. :1–6.
IoT applications are quickly evolving in scope and objectives, and their focus is shifting toward supporting dynamic user requirements. IoT users initiate applications and expect quick and reliable deployment without worrying about the underlying complexities of the required sensing and routing resources. On the other hand, IoT sensing nodes, sinks, and gateways are heterogeneous, have limited resources, and require significant cost and installation time. Sensing network-level virtualization through Virtual Sensing Networks (VSNs) can play an important role here by enabling the formation of virtual groups that link the needed IoT sensing and routing resources. These VSNs can be initiated on demand to satisfy the requirements of different IoT applications. In this context, we present a joint algorithm for IoT Sensing Resource Allocation with Dynamic Resource-Based Routing (SRADRR). The SRADRR algorithm builds on the capabilities that recent standards such as RPL and 6LoWPAN bring to sensing networks. It employs the RPL standard's concepts to create DODAG routing trees that dynamically adapt according to the available sensing resources and the requirements of the running and arriving applications. Our implementation and results for SRADRR reveal promising improvements in the overall application deployment rate.
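As a rough illustration of resource-aware DODAG construction, the toy below lets every node pick the parent minimizing a rank that weights link cost by the candidate's residual sensing capacity; it is a Dijkstra-style stand-in with invented values, not the SRADRR objective:

```python
# Toy RPL-like parent selection: rank(child) = rank(parent) + cost/capacity,
# so resource-rich nodes attract children. Values are invented.
import heapq

links = {("root", "a"): 1.0, ("root", "b"): 2.0,
         ("a", "c"): 1.0, ("b", "c"): 0.5}     # directed link costs
capacity = {"a": 0.9, "b": 0.4, "c": 0.7}      # residual sensing resources

rank, parent = {"root": 0.0}, {}
pq = [(0.0, "root")]
while pq:
    r, u = heapq.heappop(pq)
    if r > rank.get(u, float("inf")):
        continue                                # stale queue entry
    for (x, y), cost in links.items():
        if x == u and r + cost / capacity[y] < rank.get(y, float("inf")):
            rank[y] = r + cost / capacity[y]
            parent[y] = u
            heapq.heappush(pq, (rank[y], y))

print(parent)   # -> {'a': 'root', 'b': 'root', 'c': 'a'}
```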
2021-04-08
Yaseen, Q., Panda, B..  2012.  Tackling Insider Threat in Cloud Relational Databases. 2012 IEEE Fifth International Conference on Utility and Cloud Computing. :215–218.
Cloud security is one of the major issues that worry individuals and organizations about cloud computing. Defending cloud systems against attacks such as insiders' attacks has therefore become a key demand. This paper investigates insider threat in cloud relational database systems (cloud RDMS). It discusses some vulnerabilities in cloud computing structures that may enable insiders to launch attacks, and shows how load balancing across multiple availability zones may facilitate insider threat. To prevent such a threat, the paper suggests three models, a Peer-to-Peer model, a Centralized model, and a Mobile-Knowledgebase model, and addresses the conditions under which each works well.
Lin, X., Zhang, Z., Chen, M., Sun, Y., Li, Y., Liu, M., Wang, Y., Liu, M..  2020.  GDGCA: A Gene Driven Cache Scheduling Algorithm in Information-Centric Network. 2020 IEEE 3rd International Conference on Information Systems and Computer Aided Education (ICISCAE). :167–172.
The disadvantages and inextensibility of traditional networks call for novel thinking about the future network architecture. The Information-Centric Network (ICN) is an information-centered, self-caching network deeply rooted in the 5G era, whose concept is user-centered and content-centered. Although the ICN enables cache replacement of content, an information-distribution scheduling algorithm is still needed to allocate resources properly because of its limited cache capacity. This paper starts from data popularity, information epilepsy, and other data-related attributes in the ICN environment, analyzes the factors affecting the cache, and proposes the concept and calculation method of the Gene value. Since the ICN is still in a theoretical state, this paper describes an ICN scenario close to reality and proposes a greedy caching algorithm named GDGCA (Gene Driven Greedy Caching Algorithm). The GDGCA designs an optimal simulation model based on the ideas of throughput balance and satisfaction degree (SSD), and is compared with regular distributed scheduling algorithms from related research fields, in terms of QoE indexes and satisfaction degree under different Poisson data volumes and cycles. The final simulation results prove that GDGCA performs better in cache scheduling at the ICN edge router, especially with the aid of the Information Gene value.
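Since the Gene value computation itself is the paper's contribution, the sketch below only shows the greedy skeleton such a cache scheduler follows, with a popularity-per-size score standing in for the Gene value; all numbers are invented:

```python
# Greedy value-driven cache fill at an edge router: rank contents by a
# stand-in "gene value" and admit them until capacity runs out.
contents = [   # (name, popularity, size)
    ("v1", 0.50, 4.0), ("v2", 0.30, 1.0), ("v3", 0.15, 2.0), ("v4", 0.05, 1.0),
]
capacity = 5.0

def gene_value(pop, size):          # placeholder for the paper's Gene value
    return pop / size

cache, free = [], capacity
ranked = sorted(contents, key=lambda c: gene_value(c[1], c[2]), reverse=True)
for name, pop, size in ranked:
    if size <= free:
        cache.append(name)
        free -= size

print(cache)   # -> ['v2', 'v1'] with these numbers
```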
2021-03-29
Liao, S., Wu, J., Li, J., Bashir, A. K..  2020.  Proof-of-Balance: Game-Theoretic Consensus for Controller Load Balancing of SDN. IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :231–236.
Software Defined Networking (SDN) focuses on the separation of the control plane and the data plane, greatly enhancing the network's support for heterogeneity and flexibility. However, although the programmable network greatly improves performance in all aspects, flexible load balancing across controllers still challenges the current SDN architecture. Complex application scenarios lead to flexible and changeable communication requirements, making it difficult to guarantee the Quality of Service (QoS) for SDN users. To address this issue, this paper proposes a paradigm that uses blockchain to incentivize safe load balancing among multiple controllers. We propose a controller consortium blockchain for secure and efficient load balancing of multiple controllers, which includes a new cryptographic currency, the balance coin, and a novel consensus mechanism, Proof-of-Balance (PoB). In addition, we design a novel game-theory-based incentive mechanism to encourage controllers with tight communication resources to offload tasks to idle controllers. The security analysis and performance simulation results indicate the superiority and effectiveness of the proposed scheme.
Halabi, T., Wahab, O. A., Zulkernine, M..  2020.  A Game-Theoretic Approach for Distributed Attack Mitigation in Intelligent Transportation Systems. NOMS 2020 - 2020 IEEE/IFIP Network Operations and Management Symposium. :1–6.
Intelligent Transportation Systems (ITS) play a vital role in the development of smart cities. They enable various road safety and efficiency applications, such as optimized traffic management, collision avoidance, and pollution control, through the collection and evaluation of traffic data from Road Side Units (RSUs) and connected vehicles in real time. However, these systems are highly vulnerable to data corruption attacks, which can seriously influence their decision-making abilities. Traditional attack detection schemes do not account for attackers' sophisticated and evolving strategies and ignore the ITS's constraints on security resources. In this paper, we devise a security game model that allows the defense mechanism deployed in the ITS to optimize the distribution of available resources for attack detection while considering mixed attack strategies, in which the attacker targets multiple RSUs in a distributed fashion. In our security game, the utility of the ITS is quantified in terms of detection rate, attack damage, and the relevance of the information transmitted by the RSUs. The proposed approach enables the ITS to mitigate the impact of attacks and increase its resiliency. The results show that our approach reduces the attack impact by at least 20% compared with one that allocates security resources equally across RSUs, irrespective of the attacker's strategies.
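As a toy of the allocation idea only (not the paper's game model), the snippet below spreads a detection budget across RSUs in proportion to expected damage under an assumed mixed attack strategy, with detection probability taken as linear in the allocated resources:

```python
# Risk-proportional detection-resource allocation across RSUs; the attack
# distribution, damages, and linear detection model are all assumptions.
import numpy as np

attack_prob = np.array([0.5, 0.3, 0.2])   # believed mixed strategy over RSUs
damage      = np.array([10.0, 6.0, 8.0])  # damage if an attack goes undetected
budget = 1.0

risk = attack_prob * damage               # expected damage per RSU
alloc = budget * risk / risk.sum()        # more resources where risk is higher
expected_loss = np.sum(risk * (1 - np.minimum(1.0, alloc)))
print(alloc, expected_loss)
```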
2021-03-15
Wang, F., Zhang, X..  2020.  Secure Resource Allocation for Polarization-Based Non-Linear Energy Harvesting Over 5G Cooperative Cognitive Radio Networks. ICC 2020 - 2020 IEEE International Conference on Communications (ICC). :1–6.
We address secure resource allocation for energy harvesting (EH) based 5G cooperative cognitive radio networks (CRNs). To guarantee that the size-limited secondary users (SUs) can simultaneously send the primary user's and their own information, we assume that SUs are equipped with orthogonally dual-polarized antennas (ODPAs). In particular, we propose, develop, and analyze an efficient resource allocation scheme under a practical non-linear EH model, which captures the nonlinear characteristics of end-to-end wireless power transfer (WPT) for radio frequency (RF) based EH circuits. Our numerical results validate that a substantial performance gain can be obtained by employing the non-linear EH model.
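For context, a widely used way to capture that nonlinearity is a logistic (sigmoid) harvested-power curve that saturates with input RF power; the sketch below evaluates such a model with illustrative parameters that are not taken from the paper:

```python
# Logistic non-linear EH model: harvested power rises with input RF power
# and saturates at M. Parameter values are illustrative only.
import numpy as np

M, a, b = 0.02, 150.0, 0.014          # saturation level and circuit parameters
def harvested(p_in):
    logistic = 1.0 / (1.0 + np.exp(-a * (p_in - b)))
    omega = 1.0 / (1.0 + np.exp(a * b))    # shifts the curve so harvested(0) = 0
    return M * (logistic - omega) / (1.0 - omega)

print(harvested(np.array([0.0, 0.01, 0.05, 0.1])))   # saturates near M
```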
2021-02-16
Shi, Y., Sagduyu, Y. E., Erpek, T..  2020.  Reinforcement Learning for Dynamic Resource Optimization in 5G Radio Access Network Slicing. 2020 IEEE 25th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD). :1–6.
The paper presents a reinforcement learning solution to dynamic resource allocation for 5G radio access network slicing. Available communication resources (frequency-time blocks and transmit powers) and computational resources (processor usage) are allocated to stochastic arrivals of network slice requests. Each request arrives with priority (weight), throughput, computational resource, and latency (deadline) requirements, and, if feasible, it is served with available communication and computational resources allocated over its requested duration. As each resource allocation decision makes some of the resources temporarily unavailable for future requests, a myopic solution that optimizes only the current allocation is ineffective for network slicing. Therefore, a Q-learning solution is presented to maximize the network utility, in terms of the total weight of granted network slicing requests over a time horizon, subject to communication and computational constraints. Results show that reinforcement learning provides major improvements in 5G network utility relative to myopic, random, and first-come-first-served solutions. While reinforcement learning sustains scalable performance as the number of served users increases, it can also be used effectively to assign resources to network slices when 5G needs to share the spectrum with incumbent users that may dynamically occupy some of the frequency-time blocks.
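A minimal tabular Q-learning loop conveys the flavor of the approach: the state tracks the discretized free capacity, the action grants or rejects an arriving slice, and the reward is the weight of a granted request. The toy dynamics below are invented and stand in for the paper's 5G environment:

```python
# Tabular Q-learning for slice admission on a toy resource model.
import random
random.seed(0)

LEVELS, ACTIONS = 5, (0, 1)            # free-capacity levels; 0=reject, 1=grant
Q = {(s, a): 0.0 for s in range(LEVELS) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):               # toy dynamics, not the paper's simulator
    if action == 1 and state > 0:
        return state - 1, random.choice((1, 3))     # grant: earn slice weight
    return min(state + (random.random() < 0.5), LEVELS - 1), 0  # resources free

state = LEVELS - 1
for _ in range(20000):
    a = random.choice(ACTIONS) if random.random() < eps else \
        max(ACTIONS, key=lambda act: Q[(state, act)])
    nxt, r = step(state, a)
    Q[(state, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in ACTIONS)
                              - Q[(state, a)])
    state = nxt

print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(LEVELS)})
```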
2020-12-17
Hu, Z., Niu, J., Ren, T., Li, H., Rui, Y., Qiu, Y., Bai, L..  2020.  A Resource Management Model for Real-time Edge System of Multiple Robots. 2020 7th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/2020 6th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). :222–227.

Industrial robots play an important role in today's industrial production. However, owing to the increase in robot hardware modules and the rapid expansion of software modules, the reliability of operating systems for industrial robots faces severe challenges, especially on light-weight edge computing platforms. Based on current technologies for resource security isolation, protection, and access control, a novel resource management model for a real-time edge system of multiple robot arms is proposed for light-weight edge devices. This model achieves the following functions: mission-critical resource classification, resource security access control, and multi-level secure data isolation and transmission. We also propose a fault location and isolation model for each light-weight edge device, which ensures the reliability of the entire system. Experimental results show that the robot operating system can meet the requirements of hierarchical management and resource access control. Compared with existing methods, the fault location and isolation model can effectively locate and handle faults generated by the system.
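As a toy of the mission-critical classification and access-control functions (labels and the dominance rule are invented for illustration):

```python
# Multi-level access control: a task may touch a resource only if its
# clearance level dominates the resource's criticality class.
LEVELS = {"mission-critical": 3, "safety": 2, "general": 1}
resources = {"arm_controller": "mission-critical",
             "camera": "safety",
             "logger": "general"}

def can_access(task_clearance, resource):
    return LEVELS[task_clearance] >= LEVELS[resources[resource]]

print(can_access("safety", "logger"))          # True
print(can_access("safety", "arm_controller"))  # False
```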

2020-12-14
Goudos, S. K., Diamantoulakis, P. D., Boursianis, A. D., Papanikolaou, V. K., Karagiannidis, G. K..  2020.  Joint User Association and Power Allocation Using Swarm Intelligence Algorithms in Non-Orthogonal Multiple Access Networks. 2020 9th International Conference on Modern Circuits and Systems Technologies (MOCAST). :1–4.
In this paper, we address the problem of joint user association and power allocation for non-orthogonal multiple access (NOMA) networks with multiple base stations (BSs). A procedure that groups users into orthogonal clusters and allocates different physical resource blocks (PRBs) is considered. The problem of interest is formulated as the maximization of the weighted sum rate. We apply two different swarm intelligence algorithms, namely the recently introduced Grey Wolf Optimizer (GWO) and the popular Particle Swarm Optimization (PSO), to solve this problem. Numerical results demonstrate that both algorithms address this problem satisfactorily.
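For reference, a compact Grey Wolf Optimizer loop is sketched below on a stand-in objective; the paper's weighted-sum-rate problem and its constraints are not reproduced:

```python
# Grey Wolf Optimizer: the pack is pulled toward the three best wolves
# (alpha, beta, delta) while the coefficient a decays from 2 to 0.
import numpy as np
rng = np.random.default_rng(1)

def f(X):                                   # placeholder objective (minimize)
    return np.sum((X - 0.7) ** 2, axis=1)

n, dim, iters = 20, 4, 200
X = rng.uniform(0, 1, (n, dim))
for t in range(iters):
    a = 2.0 * (1 - t / iters)
    leaders = X[np.argsort(f(X))[:3]]       # alpha, beta, delta
    moves = []
    for L in leaders:
        r1, r2 = rng.uniform(size=(2, n, dim))
        A, C = 2 * a * r1 - a, 2 * r2
        moves.append(L - A * np.abs(C * L - X))
    X = np.clip(np.mean(moves, axis=0), 0, 1)

print(X[np.argmin(f(X))])                   # near [0.7, 0.7, 0.7, 0.7]
```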
2020-12-02
Swain, P., Kamalia, U., Bhandarkar, R., Modi, T..  2019.  CoDRL: Intelligent Packet Routing in SDN Using Convolutional Deep Reinforcement Learning. 2019 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS). :1–6.

Software Defined Networking (SDN) provides opportunities for flexible and dynamic traffic engineering. However, in current SDN systems, routing strategies are based on traditional mechanisms that lack real-time adaptability and use resources less efficiently. To overcome these limitations, this paper uses deep learning to improve routing computation in SDN. It proposes the Convolutional Deep Reinforcement Learning (CoDRL) model, a deep reinforcement learning agent for routing optimization in SDN that minimizes the mean network delay and the packet loss rate. The CoDRL model consists of a Deep Deterministic Policy Gradient (DDPG) agent coupled with a convolution layer. The proposed model automatically adapts packet routing using network data obtained through the SDN controller, and provides routing configurations that reduce network congestion and minimize the mean network delay. The proposed deep agent thus exhibits good convergence toward routing configurations that improve network performance.
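A skeletal PyTorch module conveys the model's shape: a convolutional layer reads a traffic-matrix observation and a DDPG-style actor head emits continuous per-link routing weights. Layer sizes are illustrative, not the paper's architecture:

```python
# Convolutional actor for a DDPG-style routing agent (shapes illustrative).
import torch
import torch.nn as nn

class ConvActor(nn.Module):
    def __init__(self, n_nodes=8, n_links=20):
        super().__init__()
        self.conv = nn.Sequential(            # reads the n x n traffic matrix
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(16 * n_nodes * n_nodes, 64), nn.ReLU(),
            nn.Linear(64, n_links), nn.Sigmoid(),   # link weights in (0, 1)
        )

    def forward(self, traffic):              # traffic: (batch, 1, n, n)
        return self.head(self.conv(traffic))

actor = ConvActor()
print(actor(torch.rand(1, 1, 8, 8)).shape)   # torch.Size([1, 20])
```

A full CoDRL-style agent would pair this actor with a critic and train both from SDN controller statistics; only the observation-to-action path is sketched here.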

Ayar, T., Budzisz, Ł, Rathke, B..  2018.  A Transparent Reordering Robust TCP Proxy To Allow Per-Packet Load Balancing in Core Networks. 2018 9th International Conference on the Network of the Future (NOF). :1–8.

The idea of using multiple paths to transport TCP traffic is very attractive due to the potential benefits it offers for both redundancy and better utilization of available resources through load balancing. Fixed and mobile network providers frequently employ load balancers that use multiple paths at the per-flow or per-destination level, but very seldom at the per-packet level. Despite the benefits of packet-level load balancing mechanisms (e.g., low computational complexity and high bandwidth utilization), network providers cannot use them, mainly because TCP packet reordering harms TCP performance. Emerging network architectures also support multiple paths, but they face the same obstacle in balancing load across multiple paths. Indeed, packet-level load balancing research is paralyzed by TCP's vulnerability to reordering. A couple of TCP variants deal with the TCP packet reordering problem, but due to their lack of end-to-end transparency they have not been widely deployed and adopted. In this paper, we revisit TCP's packet reordering problem and present a transparent and light-weight algorithm, Out-of-Order Robustness for TCP with Transparent Acknowledgment (ACK) Intervention (ORTA), to deal with out-of-order deliveries. ORTA works as a transparent thin layer below TCP and hides the harmful side effects of packet-level load balancing. ORTA monitors all TCP flow packets and uses ACK traffic shaping, without any modifications to either the TCP sender or receiver sides. Since it is transparent to TCP end-points, it can be easily deployed on TCP sender end-hosts (EHs), gateway (GW) routers, or access points (APs). ORTA opens a door for network providers to use per-packet load balancing. The proposed ORTA algorithm is implemented and tested in NS-2. The results show that ORTA can prevent the TCP performance decrease incurred when per-packet load balancing is used.

Naik, D., Nikita, De, T..  2018.  Congestion aware traffic grooming in elastic optical and WiMAX network. 2018 Technologies for Smart-City Energy Security and Power (ICSESP). :1–9.

In recent years, the integration of Passive Optical Networks (PONs) and WiMAX (Worldwide Interoperability for Microwave Access) networks has attracted huge interest among researchers. The continuous demand for large bandwidth with a wider coverage area is the key driver of this technology. The integration has led to a high-speed and cost-efficient solution for Internet accessibility. This paper investigates issues related to traffic grooming, routing, and resource allocation in such hybrid networks. The Elastic Optical Network forms the backbone and is integrated with WiMAX. In this novel approach, traffic grooming is carried out using the light-trail technique to minimize the bandwidth blocking ratio and reduce network resource consumption. The simulation is performed on different network topologies, with traffic routed through three modes, namely a pure Wireless Network, a Wireless-Optical/Optical-Wireless Network, and a pure Optical Network, keeping network congestion in mind. The results confirm a reduction in bandwidth blocking ratio in all the given networks, coupled with minimum network resource utilization.

2020-12-01
Yang, R., Ouyang, X., Chen, Y., Townend, P., Xu, J..  2018.  Intelligent Resource Scheduling at Scale: A Machine Learning Perspective. 2018 IEEE Symposium on Service-Oriented System Engineering (SOSE). :132–141.

Resource scheduling in a computing system addresses the problem of packing tasks with multi-dimensional resource requirements and non-functional constraints. The heterogeneity of workload and server characteristics exhibited in Cloud-scale or Internet-scale systems adds further complexity and new challenges to the problem. Compared with existing solutions based on ad-hoc heuristics, Machine Learning (ML) has the potential to further improve the efficiency of resource management in large-scale systems. In this paper we describe and discuss how ML could be used to understand both workloads and environments automatically, and to help cope with scheduling-related challenges such as consolidating co-located workloads, handling resource requests, guaranteeing applications' QoS, and mitigating tailed stragglers. We introduce a generalized ML-based solution to large-scale resource scheduling and demonstrate its effectiveness through a case study dealing with performance-centric node classification and straggler mitigation. We believe that an ML-based method will help to achieve architectural optimization and efficiency improvements.
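As a small illustration of the case study's flavor, the sketch below clusters nodes by runtime metrics with scikit-learn and steers speculative task copies away from the slow class; features and values are synthetic:

```python
# Performance-centric node classification: cluster nodes on (cpu util,
# io wait) and avoid the straggler-prone cluster when re-scheduling.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
fast = rng.normal([0.3, 0.2], 0.05, (50, 2))
slow = rng.normal([0.8, 0.6], 0.05, (20, 2))
nodes = np.vstack([fast, slow])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(nodes)
slow_cluster = labels[-1]                 # last node is a known straggler
candidates = np.where(labels != slow_cluster)[0]
print(f"schedule speculative copies on {len(candidates)} fast nodes")
```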

Zhang, Y., Deng, L., Chen, M., Wang, P..  2018.  Joint Bidding and Geographical Load Balancing for Datacenters: Is Uncertainty a Blessing or a Curse? IEEE/ACM Transactions on Networking. 26:1049–1062.

We consider the scenario where a cloud service provider (CSP) operates multiple geo-distributed datacenters to provide Internet-scale service. Our objective is to minimize the total electricity and bandwidth cost by jointly optimizing electricity procurement from wholesale markets and geographical load balancing (GLB), i.e., dynamically routing workloads to locations with cheaper electricity. Under the ideal setting where exact values of market prices and workloads are given, this problem reduces to a simple linear programming and is easy to solve. However, under the realistic setting where only distributions of these variables are available, the problem unfolds into a non-convex infinite-dimensional one and is challenging to solve. One of our main contributions is to develop an algorithm that is proven to solve the challenging problem optimally, by exploring the full design space of strategic bidding. Trace-driven evaluations corroborate our theoretical results, demonstrate fast convergence of our algorithm, and show that it can reduce the cost for the CSP by up to 20% as compared with baseline alternatives. This paper highlights the intriguing role of uncertainty in workloads and market prices, measured by their variances. While uncertainty in workloads deteriorates the cost-saving performance of joint electricity procurement and GLB, counter-intuitively, uncertainty in market prices can be exploited to achieve a cost reduction even larger than the setting without price uncertainty.

2020-11-17
Hossain, M. S., Ramli, M. R., Lee, J. M., Kim, D.-S..  2019.  Fog Radio Access Networks in Internet of Battlefield Things (IoBT) and Load Balancing Technology. 2019 International Conference on Information and Communication Technology Convergence (ICTC). :750–754.

A recent trend in the military is to incorporate Internet of Things (IoT) knowledge into the battlefield to enhance impact, which is why the Internet of Battlefield Things (IoBT) is our concern. This paper discusses how the Fog Radio Access Network (F-RAN) can support local computing in the Industrial IoT and the IoBT. F-RAN can play a vital role because IoT devices are becoming popular and fifth-generation (5G) communication is an emerging issue, promising ultra-low latency, low energy consumption, bandwidth efficiency, and a wide coverage area. To overcome the disadvantages of cloud radio access networks (C-RAN), F-RAN can be introduced, in which a large number of F-RAN nodes take part in a joint distributed computing and content sharing scheme. The F-RAN in the IoBT is effective for enhancing computing ability through fog computing and edge computing at the network edge. Since the computing capability of fog equipment is weak, this paper illustrates some challenging issues and solutions to overcome the difficulties of fog computing in the IoBT and improve battlefield efficiency. The distributed-computing load balancing problem of the F-RAN is then studied. The simulation results indicate that the load balancing strategy performs better for the F-RAN architecture in the battlefield.

Singh, M., Butakov, S., Jaafar, F..  2018.  Analyzing Overhead from Security and Administrative Functions in Virtual Environment. 2018 International Conference on Platform Technology and Service (PlatCon). :1–6.
The paper provides an analysis of the performance of an administrative component that helps the hypervisor manage the resources of guest operating systems under fluctuating workload. The additional administrative component provides an extra layer of security to the guest operating systems and the system as a whole. In this study, an administrative component was implemented using the Xen-hypervisor-based para-virtualization technique and assigned some additional roles and responsibilities that reduce hypervisor workload. The study measured the resource utilization of the administrative component when excessive input/output load passes through the system. Performance was measured in terms of bandwidth and CPU utilization. Based on the analysis of the administrative component's performance, recommendations have been provided with the goal of improving system availability. These include detecting the performance saturation point that indicates the need to start load balancing procedures for the administrative component in the virtualized environment.
2020-09-04
Sutton, Sara, Bond, Benjamin, Tahiri, Sementa, Rrushi, Julian.  2019.  Countering Malware Via Decoy Processes with Improved Resource Utilization Consistency. 2019 First IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :110–119.
The concept of a decoy process is a new development of defensive deception beyond traditional honeypots. Decoy processes can be exceptionally effective in detecting malware, directly upon contact or by redirecting malware to decoy I/O. A key requirement is that they resemble their real counterparts very closely to withstand adversarial probes by threat actors. To be usable, decoy processes need to consume only a small fraction of the resources consumed by their real counterparts. Our contribution in this paper is twofold. We attack the resource utilization consistency of decoy processes provided by a neural network with a heatmap training mechanism, which we find to be insufficiently trained. We then devise machine learning over control flow graphs that improves the heatmap training mechanism. A neural network retrained by our work shows higher accuracy and defeats our attacks without a significant increase in its own resource utilization.
2020-08-24
Noor, Joseph, Ali-Eldin, Ahmed, Garcia, Luis, Rao, Chirag, Dasari, Venkat R., Ganesan, Deepak, Jalaian, Brian, Shenoy, Prashant, Srivastava, Mani.  2019.  The Case for Robust Adaptation: Autonomic Resource Management is a Vulnerability. MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM). :821–826.
Autonomic resource management for distributed edge computing systems provides an effective means of enabling dynamic placement and adaptation in the face of network changes, load dynamics, and failures. However, adaptation in and of itself offers a side channel by which malicious entities can extract valuable information. An attacker can take advantage of autonomic resource management techniques to fool a system into misallocating resources and crippling applications. Using a few scenarios, we outline how attacks can be launched with partial knowledge of the resource management substrate, with as little as a single compromised node. We argue that any system that provides adaptation must consider resource management as an attack surface. As such, we propose ADAPT2, a framework that incorporates concepts from Moving-Target Defense and state estimation techniques to ensure correctness and obfuscate resource management, thereby protecting valuable system and application information from leaking.
2020-08-13
Jiang, Wei, Anton, Simon Duque, Schotten, Hans Dieter.  2019.  Intelligence Slicing: A Unified Framework to Integrate Artificial Intelligence into 5G Networks. 2019 12th IFIP Wireless and Mobile Networking Conference (WMNC). :227–232.
The fifth-generation and beyond mobile networks should support extremely high and diversified requirements from a wide variety of emerging applications, and it is envisioned that more advanced radio transmission, resource allocation, and networking techniques will need to be developed. Fulfilling these tasks is challenging since network infrastructure is becoming increasingly complicated and heterogeneous. One promising solution is to leverage the great potential of Artificial Intelligence (AI) technology, which has been explored to provide solutions ranging from channel prediction to autonomous network management, as well as network security. As of today, however, the state of the art in integrating AI into wireless networks is mainly limited to using a dedicated AI algorithm to tackle a specific problem; a unified framework that can make full use of AI capability to solve a wide variety of network problems is still an open issue. Hence, this paper presents the concept of intelligence slicing, in which an AI module is instantiated and deployed on demand. Intelligence slices are applied to conduct different intelligent tasks, with the flexibility of accommodating arbitrary AI algorithms. Two example slices, neural-network-based channel prediction and anomaly-detection-based industrial network security, are illustrated to demonstrate this framework.