Biblio

Found 180 results

Filters: Keyword is quality of service
2023-08-25
Clark, Nicholas K..  2022.  Enhancing an Information-Centric Network of Things at the Internet Edge with Trust-Based Access Control. 2022 IEEE 8th World Forum on Internet of Things (WF-IoT). :1–6.
This work expands on our prior work on an architecture and supporting protocols to efficiently integrate constrained devices into an Information-Centric Network-based Internet of Things in a way that is both secure and scalable. In this work, we propose a scheme for addressing additional threats and integrating trust-based behavioral observations and attribute-based access control by leveraging the capabilities of less constrained coordinating nodes at the network edge close to IoT devices. These coordinating devices have better insight into the behavior of their constituent devices and access to a trusted overall security management cloud service. We leverage two modules, the security manager (SM) and trust manager (TM). The former provides data confidentiality, integrity, authentication, and authorization, while the latter analyzes the nodes' behavior using a trust model factoring in a set of service and network communication attributes. The trust model allows trust to be integrated into the SM's access control policies, allowing access to resources to be restricted to trusted nodes.
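The SM/TM interplay the abstract describes can be sketched as a policy check that combines attribute matching with a behavioral trust score. The attribute names, weights, and threshold below are illustrative assumptions, not the paper's actual model:

```python
def trust_score(observations, weights):
    """Weighted average of behavioral attributes, each in [0, 1]."""
    total = sum(weights.values())
    return sum(observations[k] * w for k, w in weights.items()) / total

def authorize(subject_attrs, observations, policy):
    """Grant access only if attributes match and trust clears the threshold."""
    attrs_ok = all(subject_attrs.get(k) == v for k, v in policy["attributes"].items())
    return attrs_ok and trust_score(observations, policy["weights"]) >= policy["min_trust"]

# Hypothetical policy enforced by the security manager (SM); the trust
# manager (TM) supplies the behavioral observations.
policy = {
    "attributes": {"role": "sensor", "zone": "edge-1"},
    "weights": {"service_success": 0.6, "link_reliability": 0.4},
    "min_trust": 0.7,
}
observations = {"service_success": 0.9, "link_reliability": 0.8}  # from the TM
print(authorize({"role": "sensor", "zone": "edge-1"}, observations, policy))  # True
```

A node whose observed behavior degrades sees its score fall below the threshold and loses access even though its attributes still match, which is the restriction-to-trusted-nodes behavior the abstract describes.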
2023-08-03
Liu, Zhichao, Jiang, Yi.  2022.  Cross-Layer Design for UAV-Based Streaming Media Transmission. IEEE Transactions on Circuits and Systems for Video Technology. 32:4710–4723.
Unmanned Aerial Vehicle (UAV)-based streaming media transmission may become unstable when the bit rate generated by the source load exceeds the channel capacity owing to the UAV location and speed change. The change of the location can affect the network connection, leading to reduced transmission rate; the change of the flying speed can increase the video payload due to more I-frames. To improve the transmission reliability, in this paper we design a Client-Server-Ground&User (C-S-G&U) framework, and propose an algorithm of splitting-merging stream (SMS) for multi-link concurrent transmission. We also establish multiple transport links and configure the routing rules for the cross-layer design. The multi-link transmission can achieve higher throughput and significantly smaller end-to-end delay than a single-link especially in a heavy load situation. The audio and video data are packaged into the payload by the Real-time Transport Protocol (RTP) before being transmitted over the User Datagram Protocol (UDP). The forward error correction (FEC) algorithm is implemented to promote the reliability of the UDP transmission, and an encryption algorithm to enhance security. In addition, we propose a Quality of Service (QoS) strategy so that the server and the user can control the UAV to adapt its transmission mode dynamically, according to the load, delay, and packet loss. Our design has been implemented on an engineering platform, whose efficacy has been verified through comprehensive experiments.
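The splitting-merging idea can be illustrated with a toy scheduler that spreads sequence-numbered packets over links roughly in proportion to assumed capacities and reassembles them by sequence number at the receiver; the proportional weighting is our assumption, not the paper's SMS algorithm:

```python
def split_stream(packets, capacities):
    """Assign sequence-numbered packets to links, proportionally to capacity."""
    total = sum(capacities.values())
    schedule = []  # each link appears in the schedule according to its share
    for link, cap in capacities.items():
        schedule += [link] * max(1, round(10 * cap / total))
    assignment = {link: [] for link in capacities}
    for seq, payload in enumerate(packets):
        assignment[schedule[seq % len(schedule)]].append((seq, payload))
    return assignment

def merge_stream(assignment):
    """Receiver side: reassemble the original order by sequence number."""
    return [payload for _, payload in
            sorted(p for pkts in assignment.values() for p in pkts)]

links = {"wifi": 30.0, "lte": 70.0}  # assumed link capacities in Mbit/s
packets = [f"frame-{i}" for i in range(8)]
assert merge_stream(split_stream(packets, links)) == packets
```

In a real system the per-link shares would be adapted dynamically from the load, delay, and loss feedback the paper's QoS strategy collects.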
2023-07-21
Elmoghrapi, Asma N., Bleblo, Ahmed, Younis, Younis A..  2022.  Fog Computing or Cloud Computing: a Study. 2022 International Conference on Engineering & MIS (ICEMIS). :1—6.
Cloud computing is a new term that refers to services provisioned over the Internet. It is considered one of the foremost prevailing paradigms in the Information Technology (IT) industry today. It offers powerful processing and storage resources as on-demand services at reduced cost and with improved efficiency, and it enables sharing physical computing resources among cloud computing tenants while offering cost-effective, on-demand scaling. Moreover, cloud computing plays an important role in data centers, which house virtually limitless computational and storage capacities that businesses and end-users can access and use via the Internet. In the context of cloud computing, fog computing refers to bringing services to the network's edge. Fog computing provides cloud-like functionality, such as data storage space, networking, and compute processing power, but with greater scope and proximity, since fog nodes are located close to end-user edge devices, leveraging local resources and reducing latency. The concepts of cloud computing and fog computing will be explored in this paper, and their features will be contrasted to determine the differences between them. Over 25 factors have been used to compare them.
2023-06-30
Song, Yuning, Ding, Liping, Liu, Xuehua, Du, Mo.  2022.  Differential Privacy Protection Algorithm Based on Zero Trust Architecture for Industrial Internet. 2022 IEEE 4th International Conference on Power, Intelligent Computing and Systems (ICPICS). :917–920.
The Zero Trust Architecture is an important part of the industrial Internet security protection standard. When analyzing industrial data for enterprise-level or industry-level applications, differential privacy (DP) is an important technology for protecting user privacy. However, the centralized and local DP variants widely used today are applicable only to networks with fixed trust relationships and cannot cope with the dynamic security boundaries of a Zero Trust Architecture. In this paper, we design a differential privacy scheme that can be applied to the Zero Trust Architecture. It has a consistent privacy representation and the same noise mechanism in both centralized and local DP scenarios, and it can balance the strength of privacy protection against the flexibility of privacy mechanisms. We verify the algorithm experimentally, showing that, using a maximum expectation (EM) estimation method, it obtains equal or even better utility than traditional methods at the same level of security.
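As a reference point, the standard Laplace mechanism that both centralized and local DP build on can be sketched as follows; the paper's zero-trust-specific noise mechanism and its EM-based estimation are not reproduced here, and the query values are invented:

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value perturbed by Laplace(0, sensitivity/epsilon) noise.

    The Laplace sample is drawn as the difference of two independent
    exponential variables with the same scale.
    """
    scale = sensitivity / epsilon
    noise = rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
    return true_value + noise

rng = random.Random(42)
true_count = 128  # e.g., number of industrial records matching a query
noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
print(noisy)  # the true count perturbed by a small amount of noise
```

Smaller `epsilon` means a larger noise scale and hence stronger privacy at the cost of utility, which is exactly the strength-versus-flexibility balance the abstract refers to.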
2023-03-17
Al-Kateb, Mohammed, Eltabakh, Mohamed Y., Al-Omari, Awny, Brown, Paul G..  2022.  Analytics at Scale: Evolution at Infrastructure and Algorithmic Levels. 2022 IEEE 38th International Conference on Data Engineering (ICDE). :3217–3220.
Data Analytics is at the core of almost all modern applications, ranging from science and finance to healthcare and web applications. The evolution of data analytics over the last decade has been dramatic - new methods, new tools and new platforms - with no slowdown in sight. This rapid evolution has pushed the boundaries of data analytics along several axes, including scalability, especially with the rise of distributed infrastructures and the Big Data era, and interoperability with diverse data management systems such as relational databases, Hadoop and Spark. However, many analytic application developers struggle with the challenge of production deployment. Recent experience suggests that it is difficult to deliver modern data analytics with the level of reliability, security and manageability that has been a feature of traditional SQL DBMSs. In this tutorial, we discuss the advances and innovations introduced at both the infrastructure and algorithmic levels, directed at making analytic workloads scale, while paying close attention to the kind of quality of service guarantees different technologies provide. We start with an overview of the classical centralized analytical techniques, describing the shift towards distributed analytics over non-SQL infrastructures. We contrast such approaches with systems that integrate analytic functionality inside, above or adjacent to SQL engines. We also explore how Cloud platforms' virtualization capabilities make it easier - and cheaper - for end users to apply these new analytic techniques to their data. Finally, we conclude with the learned lessons and a vision for the near future.
ISSN: 2375-026X
2023-03-03
Yuan, Wen.  2022.  Development of Key Technologies of Legal Case Management Information System Considering QoS Optimization. 2022 International Conference on Electronics and Renewable Systems (ICEARS). :693–696.
This paper develops the key technologies of a legal case management information system considering QoS optimization. The designed system gives the administrator all-round management of the system, including account management, database management, security settings management, core data entry management, and data statistics management. On this basis, a QoS optimization model is integrated as the key technology to improve the system's overall performance. Mirroring the layering of the data source, each data set is composed of its fields and contains the relevant information of the attribute fields of the various entity element categories. Finally, the designed system is implemented and analyzed on public data sets to demonstrate the results.
2023-02-17
Syambas, Nana Rachmana, Juhana, Tutun, Hendrawan, Mulyana, Eueung, Edward, Ian Joseph Matheus, Situmorang, Hamonangan, Mayasari, Ratna, Negara, Ridha Muldina, Yovita, Leanna Vidya, Wibowo, Tody Ariefianto et al..  2022.  Research Progress On Name Data Networking To Achieve A Superior National Product In Indonesia. 2022 8th International Conference on Wireless and Telematics (ICWT). :1–6.
Global traffic data are proliferating, including in Indonesia. The number of internet users in Indonesia reached 205 million in January 2022, meaning that 73.7% of Indonesia's population has used the internet. The median internet speed for mobile phones in Indonesia is 15.82 Mbps, while the median internet connection speed for Wi-Fi in Indonesia is 20.13 Mbps. As predicted by many, real-time traffic such as multimedia streaming dominates more than 79% of traffic on the internet network. This condition is a severe challenge for the internet network, which is required to improve the Quality of Experience (QoE) for user mobility, such as by reducing delay, data loss, and network costs. However, IP-based networks are no longer efficient at managing traffic. Named Data Networking (NDN) is a promising technology for building an agile communication model that reduces delays through a distributed and adaptive name-based data delivery approach. NDN replaces the 'where' paradigm with the concept of 'what': user requests are no longer directed to a specific IP address but to specific content. As a consequence, responses to content requests can be served not only by a specific server but also by the device closest to the requested data. The NDN router has a Content Store (CS) to cache data, significantly reducing delays and improving the internet network's Quality of Service (QoS). Motivated by this, in 2019 we began intensive research to achieve a national flagship product, an NDN router with different functions from ordinary IP routers. NDN routers have cache, forwarding, and routing functions that affect data security on name-based networks. Designing scalable NDN routers is a new challenge, as NDN requires fast hierarchical name-based lookups, per-packet data field state updates, and large-scale forwarding tables.
We have a research team that has conducted NDN research through simulation, emulation, and testbed approaches using virtual machines to get the best NDN router design before building a prototype. Research results from 2019 show that the performance of NDN-based networks is better than that of existing IP-based networks. The tests were carried out based on various scenarios on the Indonesian network topology using the NDN simulator, MATLAB, Mininet-NDN, and a testbed using virtual machines. Various network performance parameters, such as delay, throughput, packet loss, resource utilization, header overhead, packet transmission, round trip time, and cache hit ratio, showed the best results compared to IP-based networks. In addition, the open-source NDN testbed is free, and flexible topology creation has also been successfully carried out. This testbed includes all the functions needed to run an NDN network. The resource capacity on the server used for this testbed is sufficient to run a reasonably complex topology. However, bugs are still found on the testbed, and some features still need improvement. The next exploration of the NDN testbed will run more new strategy algorithms and add Artificial Intelligence (AI) to NDN functions. Using AI in cache and forwarding strategies can make the system more intelligent and precise in making decisions according to network conditions. It will be a step toward the development of NDN router products by the Bandung Institute of Technology (ITB), Indonesia.
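The Content Store's effect on delay and cache hit ratio comes from serving Interests for named content near the consumer. A toy CS with LRU eviction (an illustrative policy choice, not necessarily the testbed's) might look like:

```python
from collections import OrderedDict

class ContentStore:
    """Minimal NDN-style Content Store: name -> data packet, LRU eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def insert(self, name, data):
        self.store[name] = data
        self.store.move_to_end(name)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used entry

    def lookup(self, name):
        if name in self.store:
            self.hits += 1
            self.store.move_to_end(name)
            return self.store[name]
        self.misses += 1
        return None  # miss: the Interest would be forwarded upstream

cs = ContentStore(capacity=2)
cs.insert("/itb/video/seg1", b"...")
cs.insert("/itb/video/seg2", b"...")
cs.lookup("/itb/video/seg1")          # hit, refreshes seg1
cs.insert("/itb/video/seg3", b"...")  # evicts /itb/video/seg2
print(cs.lookup("/itb/video/seg2") is None)  # True: evicted, so a miss
```

The hit ratio (`hits / (hits + misses)`) computed this way is one of the metrics the testbed experiments report.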
Gopal, Kumar Parop, Sambath, M, Geetha, Angelina, Shekhar, Himanshu.  2022.  Implementing Fast Router In Convergent LTE/ Wifi Networks Using Software Defined Networks. 2022 IEEE 2nd Mysore Sub Section International Conference (MysuruCon). :1–5.
The phenomenon known as "Internet ossification" describes the process through which certain components of the Internet's older design have become immovable at the present time. This presents considerable challenges to the adoption of IPv6 and makes it hard to implement IP multicast services. For new applications such as data centers, cloud computing, and virtualized networks, improved network availability, improved internal and external domain routing, and seamless user connectivity throughout the network are some of the advantages of Internet growth. To meet these needs, Software Defined Networking (SDN) has been developed for the Future Internet. When compared to current networks, this new paradigm emphasizes control plane separation from network-forwarding components. To put it another way, this decoupling enables the installation of control plane software (such as an OpenFlow controller) on computer platforms that are substantially more powerful than traditional network equipment (such as switches/routers). This research describes Mininet's routing techniques for a virtualized software-defined network. There are two obstacles to overcome when attempting to integrate SDN in an LTE/WiFi network. The first problem is that external network load monitoring tools must be used to measure QoS settings. The second is that, because of the increased demand for real-time load balancing methods, service providers cannot adopt QoS-based routing. In order to overcome these issues, this research suggests a router configuration method. Experiments have proved that the network coefficient matrix routing arrangement works, so it may provide an answer to the above-mentioned concerns. The Java-based SDN controller outperforms traditional routing systems by nine times in average peak signal-to-noise ratio. The study's final finding suggests how the field's future can be forecast.
We must have a thorough understanding of this emerging paradigm to solve numerous difficulties, such as creating the Future Internet and dealing with its ossification problem. In order to address these issues, we first examine current technologies and a wide range of current and future SDN projects before delving into the most important issues in this field in depth.
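QoS-based routing of the kind discussed above ultimately reduces to a shortest-path computation over measured link costs. A minimal sketch using Dijkstra's algorithm over illustrative per-link delays (invented topology, not the paper's network-coefficient-matrix method) is:

```python
import heapq

def qos_route(graph, src, dst):
    """Dijkstra over per-link QoS costs; returns (total_cost, path)."""
    pq = [(0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

graph = {  # link delays in ms, invented for illustration
    "s1": {"s2": 5, "s3": 2},
    "s2": {"s4": 1},
    "s3": {"s2": 1, "s4": 7},
    "s4": {},
}
print(qos_route(graph, "s1", "s4"))  # (4, ['s1', 's3', 's2', 's4'])
```

In an SDN setting the controller recomputes such paths whenever its monitoring view of the link costs changes, then pushes the resulting flow rules to the switches.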
Rajan, Manju, Choksey, Mayank, Jose, John.  2022.  Runtime Detection of Time-Delay Security Attack in System-on-Chip. 2022 15th IEEE/ACM International Workshop on Network on Chip Architectures (NoCArc). :1–6.
Soft real-time applications, including multimedia, gaming, and smart appliances, rely on specific architectural characteristics to deliver output in a time-constrained fashion. Any violation of application deadlines can lower the Quality-of-Service (QoS). The data sets associated with these applications are distributed over cores that communicate via Network-on-Chip (NoC) in multi-core systems. Accordingly, the response time of such applications depends on the worst-case latency of request/reply packets. A malicious implant such as Hardware Trojan (HT) that initiates a delay-of-service attack can tamper with the system performance. We model an HT that mounts a time-delay attack in the system by violating the path selection strategy used by the adaptive NoC router. Our analysis shows that once activated, the proposed HT increases the packet latency by 17% and degrades the system performance (IPC) by 18% over the Baseline. Furthermore, we propose an HT detection framework that uses packet traffic analysis and path monitoring to localise the HT. Experiment results show that the proposed detection framework exhibits 4.8% less power consumption and 6.4% less area than the existing technique.
2023-02-03
Forti, Stefano.  2022.  Keynote: The fog is rising, in sustainable smart cities. 2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops). :469–471.
With their variety of application verticals, smart cities represent a killer scenario for Cloud-IoT computing, e.g. fog computing. Such applications require management capable of satisfying all their requirements through suitable service placements, and of balancing among QoS assurance, operational costs, deployment security and, last but not least, energy consumption and carbon emissions. This keynote discusses these aspects over a motivating use case and points to some open challenges.
2023-01-05
Garcia, Carla E., Camana, Mario R., Koo, Insoo.  2022.  DNN aided PSO based-scheme for a Secure Energy Efficiency Maximization in a cooperative NOMA system with a non-linear EH. 2022 Thirteenth International Conference on Ubiquitous and Future Networks (ICUFN). :155–160.
Physical layer security is an emerging security area to tackle wireless security communications issues and complement conventional encryption-based techniques. Thus, we propose a novel scheme based on swarm intelligence optimization technique and a deep neural network (DNN) for maximizing the secrecy energy efficiency (SEE) in a cooperative relaying underlay cognitive radio- and non-orthogonal multiple access (NOMA) system with a non-linear energy harvesting user which is exposed to multiple eavesdroppers. Satisfactorily, simulation results show that the proposed particle swarm optimization (PSO)-DNN framework achieves close performance to that of the optimal solutions, with a meaningful reduction in computation complexity.
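The PSO half of the proposed PSO-DNN framework can be sketched generically. The inertia and acceleration coefficients and the stand-in objective below are illustrative assumptions, not the paper's secrecy-energy-efficiency function:

```python
import random

def pso(objective, dim, n_particles=20, iters=100, seed=1):
    """Maximize `objective` with a basic particle swarm (w=0.7, c1=c2=1.5)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # best position seen per particle
    gbest = max(pbest, key=objective)[:]        # best position seen by the swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) > objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) > objective(gbest):
                    gbest = pbest[i][:]
    return gbest

# Stand-in objective: maximize -(x - 2)^2, whose optimum is at x = 2.
best = pso(lambda x: -(x[0] - 2) ** 2, dim=1)
print(best)  # converges close to [2.0]
```

In the paper's setting the objective would be the SEE under the system's power and secrecy constraints, with the DNN trained afterwards to approximate the PSO's output at much lower runtime cost.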
2022-12-06
Hkiri, Amal, Karmani, Mouna, Machhout, Mohsen.  2022.  The Routing Protocol for low power and lossy networks (RPL) under Attack: Simulation and Analysis. 2022 5th International Conference on Advanced Systems and Emergent Technologies (IC_ASET). :143-148.

Routing Protocol for Low-Power and Lossy Networks (RPL) is the underlying routing protocol of 6LoWPAN, a core communication standard for the Internet of Things. In terms of quality of service (QoS), device management, and energy efficiency, RPL beats competing wireless sensor and ad hoc routing protocols. However, several attacks could threaten the network due to unauthenticated or unencrypted control frames, centralized root controllers, and compromised or unauthenticated devices. Thus, in this paper, we investigate the effect of Topology and Resources attacks on RPL's efficiency. The Hello Flooding attack, the Increase Number attack, and the Decrease Rank attack are the forms of Resources and Topology attacks chosen for this work. Simulations were done to understand the impact of the three different attacks on RPL performance metrics, including End-to-End Delay (E2ED), throughput, Packet Delivery Ratio (PDR), and average power consumption. The findings show that the three attacks increase the E2ED, decrease the PDR and the network throughput, and degrade the network's performance, which further raises the power consumption of the network nodes.
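Metrics such as E2ED and PDR can be reproduced from simulation send/receive logs in a straightforward way; the sample timestamps below are invented for illustration:

```python
def metrics(sent, received):
    """sent: {seq: t_tx}, received: {seq: t_rx}, times in seconds.

    Returns (Packet Delivery Ratio, average End-to-End Delay).
    """
    delivered = [seq for seq in sent if seq in received]
    pdr = len(delivered) / len(sent)
    e2ed = sum(received[s] - sent[s] for s in delivered) / len(delivered)
    return pdr, e2ed

sent = {1: 0.00, 2: 0.10, 3: 0.20, 4: 0.30}
received = {1: 0.05, 2: 0.18, 4: 0.41}  # packet 3 lost to the attack
pdr, e2ed = metrics(sent, received)
print(pdr)  # 0.75
```

Under an attack such as Hello Flooding one would expect `pdr` to fall and `e2ed` to rise relative to the attack-free baseline, which is the pattern the paper's simulations report.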

Khodayer Al-Dulaimi, Omer Mohammed, Hassan Al-Dulaimi, Mohammed Khodayer, Khodayer Al-Dulaimi, Aymen Mohammed.  2022.  Analysis of Low Power Wireless Technologies used in the Internet of Things (IoT). 2022 2nd International Conference on Computing and Machine Intelligence (ICMI). :1-6.

The Internet of Things (IoT) is a novel paradigm that enables the development of a slew of services for future technology advancements. IoT applications, which seamlessly integrate the cyber and physical worlds, are essentially limitless. However, despite the great efforts of standardization bodies, coalitions, companies, researchers, and others, there is still a slew of issues to overcome in order to fully realize the IoT's promise. These concerns should be examined from a variety of perspectives, including enabling technologies, applications, business models, and social and environmental consequences. The focus of this paper is on open concerns and challenges from a technological standpoint. We study the technical differences among Sigfox, NB-IoT, LoRa, and 6LoWPAN, and discuss the advantages and disadvantages of each technology compared with the others, demonstrating that each technology has a position in the Internet of Things market. Each technology has different advantages and disadvantages depending on the quality of service, latency, and battery life it must deliver. We first analyze the IoT technologies. Sigfox technology offers a long-range, low-power, low-throughput communications network that is remarkably resistant to environmental interference, enabling information to be used efficiently in a wide variety of applications. We analyze how NB-IoT technology will benefit higher-value-added services markets for IoT devices that are willing to pay for exceptionally low latency and high service quality. The LoRa technology will be used for low-cost devices, as it has a very long range (high coverage).

Buzura, Sorin, Dadarlat, Vasile, Peculea, Adrian, Bertrand, Hugo, Chevalier, Raphaël.  2022.  Simulation Framework for 6LoWPAN Networks Using Mininet-WiFi. 2022 IEEE International Conference on Automation, Quality and Testing, Robotics (AQTR). :1-5.

The Internet of Things (IoT) continuously grows as applications require connectivity and sensor networks are being deployed in multiple application domains. With the increased demand for applicability, the need for testing and development frameworks also increases. This paper presents a novel simulation framework for testing IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) using the Mininet-WiFi simulator. The goal of the simulation framework is to allow easier automated testing of large-scale networks as well as easy configuration. This framework is a starting point for many development scenarios targeting traffic management, Quality of Service (QoS), or security network features. A basic smart city simulation is presented, which demonstrates the working principles of the framework.

2022-10-20
Jain, Arpit, Jat, Dharm Singh.  2020.  An Edge Computing Paradigm for Time-Sensitive Applications. 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4). :798—803.
Edge computing (EC) is a newly developing computing technology in which data are collected and analysed nearer to the edge, or the sources, of the data. Cloud-to-edge intelligent applications and analytics are part of IoT applications and technology. Edge computing technology aims to bring cloud computing features near to edge devices. For time-sensitive applications in the cloud computing architecture, a massive volume of data is generated at the edge but stored and analysed in the cloud. Cloud infrastructure is a composition of data centres and large-scale networks, which provides reliable services to users. Traditional cloud computing is inefficient due to response delay, network delay, and congestion, as simultaneous transactions go to the cloud, which is a centralised system. This paper presents a literature review on cloud-based edge computing technologies for delay-sensitive applications and suggests a conceptual model of edge computing architecture. Further, the paper also presents an implementation of the QoS-supporting edge computing paradigm in Python, as a basis for further research to improve latency and throughput for time-sensitive applications.
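The latency argument for edge computing can be made concrete with a back-of-the-envelope model; all numbers below are illustrative assumptions, not measurements from the paper:

```python
def response_time(prop_delay_ms, service_ms, congestion_factor):
    """Round-trip propagation plus processing, inflated by congestion."""
    return (2 * prop_delay_ms + service_ms) * congestion_factor

# Distant, congested cloud vs. a nearby, lightly loaded edge node.
cloud = response_time(prop_delay_ms=50, service_ms=10, congestion_factor=1.5)
edge = response_time(prop_delay_ms=2, service_ms=15, congestion_factor=1.0)
print(cloud, edge)  # 165.0 19.0
```

Even though the edge node is assumed to be slower at processing, the shorter round trip and the absence of congestion at the centralised ingress dominate the total response time, which is the core of the paper's case for delay-sensitive applications.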
Choudhary, Swapna, Dorle, Sanjay.  2021.  Empirical investigation of VANET-based security models from a statistical perspective. 2021 International Conference on Computational Intelligence and Computing Applications (ICCICA). :1—8.
Vehicular ad-hoc networks (VANETs) are one of the most stochastic networks in terms of node movement patterns. Due to the high speed of vehicles, nodes form temporary clusters and shift between clusters rapidly, which limits the usable computational complexity for quality of service (QoS) and security enhancements. Hence, VANETs are one of the most insecure networks and are prone to various attacks like Masquerading, Distributed Denial of Service (DDoS), etc. Various algorithms have been proposed to safeguard VANETs against these attacks, and they vary in their security and QoS performance. These algorithms include linear rule-checking models, software-defined network (SDN) rules, blockchain-based models, etc. Due to such a wide variety of available models, it becomes difficult for VANET designers to select the most suitable security framework for a network deployment. To reduce the complexity of this selection, this paper statistically investigates a wide variety of modern VANET-based security models. These models are compared in terms of security, computational complexity, application, cost of deployment, etc., which will assist network designers in selecting the most suitable models for their applications. Moreover, the paper also recommends various improvements that can be applied to the reviewed models to further optimize their performance.
2022-09-16
Massey, Keith, Moazen, Nadia, Halabi, Talal.  2021.  Optimizing the Allocation of Secure Fog Resources based on QoS Requirements. 2021 8th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/2021 7th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). :143—148.
Fog computing plays a critical role in the provisioning of computing tasks in the context of Internet of Things (IoT) services. However, the security of IoT services against breaches and attacks relies heavily on the security of fog resources, which must be properly implemented and managed. Increasing security investments and integrating the security aspect into the core processes and operations of fog computing, including resource management, will increase IoT service protection as well as the trustworthiness of fog service providers. However, this requires careful modeling of the security requirements of IoT services as well as theoretical and experimental evaluation of the tradeoff between security and performance in fog infrastructures. To this end, this paper explores a new model for fog resource allocation according to security and Quality of Service (QoS). The problem is modeled as a multi-objective linear optimization problem and solved using conventional, off-the-shelf optimizers by applying the preemptive method. Specifically, two objective functions were defined: one representing the satisfaction of the security design requirements of IoT services and another that models the communication delay among the different virtual machines belonging to the same service request, which might be deployed on different intermediary fog nodes. The simulation results show that the optimization is efficient and achieves the required level of scalability in fog computing. Moreover, a tradeoff between the two criteria needs to be considered during the resource allocation process.
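The preemptive (lexicographic) method the paper applies can be illustrated on a discrete toy version of the problem: first maximize security satisfaction, then, among the placements that tie on security, minimize inter-VM delay. The candidate placements below are invented:

```python
def preemptive_allocate(candidates):
    """candidates: list of (name, security_score, delay_ms) tuples.

    Preemptive method: optimize the first objective, then break ties
    on the second.
    """
    best_sec = max(sec for _, sec, _ in candidates)
    tied = [c for c in candidates if c[1] == best_sec]
    return min(tied, key=lambda c: c[2])

candidates = [
    ("fog-A", 0.9, 12.0),
    ("fog-B", 0.9, 8.0),
    ("fog-C", 0.7, 3.0),  # fastest, but less secure: never chosen preemptively
]
print(preemptive_allocate(candidates))  # ('fog-B', 0.9, 8.0)
```

Note how `fog-C` is rejected despite its lowest delay: under the preemptive ordering, security strictly dominates, which is precisely the tradeoff behavior the abstract highlights. The paper solves the continuous, linear-programming version of this with off-the-shelf optimizers.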
2022-08-26
Nougnanke, Kokouvi Benoit, Labit, Yann, Bruyere, Marc, Ferlin, Simone, Aïvodji, Ulrich.  2021.  Learning-based Incast Performance Inference in Software-Defined Data Centers. 2021 24th Conference on Innovation in Clouds, Internet and Networks and Workshops (ICIN). :118–125.
Incast traffic is a many-to-one communication pattern used in many applications, including distributed storage, web search with a partition/aggregation design pattern, and MapReduce, commonly in data centers. It is generally composed of short-lived flows that may be queued behind large flows' packets in congested switches, where performance degradation is observed. Smart buffering at the switch level is envisioned to mitigate this issue by automatically and dynamically adapting to traffic condition changes in the highly dynamic data center environment. But for this dynamic and smart buffer management to become effectively beneficial for all traffic, and especially for incast, the most critical kind, incast performance models that provide insights into how various factors affect it are needed. The literature lacks these types of models. The existing ones are analytical models that are either tightly coupled with a particular protocol version or specific to certain empirical data. Motivated by this observation, we propose machine-learning-based incast performance inference. With this prediction capability, a smart buffering scheme or other QoS optimization algorithms could anticipate and efficiently optimize system parameter adjustment to achieve optimal performance. Since applying machine learning to networks managed in a distributed fashion is hard, the prediction mechanism will be deployed on an SDN control plane. We can then take advantage of SDN's centralized global view, its telemetry capabilities, and its management flexibility.
Mamushiane, Lusani, Shozi, Themba.  2021.  A QoS-based Evaluation of SDN Controllers: ONOS and OpenDayLight. 2021 IST-Africa Conference (IST-Africa). :1–10.
SDN marks a paradigm shift towards an externalized and logically centralized controller, unlike legacy networks where the control and data planes are tightly coupled. The controller has a comprehensive view of the network, offering flexibility to enforce new traffic engineering policies and easing automation. In SDN, a high-performance controller is required for efficient traffic management. In this paper, we conduct a performance evaluation of two distributed SDN controllers, namely ONOS and OpenDayLight. Specifically, we use the Mininet emulation environment to emulate different topologies and the D-ITG traffic generator to evaluate the aforementioned controllers based on metrics such as delay, jitter, and packet loss. The experimental results show that ONOS provides significantly lower latency and jitter and lower packet loss than OpenDayLight in all topologies. We attribute the poor performance of OpenDayLight to its excessive CPU utilization and propose the use of Hyper-Threading to improve its performance. This work provides practitioners in the telecoms industry with guidelines towards making informed controller selection decisions.
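The delay, jitter, and loss figures a generator such as D-ITG reports can be computed from per-packet one-way delays roughly as follows; jitter here is taken as the mean absolute difference of consecutive delays, one common definition, and the sample values are invented:

```python
def summarize(delays_ms, sent_count):
    """delays_ms: one-way delay per received packet; sent_count: packets transmitted."""
    received = len(delays_ms)
    loss = 1.0 - received / sent_count
    mean_delay = sum(delays_ms) / received
    # Jitter as the mean absolute difference between consecutive delays.
    jitter = sum(abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])) / (received - 1)
    return mean_delay, jitter, loss

delays = [10.0, 12.0, 11.0, 15.0]  # ms, invented sample
mean_delay, jitter, loss = summarize(delays, sent_count=5)
print(mean_delay)  # 12.0
```

Running the same summary against traffic traversing each controller's installed paths is essentially what the per-topology comparison in the paper amounts to.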
LaMar, Suzanna, Gosselin, Jordan J, Caceres, Ivan, Kapple, Sarah, Jayasumana, Anura.  2021.  Congestion Aware Intent-Based Routing using Graph Neural Networks for Improved Quality of Experience in Heterogeneous Networks. MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM). :477—481.
Making use of spectrally diverse communications links to re-route traffic in response to dynamic environments to manage network bottlenecks has become essential in order to guarantee message delivery across heterogeneous networks. We propose an innovative, proactive Congestion Aware Intent-Based Routing (CONAIR) architecture that can select among available communication link resources based on quality of service (QoS) metrics to support continuous information exchange between networked participants. The CONAIR architecture utilizes a Network Controller (NC) and artificial intelligence (AI) to re-route traffic based on traffic priority, fundamental to increasing end user quality of experience (QoE) and mission effectiveness. The CONAIR architecture provides network behavior prediction, and can mitigate congestion prior to its occurrence unlike traditional static routing techniques, e.g. Open Shortest Path First (OSPF), which are prone to congestion due to infrequent routing table updates. Modeling and simulation (M&S) was performed on a multi-hop network in order to characterize the resiliency and scalability benefits of CONAIR over OSPF routing-based frameworks. Results demonstrate that for varying traffic profiles, packet loss and end-to-end latency is minimized.
2022-06-09
Iashvili, Giorgi, Iavich, Maksim, Bocu, Razvan, Odarchenko, Roman, Gnatyuk, Sergiy.  2021.  Intrusion Detection System for 5G with a Focus on DOS/DDOS Attacks. 2021 11th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS). 2:861–864.
The telecommunications industry is being transformed towards 5G technology in order to deal with emerging and existing use cases. 5G wireless networks require very high data rates, much wider coverage through dense base station deployment, greater capacity, much better Quality of Service (QoS), and very low latency [1–3]. Provisioning the services envisioned by 5G requires new deployment models, networking architectures, processing technologies and storage to be defined. These technologies raise new problems for the cybersecurity of 5G systems and the security of their functionality. Developers and researchers working in this field are making every effort to secure 5G systems. Researchers have shown that 5G systems face serious security challenges: vulnerabilities have been found that allow attackers to inject malicious code into the system and perform various kinds of illegitimate actions. MNmap, battery drain attacks and MiTM attacks can be successfully mounted against 5G. This paper analyzes the existing cybersecurity problems in 5G technology. Based on the analysis, we propose a novel Intrusion Detection System (IDS) based on machine-learning algorithms. Whereas related work proposes training the IDS on NSL-KDD, in this paper we propose training it on large datasets of DoS/DDoS attacks in addition to NSL-KDD. The research also offers a methodology for integrating the proposed intrusion detection system into a standard 5G architecture, along with the pseudocode of the designed system.
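To make the training pipeline concrete, the toy sketch below fits a minimal classifier on labeled DoS/DDoS flow records. The two features (packet rate, distinct source count) and the nearest-centroid rule are stand-ins chosen for brevity; they are not the features or the learning algorithm from the paper.

```python
# Toy sketch: training a nearest-centroid classifier on labeled
# DoS/DDoS flow records (features and algorithm are illustrative only).

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(flows):
    """flows: list of ([pkts_per_s, distinct_srcs], label) pairs."""
    by_label = {}
    for feats, label in flows:
        by_label.setdefault(label, []).append(feats)
    return {label: centroid(rows) for label, rows in by_label.items()}

def classify(model, feats):
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(feats, c))
    return min(model, key=lambda label: dist(model[label]))

training = [([10, 2], "benign"), ([20, 3], "benign"),
            ([5000, 800], "ddos"), ([8000, 1200], "ddos")]
model = train(training)
print(classify(model, [6000, 900]))   # "ddos"
```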
2022-05-12
Li, Fulin, Ji, Huifang, Zhou, Hongwei, Zhang, Chang.  2021.  A Dynamic and Secure Migration Method of Cryptographic Service Virtual Machine for Cloud Environment. 2021 7th International Conference on Computer and Communications (ICCC). :583–588.
In order to improve the continuity of cryptographic services and ensure the quality of services in the cloud environment, a dynamic migration framework for cryptographic service virtual machines based on a network shared storage system is proposed. Based on a study of the security threats in the migration process, a dynamic migration attack model is established, and the security requirements of dynamic migration are analyzed. We design and implement dynamic migration security management software, which includes a dynamic migration security enhancement module based on the Libvirt API, a role-based access control policy, and a transmission channel protection module. A cryptographic service virtual machine migration environment is built, and the designed management software and security mechanisms are verified and tested. The experimental results show that the proposed method effectively improves the security of cryptographic service virtual machine migration.
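The role-based access control element can be sketched in a few lines. The role and permission names below are invented for illustration; the abstract only states that such a policy exists, so this is a minimal shape, not the paper's implementation.

```python
# Minimal sketch of a role-based access control policy for migration:
# only roles granted the "migrate" permission may trigger a
# cryptographic-VM migration. Names are hypothetical.

ROLE_PERMISSIONS = {
    "migration_admin": {"migrate", "monitor"},
    "auditor":         {"monitor"},
}

def authorize(role, action):
    """Return True iff `role` holds the permission for `action`."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(authorize("migration_admin", "migrate"))  # True
print(authorize("auditor", "migrate"))          # False
```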
2022-05-10
Halabi, Talal.  2021.  Adaptive Security Risk Mitigation in Edge Computing: Randomized Defense Meets Prospect Theory. 2021 IEEE/ACM Symposium on Edge Computing (SEC). :432–437.

Edge computing supports the deployment of ubiquitous, smart services by providing computing and storage closer to terminal devices. However, ensuring the full security and privacy of computations performed at the edge is challenging due to resource limitation. This paper responds to this challenge and proposes an adaptive approach to defense randomization among the edge data centers via a stochastic game, whose solution corresponds to the optimal security deployment at the network's edge. Moreover, security risk is evaluated subjectively based on Prospect Theory to reflect realistic scenarios where the attacker and the edge system do not similarly perceive the status of the infrastructure. The results show that a non-deterministic defense policy yields better security compared to a static defense strategy.
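The core of a non-deterministic defense policy is sampling the security deployment from a mixed strategy rather than fixing it. In the sketch below, the edge sites and probabilities are placeholders; in the paper they would come from the solution of the stochastic game.

```python
# Sketch of randomized defense: each epoch, sample the security
# deployment site from a mixed strategy instead of using a static one.

import random

MIXED_STRATEGY = {"edge_dc_1": 0.5, "edge_dc_2": 0.3, "edge_dc_3": 0.2}

def sample_deployment(rng):
    sites = list(MIXED_STRATEGY)
    weights = [MIXED_STRATEGY[s] for s in sites]
    return rng.choices(sites, weights=weights, k=1)[0]

rng = random.Random(7)                        # seeded for reproducibility
schedule = [sample_deployment(rng) for _ in range(5)]
print(schedule)                               # a 5-epoch defense schedule
```

An attacker observing the system cannot predict where defenses will be deployed next, which is what yields the security gain over a static strategy.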

2022-05-06
Bhagavan, Srini, Gharibi, Mohamed, Rao, Praveen.  2021.  FedSmarteum: Secure Federated Matrix Factorization Using Smart Contracts for Multi-Cloud Supply Chain. 2021 IEEE International Conference on Big Data (Big Data). :4054–4063.
With increased awareness comes unprecedented expectations. We live in a digital, cloud era wherein the underlying information architectures are expected to be elastic, secure, resilient, and handle petabyte scaling. The expectation of epic proportions from the next generation of data frameworks is to not only do all of the above but also build it on a foundation of trust and explainability across multi-organization business networks. From cloud providers to automobile industries or even vaccine manufacturers, components are often sourced through a complex, not fully digitized thread of disjoint suppliers. Building machine learning and AI-based order fulfillment and predictive models, and remediating issues, is a challenge for multi-organization supply chain automation. We posit that Federated Learning in conjunction with blockchain and smart contracts are technologies primed to tackle data privacy and centralization challenges. In this paper, motivated by challenges in the industry, we propose a decentralized distributed system in conjunction with a recommendation system model (Matrix Factorization) that is trained using Federated Learning on an Ethereum blockchain network. We leverage smart contracts that allow decentralized serverless aggregation to update localized item vectors. Furthermore, we utilize Homomorphic Encryption (HE) to allow sharing the encrypted gradients over the network while maintaining their privacy. Based on our results, we argue that training a model over a serverless blockchain network using smart contracts will provide the same accuracy as a centralized model while maintaining privacy and reducing the communication overhead to a central server. Finally, we assert that such a system provides transparency, audit-readiness and deep insights into supply chain operations for enterprise cloud customers, resulting in cost savings and higher Quality of Service (QoS).
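The aggregation step at the heart of this scheme can be illustrated with a deliberately tiny example: each client computes a local gradient for a shared item vector from its private ratings, and an aggregator (played by the smart contract in the paper; homomorphic encryption is omitted here) averages the updates. Scalar embeddings and a single item keep the sketch short; none of the numeric choices come from the paper.

```python
# Tiny illustrative federated matrix factorization loop: clients submit
# local gradients for the shared item vector; an aggregator averages them.

def local_gradient(item_v, user_v, rating, lr=0.1):
    err = rating - user_v * item_v          # local prediction error
    return lr * err * user_v                # gradient step for the item

def aggregate(updates):
    return sum(updates) / len(updates)      # smart-contract-style averaging

item_v = 0.1
clients = [(1.0, 0.8), (0.9, 0.7)]          # (user_vector, rating) pairs
for _ in range(100):
    updates = [local_gradient(item_v, u, r) for u, r in clients]
    item_v += aggregate(updates)

print(round(item_v, 2))  # converges to the least-squares value, ~0.79
```

Ratings never leave the clients; only gradients are shared, which is the property HE then protects end to end in the full scheme.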
2022-04-19
Al-Eidi, Shorouq, Darwish, Omar, Chen, Yuanzhu, Husari, Ghaith.  2021.  SnapCatch: Automatic Detection of Covert Timing Channels Using Image Processing and Machine Learning. IEEE Access. 9:177–191.
With the rapid growth of data exfiltration carried out by cyber attacks, Covert Timing Channels (CTC) have become an imminent network security risk that continues to grow in both sophistication and utilization. These types of channels utilize inter-arrival times to steal sensitive data from the targeted networks. CTC detection relies increasingly on machine learning techniques, which utilize statistical-based metrics to separate malicious (covert) traffic flows from legitimate (overt) ones. However, given the efforts of cyber attacks to evade detection and the growing volume of CTCs, covert channel detection needs to improve in both performance and precision to detect and prevent CTCs and mitigate the reduction of the quality of service caused by the detection process. In this article, we present an innovative image-based solution for fully automated CTC detection and localization. Our approach is based on the observation that covert channels generate traffic that can be converted to colored images. Leveraging this observation, our solution is designed to automatically detect and locate the malicious part (i.e., set of packets) within a traffic flow. By locating the covert parts within traffic flows, our approach reduces the quality-of-service degradation caused by blocking entire traffic flows in which covert channels are detected. We first convert traffic flows into colored images, and then we extract image-based features for detecting covert traffic. We train a classifier using these features on a large data set of covert and overt traffic. This approach demonstrates remarkable performance, achieving a detection accuracy of 95.83% for cautious CTCs and a covert traffic accuracy of 97.83% for 8-bit covert messages, well beyond what popular statistical-based solutions can achieve.
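The central observation, that a flow's inter-arrival times can be rendered as pixels, can be illustrated in a few lines. The quantization parameters below are invented, and the actual pipeline goes on to extract image features and train a classifier:

```python
# Sketch: quantize inter-arrival times into pixel intensities, so a
# traffic flow becomes an image row an image-based detector can inspect.

def flow_to_pixels(interarrival_ms, max_ms=100):
    """Clamp each inter-arrival time to [0, max_ms] and scale to 0..255."""
    return [int(min(t, max_ms) * 255 / max_ms + 0.5) for t in interarrival_ms]

# A covert channel encoding bits as short/long gaps yields a
# high-contrast pixel pattern; overt traffic tends to look more uniform.
covert = [10, 90, 10, 90, 10]
print(flow_to_pixels(covert))  # [26, 230, 26, 230, 26]
```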