Bibliography
The intelligent connected vehicle market is expanding rapidly, and network security issues are growing with it. A Situational Awareness (SA) system can detect, identify, and respond to security risks from a global perspective. Given the discrete and weakly correlated nature of perception data, this paper applies a Fruit Fly Optimization Algorithm (FOA) with a dynamically adjusted step size to improve convergence speed, and uses it to optimize a Probabilistic Neural Network (PNN) based model for extracting security situation elements of the Internet of Vehicles (IoV), improving extraction accuracy. Comparative experiments verify that the algorithm converges quickly and offers high accuracy and good stability.
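A minimal sketch of the optimization pattern described above, under assumed toy data and illustrative parameters: a fruit fly search whose step size decays each generation tunes the spread (smoothing) parameter of a simple Gaussian PNN. The fitness function, decay rate, and features are assumptions for illustration, not the paper's actual setup.

```python
# Hypothetical sketch: FOA with decaying step size tuning a Gaussian PNN spread.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class "situation element" samples; real inputs would be IoV features.
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

def pnn_error(sigma, X, y):
    """Leave-one-out misclassification rate of a Gaussian PNN with spread sigma."""
    errors = 0
    for i in range(len(X)):
        d2 = np.sum((X - X[i]) ** 2, axis=1)
        k = np.exp(-d2 / (2 * sigma ** 2))
        k[i] = 0.0  # leave the test point out
        scores = [k[y == c].sum() for c in (0, 1)]
        errors += int(np.argmax(scores) != y[i])
    return errors / len(X)

def foa_dynamic_step(iters=40, step0=1.0, decay=0.95):
    """One-dimensional FOA over sigma; the search radius shrinks by `decay`."""
    best_pos, best_fit = rng.uniform(0.1, 3.0), 1.0
    step = step0
    for _ in range(iters):
        candidates = best_pos + step * rng.uniform(-1, 1, size=20)
        for pos in candidates:
            fit = pnn_error(abs(pos) + 1e-6, X, y)
            if fit < best_fit:
                best_fit, best_pos = fit, pos
        step *= decay  # dynamic step-size adjustment speeds late convergence
    return abs(best_pos), best_fit

sigma, err = foa_dynamic_step()
print(f"best spread={sigma:.3f}, LOO error={err:.2%}")
```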
The big data platform based on cloud computing realizes the storage, analysis, and processing of massive data, providing users with more efficient, accurate, and intelligent Internet services. Considering the characteristics of a college teaching-resource sharing platform built on the cloud computing model, a multi-faceted security defense strategy for the platform is studied from the perspectives of security management, security inspection, and technical means. The detection module optimizes a support vector machine, determines the detection period, extracts DDoS traffic features, and builds a source-ID blacklist; the defense module handles triggering of the defense mechanism, construction of the forwarder's forwarding queue, and reallocation of forwarder forwarding capacity.
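One detection cycle in the spirit of that module could look roughly like the sketch below, assuming synthetic traffic features and scikit-learn's SVC; the feature names, detection period, and source IDs are illustrative, not the paper's exact design.

```python
# Hedged sketch: fit an SVM on labeled per-source traffic features for one
# detection period, then add sources classified as DDoS to a blacklist.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Per-source features: [packet rate, SYN ratio, entropy of destination ports]
normal = np.column_stack([rng.normal(100, 20, 200),
                          rng.uniform(0.1, 0.3, 200),
                          rng.uniform(2.0, 4.0, 200)])
attack = np.column_stack([rng.normal(900, 100, 200),
                          rng.uniform(0.7, 1.0, 200),
                          rng.uniform(0.0, 1.0, 200)])
X_train = np.vstack([normal, attack])
y_train = np.array([0] * 200 + [1] * 200)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

# Classify sources observed in the current period and blacklist attackers.
live_sources = {"src-17": [950.0, 0.9, 0.4], "src-42": [110.0, 0.2, 3.1]}
blacklist = {sid for sid, feats in live_sources.items()
             if clf.predict([feats])[0] == 1}
print("blacklisted source IDs:", blacklist)
```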
Cloud computing provides customers with enormous compute power and storage capacity, allowing them to deploy computation- and data-intensive applications without investing in infrastructure. Many firms use cloud computing to relocate and maintain resources outside their enterprise, regardless of the cloud server's location. However, keeping data in the cloud raises a number of issues related to data loss, accountability, and security; such fears are a major barrier to user adoption of cloud services. Cloud computing offers large-scale storage for Internet users at a cost based on usage of the facilities provided. Protecting the privacy of a user's data is a challenge, since users cannot inspect the internal operations of service providers, so monitoring how a client's data is used in the cloud becomes necessary. In this research, we propose an effective cloud storage solution for accessing patient medical records across hospitals in different countries while maintaining data security and integrity. The proposed system implements multifactor authentication for user login to the cloud and homomorphic encryption for data storage with integrity verification. An experimental investigation was conducted to illustrate the efficacy of the proposed strategy.
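As a hedged illustration of the storage pattern described (not the paper's exact scheme), the sketch below uses the python-paillier (phe) library for additively homomorphic encryption plus a SHA-256 tag for integrity verification; the key size and the aggregated medical field are assumptions.

```python
# Illustrative sketch: homomorphic storage of a numeric medical field with a
# hash-based integrity check. Scheme choice and key size are assumptions.
import hashlib
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Encrypt a patient value (e.g., blood glucose) before it leaves the hospital.
reading = 112
ct = public_key.encrypt(reading)

# Integrity tag stored alongside the ciphertext in the cloud.
tag = hashlib.sha256(str(ct.ciphertext()).encode()).hexdigest()

# The cloud can aggregate without decrypting (additive homomorphism).
ct_sum = ct + public_key.encrypt(98)

# On retrieval, the client verifies integrity, then decrypts.
assert hashlib.sha256(str(ct.ciphertext()).encode()).hexdigest() == tag
print("decrypted sum:", private_key.decrypt(ct_sum))  # 210
```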
With billions of devices already connected at the network's edge, the Internet of Things (IoT) is shaping the future of pervasive computing. Nonetheless, IoT applications still cannot escape the need for the computing resources available at the fog layer. This becomes challenging since fog nodes are not necessarily secure or reliable, which widens the IoT threat surface even further. Moreover, the security risk appetite of heterogeneous IoT applications in different domains or deployment contexts should not be assessed identically. To respond to this challenge, this paper proposes a new approach to optimize the allocation of secure and reliable fog computing resources among IoT applications with varying security risk levels. First, the security and reliability levels of fog nodes are quantitatively evaluated, and a security risk assessment methodology is defined for IoT services. Then, an online, incentive-compatible mechanism is designed to allocate secure fog resources to high-risk IoT offloading requests. Compared to the offline Vickrey auction, the proposed mechanism is computationally efficient and yields an acceptable approximation of the social welfare of IoT devices, attenuating security risk within the edge network.
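The abstract does not specify the mechanism, but online incentive-compatible allocation is commonly approximated by greedy bid-density selection with critical-value payments. The sketch below illustrates that generic pattern under assumed request fields (bid, demand, risk); it is not the authors' exact mechanism.

```python
# Generic sketch: greedy bid-density allocation with a critical-value payment
# rule, weighting higher-risk requests toward secure fog capacity.
from dataclasses import dataclass

@dataclass
class Request:
    dev_id: str
    bid: float      # declared value of the offloading request
    demand: int     # fog CPU units requested
    risk: float     # assessed security risk in [0, 1]

def allocate(requests, capacity):
    density = lambda r: (r.bid * (1 + r.risk)) / r.demand
    ranked = sorted(requests, key=density, reverse=True)
    winners, used, threshold = [], 0, 0.0
    for r in ranked:
        if used + r.demand <= capacity:
            winners.append(r)
            used += r.demand
        else:
            threshold = density(r)  # first rejected bidder sets the price
            break
    # Critical-value payment: the minimum bid that would still have won.
    payments = {r.dev_id: threshold * r.demand / (1 + r.risk) for r in winners}
    return [r.dev_id for r in winners], payments

reqs = [Request("iot-a", 8.0, 2, 0.9), Request("iot-b", 5.0, 2, 0.1),
        Request("iot-c", 6.0, 3, 0.5)]
print(allocate(reqs, capacity=5))
```

Truthful bidding is a dominant strategy here because a winner's payment depends on the first loser's density, not on its own bid.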
The Internet-of-Things (IoT) paradigm at large continues to be compromised, hindering the privacy, dependability, security, and safety of our nations. While operational security communities (i.e., CERTs, SOCs, CSIRTs, etc.) continue to develop capabilities for monitoring cyberspace, IoT-centric tools remain in their infancy. We address this gap with an actionable Cyber Threat Intelligence (CTI) feed covering Internet-scale infected IoT devices. The feed analyzes, in near real time, 3.6 TB of daily streaming passive measurements (≈1M packets per second) by applying a custom-developed learning methodology to distinguish between compromised IoT devices and non-IoT nodes, and to label device type and vendor. The feed is augmented with third-party information to provide context. We report on the operation, analysis, and shortcomings of the feed during an initial deployment period, and make the CTI feed available for ingestion through a public, authenticated API and a front-end platform.
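The feed's actual learning methodology is custom and not reproduced here; the sketch below only illustrates the general idea of supervised classification of per-host features derived from passive measurements, with entirely hypothetical features and synthetic data.

```python
# Hypothetical sketch: classify observed hosts as compromised-IoT vs non-IoT
# from flow-level features. Features and distributions are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Per-host features: [distinct dst ports/hr, mean packet size, SYN scan rate]
iot     = np.column_stack([rng.poisson(400, 300),
                           rng.normal(80, 10, 300),
                           rng.uniform(5, 50, 300)])
non_iot = np.column_stack([rng.poisson(20, 300),
                           rng.normal(600, 150, 300),
                           rng.uniform(0, 2, 300)])
X = np.vstack([iot, non_iot]).astype(float)
y = np.array([1] * 300 + [0] * 300)  # 1 = likely compromised IoT scanner

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a newly observed host from the passive measurement stream.
host = [[350.0, 75.0, 30.0]]
print("P(compromised IoT) =", clf.predict_proba(host)[0][1])
```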
With the rapid development of Internet scale and technology, network security receives ever more attention. A common approach in the field is to use the NSS (Network Security Situation) to describe the security state of a target network. Because NSSA (Network Security Situation Awareness) has no unified, optimal solution for architecture or algorithm design, many ideas continue to be proposed and there remains broad research space. In this paper, an improved LSTM (long short-term memory) neural network is used to analyze and process NSS data and to exploit the attack logic contained in the sequence data. An NSSF (Network Security Situation Forecast) framework is built on NAWL-ILSTM; after processing the raw security situation data, the framework directly outputs the quantified NSS change curve. A modular design and a dual discrimination engine reduce implementation complexity and improve stability. Simulation results show that the model not only converges faster but also achieves a much lower prediction error.
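The sketch below shows the basic sequence-to-one forecasting setup such a framework builds on, using a plain PyTorch LSTM on a synthetic situation curve; it is not the paper's NAWL-ILSTM, and the window size and hyperparameters are assumptions.

```python
# Minimal sketch: an LSTM maps a window of past NSS values to the next value.
import torch
import torch.nn as nn

class NSSForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # predict the next NSS value

# Toy training loop on a synthetic situation curve.
torch.manual_seed(0)
t = torch.linspace(0, 20, 500)
series = torch.sin(t) + 0.1 * torch.randn(500)
window = 24
X = torch.stack([series[i:i + window] for i in range(500 - window)]).unsqueeze(-1)
y = series[window:].unsqueeze(-1)

model = NSSForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4f}")
```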
The papers in this special section explore recent advancements in parallel graph processing. In the sphere of modern data science and data-driven applications, graph algorithms have achieved a pivotal place in advancing the state of scientific discovery and knowledge. Nearly three centuries of ideas have made graph theory and its applications a mature area in computational sciences. Yet, today we find ourselves at a crossroads between theory and application. Spurred by the digital revolution, data from a diverse range of high-throughput channels and devices, from across Internet-scale applications, are starting to mark a new era in data-driven computing and discovery. Building robust graph models and implementing scalable graph application frameworks in the context of this new era are proving to be significant challenges. Concomitant with the digital revolution, we have also experienced an explosion in computing architectures, with a broad range of multicores, manycores, heterogeneous platforms, and hardware accelerators (CPUs, GPUs) being actively developed and deployed within servers and multi-node clusters. Recent advances have started to show that in more than one way, these two fields, graph theory and parallel architectures, are capable of benefiting from, and in fact spurring new research directions in, one another. This special section introduces some of the avenues of cutting-edge research happening at the intersection of graph algorithm design and implementation on advanced parallel architectures.
With the development of the Internet of Things, numerous IoT devices have been brought into our daily lives. Bluetooth Low Energy (BLE), thanks to its low energy consumption and generic service stack, has become one of the most popular wireless communication technologies for IoT. However, because of its short communication range and exclusive connection pattern, a BLE-equipped device can only be used by a single user near the device. To fully explore the benefits of BLE and make BLE-equipped devices truly accessible over the Internet as IoT devices, this paper proposes a cloud-based software framework that enables multiple users to interact with various BLE IoT devices over the Internet. The framework comprises an agent program, a suite of services hosted in the cloud, and a set of RESTful APIs exposed to Internet users. With this framework, access to BLE devices can be extended from local range to Internet scale without any software or hardware changes to the BLE devices themselves, and, more importantly, shared usage of remote BLE devices over the Internet becomes available.
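A hypothetical sketch of what such a RESTful surface might look like (endpoint paths and payloads are assumptions, not the paper's actual API): the cloud service keeps a registry populated by local agents, and any Internet user reads BLE characteristic values through it, so several users can share one device.

```python
# Illustrative Flask sketch: agents push GATT reads to the cloud; users read
# characteristics over HTTP without being in BLE radio range.
from flask import Flask, jsonify, request

app = Flask(__name__)

# agent_id -> {characteristic: last value pushed by the local agent program}
registry: dict[str, dict[str, str]] = {}

@app.post("/agents/<agent_id>/report")
def report(agent_id):
    # The local agent bridges BLE (GATT) reads and pushes them to the cloud.
    registry.setdefault(agent_id, {}).update(request.get_json())
    return jsonify(status="ok")

@app.get("/devices/<agent_id>/<characteristic>")
def read_characteristic(agent_id, characteristic):
    # Any Internet user can now read the BLE value remotely.
    value = registry.get(agent_id, {}).get(characteristic)
    if value is None:
        return jsonify(error="unknown device or characteristic"), 404
    return jsonify(characteristic=characteristic, value=value)

if __name__ == "__main__":
    app.run(port=8080)
```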
An increasing number of Internet-scale applications, such as video streaming, incur huge amounts of wide-area traffic. Such traffic over the unreliable Internet, without bandwidth guarantees, suffers unpredictable network performance. This is unappealing to application providers. Fortunately, Internet giants like Google and Microsoft are increasingly deploying private wide-area networks (WANs) to connect their global datacenters. Such high-speed private WANs are reliable and can provide predictable network performance. In this paper, we propose a new type of service, inter-datacenter network as a service (iDaaS), where traditional application providers can reserve bandwidth from those Internet giants to guarantee their wide-area traffic. Specifically, we design a bandwidth trading market among multiple iDaaS providers and application providers, and concentrate on the essential bandwidth pricing problem. The challenge is that the bandwidth price of each iDaaS provider is influenced not only by other iDaaS providers but also by the application providers. To address this, we characterize the interaction between iDaaS providers and application providers using a Stackelberg game model and analyze the existence and uniqueness of the equilibrium. We further present an efficient bandwidth pricing algorithm that blends the advantages of a geometric Nash bargaining solution with the demand segmentation method. For comparison, we present two bandwidth reservation algorithms, in which each iDaaS provider's bandwidth is reserved in a weighted fair manner and a max-min fair manner, respectively. Finally, we conduct comprehensive trace-driven experiments. The evaluation results show that our proposed algorithms not only ensure the revenue of iDaaS providers, but also provide bandwidth guarantees for application providers at a lower unit bandwidth price.
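The abstract does not give the model's utility functions, but the leader-follower structure can be sketched as follows; the concave valuation and the single-provider assignment are simplifying assumptions for illustration, not the paper's exact formulation.

```latex
% Hedged sketch of the Stackelberg structure: iDaaS providers (leaders) set
% prices, application providers (followers) respond with bandwidth demands.
\begin{align*}
  \text{follower } j:\quad & d_j^{*}(\mathbf{p}) = \arg\max_{d_j \ge 0}
      \; v_j(d_j) - p_{i(j)}\, d_j,\\
  \text{leader } i:\quad & p_i^{*} = \arg\max_{p_i \ge 0}
      \; p_i \sum_{j:\, i(j)=i} d_j^{*}(\mathbf{p})
      \quad \text{s.t. } \sum_{j:\, i(j)=i} d_j^{*}(\mathbf{p}) \le C_i,
\end{align*}
where $v_j(\cdot)$ is application provider $j$'s concave valuation of reserved
bandwidth, $i(j)$ the iDaaS provider chosen by $j$, and $C_i$ provider $i$'s
WAN capacity. A Stackelberg equilibrium is a price vector $\mathbf{p}^{*}$ at
which no leader can raise its revenue given the followers' best responses.
```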
Resource scheduling in a computing system addresses the problem of packing tasks with multi-dimensional resource requirements and non-functional constraints. The heterogeneity of workload and server characteristics exhibited in Cloud-scale or Internet-scale systems adds further complexity and new challenges to the problem. Compared with existing solutions based on ad-hoc heuristics, Machine Learning (ML) has the potential to further improve the efficiency of resource management in large-scale systems. In this paper we describe and discuss how ML can be used to understand both workloads and environments automatically, and to help cope with scheduling-related challenges such as consolidating co-located workloads, handling resource requests, guaranteeing applications' QoS, and mitigating tail stragglers. We introduce a generalized ML-based solution to large-scale resource scheduling and demonstrate its effectiveness through a case study on performance-centric node classification and straggler mitigation. We believe an ML-based method will help to achieve architectural optimization and efficiency improvement.
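The case study's flavor might be sketched as below, with illustrative telemetry features and thresholds (assumptions, not the paper's actual ones): cluster nodes by performance telemetry, label the slower cluster, and speculatively re-launch long-running tasks that land on it.

```python
# Hedged sketch: performance-centric node classification feeding a simple
# speculative-execution rule for straggler mitigation.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Node telemetry: [CPU steal %, memory bandwidth GB/s, median task latency ms]
nodes = np.vstack([rng.normal([1, 20, 100], [0.5, 2, 10], (40, 3)),   # healthy
                   rng.normal([12, 8, 420], [3, 2, 60], (10, 3))])    # degraded
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(nodes)

# Call the cluster with the higher mean latency "slow".
slow = int(np.argmax([nodes[labels == k][:, 2].mean() for k in (0, 1)]))

def should_speculate(node_id: int, task_runtime_ms: float, p95_ms: float) -> bool:
    """Re-launch a copy elsewhere if a task on a slow node exceeds the fleet p95."""
    return labels[node_id] == slow and task_runtime_ms > p95_ms

print("slow nodes:", np.where(labels == slow)[0])
```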