Biblio
This work-in-progress paper proposes a design methodology that addresses the complexity and heterogeneity of cyber-physical systems (CPS) while simultaneously proving resilient control logic and security properties. The methodology takes a formal-methods-based approach, translating the complex control logic and security properties of a water flow CPS into timed automata. Timed automata are a formal model that precisely describes system behaviors and properties using mathematics-based logic languages. Because these formal models have precise semantics, verification techniques such as theorem proving and model checking can be used to mathematically prove the specifications and security properties of the CPS. This work-in-progress paper aims to highlight the need for formalizing plant models by creating a timed automaton of the physical portions of the water flow CPS. Extending the timed automaton with control logic, network security, and privacy control processes is investigated. The final model will be formally verified to prove the design specifications of the water flow CPS, ensuring efficacy and security.
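As a rough illustration of what such a formal plant model looks like, the sketch below encodes a two-location timed automaton for a hypothetical tank that alternates between filling and draining; the location names, clock bounds, and Python encoding are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch of a timed automaton for a hypothetical water-tank process.
# Locations, a single clock, and guarded transitions mirror the structure a
# model checker would verify; FILLING/DRAINING and the bounds are illustrative.
from dataclasses import dataclass

@dataclass
class TimedAutomaton:
    location: str = "FILLING"
    clock: float = 0.0       # single clock x, reset on transitions
    fill_time: float = 5.0   # invariant bound for FILLING (x <= 5)
    drain_time: float = 3.0  # invariant bound for DRAINING (x <= 3)

    def delay(self, d: float) -> None:
        """Let time elapse, respecting the location invariant."""
        bound = self.fill_time if self.location == "FILLING" else self.drain_time
        assert self.clock + d <= bound, "invariant violated"
        self.clock += d

    def switch(self) -> None:
        """Take the guarded edge FILLING <-> DRAINING, resetting the clock."""
        guard = self.fill_time if self.location == "FILLING" else self.drain_time
        assert self.clock >= guard, "guard x >= bound not satisfied"
        self.location = "DRAINING" if self.location == "FILLING" else "FILLING"
        self.clock = 0.0

ta = TimedAutomaton()
ta.delay(5.0)   # time elapses in FILLING
ta.switch()     # valve toggles; clock resets
```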
Recent approaches have proven the effectiveness of local outlier factor-based outlier detection when applied over traffic flow probability distributions. However, these approaches used distance metrics based on the Bhattacharyya coefficient to calculate probability distribution similarity, and the limited expressiveness of the Bhattacharyya coefficient restricted their accuracy. The crucial deficiency of the Bhattacharyya distance metric is its inability to compare distributions with non-overlapping sample spaces over the domain of natural numbers. Traffic flow intensity varies greatly, which results in numerous non-overlapping sample spaces, rendering metrics based on the Bhattacharyya coefficient inappropriate. In this work, we address this issue by exploring alternative distance metrics and showing their applicability on a massive real-life traffic flow data set from 26 vital intersections in The Hague. The results on these data, collected from 272 sensors over more than two years, show various advantages of the Earth Mover's distance in both effectiveness and efficiency.
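The contrast is easy to reproduce on toy data: for two histograms with disjoint supports (say, a low-intensity and a high-intensity traffic period), the Bhattacharyya coefficient collapses to zero while the Earth Mover's distance stays finite and informative. The sketch below assumes synthetic histograms rather than the Hague data.

```python
# Toy illustration: two traffic-flow histograms with non-overlapping supports.
# The Bhattacharyya coefficient is zero, so the distance -ln(BC) diverges,
# while the Earth Mover's distance still yields a finite dissimilarity.
import numpy as np
from scipy.stats import wasserstein_distance

support = np.arange(10)
p = np.array([0.2, 0.5, 0.3, 0, 0, 0, 0, 0, 0, 0])  # low-intensity period
q = np.array([0, 0, 0, 0, 0, 0, 0, 0.3, 0.5, 0.2])  # high-intensity period

bc = np.sum(np.sqrt(p * q))                         # -> 0.0
emd = wasserstein_distance(support, support, p, q)  # finite (6.8 here)

print(f"BC  = {bc:.3f} (Bhattacharyya distance -ln(BC) is infinite)")
print(f"EMD = {emd:.3f}")
```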
The requirements of much larger file sizes, different storage formats, and immersive viewing conditions pose significant challenges to compressing VR content. At the same time, the great potential of deep learning to advance progress on the video compression problem has driven a significant research effort. Because of the high bandwidth requirements of VR, there has also been significant interest in space-variant, foveated compression protocols. We integrate these techniques to create an end-to-end deep learning video compression framework. A feature of our new compression model is that it dispenses with the need for expensive search-based motion prediction computations by using displaced frame differences. We also implement foveation in our learning-based approach by introducing a Foveation Generator Unit (FGU) that generates foveation masks which direct the allocation of bits, significantly increasing compression efficiency while retaining an impression of little to no additional visual loss given an appropriate viewing geometry. Our experimental results reveal that our new compression model, which we call the Foveated MOtionless VIdeo Codec (Foveated MOVI-Codec), efficiently compresses videos without computing motion while outperforming foveated versions of both H.264 and H.265 on the widely used UVG dataset and on the HEVC Standard Class B Test Sequences.
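To make the foveation idea concrete, the sketch below builds a simple Gaussian eccentricity mask around an assumed gaze point; it illustrates the kind of bit-allocation map an FGU produces, not the paper's learned unit.

```python
# Illustrative foveation mask (not the paper's learned FGU): bit-allocation
# weight decays with eccentricity from an assumed gaze point, so central
# regions receive more bits than the periphery.
import numpy as np

def foveation_mask(h, w, gaze_xy, sigma_px=120.0):
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma_px ** 2))  # 1.0 at fovea, -> 0 peripherally

mask = foveation_mask(1080, 1920, gaze_xy=(960, 540))
# e.g., modulate a per-pixel rate map: rate_map *= mask
```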
Cloud security has become a serious challenge due to the growing number of attacks. An Intrusion Detection System (IDS) requires an efficient security model to improve security in the cloud. This paper proposes a game theory-based model, named Game Theory Cloud Security Deep Neural Network (GT-CSDNN), for cloud security. The proposed model works with a Deep Neural Network (DNN) to classify attack and normal data. The performance of the proposed model is evaluated on the CICIDS-2018 dataset. The dataset is normalized, and optimal points for normal and attack data are computed using the Improved Whale Algorithm (IWA). The simulation results show that the proposed model exhibits improved performance compared with existing techniques in terms of accuracy, precision, F-score, area under the curve, False Positive Rate (FPR), and detection rate.
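A minimal sketch of the DNN classification stage is given below, assuming min-max-normalized flow features and synthetic labels; the game-theoretic component and the IWA optimization are not reproduced.

```python
# Minimal sketch of the DNN classifier stage (attack vs. normal) on
# normalized CICIDS-2018-style features; the feature count is an assumption.
import torch
import torch.nn as nn

n_features = 78  # assumed flow-feature count

model = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),  # logit: >0 -> attack, <=0 -> normal
)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(256, n_features)            # stand-in for normalized flows
y = torch.randint(0, 2, (256, 1)).float()  # stand-in labels
for _ in range(10):                        # training-loop sketch
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```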
With billions of devices already connected to the network's edge, the Internet of Things (IoT) is shaping the future of pervasive computing. Nonetheless, IoT applications still cannot escape the need for the computing resources available at the fog layer. This becomes challenging because fog nodes are not necessarily secure or reliable, which widens the IoT threat surface even further. Moreover, the security risk appetite of heterogeneous IoT applications in different domains or deployment contexts should not be assessed identically. To respond to this challenge, this paper proposes a new approach to optimize the allocation of secure and reliable fog computing resources among IoT applications with varying security risk levels. First, the security and reliability levels of fog nodes are quantitatively evaluated, and a security risk assessment methodology is defined for IoT services. Then, an online, incentive-compatible mechanism is designed to allocate secure fog resources to high-risk IoT offloading requests. Compared to the offline Vickrey auction, the proposed mechanism is computationally efficient and yields an acceptable approximation of the social welfare of IoT devices, attenuating security risk within the edge network.
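One standard way to obtain an online, truthful mechanism is a posted-price rule, where payments do not depend on the reported bid; the sketch below uses that idea with an assumed risk-dependent pricing function, as a stand-in for the paper's mechanism.

```python
# Hedged sketch of an online, incentive-compatible allocation: a posted-price
# rule (bidders pay the price, not their bid) is truthful and runs in O(1) per
# request, unlike an offline Vickrey auction over all bids. Prices rising with
# request risk are an assumption for illustration.

def allocate(requests, capacity, base_price=1.0):
    """requests: iterable of (bid, risk, demand) arriving online."""
    allocations = []
    for bid, risk, demand in requests:
        if demand > capacity:
            continue
        price = base_price * (1 + risk) * demand  # posted price, bid-independent
        if bid >= price:
            allocations.append((bid, risk, demand, price))  # pays price, not bid
            capacity -= demand
    return allocations

# Example: high-risk requests face higher prices for scarce secure fog nodes.
reqs = [(5.0, 0.2, 2), (1.0, 0.9, 1), (6.0, 0.8, 2)]
print(allocate(reqs, capacity=4))
```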
With a record 400Gbps, 100-piece FPGA implementation, we investigate the performance of potential FEC schemes for OIF-800GZR. By comparing power dissipation and the correction threshold at a BER of 10^-15, we propose the simplified OFEC for the 800G-ZR FEC.
With the objective of eliminating the input current sensor in a totem-pole boost power factor corrector (PFC) for a low-cost design, a novel discretized sampling-based robust control scheme is proposed in this work. The proposed control methodology proves to be beneficial due to its ease of implementation and its ability to support high-frequency operation, while eliminating one sensor and thus enhancing reliability and cost-effectiveness. In addition, a detailed closed-loop stability analysis is carried out for the controller in the discrete domain to ascertain brisk dynamic operation when subjected to sudden load fluctuations. To establish the robustness of the proposed control scheme, a detailed sensitivity analysis of the closed-loop performance metrics with respect to undesired changes and inherent uncertainty in system parameters is presented in this article. A comparison with state-of-the-art (SOA) methods is provided, and conclusive results in terms of better dynamic performance are established. To verify and elaborate on the specifics of the proposed scheme, a detailed simulation study is conducted, and the results show a 25% reduction in response time compared to SOA approaches. A 500-W boost PFC prototype is developed and tested with the proposed control scheme to evaluate and benchmark the system's steady-state and dynamic performance. A total harmonic distortion of 1.68% is obtained at the rated load with a resultant power factor of 0.998 (lag), demonstrating the effectiveness and superiority of the proposed control scheme.
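For intuition on the discretized control structure, the sketch below implements a generic discrete-time PI step with anti-windup, of the kind executed once per sampling period in such a controller; the gains, sampling rate, and the omission of the paper's sensorless current-estimation law are assumptions.

```python
# Minimal sketch of a discretized control step for the PFC outer voltage loop:
# a discrete PI regulator executed at the sampling frequency. Gains and limits
# here are illustrative, not the paper's design values.

class DiscretePI:
    def __init__(self, kp, ki, ts, u_min=0.0, u_max=1.0):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.integ = 0.0
        self.u_min, self.u_max = u_min, u_max  # duty-cycle saturation

    def step(self, ref, meas):
        err = ref - meas
        self.integ += self.ki * self.ts * err
        self.integ = min(max(self.integ, self.u_min), self.u_max)  # anti-windup
        u = self.kp * err + self.integ
        return min(max(u, self.u_min), self.u_max)

pi = DiscretePI(kp=0.05, ki=20.0, ts=1 / 70_000)  # e.g., 70 kHz sampling
duty = pi.step(ref=400.0, meas=392.5)             # DC-bus voltage regulation
```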
In this paper, we quantify elements representing video features and propose deep-learning-based bitrate prediction for compressed video. In particular, because the Constant Rate Factor (CRF) setting alone cannot predict the bitrate of the compressed video, we use deep learning. We identify video features that relate to the bitrate produced when the video is compressed, and we confirm through various deep learning techniques that this relationship can be learned.
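A minimal version of the idea, under the assumption that per-video features such as spatial and temporal complexity are available, is a small regression model mapping features (plus the CRF setting) to the resulting bitrate:

```python
# Hedged sketch: regressing post-encoding bitrate from per-video features at a
# given CRF. The features, synthetic targets, and model are assumptions; the
# paper's specific network is not reproduced.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 3))  # columns: [spatial_info, temporal_info, crf/51]
y = 2000 * X[:, 0] + 4000 * X[:, 1] - 1500 * X[:, 2] + 500  # synthetic kbps

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict(X[:3]))  # predicted bitrates (kbps)
```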
In this paper, an overall introduction to fingerprint encryption algorithms is given, and a fingerprint encryption algorithm with error correction is then designed by adding an error-correction mechanism. This new fingerprint encryption algorithm generates a random key encoded as polynomial coefficients using a binary sequence generator, encrypts the fingerprint, and uses Lagrange interpolation to restore the polynomial during authentication. Because a cyclic redundancy check code is used to identify the correct key, the accuracy of the algorithm can be ensured. Experimental results indicate that the fuzzy vault algorithm with error correction realizes template protection well and meets the requirements of biometric information security protection. They also indicate that the system's security can be enhanced by changing the key's length.
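The polynomial round trip at the heart of a fuzzy vault can be shown in a few lines: the key becomes the coefficients of a polynomial over a finite field, genuine fingerprint points yield evaluations, and Lagrange interpolation over any degree+1 genuine points recovers the key, with a CRC to recognize the correct candidate. The field size, point values, and CRC choice below are toy assumptions.

```python
# Toy fuzzy-vault round trip (illustrative parameters, not the paper's).
import zlib

P = 2**13 - 1  # small prime field for the demo

def poly_eval(coeffs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def lagrange_coeff0(points):
    """Recover the constant term f(0) by Lagrange interpolation over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

key = [1234, 56, 789]                       # key split into coefficients
genuine = [(x, poly_eval(key, x)) for x in (3, 7, 11)]
assert lagrange_coeff0(genuine) == key[0]   # constant term recovered
crc = zlib.crc32(bytes(str(key), "ascii"))  # stored to recognize the true key
```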
The market for intelligent networked vehicles is expanding rapidly, and network security issues follow. A Situational Awareness (SA) system can detect, identify, and respond to security risks from a global perspective. In view of the discrete and weakly correlated characteristics of perceptual data, this paper uses the Fly Optimization Algorithm (FOA), with dynamic adjustment of the optimization step size, to improve convergence speed, and it optimizes a Probabilistic Neural Network (PNN)-based model for extracting security situation elements of the Internet of Vehicles (IoV) to improve the accuracy of element extraction. Experimental comparisons verify that the algorithm has fast convergence, high precision, and good stability.
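The dynamic step-size idea can be sketched generically: the swarm's random search radius shrinks over iterations, trading early exploration for late-stage precision. The objective, schedule, and parameters below are illustrative, not the paper's IoV model.

```python
# Sketch of an FOA-style search with a dynamically shrinking step size.
import numpy as np

def foa_min(f, dim=2, flies=30, iters=100, r0=1.0,
            rng=np.random.default_rng(0)):
    best = rng.standard_normal(dim)
    for t in range(iters):
        radius = r0 * (1 - t / iters)  # step size shrinks over iterations
        cand = best + radius * rng.standard_normal((flies, dim))
        vals = np.apply_along_axis(f, 1, cand)
        if vals.min() < f(best):
            best = cand[vals.argmin()]
    return best

print(foa_min(lambda x: np.sum(x**2)))  # converges near the origin
```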
Smart metering is a mechanism through which fine-grained electricity usage data of consumers is collected periodically in a smart grid. However, a growing concern is that the leakage of consumers' consumption data may reveal their daily life patterns, as state-of-the-art metering strategies lack adequate security and privacy measures. Many proposed solutions have demonstrated how aggregated metering information can be transformed to obscure individual consumption patterns without affecting the intended semantics of smart grid operations. In this paper, we expose a complete break of such an existing privacy-preserving metering scheme [10] by determining individual consumption patterns efficiently, thus compromising its privacy guarantees. The underlying methodology of this scheme allows us to: i) retrieve the lower bounds of the privacy parameters, and ii) establish a relationship between the privacy-preserved output readings and the initial input readings. Subsequently, we present a rigorous experimental validation of our attacking methodology using a real-life dataset to highlight its efficacy. In summary, the present paper asks: Is the Whole lesser than its Parts? for such privacy-aware metering algorithms, which attempt to reduce the information leakage of the aggregated consumption patterns of individuals.
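As a toy illustration of the attack's flavor (the masking model below is an assumed stand-in, not the scheme of [10]): if outputs are y = x + n with bounded additive noise, idle periods with near-zero consumption expose the noise range, after which every output confines its input to a narrow interval.

```python
# Toy leak: idle periods (x ~ 0) reveal the noise bound eps, and then each
# privacy-preserved output y pins its input to [y - eps_hat, y].
import numpy as np

rng = np.random.default_rng(1)
eps = 3
idle = rng.integers(0, eps + 1, size=500)  # outputs during zero-consumption periods
eps_hat = int(idle.max() - idle.min())     # recovered lower bound on the noise range

x_true = 7
y = x_true + int(rng.integers(0, eps + 1))  # one active-period output
print(f"x in [{y - eps_hat}, {y}]")         # individual pattern leaks
```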
The security of energy data collection is the basis for achieving a reliable and secure intelligent smart grid. The newest approach to securing data-collection communication is Zero Trust communication, whose strategy is to trust no device, whether outside or inside the network. Only when a device authenticates successfully and its software and hardware are verified as secure does the intelligent energy power system allow the device to enroll in the network; otherwise, the device is denied. Even while a device is communicating with the energy system, Zero Trust continues to monitor its security and vulnerabilities; if the device has any security or vulnerability issue, Zero Trust removes it from the network. This helps keep the energy power system secure and lays a foundation for the security analysis of intelligent power units.
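A minimal sketch of such an admission-and-eviction policy is shown below; the device fields and checks are illustrative of the zero-trust rule (authenticate, verify posture, re-evaluate continuously), not a specific product's API.

```python
# Minimal zero-trust admission/eviction sketch: no device is trusted by
# default, and posture is re-checked while the device stays connected.
from dataclasses import dataclass

@dataclass
class Device:
    device_id: str
    authenticated: bool
    firmware_verified: bool
    known_vulnerabilities: int

def admit(dev: Device) -> bool:
    """Enroll only if identity and software/hardware posture both pass."""
    return (dev.authenticated and dev.firmware_verified
            and dev.known_vulnerabilities == 0)

def reevaluate(dev: Device, enrolled: set) -> None:
    """Continuous verification: evict on any new security issue."""
    if not admit(dev):
        enrolled.discard(dev.device_id)

enrolled = set()
meter = Device("meter-17", authenticated=True, firmware_verified=True,
               known_vulnerabilities=0)
if admit(meter):
    enrolled.add(meter.device_id)
meter.known_vulnerabilities = 1  # vulnerability found while connected
reevaluate(meter, enrolled)      # meter-17 is denied/evicted
```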
Self-adaptive systems commonly operate in heterogeneous contexts and need to consider multiple quality attributes. Human stakeholders often express their quality preferences by defining utility functions, which are used by self-adaptive systems to automatically generate adaptation plans. However, the adaptation space of realistic systems is large, and it is unclear how utility functions impact the generated adaptation behavior, as well as structural, behavioral, and quality constraints. Moreover, human stakeholders are often not aware of the underlying tradeoffs between quality attributes. To address this issue, we present an approach that uses machine learning techniques (dimensionality reduction, clustering, and decision tree learning) to explain the reasoning behind automated planning. Our approach focuses on the tradeoffs between quality attributes and on how the choice of weights in utility functions results in different plans being generated. We help humans understand quality attribute tradeoffs, identify key decisions in adaptation behavior, and explore how differences in utility functions result in different adaptation alternatives. We present two systems to demonstrate the approach's applicability and consider its potential application to 24 exemplar self-adaptive systems. Moreover, we describe our assessment of the tradeoff between the information reduction and the amount of explained variance retained by the results obtained with our approach.
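A condensed version of this pipeline, on synthetic data, clusters the plans generated under different utility weights and fits a shallow decision tree from weights to clusters, exposing which weight thresholds flip the planner's behavior; the feature names and data are assumptions.

```python
# Hedged sketch of the explanation pipeline: cluster generated plans, then
# explain cluster membership in terms of utility-function weights.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
weights = rng.random((300, 2))    # e.g., [w_performance, w_energy]
plan_features = np.column_stack([ # synthetic plan quality vectors
    weights[:, 0] + 0.1 * rng.standard_normal(300),
    1 - weights[:, 0] + 0.1 * rng.standard_normal(300),
])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(plan_features)
tree = DecisionTreeClassifier(max_depth=2).fit(weights, clusters)
print(export_text(tree, feature_names=["w_performance", "w_energy"]))
```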
In software design, guaranteeing the correctness of run-time system behavior while achieving an acceptable balance among multiple quality attributes remains a challenging problem. Moreover, providing guarantees about the satisfaction of those requirements when systems are subject to uncertain environments is even more challenging. While recent developments in architectural analysis techniques can assist architects in exploring the satisfaction of quantitative guarantees across the design space, existing approaches are still limited because they do not explicitly link design decisions to the satisfaction of quality requirements. Furthermore, the amount of information they yield can be overwhelming to a human designer, making it difficult to see the forest for the trees. In this paper we present ExTrA (Explaining Tradeoffs of software Architecture design spaces), an approach to analyzing architectural design spaces that addresses these limitations and provides a basis for explaining design tradeoffs. Our approach employs machine learning techniques such as Principal Component Analysis (PCA) and Decision Tree Learning (DTL) to enable architects to understand how design decisions contribute to the satisfaction of extra-functional properties across the design space. Our results show the feasibility of the approach in two case studies and provide evidence that combining complementary techniques like PCA and DTL is a viable way to facilitate comprehension of tradeoffs in poorly understood design spaces.
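The PCA step can be illustrated on synthetic quality vectors: the explained-variance ratios show how much tradeoff structure two components retain, and the loadings reveal which quality attributes pull against each other. The attribute names and data below are assumptions, not ExTrA's case studies.

```python
# Hedged sketch of PCA over per-design quality vectors (synthetic data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_designs = 200
cost = rng.random(n_designs)
latency = 1 - cost + 0.05 * rng.standard_normal(n_designs)  # anti-correlated
reliability = rng.random(n_designs)
Q = np.column_stack([cost, latency, reliability])

pca = PCA(n_components=2).fit(Q)
print(pca.explained_variance_ratio_)  # variance kept by two components
print(pca.components_)                # loadings: cost vs. latency dominate PC1
```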
The vehicle-to-grid (V2G) network has a clear advantage in terms of economic benefits, and it has attracted the interest of power grid operators and electric vehicle (EV) consumers. However, many current V2G techniques use bilinear pairing to execute the authentication scheme, which incurs significant computational costs. Furthermore, in existing V2G techniques the system master key is issued independently by third parties, so it is vulnerable to leakage if a third party is compromised by an attacker. To overcome these issues, this paper presents an efficient and secure anonymous authentication scheme for V2G networks that uses a lightweight authentication system for electric vehicles and smart grids. In the proposed technique, the keys are generated by the trusted authority after the successful registration of EVs with the trusted authority and the dispatching center. The suggested scheme not only enhances the verification performance of V2G networks but also protects against insider attackers.
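For flavor, a generic pairing-free challenge-response using a key pre-issued by the trusted authority looks as follows; this HMAC sketch is an illustrative stand-in, not the paper's actual protocol.

```python
# Illustrative lightweight (pairing-free) challenge-response between an EV and
# the grid, using a key issued by the trusted authority at registration.
import hmac, hashlib, os

ta_issued_key = os.urandom(32)  # delivered to EV and dispatching center at registration

def ev_respond(key: bytes, challenge: bytes, pseudonym: bytes) -> bytes:
    # EV authenticates under a pseudonym; no bilinear pairing required.
    return hmac.new(key, pseudonym + challenge, hashlib.sha256).digest()

challenge = os.urandom(16)       # sent by the grid/aggregator
pseudonym = b"EV-PSEUDONYM-001"  # anonymized identity (hypothetical format)
tag = ev_respond(ta_issued_key, challenge, pseudonym)
assert hmac.compare_digest(tag, ev_respond(ta_issued_key, challenge, pseudonym))
```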
Cyber Physical Systems (CPS), which contain devices to aid with physical infrastructure activities, comprise sensors, actuators, control units, and physical objects. A CPS sends messages to physical devices to carry out computational operations and mainly deals with the interplay between cyber and physical environments. The real-time network data acquired in physical space is stored there, and the connection becomes sophisticated. CPS incorporates cyber and physical technologies at all phases and is a crucial component of Internet of Things (IoT) technology, bringing together the physical and digital worlds. Nevertheless, CPS has several difficulties that are likely to jeopardise our lives directly, and its numerous levels are all tied to immediate threats, necessitating a close look at CPS security. Because IoT devices are included in a wide variety of applications, the security and privacy of users are key considerations, and the rising level of cyber threats has left current security and privacy procedures insufficient; as a result, hackers can treat every person on the Internet as a product. Deep Learning (DL) methods are therefore utilised to provide accurate outputs from large, complex databases, where the generated outputs can be used to forecast and discover vulnerabilities in IoT systems that handle medical data. Cyber-physical systems need anomaly detection to be secure, but the rising sophistication of CPSs and increasingly complex attacks mean that typical anomaly detection approaches are unsuitable: they are simply overwhelmed by the volume of data and the need for domain-specific knowledge. Attacks such as DoS and DDoS, which degrade network performance, must be prevented. In this paper, an effective Network Cluster Reliability Model with enhanced security and privacy levels for IoT data for Anomaly Detection (NSRM-AD) using a deep learning model is proposed. The security levels of the proposed model are contrasted with those of existing models, and the results show that the proposed model performs accurately.
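A common deep learning core for such anomaly detection, shown below as a hedged sketch rather than the full NSRM-AD model, is an autoencoder trained on normal traffic that flags records with high reconstruction error:

```python
# Hedged autoencoder anomaly-detection sketch (synthetic stand-in data): flows
# reconstructed poorly relative to a threshold, e.g., those produced during
# DoS/DDoS activity, are flagged as anomalies.
import torch
import torch.nn as nn

n_features = 20
ae = nn.Sequential(
    nn.Linear(n_features, 8), nn.ReLU(),
    nn.Linear(8, n_features),
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

normal = torch.rand(512, n_features)  # stand-in for benign IoT traffic
for _ in range(200):                  # train to reconstruct normal data
    opt.zero_grad()
    loss = ((ae(normal) - normal) ** 2).mean()
    loss.backward()
    opt.step()

threshold = ((ae(normal) - normal) ** 2).mean(dim=1).max().item()
suspect = torch.rand(1, n_features) * 3  # out-of-distribution record
err = ((ae(suspect) - suspect) ** 2).mean().item()
print("anomaly" if err > threshold else "normal")
```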