Bibliography
Unikernels are smaller than existing operating systems and can be started and shut down much more quickly and safely, resulting in greater flexibility and security. Because a unikernel omits large modules such as the file system from its library to keep its size small, file IO is commonly handled by offloading. However, offloaded file IO transfers each file IO command to a proxy on the file server and then copies the proxy's file IO result back, which can undermine the rapid processing that is one of a unikernel's main advantages. In this paper, we propose a method that offloads file IO yet performs the IO with a direct copy from the file server to the unikernel.
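A minimal sketch of the distinction the abstract draws, assuming a hypothetical memory region visible to both the file server and the unikernel; all names here are illustrative, not the paper's implementation.

```python
# Illustrative sketch (not the paper's code): contrast the two offload paths.
# Assumes a hypothetical shared memory region visible to both the file server
# and the unikernel; in a real system this might be a shared page or RDMA.

def offload_via_proxy(server_read, proxy_buf: bytearray, uni_buf: bytearray, n: int) -> None:
    """Conventional offload: server -> proxy buffer -> unikernel (two copies)."""
    data = server_read(n)
    proxy_buf[:n] = data          # copy 1: into the proxy's buffer
    uni_buf[:n] = proxy_buf[:n]   # copy 2: proxy forwards the result

def offload_direct_copy(server_read, uni_buf: bytearray, n: int) -> None:
    """Proposed path: the server writes the result straight into unikernel memory."""
    uni_buf[:n] = server_read(n)  # single copy, no intermediate proxy buffer

if __name__ == "__main__":
    disk = bytes(range(256))
    read = lambda n: disk[:n]
    proxy, uni = bytearray(256), bytearray(256)
    offload_via_proxy(read, proxy, uni, 16)
    offload_direct_copy(read, uni, 16)
    assert uni[:16] == disk[:16]
```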
Based on the information security requirements of industrial control systems and the technical features of existing defense measures, a dynamic security control strategy based on trusted computing is proposed. Following this strategy, an information security solution for industrial cyber-physical systems is presented, together with a linkage verification mechanism between the industrial control system's internal firewall, the intrusion detection system, and the trusted connection server. Information exchange among multiple network security devices is realized, which improves the comprehensive defense capability of the industrial control system. Because the trusted platform module relies on hardware-based encryption, storage, and access control, it overcomes the common weakness of traditional software-only patching techniques, which are easily breached, and significantly improves the security of the industrial control system. The paper concludes by analyzing the implementation of the proposed industrial control information security system based on trusted computing.
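A minimal sketch of the linkage idea, with assumed message formats and function names (none of this is the paper's protocol): a trusted connection server cross-checks a TPM-style integrity measurement against a stored reference before instructing the firewall and IDS to admit an endpoint.

```python
# Illustrative sketch (assumed formats, not the paper's protocol): a trusted
# connection server checks an integrity measurement against a golden value
# and only then triggers the firewall/IDS linkage for the endpoint.
import hashlib

REFERENCE_MEASUREMENTS = {  # golden hashes, e.g. recorded at provisioning time
    "plc-01": hashlib.sha256(b"firmware-v1.4").hexdigest(),
}

def tpm_quote(device_id: str, firmware: bytes) -> tuple[str, str]:
    """Stand-in for a hardware TPM quote: a hash of the running firmware."""
    return device_id, hashlib.sha256(firmware).hexdigest()

def trusted_connect(quote: tuple[str, str], firewall_allow, ids_watch) -> bool:
    device_id, measurement = quote
    if REFERENCE_MEASUREMENTS.get(device_id) != measurement:
        return False              # integrity check failed: keep blocked
    firewall_allow(device_id)     # linkage: firewall opens for the device
    ids_watch(device_id)          # linkage: IDS starts monitoring it
    return True

if __name__ == "__main__":
    ok = trusted_connect(tpm_quote("plc-01", b"firmware-v1.4"),
                         firewall_allow=lambda d: print("firewall: allow", d),
                         ids_watch=lambda d: print("ids: watch", d))
    print("admitted:", ok)
    print("admitted:", trusted_connect(tpm_quote("plc-01", b"tampered"),
                                       lambda d: None, lambda d: None))
```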
There has been growing concern about privacy and security risks in the adoption of electronic-government (e-government) services. Although e-government has produced positive results, several contested challenges still hamper the success of e-government services. While many of these challenges have received considerable attention, there is still little firm research on others, such as privacy and security risks and the effects of infrastructure in both urban and rural settings. Other concerns that have received little consideration include, for instance, how e-government use is a function of perceived usefulness, ease of use, perceived benefit, cultural dimensions, and demographic constructs in South Africa. Guided by the technology acceptance model, privacy calculus, Hofstede's cultural theory, and institutional logic theory, the current research examines the determinants of e-government use in developing countries. Anchored in these theories and this background, the study proposes three recommendations as a potential value chain derived from e-government services, in support of citizens (end users), government, and the community of stakeholders.
e-Government is needed to realize clean, effective, transparent, and accountable governance as well as high-quality, reliable public services. Its implementation is currently constrained by the absence of derivative regulations, one of which is the regulation for e-Government security. To address this need, this study provides input on a performance management and governance system for e-Government security, so that the control design for e-Government security can be met. The result is an e-Government security governance system built from 28 core models of COBIT 2019, selected using critical success factors (CSF) and risk. Performance management for this governance system consists of capability and maturity levels, extending the evaluation process in the e-Government evaluation guidelines issued by the Ministry of PAN & RB. The design was evaluated by determining the current capability and maturity levels in Badan XYZ; the evaluation shows that the design is feasible to implement and is needed.
There is a growing movement to retrofit ageing, large-scale infrastructures, such as water networks, with wireless sensors and actuators. Next-generation cyber-physical systems (CPSs) tightly integrate sensing, control, communication, computation, and physical processes, and the failure of any one of these components can cause the entire CPS to fail; addressing these interdependencies is a system design challenge. Wireless communication is unreliable and prone to cyber-attacks: an attack on a CPS's wireless communication prevents up-to-date information from reaching the controller from the physical process, and a controller without up-to-date information cannot meet the system's stability and performance guarantees. We focus on design approaches that make CPSs secure, and we evaluate their resilience to jamming attacks aimed at disrupting the system's wireless communication. We consider a classic time-triggered control scheme and various resource-aware event-triggered control schemes, and we evaluate them on a water-network test-bed against three jamming strategies: constant, random, and protocol-aware. Our test-bed results show that all schemes are highly susceptible to constant and random jamming. Time-triggered control schemes are just as susceptible to protocol-aware jamming, whereas some event-triggered control schemes are completely resilient to it. Finally, we further enhance the resilience of an event-triggered control scheme by adding a dynamical estimator that reconstructs lost or corrupted data.
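A toy sketch of the two ideas combined at the end of this abstract: an event-triggering rule that transmits only when the controller-side estimate drifts too far, and a model-based estimator that keeps predicting the state when the jammer drops a packet. The scalar plant, gains, threshold, and jamming probability are illustrative assumptions, not the paper's test-bed models.

```python
# Toy sketch of event-triggered control plus a model-based estimator that
# stands in for samples lost to jamming. All parameters are assumptions.
import random

A, B, K = 0.95, 1.0, 0.4     # assumed plant x+ = A x + B u, feedback u = -K x
THRESHOLD = 0.05             # event trigger: transmit when error exceeds this

def run(steps=50, jam_prob=0.5, seed=0):
    rng = random.Random(seed)
    x, x_hat = 1.0, 1.0      # true state and controller-side estimate
    transmissions = 0
    for _ in range(steps):
        if abs(x - x_hat) > THRESHOLD:        # event-triggered transmission
            transmissions += 1
            if rng.random() > jam_prob:       # packet survives the jammer
                x_hat = x
            # on loss, the estimator keeps propagating its own prediction
        u = -K * x_hat                        # control from the latest estimate
        x = A * x + B * u + rng.gauss(0, 0.01)
        x_hat = A * x_hat + B * u             # estimator mirrors the plant model
    return x, transmissions

state, tx = run()
print(f"final state {state:+.3f} with {tx} transmissions under jamming")
```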
The article analyzes the concept of "resilience" in relation to the development of computing. The strategy for reacting to perturbations in this process can be based either on "harsh resistance" or on "smarter elasticity." Our models are descriptive, defining the path of evolutionary development as structuring under the perturbations of the natural order, and they enable analysis of the relationships among models, structures, and factors of evolution. Two features are critical: parallelism and "fuzziness," which to a large extent determine the rate of computing development, especially in critical applications. Both reversible and irreversible development processes, related to elastic and resistant methods of problem solving, are discussed. The sources of perturbation lie in the vicinity of resource boundaries, where problem sizes grow with progress while the computational "checkability" of resources, i.e., of data with inadequate models, methodologies, and means, is lacking. As a case study, the problem of hidden faults, caused by this growth, the deficit of resources, and the limited checkability of digital circuits in critical applications, is analyzed.
As an emerging paradigm for energy-efficient design, approximate computing can reduce power consumption by simplifying logic circuits. Although approximate computing introduces calculation errors, their impact on the final results can be negligible in error-resilient applications such as convolutional neural networks (CNNs). Approximate computing has therefore been applied to CNNs to reduce their high demand for computing resources and energy. Going beyond traditional methods such as reducing data precision, this paper investigates the effect of approximate computing on the accuracy and power consumption of CNNs. To optimize the approximate computing applied to CNNs, we propose a method for quantifying the error resilience of each neuron by theoretical analysis, and we observe that error resilience varies widely across neurons. On the basis of this quantitative error resilience, we propose dynamic adaptation of the approximate bit-width, together with a corresponding configurable adder, to fully exploit the error resilience of CNNs. Experimental results show that the proposed method further reduces power consumption while maintaining high accuracy: using the optimal approximate bit-width for each layer found by our algorithm, dynamic bit-width adaptation reduces power consumption by more than 30% with less than 1% accuracy loss on LeNet-5.
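A sketch of one common style of configurable approximate adder, which simply treats the lowest `approx_bits` bits of both operands as zero; it illustrates how a per-layer bit-width could be applied, but it is not the adder architecture proposed in the paper, and the per-layer widths below are invented for the example.

```python
# Sketch of a truncation-style configurable approximate adder: the low
# `approx_bits` bits of both operands are ignored. Illustrative only.

def approx_add(a: int, b: int, approx_bits: int) -> int:
    """Add two non-negative ints, treating the low `approx_bits` bits as zero."""
    mask = ~((1 << approx_bits) - 1)
    return (a & mask) + (b & mask)

# A layer whose neurons are more error-resilient tolerates a wider
# approximate part, saving more power in the adder's lower bits.
per_layer_bits = {"conv1": 2, "conv2": 3, "fc1": 4}  # assumed, found by search

for layer, bits in per_layer_bits.items():
    exact, approx = 1234 + 5678, approx_add(1234, 5678, bits)
    print(f"{layer}: approx_bits={bits} error={exact - approx}")
```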
Engineering complex distributed systems is challenging. Recent industrial solutions for developing cyber-physical systems (CPS) tend to rely on service-oriented architectural designs, where the constituent components are deployed according to their service behavior and are understood as loosely coupled and largely independent. In this paper, we develop a workflow that combines contract-based and CPS model-based specifications with service orientation, and we analyze the resulting model using fault injection to assess the dependability of such systems. Compositionality principles based on the contract specifications help make the analysis practical. The presented techniques are evaluated on two case studies.
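A minimal sketch of the fault-injection-against-contracts idea, with an invented component and contract (this is not the paper's toolchain): a component carries an assume/guarantee contract, a fault is injected, and the guarantee is re-checked on inputs satisfying the assumption.

```python
# Minimal sketch of contract-based fault injection (illustrative only):
# inject a fault into a component and check whether its assume/guarantee
# contract still holds for valid inputs.

def sensor(reading: float) -> float:
    return reading  # nominal behaviour: pass the measurement through

def stuck_at_zero(component):
    return lambda reading: 0.0  # injected fault: output stuck at 0

assume = lambda r: 0.0 <= r <= 100.0           # contract assumption on inputs
guarantee = lambda r, out: abs(out - r) < 1.0  # contract guarantee on outputs

def meets_contract(component, inputs):
    return all(guarantee(r, component(r)) for r in inputs if assume(r))

inputs = [0.0, 25.0, 99.9]
print("nominal meets contract:", meets_contract(sensor, inputs))                # True
print("faulty meets contract:", meets_contract(stuck_at_zero(sensor), inputs))  # False
```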
This paper designs substitution boxes (S-Boxes) using innovative I-Ching operators (ICOs) that have evolved from ancient Chinese I-Ching philosophy. The three operators inherited from I-Ching, namely intrication, turnover, and mutual, are specifically designed to generate S-Boxes in cryptography. To analyze these operators, identity, compositionality, and periodicity measures are developed. All three operators change only the output positions of Boolean functions, so the bijection property of the S-Box is satisfied automatically; our approach thereby avoids singular values, which is very important when generating S-Boxes. Based on the periodicity property of the ICOs, a new network is constructed and applied in the S-Box design algorithm. To examine the efficiency of the proposed approach, commonly used criteria are adopted, such as nonlinearity, the strict avalanche criterion, differential approximation probability, and linear approximation probability. The comparison results show that S-Boxes designed with ICOs have higher security and better performance than other schemes. Furthermore, the proposed approach can be applied to other practical problems in a similar way.
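A sketch of two of the standard checks named in this abstract, using their textbook definitions: bijectivity and differential approximation probability for an 8-bit S-Box. The ICO construction itself is not reproduced; the toy affine S-Box below is a deliberately weak placeholder.

```python
# Standard S-Box checks (textbook definitions): bijectivity and differential
# approximation probability (DAP). Any 8-bit S-Box can be plugged in; the
# affine example below is weak and for demonstration only.
from collections import Counter

def is_bijective(sbox: list[int]) -> bool:
    return sorted(sbox) == list(range(256))

def differential_approx_probability(sbox: list[int]) -> float:
    """Max over nonzero input diffs dx of Pr[S(x) ^ S(x ^ dx) == dy]."""
    worst = 0
    for dx in range(1, 256):
        table = Counter(sbox[x] ^ sbox[x ^ dx] for x in range(256))
        worst = max(worst, max(table.values()))
    return worst / 256.0

sbox = [(5 * x + 7) % 256 for x in range(256)]  # toy affine S-Box
print("bijective:", is_bijective(sbox))
print("DAP:", differential_approx_probability(sbox))  # lower is better
```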
We introduce the strictly in-order core (SIC), a timing-predictable pipelined processor core. SIC is provably timing compositional and free of timing anomalies. This enables precise and efficient worst-case execution time (WCET) and multi-core timing analysis. SIC's key underlying property is the monotonicity of its transition relation w.r.t. a natural partial order on its microarchitectural states. This monotonicity is achieved by carefully eliminating some of the dependencies between consecutive instructions from a standard in-order pipeline design. SIC preserves most of the benefits of pipelining: it is only about 6-7% slower than a conventional pipelined processor. Its timing predictability enables orders-of-magnitude faster WCET and multi-core timing analysis than conventional designs.
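A toy cycle-level model of the non-overtaking flavour of strict in-order progression: no instruction may enter a stage before its predecessor has left that stage, so a stall (e.g. a cache miss) delays everything behind it but never reorders anything. The five stages and latencies are assumptions for illustration; this is not SIC's formal transition relation or monotonicity argument.

```python
# Toy sketch of strict in-order progression through a pipeline: an
# instruction enters stage s only after its predecessor has left stage s.
# Illustrative only; not SIC's formal definition.

STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def simulate(latencies):
    """latencies[i][s] = cycles instruction i spends in stage s."""
    finish = [[0] * len(STAGES) for _ in latencies]
    for i, lat in enumerate(latencies):
        for s in range(len(STAGES)):
            start = finish[i][s - 1] if s else 0
            if i > 0:
                # strictly in-order: wait until the predecessor left stage s
                start = max(start, finish[i - 1][s])
            finish[i][s] = start + lat[s]
    return finish[-1][-1]  # total cycles until the last write-back

# Instruction 1 suffers a slow MEM stage (e.g. a cache miss); under strict
# in-order progression, instruction 2 is delayed behind it, never around it.
program = [[1, 1, 1, 1, 1], [1, 1, 1, 10, 1], [1, 1, 1, 1, 1]]
print("total cycles:", simulate(program))
```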
In this paper, we consider the problem of decentralized verification for large-scale cascade interconnections of linear subsystems, such that dissipativity properties of the overall system are guaranteed with minimal knowledge of the dynamics. To achieve compositionality, we distribute the verification process among the individual subsystems, which use limited information received locally from their immediate neighbors. Furthermore, to obviate the need for full knowledge of the subsystem parameters, each decentralized verification rule employs a model-free learning structure: a reinforcement learning algorithm that allows online evaluation of an appropriate storage function, which can be used to verify dissipativity of the cascade up to that point. Finally, we show how the interconnection can be extended by adding learning-enabled subsystems while ensuring dissipativity.
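A data-driven sketch of what "model-free" verification of the dissipation inequality can look like for a single scalar subsystem: the verifier only samples the subsystem's step map and tests a candidate storage function against a supply rate. The paper learns storage functions via reinforcement learning rather than testing a fixed candidate; the dynamics, storage function, and supply rate below are assumptions chosen so the check passes.

```python
# Data-driven sketch of a dissipativity check for one subsystem in a cascade.
# Illustrative only: the paper learns the storage function online via RL.
import random

def step(x, u):                 # "unknown" subsystem, only sampled here
    return 0.8 * x + 0.5 * u    # x+ = a x + b u (hidden from the verifier)

def output(x):
    return x                    # y = x

V = lambda x: 3.0 * x * x                    # candidate quadratic storage
supply = lambda u, y: 25.0 * u * u - y * y   # L2-gain supply rate, gamma = 5

def dissipative_on_data(trials=10000, seed=1):
    rng = random.Random(seed)
    for _ in range(trials):
        x, u = rng.uniform(-5, 5), rng.uniform(-5, 5)
        # dissipation inequality: V(x+) - V(x) <= s(u, y)
        if V(step(x, u)) - V(x) > supply(u, output(x)) + 1e-9:
            return False        # a single violating sample falsifies V
    return True

print("dissipativity consistent with sampled data:", dissipative_on_data())
```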
With the popularity of smart devices and the widespread use of Wi-Fi-based indoor localization, edge computing is becoming the mainstream paradigm for processing massive sensing data to provide indoor localization services. However, the data conveyed to train the localization model unintentionally contain sensitive information about users and devices, and releasing them without any protection may cause serious privacy leakage. To address this issue, we propose a lightweight differential privacy-preserving mechanism for the edge computing environment. We extend ε-differential privacy theory to a mature machine learning localization technique to achieve privacy protection while training the localization model. Experimental results on multiple real-world datasets show that, compared with the original localization technique without privacy preservation, our scheme achieves high indoor localization accuracy while providing a differential privacy guarantee. By regulating the value of ε, the data quality loss of our method can be kept within 8.9%, and the time overhead is almost negligible. Our scheme can therefore be applied efficiently in edge networks and offers guidance on indoor localization privacy protection in edge computing.
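A minimal sketch of one standard way to realize this idea: perturbing Wi-Fi RSSI fingerprints with the Laplace mechanism before training a simple kNN localizer. The sensitivity, ε, fingerprints, and the kNN model are illustrative assumptions; the paper integrates ε-differential privacy into its own localization model.

```python
# Sketch: Laplace-mechanism perturbation of RSSI fingerprints before kNN
# localization. Sensitivity, epsilon, and data are illustrative assumptions.
import math, random

def laplace_noise(scale: float, rng: random.Random) -> float:
    u = rng.random() - 0.5                    # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize(fingerprint, epsilon: float, sensitivity: float, rng):
    scale = sensitivity / epsilon             # Laplace scale b = sensitivity / eps
    return [rssi + laplace_noise(scale, rng) for rssi in fingerprint]

def knn_locate(train, query, k=3):
    """train: list of (fingerprint, (x, y)); returns the averaged k-NN position."""
    dist = lambda f: sum((a - b) ** 2 for a, b in zip(f, query))
    nearest = sorted(train, key=lambda t: dist(t[0]))[:k]
    xs, ys = zip(*(pos for _, pos in nearest))
    return sum(xs) / k, sum(ys) / k

rng = random.Random(42)
raw = [([-40, -60, -70], (0, 0)), ([-60, -45, -65], (5, 0)),
       ([-70, -65, -40], (0, 5)), ([-55, -55, -55], (3, 3))]
# Noise is added once, before release; training then uses only noisy data.
private = [(privatize(f, epsilon=1.0, sensitivity=2.0, rng=rng), p) for f, p in raw]
print("estimate:", knn_locate(private, query=[-42, -58, -68]))
```

A larger ε adds less noise (better accuracy, weaker privacy); a smaller ε does the opposite, which is the accuracy/privacy trade-off the abstract tunes through ε.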