Bibliography
Cyber-physical systems (CPS) are often deployed in safety-critical key infrastructures, so fault-tolerance policies are extensively applied in CPS to improve their credibility; among them, the same-physical-backup (SPB) hardware redundancy technique is frequently used for its simple and reliable implementation. To resolve the challenges faced in simulation testing of SPB-CPS, this paper dynamically determines the test resources matched to the CPS scale using adaptive allocation policies, and establishes hierarchical models and an inter-layer message transmission mechanism. Meanwhile, a collaborative simulation time-sequence push strategy and a sliding-window-based node activity test mechanism are designed to improve the execution efficiency of the simulation test. To validate the effectiveness of the proposed method, we built a fault-tolerant CPS simulation platform. Experiments showed that it improves SPB-CPS simulation test efficiency.
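The abstract does not detail the sliding-window activity test; the following is a minimal sketch of such a check for pruning idle simulation nodes, where the window length and event API are illustrative assumptions rather than the paper's design.

```python
# Hedged sketch: a sliding-window node activity check of the kind the
# abstract mentions. Window length and event API are assumptions.
from collections import deque

class ActivityMonitor:
    def __init__(self, window: float):
        self.window = window           # sliding-window length (sim time)
        self.events = deque()          # timestamps of recent node events

    def record(self, t: float):
        self.events.append(t)

    def is_active(self, now: float) -> bool:
        # drop events that slid out of the window, then test for activity
        while self.events and self.events[0] < now - self.window:
            self.events.popleft()
        return bool(self.events)

m = ActivityMonitor(window=5.0)
m.record(1.0); m.record(2.5)
print(m.is_active(3.0))   # True: recent events inside the window
print(m.is_active(9.0))   # False: node idle, can be skipped this step
```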
From signal processing to emerging deep neural networks, a range of applications exhibit intrinsic error resilience. For such applications, approximate computing opens up new possibilities for energy-efficient computing by producing slightly inaccurate results using greatly simplified hardware. Adopting this approach, a variety of basic arithmetic units, such as adders and multipliers, have been effectively redesigned to generate approximate results for many error-resilient applications. In this work, we propose SECO, an approximate exponential function unit (EFU). Exponentiation is a key operation in many signal processing applications and, more importantly, in spiking neuron models, but its energy-efficient implementation has been inadequately explored. We also introduce a cross-layer design method for SECO to optimize the energy-accuracy trade-off. At the algorithm level, SECO offers runtime scaling between energy efficiency and accuracy based on approximate Taylor expansion, where the error is minimized by optimizing parameters using discrete gradient descent at design time. At the circuit level, our error analysis method efficiently explores the design space to select the energy-accuracy-optimal approximate multiplier at design time. In tandem, the cross-layer design and runtime optimization method are able to generate energy-efficient and accurate approximate EFU designs that are up to 99.7% accurate at a power consumption of 3.73 pJ per exponential operation. SECO is also evaluated on the adaptive exponential integrate-and-fire neuron model, yielding only 0.002% timing error and 0.067% value error compared to the precise neuron model.
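To make the runtime energy-accuracy scaling concrete, here is a minimal software sketch of a truncated-Taylor exponential whose term count plays the role of the tuning knob; the term count and scaling are illustrative assumptions, and the paper's parameter optimization and approximate multipliers are not reproduced.

```python
# Hedged sketch of a runtime-tunable approximate exponential based on a
# truncated Taylor expansion, in the spirit of SECO's algorithm level.
def approx_exp(x: float, n_terms: int = 4) -> float:
    """Approximate e^x with the first n_terms terms of the Taylor series.

    Fewer terms -> cheaper hardware analogue, larger error; more terms ->
    higher accuracy, mirroring the energy-accuracy scaling described above.
    """
    result, term = 1.0, 1.0
    for k in range(1, n_terms):
        term *= x / k          # next Taylor term x^k / k!
        result += term
    return result

# Example: error shrinks as terms are added (x = 0.5, true value ~1.6487)
for n in (2, 3, 4, 6):
    print(n, approx_exp(0.5, n))
```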
Fast, accurate three-dimensional reconstructions of plasma equilibria, crucial for the physical interpretation of fusion data generated within confinement devices such as stellarators and tokamaks, are computationally very expensive and routinely require days, even weeks, to complete using serial approaches. Here, we present a parallel implementation of the three-dimensional plasma reconstruction code V3FIT. A formal analysis identifying the performance bottlenecks and scalability limits of this new parallel implementation, which combines both task and data parallelism, is presented. The theoretical findings are supported by empirical performance results on several thousand processor cores of a Cray XC30 supercomputer. Parallel V3FIT is shown to deliver over 40X speedup, enabling fusion scientists to carry out three-dimensional plasma equilibrium reconstructions at unprecedented scales in only a few hours (instead of days or weeks) for the first time.
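The abstract does not reproduce the formal scalability analysis; as generic background (not the paper's derivation), Amdahl's law gives the kind of bound such an analysis yields, relating a code's serial fraction s to its maximum speedup on p cores. For instance, sustaining a 40X speedup requires a serial fraction no larger than 1/40 = 2.5%.

```latex
% Generic scalability bound (background, not the paper's analysis):
% Amdahl's law for serial fraction s on p processor cores.
\[
  S(p) \;=\; \frac{1}{\,s + \dfrac{1-s}{p}\,},
  \qquad
  \lim_{p \to \infty} S(p) \;=\; \frac{1}{s}.
\]
```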
Malicious domains are a vital component of a variety of cyber attacks, making their detection a hot topic in cyber security. Several recent techniques identify malicious domains by analyzing DNS data, because DNS data contain a wealth of global information that attackers cannot easily manipulate. Attackers constantly recycle resources: they frequently change domain-IP resolutions and create new domains to avoid detection. Consequently, multiple malicious domains are hosted on the same IPs and multiple IPs simultaneously host the same malicious domains, creating intrinsic associations among them. Labeled domains, traced back from the query history of all domains, can therefore be used to verify and uncover these associations. Graphs are a natural representation of such relationships, and many high-performance graph algorithms exist. Detection can thus be cast as the graph-mining task of inferring node reputation scores using improvements of the belief propagation algorithm: the higher the reputation score a node reveals, the higher the inferred probability that it is malicious. As a demonstration, this paper proposes a malicious domain detection technique and evaluates it on a real-world dataset collected from DNS servers, from which a DNS graph is built. The proposed technique achieves high performance: accuracy over 98.3%, with precision and recall of 99.1% and 98.6%, respectively. Notably, starting from a small set of labeled domains (legitimate and malicious), the technique can discover a large set of potentially malicious domains. The results indicate that the method is highly effective in detecting malicious domains.
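As a simplified stand-in for the paper's belief propagation variant (whose message schedule and potentials are not given in the abstract), the sketch below propagates maliciousness scores over a toy domain-IP graph; the edges, seed labels, and update rule are illustrative assumptions.

```python
# Hedged sketch: score propagation over a domain-IP graph, a simplified
# stand-in for belief propagation. All data here are toy assumptions.
from collections import defaultdict

edges = [("evil.example", "203.0.113.7"),
         ("evil2.example", "203.0.113.7"),
         ("good.example", "198.51.100.1")]
seed = {"evil.example": 1.0, "good.example": 0.0}  # 1.0 = known malicious

graph = defaultdict(set)
for d, ip in edges:
    graph[d].add(ip)
    graph[ip].add(d)

score = {n: seed.get(n, 0.5) for n in graph}      # 0.5 = unknown prior
for _ in range(10):                                # fixed-point iteration
    nxt = {}
    for n, nbrs in graph.items():
        if n in seed:                              # keep seed labels pinned
            nxt[n] = seed[n]
        else:                                      # average neighbor scores
            nxt[n] = sum(score[m] for m in nbrs) / len(nbrs)
    score = nxt

print(score["evil2.example"])  # pulled toward malicious via the shared IP
```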
A visible nearest neighbor (VNN) query returns the k nearest objects that are visible to a query point; it supports applications such as route planning, target monitoring, and antenna placement. However, with the proliferation of wireless communications and advances in positioning technology for mobile devices, efficient VNN search among moving objects is required. While most previous work on VNN queries focused on static objects, in this paper we treat the objects as moving continuously when indexing them, and study the visible nearest neighbor query for moving objects (MVNN). Assuming that the objects are represented as trajectories given by linear functions of time, we propose a scheme that indexes the moving objects with a time-parameterized R-tree (TPR-tree) and the obstacles with an R-tree. The paper offers four heuristics for visibility and space pruning. New algorithms, Post-pruning and United-pruning, are developed to efficiently solve MVNN queries with all four heuristics. The effectiveness and efficiency of our solutions are verified by extensive experiments over synthetic datasets on a real road network.
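To illustrate the two primitives the query combines (linear trajectories and visibility against obstacles), here is a minimal sketch; obstacles are modeled as line segments, and the TPR-tree/R-tree indexing and the four pruning heuristics from the paper are not reproduced.

```python
# Hedged sketch: visibility test between a query point and a moving
# object whose trajectory is linear in time, as assumed above.
def pos(p0, v, t):
    """Position at time t of an object with start p0 and velocity v."""
    return (p0[0] + v[0] * t, p0[1] + v[1] * t)

def ccw(a, b, c):
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def segments_cross(p, q, a, b):
    """True if open segments pq and ab properly intersect."""
    return (ccw(p, q, a) * ccw(p, q, b) < 0 and
            ccw(a, b, p) * ccw(a, b, q) < 0)

def visible(query, obj, obstacles):
    """Object is visible if no obstacle segment blocks the sight line."""
    return not any(segments_cross(query, obj, a, b) for a, b in obstacles)

# Example: a wall between the query point and the object at t = 2
q, p0, v = (0.0, 0.0), (4.0, -2.0), (0.0, 2.0)
wall = [((2.0, -1.0), (2.0, 3.0))]
print(visible(q, pos(p0, v, 2.0), wall))  # False: the wall blocks view
```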
Previous studies of the power grid focused on the power system itself; recent work, however, considers the power grid and the communication network together, a coupling first referred to as interdependent networks. Prior studies modeling interdependent networks extract the main features of real networks and make networks A and B completely symmetrical, in both intra-network degree distribution and inter-network support pattern, a circumstance that is hard to attain in reality. In this paper, we deliberately give both networks the same topology in order to isolate and study the support pattern between the networks. For initial failures originating in either the power grid or the communication network, we find that the remaining surviving fractions differ greatly: a failure originating in the power grid is more harmful than one originating in the communication network. These results highlight the vulnerability caused by interdependency and suggest that protection measures should pay particular attention to the power grid.
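The sketch below shows a minimal cascade loop for two interdependent networks with one-to-one support links, the general class of model described above; the topologies, survival rule, and initial failure set are toy assumptions, not the paper's experimental setup.

```python
# Hedged sketch: cascade between two interdependent networks where node i
# in A is supported by node i in B and vice versa. A node survives only
# if its support node is alive and it keeps an intra-network neighbor.
def cascade(adj_a, adj_b, initial_failed_a):
    alive_a = set(adj_a) - set(initial_failed_a)
    alive_b = set(adj_b)
    changed = True
    while changed:
        changed = False
        # B nodes fail when their A support is gone or they are isolated
        for n in list(alive_b):
            if n not in alive_a or not any(m in alive_b for m in adj_b[n]):
                alive_b.discard(n); changed = True
        # symmetric rule for A
        for n in list(alive_a):
            if n not in alive_b or not any(m in alive_a for m in adj_a[n]):
                alive_a.discard(n); changed = True
    return alive_a, alive_b

# Two identical 4-cycles; an initial power-grid failure propagates across
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(cascade(ring, dict(ring), initial_failed_a={0}))
```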
Until recently, IT security received limited attention within the scope of Process Control Systems (PCS). In the past, PCS consisted of isolated, specialized components running closed process control applications, where hardware was placed in physically secured locations and connections to remote network infrastructures were forbidden. Nowadays, industrial communications are fully exploiting the plethora of features and novel capabilities deriving from the adoption of commercial off-the-shelf (COTS) hardware and software. Nonetheless, the reliance on COTS for remote monitoring, configuration, and maintenance has also exposed PCS to significant cyber threats. In light of these issues, this paper presents the steps for the design, verification, and implementation of a lightweight remote attestation protocol. The protocol is aimed at providing a secure software integrity verification scheme that can be readily integrated into existing industrial applications. The main novelty of the designed protocol is that it encapsulates key elements for the protection of both participating parties (i.e., verifier and prover) against cyber attacks. The protocol is formally verified for correctness with the help of the Scyther model checking tool. The protocol implementation and experimental results are provided for a Phoenix-Contact industrial controller, which is widely used in the automation of gas transportation networks in Romania.
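For orientation, here is a generic nonce-based challenge-response attestation exchange of the kind such protocols build on; the key setup, message fields, and measurement function are illustrative assumptions, and the paper's Scyther-verified protocol is not reproduced.

```python
# Hedged sketch: challenge-response software integrity attestation.
import hmac, hashlib, os

SHARED_KEY = os.urandom(32)            # provisioned out of band (assumed)

def measure_firmware(image: bytes) -> bytes:
    return hashlib.sha256(image).digest()   # integrity measurement

def prover_respond(nonce: bytes, image: bytes) -> bytes:
    # binds the fresh nonce to the current firmware measurement
    return hmac.new(SHARED_KEY, nonce + measure_firmware(image),
                    hashlib.sha256).digest()

def verifier_check(nonce: bytes, response: bytes,
                   expected_image: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, nonce + measure_firmware(expected_image),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time compare

firmware = b"PLC application image v1.2"
nonce = os.urandom(16)                                # freshness vs. replay
resp = prover_respond(nonce, firmware)
print(verifier_check(nonce, resp, firmware))          # True: intact image
print(verifier_check(nonce, resp, b"tampered image")) # False: detected
```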
Ransomware techniques have evolved over time, with the most resilient attacks making data recovery practically impossible. This has driven countermeasures to shift from prevention towards recovery. In this paper, however, we model ransomware attacks from an infection-vector point of view: we follow the basic infection chain of crypto ransomware and use Bayesian network statistics to infer some of the most common ransomware infection vectors. We also employ attack and sensor nodes to capture uncertainty in the Bayesian network.
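The sketch below runs exact inference by enumeration on a toy three-node chain (phishing vector, infection, sensor alert) to show the style of reasoning involved; all probabilities and the network structure are invented for illustration, since the abstract does not give the paper's network or CPTs.

```python
# Hedged sketch: inference by enumeration on a toy Bayesian network
# (phishing email -> infection -> sensor alert). All numbers are assumed.
from itertools import product

P_phish = {True: 0.3, False: 0.7}
P_infect = {True: {True: 0.6, False: 0.4},   # P(infect | phish)
            False: {True: 0.05, False: 0.95}}
P_alert = {True: {True: 0.9, False: 0.1},    # sensor node: P(alert | infect)
           False: {True: 0.2, False: 0.8}}

def joint(phish, infect, alert):
    return P_phish[phish] * P_infect[phish][infect] * P_alert[infect][alert]

# Posterior P(phish | alert observed): how likely is this vector the cause?
num = sum(joint(True, i, True) for i in (True, False))
den = sum(joint(p, i, True) for p, i in product((True, False), repeat=2))
print(f"P(phishing vector | alert) = {num/den:.3f}")
```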
Traditional information security risk assessment algorithms are mainly used to evaluate small-scale information systems and are not suitable for the massive information systems of the Energy Internet. To solve this problem, this paper proposes an Information Security Risk Algorithm based on Dynamic Risk Propagation (ISRADRP). ISRADRP first divides the information systems of the Energy Internet into partitions according to their logical network location. Then, it computes each partition's risk value, without considering threat propagation effects, via the RM algorithm. Furthermore, it calculates the inside and outside propagation risk values for each partition according to a Dependency Structure Matrix. Finally, the security bottleneck of the system is identified and the overall risk value of the information system is obtained.
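As a minimal sketch of risk propagation through a Dependency Structure Matrix, the loop below iterates intrinsic partition risks to a fixed point; the risk values, coupling weights, and update rule are toy assumptions, and the RM algorithm itself is not specified in the abstract.

```python
# Hedged sketch: propagating partition risk through a Dependency
# Structure Matrix (DSM). All values are illustrative assumptions.
intrinsic = [0.4, 0.1, 0.2]          # per-partition risk before propagation
dsm = [[0.0, 0.3, 0.0],              # dsm[i][j]: share of j's risk hitting i
       [0.2, 0.0, 0.1],
       [0.0, 0.4, 0.0]]

risk = intrinsic[:]
for _ in range(20):                  # iterate toward a fixed point
    risk = [intrinsic[i] +
            sum(dsm[i][j] * risk[j] for j in range(len(risk)))
            for i in range(len(risk))]

bottleneck = max(range(len(risk)), key=risk.__getitem__)
print([round(r, 3) for r in risk], "bottleneck partition:", bottleneck)
print("overall risk:", round(sum(risk), 3))
```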
Computer security has become an increasingly important topic in the computer and communication industries, since it underpins critical business processes and protects personal and sensitive information. Computer security means preserving the security attributes (confidentiality, integrity, and availability) of computer systems, which face threats such as denial-of-service (DoS) attacks, viruses, and intrusions. To ensure high computer security, intrusion tolerance techniques based on fault-tolerant schemes have been widely applied. This paper presents a quantitative performance evaluation of a virtual machine (VM) based intrusion-tolerant system. Concretely, two security measures are derived: MTTSF (mean time to security failure) and the effective traffic intensity. The mathematical analysis is carried out using Laplace-Stieltjes transforms in the analysis of an M/G/1 queueing system.
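The abstract leaves the transform analysis to the paper; as standard M/G/1 background (not the authors' derivation), the Pollaczek-Khinchine transform equation gives the Laplace-Stieltjes transform W*(s) of the stationary waiting time in terms of the service-time LST B*(s), the arrival rate λ, and the utilization ρ = λE[B] < 1, together with the resulting mean wait.

```latex
% Standard M/G/1 background: Pollaczek-Khinchine transform equation and
% the mean waiting time it yields.
\[
  W^*(s) \;=\; \frac{(1-\rho)\,s}{\,s-\lambda\bigl(1-B^*(s)\bigr)\,},
  \qquad
  E[W] \;=\; \frac{\lambda\,E[B^2]}{2(1-\rho)}.
\]
```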
Arrays of nanosized hollow Ni spheres were studied by micromagnetic simulation using the Object Oriented Micromagnetic Framework. Before presenting the array results, we analyze the properties of an individual hollow sphere in order to separate out the effects genuinely due to the array. The results are divided into three parts analyzing the magnetic behavior in the static and dynamic regimes. The first part presents calculations for a magnetic field applied parallel to the plane of the array; specifically, we present the magnetization of the equilibrium configurations. The obtained magnetization curves show that decreasing the shell thickness decreases the coercive field and makes magnetic saturation difficult to reach. The coercive field values obtained in our work are of the same order as those reported in experimental studies in the literature. The magnetic response in our study is dominated by shape effects, and we obtained high values of the reduced remanence, Mr/MS = 0.8. In the second part, we vary the orientation of the magnetic field and calculate hysteresis curves to study the angular dependence of the coercive field and remanence. In thin shells, we observed the moments orienting tangentially to the spherical surface, and the reversal of the magnetic moments proceeds through the formation of vortex and onion modes. In the third part, we analyze the magnetization reversal process in the dynamic regime. The analysis showed that reversal occurs through a nonhomogeneous configuration. Self-demagnetizing effects are predominant in the magnetic properties of the array, with two contributions: one due to each shell as an independent object and the other due to the effects of the array as a whole.
Wireless Sensor Networks (WSNs) are vulnerable to node capture attacks, in which an attacker captures one or more sensor nodes and reveals all stored security information, enabling him to compromise part of the WSN communications. Due to the large number of sensor nodes and the lack of information about the deployment and hardware capabilities of each node, key management in wireless sensor networks has become a complex task. Limited memory resources and energy constraints are further issues for key management in WSNs. Hence, an efficient key management scheme is needed that reduces the impact of node capture attacks and consumes less energy. Simulation results show that our proposed technique efficiently increases the packet delivery ratio with reduced energy consumption.
Similarity search plays an important role in many applications involving high-dimensional data. Due to the known dimensionality curse, the performance of most existing indexing structures degrades quickly as the feature dimensionality increases. Hashing methods, such as locality sensitive hashing (LSH) and its variants, have been widely used to achieve fast approximate similarity search by trading search quality for efficiency. However, most existing hashing methods use randomized algorithms to generate hash codes without considering the specific structural information in the data. In this paper, we propose a novel hashing method, namely, robust hashing with local models (RHLM), which learns a set of robust hash functions to map the high-dimensional data points into binary hash codes by effectively utilizing local structural information. In RHLM, for each individual data point in the training dataset, a local hashing model is learned and used to predict the hash codes of its neighboring data points. The local models from all the data points are globally aligned so that an optimal hash code can be assigned to each data point. After obtaining the hash codes of all the training data points, we design a robust method employing ℓ2,1-norm minimization on the loss function to learn effective hash functions, which are then used to map each database point into its hash code. Given a query data point, the search process first maps it into the query hash code using the hash functions and then explores the buckets that have hash codes similar to the query hash code. Extensive experimental results conducted on real-life datasets show that the proposed RHLM outperforms the state-of-the-art methods in terms of search quality and efficiency.
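The sketch below shows the generic hash-then-probe search pipeline described above, using random hyperplane hashing as a stand-in for RHLM's learned hash functions (which the abstract does not specify in closed form); the dimensions, bit count, and probing radius are illustrative assumptions.

```python
# Hedged sketch: binary-code bucketing and Hamming-ball bucket probing.
import random
from collections import defaultdict

random.seed(0)
DIM, BITS = 8, 6
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def hash_code(x):
    """Map a point to a binary code: sign of projection onto each plane."""
    return tuple(int(sum(p*v for p, v in zip(pl, x)) > 0) for pl in planes)

# Index database points into buckets keyed by their hash code
db = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(100)]
buckets = defaultdict(list)
for i, x in enumerate(db):
    buckets[hash_code(x)].append(i)

def query(q, max_hamming=1):
    """Probe buckets whose codes lie within max_hamming of the query code."""
    qc = hash_code(q)
    hits = []
    for code, ids in buckets.items():
        if sum(a != b for a, b in zip(qc, code)) <= max_hamming:
            hits.extend(ids)
    return hits  # candidates for exact re-ranking

print(len(query(db[0])))  # candidate set typically far smaller than the db
```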
In cloud computing environments, the user authentication scheme is an important security tool because it provides authentication, authorization, and accounting for cloud users. Many user authentication schemes for cloud computing have therefore been proposed in recent years. However, we find that most of the previous schemes have security problems and, moreover, cannot be practically implemented in cloud computing. To solve these problems, we propose a new ID-based user authentication scheme for cloud computing. Compared with related work, the proposed scheme offers a higher security level and lower computation costs, and it can be easily applied to cloud computing environments. The proposed scheme is therefore more efficient and practical than the related works.