Bibliography
Although the deep penetration of cyber systems across smart grid sub-domains enriches the operation of wide-area protection, control, and other smart grid applications, the stochastic nature of adversarial cyber-attacks degrades their performance and the system's operation. Various hardware-in-the-loop (HIL) cyber-physical system (CPS) testbeds have attempted to evaluate cyberattack dynamics and power system perturbations for robust wide-area protection algorithms. However, physical resource constraints and modular integration designs have been significant barriers to modeling large-scale grid models (scalability) and have limited many CPS testbeds to either small-scale HIL environments or purely simulated environments. This paper proposes a meticulous design and efficient modeling of IEC 61850 logical nodes in physical relays to simulate large-scale grid models in a HIL real-time digital simulator environment integrated with industry-grade hardware and software systems for wide-area power system applications. The proposed design includes multi-breaker emulation in the physical relays, which extends the capacity of a physical relay to accommodate a greater number of CPS interfaces in the HIL CPS security testbed environment. We use our existing HIL CPS security testbed to demonstrate scalability through the real-time performance of ten simultaneous IEEE 39-bus CPS grid models. The experiments achieved 100% real-time performance with zero overruns and low latency while receiving and executing control signals from physical SEL relays via the IEC 61850 and DNP3 protocols to the real-time digital simulator, substation remote terminal unit (RTU) software, and supervisory control and data acquisition (SCADA) software at the control center.
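A minimal sketch of the multi-breaker emulation idea, assuming a relay whose output contacts are shared among several simulated breakers; the class, contact counts, and breaker names are hypothetical, not the authors' implementation:

```python
# Hedged sketch (not the paper's code): multiplexing several simulated
# breakers' IEC 61850 XCBR logical nodes onto the limited output contacts
# of one physical relay. All names and mappings are invented.

class MultiBreakerRelay:
    """Maps breaker logical nodes of many simulated breakers onto the
    output contacts of a single physical relay."""

    def __init__(self, relay_id, n_contacts):
        self.relay_id = relay_id
        self.contact_of = {}            # breaker name -> contact index
        self.free = list(range(n_contacts))

    def attach(self, breaker_name):
        # Assign the next free contact to a simulated breaker's XCBR node.
        if not self.free:
            raise RuntimeError(f"relay {self.relay_id}: no contacts left")
        self.contact_of[breaker_name] = self.free.pop(0)

    def trip(self, breaker_name):
        # In the testbed this would be a GOOSE/DNP3 trip command; here we
        # only report which contact the command is routed through.
        contact = self.contact_of[breaker_name]
        print(f"relay {self.relay_id}: trip {breaker_name} via contact {contact}")

relay = MultiBreakerRelay("SEL-421-A", n_contacts=8)
for bkr in ["IEEE39_BRK_5", "IEEE39_BRK_12", "IEEE39_BRK_27"]:
    relay.attach(bkr)
relay.trip("IEEE39_BRK_12")
```

The point of the design is that one hardware relay can then back several simulated substations, which is what lets ten IEEE 39-bus models run against a fixed pool of physical devices.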
This work considers the trade-off between security and performance when revealing partial information about encrypted data during computation. Our focus is on information revealed through control-flow side channels when executing programs on encrypted data. We use quantitative information flow to measure security, running time to measure performance, and program transformation techniques to alter the trade-off between the two. Combined with information flow policies, we perform a policy-aware security and performance trade-off (PASAPTO) analysis. We formalize the PASAPTO analysis as an optimization problem, prove the NP-hardness of the corresponding decision problem, and present two algorithms that solve it heuristically. We implemented our algorithms and combined them with the Dataflow Authentication (DFAuth) approach for outsourcing sensitive computations. Our DFAuth Trade-off Analyzer (DFATA) takes Java bytecode operating on plaintext data and an associated information flow policy as input. It outputs semantically equivalent program variants operating on encrypted data that are policy-compliant and approximately Pareto-optimal with respect to leakage and performance. We evaluated DFATA in a commercial cloud environment using Java programs, e.g., a decision tree program performing machine learning on medical data. The decision tree variant with the worst performance is 357% slower than the fastest variant, and leakage varies between 0% and 17% of the input.
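To make the leakage/performance trade-off concrete, here is a small sketch of a Pareto filter over program variants; the variant names, numbers, and policy bound are invented for illustration, not taken from DFATA:

```python
# Hedged sketch: keep only policy-compliant variants that are not dominated
# in the (leakage, runtime) plane, the shape of output DFATA aims for.

def pareto_front(variants, leakage_bound):
    """Policy-compliant variants not dominated in (leakage, runtime)."""
    ok = [v for v in variants if v["leakage"] <= leakage_bound]
    front = []
    for v in ok:
        dominated = any(w["leakage"] <= v["leakage"] and
                        w["runtime"] <= v["runtime"] and w is not v
                        for w in ok)
        if not dominated:
            front.append(v)
    return front

variants = [
    {"name": "v0", "leakage": 0.00, "runtime": 4.57},  # slow, leak-free
    {"name": "v1", "leakage": 0.05, "runtime": 2.10},
    {"name": "v2", "leakage": 0.17, "runtime": 1.00},  # fast, leaks most
    {"name": "v3", "leakage": 0.17, "runtime": 1.80},  # dominated by v2
]
print([v["name"] for v in pareto_front(variants, leakage_bound=0.20)])
# -> ['v0', 'v1', 'v2']
```

The NP-hardness result concerns finding such variants among exponentially many transformation choices; the filter above only illustrates what "approximately Pareto-optimal" means for the output set.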
Cyber-physical systems (CPS) depend on cybersecurity to ensure functionality, data quality, and resilience to cyberattacks, among other properties. Known and unknown cyber threats and attacks pose significant risks, making information assurance and information security critical; many systems remain vulnerable to intelligence exploitation and cyberattacks. By investigating cybersecurity risks and formally representing CPS using spatiotemporal dynamic graphs and networks, this paper examines topics and solutions aimed at assessing and strengthening: (1) cybersecurity capabilities; (2) information assurance and system vulnerabilities; (3) detection of cyber threats and attacks; and (4) situational awareness. We introduce statistically characterized dynamic graphs and novel entropy-centric algorithms and calculi that promise to deliver near-real-time capabilities.
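As a hedged illustration of what an "entropy-centric" signal on a dynamic graph can look like (the paper's actual calculi are not specified here), the sketch below tracks the Shannon entropy of each snapshot's degree distribution and flags deviations; the snapshots and threshold are invented:

```python
# Hedged sketch: degree-distribution entropy per snapshot of a dynamic
# graph, a simple stand-in for entropy-centric anomaly signals.

import math
from collections import Counter

def degree_entropy(edges):
    """Shannon entropy (bits) of the distribution of node degrees."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    counts = Counter(deg.values())        # degree value -> number of nodes
    n = sum(counts.values())
    return sum((c / n) * math.log2(n / c) for c in counts.values())

snapshots = [
    [(1, 2), (2, 3), (3, 4), (4, 1)],           # ring: uniform degrees
    [(1, 2), (1, 3), (1, 4), (1, 5), (1, 6)],   # star: a hub emerging
]
baseline = degree_entropy(snapshots[0])
for t, g in enumerate(snapshots):
    h = degree_entropy(g)
    flag = " <- deviation" if abs(h - baseline) > 0.3 else ""
    print(f"t={t}: H={h:.3f} bits{flag}")
```

A sudden shift in such an entropy series is one cheap, near-real-time cue that the graph's structure, and possibly the underlying CPS behavior, has changed.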
Motions of facial components convey significant information about facial expressions. Although remarkable advances have been made, the dynamics of facial topology have not been fully exploited. In this paper, a novel facial expression recognition (FER) algorithm called the Spatial-Temporal Semantic Graph Network (STSGN) is proposed to automatically learn spatial and temporal patterns through end-to-end feature learning from the facial topology structure. The proposed algorithm not only has greater discriminative power to capture the dynamic patterns of facial expressions and stronger generalization capability to handle different variations, but also offers higher interpretability. Experimental evaluation on two popular datasets, CK+ and Oulu-CASIA, shows that our algorithm achieves more competitive results than other state-of-the-art methods.
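A minimal sketch of the spatial-then-temporal aggregation that graph-based FER models of this kind build on; the landmark graph, features, and weights below are random placeholders (the real model learns them end to end), and this is not STSGN itself:

```python
# Hedged sketch: frame-wise graph aggregation over facial landmarks,
# followed by temporal pooling. Everything numeric here is a placeholder.

import numpy as np

rng = np.random.default_rng(0)
T, N, F = 16, 5, 8               # frames, facial landmarks, feature dims
X = rng.normal(size=(T, N, F))   # per-frame landmark features

# Tiny facial topology: a brow-eye-nose-mouth chain plus one cross edge.
A = np.eye(N)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (1, 3)]:
    A[i, j] = A[j, i] = 1.0
A /= A.sum(axis=1, keepdims=True)    # row-normalize: mean over neighbors

W = rng.normal(size=(F, F)) * 0.1    # stand-in for a learned weight matrix

# Spatial step: each landmark mixes features with its graph neighbors.
H = np.maximum(A @ X @ W, 0.0)       # ReLU((A X) W), applied frame-wise
# Temporal step: pool over frames to summarize the motion pattern.
clip_embedding = H.mean(axis=0).reshape(-1)
print(clip_embedding.shape)          # (N*F,) = (40,)
```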
Since 2018, a broad class of microarchitectural attacks called transient execution attacks (e.g., Spectre and Meltdown) has been disclosed. By abusing speculative execution mechanisms in modern CPUs, these attacks enable adversaries to leak secrets across security boundaries. A transient execution attack typically evolves through multiple stages, termed the attack chain. We find that current transient execution attacks usually rely on static attack chains, so any blockage in a chain may cause the entire attack to fail. In this paper, we propose a novel defense-aware framework, called TEADS, for synthesizing transient execution attacks dynamically. The main idea of TEADS is that each stage of a transient execution attack chain can be implemented in several ways, and the implementations used in different stages can be combined under certain constraints. By constructing an attacking graph representing the combination relationships between implementations and dynamically testing the available paths in this graph, we can synthesize transient execution attacks that bypass the imposed defense techniques. Our contributions include: (1) an automated defense-aware framework for synthesizing transient execution attacks even when combinations of defense strategies are enabled; (2) an attacking-graph extension algorithm that detects potential attack chains dynamically; (3) an implementation of TEADS, tested on several modern CPUs with different protection settings. Experimental results show that TEADS can bypass the equipped defenses, improving the adaptability and durability of transient execution attacks.
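A toy sketch of the attacking-graph idea, assuming a three-stage chain with interchangeable implementations; the stage names, compatibility rule, and "defense" sets are illustrative only and not TEADS internals:

```python
# Hedged sketch: enumerate candidate attack chains from per-stage
# implementation choices, dropping chains that a deployed defense blocks
# or that combine incompatible implementations.

from itertools import product

stages = {
    "trigger":  ["spectre_v1", "spectre_v2", "meltdown"],
    "transmit": ["flush_reload", "prime_probe"],
    "decode":   ["timer_rdtsc", "timer_thread"],
}

# Constraint: some adjacent stage implementations cannot be combined.
incompatible = {("meltdown", "prime_probe")}

# Implementations neutralized by the currently deployed defenses.
blocked_by_defenses = {"spectre_v2", "timer_rdtsc"}

def viable_chains():
    for chain in product(*stages.values()):
        if any(impl in blocked_by_defenses for impl in chain):
            continue          # one blocked stage breaks the whole chain
        if any(pair in incompatible for pair in zip(chain, chain[1:])):
            continue
        yield chain           # TEADS would test each survivor dynamically

for chain in viable_chains():
    print(" -> ".join(chain))
```

This captures why a static chain is fragile (any blocked stage kills it) while a graph of interchangeable implementations leaves alternative paths for the synthesizer to try.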
The papers in this special section explore recent advances in parallel graph processing. In modern data science and data-driven applications, graph algorithms occupy a pivotal place in advancing scientific discovery and knowledge. Nearly three centuries of ideas have made graph theory and its applications a mature area of the computational sciences. Yet today we find ourselves at a crossroads between theory and application. Spurred by the digital revolution, data from a diverse range of high-throughput channels and devices, across internet-scale applications, are marking a new era in data-driven computing and discovery. Building robust graph models and implementing scalable graph application frameworks in this new era are proving to be significant challenges. Concomitant with the digital revolution, we have also experienced an explosion in computing architectures, with a broad range of multicores, manycores, heterogeneous platforms, and hardware accelerators (CPUs, GPUs) being actively developed and deployed within servers and multi-node clusters. Recent advances have begun to show that, in more than one way, these two fields, graph theory and architectures, are capable of benefiting and in fact spurring new research directions in one another. This special section introduces some of the avenues of cutting-edge research happening at the intersection of graph algorithm design and implementation on advanced parallel architectures.
In this paper, the decentralized dynamic power allocation problem is investigated for mobile ad hoc networks (MANETs) at the tactical edge. Due to the mobility and self-organizing features of MANETs and environmental uncertainties on the battlefield, many existing optimal power allocation algorithms are neither efficient nor practical. Furthermore, the continuously growing scale of wireless connections in the emerging Internet of Battlefield Things (IoBT) introduces additional challenges for optimal power allocation due to the curse of dimensionality. To address these challenges, a novel Actor-Critic-Mass algorithm is proposed by integrating emerging Mean Field game theory with online reinforcement learning. The proposed approach not only learns the optimal power allocation for the IoBT in a decentralized manner, but also effectively handles uncertainties from the harsh environment at the tactical edge. In the developed scheme, each agent in the IoBT has three neural networks (NNs): 1) a critic NN that learns the optimal cost function minimizing the signal-to-interference-plus-noise ratio (SINR) error; 2) an actor NN that estimates the optimal transmit-power adjustment rate; and 3) a mass NN that learns the probability density function of all agents' transmit powers in the IoBT. The three NNs are tuned based on the Fokker-Planck-Kolmogorov (FPK) and Hamilton-Jacobi-Bellman (HJB) equations from Mean Field game theory. An IoBT wireless network has been simulated to evaluate the effectiveness of the proposed algorithm. The results demonstrate that the Actor-Critic-Mass algorithm can effectively approximate the probability distribution of all agents' transmit powers and converge to the target SINR. Moreover, the optimal decentralized power allocation is obtained by integrating mean-field game theory with reinforcement learning.
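A hedged sketch of the mean-field coupling behind this approach, stripped of the three neural networks: each agent steers its SINR toward a target with a gradient-like update, and interference depends on the population's power distribution only through its mean. All constants and the update rule are invented for illustration:

```python
# Hedged sketch (not the paper's Actor-Critic-Mass scheme): decentralized
# power adaptation under a mean-field interference term.

import numpy as np

rng = np.random.default_rng(1)
n_agents, target_sinr, noise, lr = 200, 2.0, 0.1, 0.05
gain = rng.uniform(0.5, 1.5, n_agents)     # per-agent channel gains
power = rng.uniform(0.1, 1.0, n_agents)    # initial transmit powers

def sinr(power):
    # Mean-field term: each agent sees only the average power of others,
    # a cheap summary of the population density the mass NN would learn.
    interference = noise + 0.001 * (n_agents - 1) * power.mean()
    return gain * power / interference

for _ in range(500):
    # Raise power when below the target SINR, lower it when above.
    power = np.clip(power + lr * (target_sinr - sinr(power)), 0.01, 5.0)

print(f"mean SINR {sinr(power).mean():.3f} vs target {target_sinr}")
```

The decentralization is visible in the update: each agent needs only its own gain, its own power, and one population statistic, not the state of every other agent, which is what defeats the curse of dimensionality.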
FastChain is a simulator built in NS-3 that simulates networked battlefield scenarios with military applications, connecting tanks, soldiers, and drones to form the Internet of Battlefield Things (IoBT). Computing, storage, and communication resources in the IoBT are limited in certain situations. Under these circumstances, resources must be carefully combined to handle the task at hand and accomplish the mission. FastChain uses a sharding approach to combine the resources of IoBT devices efficiently, identifying the best set of IoBT devices for a given scenario; these devices then collaborate through sharding-enabled blockchain technology. Interested researchers, policy makers, and developers can download and use the FastChain simulator to design, develop, and evaluate blockchain-enabled IoBT scenarios, helping them make robust and trustworthy informed decisions in mission-critical IoBT environments.
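As a hedged illustration of the device-selection step (FastChain's actual selection criteria are not described here), the sketch below greedily picks IoBT nodes until a mission's resource needs are covered; the device specs, scoring, and requirements are invented:

```python
# Hedged sketch: greedy shard formation over heterogeneous IoBT devices.

devices = [
    {"id": "tank-1",    "compute": 8, "storage": 4, "link": 2},
    {"id": "drone-3",   "compute": 2, "storage": 1, "link": 5},
    {"id": "soldier-7", "compute": 1, "storage": 1, "link": 3},
    {"id": "drone-9",   "compute": 3, "storage": 2, "link": 4},
]
need = {"compute": 10, "storage": 5, "link": 6}

def pick_shard(devices, need):
    remaining, shard = dict(need), []
    # Greedy: prefer devices covering the most of the initial requirement.
    pool = sorted(devices,
                  key=lambda d: -sum(min(d[k], remaining[k]) for k in need))
    for d in pool:
        if all(v <= 0 for v in remaining.values()):
            break                      # requirement met, shard is complete
        shard.append(d["id"])
        for k in need:
            remaining[k] -= d[k]
    return shard

print(pick_shard(devices, need))       # -> ['tank-1', 'drone-9']
```

The selected set would then run the sharding-enabled blockchain for that scenario, keeping consensus traffic within the shard rather than across the whole battlefield network.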
The Rowhammer bug is a novel microarchitectural security threat, enabling powerful privilege-escalation attacks on various mainstream platforms. It works by actively flipping bits in Dynamic Random Access Memory (DRAM) cells using only unprivileged instructions. To set up Rowhammer against binaries in the Linux page cache, the Waylaying algorithm was previously proposed: it stealthily relocates binaries onto exploitable physical addresses without exhausting system memory. However, the proof-of-concept Waylaying algorithm is easily detected during page cache eviction because of its high disk I/O overhead and long running time. This paper proposes the more advanced Memway algorithm, which improves on Waylaying in both I/O overhead and speed. Running time and disk I/O overhead are reduced by 90% by utilizing Linux tmpfs and in-memory swapping to manage the eviction files. Furthermore, by combining Memway with the unprivileged posix_fadvise API, the binary relocation step is made 100 times faster. Equipped with our Memway+fadvise relocation scheme, we demonstrate practical Rowhammer attacks that take only 15-200 minutes to covertly relocate a victim binary and less than 3 seconds to flip the target instruction bit.
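To show the unprivileged primitive involved (and only that; this is not the attack), here is a minimal sketch of dropping a file's pages from the Linux page cache via POSIX_FADV_DONTNEED, which is the fadvise behavior the relocation scheme builds on; the path is illustrative:

```python
# Hedged sketch: page-cache eviction hint via the unprivileged
# posix_fadvise API (Linux; Python >= 3.3 exposes it in os).

import os

def evict_from_page_cache(path):
    """Hint the kernel to evict all cached pages of `path`."""
    fd = os.open(path, os.O_RDONLY)
    try:
        os.fsync(fd)  # flush dirty pages first, or DONTNEED may be a no-op
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)  # len 0 = whole file
    finally:
        os.close(fd)

evict_from_page_cache("/tmp/victim_binary")   # hypothetical path
```

Because fadvise is advisory and unprivileged, forcing pages out and back in lets an attacker influence which physical frames a binary's pages land on, which is exactly the relocation lever Memway accelerates.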
With the introduction of national Industry 4.0 strategies, industrial control networks are becoming ever more tightly integrated with Internet technology. At the same time, the traditional isolation of industrial control networks has been eroded to a certain extent, making industrial control network security an increasingly serious problem. The S7 protocol is a proprietary protocol of the German company Siemens that is widely used for communication in industrial control networks. In this paper, an industrial control intrusion detection model based on the S7 protocol is proposed. Traditional protocol parsing technology cannot resolve proprietary industrial control protocols, so this model uses a deep parsing algorithm to analyze S7 data packets. To overcome the complexity and poor portability of static whitelist configuration, the model builds the whitelist dynamically through a whitelist self-learning algorithm. Finally, a composite intrusion detection method combining whitelist detection and abnormal behavior detection is used to detect anomalies. Experiments show that the method can effectively detect abnormal S7 protocol packets in industrial control networks.
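A hedged miniature of the whitelist self-learning idea: packets are reduced to (function, DB number, address) tuples, tuples observed during a trusted learning window form the whitelist, and anything outside it is flagged. The packet fields and traffic below are invented and stand in for real S7comm parsing:

```python
# Hedged sketch: self-learned whitelist detection over simplified S7 fields.

def key(pkt):
    return (pkt["func"], pkt["db"], pkt["addr"])

learning_traffic = [
    {"func": "read",  "db": 1, "addr": 0},
    {"func": "read",  "db": 1, "addr": 4},
    {"func": "write", "db": 2, "addr": 0},
]
whitelist = {key(p) for p in learning_traffic}   # built automatically

live_traffic = [
    {"func": "read",  "db": 1, "addr": 4},       # seen during learning: ok
    {"func": "write", "db": 1, "addr": 0},       # never-seen write: alert
]
for pkt in live_traffic:
    verdict = "ok" if key(pkt) in whitelist else "ALERT"
    print(verdict, key(pkt))
```

The paper's composite method would pair this whitelist check with behavioral anomaly detection, so that traffic that is syntactically whitelisted but behaviorally unusual can still be caught.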