Biblio
This paper presents a secure reinforcement learning (RL)-based control method for unknown linear time-invariant cyber-physical systems (CPSs) subjected to compositional attacks such as eavesdropping and covert attacks. We consider an attack scenario in which the attacker learns the dynamic model during the exploration phase of the learning conducted by the designer to obtain a linear quadratic regulator (LQR), and thereafter uses this information to mount a covert attack on the dynamic system; we refer to this as the doubly learning-based control and attack (DLCA) framework. We propose a dynamic camouflaging-based attack-resilient reinforcement learning (ARRL) algorithm that learns the desired optimal controller for the dynamic system while injecting sufficient misinformation into the attacker's estimate of the system dynamics. The algorithm is accompanied by theoretical guarantees and extensive numerical experiments on a consensus multi-agent system and on a benchmark power grid model.
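For orientation, the following is a minimal sketch of the generic learn-then-control pipeline the abstract alludes to: excite an unknown LTI system, estimate (A, B) by least squares, and compute an LQR gain on the estimate. The system matrices and noise level are illustrative assumptions; the paper's ARRL algorithm and dynamic camouflaging mechanism are not reproduced here.

```python
# Minimal sketch (assumed, not the paper's ARRL algorithm): identify an unknown
# LTI system from exploratory data and compute an LQR gain on the estimate.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)

# True (unknown to the designer) system x_{k+1} = A x_k + B u_k + w_k
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
n, m = A.shape[0], B.shape[1]

# Exploration phase: excite the system with random inputs and record data.
T = 200
X = np.zeros((n, T + 1))
U = rng.normal(size=(m, T))
for k in range(T):
    X[:, k + 1] = A @ X[:, k] + B @ U[:, k] + 0.01 * rng.normal(size=n)

# Least-squares estimate of [A B] from the recorded trajectory.
Z = np.vstack([X[:, :T], U])            # regressor [x_k; u_k]
Theta = X[:, 1:] @ np.linalg.pinv(Z)    # [A_hat B_hat]
A_hat, B_hat = Theta[:, :n], Theta[:, n:]

# LQR design on the identified model.
Q, R = np.eye(n), np.eye(m)
P = solve_discrete_are(A_hat, B_hat, Q, R)
K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)
print("estimated LQR gain K =", K)
```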
This contribution provides the implementation of a holistic operational security assessment process covering both steady-state security and dynamic stability. The merging of steady-state and dynamic security assessment into a sequential process is presented. Steady-state and dynamic models of a VSC-HVDC link were developed, including curative and stabilizing measures as remedial actions. The assessment process was validated in a case study on a modified version of the Nordic 32 system. Simulation results show that measure selection based purely on steady-state contingency analysis can lead to loss of stability in the time domain, whereas a subsequent selection of measures based on the dynamic security assessment was able to guarantee operational security for the stationary N-1 scenario as well as power system stability.
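A hedged sketch of the sequential assessment logic described above: a remedial measure accepted by steady-state N-1 screening is adopted only if a subsequent dynamic (time-domain) check also passes. The two check functions are placeholders supplied by the caller, not the paper's VSC-HVDC models.

```python
# Hedged sketch of sequential steady-state -> dynamic security assessment.
# The two check functions below are placeholders (assumptions), not the
# paper's actual contingency analysis or time-domain simulation.
from typing import Callable, Iterable, Optional

def select_measure(
    candidate_measures: Iterable[str],
    steady_state_secure: Callable[[str], bool],   # N-1 contingency screening
    dynamically_stable: Callable[[str], bool],    # time-domain stability check
) -> Optional[str]:
    """Return the first measure that passes both assessment stages."""
    for measure in candidate_measures:
        if not steady_state_secure(measure):
            continue                      # already fails in steady state
        if dynamically_stable(measure):
            return measure                # secure in steady state AND stable
        # else: steady-state feasible but loses stability in the time domain,
        # which is exactly the case the case study warns about.
    return None

# Toy usage with stubbed checks:
if __name__ == "__main__":
    chosen = select_measure(
        ["redispatch", "hvdc_setpoint_change", "load_shedding"],
        steady_state_secure=lambda m: m != "load_shedding",
        dynamically_stable=lambda m: m == "hvdc_setpoint_change",
    )
    print("selected measure:", chosen)
```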
This paper studies secure computation offloading for multi-user, multi-server mobile edge computing (MEC)-enabled Internet of Things (IoT). A novel jamming signal scheme is designed to interfere with the decoding process at the eavesdropper (Eve) without impairing the uplink task offloading from users to access points (APs). Considering offloading latency and secrecy constraints, the paper studies the joint optimization of communication and computation resource allocation, together with the partial offloading ratio, to maximize the total secrecy offloading data (TSOD) over the whole offloading process. The problem is nonconvex, so we resort to the block coordinate descent (BCD) method to decompose it into three subproblems. An efficient iterative algorithm is proposed to obtain a locally optimal solution to the power allocation subproblem, and the optimal computation resource allocation and offloading ratio are then derived in closed form. Simulation results demonstrate that the proposed algorithm converges quickly and achieves higher TSOD than several heuristics.
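The following is a hedged sketch of the block coordinate descent structure described above: the joint problem is split into three blocks (transmit power, computation resources, offloading ratio) that are updated in turn until the objective stops improving. The objective and the per-block updates are toy placeholders, not the paper's TSOD formulation or closed-form solutions.

```python
# Hedged BCD skeleton: alternate over three variable blocks until convergence.
# The objective and block updates are toy assumptions, not the paper's model.
import numpy as np

def toy_objective(p, f, rho):
    # Stand-in for secrecy offloading data minus a power cost: concave in the
    # power p with an interior maximizer, increasing in f and rho.
    return np.log1p(10.0 * rho * p * f) - 2.0 * p

def bcd(max_iter=50, tol=1e-6):
    p, f, rho = 0.1, 0.5, 0.5            # feasible starting point
    prev = toy_objective(p, f, rho)
    cur = prev
    for _ in range(max_iter):
        # Block 1: power allocation (a simple 1-D grid search stands in for
        # the paper's iterative locally optimal algorithm).
        grid = np.linspace(0.01, 1.0, 100)
        p = grid[np.argmax([toy_objective(g, f, rho) for g in grid])]
        # Block 2: computation resource allocation (closed form in the paper;
        # here simply the box-constrained maximizer of the toy objective).
        f = 1.0
        # Block 3: offloading ratio in [0, 1] (closed form in the paper).
        rho = 1.0
        cur = toy_objective(p, f, rho)
        if cur - prev < tol:
            break
        prev = cur
    return p, f, rho, cur

print(bcd())
```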
Skyline computation is an increasingly popular query with broad applicability to many domains. Given the trend to outsource databases, and due to the sensitive nature of the data (e.g., in healthcare), it is essential to evaluate skylines on encrypted datasets. Research efforts have acknowledged the importance of secure skyline computation, but existing solutions suffer from several shortcomings: (i) they provide only ad-hoc security; (ii) they are prohibitively expensive; or (iii) they rely on assumptions such as the presence of multiple non-colluding parties in the protocol. Inspired by solutions for secure nearest neighbors, we conjecture that a secure and efficient way to compute skylines is through result materialization. However, materialization is much more challenging for skyline queries due to large space requirements. We show that pre-computing skyline results while minimizing storage overhead is NP-hard, and we provide heuristics that solve the problem efficiently while keeping storage at reasonable levels. Our algorithms are novel and also applicable to regular skyline computation, but we focus on the encrypted setting, where materialization reduces the response time of skyline queries from hours to seconds. Extensive experiments show that we clearly outperform existing work in terms of performance, and our security analysis proves that we incur only a small (and quantifiable) data leakage.
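To make the dominance notion concrete, here is a minimal sketch of plaintext skyline computation in a block-nested-loop style. The paper's contribution is the secure, materialization-based evaluation over encrypted data, which is not shown here.

```python
# Minimal plaintext skyline sketch (smaller values preferred in every dimension).
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """a dominates b if a <= b in every dimension and a < b in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points: List[Sequence[float]]) -> List[Sequence[float]]:
    window: List[Sequence[float]] = []
    for p in points:
        if any(dominates(q, p) for q in window):
            continue                                  # p is dominated, skip it
        # p enters the window; drop window points that p dominates.
        window = [q for q in window if not dominates(p, q)] + [p]
    return window

# Example: (price, distance) records, lower is better in both dimensions.
print(skyline([(1, 9), (2, 8), (4, 4), (9, 1), (5, 6), (7, 3), (8, 8)]))
```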
The security of complex network systems, such as communication systems and power grids, has attracted increasing attention due to the threat of cascading failures. Many existing studies have investigated the robustness of complex networks against cascading failure from an attacker's perspective. However, most of them focus on synchronous attacks, in which the network components under attack are removed simultaneously rather than sequentially. Recent pioneering work on sequential attacks designs attack strategies based on simple heuristics such as degree and load information, which may ignore the functional roles of nodes inside the network. In this paper, we exploit a reinforcement learning-based sequential attack method to investigate the impact of different nodes on cascading failure. In addition, a candidate pool strategy is proposed to improve the performance of the reinforcement learning method. Simulation results on Barabási-Albert scale-free networks and real-world networks demonstrate the superiority and effectiveness of the proposed method.
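Below is a hedged sketch of a reinforcement-learning sequential attack with a candidate pool, in the spirit of this abstract. Assumptions: the reward is the drop in the largest connected component after each removal (a crude proxy for cascading failure), the state is only the attack step index, and the candidate pool is the set of top-degree nodes; none of this reproduces the paper's cascading-failure model or RL formulation.

```python
# Hedged sketch: epsilon-greedy Q-learning over which candidate node to remove
# next; reward is the reduction of the giant component (toy damage proxy).
import random
import networkx as nx

def giant_component_size(g):
    return max((len(c) for c in nx.connected_components(g)), default=0)

def run_episode(g0, pool, Q, budget, eps=0.2, alpha=0.1, gamma=0.9):
    g = g0.copy()
    removed, total = set(), 0.0
    for step in range(budget):
        actions = [n for n in pool if n not in removed]
        if random.random() < eps:
            a = random.choice(actions)                    # explore
        else:
            a = max(actions, key=lambda n: Q[step][n])    # exploit
        before = giant_component_size(g)
        g.remove_node(a)
        reward = before - giant_component_size(g)
        nxt = max((Q[step + 1][n] for n in actions if n != a), default=0.0)
        Q[step][a] += alpha * (reward + gamma * nxt - Q[step][a])
        removed.add(a)
        total += reward
    return total

random.seed(0)
g0 = nx.barabasi_albert_graph(200, 2, seed=0)
budget = 5
pool = sorted(g0.nodes, key=g0.degree, reverse=True)[:20]   # candidate pool
Q = [{n: 0.0 for n in pool} for _ in range(budget + 1)]
for episode in range(300):
    run_episode(g0, pool, Q, budget)
best = [max(Q[s], key=Q[s].get) for s in range(budget)]
print("learned removal preferences per step:", best)
```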
In this paper we propose a security- and cost-aware scheduling heuristic for real-time workflow jobs that process Internet of Things (IoT) data with various security requirements. The environment under study is a four-tier architecture consisting of IoT, mist, fog, and cloud layers, where the resources in the mist, fog, and cloud tiers are heterogeneous. The proposed scheduling approach is compared with a baseline strategy that is security aware but not cost aware. The performance of both heuristics is evaluated via simulation under different values of the security level probabilities for the initial IoT input data of the entry tasks of the workflow jobs.
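A hedged illustration of a security- and cost-aware placement rule of the kind discussed above: among the heterogeneous mist/fog/cloud resources whose security level meets a task's requirement and whose estimated finish time meets its deadline, pick the cheapest. The data model, fields, and numbers are toy assumptions, not the paper's heuristic.

```python
# Toy security- and cost-aware placement rule (assumed data model).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Resource:
    name: str
    tier: str            # "mist", "fog", or "cloud"
    security_level: int  # higher = stronger protection
    speed: float         # work units per second
    cost_per_sec: float

@dataclass
class Task:
    name: str
    work: float          # work units
    min_security: int
    deadline: float      # seconds

def place(task: Task, resources: List[Resource]) -> Optional[Resource]:
    # Keep only resources that satisfy the security requirement and deadline.
    feasible = [
        r for r in resources
        if r.security_level >= task.min_security
        and task.work / r.speed <= task.deadline
    ]
    # Cost-aware choice: cheapest total execution cost among feasible resources.
    return min(feasible, key=lambda r: (task.work / r.speed) * r.cost_per_sec,
               default=None)

resources = [
    Resource("mist-1", "mist", 1, 1.0, 0.01),
    Resource("fog-1", "fog", 2, 4.0, 0.05),
    Resource("cloud-1", "cloud", 3, 16.0, 0.25),
]
task = Task("sensor-aggregation", work=8.0, min_security=2, deadline=5.0)
print(place(task, resources))
```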
Although many digital signature algorithms are available nowadays, the speed of signing and/or verifying a digital signature is crucial for many applications. Some algorithms are fast at signing but slow at verification, while others are the reverse, so research toward an algorithm that is fast at both signing and verification is essential. The traditional GOST algorithm has the shortest signing time but the longest verification time compared with other DSA algorithms; hence, an improvement in its signature verification time is sought in this work. A modified GOST digital signature algorithm variant is developed to improve the signature verification speed by reducing the computational complexity while retaining the efficient signing speed. The signature verification of this variant executed 1.5 times faster than that of the original algorithm, with all parameter values (such as the public and private keys and the random numbers) kept the same for both the signing and verification processes. Hence, this algorithm variant is suitable for applications that require short times for both the signing and verification processes. Keywords: Discrete Algorithms, Authentication, Digital Signature Algorithms (DSA), GOST, Data Integrity.
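For context, here is a toy sketch of the classical GOST R 34.10-94 signing and verification equations with tiny illustrative parameters (q = 11, p = 23, generator a of order q); the fixed keys and hash value are assumptions for the example. This shows the baseline scheme only; the paper's modified variant with reduced verification cost is not reproduced here.

```python
# Toy GOST R 34.10-94 sign/verify with tiny parameters (educational only).
import random

# Domain parameters: q | p - 1 and a has multiplicative order q modulo p.
p, q, a = 23, 11, 2
x = 7                    # private key (toy value)
y = pow(a, x, p)         # public key

def sign(h: int, x: int) -> tuple:
    """Sign a hash value h (0 < h < q)."""
    while True:
        k = random.randrange(1, q)
        r = pow(a, k, p) % q
        if r == 0:
            continue
        s = (x * r + k * h) % q
        if s != 0:
            return r, s

def verify(h: int, r: int, s: int, y: int) -> bool:
    if not (0 < r < q and 0 < s < q):
        return False
    v = pow(h, q - 2, q)             # h^{-1} mod q (q is prime)
    z1 = (s * v) % q
    z2 = ((q - r) * v) % q
    u = (pow(a, z1, p) * pow(y, z2, p)) % p % q
    return u == r

h = 5                     # stand-in for a message hash reduced mod q
r, s = sign(h, x)
print("signature:", (r, s), "valid:", verify(h, r, s, y))
```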
In cyberspace, a digital signature is a mathematical technique that plays a significant role in validating the authenticity of digital messages, emails, or documents. The digital signature mechanism allows the recipient to trust that the received message indeed comes from the stated sender and was not altered in transit, thereby providing a solution to the problems of tampering and impersonation in digital communications. In real life, it is the equivalent of a handwritten signature or stamped seal, but it offers greater security. This paper proposes a scheme that enables users to digitally sign their communications by validating their identity through their mobile devices, utilizing the user's ambient Wi-Fi-enabled devices. The proposed scheme relies on something the user possesses (i.e., Wi-Fi-enabled devices) and something in the user's environment (i.e., ambient Wi-Fi access points) where the validation process is implemented, in a way that requires no effort from users and removes the "weak link" from the validation process. The proposed scheme was evaluated experimentally.
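Purely as an illustration of the ambient-environment idea (all names, identifiers, and thresholds below are assumptions, not the paper's protocol), one could treat the set of ambient Wi-Fi access points visible to the user's device as environmental evidence and compare it, via set overlap, against a reference observation.

```python
# Illustrative environmental check (assumed, not the paper's scheme): compare
# the BSSIDs seen by the user's device with a reference scan via Jaccard overlap.
from typing import Set

def jaccard(a: Set[str], b: Set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def environment_matches(device_scan: Set[str],
                        reference_scan: Set[str],
                        threshold: float = 0.6) -> bool:
    """Accept if enough of the same ambient APs are visible in both scans."""
    return jaccard(device_scan, reference_scan) >= threshold

device_scan = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", "aa:bb:cc:00:00:03"}
reference_scan = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02",
                  "aa:bb:cc:00:00:03", "aa:bb:cc:00:00:09"}
print(environment_matches(device_scan, reference_scan))
```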