Bibliography
Cloud security comprises the policies, procedures, controls, and technologies that work together to guard data and infrastructure. These security measures are designed to protect cloud data, support regulatory compliance, and preserve customers' privacy, as well as to set authentication rules for individual users and devices. Partition-based handling combined with an encryption mechanism can provide fine-grained access control and secure data sharing for data users in cloud computing. Graph partition problems fall under the category of NP-hard problems, so solutions are generally derived using heuristics and approximation algorithms. Partitioning strategies are often used in bi-criteria approximation or resource-augmentation approaches, with a common extension to hypergraphs that can address the storage hierarchy.
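As an illustration of the kind of heuristic typically used for these NP-hard partition problems, the following is a minimal sketch of a greedy swap-refinement heuristic for balanced graph bisection. The graph encoding, function names, and example graph are illustrative assumptions, not taken from any of the cited works.

```python
# Minimal sketch: a greedy swap-refinement heuristic for balanced graph
# bisection, the kind of approximation used for NP-hard partition problems.
# Graph representation and names here are illustrative only.

def cut_size(adj, part):
    """Count edges crossing the two parts (adj: {node: set(neighbors)}, symmetric)."""
    return sum(1 for u in adj for v in adj[u] if part[u] != part[v]) // 2

def greedy_bisection(adj, iterations=10):
    nodes = sorted(adj)
    # Start from an arbitrary balanced split.
    part = {v: (i % 2) for i, v in enumerate(nodes)}
    best = cut_size(adj, part)
    for _ in range(iterations):
        improved = False
        for u in nodes:
            for v in nodes:
                if part[u] == part[v]:
                    continue
                # Tentatively swap u and v; keep the swap only if the cut shrinks.
                part[u], part[v] = part[v], part[u]
                new_cut = cut_size(adj, part)
                if new_cut < best:
                    best, improved = new_cut, True
                else:
                    part[u], part[v] = part[v], part[u]  # revert
        if not improved:
            break
    return part, best

if __name__ == "__main__":
    graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
    print(greedy_bisection(graph))
```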
This paper studies secure computation offloading for multi-user, multi-server mobile edge computing (MEC)-enabled Internet of Things (IoT). A novel jamming signal scheme is designed to interfere with the decoding process at the eavesdropper (Eve) without impairing the uplink task offloading from users to access points (APs). Considering offloading latency and secrecy constraints, this paper studies the joint optimization of communication and computation resource allocation, as well as the partial offloading ratio, to maximize the total secrecy offloading data (TSOD) over the whole offloading process. The considered problem is nonconvex, and we resort to the block coordinate descent (BCD) method to decompose it into three subproblems. An efficient iterative algorithm is proposed to reach a locally optimal solution to the power allocation subproblem, and the optimal computation resource allocation and offloading ratio are then derived in closed form. Simulation results demonstrate that the proposed algorithm converges quickly and achieves higher TSOD than heuristic baselines.
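The following is a minimal sketch of the block coordinate descent structure described above: the variables are split into three blocks (transmit powers, edge computation shares, and partial offloading ratios), and each block is updated while the others are fixed. The surrogate objective and per-block update rules are toy placeholders, not the paper's actual TSOD formulation.

```python
# BCD skeleton over three blocks: power p, computation shares f, offloading ratios rho.
# Objective and updates are illustrative stand-ins for the paper's subproblem solvers.
import numpy as np

def tsod_surrogate(p, f, rho):
    """Toy surrogate of 'total secrecy offloading data'."""
    return float(np.sum(rho * np.log1p(p) - (rho ** 2) / (f + 1e-9)))

def bcd_offloading(n_users, p_max=1.0, f_total=4.0, iters=50, tol=1e-6):
    p = np.full(n_users, p_max / 2)           # transmit powers
    f = np.full(n_users, f_total / n_users)   # edge CPU shares
    rho = np.full(n_users, 0.5)               # partial offloading ratios
    prev = -np.inf
    for _ in range(iters):
        # Block 1: power allocation via a simple projected gradient step.
        grad_p = rho / (1.0 + p)
        p = np.clip(p + 0.1 * grad_p, 0.0, p_max)
        # Block 2: computation shares proportional to offloaded load,
        # rescaled to respect the total capacity constraint.
        f = np.maximum(rho, 1e-3)
        f *= f_total / f.sum()
        # Block 3: offloading ratio, closed form for this toy objective
        # (maximize rho*log1p(p) - rho^2/f over rho in [0, 1]).
        rho = np.clip(0.5 * f * np.log1p(p), 0.0, 1.0)
        cur = tsod_surrogate(p, f, rho)
        if abs(cur - prev) < tol:
            break
        prev = cur
    return p, f, rho, cur

if __name__ == "__main__":
    print(bcd_offloading(n_users=4))
```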
Concurrent programs often produce buggy results due to unexpected interactions among threads. Detecting these concurrency bugs is costly because they usually appear only under specific execution traces, so how to efficiently explore different thread schedules to expose them is an important research topic. Many techniques have been proposed, including lightweight techniques such as adaptive randomized scheduling (ARS) and heavyweight techniques such as maximal causality reduction (MCR). Compared to heavyweight techniques, ARS explores different schedules efficiently and achieves state-of-the-art performance. However, it tends to explore large numbers of redundant thread schedules, which reduces efficiency, and it suffers from a “cold start” issue when little information is available to guide the distance calculation at the beginning of the exploration. In this work, we propose a Heuristic-Enhanced Adaptive Randomized Scheduling (HARS) algorithm, which improves ARS by guiding bug detection with novel distance metrics and heuristics drawn from existing research findings. Compared with adaptive randomized scheduling, HARS can more effectively distinguish traces that may contain concurrency bugs and avoid redundant schedules, thus exploring diverse thread schedules effectively. We evaluate the approach on 45 concurrent Java programs. The results show that our algorithm performs more stably in terms of effectiveness and efficiency in detecting concurrency bugs. Notably, HARS detects hard-to-expose bugs more effectively, where buggy traces are rare or the bug-triggering conditions are tricky.
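The following is a minimal sketch of distance-guided schedule selection in the spirit of adaptive randomized scheduling: among a pool of randomly generated candidate schedules, pick the one farthest (by a set-based distance) from the schedules already explored. The schedule encoding and the Jaccard-style distance are illustrative assumptions, not the HARS metrics from the paper.

```python
# Distance-guided schedule selection sketch; encoding and metric are illustrative.
import random

def schedule_distance(s1, s2):
    """Jaccard distance between the sets of (step, thread) scheduling decisions."""
    a, b = set(s1), set(s2)
    return 1.0 - len(a & b) / max(len(a | b), 1)

def pick_next_schedule(explored, candidates):
    if not explored:                       # "cold start": nothing to compare against yet
        return random.choice(candidates)
    # Maximize the minimum distance to every already-explored schedule.
    return max(candidates,
               key=lambda c: min(schedule_distance(c, e) for e in explored))

def random_schedule(threads, steps):
    """A schedule is encoded as the (step, thread) pairs chosen by the scheduler."""
    return tuple((i, random.choice(threads)) for i in range(steps))

if __name__ == "__main__":
    random.seed(0)
    explored = []
    for _ in range(5):
        candidates = [random_schedule(["t1", "t2", "t3"], 6) for _ in range(8)]
        nxt = pick_next_schedule(explored, candidates)
        explored.append(nxt)
        print(nxt)
```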
As millions of IoT devices are interconnected for better communication and computation, compromising even a single device opens a gateway for an adversary to access the network, potentially leading to an epidemic. It is therefore pivotal to detect any malicious activity on a device and mitigate the threat. Among the many feasible security threats, malware (malicious applications) poses a serious risk to modern IoT networks. A wide range of malware can replicate itself and propagate through the network via the underlying connectivity of IoT networks, making a malware epidemic all but inevitable. Several techniques exist, ranging from heuristics to game-theory-based approaches, to model malware propagation and minimize its impact on the overall network. However, state-of-the-art game-theory-based approaches focus solely on either network performance or malware confinement, and do not optimize both simultaneously. In this paper, we propose a throughput-aware, game-theory-based, end-to-end IoT network security framework that confines the malware epidemic while preserving overall network performance. We propose a two-player game with one player being the attacker and the other being the defender. Each player has three strategies, and each strategy yields a certain gain to that player with an associated cost. A tailored min-max algorithm is introduced to solve the game. We evaluate our strategy on a 500-node network for different classes of malware and compare it with existing state-of-the-art heuristic and game-theory-based solutions.
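To illustrate the game structure described above (three strategies per player, each with a gain and a cost), the following is a minimal sketch of a pure-strategy min-max computation over a 3x3 payoff matrix. The strategy names and payoff values are made up, and the paper's tailored algorithm is not reproduced here.

```python
# Pure-strategy min-max (defender maximin) over an illustrative 3x3 payoff matrix.

ATTACKER = ["spread_fast", "spread_stealthy", "idle"]
DEFENDER = ["patch", "isolate", "monitor"]

# PAYOFF[i][j] = defender's net utility (throughput preserved minus defense cost)
# when the attacker plays strategy i and the defender plays strategy j.
PAYOFF = [
    [2.0, 4.0, 1.0],
    [3.0, 2.5, 0.5],
    [5.0, 4.5, 5.5],
]

def defender_maximin(payoff):
    """Defender picks the column maximizing the worst case over attacker rows."""
    best_j, best_val = None, float("-inf")
    for j in range(len(payoff[0])):
        worst = min(row[j] for row in payoff)
        if worst > best_val:
            best_j, best_val = j, worst
    return best_j, best_val

if __name__ == "__main__":
    j, v = defender_maximin(PAYOFF)
    print(f"defender plays {DEFENDER[j]} with guaranteed utility {v}")
```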
Skyline computation is an increasingly popular query, with broad applicability to many domains. Given the trend to outsource databases, and due to the sensitive nature of the data (e.g., in healthcare), it is essential to evaluate skylines on encrypted datasets. Research efforts acknowledged the importance of secure skyline computation, but existing solutions suffer from several shortcomings: (i) they only provide ad-hoc security; (ii) they are prohibitively expensive; or (iii) they rely on assumptions such as the presence of multiple non-colluding parties in the protocol. Inspired by solutions for secure nearest-neighbors, we conjecture that a secure and efficient way to compute skylines is through result materialization. However, materialization is much more challenging for skyline queries due to large space requirements. We show that pre-computing skyline results while minimizing storage overhead is NP-hard, and we provide heuristics that solve the problem more efficiently, while maintaining storage at reasonable levels. Our algorithms are novel and also applicable to regular skyline computation, but we focus on the encrypted setting where materialization reduces the response time of skyline queries from hours to seconds. Extensive experiments show that we clearly outperform existing work in terms of performance, and our security analysis proves that we obtain a small (and quantifiable) data leakage.
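For readers unfamiliar with skyline queries, the following is a minimal plaintext sketch of block-nested-loop style skyline computation: a point is in the skyline if no other point dominates it, i.e. is no worse in every dimension and strictly better in at least one (here, "better" means smaller). This only illustrates what gets materialized; the encrypted evaluation protocol itself is not shown, and the data values are illustrative.

```python
# Plaintext skyline (Pareto-minima) computation via pairwise dominance checks.

def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    result = []
    for p in points:
        if any(dominates(q, p) for q in points if q != p):
            continue
        result.append(p)
    return result

if __name__ == "__main__":
    data = [(1, 9), (2, 8), (4, 4), (9, 1), (5, 6), (3, 7)]
    print(skyline(data))  # [(1, 9), (2, 8), (4, 4), (9, 1), (3, 7)]; (5, 6) is dominated by (4, 4)
```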
Deep learning has made remarkable achievements in various domains. Active learning, which aims to reduce the labeling budget for training a machine-learning model, is especially useful for deep learning tasks that demand large numbers of labeled samples. Unfortunately, our empirical study finds that many active learning heuristics are not effective when applied to deep learning models in batch settings. To tackle these limitations, we propose a density-weighted, diversity-based query strategy (DWDS), which makes use of the geometry of the samples. Within a limited labeling budget, DWDS enhances model performance by querying labels for the new training samples with maximum informativeness and representativeness. Furthermore, we propose a beam-search-based method to obtain a good approximation to the optimal set of such samples. Our experiments show that DWDS outperforms existing algorithms on deep learning tasks.
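The following is a minimal sketch of a density-weighted, diversity-aware batch query strategy in the spirit described above: each unlabeled sample is scored by uncertainty times density, and a batch is selected greedily so that chosen samples are far from one another. The scoring functions and the greedy loop are illustrative stand-ins, not the actual DWDS criterion or its beam search.

```python
# Density-weighted, diversity-aware batch selection sketch (illustrative scoring).
import numpy as np

def select_batch(probs, feats, batch_size):
    """probs: (n, c) predicted class probabilities; feats: (n, d) sample embeddings."""
    # Informativeness: predictive entropy.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    # Density: mean cosine similarity to the rest of the unlabeled pool.
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = normed @ normed.T
    density = sim.mean(axis=1)
    base = entropy * density
    selected = []
    for _ in range(batch_size):
        # Diversity: penalize closeness to already-selected samples.
        redundancy = sim[:, selected].max(axis=1) if selected else np.zeros(len(feats))
        score = base - redundancy
        score[selected] = -np.inf          # never pick the same sample twice
        selected.append(int(np.argmax(score)))
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(100, 5))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    feats = rng.normal(size=(100, 16))
    print(select_batch(probs, feats, batch_size=8))
```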
The security of complex networked systems, such as communication systems and power grids, has attracted increasing attention due to cascading failure threats. Many existing studies have investigated the robustness of complex networks against cascading failures from an attacker's perspective. However, most of them focus on synchronous attacks, in which the network components under attack are removed simultaneously rather than sequentially. Recent pioneering work on sequential attacks designs attack strategies based on simple heuristics such as degree and load information, which may ignore the internal functions of nodes. In this paper, we exploit a reinforcement learning-based sequential attack method to investigate the impact of different nodes on cascading failure. In addition, a candidate pool strategy is proposed to improve the performance of the reinforcement learning method. Simulation results on Barabási-Albert scale-free networks and real-world networks demonstrate the superiority and effectiveness of the proposed method.
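The following is a minimal sketch of the candidate-pool idea for sequential attacks: instead of letting the reinforcement-learning agent choose among all nodes, the action set is restricted to a small pool of high-degree candidates, and tabular Q-learning decides which candidate to remove next. The toy cascade rule, reward definition, and example network are illustrative assumptions, not the paper's load-based failure model or its learning architecture.

```python
# Candidate-pool sequential attack with tabular Q-learning (toy cascade model).
import random

def cascade(adj, removed):
    """A node fails if it is removed or if at least half of its neighbors have failed."""
    failed = set(removed)
    changed = True
    while changed:
        changed = False
        for v, nbrs in adj.items():
            if v not in failed and nbrs and sum(n in failed for n in nbrs) >= len(nbrs) / 2:
                failed.add(v)
                changed = True
    return failed

def q_learning_attack(adj, budget=2, pool_size=4, episodes=500, eps=0.2, alpha=0.5):
    # Candidate pool: the highest-degree nodes only, shrinking the action space.
    pool = sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:pool_size]
    Q = {}
    for _ in range(episodes):
        removed = frozenset()
        for _ in range(budget):
            actions = [a for a in pool if a not in removed]
            state = removed
            a = random.choice(actions) if random.random() < eps else \
                max(actions, key=lambda x: Q.get((state, x), 0.0))
            nxt = frozenset(removed | {a})
            # Reward: marginal increase in failed nodes caused by this removal.
            reward = len(cascade(adj, nxt)) - len(cascade(adj, removed))
            future = max((Q.get((nxt, x), 0.0) for x in pool if x not in nxt), default=0.0)
            Q[(state, a)] = (1 - alpha) * Q.get((state, a), 0.0) + alpha * (reward + future)
            removed = nxt
    # Greedy rollout with the learned Q-values.
    removed = frozenset()
    for _ in range(budget):
        actions = [a for a in pool if a not in removed]
        removed = frozenset(removed | {max(actions, key=lambda x: Q.get((removed, x), 0.0))})
    return removed, len(cascade(adj, removed))

if __name__ == "__main__":
    random.seed(1)
    net = {0: {1, 2, 3, 4}, 1: {0, 2}, 2: {0, 1}, 3: {0, 4, 5}, 4: {0, 3}, 5: {3}}
    print(q_learning_attack(net))
```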
The design of attacks on cyber physical systems is critical for assessing CPS resilience at design time and run time, and for generating rich datasets from testbeds for research. Attacks against cyber physical systems are distinguished from IT attacks in that their main objective is to harm the physical system; both cyber and physical system knowledge are therefore needed to design such attacks. Current practice for generating attacks either focuses on the cyber part of the system using the existing body of IT cyber security knowledge, or uses heuristics to inject attacks that could potentially harm the physical process. In this paper, we present a systematic approach to automatically generate integrity attacks from the CPS safety and control specifications, without knowledge of the physical system or its dynamics. The generated attacks violate the system's operational and safety requirements and hence present a genuine test of system resilience. We present an algorithm to automate malware payload development. Several examples are given throughout the paper to illustrate the proposed approach.
Today's companies increasingly rely on the Internet of Everything (IoE) to modernize their operations. The complex characteristics of such systems expose their applications and exchanged data to multiple risks and security breaches, making them targets for cyber attacks. The aim of our work in this paper is to provide a cybersecurity strategy whose objective is to prevent and anticipate threats related to the IoE. An economic approach is used to support decisions that reduce the risks arising when appropriate security levels are not defined. The problem is solved using a combinatorial optimization approach based on a practical knapsack formulation. We opt for a bi-objective model under uncertainty, with a cardinality constraint and a given budget to be respected. To guarantee the robustness of our strategy, we also account for uncertainty by considering all possible threats that cyber attacks on the IoE can generate. Our strategy was implemented and simulated in the MATLAB environment, and its performance was compared to that of the NSGA-II metaheuristic. Our proposed cybersecurity strategy shows a clear improvement in efficiency with respect to the optimized security level and cost parameters.
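The following is a minimal sketch of a bi-objective knapsack formulation of this kind: each candidate security measure has a cost and a risk-reduction value, and we look for selections that respect a budget and a cardinality limit. For a small instance the Pareto front can be enumerated exhaustively; the measure names and numbers are illustrative only and this is not the paper's exact model or its NSGA-II comparison.

```python
# Bi-objective knapsack sketch: minimize cost, maximize risk reduction,
# subject to a budget and a cardinality constraint (exhaustive for a tiny instance).
from itertools import combinations

# (name, cost, risk_reduction) -- illustrative values.
MEASURES = [("mfa", 3, 5), ("ids", 7, 8), ("patching", 4, 6),
            ("segmentation", 6, 9), ("training", 2, 3)]

def pareto_selections(measures, budget, max_items):
    feasible = []
    for k in range(1, max_items + 1):                 # cardinality constraint
        for combo in combinations(measures, k):
            cost = sum(c for _, c, _ in combo)
            gain = sum(g for _, _, g in combo)
            if cost <= budget:                        # budget constraint
                feasible.append((cost, gain, tuple(n for n, _, _ in combo)))
    # Keep only non-dominated solutions (no other solution is cheaper-or-equal
    # AND at-least-as-protective, with strict improvement somewhere).
    front = [s for s in feasible
             if not any(o[0] <= s[0] and o[1] >= s[1] and (o[0] < s[0] or o[1] > s[1])
                        for o in feasible)]
    return sorted(front)

if __name__ == "__main__":
    for cost, gain, names in pareto_selections(MEASURES, budget=12, max_items=3):
        print(cost, gain, names)
```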
In this paper, we propose a security- and cost-aware scheduling heuristic for real-time workflow jobs that process Internet of Things (IoT) data with various security requirements. The environment under study is a four-tier architecture consisting of IoT, mist, fog and cloud layers. The resources in the mist, fog and cloud tiers are considered heterogeneous. The proposed scheduling approach is compared to a baseline strategy which is security aware but not cost aware. The performance evaluation of both heuristics is conducted via simulation, under different values of security level probabilities for the initial IoT input data of the entry tasks of the workflow jobs.
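The following is a minimal sketch of a security- and cost-aware list-scheduling heuristic in this spirit: every task carries a required security level, each heterogeneous resource (mist/fog/cloud) offers a security level, a speed, and a price, and each task goes to the cheapest resource that meets both its security requirement and its deadline. All names and values are illustrative assumptions, not the paper's heuristic.

```python
# Security- and cost-aware list scheduling across heterogeneous tiers (illustrative).

RESOURCES = [
    # (name, tier, security_level, speed, cost_per_work_unit)
    ("m1", "mist", 1, 1.0, 0.5),
    ("f1", "fog", 2, 2.0, 1.0),
    ("c1", "cloud", 3, 4.0, 2.0),
]

def schedule(tasks, resources):
    """tasks: list of (task_id, work_units, required_security, deadline)."""
    finish = {name: 0.0 for name, *_ in resources}    # per-resource ready time
    plan = []
    for tid, work, req_sec, deadline in tasks:
        candidates = []
        for name, tier, sec, speed, price in resources:
            if sec < req_sec:
                continue                               # security requirement not met
            end = finish[name] + work / speed
            if end <= deadline:
                candidates.append((price * work, end, name))
        if not candidates:
            plan.append((tid, None))                   # infeasible under this heuristic
            continue
        cost, end, name = min(candidates)              # cheapest feasible assignment
        finish[name] = end
        plan.append((tid, name))
    return plan

if __name__ == "__main__":
    tasks = [("t1", 4, 1, 5.0), ("t2", 8, 2, 6.0), ("t3", 2, 3, 3.0)]
    print(schedule(tasks, RESOURCES))
```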
One important aspect of protecting a Cyber Physical System (CPS) is ensuring that the proper control and measurement signals are propagated within the control loop. The CPS research community has developed a large set of check blocks that can be integrated within the control loop to check signals against various types of attacks (e.g., false data injection attacks). Unfortunately, it is not possible to integrate all of these checks within the control loop, as the overhead introduced when checking signals may violate the delay constraints of the control loop. Moreover, these blocks do not operate entirely in isolation from each other, as dependencies exist among them in terms of their effectiveness at detecting subsets of attacks. Assigning the proper checks therefore becomes a challenging and complex problem, especially in the presence of a rational adversary who can observe the assigned check blocks and optimize her own attack strategies accordingly. This paper tackles the inherent state-action space explosion that arises in securing CPS by developing DeepBLOC (DB), a framework in which deep reinforcement learning algorithms are used to provide optimal or sub-optimal assignments of check blocks to signals. The framework models stochastic games between the adversary and the CPS defender and derives mixed strategies for assigning check blocks that ensure the integrity of the propagated signals while abiding by the real-time constraints dictated by the control loop. Through extensive simulation experiments and a real implementation on a water purification system, we show that DB achieves assignment strategies that outperform other strategies and heuristics.
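To make the underlying assignment problem concrete, the following is a minimal sketch that enumerates the check-block-to-signal assignments that fit within a per-iteration delay budget, which also illustrates how quickly the state-action space grows. The check blocks, delays, signal names, and budget are illustrative assumptions; the deep-reinforcement-learning solution itself is not reproduced here.

```python
# Enumerate delay-feasible check-block assignments to show the combinatorial blow-up.
from itertools import product

CHECK_BLOCKS = {"bad_data": 1.2, "range_check": 0.3, "replay_check": 0.8}  # delay (ms)
SIGNALS = ["sensor_flow", "sensor_pressure", "actuator_valve"]
DELAY_BUDGET = 2.0  # ms available for checks in one control-loop iteration

def feasible_assignments(signals, blocks, budget):
    """Yield assignments {signal: subset of check blocks} whose total delay fits the budget."""
    names = list(blocks)
    block_subsets = [tuple(n for n, bit in zip(names, mask) if bit)
                     for mask in product([0, 1], repeat=len(names))]
    for choice in product(block_subsets, repeat=len(signals)):
        delay = sum(blocks[b] for subset in choice for b in subset)
        if delay <= budget:
            yield dict(zip(signals, choice))

if __name__ == "__main__":
    all_assignments = list(feasible_assignments(SIGNALS, CHECK_BLOCKS, DELAY_BUDGET))
    print(f"{len(all_assignments)} feasible assignments out of "
          f"{(2 ** len(CHECK_BLOCKS)) ** len(SIGNALS)} possible")
```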
Gartner, a large research and advisory company, anticipates that by 2024, 80% of security operations centers (SOCs) will use machine learning (ML) based solutions to enhance their operations (https://www.ciodive.com/news/how-data-science-tools-can-lighten-the-load-for-cybersecurity-teams/572209/). In light of such widespread adoption, it is vital for the research community to identify and address usability concerns. This work presents the results of the first in situ usability assessment of ML-based tools. With the support of the US Navy, we leveraged the National Cyber Range, a large, air-gapped cyber testbed equipped with state-of-the-art network and user emulation capabilities, to study six US Naval SOC analysts' usage of two tools. Our analysis identified several serious usability issues, including multiple violations of established usability heuristics for user interface design. We also discovered that analysts lacked a clear mental model of how these tools generate scores, resulting in mistrust and/or misuse of the tools themselves. Surprisingly, we found no correlation between analysts' level of education or years of experience and their performance with either tool, suggesting that other factors, such as prior background knowledge or personality, play a significant role in ML-based tool usage. Our findings demonstrate that ML-based security tool vendors must put a renewed focus on working with analysts, both experienced and inexperienced, to ensure that their systems are usable and useful in real-world security operations settings.