Biblio
The American National Standards Institute (ANSI) has standardized an access control approach, Next Generation Access Control (NGAC), that enables simultaneous instantiation of multiple access control policies. For large, complex enterprises this is critical to limiting the authorized access of insiders. However, the specifications describe the required access control capabilities but not the related algorithms. While appropriate, this leaves open the important question of whether NGAC is scalable. Existing reference implementations, with cubic complexity, suggest that it is not. For example, the primary NGAC reference implementation took several minutes simply to display the set of files accessible to a user on a moderately sized system. To solve this problem, we provide an efficient access control decision algorithm, reducing the overall complexity from cubic to linear. Our other major contribution is a novel mechanism for administrators and users to review allowed access rights. We provide an interface that appears to be a simple file directory hierarchy but is in fact an automatically generated structure abstracted from the underlying access control graph, and it works with any set of simultaneously instantiated access control policies. Our work thus provides the first efficient implementation of NGAC while enabling user privilege review through a novel visualization approach. These capabilities help limit insider access to information (and thereby limit information leakage) by enabling the efficient simultaneous instantiation of multiple access control policies.
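A minimal sketch of the linear-time decision idea described above: treat the policy as a directed graph and answer a (user, operation, object) query with single traversals that visit each node and edge at most once. The graph structure, attribute names, and association format below are illustrative assumptions, not the NGAC reference data model or the paper's algorithm.

```python
from collections import deque

# Illustrative policy graph (not the NGAC reference data model):
# 'assignments' are containment edges (user -> user attributes, object -> object attributes);
# 'associations' grant a set of operations from a user attribute to an object attribute.
assignments = {
    "alice": ["engineers"],
    "engineers": ["staff"],
    "design.doc": ["project-x-files"],
    "project-x-files": ["company-files"],
}
associations = [
    ("staff", {"read"}, "company-files"),
    ("engineers", {"read", "write"}, "project-x-files"),
]

def reachable(start):
    """Collect every node reachable via assignment edges (single BFS, O(V + E))."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for parent in assignments.get(node, []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

def decide(user, operation, obj):
    """Grant iff some association links a user attribute of `user`
    to an object attribute containing `obj` with the requested operation."""
    user_attrs = reachable(user)
    obj_attrs = reachable(obj)
    return any(ua in user_attrs and operation in ops and oa in obj_attrs
               for ua, ops, oa in associations)

print(decide("alice", "write", "design.doc"))   # True
print(decide("alice", "delete", "design.doc"))  # False
```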
Developers and academics are constantly seeking to increase the speed and security of operating systems. Unfortunately, an increase in either one often comes at the cost of the other. In this paper, we present an operating system design that challenges a long-held tenet of multicore operating systems in order to produce an alternative architecture that has the potential to deliver both increased security and faster performance. In particular, we propose decoupling the operating system kernel from user processes by running each on completely separate processor cores instead of at different privilege levels within shared cores. Without using the hardware's privilege modes, virtualization and virtual memory contexts enforce the security policies necessary to maintain process isolation and protection. Our new kernel design paradigm offers the opportunity to simultaneously increase both performance and security; utilizing the hardware facilities for inter-core communication in place of those for privilege mode switching offers the opportunity for increased system call performance, while the hard separation between user processes and the kernel provides several strong security properties.
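A toy model of the control flow proposed above: instead of trapping into a privileged mode, a user process posts a system-call request on a shared message queue and a dedicated kernel core services it. Threads stand in for cores, and the queue names and request format are invented for illustration only.

```python
import threading, queue

# Toy model: one "kernel core" thread services syscall requests posted by
# "user core" code over a shared queue, replacing privilege-mode traps.
syscall_ring = queue.Queue()          # user -> kernel requests

def kernel_core():
    while True:
        pid, call, args, reply = syscall_ring.get()
        if call == "shutdown":
            break
        if call == "getpid":
            reply.put(pid)
        elif call == "write":
            print(f"[kernel] pid {pid} writes: {args[0]}")
            reply.put(len(args[0]))

def user_syscall(pid, call, *args):
    reply = queue.Queue(maxsize=1)    # per-request reply channel
    syscall_ring.put((pid, call, args, reply))
    return reply.get()                # wait for the kernel core's answer

kernel = threading.Thread(target=kernel_core)
kernel.start()
print(user_syscall(7, "getpid"))          # 7
print(user_syscall(7, "write", "hello"))  # 5
syscall_ring.put((0, "shutdown", (), None))
kernel.join()
```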
Thanks to their anonymity (pseudonymity) and elimination of trusted intermediaries, cryptocurrencies such as Bitcoin have created or stimulated growth in many businesses and communities. Unfortunately, some of these are criminal, e.g., money laundering, illicit marketplaces, and ransomware. Next-generation cryptocurrencies such as Ethereum will include rich scripting languages in support of smart contracts, programs that autonomously intermediate transactions. In this paper, we explore the risk of smart contracts fueling new criminal ecosystems. Specifically, we show how what we call criminal smart contracts (CSCs) can facilitate leakage of confidential information, theft of cryptographic keys, and various real-world crimes (murder, arson, terrorism). We show that CSCs for leakage of secrets (a la Wikileaks) are efficiently realizable in existing scripting languages such as that in Ethereum. We show that CSCs for theft of cryptographic keys can be achieved using primitives, such as Succinct Non-interactive ARguments of Knowledge (SNARKs), that are already expressible in these languages and for which efficient supporting language extensions are anticipated. We show similarly that authenticated data feeds, an emerging feature of smart contract systems, can facilitate CSCs for real-world crimes (e.g., property crimes). Our results highlight the urgency of creating policy and technical safeguards against CSCs in order to realize the promise of smart contracts for beneficial goals.
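To make the secret-leakage scenario concrete, here is a deliberately simplified Python model of a "pay per revealed secret" escrow: a reward is released to whoever reveals a preimage of a committed hash. The real criminal smart contracts studied in the paper must additionally guarantee fair exchange and resist front-running; none of that is modeled here, and all names and values are invented.

```python
import hashlib

class LeakageContract:
    """Toy escrow: the reward is released to whoever reveals the preimage of
    the committed hash (a stand-in for a confidential document). Fair exchange,
    front-running, and transaction fees handled by real contracts are ignored."""

    def __init__(self, commitment_hex, reward):
        self.commitment = commitment_hex
        self.reward = reward
        self.paid_to = None

    def claim(self, claimant, secret: bytes):
        if self.paid_to is None and \
           hashlib.sha256(secret).hexdigest() == self.commitment:
            self.paid_to = claimant
            return self.reward        # funds released
        return 0

secret = b"confidential report"
contract = LeakageContract(hashlib.sha256(secret).hexdigest(), reward=10)
print(contract.claim("mallory", b"wrong guess"))   # 0
print(contract.claim("mallory", secret))           # 10
```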
The automotive industry is experiencing a paradigm shift towards autonomous and connected vehicles. Coupled with the increasing usage and complexity of electrical and/or electronic systems, this introduces new safety and security risks. Encouragingly, the automotive industry has relatively well-known and standardised safety risk management practices, but security risk management is still in its infancy. In order to facilitate the derivation of security requirements and security measures for automotive embedded systems, we propose a specifically tailored risk assessment framework, and we demonstrate its viability with an industry use case. Some of the key features are alignment with existing processes for functional safety, and usability for non-security specialists. The framework begins with a threat analysis to identify the assets, and threats to those assets. The following risk assessment process consists of an estimation of the threat level and of the impact level. This step utilises several existing standards and methodologies, with changes where necessary. Finally, a security level is estimated which is used to formulate high-level security requirements. The strong alignment with existing standards and processes should make this framework well suited to the needs of the automotive industry.
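The final step of such a framework can be pictured as a simple lookup: combine the estimated threat level and impact level through a risk matrix into a security level, then map that level to a high-level requirement. The scales, matrix values, and requirement texts below are placeholders, not those defined by the paper or the standards it references.

```python
# Placeholder scales: threat level and impact level each rated 1 (low) .. 4 (high).
# The matrix and requirement texts are illustrative assumptions only.
RISK_MATRIX = [
    [1, 1, 2, 2],
    [1, 2, 2, 3],
    [2, 2, 3, 4],
    [2, 3, 4, 4],
]
REQUIREMENTS = {
    1: "basic hardening and logging",
    2: "authenticated communication",
    3: "authenticated and encrypted communication, key management",
    4: "hardware-backed keys, redundancy, intrusion detection",
}

def security_level(threat_level: int, impact_level: int) -> int:
    return RISK_MATRIX[threat_level - 1][impact_level - 1]

level = security_level(threat_level=3, impact_level=4)
print(level, "->", REQUIREMENTS[level])
```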
Cascading failure is an intrinsic threat to the power grid that can impose enormous costs on society, and it is very challenging to analyze. The risk of cascading failure depends both on its probability and on the severity of its consequences. Since it is impossible to analyze all possible initiating events, only the critical, high-probability initial events should be identified in order to estimate the risk of cascading failure efficiently. To recognize these critical, high-probability events, this paper establishes a cascading failure analysis model for the power transmission grid based on complex network theory (CNT). A risk coefficient for each transmission line, considering its betweenness, load rate, and changeable outage probability, is proposed to determine the initial events of the power grid. The development tendency of a cascading failure is determined by the network topology, the power flow, and boundary conditions. The indicators of expected percentage of load loss and line cut are used to estimate the risk of cascading failure caused by a given initial malfunction of the power grid. Simulation results on the IEEE RTS-79 test system show that the risk of cascading failure is closely related to the risk coefficient of transmission lines. The risk coefficient can be used for vulnerability assessment and to design specific actions to reduce the topological weakness and the risk of cascading failure of the power grid.
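As a rough illustration of the risk-coefficient idea, the sketch below scores each transmission line by combining edge betweenness, load rate, and outage probability. The multiplicative combination, the toy grid, and all numbers are assumptions; the paper defines its own formula and uses the IEEE RTS-79 system.

```python
import networkx as nx

# Toy 4-bus grid; capacities, flows, and outage probabilities are made-up numbers.
g = nx.Graph()
lines = {  # (from, to): (flow_MW, capacity_MW, outage_probability)
    (1, 2): (80, 100, 0.02),
    (2, 3): (50, 120, 0.01),
    (1, 3): (90, 100, 0.05),
    (3, 4): (40, 80, 0.03),
}
g.add_edges_from(lines)

betweenness = nx.edge_betweenness_centrality(g)

def risk_coefficient(edge):
    flow, capacity, p_outage = lines[edge]
    load_rate = flow / capacity
    b = betweenness.get(edge, betweenness.get((edge[1], edge[0]), 0.0))
    # Assumed combination: topological importance x stress x likelihood of failure.
    return b * load_rate * p_outage

for e in sorted(lines, key=risk_coefficient, reverse=True):
    print(e, round(risk_coefficient(e), 5))
```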
Offloading the computationally expensive Simultaneous Localization and Mapping (SLAM) task of mobile robots has attracted significant attention during the last few years. The lack of powerful on-board compute capability in these energy-constrained mobile robots, together with rapid advances in cloud access technologies, laid the foundation for the development of several Cloud Robotics platforms that enable parallel execution of computationally expensive robotic algorithms, especially those involving multiple robots. In this work, the Cloud Robotics concept is extended to include the current emphasis on computing at network edge nodes along with the Cloud. The requirements and advantages of using edge nodes for computation offloading, compared with a remote cloud or local robot clusters, are discussed with reference to the ETSI 'Mobile-Edge Computing' initiative and the OpenFog Consortium's 'OpenFog Architecture'. A Particle Filter algorithm for SLAM is modified and implemented for offloading in a multi-tier edge+cloud setup. Additionally, a model is proposed for the offloading decision in such a setup, with experiments and results demonstrating the efficacy of the proposed dynamic offloading scheme over static offloading strategies.
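The offloading decision in such a setup can be pictured as comparing estimated completion times across tiers. The simple cost model below (transfer time plus remote compute time) and all constants are invented for illustration and are not the paper's model.

```python
# Estimated completion time = upload time + remote compute time (result download
# ignored). All numbers are illustrative assumptions.
TIERS = {
    # name: (uplink_Mbps, relative_speedup_vs_robot)
    "robot": (None, 1.0),
    "edge":  (50.0, 4.0),
    "cloud": (10.0, 20.0),
}

def completion_time(tier, local_compute_s, payload_mbit):
    uplink, speedup = TIERS[tier]
    transfer = 0.0 if uplink is None else payload_mbit / uplink
    return transfer + local_compute_s / speedup

def choose_tier(local_compute_s, payload_mbit):
    return min(TIERS, key=lambda t: completion_time(t, local_compute_s, payload_mbit))

# Light map update: modest compute, sizable payload -> the edge tier wins here.
print(choose_tier(local_compute_s=2.0, payload_mbit=40))    # edge
# Heavy particle-filter update: the cloud's speedup outweighs the slower link.
print(choose_tier(local_compute_s=60.0, payload_mbit=40))   # cloud
```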
Motivated by the growing complexity and heterogeneity of modern data centers, and the prevalence of commodity component failures, this paper studies the failure-aware placement problem of placing tasks of a parallel job on machines in the data center with the goal of increasing availability. We consider two models of failures: adversarial and probabilistic. In the adversarial model, each node has a weight (higher weight implying higher reliability) and the adversary can remove any subset of nodes of total weight at most a given bound W; our goal is to find a placement that incurs the least disruption against such an adversary. In the probabilistic model, each node has a probability of failure and we need to find a placement that maximizes the probability that at least K out of N tasks survive at any time. For adversarial failures, we first show that (i) the problems are in Σ₂, the second level of the polynomial hierarchy, (ii) a basic variant, that we call RobustFAP, is co-NP-hard, and (iii) an all-or-nothing version of RobustFAP is Σ₂-complete. We then give a PTAS for RobustFAP, a key ingredient of which is a solution that we design for a fractional version of RobustFAP. We then study fractional RobustFAP over hierarchies, denoted HierRobustFAP, and introduce a notion of hierarchical max-min fairness and a novel Generalized Spreading algorithm which is simultaneously optimal for all W. These generalize the classical notion of max-min fairness to work with nodes of differing capacities, differing reliability weights and hierarchical structures. Using randomized rounding, we extend this to give an algorithm for integral HierRobustFAP. For the probabilistic version, we first give an algorithm that achieves an additive ε approximation in the failure probability for the single-level version, called ProbFAP, while giving up a (1 + ε) multiplicative factor in the number of failures. We then extend the result to the hierarchical version, HierProbFAP, achieving an ε additive approximation in failure probability while giving up an (L + ε) multiplicative factor in the number of failures, where L is the number of levels in the hierarchy.
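For intuition about the fairness notion being generalized, here is the classical water-filling computation of a max-min fair spread of tasks over nodes with capacities. It is not the paper's Generalized Spreading algorithm (which also handles reliability weights and hierarchies); it is only the single-level baseline, with made-up capacities.

```python
def max_min_fair(total, capacities):
    """Water-filling: spread `total` units over nodes as evenly as the
    capacities allow (classical max-min fairness, no reliability weights)."""
    alloc = {}
    remaining_nodes = sorted(capacities, key=capacities.get)
    remaining = total
    while remaining_nodes:
        fair_share = remaining / len(remaining_nodes)
        node = remaining_nodes.pop(0)
        alloc[node] = min(capacities[node], fair_share)
        remaining -= alloc[node]
    return alloc

# 10 tasks over three machines with capacities 2, 4, and 8 task slots:
# m1 -> 2, m2 -> 4, m3 -> 4 (the constrained node gets its capacity,
# the rest is shared evenly).
print(max_min_fair(10, {"m1": 2, "m2": 4, "m3": 8}))
```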
Large scale sensor networks are ubiquitous nowadays. An important objective of deploying sensors is to detect anomalies in the monitored system or infrastructure, which allows remedial measures to be taken to prevent failures, inefficiencies, and security breaches. Most existing sensor anomaly detection methods are local, i.e., they do not capture the global dependency structure of the sensors, nor do they perform well in the presence of missing or erroneous data. In this paper, we propose an anomaly detection technique for large scale sensor data that leverages relationships between sensors to improve robustness even when data is missing or erroneous. We develop a probabilistic graphical model-based global outlier detection technique that represents a sensor network as a pairwise Markov Random Field and uses graphical model inference to detect anomalies. We show our model is more robust than local models, and detects anomalies with 90% accuracy even when 50% of sensors are erroneous. We also build a synthetic graphical model generator that preserves statistical properties of a real data set to test our outlier detection technique at scale.
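To show what pairwise-MRF-style inference buys over a purely local check, the toy below labels each sensor as normal or anomalous by iterated conditional modes (ICM) on a tiny chain: a sensor pays a cost for being declared anomalous, and pairs of "normal" neighbors pay a cost proportional to how much their readings disagree. The potentials, thresholds, and inference procedure are invented for illustration; the paper's model and inference may differ.

```python
# Toy pairwise MRF over a 4-sensor chain. Readings should be ~20; sensor "s3"
# is faulty. Labels: 0 = normal, 1 = anomalous. All potentials are assumptions.
readings  = {"s1": 19.8, "s2": 20.3, "s3": 34.0, "s4": 20.1}
neighbors = {"s1": ["s2"], "s2": ["s1", "s3"], "s3": ["s2", "s4"], "s4": ["s3"]}
ANOMALY_COST = 5.0     # unary penalty for declaring a sensor anomalous
DISAGREE_W   = 1.0     # pairwise penalty per unit disagreement between "normal" pairs

def local_energy(node, label, labels):
    unary = ANOMALY_COST if label == 1 else 0.0
    pair = sum(DISAGREE_W * abs(readings[node] - readings[nb])
               for nb in neighbors[node]
               if label == 0 and labels[nb] == 0)   # only normal-normal pairs pay
    return unary + pair

def icm(max_iters=10):
    labels = {s: 0 for s in readings}               # start: everything normal
    for _ in range(max_iters):
        changed = False
        for node in readings:
            best = min((0, 1), key=lambda l: local_energy(node, l, labels))
            if best != labels[node]:
                labels[node], changed = best, True
        if not changed:
            break
    return labels

print(icm())   # expect only s3 flagged: {'s1': 0, 's2': 0, 's3': 1, 's4': 0}
```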
Nowadays, image mining plays a vital role in various areas of our lives, and numerous frameworks based on image mining have been proposed for object recognition, object tracking, remote sensing, and medical image diagnosis. Nevertheless, research on image authentication based on image mining remains limited. This paper therefore presents an efficient combination of frequent pattern mining and digital watermarking that contributes significantly to the authentication of images transmitted via public networks. The proposed framework exploits robust image features to extract the frequent patterns in the image data. The maximal relevant patterns are used to discriminate between textured and smooth blocks within the image, since textured blocks are more appropriate than smooth blocks for embedding the secret data. The experimental results demonstrate the efficiency of the proposed framework in terms of stability and robustness against different kinds of attacks, and show that it effectively preserves image authentication.
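The block-selection step can be pictured as follows: split the image into blocks and treat high-variance blocks as "textured" candidates for embedding. The paper derives this discrimination from maximal frequent patterns; the variance criterion below is a simpler stand-in used only to illustrate the idea, and the threshold is an assumption.

```python
import numpy as np

def textured_blocks(image, block=8, var_threshold=100.0):
    """Return (row, col) indices of blocks whose pixel variance exceeds the
    threshold; high-variance (textured) blocks hide embedded bits better than
    smooth ones. The threshold is an illustrative assumption."""
    h, w = image.shape
    candidates = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            if image[r:r + block, c:c + block].var() > var_threshold:
                candidates.append((r // block, c // block))
    return candidates

rng = np.random.default_rng(0)
img = np.full((32, 32), 128, dtype=np.uint8)          # smooth background
img[8:16, 8:16] = rng.integers(0, 256, (8, 8))        # one noisy (textured) block
print(textured_blocks(img))                           # [(1, 1)]
```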
Consensus algorithms provide strategies to solve problems in a distributed system with the added constraint that data can only be shared between adjacent computing nodes. We find these algorithms in applications for wireless and sensor networks, spectrum sensing for cognitive radio, and even some IoT services. However, consensus-based applications are not resilient to compromised nodes sending falsified data to their neighbors, i.e., they can be the target of Byzantine attacks. Several solutions have been proposed in the literature, inspired by reputation-based systems, outlier detection, or model-based fault detection techniques in process control. We review some of these solutions and propose two mitigation techniques to protect the consensus-based Network Intrusion Detection System in [1]. We analyze several implementation issues such as computational overhead, fine-tuning of the solution parameters, impacts on the convergence of the consensus phase, and the accuracy of the intrusion detection system.
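As one concrete example of the kind of mitigation discussed, the sketch below runs average consensus in which each node discards the most extreme neighbor reports before updating (a trimmed-mean rule). This is a generic Byzantine-mitigation illustration, not the paper's two techniques; the topology, values, and weights are invented.

```python
# Average consensus over a small network; node "n3" is Byzantine and always
# reports an inflated value. Each honest node drops the highest and lowest
# neighbor report before averaging (trimmed mean) to blunt the attack.
neighbors = {"n0": ["n1", "n3", "n4"], "n1": ["n0", "n2", "n4"],
             "n2": ["n1", "n3", "n0"], "n3": ["n2", "n4", "n1"],
             "n4": ["n3", "n0", "n2"]}
values = {"n0": 10.0, "n1": 12.0, "n2": 11.0, "n3": 11.5, "n4": 9.5}
BYZANTINE = {"n3"}

def reported(node):
    return 1000.0 if node in BYZANTINE else values[node]

for _ in range(20):
    new = {}
    for node in values:
        if node in BYZANTINE:
            new[node] = values[node]
            continue
        reports = sorted(reported(nb) for nb in neighbors[node])
        trimmed = reports[1:-1]                      # drop one low and one high
        new[node] = 0.5 * values[node] + 0.5 * sum(trimmed) / len(trimmed)
    values = new

# Honest nodes converge near the honest average; the attacker's 1000.0 is trimmed away.
print({n: round(v, 2) for n, v in values.items() if n not in BYZANTINE})
```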
The emerging trends of volatile distributed energy resources and micro-grids are putting pressure on electrical power system infrastructure. This pressure is motivating the integration of digital technology and advanced power-industry practices to improve the management of distributed electricity generation, transmission, and distribution, thereby creating a web of systems. Unlike legacy power system infrastructure, however, this emerging next-generation smart grid should be context-aware and adaptive to enable the creation of applications needed to enhance grid robustness and efficiency. This paper describes key factors that are driving the architecture of smart grids and describes orchestration middleware needed to make the infrastructure resilient. We use an example of adaptive protection logic in smart grid substations as a use case to motivate the need for context-awareness and adaptivity.
The evolution of the Internet of Things (IoT) requires a well-defined infrastructure of systems that provides services for device abstraction and data management, and also supports the development of applications. Middleware for IoT has been recognized as the system that can provide these services and has become increasingly important for IoT in recent years. The large amount of data that flows into a middleware system demands a security architecture that ensures the protection of all layers of the system, including the communication channels and border APIs used to integrate the applications and IoT devices. However, this security architecture should be based on lightweight approaches since middleware systems are widely applied in constrained environments. Some works have already defined new solutions and adaptations to existing approaches in order to mitigate IoT middleware security problems. In this sense, this article discusses the role of lightweight approaches to the standardization of a security architecture for IoT middleware systems. This article also analyzes concepts and existing works, and presents some important IoT middleware challenges that may be addressed by emerging lightweight security approaches in order to achieve the consolidation of a standard security architecture and the mitigation of the security problems found in IoT middleware systems.
Code reuse attacks based on return oriented programming (ROP) are becoming more and more prevalent every year. They started as a way to circumvent operating systems protections against injected code, but they are now also used as a technique to keep the malicious code hidden from detection and analysis systems. This means that while in the past ROP chains were short and simple (and therefore did not require any dedicated tool for their analysis), we recently started to observe very complex algorithms – such as a complete rootkit – implemented entirely as a sequence of ROP gadgets. In this paper, we present a set of techniques to analyze complex code reuse attacks. First, we identify and discuss the main challenges that complicate the reverse engineering of code implemented using ROP. Second, we propose an emulation-based framework to dissect, reconstruct, and simplify ROP chains. Finally, we test our tool on the most complex example available to date: a ROP rootkit containing four separate chains, two of them dynamically generated at runtime.
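A very reduced Python model of what "dissecting a chain" means: walk the attacker-controlled stack, look up each gadget's effect, and apply it to an emulated register file while recording a trace. A real tool needs a full CPU emulator and lifts gadget effects from the binary; the gadget table and chain here are invented.

```python
# Emulated registers and stack. Each "gadget" is modeled only by its effect on
# the machine state; a real framework would lift these effects from the binary.
def pop_rdi(st):     st["regs"]["rdi"] = st["stack"].pop(0)
def pop_rsi(st):     st["regs"]["rsi"] = st["stack"].pop(0)
def add_rdi_rsi(st): st["regs"]["rdi"] += st["regs"]["rsi"]

gadgets = {
    0x1000: ("pop rdi; ret", pop_rdi),
    0x2000: ("pop rsi; ret", pop_rsi),
    0x3000: ("add rdi, rsi; ret", add_rdi_rsi),
}

def emulate(chain):
    state = {"regs": {"rdi": 0, "rsi": 0}, "stack": list(chain), "trace": []}
    while state["stack"]:
        addr = state["stack"].pop(0)
        asm, effect = gadgets[addr]
        effect(state)
        state["trace"].append((hex(addr), asm, dict(state["regs"])))
    return state

# Chain semantics: rdi <- 4, rsi <- 3, rdi <- rdi + rsi
final = emulate([0x1000, 4, 0x2000, 3, 0x3000])
for step in final["trace"]:
    print(step)
print("rdi =", final["regs"]["rdi"])   # 7
```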
Common activities such as uploading and retrieving data, buying and selling goods, and donating or transferring money are performed in daily life through the Internet. For every user who accesses the Internet regularly, the highest priority is to ensure that their data is secure. Users are willing to pay large amounts of money to service providers to maintain that security. Malicious users, however, aim to access and misuse others' data, and to do so they employ zombie bots. Bots are not the only threat: a legitimate, authorized user can also be impersonated to access data illegally, which makes it harder to discriminate between bots and genuine users. To provide security against these threats, we propose a novel RSJ approach to user authentication. The RSJ approach is a secure way of protecting users from both bots and malicious users.
As web applications become more prominent due to the ubiquity of web services, they have become prime targets for attackers. To steal or leak sensitive user data managed by web applications, attackers exploit a wide range of input validation vulnerabilities such as SQL injection, path traversal (or directory traversal), and cross-site scripting (XSS). This paper proposes a technique that can verify input values of Java-based web applications using static bytecode instrumentation and runtime input validation. The technique searches for target methods or object constructors in compiled Java class files and statically inserts bytecode modules. At runtime, the instrumented bytecode modules validate input values of the targets and take countermeasures against malicious inputs. The proposed technique can mitigate input validation vulnerabilities in Java-based web applications without source code. To evaluate the effectiveness of the proposed technique, experiments were carried out with an insecure web application maintained by the OWASP WebGoat Project. The experimental results show that the proposed technique successfully mitigates input validation vulnerabilities such as SQL injection and path traversal.
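The runtime-validation step is illustrated here as a generic Python decorator rather than the paper's Java bytecode instrumentation: a wrapper inspects arguments for path-traversal and SQL-metacharacter patterns before the protected function runs. The patterns and the rejection behavior are simplified assumptions, far cruder than a production validator.

```python
import functools, re

# Simplified detection patterns; real validators are far more thorough.
SUSPICIOUS = [
    re.compile(r"\.\./"),                                  # path traversal
    re.compile(r"(?i)('|--|;|\bunion\b|\bor\b\s+1=1)"),    # crude SQL-injection cues
]

def validate_inputs(func):
    """Reject calls whose string arguments match a suspicious pattern;
    conceptually the check that the instrumented bytecode performs at runtime."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and any(p.search(value) for p in SUSPICIOUS):
                raise ValueError(f"rejected suspicious input: {value!r}")
        return func(*args, **kwargs)
    return wrapper

@validate_inputs
def read_profile(path):
    return f"reading {path}"

print(read_profile("users/alice.json"))
try:
    read_profile("../../etc/passwd")
except ValueError as e:
    print(e)
```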
Users of modern data-processing services such as tax preparation or genomic screening are forced to trust them with data that the users wish to keep secret. Ryoan protects secret data while it is processed by services that the data owner does not trust. Accomplishing this goal in a distributed setting is difficult because the user has no control over the service providers or the computational platform. Confining code to prevent it from leaking secrets is notoriously difficult, but Ryoan benefits from new hardware and a request-oriented data model. Ryoan provides a distributed sandbox, leveraging hardware enclaves (e.g., Intel's software guard extensions (SGX) [15]) to protect sandbox instances from potentially malicious computing platforms. The protected sandbox instances confine untrusted data-processing modules to prevent leakage of the user's input data. Ryoan is designed for a request-oriented data model, where confined modules only process input once and do not persist state about the input. We present the design and prototype implementation of Ryoan and evaluate it on a series of challenging problems including email filtering, health analysis, image processing and machine translation.
Modern applications often operate on data in multiple administrative domains. In this federated setting, participants may not fully trust each other. These distributed applications use transactions as a core mechanism for ensuring reliability and consistency with persistent data. However, the coordination mechanisms needed for transactions can both leak confidential information and allow unauthorized influence. By implementing a simple attack, we show these side channels can be exploited. However, our focus is on preventing such attacks. We explore secure scheduling of atomic, serializable transactions in a federated setting. While we prove that no protocol can guarantee security and liveness in all settings, we establish conditions for sets of transactions that can safely complete under secure scheduling. Based on these conditions, we introduce staged commit, a secure scheduling protocol for federated transactions. This protocol avoids insecure information channels by dividing transactions into distinct stages. We implement a compiler that statically checks code to ensure it meets our conditions, and a system that schedules these transactions using the staged commit protocol. Experiments on this implementation demonstrate that realistic federated transactions can be scheduled securely, atomically, and efficiently.
The premise of this year's SafeConfig Workshop is that existing tools and methods for security assessments are necessary but insufficient for scientifically rigorous testing and evaluation of resilient and active cyber systems. The objective of this workshop is the exploration and discussion of scientifically sound testing regimens that will continuously and dynamically probe, attack, and "test" the various resilient and active technologies. This adaptation and change in focus necessitates at the very least modification, and potentially wholesale new developments, to ensure that resilient- and agile-aware security testing is available to the research community. All testing, validation and experimentation must also be repeatable, reproducible, subject to scientific scrutiny, measurable and meaningful to both researchers and practitioners.
The premise of the SafeConfig'16 Workshop is that existing tools and methods for security assessments are necessary but insufficient for scientifically rigorous testing and evaluation of resilient and active cyber systems. The objective of this workshop is the exploration and discussion of scientifically sound testing regimens that will continuously and dynamically probe, attack, and "test" the various resilient and active technologies. This adaptation and change in focus necessitates at the very least modification, and potentially wholesale new developments, to ensure that resilient- and agile-aware security testing is available to the research community. All testing, validation and experimentation must also be repeatable, reproducible, subject to scientific scrutiny, measurable and meaningful to both researchers and practitioners. The workshop will convene a panel of experts to explore this concept. The topic will be discussed from three different perspectives. One perspective is that of the practitioner: we will explore whether active and resilient technologies are deployed or planned for deployment, and whether the verification methodology affects that decision. The second perspective is that of the research community: we will address the shortcomings of current approaches and the research directions needed to address the practitioner's concerns. The third perspective is that of the policy community: specifically, we will explore the dynamics between technology, verification, and policy.
This paper presents a detection and containment mechanism for fast self-propagating network worm malware. The detection part of the mechanism uses two categories of network host activities to identify worm behaviour in a network. Upon an identified worm activity in a network, a data-link containment system is used to isolate the internal source of infection, and a network level containment system is used to block inbound worm datagrams. The mechanism has been demonstrated using a software prototype. A number of worm experiments have been conducted to evaluate the prototype. The empirical results show the effectiveness of the developed mechanism in containing fast network worm malware at an early stage with almost no false positives.
Tor is a popular network for anonymous communication. The usage and operation of Tor is not well-understood, however, because its privacy goals make common measurement approaches ineffective or risky. We present PrivCount, a system for measuring the Tor network designed with user privacy as a primary goal. PrivCount securely aggregates measurements across Tor relays and over time to produce differentially private outputs. PrivCount improves on prior approaches by enabling flexible exploration of many diverse kinds of Tor measurements while maintaining accuracy and privacy for each. We use PrivCount to perform a measurement study of Tor of sufficient breadth and depth to inform accurate models of Tor users and traffic. Our results indicate that Tor has 710,000 users connected but only 550,000 active at a given time, that Web traffic now constitutes 91% of data bytes on Tor, and that the strictness of relays' connection policies significantly affects the type of application data they forward.
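The core privacy mechanism behind such measurements can be illustrated in a few lines: sum a counter across relays, then add Laplace noise calibrated to the sensitivity and privacy parameter before publishing. PrivCount's actual protocol additionally uses secure aggregation, blinding, and per-statistic noise allocation, none of which is shown here; the counts and parameters below are made up.

```python
import numpy as np

def dp_total(per_relay_counts, sensitivity, epsilon, rng=np.random.default_rng()):
    """Publish a differentially private total: true sum plus Laplace(sensitivity/epsilon)
    noise. Secure aggregation across relays (so no single party sees raw counts)
    is part of PrivCount but omitted in this sketch."""
    true_total = sum(per_relay_counts)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_total + noise

# Hypothetical per-relay counts of web circuits observed in one measurement period.
counts = [1200, 950, 430, 2210]
print(dp_total(counts, sensitivity=10, epsilon=0.3))
```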
It is a fundamental problem to decide how many copies of an unknown mixed quantum state are necessary and sufficient to determine the state. This is the quantum analogue of the problem of estimating a probability distribution given some number of samples. Previously, it was known only that estimating states to error ε in trace distance required O(dr²/ε²) copies for a d-dimensional density matrix of rank r. Here, we give a measurement scheme (POVM) that uses O((dr/δ) ln(d/δ)) copies to estimate ρ to error δ in infidelity. This implies that O((dr/ε²) · ln(d/ε)) copies suffice to achieve error ε in trace distance. For fixed d, our measurement can be implemented on a quantum computer in time polynomial in n. We also use the Holevo bound from quantum information theory to prove a lower bound of Ω(dr/ε²)/log(d/rε) copies needed to achieve error ε in trace distance. This implies a lower bound of Ω(dr/δ)/log(d/rδ) for the estimation error δ in infidelity. These match our upper bounds up to log factors. Our techniques can also show an Ω(r²d/δ) lower bound for measurement strategies in which each copy is measured individually and then the outcomes are classically post-processed to produce an estimate. This matches the known achievability results and proves for the first time that such "product" measurements have asymptotically suboptimal scaling with d and r.
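One reasoning step of the abstract made explicit: the trace-distance bound follows from the infidelity bound via the Fuchs–van de Graaf inequality, a standard relation between the two metrics (not a result specific to this paper).

```latex
% Fuchs-van de Graaf: trace distance is bounded by the square root of infidelity.
\[
  \tfrac{1}{2}\,\lVert \rho - \hat\rho \rVert_1 \;\le\; \sqrt{1 - F(\rho,\hat\rho)} \;=\; \sqrt{\delta}.
\]
% Hence achieving infidelity \delta = \epsilon^2 guarantees trace-distance error \epsilon,
% and substituting into the O((dr/\delta)\ln(d/\delta)) copy bound gives
\[
  O\!\left(\frac{dr}{\epsilon^{2}}\,\ln\frac{d}{\epsilon^{2}}\right)
  \;=\; O\!\left(\frac{dr}{\epsilon^{2}}\,\ln\frac{d}{\epsilon}\right)\ \text{copies.}
\]
```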