Biblio
More and more industrial control systems (e.g., smart grids, oil and gas systems, connected cars and trucks) have the capability to collect and transmit users' data in order to provide services that are tailored to the specific needs of the customers. Such smart industrial control systems fall into the category of Internet of Things (IoT). However, in many cases, the data transmitted by such IoT devices includes sensitive information, and users are faced with an all-or-nothing choice: either they adopt the proposed services and release their private data, or they refrain from using services which could be beneficial but pose significant privacy risks. Unfortunately, encryption alone does not solve the problem, though techniques to counter these privacy risks are emerging (e.g., by using applications that alter, merge or bundle data to ensure they cannot be linked to a particular user). In this paper, we propose a general framework, whereby users can not only specify how their data is managed, but also restrict data collection from their connected devices. More precisely, we propose to use data collection policies to govern the transmission of data from IoT devices, coupled with policies to ensure that once the data has been transmitted, it is stored and shared in a secure way. To achieve this goal, we have designed a framework for secure data collection, storage and management, with logical foundations that enable verification of policy properties.
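To illustrate the kind of device-side data collection policy this abstract describes, here is a minimal, hypothetical Python sketch; the policy fields, names and checks are our own assumptions, not the paper's actual formalism.

```python
from dataclasses import dataclass, field

@dataclass
class CollectionPolicy:
    """User-defined policy governing what a device may transmit (illustrative only)."""
    allowed_categories: set = field(default_factory=set)   # e.g. {"hourly_consumption"}
    allowed_recipients: set = field(default_factory=set)   # e.g. {"grid-operator.example"}
    max_frequency_per_hour: int = 0                        # rate limit on transmissions

def may_transmit(policy: CollectionPolicy, category: str,
                 recipient: str, sent_this_hour: int) -> bool:
    """Device-side check: only release data that the policy explicitly allows."""
    return (category in policy.allowed_categories
            and recipient in policy.allowed_recipients
            and sent_this_hour < policy.max_frequency_per_hour)

# Example: a smart-meter policy that only shares aggregated consumption data.
policy = CollectionPolicy(allowed_categories={"hourly_consumption"},
                          allowed_recipients={"grid-operator.example"},
                          max_frequency_per_hour=1)
print(may_transmit(policy, "hourly_consumption", "grid-operator.example", 0))  # True
print(may_transmit(policy, "appliance_usage", "grid-operator.example", 0))     # False
```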
Emerging technologies such as the Internet of Things (IoT) heavily rely on hardware security for data and privacy protection. However, constantly increasing integration complexity requires automatic synthesis to maintain the pace of innovation. We introduce the first High-Level Synthesis (HLS) flow that produces a security enhanced hardware design to directly prevent Hardware Trojan Horse (HTH) injection by a malicious foundry. Through analysis of entropy loss and criticality decay, the presented algorithms implement highly efficient resource-targeted information dispersion to counter HTH insertion. The flow is evaluated on existing HLS benchmarks and a new IoT-specific benchmark and shows significant resource savings.
Access Control Policies are used to specify who can access which resource under which conditions, and ensuring their correctness is vital to prevent security breaches. As access control policies can be complex and error-prone, we propose an original framework that supports the validation of the implemented policies (specified in the standard XACML notation) against the intended rights, which can be informally expressed, e.g. in tabular form. The framework relies on well-known software testing technology, such as mutation and combinatorial techniques. The paper presents the implemented environment and an application example.
Administration of access control policies is a difficult task, especially in large organizations. We consider the problem of detecting whether administrative actions can result in policies where some security goals are compromised. In particular, we are interested in problems generated by modifications of policies specified as term-rewrite systems, such as adding elements to or deleting elements from the set of possible users or permissions. We propose to use rewriting techniques to compare the behaviors of the modified version and the original version of the policy. More precisely, we use narrowing to compute counter-examples to the equivalence of rewrite-based policies. We prove that our technique provides a sound and complete way to recursively enumerate the set of counter-examples, even when this set is not finite, or when a mistake of the administrator makes one or both systems non-terminating.
We describe the formalization of a correctness proof for a conflict detection algorithm for XACML (eXtensible Access Control Markup Language). XACML is a standardized declarative access control policy language that is increasingly used in industry. In practice it is common for rule sets to grow large, and contain unintended errors, often due to conflicting rules. A conflict occurs in a policy when one rule permits a request and another denies that same request. Such errors can lead to serious risks involving both allowing access to an unauthorized user as well as denying access to someone who needs it. Removing conflicts is thus an important aspect of debugging policies, and the use of a verified algorithm provides the highest assurance in a domain where security is important. In this paper, we focus on several complex XACML constructs, including time ranges and integer intervals, as well as ways to combine any number of functions using the boolean operators and, or, and not. The latter are the most complex, and add significant expressive power to the language. We propose an algorithm to find conflicts and then use the Coq Proof Assistant to prove the algorithm correct. We develop a library of tactics to help automate the proof.
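To illustrate the notion of conflict used above (one rule permits a request that another denies), here is a small, unverified Python sketch restricted to rules whose targets are integer intervals over a single attribute; the paper's verified Coq algorithm additionally handles time ranges and boolean combinations of functions.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """Simplified rule: an effect plus an inclusive integer interval on one attribute."""
    effect: str          # "Permit" or "Deny"
    lo: int
    hi: int

def intervals_overlap(a: Rule, b: Rule) -> bool:
    """Two interval targets overlap iff neither ends before the other starts."""
    return a.lo <= b.hi and b.lo <= a.hi

def find_conflicts(rules):
    """Report rule pairs where one permits and the other denies an overlapping request set."""
    conflicts = []
    for i, r1 in enumerate(rules):
        for r2 in rules[i + 1:]:
            if r1.effect != r2.effect and intervals_overlap(r1, r2):
                conflicts.append((r1, r2))
    return conflicts

rules = [Rule("Permit", 9, 17),   # permit requests with attribute value 9..17
         Rule("Deny", 16, 23)]    # deny requests with attribute value 16..23
print(find_conflicts(rules))      # the 16..17 overlap is a conflict
```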
In this paper we describe a system that allows the real time creation of firewall rules in response to geographic and political changes in the control-plane. This allows an organization to mitigate data exfiltration threats by analyzing Border Gateway Protocol (BGP) updates and blocking packets from being routed through problematic jurisdictions. By inspecting the autonomous system paths and referencing external data sources about the autonomous systems, a BGP participant can infer the countries that traffic to a particular destination address will traverse. Based on this information, an organization can then define constraints on its egress traffic to prevent sensitive data from being sent via an untrusted region. In light of the many route leaks and BGP hijacks that occur today, this offers a new option to organizations willing to accept reduced availability over the risk to confidentiality. Similar to firewalls that allow organizations to block traffic originating from specific countries, our approach allows blocking outbound traffic from transiting specific jurisdictions. To illustrate the efficacy of this approach, we provide an analysis of paths to various financial services IP addresses over the course of a month from a single BGP vantage point that quantifies the frequency of path alterations resulting in the traversal of new countries. We conclude with an argument for the utility of country-based egress policies that do not require the cooperation of upstream providers.
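A minimal sketch of the inference step described above: given an AS path from a BGP update and an ASN-to-country mapping, decide whether egress traffic to the announced prefix should be blocked. The ASN numbers, country codes and table contents are invented for illustration; a real deployment would use external data sources such as RIR delegation files.

```python
# Hypothetical ASN-to-country table; in practice this comes from external data
# sources about the autonomous systems (e.g. RIR delegation files).
ASN_COUNTRY = {64500: "US", 64501: "DE", 64502: "XX"}  # "XX" = untrusted jurisdiction
UNTRUSTED = {"XX"}

def path_countries(as_path):
    """Countries inferred from the autonomous systems on a BGP AS path."""
    # Unknown ASNs map to "unknown" here; a cautious policy could treat them as untrusted.
    return {ASN_COUNTRY.get(asn, "unknown") for asn in as_path}

def should_block(prefix: str, as_path) -> bool:
    """Block egress traffic to `prefix` if its route transits an untrusted jurisdiction."""
    return bool(path_countries(as_path) & UNTRUSTED)

# A BGP UPDATE for 203.0.113.0/24 whose AS path traverses AS64502 ("XX"):
if should_block("203.0.113.0/24", [64500, 64501, 64502]):
    print("install firewall rule: drop egress traffic to 203.0.113.0/24")
```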
The security mechanism of the mobile agent running platform is critical for the stable operation of a mobile agent system. In this paper we focus on the security issues of the mobile agent running platform and propose a cross-validation mechanism combined with encryption algorithms to address the security problems that arise during the migration and communication of mobile agents. First, we employ the cross-validation mechanism to authenticate the nodes that mobile agents will visit. Second, we employ a hybrid encryption mechanism, which combines the advantages of symmetric and asymmetric encryption, to encrypt the mobile agents and secure the data transfer process. Finally, we employ the EMSSL socket communication method to encrypt the content of transmissions, thereby enhancing the security and robustness of the mobile agent system. We conducted several experiments in a simulation environment, and the experimental results verify the efficiency and accuracy of the proposed methods.
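The hybrid encryption idea (symmetric encryption of the agent payload, asymmetric wrapping of the symmetric key) can be sketched generically with the `cryptography` package; this is a generic illustration, not the paper's EMSSL-based implementation.

```python
# Generic hybrid-encryption sketch using the `cryptography` package.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_agent(agent_state: bytes, node_public_key):
    """Encrypt agent state with a fresh symmetric key; wrap that key for the destination node."""
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(agent_state)
    wrapped_key = node_public_key.encrypt(session_key, OAEP)
    return wrapped_key, ciphertext

def decrypt_agent(wrapped_key: bytes, ciphertext: bytes, node_private_key) -> bytes:
    """Destination node unwraps the symmetric key and recovers the agent state."""
    session_key = node_private_key.decrypt(wrapped_key, OAEP)
    return Fernet(session_key).decrypt(ciphertext)

node_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped, blob = encrypt_agent(b"serialized mobile agent state", node_key.public_key())
assert decrypt_agent(wrapped, blob, node_key) == b"serialized mobile agent state"
```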
Network architectures and applications are becoming increasingly complex. Several approaches to automatically enforce configurations on devices, applications and services have been proposed, such as Policy-Based Network Management (PBNM). However, the management of enforced configurations in production environments (e.g. data centers) is a crucial and complex task; consider, for example, an update of a firewall configuration that changes a set of rules. Although this task is fundamental for complex systems, few effective solutions have been proposed for monitoring and managing enforced configurations. This work proposes a novel approach to monitor and manage enforced configurations in production environments. The main contributions of this paper are a formal model to identify/generate traffic flows and to verify the enforced configurations, and a slim and transparent framework to perform the policy assessment. We have implemented and validated our approach in a virtual environment in order to evaluate different scenarios. The results demonstrate that the prototype is effective and has good performance; therefore our model can be effectively used to analyse several types of IT infrastructures. A further interesting result is that our approach is complementary to PBNM.
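A toy sketch of the policy-assessment idea: probe flows derived from the high-level policy are evaluated against the enforced configuration (here simplified to a first-match firewall rule list) and mismatches are reported. The rule format, probe flows and expected decisions are invented for illustration.

```python
import ipaddress

# Simplified enforced configuration: an ordered firewall rule set, first match wins.
RULES = [
    {"src": "10.0.0.0/24", "dst_port": 22,   "action": "deny"},
    {"src": "10.0.0.0/24", "dst_port": None, "action": "allow"},  # None = any port
]

def decide(src_ip: str, dst_port: int) -> str:
    """Decision of the enforced configuration for one traffic flow."""
    for rule in RULES:
        in_src = ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
        if in_src and rule["dst_port"] in (None, dst_port):
            return rule["action"]
    return "deny"  # implicit default-deny

# Probe flows generated from the high-level policy, with the expected decisions.
PROBES = [("10.0.0.5", 22, "deny"), ("10.0.0.5", 443, "allow")]
for src_ip, port, expected in PROBES:
    observed = decide(src_ip, port)
    status = "OK" if observed == expected else "MISCONFIGURATION"
    print(f"{src_ip}:{port} expected={expected} observed={observed} -> {status}")
```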
The popularity of cloud hosting services also brings new security challenges: it has been reported that these services are increasingly utilized by miscreants for their malicious online activities. Mitigating this emerging threat, posed by such "bad repositories" (simply Bar), is challenging due to a hosting strategy that differs from traditional hosting services, the lack of direct observations of the repositories by those outside the cloud, the reluctance of cloud providers to scan their customers' repositories without consent, and the unique evasion strategies employed by the adversary. In this paper, we took the first step toward understanding and detecting this emerging threat. Using a small set of "seeds" (i.e., confirmed Bars), we identified a set of collective features from the websites they serve (e.g., attempts to hide Bars), which uniquely characterize the Bars. These features were utilized to build a scanner that detected over 600 Bars on leading cloud platforms like Amazon and Google, as well as 150K sites using them, including popular ones like groupon.com. Highlights of our study include the pivotal roles played by these repositories in malicious infrastructures, along with previously unreported discoveries of how the adversary exploited legitimate cloud repositories and why the adversary uses Bars in the first place. These findings bring such malicious services into the spotlight and contribute to a better understanding of, and ultimately to eliminating, this new threat.
The automotive industry is experiencing a paradigm shift towards autonomous and connected vehicles. Coupled with the increasing usage and complexity of electrical and/or electronic systems, this introduces new safety and security risks. Encouragingly, the automotive industry has relatively well-known and standardised safety risk management practices, but security risk management is still in its infancy. In order to facilitate the derivation of security requirements and security measures for automotive embedded systems, we propose a specifically tailored risk assessment framework, and we demonstrate its viability with an industry use-case. Some of the key features are alignment with existing processes for functional safety, and usability for non-security specialists. The framework begins with a threat analysis to identify the assets, and threats to those assets. The following risk assessment process consists of an estimation of the threat level and of the impact level. This step utilises several existing standards and methodologies, with changes where necessary. Finally, a security level is estimated which is used to formulate high-level security requirements. The strong alignment with existing standards and processes should make this framework well-suited for the needs in the automotive industry.
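The framework above estimates a threat level and an impact level and combines them into a security level that drives high-level requirements. The following sketch shows such a combination with a hypothetical lookup table; the paper derives its tables from existing standards, and the levels and values here are purely illustrative.

```python
# Hypothetical threat-level x impact-level matrix yielding a security level.
SECURITY_LEVEL = {
    ("low", "low"): "SL0",  ("low", "high"): "SL1",
    ("high", "low"): "SL2", ("high", "high"): "SL3",
}

def assess(asset: str, threat_level: str, impact_level: str) -> str:
    """Derive a security level for an asset, to be turned into high-level requirements."""
    level = SECURITY_LEVEL[(threat_level, impact_level)]
    print(f"{asset}: threat={threat_level}, impact={impact_level} -> {level}")
    return level

# Example: a spoofed braking command is assumed high threat and high impact.
assess("CAN bus: brake command message", "high", "high")   # -> SL3
assess("infotainment diagnostics log", "low", "low")       # -> SL0
```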
The computer security community has long advocated defense in depth, building multiple layers of defense to protect a system. Realizing this vision is not yet practical, as software often ships with inadequate defenses, typically developed in an ad hoc fashion. Currently, programmers reason about security manually and lack tools to validate that security controls provide satisfactory defenses. In this keynote talk, I will discuss how achieving defense in depth has a significant configuration component. In particular, we advocate configuring security requirements for various layers of software defenses (e.g., privilege separation, authorization, and auditing) and generating software and systems defenses that implement such configurations (mostly) automatically. I will focus mainly on the challenge of retrofitting software with authorization code automatically to demonstrate the configuration problems faced by the community, and discuss how we may leverage these lessons to configure software and systems for defense in depth.
Firewall policies are notorious for misconfiguration errors which can defeat their intended purpose of protecting hosts in the network from malicious users. We believe this is because today's firewall policies are mostly monolithic. Inspired by ideas from modular programming and code refactoring, in this work we introduce three kinds of modules: primary, auxiliary, and template, which facilitate the refactoring of a firewall policy into smaller, reusable, comprehensible, and more manageable components. We present algorithms for generating each of the three modules for a given legacy firewall policy. We also develop ModFP, an automated tool for converting legacy firewall policies represented as access control lists into their modularized format. With the help of ModFP, when examining several real-world policies with sizes ranging from dozens to hundreds of rules, we were able to identify subtle errors.
Software development is often accompanied by security audits such as penetration tests, usually performed on behalf of the software vendor. In penetration tests, security experts identify entry points for attacks in a software product. Many development teams undergo such audits for the first time when their product is attacked or faces new security concerns. The audits often serve as an eye-opener for development teams: they realize that security requires much more attention. However, there is a lack of clarity with regard to what lasting benefits developers can reap from penetration tests. We report on a one-year study of a penetration test run at a major software vendor, and describe how a software development team managed to incorporate the test findings. Results suggest that penetration tests improve developers' security awareness, but that long-lasting enhancements of development practices are hampered by a lack of dedicated security stakeholders and by security not being properly reflected in the communicative and collaborative structures of the organization.
Simulation and verification of the on-die power delivery network (PDN) is one of the key steps in the design of integrated circuits (ICs). With the very large sizes of modern grids, verification of PDNs has become very expensive and a host of techniques for faster simulation and grid model approximation have been proposed. These include topological node elimination, as in TICER, and full-blown numerical model order reduction (MOR), as in PRIMA and related methods. However, both of these traditional approaches suffer from certain drawbacks that make them expensive and limit their scalability to very large grids. In this paper, we propose a novel technique for grid reduction that is a hybrid of both approaches: the method is numerical but also factors in grid topology. It works by eliminating whole internal layers of the grid at a time, while aiming to preserve the dynamic behavior of the resulting reduced grid. Effectively, instead of traditional node-by-node topological elimination we provide a numerical layer-by-layer block-matrix approach that is both fast and accurate. Experimental results show that this technique is capable of handling very large power grids and provides a 4.25x speed-up in transient analysis.
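Eliminating all nodes of an internal layer at once amounts to a block (Schur complement) elimination of the corresponding rows and columns. The numpy sketch below illustrates this under a purely resistive, DC simplification; it is only the block-elimination idea, not the paper's method, which also preserves the grid's dynamic behavior.

```python
import numpy as np

def eliminate_layer(G: np.ndarray, J: np.ndarray, internal: np.ndarray):
    """Eliminate one layer of grid nodes at once via a block Schur complement.

    G: conductance matrix, J: current injections, internal: boolean mask of the
    layer being removed.  DC/resistive sketch only; C and L elements are ignored.
    """
    b = ~internal                                   # boundary (kept) nodes
    Gii, Gib = G[np.ix_(internal, internal)], G[np.ix_(internal, b)]
    Gbi, Gbb = G[np.ix_(b, internal)], G[np.ix_(b, b)]
    G_red = Gbb - Gbi @ np.linalg.solve(Gii, Gib)   # Schur complement of Gii
    J_red = J[b] - Gbi @ np.linalg.solve(Gii, J[internal])
    return G_red, J_red

# Tiny 3-node chain: node 1 is internal; its elimination is exact for kept nodes.
G = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
J = np.array([1., 0., 0.])
G_red, J_red = eliminate_layer(G, J, np.array([False, True, False]))
v_full = np.linalg.solve(G, J)
v_red  = np.linalg.solve(G_red, J_red)
print(np.allclose(v_full[[0, 2]], v_red))           # True
```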
A compiler is fully-abstract if the compilation from source language programs to target language programs reflects and preserves behavioural equivalence. Such compilers have important security benefits, as they limit the power of an attacker interacting with the program in the target language to that of an attacker interacting with the program in the source language. Proving compiler full-abstraction is, however, rather complicated. A common proof technique is based on the back-translation of target-level program contexts to behaviourally-equivalent source-level contexts. However, constructing such a back-translation is problematic when the source language is not strong enough to embed an encoding of the target language. For instance, when compiling from the simply-typed λ-calculus (λτ) to the untyped λ-calculus (λu), the lack of recursive types in λτ prevents such a back-translation. We propose a general and elegant solution for this problem. The key insight is that it suffices to construct an approximate back-translation. The approximation is only accurate up to a certain number of steps and conservative beyond that, in the sense that the context generated by the back-translation may diverge when the original would not, but not vice versa. Based on this insight, we describe a general technique for proving compiler full-abstraction and demonstrate it on a compiler from λτ to λu . The proof uses asymmetric cross-language logical relations and makes innovative use of step-indexing to express the relation between a context and its approximate back-translation. We believe this proof technique can scale to challenging settings and enable simpler, more scalable proofs of compiler full-abstraction.
Interactive systems are developed according to requirements, which may be, for instance, documentation, prototypes, diagrams, etc. The informal nature of system requirements may be a source of problems: it may be the case that a system does not implement the requirements as expected, thus, a way to validate whether an implementation follows the requirements is needed. We propose a novel approach to validating a system using formal models of the system. In this approach, a set of traces generated from the execution of the real interactive system is searched over the state space of the formal model. The scalability of the approach is demonstrated by an application to an industrial system in the nuclear plant domain. The combination of trace analysis and formal methods provides feedback that can bring improvements to both the real interactive system and the formal model.
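A minimal sketch of the trace-replay idea described above: a trace of events recorded from the running system is checked against a small labelled transition system standing in for the formal model. The model, states and event names are invented for illustration.

```python
# Minimal labelled transition system: state -> {event: successor state}.
MODEL = {
    "idle":    {"press_start": "running"},
    "running": {"press_stop": "idle", "alarm": "halted"},
    "halted":  {"reset": "idle"},
}

def trace_accepted(model, initial_state, trace) -> bool:
    """Check whether a trace observed on the real system is replayable in the model."""
    state = initial_state
    for event in trace:
        if event not in model.get(state, {}):
            print(f"divergence: event '{event}' not allowed in model state '{state}'")
            return False
        state = model[state][event]
    return True

# Traces logged from the running interactive system:
print(trace_accepted(MODEL, "idle", ["press_start", "alarm", "reset"]))   # True
print(trace_accepted(MODEL, "idle", ["press_start", "reset"]))            # False -> feedback
```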
Testing a software product line such as Linux implies building the source with different configurations. Manual approaches to generate configurations that enable code of interest are doomed to fail due to the high amount of variation points distributed over the feature model, the build system and the source code. Research has proposed various approaches to generate covering configurations, but the algorithms show many drawbacks related to run-time, exhaustiveness and the amount of generated configurations. Hence, analyzing an entire Linux source can yield more than 30 thousand configurations, thereby exceeding the limited budget and resources for build testing. In this paper, we present an approach to fill the gap between a systematic generation of configurations and the necessity to fully build software in order to test it. By merging previously generated configurations, we reduce the number of necessary builds and enable global variability-aware testing. We reduce the problem of merging configurations to finding maximum cliques in a graph. We evaluate the approach on the Linux kernel, compare the results to common practices in industry, and show that our implementation scales even when facing graphs with millions of edges.
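The reduction to maximum cliques can be illustrated with a small sketch using networkx: configurations become nodes, an edge means two configurations assign no conflicting values to shared features, and a large maximal clique identifies a set that can be merged into one build. The configuration representation and data are assumptions for illustration only.

```python
import networkx as nx

# Each partial configuration assigns values to some features (illustrative data).
CONFIGS = {
    "c1": {"CONFIG_A": "y", "CONFIG_B": "n"},
    "c2": {"CONFIG_A": "y", "CONFIG_C": "m"},
    "c3": {"CONFIG_B": "y"},                    # conflicts with c1 on CONFIG_B
}

def compatible(a, b) -> bool:
    """Two configurations can be merged if no shared feature gets different values."""
    shared = set(a) & set(b)
    return all(a[f] == b[f] for f in shared)

# Build the compatibility graph and pick a large maximal clique to merge.
G = nx.Graph()
G.add_nodes_from(CONFIGS)
for u in CONFIGS:
    for v in CONFIGS:
        if u < v and compatible(CONFIGS[u], CONFIGS[v]):
            G.add_edge(u, v)

best = max(nx.find_cliques(G), key=len)          # enumerate maximal cliques, take the largest
merged = {}
for name in best:
    merged.update(CONFIGS[name])
print(best, merged)                              # ['c1', 'c2'] merged into one build config
```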
In transactional database systems, multiversion concurrency control is maintained to provide secure, fast and efficient access to shared data. Effective coordination must be established between owners, users, developers and system operators to maintain inter-cloud and intra-cloud communication. Most of the services and applications offered in the cloud are real-time, which calls for an optimized and compatible service environment between master and slave clusters. The methodology offered in this paper supports replication and triggering methods intended for data consistency and dynamicity, where intercommunication between different clusters is processed through middleware and slave intra-communication is handled by verification and identification protection. The proposed approach incorporates a resistive flow to handle high-impact systems, which identifies and verifies multiple processes. Results show that the new scheme reduces the overheads of the different master and slave servers, as they are co-located in clusters, which allows increased horizontal and vertical scalability of resources.
Time optimal reachability analysis employs model-checking to compute goal states that can be reached from an initial state with a minimal accumulated time duration. The model-checker may produce a corresponding diagnostic trace which can be interpreted as a feasible schedule for many scheduling and planning problems, response-time optimization, etc. We propose swarm verification to accelerate time optimal reachability using the real-time model-checker Uppaal. In swarm verification, a large number of model checker instances execute in parallel on a computer cluster using different, typically randomized search strategies. We develop four swarm algorithms and evaluate them on four models in terms of scalability and time and memory consumption. Three of these cooperate by exchanging costs of intermediate solutions to prune the search using a branch-and-bound approach. Our results show that swarm algorithms work much faster than sequential algorithms, and especially two that use combinations of random depth-first and breadth-first search show very promising performance.
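A hedged sketch of the cooperative branch-and-bound idea (not Uppaal itself): several workers with different random seeds explore a small weighted transition system in parallel and share the best accumulated duration found so far, pruning branches that cannot improve it. The model and durations are invented for illustration.

```python
import random
from multiprocessing import Process, Value

# Tiny weighted transition system: state -> [(successor, duration), ...]
GRAPH = {
    "s0": [("s1", 2), ("s2", 5)],
    "s1": [("goal", 7), ("s2", 1)],
    "s2": [("goal", 3)],
    "goal": [],
}

def worker(seed, best):
    """Randomized depth-first exploration with branch-and-bound pruning via the shared bound."""
    rng = random.Random(seed)

    def dfs(state, cost):
        if cost >= best.value:           # prune: cannot improve the shared best cost
            return
        if state == "goal":
            with best.get_lock():
                if cost < best.value:
                    best.value = cost
                    print(f"worker {seed}: new best cost {cost}")
            return
        for succ, dur in rng.sample(GRAPH[state], len(GRAPH[state])):  # randomized order
            dfs(succ, cost + dur)

    dfs("s0", 0)

if __name__ == "__main__":
    best = Value("i", 10**9)             # shared upper bound on the minimal duration
    procs = [Process(target=worker, args=(seed, best)) for seed in range(4)]
    for p in procs: p.start()
    for p in procs: p.join()
    print("minimal accumulated duration:", best.value)   # 6 (s0 -> s1 -> s2 -> goal)
```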
Unsafe behavior of cyber-physical systems can have disastrous consequences, motivating the need for formal verification of these kinds of systems. Deductive verification in a proof assistant such as Coq is a promising technique for this verification because it (1) justifies all verification from first principles, (2) is not limited to classes of systems for which full automation is possible, and (3) provides a platform for proving powerful, higher-order modularity theorems that are crucial for scaling verification to complex systems. In this paper, we demonstrate the practicality, utility, and scalability of this approach by developing in Coq sound and powerful rules for modular construction and verification of sampled-data cyber-physical systems. We evaluate these rules by using them to verify a number of non-trivial controllers enforcing safety properties of a quadcopter, e.g. a geo-fence. We show that our controllers are realistic by running them on a real, flying quadcopter.
Cloud service providers typically adopt the multi-tenancy model to optimize resource usage and achieve the promised cost-effectiveness. Sharing resources between different tenants and the underlying complex technology increase the necessity of transparency and accountability. In this regard, auditing security compliance of the provider's infrastructure against standards, regulations and customers' policies takes on increasing importance in the cloud to boost the trust between the stakeholders. However, virtualization and scalability make compliance verification challenging. In this work, we propose an automated framework that allows auditing the cloud infrastructure from the structural point of view while focusing on virtualization-related security properties and consistency between multiple control layers. Furthermore, to show the feasibility of our approach, we integrate our auditing system into OpenStack, one of the most widely used cloud infrastructure management systems. To show the scalability and validity of our framework, we present our experimental results on assessing several properties related to auditing inter-layer consistency, virtual machine co-residence, and virtual resource isolation.
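As a small illustration of one such structural property, the sketch below flags tenants with an isolation requirement whose virtual machines share a physical host with another tenant. The inventory shape and tenant names are assumptions for illustration and do not reflect OpenStack's actual data model.

```python
from collections import defaultdict

# Inventory extracted from the cloud management layer: vm -> (tenant, physical host).
INVENTORY = {
    "vm-1": ("tenant-a", "host-1"),
    "vm-2": ("tenant-b", "host-1"),   # co-resident with tenant-a's vm-1
    "vm-3": ("tenant-a", "host-2"),
}

# Tenants whose policy forbids sharing hardware with other tenants.
NO_CO_RESIDENCE = {"tenant-a"}

def audit_co_residence(inventory):
    """Flag hosts where a tenant with an isolation requirement shares hardware with others."""
    by_host = defaultdict(set)
    for vm, (tenant, host) in inventory.items():
        by_host[host].add(tenant)
    violations = []
    for host, tenants in by_host.items():
        if len(tenants) > 1 and tenants & NO_CO_RESIDENCE:
            violations.append((host, sorted(tenants)))
    return violations

print(audit_co_residence(INVENTORY))   # [('host-1', ['tenant-a', 'tenant-b'])]
```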
Since debugging is a time-consuming activity, automated program repair tools such as GenProg have garnered interest. A recent study revealed that the majority of GenProg repairs avoid bugs simply by deleting functionality. We found that SPR, a state-of-the-art repair tool proposed in 2015, still deletes functionality in many of its "plausible" repairs. Unlike generate-and-validate systems such as GenProg and SPR, semantics-based repair techniques synthesize a repair based on semantic information of the program. While such semantics-based repair methods show promise in terms of the quality of generated repairs, their scalability has been a concern so far. In this paper, we present Angelix, a novel semantics-based repair method that scales up to programs of similar size to those handled by search-based repair tools such as GenProg and SPR. This shows that Angelix is more scalable than previously proposed semantics-based repair methods such as SemFix and DirectFix. Furthermore, our repair method can repair multiple buggy locations that are dependent on each other. Such repairs are hard to achieve using SPR and GenProg. In our experiments, Angelix generated repairs from large-scale real-world software such as wireshark and php, and these generated repairs include multi-location repairs. We also report our experience in automatically repairing the well-known Heartbleed vulnerability.
We present the design and implementation of a trust-on-first-use (TOFU) policy for OpenPGP. When an OpenPGP user verifies a signature, TOFU checks that the signer used the same key as in the past. If not, this is a strong indicator that a key is a forgery and either the message is also a forgery or an active man-in-the-middle attack (MitM) is or was underway. That is, TOFU can proactively detect new attacks if the user had previously verified a message from the signer. And, it can reactively detect an attack if the signer gets a message through. TOFU cannot, however, protect against sustained MitM attacks. Despite this weakness, TOFU's practical security is stronger than the Web of Trust (WoT), OpenPGP's current trust policy, for most users. The problem with the WoT is that it requires too much user support. TOFU is also better than the most popular alternative, an X.509-based PKI, which relies on central servers whose certification processes are often sloppy. In this paper, we outline how TOFU can be integrated into OpenPGP; we address a number of potential attacks against TOFU; and, we show how TOFU can work alongside the WoT. Our implementation demonstrates the practicality of the approach.
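The core TOFU check described above, recording a signer's key on first contact and warning when a later signature verifies under a different key, can be sketched as follows. The store file, signer identifiers and fingerprints are simplified assumptions, not the OpenPGP implementation from the paper.

```python
import json, os

PIN_FILE = "tofu_pins.json"   # hypothetical local store mapping signer -> key fingerprint

def tofu_check(signer: str, key_fingerprint: str) -> str:
    """Trust a signer's key on first use; flag any later change of that key."""
    pins = {}
    if os.path.exists(PIN_FILE):
        with open(PIN_FILE) as f:
            pins = json.load(f)
    if signer not in pins:
        pins[signer] = key_fingerprint        # first contact: pin the key
        with open(PIN_FILE, "w") as f:
            json.dump(pins, f)
        return "first use: key pinned"
    if pins[signer] == key_fingerprint:
        return "ok: same key as in the past"
    return "WARNING: key changed -- possible forgery or man-in-the-middle attack"

print(tofu_check("alice@example.org", "fpr-1111"))  # first use: key pinned
print(tofu_check("alice@example.org", "fpr-1111"))  # ok: same key as in the past
print(tofu_check("alice@example.org", "fpr-2222"))  # WARNING: key changed
```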
The semantics of online authentication in the web are rather straightforward: if Alice has a certificate binding Bob's name to a public key, and if a remote entity can prove knowledge of Bob's private key, then (barring key compromise) that remote entity must be Bob. However, in reality, many websites, and the majority of the most popular ones, are hosted at least in part by third parties such as Content Delivery Networks (CDNs) or web hosting providers. Put simply: administrators of websites who deal with (extremely) sensitive user data are giving their private keys to third parties. Importantly, this sharing of keys is undetectable by most users, and widely unknown even among researchers. In this paper, we perform a large-scale measurement study of key sharing in today's web. We analyze the prevalence with which websites trust third-party hosting providers with their secret keys, as well as the impact that this trust has on responsible key management practices, such as revocation. Our results reveal that key sharing is extremely common, with a small handful of hosting providers having keys from the majority of the most popular websites. We also find that hosting providers often manage their customers' keys, and that they tend to react more slowly yet more thoroughly to compromised or potentially compromised keys.
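One step of such a measurement can be sketched as fetching the certificate each domain serves over TLS and grouping domains by the fingerprint of the presented public key; domains that share a fingerprint are served with the same key pair, typically by a common CDN or hosting provider. The target domains are placeholders, the sketch performs live connections, and it is only a rough stand-in for the paper's large-scale methodology.

```python
import hashlib, ssl
from collections import defaultdict
from cryptography import x509
from cryptography.hazmat.primitives import serialization

def spki_fingerprint(host: str) -> str:
    """SHA-256 fingerprint of the public key a host presents over TLS."""
    pem = ssl.get_server_certificate((host, 443))
    cert = x509.load_pem_x509_certificate(pem.encode())
    spki = cert.public_key().public_bytes(serialization.Encoding.DER,
                                          serialization.PublicFormat.SubjectPublicKeyInfo)
    return hashlib.sha256(spki).hexdigest()

# Group domains by the key they present; shared fingerprints indicate shared keys.
domains = ["example.com", "example.org", "example.net"]   # illustrative targets
by_key = defaultdict(list)
for d in domains:
    by_key[spki_fingerprint(d)].append(d)
for fp, ds in by_key.items():
    print(fp[:16], ds)
```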