Bibliography
Unlike traditional processors, Internet of Things (IoT) devices lack the resources to incorporate mature protections (e.g., MMU, TrustZone) against modern control-flow attacks. Remote (control-flow) attestation is fast becoming a key instrument in securing such devices, as it has proven effective not only at detecting runtime malware infestation of a remote device, but also at conserving the device's computing resources by offloading the costly verification process to the verifier. However, few control-flow attestation schemes have drawn on systematic research into the software characteristics of bare-metal systems, which are widely deployed on resource-constrained IoT devices. To our knowledge, the unique design patterns of these systems limit the applicability of existing schemes. In this paper, we present the design and proof-of-concept implementation of LAPE, a lightweight attestation of program execution scheme that detects control-flow attacks on bare-metal systems without requiring hardware modification. Using the rudimentary memory protection support found in modern IoT-class microcontrollers, LAPE leverages software instrumentation to compartmentalize the firmware functions into several "attestation compartments". It then continuously tracks the control-flow events of each compartment and periodically reports them to the verifier. The proof of concept is incorporated into an LLVM-based compiler to generate LAPE-enabled firmware. Experiments with several real-world IoT firmware images show both the efficiency and practicality of LAPE.
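The abstract above describes per-compartment control-flow tracking with periodic reports to a verifier. The following is a minimal Python sketch of that general idea only; LAPE itself works via LLVM instrumentation inside the firmware, and the class and function names here (CompartmentTrace, record_edge, report) are hypothetical illustrations, not LAPE's API.

```python
import hashlib
import hmac

# Hypothetical sketch: each control-flow transfer inside a compartment folds its
# (source, destination) pair into a running hash; the prover periodically MACs
# the digest, bound to a verifier-supplied nonce, and sends it as the report.
class CompartmentTrace:
    def __init__(self, name):
        self.name = name
        self.digest = hashlib.sha256()

    def record_edge(self, src_addr, dst_addr):
        # Would be invoked by the instrumented firmware on every tracked transfer.
        self.digest.update(src_addr.to_bytes(4, "little"))
        self.digest.update(dst_addr.to_bytes(4, "little"))

    def report(self, key, nonce):
        # Periodic attestation report for this compartment.
        return hmac.new(key, nonce + self.digest.digest(), hashlib.sha256).hexdigest()

# Usage: the verifier recomputes the expected digest from the firmware's
# control-flow graph and compares it with the reported value.
trace = CompartmentTrace("crypto_compartment")
trace.record_edge(0x0800_1000, 0x0800_2000)
print(trace.report(b"shared-device-key", b"verifier-nonce"))
```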
In recent years, we have seen the advent of software attestation defenses targeting embedded systems, which aim to detect tampering with a device's running program. Under the persistent threat of an increasingly powerful attacker with physical access to the device, attestation approaches have become more deeply rooted in the device's hardware, with some approaches even changing the underlying microarchitecture. These drastic changes to the hardware make the proposed defenses hard to apply to new systems. In this paper, we present and evaluate LAHEL as a means to study the implementation and pitfalls of a hardware-based attestation mechanism. We limit LAHEL to existing technologies, demanding no hardware changes. We implement LAHEL as a hardware IP core that interfaces with the CoreSight Debug Architecture available in modern ARM cores. We show how LAHEL can be integrated into system-on-chip designs, allowing microcontroller vendors to easily add our defense to their products. We present and test our prototype on a Zynq-7000 SoC, evaluating the security of LAHEL against powerful time-of-check-to-time-of-use (TOCTOU) attacks while demonstrating improved performance over existing attestation schemes.
A common way to characterize security enforcement mechanisms is based on the time at which they operate. Mechanisms operating before a program's execution are static mechanisms, and mechanisms operating during a program's execution are dynamic mechanisms. This paper introduces a different perspective and classifies mechanisms based on the granularity of program code that they monitor. Classifying mechanisms in this way provides a unified view of security mechanisms and shows that all security mechanisms can be encoded as dynamic mechanisms that operate at different levels of program code granularity. The practicality of the approach is demonstrated through a prototype implementation of a framework for enforcing security policies at various levels of code granularity on Java bytecode applications.
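To make the granularity-based view concrete, here is an illustrative Python sketch (not the paper's Java bytecode framework): a dynamic monitor that observes events at method-call granularity and enforces a simple policy; a finer-grained monitor would observe individual instructions instead. The event names and policy are invented for illustration.

```python
# Illustrative sketch of a dynamic monitor operating at method-call granularity.
# Policy: "no network send after a file read".
class PolicyViolation(Exception):
    pass

class CallGranularityMonitor:
    def __init__(self):
        self.read_secret = False

    def observe(self, event):
        # 'event' is the name of a monitored call; an instruction-granularity
        # monitor would receive one event per executed instruction instead.
        if event == "file.read":
            self.read_secret = True
        elif event == "net.send" and self.read_secret:
            raise PolicyViolation("send after read is forbidden")

monitor = CallGranularityMonitor()
for call in ["net.send", "file.read", "net.send"]:
    try:
        monitor.observe(call)
        print(f"{call}: allowed")
    except PolicyViolation as err:
        print(f"{call}: blocked ({err})")
```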
The emergence of Industrial Cyber-Physical Systems (ICPS) in today's business world continues to progress steadily into new dimensions. Although they bring many new advantages to business processes and enable automation and a wider range of service capabilities, they also pose a variety of new challenges. One major challenge introduced by such Systems-of-Systems (SoS) lies in the security aspect. As security has not played a significant role in traditional embedded-system engineering, a generic way to measure the level of security within an ICPS would provide a significant benefit for system engineers and involved stakeholders. Even though many security metrics and frameworks exist, most of them insufficiently consider the SoS context and the challenges of such environments. Therefore, we aim to define a security metric for ICPS that measures the level of security during system design, testing, and integration as well as at runtime. For this, we focus on a semantic point of view, which, on the one hand, has not yet been considered in security-metric definitions and, on the other hand, allows us to handle the complexity of SoS architectures. Furthermore, our approach allows combining the critical characteristics of an ICPS, such as uncertainty, required reliability, multi-criticality, and safety aspects.
Language-based information-flow control (IFC) techniques often rely on special purpose, ad-hoc primitives to address different covert channels that originate in the runtime system, beyond the scope of language constructs. Since these piecemeal solutions may not compose securely, there is a need for a unified mechanism to control covert channels. As a first step towards this goal, we argue for the design of a general interface that allows programs to safely interact with the runtime system and the available computing resources. To coordinate the communication between programs and the runtime system, we propose the use of asynchronous exceptions (interrupts), which, to the best of our knowledge, have not been considered before in the context of IFC languages. Since asynchronous exceptions can be raised at any point during execution, often due to the occurrence of an external event, threads must temporarily mask them out when manipulating locks and shared data structures to avoid deadlocks and, therefore, breaking program invariants. Crucially, the naive combination of asynchronous exceptions with existing features of IFC languages (e.g., concurrency and synchronization variables) may open up new possibilities of information leakage. In this paper, we present MACasync, a concurrent, statically enforced IFC language that, as a novelty, features asynchronous exceptions. We show how asynchronous exceptions easily enable (out of the box) useful programming patterns like speculative execution and some degree of resource management. We prove that programs in MACasync satisfy progress-sensitive non-interference and mechanize our formal claims in the Agda proof assistant.
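The masking pattern mentioned above (temporarily disabling asynchronous exceptions while re-establishing invariants under a lock) can be sketched outside Haskell as well. Below is a loose Python analogue that uses POSIX signals in place of asynchronous exceptions; it is only an illustration of the pattern, not MACasync's mechanism, and the helper name masked is invented.

```python
import signal
import threading
from contextlib import contextmanager

# Loose analogue of masking asynchronous exceptions: block asynchronous signals
# (POSIX-only) while a lock-protected invariant is being re-established, then
# restore the previous signal mask.
@contextmanager
def masked(signals=frozenset({signal.SIGINT, signal.SIGTERM})):
    old_mask = signal.pthread_sigmask(signal.SIG_BLOCK, signals)
    try:
        yield
    finally:
        signal.pthread_sigmask(signal.SIG_SETMASK, old_mask)

shared = {"balance": 0}
lock = threading.Lock()

def deposit(amount):
    # An interrupt delivered between the two updates below could otherwise leave
    # 'shared' inconsistent while the lock is still held.
    with masked():
        with lock:
            shared["balance"] += amount
            shared["log"] = shared.get("log", []) + [amount]

deposit(10)
print(shared)
```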
Information Flow Control (IFC) is a collection of techniques for ensuring a no-write-down no-read-up style security policy known as noninterference. Traditional methods for both static (e.g. type systems) and dynamic (e.g. runtime monitors) IFC suffer from untenable numbers of false alarms on real-world programs. Secure Multi-Execution (SME) promises to provide secure information flow control without modifying the behaviour of already secure programs, a property commonly referred to as transparency. Implementations of SME exist for the web in the form of the FlowFox browser and as plug-ins to several programming languages. Furthermore, SME can in theory work in a black-box manner, meaning that it can be programming-language agnostic, making it perfect for securing legacy or third-party systems. As such, SME and its variants, like Multiple Facets (MF) and Faceted Secure Multi-Execution (FSME), appear to be a family of panaceas for the security engineer. The question is: given all these advantages, why are these techniques not ubiquitous in practice? The answer lies, partially, in the issue of runtime and memory overhead. SME and its variants are prohibitively expensive to deploy in many non-trivial situations. The natural question is: why is this the case? On the surface, the reason is simple. The techniques in the SME family all rely on the idea of multi-execution, running all or parts of a program multiple times to achieve noninterference. Naturally, this causes some overhead. However, the predominant thinking in the IFC community has been that these overheads can be overcome. In this paper, we argue that there are fundamental reasons to expect this not to be the case and prove two key theorems: (1) All transparent enforcement is polynomial time equivalent to multi-execution. (2) All black-box enforcement takes time exponential in the number of principals in the security lattice. Our methods also allow us to answer, in the affirmative, an open question about the possibility of secure and transparent enforcement of a security condition known as Termination Insensitive Noninterference.
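For readers unfamiliar with multi-execution, the following is a minimal, self-contained Python sketch of SME for a two-point lattice {LOW, HIGH}; it is not FlowFox or any of the cited implementations. The program is run once per level, the LOW run sees a default value in place of the HIGH input, and each output channel is taken from the run at its own level, so the overhead grows with the number of levels.

```python
# Minimal secure multi-execution sketch for a two-level lattice.
DEFAULT = 0  # stand-in value fed to the LOW run instead of the secret

def sme(program, low_in, high_in):
    low_run = program(low_in, DEFAULT)      # LOW run: secret input replaced
    high_run = program(low_in, high_in)     # HIGH run: sees all inputs
    # Each output channel is drawn from the run at its own security level.
    return {"low": low_run["low"], "high": high_run["high"]}

def leaky_program(public, secret):
    # Attempts to copy the secret onto the public channel.
    return {"low": secret, "high": secret}

# Under SME the public ('low') output no longer depends on the secret input.
print(sme(leaky_program, low_in=1, high_in=42))  # {'low': 0, 'high': 42}
```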