Bibliography
An industrial control system (ICS) is a system of actuators, control stations, and a network that manages processes and functions in an industrial setting. The ICS community faces two major problems in keeping pace with the broader trends of Industry 4.0: (1) a data-rich, information-poor (DRIP) syndrome, and (2) the risk of financial and safety harm due to security breaches. In this paper, we propose a private-cloud-in-the-loop ICS architecture for real-time analytics that addresses both low data utilization and the need for security hardening.
Developing software is hard. Developing software that is resilient and does not crash when unexpected inputs or events occur is even harder, especially with IoT devices and real-time requirements, e.g., due to interactions with human beings. Therefore, there is a need for a software architecture that helps developers build fault-tolerant software with as little pain and effort as possible. To this end, we have designed a fault tolerance framework for automation systems that lets developers remain mostly oblivious to fault tolerance issues, so that they can focus on the application logic encapsulated in (micro)services. That is, the developer only needs to specify the required fault tolerance level by description, not implementation; the fault tolerance aspects are transparent to the developer, as the framework takes care of them. This approach is particularly suited to the development of mixed-criticality systems, where different parts have very different and demanding functional and non-functional requirements. Such systems need highly specialized developers, and removing the burden of fault tolerance results in a faster time to market and in safer, more dependable systems.
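To make the "description, not implementation" idea concrete, here is a minimal Python sketch of what a declarative fault tolerance annotation could look like; the decorator name and its parameters are hypothetical illustrations, not the framework's actual API:

    # Hypothetical sketch: the developer declares a fault tolerance level;
    # the retry/fallback mechanism lives entirely in the framework.
    import functools
    import time

    def fault_tolerant(retries=3, backoff_s=0.1, fallback=None):
        """Declare the required fault tolerance level for a service."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                for attempt in range(retries):
                    try:
                        return func(*args, **kwargs)
                    except Exception:
                        time.sleep(backoff_s * 2 ** attempt)  # exponential backoff
                return fallback  # degrade gracefully instead of crashing
            return wrapper
        return decorator

    @fault_tolerant(retries=5, fallback=0.0)
    def read_sensor():
        ...  # application logic only; no fault handling code here

The point of the sketch is the division of labor: the service body contains only application logic, while the policy expressed in the decorator is pure description.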
Configuring and maintaining an enterprise network is a challenging and error-prone process. Administrators must often consider security policies from a variety of sources simultaneously, including regulatory requirements, industry standards, and mitigations for known attack vectors. Erroneous implementation of a policy, however, can result in costly data breaches and intrusions. Relying on humans to discover and troubleshoot violations is slow and error-prone, given the speed at which new attack vectors propagate and the increasing dynamics of networks, partly an effect of SDN. To ensure the network is always in a state consistent with the desired policies, administrators need frameworks that automatically diagnose and repair violations in real time. To address this problem, we present NEAt, a system analogous to a smartphone's autocorrect feature that enables on-the-fly repair of policy-violating updates. NEAt sits between an SDN controller and the forwarding devices and intercepts updates proposed by SDN applications. If an update violates an administrator-defined policy, such as reachability, service chaining, or segmentation, NEAt modifies the update's forwarding behavior to produce one that complies with the policy. Unlike domain-specific languages or synthesis platforms, NEAt allows enterprise networks to leverage the advanced functionality of SDN applications while simultaneously achieving strong, automated enforcement of general policies.
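As a toy illustration of the autocorrect analogy (a sketch of the idea, not NEAt's actual algorithm or data structures), the shim below intercepts a proposed next-hop update, checks a reachability policy, and substitutes a compliant alternative when the proposal violates it:

    # The network is a next-hop map, the policy is "A must reach D", and a
    # violating update is repaired by trying alternative next hops.

    def reachable(next_hop, src, dst):
        """Follow next-hop pointers; True if src eventually reaches dst."""
        seen, node = set(), src
        while node not in seen:
            if node == dst:
                return True
            seen.add(node)
            node = next_hop.get(node)
        return False

    def apply_update(next_hop, update, alternatives):
        """Accept the update if the policy holds, otherwise try to repair it."""
        switch, proposed = update
        for candidate in [proposed] + alternatives:
            trial = dict(next_hop, **{switch: candidate})
            if reachable(trial, "A", "D"):   # the administrator's policy
                return trial                 # original or repaired update
        return next_hop                      # reject: no compliant variant found

    net = {"A": "B", "B": "C", "C": "D"}
    # An application proposes B -> A, a loop that breaks reachability;
    # the shim repairs it to B -> D.
    net = apply_update(net, ("B", "A"), alternatives=["D", "C"])
    assert reachable(net, "A", "D")

NEAt itself handles richer properties and must repair updates efficiently at controller speed; the sketch only shows the intercept-check-repair control flow.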
Artificial software diversity is an effective way to prevent software vulnerabilities and errors from being exploited in code-reuse attacks. It works by lowering the individual probability of a successful attack to a level that makes the attack infeasible. Unfortunately, existing approaches are not applicable to safety-critical real-time systems: they induce unacceptable performance overheads, violate safety and timing guarantees, or assume hardware resources that are typically not available in embedded systems. To overcome these problems, we propose a safe diversity approach that preserves the timing properties of real-time processes by controlling the impact of diversification on the worst-case execution time (WCET). Our main idea is to use block-level diversity with a large but fixed set of movable instruction sequences, and to use static WCET analysis to identify non-critical areas of code that can safely be split into additional movable instruction sequences.
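The following Python sketch conveys the flavor of the approach (an illustration of the idea, not the paper's actual mechanism, which operates on binary code): randomize the order of movable blocks, but accept a layout only if a conservative bound on the added jump cost keeps the WCET within budget:

    import random

    def diversify(blocks, wcet_budget, jump_cost, trials=1000):
        """Pick a random order of movable blocks whose conservative WCET
        bound (original cost plus one jump per moved block) stays within
        budget; fall back to the original layout otherwise."""
        original = list(blocks)
        base_wcet = sum(cost for _, cost in blocks)
        for _ in range(trials):
            layout = random.sample(blocks, len(blocks))
            moved = sum(1 for a, b in zip(layout, original) if a != b)
            if base_wcet + moved * jump_cost <= wcet_budget:
                return layout  # timing guarantee holds by construction
        return original

    # Hypothetical movable blocks with static per-block cycle costs.
    blocks = [("init", 40), ("filter", 120), ("log", 25), ("emit", 60)]
    print(diversify(blocks, wcet_budget=300, jump_cost=5))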
Embedded virtualization has emerged as a valuable way to increase security, reduce costs, improve software quality, and decrease design time. The late adoption of hardware-assisted virtualization in embedded processors led to hypervisors based primarily on para-virtualization. Recently, embedded processor designers have developed virtualization extensions for their architectures similar to those adopted in cloud computing years earlier. Hypervisors are now migrating to a mixed approach, where basic operating system functionality takes advantage of full virtualization and advanced functionality such as inter-domain communication remains para-virtualized. In this paper, we discuss the key features for embedded virtualization. We show how our embedded hypervisor was designed to support these features, taking advantage of the hardware-assisted virtualization available in the MIPS family of processors. Different aspects of our hypervisor are evaluated and compared to similar approaches. A hardware platform was used to run benchmarks on virtualized instances of both Linux and an RTOS for performance analysis. The results show that our hypervisor is a sound solution for the IoT.
Compositional theories and technologies facilitate the decomposition of a complex system into components, as well as their integration via interfaces. Component interfaces hide the internal details of the components, thereby reducing integration complexity. A system is said to be composable if the properties established and validated for components in isolation continue to hold once the components are integrated to form the system. This raises the question: "Is compositionality related to composability?" This paper answers the question in the affirmative: it considers a previously known interface for compositionality and shows that it can be used for composability. It also presents a run-time policing mechanism for this interface.
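Run-time policing of a period/budget interface can be as simple as clipping a component's execution to its budget within each period. The Python sketch below is a minimal illustration under that assumption, not the mechanism presented in the paper:

    class BudgetPolicer:
        """Police a (period P, budget Q) interface: the component may
        consume at most Q time units of execution in each period."""

        def __init__(self, period, budget):
            self.period, self.budget = period, budget
            self.window_start, self.used = 0, 0

        def grant(self, now, requested):
            """Return how much execution time may be granted at `now`."""
            if now - self.window_start >= self.period:  # replenish budget
                elapsed = (now - self.window_start) // self.period
                self.window_start += elapsed * self.period
                self.used = 0
            allowed = min(requested, self.budget - self.used)
            self.used += allowed
            return allowed

    p = BudgetPolicer(period=10, budget=4)
    assert p.grant(0, 3) == 3   # within budget
    assert p.grant(5, 3) == 1   # clipped: only 1 unit left this period
    assert p.grant(12, 3) == 3  # replenished in the next period

Policing of this kind is what keeps a misbehaving component from consuming more than its declared interface, which is exactly the property composability relies on.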
Many solutions for composability and compositionality rely on specifying the interface of a component in terms of bandwidth. Some previous works specify a period (P) and a budget (Q) as the interface of a component: Q/P gives a bandwidth (the share of a processor that the component may request), while P specifies the time granularity at which this processing capacity is allocated. Other works add a deadline parameter, which can provide tighter bounds on how this processing capacity is distributed. Yet other works use the parameters α and Δ, where α is the bandwidth and Δ specifies how smoothly this bandwidth is distributed. It is known [4] that such bandwidth-like interfaces carry a cost: there are tasksets that could be guaranteed schedulable if the tasks were scheduled directly on the processor, but that no bandwidth-like interface can guarantee schedulable. It is also known that this penalty can be arbitrarily large, i.e., the use of bandwidth-like interfaces may require a processor that is k times faster, and one can show this for any k. This raises the question: "What is the average-case performance penalty of bandwidth-like interfaces?" This paper addresses that question. We answer it by randomly generating tasksets and, for each taskset, computing a lower bound on how much faster a processor needs to be when a bandwidth-like scheme is used. We do not consider any specific bandwidth-like scheme; instead, we derive an expression that states a lower bound on the required speedup for any such scheme. For the distributions considered in this paper, we find that (i) the experimental results depend on the experimental setup, (ii) the lower bound on the penalty was never larger than 4.0, (iii) for one experimental setup it exceeded 2.4 for every taskset, (iv) the histogram of the penalty appears to be unimodal, and (v) for implicit-deadline sporadic tasks, the lower bound on the penalty was exactly 1.
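To convey the flavor of such an experiment (a simplified illustration, not the paper's actual lower-bound expression), one can generate a random taskset, compute the speed EDF needs when the tasks run directly on the processor, and compare it with the bandwidth that one concrete bandwidth-like scheme, a bounded-delay supply sbf(t) = α(t − Δ), must reserve:

    import random

    def dbf(tasks, t):
        """EDF demand bound function for sporadic tasks (C, D, T)."""
        return sum(max(0, (t - d) // p + 1) * c for c, d, p in tasks)

    def deadlines(tasks, horizon):
        """Points where dbf steps: absolute deadlines up to the horizon."""
        pts = set()
        for c, d, p in tasks:
            pts.update(range(d, horizon + 1, p))
        return sorted(pts)

    def penalty(tasks, delay, horizon):
        pts = deadlines(tasks, horizon)
        direct = max(dbf(tasks, t) / t for t in pts)  # speed for direct EDF
        # Bandwidth needed so that alpha * (t - delay) covers dbf(t).
        alpha = max(dbf(tasks, t) / (t - delay) for t in pts if t > delay)
        return alpha / direct

    random.seed(1)
    tasks = [(random.randint(1, 5), random.randint(8, 20), random.randint(20, 50))
             for _ in range(4)]  # (C, D, T), constrained deadlines
    print(penalty(tasks, delay=4, horizon=1000))

The paper's contribution differs in that its lower bound holds for every bandwidth-like scheme rather than for one chosen supply model.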
NSTX used MDSplus extensively to record data, relay information, and control data acquisition hardware. For NSTX-U, the same functionality is expected, as well as an expansion into securely maintaining parameters for machine protection. Specifically, we designed the Digital Coil Protection System (DCPS) to use MDSplus to manage our physical and electrical limit values and to relay information about the state of our acquisition system to DCPS. Additionally, test and development systems need to use many of the same resources concurrently without interfering with other critical systems. A further complication is providing access to critical, protected data without risking changes by unauthorized users or through unsupported or uncontrolled methods, whether malicious or unintentional. To achieve a level of confidence with an existing software system designed with minimal security controls, we designed and implemented a number of changes to how MDSplus is used: trees would need to be verified and checked for changes before use, and concurrent creation of trees from vastly different use cases with varying requirements would need to be supported. This paper further discusses the impetus for developing such designs and the methods used to implement them.
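The verify-before-use idea can be pictured with a generic checksum sketch in Python (hypothetical, using the standard library rather than the actual MDSplus tooling; the parameter name is invented for illustration): a digest of the protection parameters is recorded when the limits are approved, and any later mismatch blocks their use:

    import hashlib
    import json

    def digest(params):
        """Canonical SHA-256 over the limit values, with key order fixed."""
        blob = json.dumps(params, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

    # Recorded when the limits were last reviewed and approved.
    approved = digest({"coil_1_max_amps": 24000})

    def load_limits(params, approved_digest):
        """Refuse to use limits that changed outside the controlled process."""
        if digest(params) != approved_digest:
            raise RuntimeError("protection limits modified outside controlled process")
        return params

    limits = load_limits({"coil_1_max_amps": 24000}, approved)

In the real system the verification must also cover tree structure and access paths, not just values, but the control flow is the same: check first, use only on a match.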