Biblio

Filters: Keyword is Real-time
2022-09-29
Zhang, Zhengjun, Liu, Yanqiang, Chen, Jiangtao, Qi, Zhengwei, Zhang, Yifeng, Liu, Huai.  2021.  Performance Analysis of Open-Source Hypervisors for Automotive Systems. 2021 IEEE 27th International Conference on Parallel and Distributed Systems (ICPADS). :530–537.
Automotive products today are intelligence intensive and thus inevitably handle multiple functionalities in the current high-speed networking environment. Embedded virtualization has high potential in the automotive industry, thanks to its advantages in function integration, resource utilization, and security. The introduction of ARM virtualization extensions has made it possible to run open-source hypervisors, such as Xen and KVM, for embedded applications. Nevertheless, little work has investigated the performance of these hypervisors on automotive platforms. This paper presents a detailed analysis of different types of open-source hypervisors that can be applied on the ARM platform. We carry out virtualization performance experiments on Xen and Jailhouse from the perspectives of CPU, memory, file I/O, and OS operation performance. A series of microbenchmark programs has been designed specifically to evaluate the real-time performance of the hypervisors and the relevant overhead. Compared with Xen, Jailhouse shows better and more stable latency with little interference jitter. The experimental results help us summarize the advantages and disadvantages of these hypervisors in automotive applications.
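The real-time microbenchmarks referred to above typically measure how late a periodic task wakes up relative to its deadline, which captures both latency and jitter. The following minimal sketch of such a wake-up latency probe is only an illustration of the idea, written in Python for readability; it is not the authors' benchmark suite, which runs inside each virtualized guest:

```python
import time
import statistics

def wakeup_latency(period_us=1000, iterations=1000):
    """Sleep until each periodic deadline and record how late the wake-up is.
    Max, mean, and standard deviation of the lateness give a rough picture
    of scheduling latency and jitter inside a (virtualized) guest."""
    period_ns = period_us * 1000
    lateness = []
    deadline = time.monotonic_ns() + period_ns
    for _ in range(iterations):
        remaining = deadline - time.monotonic_ns()
        if remaining > 0:
            time.sleep(remaining / 1e9)
        lateness.append(time.monotonic_ns() - deadline)  # nanoseconds past the deadline
        deadline += period_ns
    return {"max_ns": max(lateness),
            "avg_ns": statistics.mean(lateness),
            "jitter_ns": statistics.pstdev(lateness)}

print(wakeup_latency())
```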
2022-08-26
Wulf, Cornelia, Willig, Michael, Göhringer, Diana.  2021.  A Survey on Hypervisor-based Virtualization of Embedded Reconfigurable Systems. 2021 31st International Conference on Field-Programmable Logic and Applications (FPL). :249–256.
The increase in size, capabilities, and speed of FPGAs enables the shared usage of reconfigurable resources by multiple applications and even operating systems. While research on FPGA virtualization in HPC data centers and the cloud is already well advanced, it is a rather new concept for embedded systems. The necessity for FPGA virtualization of embedded systems results from the trend to integrate multiple environments into the same hardware platform. As multiple guest operating systems with different requirements, e.g., regarding real-time behavior, security, safety, or reliability, share the same resources, the focus of research lies on isolation under the constraint of having minimal impact on the overall system. Drivers for this development are, e.g., computation-intensive AI-based applications in the automotive or medical field, embedded 5G edge computing systems, or the consolidation of electronic control units (ECUs) on a centralized MPSoC with the goal of increasing reliability by reducing complexity. This survey outlines key concepts of hypervisor-based virtualization of embedded reconfigurable systems. Hypervisor approaches are compared and classified into FPGA-based hypervisors, MPSoC-based hypervisors, and hypervisors for distributed embedded reconfigurable systems. Strong points and limitations are pointed out, and future trends for the virtualization of embedded reconfigurable systems are identified.
2022-07-12
Akowuah, Francis, Kong, Fanxin.  2021.  Real-Time Adaptive Sensor Attack Detection in Autonomous Cyber-Physical Systems. 2021 IEEE 27th Real-Time and Embedded Technology and Applications Symposium (RTAS). :237—250.
Cyber-Physical Systems (CPS) tightly couple information technology with physical processes, which raises new vulnerabilities such as physical attacks that are beyond conventional cyber attacks. Attackers may non-invasively compromise sensors and spoof the controller into performing unsafe actions. This issue is only amplified by the increasing autonomy in CPS. While this fact has motivated many defense mechanisms against sensor attacks, a clear view of the timing and usability (i.e., the false alarm rate) of attack detection still remains elusive. Existing works tend to pursue the unachievable goal of minimizing the detection delay and the false alarm rate at the same time, even though there is a clear trade-off between the two metrics. Instead, we argue that attack detection should weight the metrics differently when the system sits in different states. For example, if the system is close to unsafe states, reducing the detection delay is preferable to lowering the false alarm rate, and vice versa. To achieve this, we make the following contributions. In this paper, we propose a real-time adaptive sensor attack detection framework. The framework can dynamically adapt the detection delay and false alarm rate so as to meet a detection deadline and improve usability according to the current system status. The core component of this framework is an attack detector that identifies anomalies using a CUSUM algorithm, monitoring the cumulative sum of differences (residuals) between the nominal (predicted) and observed sensor values. We augment this algorithm with a drift parameter that governs the detection delay and false alarm rate. The second component is a behavior predictor that estimates nominal sensor values, which are fed to the core component for calculating the residuals. The predictor uses a deep learning model that is extracted offline from sensor data by leveraging convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The model relies on little knowledge of the system (e.g., its dynamics), but uncovers and exploits both local and complex long-term dependencies in multivariate sequential sensor measurements. The third component is a drift adaptor that estimates a detection deadline and then determines the drift parameter fed to the detector for adjusting the detection delay and false alarms. Finally, we implement the proposed framework and validate it using realistic sensor data from automotive CPS to demonstrate its efficiency and efficacy.
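As a concrete illustration of the detector component, the sketch below implements a one-sided CUSUM test over the residuals between predicted and observed sensor values, with the drift parameter governing the trade-off between detection delay and false alarms. The data, drift, and threshold are illustrative assumptions, not the authors' implementation or parameters:

```python
import numpy as np

def cusum_detect(observed, predicted, drift, threshold):
    """One-sided CUSUM over residuals between observed and predicted values.
    A larger drift suppresses false alarms but lengthens detection delay;
    a smaller drift shortens the delay at the cost of more false alarms.
    Returns the index of the first alarm, or None if no alarm is raised."""
    s = 0.0
    for k, (y, y_hat) in enumerate(zip(observed, predicted)):
        residual = abs(y - y_hat)           # deviation from nominal behavior
        s = max(0.0, s + residual - drift)  # accumulate evidence, bleed off by drift
        if s > threshold:
            return k                        # alarm: likely sensor attack
    return None

# Toy example: nominal readings plus a small spoofing bias injected after step 60.
rng = np.random.default_rng(0)
nominal = np.sin(np.linspace(0, 6, 120))
observed = nominal + rng.normal(0, 0.02, 120)
observed[60:] += 0.15
print(cusum_detect(observed, nominal, drift=0.05, threshold=1.0))
```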
2022-05-20
Zahra, Ayima, Asif, Muhammad, Nagra, Arfan Ali, Azeem, Muhammad, Gilani, Syed A..  2021.  Vulnerabilities and Security Threats for IoT in Transportation and Fleet Management. 2021 4th International Conference on Computing Information Sciences (ICCIS). :1–5.
The fields of transportation and fleet management have been evolving at a rapid pace and most of these changes are due to numerous incremental developments in the area. However, a comprehensive study that critically compares and contrasts all the existing techniques and methodologies in the area is still missing. This paper presents a comparative analysis of the vulnerabilities and security threats for IoT and their mitigation strategies in the context of transportation and fleet management. Moreover, we attempt to classify the existing strategies based on their underlying principles.
2021-11-29
Zhang, Lin, Chen, Xin, Kong, Fanxin, Cardenas, Alvaro A..  2020.  Real-Time Attack-Recovery for Cyber-Physical Systems Using Linear Approximations. 2020 IEEE Real-Time Systems Symposium (RTSS). :205–217.
Attack detection and recovery are fundamental elements for the operation of safe and resilient cyber-physical systems. Most of the literature focuses on attack detection while leaving attack recovery as an open problem. In this paper, we propose novel attack-recovery control for securing cyber-physical systems. Our recovery control consists of new concepts required for a safe response to attacks, including the removal of poisoned data, the estimation of the current state, a prediction of the reachable states, and the online design of a new controller to recover the system. The synthesis of such recovery controllers for cyber-physical systems has barely been investigated so far. To fill this void, we present a formal-methods-based approach to compute, online, a recovery control sequence that steers a system under an ongoing sensor attack from the current state to a target state such that no unsafe state is reachable on the way. The method solves a reach-avoid problem on a Linear Time-Invariant (LTI) model under an error bound ε ≥ 0. The obtained recovery control is guaranteed to work on the original system if the behavioral difference between the LTI model and the system's plant dynamics is not larger than ε. Since a recovery control must be obtained and applied at the runtime of the system, in order to keep its computational cost as low as possible, our approach first builds a linear programming restriction with correspondingly constrained safety and target specifications for the given reach-avoid problem, and then uses a linear programming solver to find a solution. To demonstrate the effectiveness of our method, we provide (a) a comparison with previous work on 5 system models under 3 sensor attack scenarios: modification, delay, and replay; and (b) a scalability analysis based on a scalable model to evaluate the performance of our method on large-scale systems.
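To make the reach-avoid formulation concrete, the sketch below sets up a feasibility linear program for a discrete-time LTI model with box-shaped safe and target sets, using scipy.optimize.linprog. It is an assumed, simplified structure for such an LP; it omits the paper's error bound ε and is not the authors' formulation:

```python
import numpy as np
from scipy.optimize import linprog

def recovery_lp(A, B, x0, N, safe_lo, safe_hi, tgt_lo, tgt_hi, u_max):
    """Find inputs u_0..u_{N-1} steering x_{k+1} = A x_k + B u_k from x0 into
    the target box in N steps while every intermediate state stays inside the
    safe box. Boxes are given as component-wise lower/upper bound vectors.
    Returns the control sequence, or None if the LP is infeasible."""
    n, m = B.shape
    powers = [np.linalg.matrix_power(A, k) for k in range(N + 1)]
    A_ub, b_ub = [], []
    for k in range(1, N + 1):
        # x_k = A^k x0 + sum_j A^(k-1-j) B u_j, i.e. affine in the stacked input
        G = np.zeros((n, N * m))
        for j in range(k):
            G[:, j * m:(j + 1) * m] = powers[k - 1 - j] @ B
        c = powers[k] @ x0                  # free response at step k
        lo, hi = (tgt_lo, tgt_hi) if k == N else (safe_lo, safe_hi)
        A_ub += [G, -G]                     # lo <= G u + c <= hi
        b_ub += [hi - c, c - lo]
    res = linprog(c=np.zeros(N * m), A_ub=np.vstack(A_ub),
                  b_ub=np.concatenate(b_ub),
                  bounds=[(-u_max, u_max)] * (N * m), method="highs")
    return res.x.reshape(N, m) if res.success else None
```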
2021-01-25
Dangal, P., Bloom, G..  2020.  Towards Industrial Security Through Real-time Analytics. 2020 IEEE 23rd International Symposium on Real-Time Distributed Computing (ISORC). :156–157.

Industrial control system (ICS) denotes a system consisting of actuators, control stations, and network that manages processes and functions in an industrial setting. The ICS community faces two major problems to keep pace with the broader trends of Industry 4.0: (1) a data rich, information poor (DRIP) syndrome, and (2) risk of financial and safety harms due to security breaches. In this paper, we propose a private cloud in the loop ICS architecture for real-time analytics that can bridge the gap between low data utilization and security hardening.

2020-09-28
Sliwa, Benjamin, Haferkamp, Marcus, Al-Askary, Manar, Dorn, Dennis, Wietfeld, Christian.  2018.  A radio-fingerprinting-based vehicle classification system for intelligent traffic control in smart cities. 2018 Annual IEEE International Systems Conference (SysCon). :1–5.
The measurement and provision of precise and up-to-date traffic-related key performance indicators is a crucial factor for intelligent traffic control systems in upcoming smart cities. The street network is considered a highly dynamic Cyber-Physical System (CPS) where measured information forms the foundation for dynamic control methods aiming to optimize the overall system state. Apart from global system parameters like traffic flow and density, specific data, such as the velocity of individual vehicles as well as vehicle type information, can be leveraged for highly sophisticated traffic control methods like dynamic type-specific lane assignments. Consequently, solutions for acquiring these kinds of information are required and have to comply with strict requirements ranging from accuracy over cost-efficiency to privacy preservation. In this paper, we present a system for classifying vehicles based on their radio fingerprint. In contrast to other approaches, the proposed system provides real-time-capable and precise vehicle classification as well as cost-efficient installation and maintenance, privacy preservation, and weather independence. The system's performance in terms of accuracy and resource efficiency is evaluated in the field using comprehensive measurements. Using a machine-learning-based approach, the resulting success ratio for classifying cars and trucks is above 99%.
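For readers unfamiliar with the general pipeline, radio-fingerprint classification of this kind reduces to supervised learning on received-signal features. The sketch below uses randomly generated stand-ins for the attenuation profiles and an off-the-shelf random forest purely to illustrate the train/evaluate flow; it reflects neither the paper's features nor its model nor its 99% result:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical data: each row stands in for a received-signal profile recorded
# while a vehicle passes the radio link; labels distinguish cars (0) from trucks (1).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))  # ~0.5 here, since the data is random
```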
2020-05-15
Fan, Renshi, Du, Gaoming, Xu, Pengfei, Li, Zhenmin, Song, Yukun, Zhang, Duoli.  2019.  An Adaptive Routing Scheme Based on Q-learning and Real-time Traffic Monitoring for Network-on-Chip. 2019 IEEE 13th International Conference on Anti-counterfeiting, Security, and Identification (ASID). :244—248.
In Network-on-Chip (NoC) design, performance optimization has always been a research focus. Compared with static routing schemes, dynamic routing schemes can better reduce packet transmission latency under network congestion. In this paper, we propose a dynamic Q-learning routing approach with real-time monitoring of the NoC. First, we design a real-time monitoring scheme and the corresponding circuits to record the traffic congestion status of the NoC. Second, we propose a novel Q-learning method that finds an optimal path based on the lowest traffic congestion. Finally, we dynamically redistribute network tasks to increase packet transmission speed and balance the traffic load. Compared with C-XY routing and DyXY routing, our method achieves improvements of 25.6%–49.5% and 22.9%–43.8%, respectively.
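The learning step can be viewed as a standard temporal-difference update over congestion costs. The sketch below is a generic Q-routing-style episode with an ε-greedy next-hop choice; the data structures (q, links, congestion) and all hyperparameters are illustrative assumptions, not the paper's hardware design:

```python
import random

def q_route_episode(q, links, congestion, src, dst,
                    alpha=0.5, gamma=0.9, eps=0.1, max_hops=64):
    """Route one packet from src to dst with an epsilon-greedy policy.
    q[(node, hop)] estimates the congestion cost-to-go toward dst via hop
    (initialize to 0 for every pair); congestion[(node, hop)] is the load
    reported by the runtime monitors."""
    node, path = src, [src]
    for _ in range(max_hops):
        if node == dst:
            break
        hops = links[node]
        nxt = (random.choice(hops) if random.random() < eps
               else min(hops, key=lambda h: q[(node, h)]))
        cost = congestion[(node, nxt)]
        best_next = 0.0 if nxt == dst else min(q[(nxt, h)] for h in links[nxt])
        # temporal-difference update toward the observed link congestion
        q[(node, nxt)] += alpha * (cost + gamma * best_next - q[(node, nxt)])
        node = nxt
        path.append(node)
    return path
```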
2019-08-26
Hasircioglu, Burak, Pignolet, Yvonne-Anne, Sivanthi, Thanikesavan.  2018.  Transparent Fault Tolerance for Real-Time Automation Systems. Proceedings of the 1st International Workshop on Internet of People, Assistive Robots and Things. :7-12.

Developing software is hard. Developing software that is resilient and does not crash upon unexpected inputs or events is even harder, especially with IoT devices and real-time requirements, e.g., due to interactions with human beings. Therefore, there is a need for a software architecture that helps software developers build fault-tolerant software with as little pain and effort as possible. To this end, we have designed a fault tolerance framework for automation systems that lets developers remain mostly oblivious to fault tolerance issues, so that they can focus on the application logic encapsulated in (micro)services. That is, the developer only needs to specify the required fault tolerance level by description, not implementation. The fault tolerance aspects are transparent to the developer, as the framework takes care of them. This approach is particularly suited to the development of mixed-criticality systems, where different parts have very different and demanding functional and non-functional requirements. Such systems need highly specialized developers, and removing the burden of fault tolerance results in a faster time to market and in safer, more dependable systems.

2019-02-25
Pareek, Alok, Khaladkar, Bhushan, Sen, Rajkumar, Onat, Basar, Nadimpalli, Vijay, Lakshminarayanan, Mahadevan.  2018.  Real-time ETL in Striim. Proceedings of the International Workshop on Real-Time Business Intelligence and Analytics. :3:1–3:10.
In the new digital economy, on-demand access to real-time enterprise data is critical to modernize cross-organizational, cross-partner, and online consumer functions. In addition to on-premise legacy data, enterprises are producing an enormous amount of real-time data through new hybrid cloud applications; these event streams need to be collected, transformed, and analyzed in real time to make critical business decisions. Traditional Extract-Transform-Load (ETL) processes are no longer sufficient and need to be re-architected to account for streaming, heterogeneity, usability, extensibility (custom processing), and continuous validity. Striim is a novel end-to-end distributed streaming ETL and intelligence platform that enables rapid development and deployment of streaming applications. Striim's real-time ETL engine has been architected from the ground up to enable both business users and developers to build and deploy streaming applications. In this paper, we describe some of the core features of Striim's ETL engine: (i) built-in adapters to extract and load data in real time from legacy and new cloud sources/targets; (ii) an extensible SQL-based transformation engine to transform events; (iii) a component called Open Processor through which users can inject custom logic; (iv) new primitives such as MODIFY, BEFORE, and AFTER; and (v) built-in data validation that continuously checks whether everything is making it to the destination.
2018-12-10
Oyekanlu, E..  2018.  Distributed Osmotic Computing Approach to Implementation of Explainable Predictive Deep Learning at Industrial IoT Network Edges with Real-Time Adaptive Wavelet Graphs. 2018 IEEE First International Conference on Artificial Intelligence and Knowledge Engineering (AIKE). :179–188.
Challenges associated with developing analytics solutions at the edge of large-scale Industrial Internet of Things (IIoT) networks, close to where data is being generated, in most cases involve developing analytics solutions from the ground up. However, this approach increases IoT development costs and system complexity, delays time to market, and ultimately lowers the competitive advantages associated with delivering next-generation IoT designs. To overcome these challenges, existing, widely available hardware can be utilized to participate successfully in distributed edge computing for IIoT systems. In this paper, an osmotic computing approach is used to illustrate how distributed osmotic computing and existing low-cost hardware can be utilized to solve a complex, compute-intensive Explainable Artificial Intelligence (XAI) deep learning problem from the edge, through the fog, to the network cloud layer of IIoT systems. At the edge layer, the C28x digital signal processor (DSP), an existing low-cost, embedded, real-time DSP with very wide deployment and integration in several IoT industries, is used as a case study for constructing real-time graph-based Coiflet wavelets that could be used for several analytic applications, including deep learning pre-processing at the edge and fog layers of IIoT networks. Our implementation is the first known application of the fixed-point C28x DSP to construct Coiflet wavelets. The Coiflet wavelets are constructed in the form of an osmotic microservice, using embedded low-level machine language to program the C28x at the network edge. With the graph-based approach, it is shown that an entire Coiflet wavelet distribution can be generated from only one wavelet stored in the C28x-based edge device, which can lead to significant memory savings at the edge of IoT networks. The Pearson correlation coefficient is used to select an edge-generated Coiflet wavelet, and the selected wavelet is used at the fog layer for pre-processing and denoising IIoT data to improve data quality for fog-layer deep learning applications. Parameters for implementing deep learning at the fog layer using LSTM networks have been determined in the cloud. For XAI, communication network noise is shown to have a significant impact on the results of predictive deep learning at the IIoT network fog layer.
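The wavelet-selection step described above boils down to scoring edge-generated candidates against a reference with the Pearson correlation coefficient and keeping the best match. The sketch below shows only that selection; the candidate and reference filters are assumed inputs, and the Coiflet construction on the C28x itself is not reproduced:

```python
import numpy as np

def select_wavelet(candidates, reference):
    """Pick the candidate filter most correlated with the reference.
    `candidates` maps a name to a 1-D filter of the same length as `reference`;
    the Pearson correlation coefficient is used as the selection score."""
    def pearson(a, b):
        return np.corrcoef(np.asarray(a, float), np.asarray(b, float))[0, 1]
    scores = {name: pearson(flt, reference) for name, flt in candidates.items()}
    best = max(scores, key=scores.get)
    return best, scores
```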
2018-10-26
Zhou, Wenxuan, Croft, Jason, Liu, Bingzhe, Caesar, Matthew.  2017.  NEAt: Network Error Auto-Correct. Proceedings of the Symposium on SDN Research. :157–163.

Configuring and maintaining an enterprise network is a challenging and error-prone process. Administrators must often consider security policies from a variety of sources simultaneously, including regulatory requirements, industry standards, and the need to mitigate attack vectors. Erroneous implementation of a policy, however, can result in costly data breaches and intrusions. Relying on humans to discover and troubleshoot violations is slow and prone to error, considering the speed at which new attack vectors propagate and the increasing network dynamics, partly an effect of SDN. To ensure the network is always in a state consistent with the desired policies, administrators need frameworks to automatically diagnose and repair violations in real time. To address this problem, we present NEAt, a system analogous to a smartphone's autocorrect feature that enables on-the-fly repair of policy-violating updates. NEAt modifies the forwarding behavior of updates to automatically repair violations of properties such as reachability, service chaining, and segmentation. NEAt sits between an SDN controller and the forwarding devices, and intercepts updates proposed by SDN applications. If an update violates the policy defined by an administrator, such as reachability or segmentation, NEAt transforms the update into one that complies with the policy. Unlike domain-specific languages or synthesis platforms, NEAt allows enterprise networks to leverage the advanced functionality of SDN applications while simultaneously achieving strong, automated enforcement of general policies.
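The policy check that triggers a repair can be illustrated with a single-path forwarding model: a proposed update violates a reachability policy if it introduces a forwarding loop or a black hole on the path from src to dst. The sketch below shows only this simplified violation check; NEAt itself goes further and synthesizes a compliant replacement update:

```python
def violates_reachability(forwarding, update, src, dst):
    """Return True if applying `update` (switch -> next hop) to the current
    forwarding map breaks the policy 'src must be able to reach dst', i.e.
    the resulting path loops or dead-ends before reaching dst."""
    rules = dict(forwarding)
    rules.update(update)            # tentatively apply the proposed update
    node, seen = src, set()
    while node != dst:
        if node in seen or node not in rules:   # loop or black hole
            return True
        seen.add(node)
        node = rules[node]
    return False

# Example: rerouting s2 to s4 breaks reachability because s4 has no rule toward dst.
forwarding = {"s1": "s2", "s2": "s3", "s3": "dst"}
print(violates_reachability(forwarding, {"s2": "s4"}, "s1", "dst"))  # True
```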

2018-05-09
Fellmuth, J., Herber, P., Pfeffer, T. F., Glesner, S..  2017.  Securing Real-Time Cyber-Physical Systems Using WCET-Aware Artificial Diversity. 2017 IEEE 15th Intl Conf on Dependable, Autonomic and Secure Computing, 15th Intl Conf on Pervasive Intelligence and Computing, 3rd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress(DASC/PiCom/DataCom/CyberSciTech). :454–461.

Artificial software diversity is an effective way to prevent software vulnerabilities and errors from being exploited in code-reuse attacks. This is achieved by lowering the individual probability of a successful attack to a level that makes the attack infeasible. Unfortunately, existing approaches are not applicable to safety-critical real-time systems, as they induce unacceptable performance overheads, violate safety and timing guarantees, or assume hardware resources that are typically not available in embedded systems. To overcome these problems, we propose a safe diversity approach that preserves the timing properties of real-time processes by controlling its impact on the worst-case execution time (WCET). Our main idea is to use block-level diversity with a large, but fixed, set of movable instruction sequences, and to use static WCET analysis to identify non-critical areas of code where it can safely be split into more movable instruction sequences.

2017-05-30
Moratelli, Carlos, Johann, Sergio, Hessel, Fabiano.  2016.  Exploring Embedded Systems Virtualization Using MIPS Virtualization Module. Proceedings of the ACM International Conference on Computing Frontiers. :214–221.

Embedded virtualization has emerged as a valuable way to increase security, reduce costs, improve software quality, and decrease design time. The late adoption of hardware-assisted virtualization in embedded processors led to hypervisors based primarily on para-virtualization. Recently, embedded processor designers developed virtualization extensions for their processor architectures similar to those adopted in cloud computing years ago. Now, hypervisors are migrating to a mixed approach, where basic operating system functionalities take advantage of full virtualization and advanced functionalities such as inter-domain communication remain para-virtualized. In this paper, we discuss the key features for embedded virtualization. We show how our embedded hypervisor was designed to support these features, taking advantage of the hardware-assisted virtualization available in the MIPS family of processors. Different aspects of our hypervisor are evaluated and compared to other similar approaches. A hardware platform was used to run benchmarks on virtualized instances of both Linux and an RTOS for performance analysis. Finally, the results obtained show that our hypervisor can be applied as a sound solution for the IoT.

2017-05-16
Andersson, Björn.  2016.  Turning Compositionality into Composability. SIGBED Rev.. 13:25–30.

Compositional theories and technologies facilitate the decomposition of a complex system into components, as well as their integration via interfaces. Component interfaces hide the internal details of the components, thereby reducing integration complexity. A system is said to be composable if the properties established and validated for components in isolation hold once the components are integrated to form the system. This brings us to the question: "Is compositionality related to composability?" This paper answers the question in the affirmative; it considers a previously known interface for compositionality and shows that it can be used for composability. It also presents a run-time policing mechanism for this interface.

Andersson, Björn.  2016.  Evaluating the Average-case Performance Penalty of Bandwidth-like Interfaces. SIGBED Rev.. 13:33–40.

Many solutions for composability and compositionality rely on specifying the interface for a component using bandwidth. Some previous works specify a period (P) and a budget (Q) as the interface for a component: Q/P provides a bandwidth (the share of a processor that this component may request), while P specifies the time granularity of the allocation of this processing capacity. Other works add another parameter, a deadline, which can help provide tighter bounds on how this processing capacity is distributed. Yet other works use the parameters α and Δ, where α is the bandwidth and Δ specifies how smoothly this bandwidth is distributed. It is known [4] that such bandwidth-like interfaces carry a cost: there are tasksets that could be guaranteed to be schedulable if tasks were scheduled directly on the processor, but with bandwidth-like interfaces, it is impossible to guarantee the taskset to be schedulable. It is also known that this penalty can be infinite, i.e., the use of bandwidth-like interfaces may require a processor that is k times faster, and one can show this for any k. This raises the question: "What is the average-case performance penalty of bandwidth-like interfaces?" This paper addresses this question. We answer it by randomly generating tasksets and then, for each of these tasksets, computing a lower bound on how much faster a processor needs to be when a bandwidth-like scheme is used. We do not consider any specific bandwidth-like scheme; instead, we derive an expression that states a lower bound on how much faster a processor needs to be when a bandwidth-like scheme is used. For the distributions considered in this paper, we find that (i) the experimental results depend on the experimental setup, (ii) this lower bound on the penalty was never larger than 4.0, (iii) for one experimental setup, for each taskset, it was greater than 2.4, (iv) the histogram of this penalty appears to be unimodal, and (v) for implicit-deadline sporadic tasks, this lower bound on the penalty was exactly 1.
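The experimental setup starts from randomly generated tasksets; a common way to draw task utilizations is the UUniFast procedure. The sketch below shows only that harness, with the paper's lower-bound expression left as a caller-supplied function rather than reproduced here:

```python
import random

def uunifast(n, total_util):
    """Draw n task utilizations summing to total_util, uniformly over the
    valid region (the UUniFast procedure of Bini and Buttazzo)."""
    utils, remaining = [], total_util
    for i in range(1, n):
        nxt = remaining * random.random() ** (1.0 / (n - i))
        utils.append(remaining - nxt)
        remaining = nxt
    utils.append(remaining)
    return utils

def random_taskset(n, total_util, period_range=(10, 1000)):
    """Implicit-deadline sporadic tasks as (C, T) pairs with C = U_i * T_i."""
    tasks = []
    for u in uunifast(n, total_util):
        period = random.randint(*period_range)
        tasks.append((u * period, period))
    return tasks

def average_penalty(num_sets, n, total_util, penalty_lower_bound):
    """Monte-Carlo harness; `penalty_lower_bound` stands in for the paper's
    closed-form expression, which is not reproduced here."""
    sets = [random_taskset(n, total_util) for _ in range(num_sets)]
    return sum(penalty_lower_bound(ts) for ts in sets) / num_sets
```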

2017-02-23
Tchilinguirian, G. J., Erickson, K. G..  2015.  Securing MDSplus for the NSTX-U Digital Coil Protection System. 2015 IEEE 26th Symposium on Fusion Engineering (SOFE). :1–4.

NSTX used MDSplus extensively to record data, relay information and control data acquisition hardware. For NSTX-U the same functionality is expected as well as an expansion into the realm of securely maintaining parameters for machine protection. Specifically, we designed the Digital Coil Protection System (DCPS) to use MDSplus to manage our physical and electrical limit values and relay information about the state of our acquisition system to DCPS. Additionally, test and development systems need to use many of the same resources concurrently without causing interference with other critical systems. Further complications include providing access to critical, protected data without risking changes being made to it by unauthorized users or through unsupported or uncontrolled methods either maliciously or unintentionally. To achieve a level of confidence with an existing software system designed with minimal security controls, a number of changes to how MDSplus is used were designed and implemented. Trees would need to be verified and checked for changes before use. Concurrent creation of trees from vastly different use-cases and varying requirements would need to be supported. This paper will further discuss the impetus for developing such designs and the methods used to implement them.