Bibliography
Heterogeneous system-on-chip platforms with multiple processing cores are becoming increasingly common in safety- and security-critical embedded systems. To facilitate logical isolation of physically connected on-chip components, the internal communication links of such platforms are often equipped with dedicated access protection units. When performed manually, however, the configuration of these units can be both time-consuming and error-prone. To resolve this issue, we present a formal model and a corresponding design methodology that allow developers to specify access permissions and information flow requirements for embedded systems in a mostly platform-independent manner. As part of the methodology, the consistency between the permissions and the requirements is verified automatically, and an extensible generation framework transforms the abstract permission declarations into configuration code for individual access protection units. We present a prototypical implementation of this approach and validate it by generating configuration code for the access protection unit of a commercially available multiprocessor system-on-chip.
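As a rough illustration of the last step, the sketch below shows how abstract permission declarations might be lowered to per-region configuration entries; the permission format, master and region names, and register layout are invented for illustration and are not taken from the platform or framework described above.

```python
# Hypothetical sketch: abstract access permissions -> protection-unit config entries.
# All names (masters, regions, register layout) are invented for illustration.

PERM_BITS = {"r": 0b100, "w": 0b010, "x": 0b001}

# Platform-independent permission declarations: (master, memory region, rights)
permissions = [
    ("cpu0", "secure_ram", "rwx"),
    ("dma0", "secure_ram", ""),      # explicitly no access
    ("cpu1", "shared_ram", "rw"),
]

# Platform-specific mapping of symbolic names to IDs/addresses (invented values).
MASTER_IDS = {"cpu0": 0, "cpu1": 1, "dma0": 2}
REGIONS = {"secure_ram": (0x2000_0000, 0x2000_FFFF),
           "shared_ram": (0x2001_0000, 0x2001_FFFF)}

def generate_config(perms):
    """Translate abstract permissions into (start, end, master_id, perm_mask) tuples."""
    entries = []
    for master, region, rights in perms:
        mask = 0
        for r in rights:
            mask |= PERM_BITS[r]
        start, end = REGIONS[region]
        entries.append((start, end, MASTER_IDS[master], mask))
    return entries

if __name__ == "__main__":
    for start, end, mid, mask in generate_config(permissions):
        # Emit C-like initializer lines a firmware build could include.
        print(f"{{ .start=0x{start:08X}, .end=0x{end:08X}, .master={mid}, .perm=0x{mask:X} }},")
```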
This work describes a top-down systems security requirements analysis approach for understanding and eliciting security requirements for a notional small unmanned aerial system (SUAS). More specifically, the System-Theoretic Process Analysis approach for Security (STPA-Sec) is used to understand and elicit systems security requirements. The effort employs STPA-Sec on a notional SUAS case study to detail the development of functional-level security requirements, design-level engineering considerations, and architectural-level security specification criteria early in the system life cycle, when the solution trade space is largest, rather than merely examining components and adding protections during system operation or sustainment. These details were elaborated during a semester-long independent study research effort by two United States Air Force Academy Systems Engineering cadets, guided by their instructor and a series of working-group sessions with UAS operators and subject matter experts. This work provides insight into a viable systems security requirements analysis approach that results in traceable security, safety, and resiliency requirements that can be designed-for, built-to, and verified with confidence.
The Internet of Things (IoT) is experiencing significant growth in safety-critical applications, which has created new security challenges. These devices are becoming targets for different types of physical attacks, which are exacerbated by their diversity and accessibility. Therefore, there is a pressing need to support embedded software developers in identifying and remediating vulnerabilities and in creating applications that are resilient against such attacks. In this paper, we propose a hardware security vulnerability assessment based on fault injection against an embedded application. In our security assessment, we apply a fault injection attack using our clock glitch generator on a critical medical IoT device. Furthermore, we analyze the potential risks of ignoring these attacks in this embedded application. The results inform embedded software developers of various security risks and the steps required to improve the security of similar MCU-based applications. Our hardware security assessment approach is easy to apply and can lead to embedded IoT applications that are secure against fault attacks.
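To make the assessment workflow concrete, the following sketch outlines a generic glitch-parameter sweep; the Glitcher and Target classes are hypothetical stand-ins for bench equipment and the device under test, not the clock glitch generator used in the paper.

```python
# Hypothetical fault-injection campaign: sweep clock-glitch parameters and record
# whether the target's response deviates from the expected (golden) output.
import random

class Glitcher:
    def arm(self, offset_ns, width_ns): ...      # stand-in for real glitch hardware

class Target:
    def run_protected_operation(self):
        # Placeholder: in a real campaign this triggers the device under test.
        return "OK" if random.random() > 0.02 else "CORRUPTED"

def campaign(glitcher, target, offsets, widths, expected="OK"):
    faults = []
    for off in offsets:
        for w in widths:
            glitcher.arm(off, w)
            if target.run_protected_operation() != expected:
                faults.append((off, w))          # parameter pair that caused a fault
    return faults

if __name__ == "__main__":
    hits = campaign(Glitcher(), Target(),
                    offsets=range(0, 1000, 50), widths=range(5, 30, 5))
    print(f"{len(hits)} glitch parameter pairs produced faulty behaviour")
```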
GitHub Gist is a service provided by GitHub that developers use to share code snippets. While sharing, developers may inadvertently introduce security smells into code snippets, such as hard-coded passwords. Security smells are recurrent coding patterns that are indicative of security weaknesses and could potentially lead to security breaches. The goal of this paper is to help software practitioners avoid insecure coding practices through an empirical study of security smells in publicly available GitHub Gists. Through static analysis, we found 13 types of security smells with 4,403 occurrences in 5,822 publicly available Python Gists. 1,817 of those Gists, around 31%, have at least one security smell, including 689 instances of hard-coded secrets. We also found no significant relation between the presence of these security smells and the reputation of the Gist author. Based on our findings, we advocate for increased awareness and rigorous code review efforts related to software security for GitHub Gists so that the propagation of insecure coding practices is mitigated.
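As a minimal illustration of one such smell, the sketch below flags likely hard-coded secrets in a Python snippet; the regular expression is illustrative and is not the rule set used in the study.

```python
# Minimal sketch of one "security smell" check: hard-coded secrets in Python snippets.
import re

SECRET_PATTERN = re.compile(
    r"""(password|passwd|pwd|secret|api_key|token)\s*=\s*['"][^'"]+['"]""",
    re.IGNORECASE,
)

def find_hardcoded_secrets(source: str):
    """Return (line_number, line_text) pairs that look like hard-coded secrets."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if SECRET_PATTERN.search(line):
            hits.append((lineno, line.strip()))
    return hits

if __name__ == "__main__":
    gist = 'db_password = "hunter2"\nprint("connecting...")\n'
    for lineno, line in find_hardcoded_secrets(gist):
        print(f"line {lineno}: possible hard-coded secret -> {line}")
```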
Software vulnerabilities often remain hidden until an attacker exploits the weak/insecure code. Therefore, testing software from a vulnerability discovery perspective becomes challenging for developers if they do not inspect their code thoroughly (which is time-consuming). We propose that vulnerability prediction using certain software metrics can support the testing process by identifying vulnerable code components (e.g., functions, classes, etc.). Once a code component is predicted as vulnerable, developers can focus their testing efforts on it, thereby avoiding the time/effort required to test the entire application. This paper presents a study that compares how software metrics perform as vulnerability predictors for software projects developed in two different languages (Java vs. Python). The goal of this research is to analyze the vulnerability prediction performance of software metrics across programming languages. We designed and conducted experiments on security vulnerabilities reported for three Java projects (Apache Tomcat 6, Tomcat 7, Apache CXF) and two Python projects (Django and Keystone). In this paper, we focus on a specific type of code component: functions. We apply machine learning models to predict vulnerable functions. Overall, the results show that software-metrics-based vulnerability prediction is more useful for Java projects than for Python projects (i.e., software metrics used as features predicted vulnerable Java functions with higher recall and precision than vulnerable Python functions).
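A minimal sketch of this kind of prediction pipeline is shown below, using synthetic metric values and labels and an assumed random-forest classifier; it illustrates the general setup, not the study's exact features or models.

```python
# Sketch of metrics-based vulnerable-function prediction (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Toy feature matrix: one row per function, columns = software metrics
# (e.g., lines of code, cyclomatic complexity, fan-in, fan-out).
X = rng.integers(1, 200, size=(500, 4)).astype(float)
# Toy labels: 1 = vulnerable function, 0 = neutral (synthetic, imbalanced).
y = (rng.random(500) < 0.15).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("precision:", precision_score(y_te, pred, zero_division=0))
print("recall:   ", recall_score(y_te, pred, zero_division=0))
```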
Verifying complex Cyber-Physical Systems (CPS) is increasingly important given the push to deploy safety-critical autonomous features. Unfortunately, traditional verification methods do not scale to the complexity of these systems and do not provide systematic methods to protect verified properties when not all components can be verified. To address these challenges, this paper proposes a real-time mixed-trust computing framework that combines verification and protection. The framework introduces a new task model in which an application task can have both an untrusted and a trusted part. The untrusted part allows complex computations supported by a full OS with a real-time scheduler running in a VM hosted by a trusted hypervisor. The trusted part is executed by another scheduler within the hypervisor and is thus protected from the untrusted part. If the untrusted part fails to finish by a specific time, the trusted part is activated to preserve safety (e.g., prevent a crash), including its timing guarantees. This framework is the first to allow the use of untrusted components for CPS critical functions while preserving logical and timing guarantees, even in the presence of malicious attackers. We present the framework design and implementation along with the schedulability analysis and the coordination protocol between the trusted and untrusted parts. We also present our Raspberry Pi 3 implementation, experiments showing the behavior of the system under failures of untrusted components, and a drone application that demonstrates its practicality.
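The timing behavior can be illustrated conceptually as follows: the untrusted computation is given a budget, and a small trusted fallback takes over if the deadline is missed. This is a plain-Python sketch, not the hypervisor-based implementation described above.

```python
# Conceptual sketch (not the paper's hypervisor implementation): the untrusted
# computation gets a time budget; if it misses its deadline, a simple trusted
# fallback produces the control output instead, so an output is always on time.
import multiprocessing as mp
import time

def untrusted_controller(queue):
    # Stand-in for a complex computation running on the untrusted OS.
    time.sleep(0.5)                      # pretend this sometimes overruns
    queue.put({"cmd": "complex_maneuver"})

def trusted_fallback():
    # Small, verified safety action (e.g., hover / safe stop).
    return {"cmd": "hold_safe_state"}

def run_period(deadline_s=0.1):
    q = mp.Queue()
    p = mp.Process(target=untrusted_controller, args=(q,))
    p.start()
    p.join(timeout=deadline_s)           # wait at most one deadline
    if p.is_alive():                     # untrusted part missed its deadline
        p.terminate()
        return trusted_fallback()
    return q.get()

if __name__ == "__main__":
    print(run_period())                  # prints the trusted fallback command
```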
Emerging intelligent systems have stringent constraints, including cost and power consumption. When they are used in critical applications, resiliency becomes another key requirement. Much research into techniques for fault tolerance and dependability has been successfully applied to highly critical systems, such as those used in space, where cost is not an overriding constraint. Furthermore, most resiliency techniques have focused on dealing with failures in the hardware and bugs in the software. The next generation of systems used in critical applications will also have to tolerate test escapes after manufacturing, soft errors and transients in the electronics, hardware bugs, hardware and software Trojans and viruses, as well as intrusions and other security attacks during operation. This paper assesses the impact of these threats on the results produced by a critical system and proposes solutions to each of them. It is argued that run-time checks at the application level are necessary to deal with errors in the results.
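A toy example of such an application-level run-time check is validating a computed result against a cheap invariant, as sketched below (the linear-system example is illustrative, not from the paper).

```python
# Minimal example of an application-level result check: validate the output of a
# computation against a cheap invariant instead of trusting every layer below it.
import numpy as np

def solve_linear_system(A, b):
    # Stand-in for an accelerated or third-party solver whose result we do not trust blindly.
    return np.linalg.solve(A, b)

def checked_solve(A, b, tol=1e-6):
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = solve_linear_system(A, b)
    residual = np.linalg.norm(A @ x - b)     # application-level acceptance test
    if residual > tol:
        raise RuntimeError(f"result check failed, residual={residual:.3e}")
    return x

if __name__ == "__main__":
    print(checked_solve([[2.0, 0.0], [0.0, 4.0]], [2.0, 8.0]))   # -> [1. 2.]
```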
In this paper, we analyze the cyber resilience of energy delivery systems (EDS) using critical system functionality (CSF). Some research focuses on identifying critical cyber components and services to address EDS resiliency; however, an analysis based only on devices and services, without considering system behavior during an adverse event, gives an incomplete picture of cyber resilience. To address this gap, we utilize a vulnerability graph representation of the EDS to compute system functionality under adverse conditions. We use a network criticality metric to determine CSF and estimate this metric using the graph Laplacian matrix and the network performance after removing links (i.e., disabling control functions or services). We model the resilience of the EDS using CSF and a system recovery curve. We also provide a comprehensive analysis of cyber resilience by determining critical devices using the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and AHP (Analytical Hierarchy Process) methods. We present EDS use cases illustrating how control functions and services map to the vulnerability graph model. The simulation results show that the resilience metric can be estimated using different types of graphs, which may assist in making informed decisions about EDS resilience.
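As an illustration of the graph-based part of this analysis, the sketch below computes a common Laplacian-based criticality proxy (the trace of the Laplacian's Moore-Penrose pseudoinverse) before and after removing links; the toy topology and the exact metric are assumptions, not the paper's model.

```python
# Sketch of a Laplacian-based criticality calculation on a toy EDS-like graph.
import networkx as nx
import numpy as np

def network_criticality(G):
    """Trace of the Moore-Penrose pseudoinverse of the Laplacian (lower = more robust)."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    return float(np.trace(np.linalg.pinv(L)))

G = nx.Graph([("scada", "rtu1"), ("scada", "rtu2"), ("rtu1", "breaker1"),
              ("rtu2", "breaker2"), ("rtu1", "rtu2")])

base = network_criticality(G)
for u, v in list(G.edges()):
    H = G.copy()
    H.remove_edge(u, v)                 # disable one control link / service
    if nx.is_connected(H):
        print(f"remove {u}-{v}: criticality {network_criticality(H):.2f} (base {base:.2f})")
    else:
        print(f"remove {u}-{v}: graph disconnects (critical link)")
```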
The security model is an important subject in the field of low-energy independence complexity theory. It takes security strategy as its core, changes the system from static protection to dynamic protection, and provides the basis for rapid system response. A large number of empirical studies have been conducted to verify cache consistency. Object-oriented languages have developed along two lines: pure object-oriented languages and mixed object-oriented languages, i.e., languages that add classes, inheritance, and other elements to procedural and other languages. This paper studies a new object-oriented language application, namely GUT for a write-back cache, which is based on the study of a simulation algorithm to address these challenges in the field of low-energy independence complexity theory.
As safety-critical systems become increasingly interconnected, a system's operations depend on the reliability and security of the computing components and the interconnections among them. Therefore, a growing body of research seeks to tie safety analysis to security analysis. Specifically, it is important to analyze system safety under different attacker models. In this paper, we develop generic parameterizable state automaton templates to model the effects of an attack. Then, given an attacker model, we generate a state automaton that represents the system operation under the threat of the attacker model. We use a railway signaling system as our case study and consider threats to the communication protocol and the commands issued to physical devices. Our results show that while less skilled attackers are not able to violate system safety, more dedicated and skilled attackers can affect system safety. We also consider several countermeasures and show how well they can deter attacks.
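A toy version of such a parameterizable template is sketched below: the attacker's capabilities add transitions to a small automaton, and a reachability check tests whether an unsafe state becomes reachable. The states and capabilities are invented for illustration and do not reproduce the paper's railway model.

```python
# Illustrative sketch: a small automaton template parameterized by attacker capabilities,
# used to check whether an unsafe state is reachable under a given attacker model.

def build_transitions(capabilities):
    """Return transitions of a toy signaling automaton under a given attacker model."""
    t = {("train_approaching", "set_signal_red"): "train_stopped",
         ("train_stopped", "clear_track"): "safe"}
    if "spoof_command" in capabilities:          # attacker can inject device commands
        t[("train_approaching", "spoof_green")] = "collision_risk"
    if "replay_message" in capabilities:         # attacker can replay old messages
        t[("train_stopped", "replay_clear")] = "collision_risk"
    return t

def unsafe_reachable(capabilities, start="train_approaching", unsafe="collision_risk"):
    transitions = build_transitions(capabilities)
    frontier, seen = [start], {start}
    while frontier:
        state = frontier.pop()
        for (src, _), dst in transitions.items():
            if src == state and dst not in seen:
                if dst == unsafe:
                    return True
                seen.add(dst)
                frontier.append(dst)
    return False

for caps in [set(), {"replay_message"}, {"spoof_command", "replay_message"}]:
    label = ", ".join(sorted(caps)) or "no capabilities"
    print(label, "->", "unsafe reachable" if unsafe_reachable(caps) else "safe")
```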
Embedded systems that communicate with each other over the internet and form a larger, loosely coupled (hardware) system whose configuration is unknown at runtime are often referred to as cyber-physical systems. Many of these systems can become safety-critical due to the risks associated with their operation. With the increased complexity of such systems, the number of configurations can be infinite or even unknown at design time. Hence, a design-time certification that documents safe interaction for all possible runtime configurations of all participants can become infeasible. If such systems come together in a new configuration, a mechanism is required that can decide whether or not it is safe for them to interact. For the sake of trust, such a mechanism generally cannot be part of the systems themselves. Therefore, we present in the following sections the SEnSE device, short for Secure and Safe Embedded, which tackles these challenges and provides a secure and safe integration of safety-critical embedded systems.
The software supply chain is a source of cybersecurity risk for many commercial and government organizations. Public data may be used to inform automated tools for detecting software supply chain risk during continuous integration and deployment. We link data from the National Vulnerability Database (NVD) with open version control data for the open source project OpenSSL, a widely used secure networking library that made the news when a significant vulnerability, Heartbleed, was discovered in 2014. We apply the Alhazmi-Malaiya Logistic (AML) model for software vulnerability discovery to this case. This model predicts a sigmoid cumulative vulnerability discovery function over time. Some versions of OpenSSL do not conform to the predictions of the model because they contain a temporary plateau in the cumulative vulnerability discovery plot. This temporary plateau feature is an empirical signature of a security failure mode that may be useful in future studies of software supply chain risk.
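For reference, the AML model takes the cumulative number of discovered vulnerabilities to follow the logistic form Omega(t) = B / (B*C*exp(-A*B*t) + 1), where B is the eventual total and A and C are fitted constants. The sketch below fits this curve to synthetic data; the parameter values and counts are illustrative, not the OpenSSL results.

```python
# Sketch: the Alhazmi-Malaiya Logistic (AML) curve fitted to synthetic counts.
import numpy as np
from scipy.optimize import curve_fit

def aml(t, A, B, C):
    """Cumulative vulnerabilities: Omega(t) = B / (B*C*exp(-A*B*t) + 1)."""
    return B / (B * C * np.exp(-A * B * t) + 1.0)

t = np.arange(0, 60, dtype=float)                       # months since release
true = aml(t, A=0.004, B=80, C=2.0)
observed = np.clip(true + np.random.default_rng(1).normal(0, 2, t.size), 0, None)

(A_hat, B_hat, C_hat), _ = curve_fit(aml, t, observed, p0=(0.01, 100, 1))
print(f"estimated total vulnerabilities B ~= {B_hat:.1f}")
```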
While vehicle-to-everything (V2X) communication enables safety-critical automotive control systems to better support various connected services that improve driver safety and convenience, it also allows the automotive attack surface to grow dynamically in modern vehicles. Many researchers, as well as hackers, have already demonstrated that they can take remote control of a targeted car by exploiting the vulnerabilities of in-vehicle networks such as the Controller Area Network (CAN). To assure CAN security, we focus on authenticating electronic control units (ECUs) in real time by addressing the security challenges of in-vehicle networks. In this paper, we propose a novel and lightweight authentication protocol with an attack-resilient tree algorithm, based on a one-way hash chain. The protocol can be easily deployed in CAN by performing a firmware update of the ECU. We show analytically that the protocol achieves a high level of security. In addition, the performance of the proposed protocol is validated on the CANoe simulator for virtual ECUs and on the Freescale S12XF used in real vehicles. The results show that our protocol is more efficient than other authentication protocols in terms of authentication time, response time, and service delay.
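The core hash-chain mechanism can be sketched as follows: the sender precomputes a chain of hashes, publishes only the final element (the anchor), and later releases earlier elements that receivers verify by re-hashing toward the anchor. The sketch is conceptual; the CAN frame layout, chain length, and key management of the actual protocol are not reproduced here.

```python
# Sketch of one-way hash-chain authentication (conceptual, not the paper's protocol).
import hashlib, os

def build_chain(seed: bytes, length: int):
    """Return [seed, h(seed), h(h(seed)), ...]; the last element is the anchor."""
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

# Sender precomputes the chain and distributes only the anchor (last element).
chain = build_chain(os.urandom(16), length=1000)
anchor = chain[-1]

def verify(value: bytes, trusted: bytes, max_steps: int = 1000):
    """Receiver: accept `value` if hashing it repeatedly reaches the trusted anchor."""
    h = value
    for _ in range(max_steps):
        h = hashlib.sha256(h).digest()
        if h == trusted:
            return True
    return False

# Sender releases chain elements in reverse order; each one authenticates a message.
assert verify(chain[-2], anchor)        # one hash away from the anchor
assert verify(chain[-5], anchor)        # a few steps away still verifies
print("hash-chain elements verified against the anchor")
```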
Over the last decade, the software industry has globalized, facilitating the sharing and reuse of code across existing project boundaries. At the same time, such global reuse also introduces new challenges to the software engineering community, with not only components but also their problems and vulnerabilities now being shared. For example, vulnerabilities found in APIs no longer affect only individual projects but can spread across projects and even global software ecosystem borders. Tracing these vulnerabilities at a global scale is an inherently difficult task, since many of the existing resources required for such analysis still rely on proprietary knowledge representations. In this research, we introduce an ontology-based knowledge modeling approach that can eliminate such information silos. More specifically, we focus on linking security knowledge with other software knowledge to improve traceability and trust in software products (APIs). Our approach takes advantage of the Semantic Web and its reasoning services to trace and assess the impact of security vulnerabilities across project boundaries. We present a case study to illustrate the applicability and flexibility of our ontological modeling approach by tracing vulnerabilities across project and resource boundaries.
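A small sketch of this idea using rdflib is shown below; the vocabulary and triples are invented for illustration and are not the ontology used in the paper.

```python
# Sketch of ontology-based vulnerability tracing with rdflib (invented vocabulary).
from rdflib import Graph, Namespace

SE = Namespace("http://example.org/se#")   # hypothetical software-knowledge vocabulary
g = Graph()

# Facts: a CVE affects an API, and projects depend (possibly transitively) on it.
g.add((SE.CVE_2014_0160, SE.affects, SE.openssl_heartbeat_api))
g.add((SE.projectA, SE.dependsOn, SE.openssl_heartbeat_api))
g.add((SE.projectB, SE.dependsOn, SE.projectA))

# Transitive impact via a SPARQL property path: anything that depends, directly or
# indirectly, on an API affected by a vulnerability.
query = """
PREFIX se: <http://example.org/se#>
SELECT ?project ?vuln WHERE {
    ?vuln se:affects ?api .
    ?project se:dependsOn+ ?api .
}"""
for project, vuln in g.query(query):
    print(f"{project.split('#')[-1]} is potentially affected by {vuln.split('#')[-1]}")
```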
Artificial software diversity is an effective way to prevent software vulnerabilities and errors from being exploited in code-reuse attacks. This is achieved by lowering the individual probability of a successful attack to a level that makes the attack infeasible. Unfortunately, existing approaches are not applicable to safety-critical real-time systems, as they induce unacceptable performance overheads, violate safety and timing guarantees, or assume hardware resources that are typically not available in embedded systems. To overcome these problems, we propose a safe diversity approach that preserves the timing properties of real-time processes by controlling the diversification's impact on the worst-case execution time (WCET). Our main idea is to use block-level diversity with a large but fixed set of movable instruction sequences, and to use static WCET analysis to identify non-critical areas of code that can safely be split into additional movable instruction sequences.
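The acceptance criterion can be illustrated with the toy sketch below: candidate block layouts are generated at random, and a layout is accepted only if its estimated WCET stays within the budget from static analysis. The block model, penalties, and numbers are invented, not the approach's actual tool chain.

```python
# Conceptual sketch: randomize the order of movable code blocks, but accept only
# layouts whose estimated WCET stays within the statically derived budget.
import random

# Each movable block: (name, worst-case cycles, relocation penalty in cycles).
BLOCKS = [("init", 120, 0), ("filter", 400, 8), ("log", 150, 4), ("actuate", 300, 8)]
WCET_BUDGET = 980            # cycles, assumed to come from static WCET analysis

def estimated_wcet(layout, original=BLOCKS):
    total = 0
    for pos, (name, cycles, penalty) in enumerate(layout):
        moved = original[pos][0] != name         # pay the penalty only for relocated blocks
        total += cycles + (penalty if moved else 0)
    return total

def diversify(blocks, budget, attempts=100, seed=None):
    rng = random.Random(seed)
    for _ in range(attempts):
        layout = blocks[:]
        rng.shuffle(layout)                      # block-level diversity
        if estimated_wcet(layout) <= budget:     # keep timing guarantees
            return layout
    return blocks                                # fall back to the original layout

layout = diversify(BLOCKS, WCET_BUDGET, seed=42)
print([name for name, _, _ in layout], "estimated WCET:", estimated_wcet(layout))
```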
Interest in cyber-physical systems with integrated computational and physical capabilities that can interact with humans is growing in both research and practice. Since these systems can be classified as safety- and security-critical, the need for safety and security assurance and certification will grow as well. Moreover, these systems are typically characterized by fragmentation, interconnectedness, heterogeneity, short release cycles, a cross-organizational nature, and high interference between safety and security requirements. These properties, combined with the need to assure compliance with multiple standards and to carry out certification and re-certification, together with the lack of an approach to model, document, and integrate safety and security requirements, represent a major challenge. To address this gap, we developed a domain-agnostic approach to model security and safety requirements in an integrated view that supports certification processes during the design and run-time phases of cyber-physical systems.
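As a minimal sketch of such an integrated view (the schema below is invented, not the approach's actual metamodel), one record type can link safety and security requirements to shared assets and standards so that interference between them can be queried.

```python
# Minimal sketch of an integrated safety/security requirements record (invented schema).
from dataclasses import dataclass, field

@dataclass
class Requirement:
    rid: str
    kind: str                 # "safety" or "security"
    text: str
    asset: str                # system element the requirement constrains
    standards: list = field(default_factory=list)
    conflicts_with: list = field(default_factory=list)   # interfering requirements

reqs = [
    Requirement("SAF-1", "safety", "Brake command latency < 50 ms", "brake_ctrl",
                ["ISO 26262"]),
    Requirement("SEC-1", "security", "Authenticate all brake commands", "brake_ctrl",
                ["IEC 62443"], conflicts_with=["SAF-1"]),   # authentication adds latency
]

# Query the integrated view: which requirements interfere, and on which asset?
for r in reqs:
    for other in r.conflicts_with:
        print(f"{r.rid} ({r.kind}) interferes with {other} on asset '{r.asset}'")
```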