Architecture Modeling for Resource Margin Estimation

Rate Monotonic Analysis (RMA), ARINC 653, and similar schedulability analyses have traditionally been used to guarantee that all real-time tasks have sufficient computing resources to meet mission requirements. Schedulability analysis provides critical evidence for most safety or airworthiness certification processes. We illustrate an architecture model, inspired by AADL, that has been used to conduct RMA, and show that it can be augmented to study interference effects due to shared computing resources.

The architecture model describes the system's hardware architecture, its software architecture, and the mapping between the two. When populated with relevant design data from design databases and code, together with profiling data captured from tests in the lab, the model captures one or more deployment scenarios from a resource-usage perspective. RMA can then be used to mathematically guarantee schedulability and to calculate available resource margins.
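As a concrete illustration of the kind of schedulability check RMA provides (not drawn from the model described here), the sketch below applies the classical Liu and Layland utilization bound to a hypothetical task set; the periods and execution times are made-up values.

```python
# Hypothetical illustration of the Liu & Layland RMA utilization test.
# The task set below is invented for illustration, not data from the
# architecture model described in this article.

def rma_utilization_bound(n: int) -> float:
    """A set of n periodic tasks is schedulable under rate-monotonic
    priorities if total utilization <= n * (2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

tasks = [  # (worst-case execution time C, period T), in milliseconds
    (2.0, 10.0),
    (4.0, 25.0),
    (10.0, 50.0),
]

utilization = sum(c / t for c, t in tasks)
bound = rma_utilization_bound(len(tasks))

print(f"total utilization = {utilization:.3f}")
print(f"RMA bound for {len(tasks)} tasks = {bound:.3f}")
print(f"schedulable by utilization test: {utilization <= bound}, "
      f"margin = {bound - utilization:.3f}")
```

The difference between the bound and the total utilization is one simple view of the available resource margin; the margin shrinks as adjusted worst-case execution times grow.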

We demonstrate that such a model can be augmented to empirically estimate the resource margins available when using modern multicore processors with shared cache. Multicore processors offer significant weight, power, and space advantages to developers of embedded real-time avionics systems, making their adoption almost inevitable; system sustainability and economies of scale will increasingly make these multicore devices indispensable. However, predictability of performance is a significant challenge to airworthiness certification. In particular, multicore architectures introduce cache interference effects that require consideration beyond traditional schedulability analysis in order to establish worst-case timing for applications. Effective use of RMA depends on knowing the maximum, or Worst-Case Execution Time (WCET), of every task in the real-time system. Multicore processors complicate the measurement of WCET because the last-level cache (LLC) is typically shared by all cores in the processor, resulting in interference across cores.
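A minimal sketch of the measurement idea follows, assuming a Linux target where cores can be selected with os.sched_setaffinity: a task's traversal of its working set is timed alone, then timed again while a co-runner on another core streams through a large buffer to evict shared-LLC lines. Buffer sizes and core numbers are illustrative; in practice such micro-benchmarks are written in C and run on the actual real-time target, so this only illustrates the structure of the measurement.

```python
# Illustrative measurement of execution-time inflation caused by a
# cache-polluting co-runner on another core (Linux assumed).
import multiprocessing
import os
import time

POLLUTER_BYTES = 8 * 1024 * 1024  # chosen to exceed a typical LLC slice

def pollute(core: int, stop_event) -> None:
    """Continuously stream through a large buffer to evict shared-LLC lines."""
    os.sched_setaffinity(0, {core})
    buf = bytearray(POLLUTER_BYTES)
    while not stop_event.is_set():
        for i in range(0, len(buf), 64):      # touch one byte per cache line
            buf[i] = (buf[i] + 1) & 0xFF

def measured_task(core: int) -> float:
    """Task under observation: repeated traversal of its own working set."""
    os.sched_setaffinity(0, {core})
    data = bytearray(2 * 1024 * 1024)
    start = time.perf_counter()
    total = 0
    for _ in range(20):
        for i in range(0, len(data), 64):
            total += data[i]
    return time.perf_counter() - start

if __name__ == "__main__":
    baseline = measured_task(core=0)          # uncontended run

    stop = multiprocessing.Event()
    worker = multiprocessing.Process(target=pollute, args=(1, stop))
    worker.start()
    interfered = measured_task(core=0)        # run with LLC contention
    stop.set()
    worker.join()

    print(f"baseline {baseline:.3f}s, with co-runner {interfered:.3f}s, "
          f"inflation x{interfered / baseline:.2f}")
```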

To study the effect of interference due to shared hardware resources, we augmented the RMA deployment architecture model with the ability to automatically create, deploy, execute, and analyze multiple deployment models, or tests. Each test represents a point in a high-dimensional parameter space describing a combination of application characteristics that may affect performance in the presence of shared cache. The model is used to automatically execute the tests, collect profiling data, and extract the execution-time inflation due to cache interference. This methodology and automated tool can thus be used to empirically establish a reasonable, high-confidence upper bound on cache interference effects and to estimate available resource margins within the engineering practices commonly used for certifiable real-time systems.
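The sketch below shows the general shape of such a sweep: enumerate points in a parameter space of application characteristics, run each deployment, and record the inflation relative to an uncontended baseline. The parameter names and the run_deployment() helper are hypothetical placeholders, not the actual tool interface.

```python
# Hypothetical parameter-space sweep over application characteristics that
# may affect cache interference. Parameter names and run_deployment() are
# placeholders for the target-specific deployment and profiling machinery.
import itertools

PARAMETER_SPACE = {
    "working_set_kib": [256, 1024, 4096, 16384],
    "cores_loaded": [1, 2, 3, 4],
    "access_pattern": ["sequential", "random"],
}

def run_deployment(config: dict) -> float:
    """Hypothetical hook: deploy the test, execute it on the target, and
    return the measured execution time of the observed task (seconds)."""
    raise NotImplementedError("replace with target-specific deployment/profiling")

def sweep() -> list[dict]:
    keys = list(PARAMETER_SPACE)
    results = []
    for values in itertools.product(*(PARAMETER_SPACE[k] for k in keys)):
        config = dict(zip(keys, values))
        baseline = run_deployment({**config, "cores_loaded": 1})
        loaded = run_deployment(config)
        results.append({**config, "inflation": loaded / baseline})
    return results
```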

Preliminary results indicate that cache interference is more sensitive to certain parameters, such as application memory size and the number of cores loaded, than to others. Once established, this characterization can be used to (a) locate a given application configuration within the cache interference stress bounds, thereby estimating available safety margins, (b) calculate the adjustments needed to worst-case execution times for performance analysis, and (c) guide application resource allocation while taking cache interference into account.
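For item (b), the adjustment can be as simple as scaling each measured WCET by the largest inflation factor observed in its region of the parameter space and then re-running the schedulability test; the numbers below are assumed, purely for illustration.

```python
# Illustrative WCET adjustment using an empirically characterized upper bound
# on cache-interference inflation (factor value is assumed, not measured data).

def adjusted_wcet(measured_wcet_ms: float, inflation_factor: float) -> float:
    """Inflate a measured WCET by the worst observed interference factor."""
    return measured_wcet_ms * inflation_factor

# A task measured at 4.0 ms whose configuration falls in a region where the
# sweep observed at most 1.35x inflation would be budgeted at 5.4 ms:
print(adjusted_wcet(4.0, 1.35))
```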

The modeling capability described above is built on the nForge platform, whose underlying tooling includes a scalable graph database coupled with a highly programmable tool architecture; it has been used in practice to analyze a large avionics system. [1]

[1] This material is based on research sponsored by the Air Force Research Laboratory and the F-35 Joint Program Office under contract number FA8750-15-C-0275. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government. Approved for public release 26 December 2019, # 88ABW-2019-6081.

nHansa Inc., San Jose, CA

Srini Srinivasan is the founder and CTO of nHansa Inc. of San Jose, CA. nHansa is engaged in the R&D and commercialization of software tools for embedded real-time systems, targeted at the design and analysis of performance- and safety-critical applications such as aerospace and autonomous systems. Srini has over 30 years of experience in the embedded real-time systems domain, both as a practitioner and as a tool vendor. Early in his career as a practitioner, he played an instrumental role in the development of an automated safety system for a nuclear power plant and its subsequent safety certification for operation. He was later a founder and the CEO of TimeSys Corporation, a vendor of schedulability analysis tools and Real-Time Java and Real-Time Linux products.

Contributor(s): 
Srini Srinivasan
Russell Kegley
Mark Gerhardt
Rich Hilliard
Clifford Granger
Jonathan Preston
Steven Drager
Matthew Anderson
Richard Rosa
Alan Charsagua