Mixed-criticality scheduling theory (MCSh) was developed to allow for more resource-efficient implementation of systems comprising different components that need to have their correctness validated at different levels of assurance. As originally defined, MCSh deals exclusively with pre-runtime verification of such systems; hence many mixed-criticality scheduling algorithms that have been developed tend to exhibit rather poor survivability characteristics during run-time. (E.g., MCSh allows for less-important ("LO-criticality") workloads to be discarded entirely in the event that run-time behavior does not comply with the assumptions under which the correctness of the LO-criticality workload was verified.) Here we seek to extend MCSh to incorporate survivability considerations, by proposing quantitative metrics for the robustness and resilience of mixed-criticality scheduling algorithms. Such metrics allow us to make quantitative assertions regarding the survivability characteristics of these algorithms, and to compare different algorithms from the perspective of their survivability. We propose that MCSh seek out scheduling algorithms with superior survivability characteristics, in contrast to current algorithms, which, having been developed within a survivability-agnostic framework, focus exclusively on pre-runtime verification and ignore survivability entirely.
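The cliff-edge behavior this abstract criticizes can be made concrete. Below is a minimal sketch, assuming the classic two-level (LO/HI) model with a mode switch on budget overrun; the task names, fields, and parameters are illustrative, not taken from the paper:

```python
# Sketch (not from the paper) of the classic two-level mixed-criticality
# mode switch: once any HI-criticality job overruns its LO-level WCET
# estimate, the system enters HI mode and every LO-criticality task is
# discarded outright.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    criticality: str   # "LO" or "HI"
    wcet_lo: float     # WCET estimate trusted at low assurance
    wcet_hi: float     # conservative estimate (relevant for HI tasks)

def on_job_completion(tasks, task, observed_exec_time, mode):
    """Return the (possibly updated) mode and the tasks still eligible
    to execute."""
    if mode == "LO" and observed_exec_time > task.wcet_lo:
        mode = "HI"  # LO-level assumptions violated: mode switch
    if mode == "HI":
        # Zero robustness: all LO-criticality work is dropped,
        # no matter how small the overrun was.
        tasks = [t for t in tasks if t.criticality == "HI"]
    return mode, tasks

tasks = [Task("control", "HI", 2.0, 5.0), Task("logging", "LO", 1.0, 1.0)]
mode, tasks = on_job_completion(tasks, tasks[0], 2.1, "LO")
print(mode, [t.name for t in tasks])   # HI ['control']
```

Even a marginal overrun by a single HI-criticality job causes every LO-criticality task to be abandoned; the robustness and resilience metrics referred to in the abstract are aimed at quantifying, and improving upon, exactly this behavior.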
In mixed-criticality systems, functionalities of different criticalities, which need to have their correctness validated to different levels of assurance, co-exist upon a shared platform. Multiple specifications at differing levels of assurance may be provided for such systems; the specifications that are trusted at very high levels of assurance tend to be more conservative than those trusted at lower levels. Prior research on the scheduling of such mixed-criticality systems has primarily focused upon the case where multiple estimates of the worst-case execution time (WCET) of pieces of code are provided; in this paper, a model is considered in which multiple estimates are instead provided for the rate at which event-triggered processes are executed. An algorithm is derived for scheduling such systems upon a preemptive uniprocessor, and its effectiveness is demonstrated quantitatively via the speedup factor metric.
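For readers unfamiliar with the metric, the speedup factor can be stated as follows; this is the standard definition used throughout MCSh, and the particular speedup bound proved in the paper is not restated here:

```latex
% Standard notion of speedup factor (general background, not the
% specific bound established in the paper).
A scheduling algorithm $A$ has speedup factor $s \geq 1$ if any task
system that is schedulable by an optimal clairvoyant scheduler upon a
unit-speed processor is scheduled correctly by $A$ upon a processor
that is $s$ times as fast; equivalently,
\[
  s(A) \;=\; \inf\bigl\{\, s \geq 1 \;:\; \text{feasible on a speed-}1
  \text{ processor} \implies A\text{-schedulable on a speed-}s
  \text{ processor} \,\bigr\}.
\]
```

A speedup factor close to 1 thus certifies that little processing capacity is sacrificed, in the worst case, by using the algorithm rather than a hypothetical optimal clairvoyant scheduler.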