Biblio

Filters: Keyword is Processor scheduling
2023-05-19
Li, Jiacong, Lv, Hang, Lei, Bo.  2022.  A Cross-Domain Data Security Sharing Approach for Edge Computing based on CP-ABE. 2022 23rd Asia-Pacific Network Operations and Management Symposium (APNOMS). :1—6.
Cloud computing is a unified management and scheduling model of computing resources. To satisfy multiple resource requirements for various applications, edge computing has been proposed. One challenge of edge computing is the cross-domain data security sharing problem. Ciphertext-policy attribute-based encryption (CP-ABE) is an effective way to ensure secure data sharing. However, many existing schemes focus on cloud computing and do not consider the features of edge computing. To address this issue, we propose a cross-domain data security sharing approach for edge computing based on CP-ABE. Besides data user attributes, we also consider access control from edge nodes to user data. Our scheme first calculates a public-secret key pair for each edge node based on its attributes, and then uses it to encrypt the secret key of the data ciphertext to ensure data security. In addition, our scheme can add non-user access control attributes such as time, location, and frequency according to different demands; in this paper we take time as an example. Finally, simulation experiments and analysis demonstrate the feasibility and effectiveness of our approach.
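
As a rough illustration of the key-wrapping pattern this abstract describes, here is a minimal Python sketch that gates a wrapped data key on edge-node attributes. Fernet symmetric keys stand in for the pairing-based CP-ABE primitives, and all names (`node_key_from_attributes`, the policy, the node list) are illustrative assumptions, not the paper's scheme.

```python
# Toy sketch of attribute-gated key wrapping (stand-in for CP-ABE).
# Requires: pip install cryptography
from cryptography.fernet import Fernet

def node_key_from_attributes(attrs):
    """Stand-in for deriving an edge node's key from its attributes."""
    return Fernet.generate_key()  # real CP-ABE derives this from attrs

# Data owner: encrypt the data under a fresh data key.
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"sensor readings")

# Wrap the data key only for edge nodes whose attributes satisfy the policy.
policy = {"domain": "factory-A", "role": "aggregator"}
edge_nodes = [
    {"id": "edge-1", "attrs": {"domain": "factory-A", "role": "aggregator"}},
    {"id": "edge-2", "attrs": {"domain": "factory-B", "role": "gateway"}},
]
wrapped = {}
for node in edge_nodes:
    if all(node["attrs"].get(k) == v for k, v in policy.items()):
        node_key = node_key_from_attributes(node["attrs"])
        # In reality the node holds its own secret key; kept together here
        # only so the demo can decrypt below.
        wrapped[node["id"]] = (node_key, Fernet(node_key).encrypt(data_key))

# An authorized node unwraps the data key and decrypts the ciphertext.
node_key, wrapped_key = wrapped["edge-1"]
recovered_key = Fernet(node_key).decrypt(wrapped_key)
print(Fernet(recovered_key).decrypt(ciphertext))
```
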
2023-04-27
Spliet, Roy, Mullins, Robert D..  2022.  Sim-D: A SIMD Accelerator for Hard Real-Time Systems. IEEE Transactions on Computers. 71:851–865.
Emerging safety-critical systems require high-performance data-parallel architectures and, problematically, ones that can guarantee tight and safe worst-case execution times. Given the complexity of existing architectures like GPUs, it is unlikely that sufficiently accurate models and algorithms for timing analysis will emerge in the foreseeable future. This motivates our work on Sim-D, a clean-slate approach to designing a real-time data-parallel architecture. Sim-D enforces a predictable execution model by isolating compute- and access resources in hardware. The DRAM controller uninterruptedly transfers tiles of data, requested by entire work-groups. This permits work-groups to be executed as a sequence of deterministic access- and compute phases, scheduling phases from up to two work-groups in parallel. Evaluation using a cycle-accurate timing model shows that Sim-D can achieve performance on par with an embedded-grade NVIDIA TK1 GPU under two conditions: applications refrain from using indirect DRAM transfers into large buffers, and Sim-D's scratchpads provide sufficient bandwidth. Sim-D's design facilitates derivation of safe WCET bounds that are tight within 12.7 percent on average, at an additional average performance penalty of ∼9.2 percent caused by scheduling restrictions on phases.
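
To make the deterministic access/compute phase model concrete, here is a toy Python simulator, under our own assumptions (not the paper's): each work-group is a fixed list of phases, and each of the two isolated resources serves one phase at a time. Cycle counts are invented.

```python
# Toy model of phase scheduling with isolated resources: an access phase
# of one work-group may overlap a compute phase of another.
def makespan(workgroups):
    """workgroups: list of phase lists, each phase ('access'|'compute', cycles)."""
    res_free = {"access": 0, "compute": 0}  # when each resource frees up
    ready = [0] * len(workgroups)           # when each work-group may proceed
    nxt = [0] * len(workgroups)             # next phase index per work-group
    while any(n < len(wg) for n, wg in zip(nxt, workgroups)):
        # Greedily start whichever pending phase can begin earliest.
        start, w = min((max(ready[i], res_free[wg[nxt[i]][0]]), i)
                       for i, wg in enumerate(workgroups) if nxt[i] < len(wg))
        kind, cycles = workgroups[w][nxt[w]]
        res_free[kind] = ready[w] = start + cycles  # phases are sequential per group
        nxt[w] += 1
    return max(ready)

wg_a = [("access", 100), ("compute", 300), ("access", 100)]
wg_b = [("access", 120), ("compute", 250), ("access", 80)]
print(makespan([wg_a, wg_b]))  # access of one group hides behind compute of the other
```
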
2023-02-24
Coleman, Jared, Kiamari, Mehrdad, Clark, Lillian, D'Souza, Daniel, Krishnamachari, Bhaskar.  2022.  Graph Convolutional Network-based Scheduler for Distributing Computation in the Internet of Robotic Things. MILCOM 2022 - 2022 IEEE Military Communications Conference (MILCOM). :1070—1075.
Existing solutions for scheduling arbitrarily complex distributed applications on networks of computational nodes are insufficient for scenarios where the network topology is changing rapidly. New Internet of Things (IoT) domains like the Internet of Robotic Things (IoRT) and the Internet of Battlefield Things (IoBT) demand solutions that are robust and efficient in environments that experience constant and/or rapid change. In this paper, we demonstrate how recent advancements in machine learning (in particular, in graph convolutional neural networks) can be leveraged to solve the task scheduling problem with decent performance and in much less time than traditional algorithms.
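
A minimal sketch of the building block involved: one graph-convolution step over a small task graph using the standard propagation rule H' = ReLU(D^{-1/2}(A+I)D^{-1/2}HW). The adjacency, features, and weights below are illustrative, not the paper's trained model.

```python
# One GCN layer over a 4-task dependency graph, in plain NumPy.
import numpy as np

A = np.array([[0, 1, 1, 0],   # task-dependency adjacency (4 tasks)
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
A_hat = A + A.T + np.eye(4)                      # symmetrize, add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H = np.array([[2.0, 1.0],                        # per-task features, e.g.
              [1.0, 4.0],                        # compute cost and data size
              [3.0, 1.0],
              [1.0, 1.0]])
W = np.random.default_rng(0).normal(size=(2, 8))  # learned weights (random here)

H1 = np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)  # ReLU activation
print(H1.shape)  # task embeddings a downstream scheduling head would score
```
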
2023-02-17
Liu, Xuanyu, Cheng, Guozhen, Wang, Yawen, Zhang, Shuai.  2022.  Overview of Scientific Workflow Security Scheduling in Clouds. 2021 International Conference on Advanced Computing and Endogenous Security. :1–6.
With the development of cloud computing technology, more and more researchers choose to deliver scientific workflow tasks to public cloud platforms for execution. This mode effectively reduces research costs but also brings serious security risks. In response to this problem, this article summarizes the security issues currently facing cloud scientific workflows and analyzes the importance of studying them. It then analyzes, summarizes, and compares current security methods for cloud scientific workflows from three perspectives: system architecture, security model, and security strategy. Finally, it discusses prospects for future development directions.
2022-08-12
Winderix, Hans, Mühlberg, Jan Tobias, Piessens, Frank.  2021.  Compiler-Assisted Hardening of Embedded Software Against Interrupt Latency Side-Channel Attacks. 2021 IEEE European Symposium on Security and Privacy (EuroS&P). :667—682.
Recent controlled-channel attacks exploit timing differences in the rudimentary fetch-decode-execute logic of processors. These new attacks also pose a threat to software on embedded systems. Even when Trusted Execution Environments (TEEs) are used, interrupt latency attacks allow untrusted code to extract application secrets from a vulnerable enclave by scheduling interruption of the enclave. Constant-time programming is effective against these attacks but, as we explain in this paper, can come with some disadvantages regarding performance. To deal with this new threat, we propose a novel algorithm that hardens programs during compilation by aligning the execution time of corresponding instructions in secret-dependent branches. Our results show that, on a class of embedded systems with deterministic execution times, this approach eliminates interrupt latency side-channel leaks and mitigates limitations of constant-time programming. We have implemented our approach in the LLVM compiler infrastructure for the Sancus TEE, which extends the openMSP430 microcontroller, and we discuss applicability to other architectures. We make our implementation and benchmarks available for further research.
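
A toy sketch of the alignment idea under a simple deterministic-cycle-count assumption: pad the cheaper arm of a secret-dependent branch with no-ops until both arms cost the same number of cycles. The instruction set and cycle table are invented, not Sancus/openMSP430 encodings.

```python
# Balance the two arms of a secret-dependent branch by nop-padding.
CYCLES = {"mov": 1, "add": 1, "mul": 2, "load": 3, "nop": 1}  # toy cost model

def cycles(block):
    return sum(CYCLES[op] for op in block)

def align(then_blk, else_blk):
    """Pad the shorter arm so both arms take identical cycles."""
    diff = cycles(then_blk) - cycles(else_blk)
    if diff > 0:
        else_blk = else_blk + ["nop"] * diff
    elif diff < 0:
        then_blk = then_blk + ["nop"] * (-diff)
    return then_blk, else_blk

then_blk = ["load", "mul", "add"]   # 6 cycles
else_blk = ["mov", "add"]           # 2 cycles
t, e = align(then_blk, else_blk)
assert cycles(t) == cycles(e)       # interrupt latency now reveals nothing
```
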
2022-08-10
Zhan, Zhi-Hui, Wu, Sheng-Hao, Zhang, Jun.  2021.  A New Evolutionary Computation Framework for Privacy-Preserving Optimization. 2021 13th International Conference on Advanced Computational Intelligence (ICACI). :220—226.
Evolutionary computation (EC) is a kind of advanced computational intelligence (CI) algorithm and advanced artificial intelligence (AI) algorithm. EC algorithms have been widely studied for solving optimization and scheduling problems in various real-world applications, and act as one of the Big Three in CI and AI, together with fuzzy systems and neural networks. Even though EC has developed fast in recent years, there is an assumption that the algorithm designer can obtain the objective function of the optimization problem, so that they can calculate the fitness values of the individuals to follow the "survival of the fittest" principle in natural selection. However, in real-world application scenarios there is a kind of problem in which the objective function is private, so that the algorithm designer cannot obtain the fitness values of the individuals directly. This is the privacy-preserving optimization problem (PPOP), where the assumption of an available objective function does not hold. How to solve the PPOP is a newly emerging frontier that has seldom been studied, and a challenging research topic in the EC community. This paper proposes a rank-based cryptographic function (RCF) to protect the fitness value information. Specifically, the RCF is adopted by the algorithm user to encrypt the fitness values of all the individuals as ranks, so that the algorithm designer does not know the exact fitness information but only the rank information. Nevertheless, the RCF protects the privacy of the algorithm user while still providing sufficient information for the algorithm designer to drive the EC algorithm. We have applied the RCF privacy-preserving method to two typical EC algorithms: particle swarm optimization (PSO) and differential evolution (DE). Experimental results show that the RCF-based privacy-preserving PSO and DE can solve the PPOP without performance loss.
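
A minimal sketch of the rank-based protection, assuming a toy private objective: the user side returns only ranks, and the designer's side consumes rank comparisons, here in a DE-style survivor selection. The objective and all names are illustrative, not the paper's RCF.

```python
# The user evaluates fitness privately and discloses only ranks.
import random

def secret_fitness(x):               # known only to the algorithm user
    return sum(v * v for v in x)     # illustrative private objective

def ranks_only(population):
    """User side: encode fitness as rank (0 = best), hiding exact values."""
    order = sorted(range(len(population)),
                   key=lambda i: secret_fitness(population[i]))
    r = [0] * len(population)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

# Designer side: DE-style selection driven purely by rank comparison.
random.seed(1)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
trial = [[v + random.gauss(0, 0.3) for v in x] for x in pop]
n = len(pop)
r = ranks_only(pop + trial)          # one joint ranking of parents and trials
pop = [t if r[n + i] < r[i] else x   # keep whichever ranks better
       for i, (x, t) in enumerate(zip(pop, trial))]
```
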
2022-05-24
Chan, Matthew.  2021.  Bare-metal hypervisor virtual servers with a custom-built automatic scheduling system for educational use. 2021 Fourth International Conference on Electrical, Computer and Communication Technologies (ICECCT). :1–5.
In contrast to traditional physical servers, a custom-built system utilizing a bare-metal hypervisor virtual server environment provides advantages of both cost savings and flexibility in terms of systems configuration. This system is designed to facilitate hands-on experience for Computer Science students, particularly those specializing in systems administration and computer networking. This multi-purpose and functional system uses an automatic advanced virtual server reservation system (AAVSRsv), written in C++, to schedule and manage virtual servers. The use of such a system could be extended to additional courses focusing on such topics as cloud computing, database systems, information assurance, as well as ethical hacking and system defense. The design can also be replicated to offer training sessions to other information technology professionals.
2022-05-06
Saravanan, M, Pratap Sircar, Rana.  2021.  Quantum Evolutionary Algorithm for Scheduling Resources in Virtualized 5G RAN Environment. 2021 IEEE 4th 5G World Forum (5GWF). :111–116.
Radio is the most important part of any wireless network. The Radio Access Network (RAN) has been virtualized and disaggregated into different functions whose location is best defined by the requirements and economics of the use case. This Virtualized RAN (vRAN) architecture separates network functions from the underlying hardware, so 5G can leverage virtualization of the RAN to implement these functions. The easy expandability and manageability of the vRAN support the expansion of network capacity and the deployment of new features and algorithms for streamlining resource usage. In this paper, we address the problem of scheduling a 5G vRAN with mid-haul network capacity constraints as a combinatorial optimization problem. We transform it into a Quadratic Unconstrained Binary Optimization (QUBO) problem, solve it using a newly proposed quantum-based algorithm, and compare our implementation with existing classical algorithms. This work demonstrates the advantage of quantum computers in solving a particular optimization problem in the telecommunication domain and paves the way for solving critical real-world problems using quantum computers faster and better.
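
A toy sketch of the QUBO encoding step on an invented assignment instance: serving costs plus quadratic penalties for one-hot and capacity constraints. Brute-force enumeration stands in for a quantum or annealing solver, and a real QUBO would linearize the capacity inequality with slack bits rather than the `max()` shortcut used here.

```python
# Tiny QUBO-style encoding: x[i*units + j] = 1 if user i is served by unit j.
import itertools

users, units = 3, 2
cost = [[1.0, 2.0], [2.0, 1.0], [1.5, 1.5]]   # illustrative serving costs
cap, P = 2, 10.0                              # per-unit capacity, penalty weight

def qubo_energy(x):                            # x is a flat 0/1 tuple
    e = sum(cost[i][j] * x[i * units + j]
            for i in range(users) for j in range(units))
    for i in range(users):                     # each user served exactly once
        s = sum(x[i * units + j] for j in range(units))
        e += P * (s - 1) ** 2
    for j in range(units):                     # mid-haul capacity penalty
        s = sum(x[i * units + j] for i in range(users))
        e += P * max(0, s - cap) ** 2          # real QUBO: slack variables
    return e

best = min(itertools.product((0, 1), repeat=users * units), key=qubo_energy)
print(best, qubo_energy(best))                 # lowest-energy assignment
```
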
2022-05-03
Stavrinides, Georgios L., Karatza, Helen D..  2021.  Security and Cost Aware Scheduling of Real-Time IoT Workflows in a Mist Computing Environment. 2021 8th International Conference on Future Internet of Things and Cloud (FiCloud). :34—41.

In this paper we propose a security and cost aware scheduling heuristic for real-time workflow jobs that process Internet of Things (IoT) data with various security requirements. The environment under study is a four-tier architecture, consisting of IoT, mist, fog and cloud layers. The resources in the mist, fog and cloud tiers are considered to be heterogeneous. The proposed scheduling approach is compared to a baseline strategy, which is security aware, but not cost aware. The performance evaluation of both heuristics is conducted via simulation, under different values of security level probabilities for the initial IoT input data of the entry tasks of the workflow jobs.
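
A minimal sketch of the kind of per-task decision rule such a heuristic might apply, under our own assumptions (not the paper's model): filter resources by the task's required security level, keep those that meet the deadline, then pick the cheapest.

```python
# Security- and cost-aware pick of a mist/fog/cloud resource for one task.
def pick_resource(task, resources, now=0.0):
    feasible = [r for r in resources
                if r["security_level"] >= task["min_security"]
                and now + task["work"] / r["speed"] <= task["deadline"]]
    if not feasible:
        return None                             # reject or defer the job
    # Cost of running the task = execution time * price rate.
    return min(feasible, key=lambda r: task["work"] / r["speed"] * r["cost"])

resources = [
    {"name": "mist-1",  "speed": 1.0, "cost": 0.2, "security_level": 1},
    {"name": "fog-1",   "speed": 2.0, "cost": 0.4, "security_level": 2},
    {"name": "cloud-1", "speed": 4.0, "cost": 1.0, "security_level": 3},
]
task = {"work": 8.0, "deadline": 5.0, "min_security": 2}
print(pick_resource(task, resources))           # fog-1: secure, in time, cheapest
```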

2022-02-22
Xuguang, Zhu.  2021.  A Certainty-guaranteed inter/intra-core communication method for multi-core embedded systems. 2021 IEEE International Conference on Power Electronics, Computer Applications (ICPECA). :1024—1027.

To meet the actual needs of operating system localization and high-security operating systems, this paper proposes an inter-core communication mechanism for multi-core embedded high-security operating systems, centered on per-core private memory and based on the cache mechanism of DSP processors such as the Feiteng design. To apply it to a multi-core embedded high-security operating system, this paper also combines it with the priority scheduling scheme used in the design of our actual operating system to analyze the certainty of inter-core communication. The analysis shows that under this communication mechanism the end-to-end delay has an upper bound, so the certainty of the communication mechanism is guaranteed and it can be applied to multi-core high-security embedded operating systems.

2021-09-30
Serino, Anthony, Cheng, Liang.  2020.  Real-Time Operating Systems for Cyber-Physical Systems: Current Status and Future Research. 2020 International Conferences on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics (Cybermatics). :419–425.
This paper studies the current status and future directions of RTOS (Real-Time Operating Systems) for time-sensitive CPS (Cyber-Physical Systems). GPOS (General Purpose Operating Systems) existed before RTOS but did not meet performance requirements for time-sensitive CPS. Many GPOS have put forward adaptations to meet real-time performance requirements, and this paper compares RTOS and GPOS and shows their pros and cons for CPS applications. Furthermore, comparisons among select RTOS such as VxWorks, RTLinux, and FreeRTOS are conducted in terms of scheduling, kernel, and priority inversion. Various tools for WCET (Worst-Case Execution Time) estimation are discussed. This paper also presents a CPS use case of RTOS, i.e., JetOS for avionics, and future advancements in RTOS such as multi-core RTOS, new RTOS architectures, and RTOS security for CPS.
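
For the priority-inversion axis of the comparison, here is a minimal sketch of priority inheritance, the common RTOS mitigation; the task and lock model is a deliberately simplified illustration, not any particular RTOS API.

```python
# Toy priority-inheritance mutex: a low-priority lock holder temporarily
# runs at the priority of the highest-priority task it blocks.
class Task:
    def __init__(self, name, priority):
        self.name, self.priority, self.base_priority = name, priority, priority

class Mutex:
    def __init__(self):
        self.owner = None

    def acquire(self, task):
        if self.owner is None:
            self.owner = task
            return True
        if self.owner.priority < task.priority:
            # Inheritance: a medium-priority task can no longer preempt
            # the owner indefinitely while the high-priority task waits.
            self.owner.priority = task.priority
        return False  # caller blocks

    def release(self, task):
        task.priority = task.base_priority  # drop back to base priority
        self.owner = None

m = Mutex()
low, high = Task("low", 1), Task("high", 3)
m.acquire(low)        # low-priority task takes the lock
m.acquire(high)       # high blocks; low inherits priority 3
print(low.priority)   # 3
m.release(low)        # low releases and returns to priority 1
```
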
2021-09-01
Walter, Dominik, Witterauf, Michael, Teich, Jürgen.  2020.  Real-time Scheduling of I/O Transfers for Massively Parallel Processor Arrays. 2020 18th ACM-IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE). :1—11.
2021-05-05
Samriya, Jitendra Kumar, Kumar, Narander.  2020.  Fuzzy Ant Bee Colony For Security And Resource Optimization In Cloud Computing. 2020 5th International Conference on Computing, Communication and Security (ICCCS). :1—5.

Cloud computing (CC) systems prevail to be the widespread computational paradigms for offering immense scalable and elastic services. Computing resources in cloud environment should be scheduled to facilitate the providers to utilize the resources moreover the users could get low cost applications. The most prominent need in job scheduling is to ensure Quality of service (QoS) to the user. In the boundary of the third party the scheduling takes place hence it is a significant condition for assuring its security. The main objective of our work is to offer QoS i.e. cost, makespan, minimized migration of task with security enforcement moreover the proposed algorithm guarantees that the admitted requests are executed without violating service level agreement (SLA). These objectives are attained by the proposed Fuzzy Ant Bee Colony algorithm. The experimental outcome confirms that secured job scheduling objective with assured QoS is attained by the proposed algorithm.

2021-03-01
Zhang, Y., Groves, T., Cook, B., Wright, N. J., Coskun, A. K..  2020.  Quantifying the impact of network congestion on application performance and network metrics. 2020 IEEE International Conference on Cluster Computing (CLUSTER). :162–168.
In modern high-performance computing (HPC) systems, network congestion is an important factor that contributes to performance degradation. However, how network congestion impacts application performance is not fully understood. As the Aries network, a recent HPC network architecture featuring a dragonfly topology, is equipped with network counters measuring packet-transmission statistics on each router, these network metrics can potentially be utilized to understand network performance. In this work, through experiments on a large HPC system, we quantify the impact of network congestion on various applications' performance in terms of execution time, and we correlate application performance with network metrics. Our results demonstrate diverse impacts of network congestion: while applications with intensive MPI operations (such as HACC and MILC) suffer from more than 40% extension in their execution times under network congestion, applications with less intensive MPI operations (such as Graph500 and HPCG) are mostly not affected. We also demonstrate that a stall-to-flit ratio metric derived from Aries network counters is positively correlated with performance degradation and, thus, this metric can serve as an indicator of network congestion in HPC systems.
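
A minimal sketch of how such a stall-to-flit indicator could be computed from per-router counters; the counter field names are placeholders, not actual Aries counter names.

```python
# Aggregate stalled cycles over forwarded flits as a congestion indicator.
def stall_to_flit_ratio(counters):
    stalls = sum(c["stalls"] for c in counters)
    flits = sum(c["flits"] for c in counters)
    return stalls / flits if flits else 0.0

router_tiles = [
    {"stalls": 120_000, "flits": 900_000},
    {"stalls": 450_000, "flits": 800_000},   # a congested tile
]
ratio = stall_to_flit_ratio(router_tiles)
print(f"stall-to-flit ratio = {ratio:.2f}")  # higher suggests more congestion
```
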
2020-12-01
Yang, R., Ouyang, X., Chen, Y., Townend, P., Xu, J..  2018.  Intelligent Resource Scheduling at Scale: A Machine Learning Perspective. 2018 IEEE Symposium on Service-Oriented System Engineering (SOSE). :132—141.

Resource scheduling in a computing system addresses the problem of packing tasks with multi-dimensional resource requirements and non-functional constraints. The heterogeneity of workload and server characteristics exhibited in Cloud-scale or Internet-scale systems adds further complexity and new challenges to the problem. Compared with existing solutions based on ad-hoc heuristics, Machine Learning (ML) has the potential to further improve the efficiency of resource management in large-scale systems. In this paper we describe and discuss how ML could be used to understand automatically both workloads and environments, and to help cope with scheduling-related challenges such as consolidating co-located workloads, handling resource requests, guaranteeing applications' QoS, and mitigating tail stragglers. We introduce a generalized ML-based solution to large-scale resource scheduling and demonstrate its effectiveness through a case study that deals with performance-centric node classification and straggler mitigation. We believe that an ML-based method will help to achieve architectural optimization and efficiency improvement.
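
As one concrete instance of the case study's theme, here is a small sketch of straggler detection plus a performance-centric node pick, under invented thresholds and node features.

```python
# Flag tasks lagging the population, then re-launch on a fast node.
import statistics

def stragglers(progress, slowdown=1.5):
    """progress: task -> elapsed/expected ratio; flag the slow tail."""
    med = statistics.median(progress.values())
    return [t for t, p in progress.items() if p > slowdown * med]

def pick_fast_node(nodes):
    # Performance-centric node classification reduced to a score sort.
    return max(nodes, key=lambda n: n["cpu_score"] - n["load"])

progress = {"t1": 1.0, "t2": 1.1, "t3": 3.2}     # t3 is lagging
nodes = [{"name": "n1", "cpu_score": 8.0, "load": 6.0},
         {"name": "n2", "cpu_score": 7.0, "load": 1.0}]
for task in stragglers(progress):
    print(f"re-launching {task} on {pick_fast_node(nodes)['name']}")
```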

2020-11-02
Wang, Nan, Yao, Manting, Jiang, Dongxu, Chen, Song, Zhu, Yu.  2018.  Security-Driven Task Scheduling for Multiprocessor System-on-Chips with Performance Constraints. 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). :545—550.

The high penetration of third-party intellectual property (3PIP) brings a high risk of malicious inclusions and data leakage in products due to planted hardware Trojans, and system-level security constraints have recently been proposed to protect MPSoCs against hardware Trojans. However, secret communication can still be established in the context of the proposed security constraints, and thus another type of security constraint is also introduced to fully prevent such malicious inclusions. In addition, fulfilling the security constraints incurs serious schedule-length overhead, so a two-stage performance-constrained task scheduling algorithm is proposed to maintain most of the security constraints. In the first stage, the schedule length is iteratively reduced by assigning sets of adjacent tasks to the same core after calculating the maximum weight independent set of a graph consisting of all timing-critical paths. In the second stage, tasks are assigned to proper IP vendors and scheduled to time periods while minimizing the number of cores required. The experimental results show that our work reduces the schedule length of a task graph while only a small number of security constraints are violated.
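
A sketch of the core step in stage one: an independent set over a conflict graph of timing-critical paths. Exact maximum-weight independent set is NP-hard, so a greedy heuristic stands in here, with an invented instance.

```python
# Greedy maximum-weight independent set over conflicting critical paths.
def greedy_mwis(weights, edges):
    adj = {v: set() for v in weights}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    chosen, banned = [], set()
    for v in sorted(weights, key=weights.get, reverse=True):
        if v not in banned:          # pick heaviest paths first
            chosen.append(v)
            banned |= adj[v] | {v}   # exclude all conflicting neighbors
    return chosen

weights = {"p1": 9, "p2": 7, "p3": 6, "p4": 3}   # per-path weights
edges = [("p1", "p2"), ("p2", "p3")]             # overlapping paths conflict
print(greedy_mwis(weights, edges))               # ['p1', 'p3', 'p4']
```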

2020-02-26
Tran, Geoffrey Phi, Walters, John Paul, Crago, Stephen.  2019.  Increased Fault-Tolerance and Real-Time Performance Resiliency for Stream Processing Workloads through Redundancy. 2019 IEEE International Conference on Services Computing (SCC). :51–55.

Data analytics and telemetry have become paramount to monitoring and maintaining quality-of-service in addition to business analytics. Stream processing, a model where a network of operators receives and processes continuously arriving discrete elements, is well-suited for these needs. Current and previous studies and frameworks have focused on continuity of operations and aggregate performance metrics. However, real-time performance and tail latency are also important. Timing errors caused by either performance or failed communication faults also affect real-time performance more drastically than aggregate metrics. In this paper, we introduce redundancy in the stream data to improve the real-time performance and resiliency to timing errors caused by either performance or failed communication faults. We also address limitations in previous solutions using a fine-grained acknowledgment tracking scheme to both increase the effectiveness for resiliency to performance faults and enable effectiveness for failed communication faults. Our results show that fine-grained acknowledgment schemes can improve the tail and mean latencies by approximately 30%. We also show that these schemes can improve resiliency to performance faults compared to existing work. Our improvements result in 47.4% to 92.9% fewer missed deadlines compared to 17.3% to 50.6% for comparable topologies and redundancy levels in the state of the art. Finally, we show that redundancies of 25% to 100% can reduce the number of data elements that miss their deadline constraints by 0.76% to 14.04% for applications with high fan-out and by 7.45% up to 50% for applications with no fan-out.
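
A minimal sketch of fine-grained acknowledgment tracking as we read it: per-element acks, so only elements that actually miss their ack are re-sent. The class, timeout, and IDs are illustrative, not the paper's implementation.

```python
# Track acknowledgments per stream element; re-send only timed-out ones.
import time

class AckTracker:
    def __init__(self, timeout=0.5):
        self.pending = {}            # element id -> (payload, send time)
        self.timeout = timeout

    def sent(self, eid, payload):
        self.pending[eid] = (payload, time.monotonic())

    def acked(self, eid):
        self.pending.pop(eid, None)  # element confirmed downstream

    def to_resend(self):
        now = time.monotonic()
        return [(eid, p) for eid, (p, t0) in self.pending.items()
                if now - t0 > self.timeout]

tracker = AckTracker()
tracker.sent(1, "reading-a")
tracker.sent(2, "reading-b")
tracker.acked(1)                     # the replica confirmed element 1 only
time.sleep(0.6)
print(tracker.to_resend())           # [(2, 'reading-b')] missed its ack
```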

2019-12-09
Li, Wenjuan, Cao, Jian, Hu, Keyong, Xu, Jie, Buyya, Rajkumar.  2019.  A Trust-Based Agent Learning Model for Service Composition in Mobile Cloud Computing Environments. IEEE Access. 7:34207–34226.
Mobile cloud computing has the features of resource constraints, openness, and uncertainty, which lead to high uncertainty in its quality of service (QoS) provision and serious security risks. Therefore, when faced with complex service requirements, an efficient and reliable service composition approach is extremely important. In addition, preference learning is also a key factor in improving user experience. To address these issues, this paper introduces a three-layered trust-enabled service composition model for mobile cloud computing systems. Based on the fuzzy comprehensive evaluation method, we design a novel and integrated trust management model. Service brokers are equipped with a learning module enabling them to better analyze customers' service preferences, especially in cases when the details of a service request are not totally disclosed. Because traditional methods cannot fully reflect the autonomous collaboration between mobile cloud entities, a prototype system based on the multi-agent platform JADE is implemented to evaluate the efficiency of the proposed strategies. The experimental results show that our approach improves the transaction success rate and user satisfaction.
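
A small sketch of the fuzzy comprehensive evaluation step at the heart of such a trust model: factor weights combined with a membership matrix, then defuzzified to a trust score. Factors, grades, and numbers are invented.

```python
# Fuzzy comprehensive evaluation: b = w . R, then defuzzify to a score.
import numpy as np

# Rows: evaluation factors (QoS history, recommendations, direct trust).
# Columns: membership in grades (low, medium, high).
R = np.array([[0.1, 0.3, 0.6],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
w = np.array([0.5, 0.2, 0.3])        # factor weights, sum to 1

b = w @ R                            # fuzzy evaluation vector over grades
grades = np.array([0.2, 0.5, 0.9])   # defuzzify with per-grade scores
trust = float(b @ grades)
print(b, trust)                      # compose with the highest-trust service
```
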
2018-05-09
Rahbari, D., Kabirzadeh, S., Nickray, M..  2017.  A security aware scheduling in fog computing by hyper heuristic algorithm. 2017 3rd Iranian Conference on Intelligent Systems and Signal Processing (ICSPIS). :87–92.

Fog computing provides a new architecture for the implementation of the Internet of Things (IoT), connecting sensor nodes to the cloud using the edge of the network. This structure has improved latency and energy consumption compared with the cloud. In this heterogeneous and distributed environment, resource allocation is very important. Hence, scheduling is a challenge: productivity must increase and resources must be allocated appropriately to tasks. Programs that run in this environment should be protected from intruders. We consider three parameters, authentication, integrity, and confidentiality, to maintain security in fog devices; these parameters incur time and computational overhead. In the proposed approach, we schedule modules to run on fog devices using heuristic algorithms based on a data mining technique. The objective function includes CPU utilization, bandwidth, and security overhead. We compare the proposed algorithm with several heuristic algorithms. The results show that our proposed algorithm improves average energy consumption by 63.27% and cost by 44.71% relative to the PSO, ACO, and SA algorithms.

2018-05-02
Rjoub, G., Bentahar, J..  2017.  Cloud Task Scheduling Based on Swarm Intelligence and Machine Learning. 2017 IEEE 5th International Conference on Future Internet of Things and Cloud (FiCloud). :272–279.

Cloud computing is an extension of parallel and distributed computing. Cloud computing technology is more and more widely used, and one of the fundamental issues in this cloud environment is task scheduling. However, scheduling in cloud environments is a difficult problem since it is basically NP-complete. Thus, many variants based on approximation techniques, especially those inspired by Swarm Intelligence (SI), have been proposed. This paper proposes a machine learning algorithm that guides the cloud in choosing a scheduling technique, using multi-criteria decision making to optimize performance. The main contribution of our work is to minimize the makespan of a given task set. The new strategy is simulated using the CloudSim toolkit package, where the impact of the algorithm is checked with different numbers of VMs varying from 2 to 50 and different task sizes between 30 bytes and 2700 bytes. Experiment results show that the proposed algorithm minimizes the execution time and the makespan by between 7% and 75%, and improves the performance of load-balancing scheduling.

2018-02-21
Lu, Y., Chen, G., Luo, L., Tan, K., Xiong, Y., Wang, X., Chen, E..  2017.  One more queue is enough: Minimizing flow completion time with explicit priority notification. IEEE INFOCOM 2017 - IEEE Conference on Computer Communications. :1–9.

Ideally, minimizing the flow completion time (FCT) requires millions of priorities supported by the underlying network so that each flow has its unique priority. However, in production datacenters, the available switch priority queues for flow scheduling are very limited (merely 2 or 3). This practical constraint seriously degrades the performance of previous approaches. In this paper, we introduce Explicit Priority Notification (EPN), a novel scheduling mechanism which emulates fine-grained priorities (i.e., desired priorities or DP) using only two switch priority queues. EPN can support various flow scheduling disciplines with or without flow size information. We have implemented EPN on commodity switches and evaluated its performance with both testbed experiments and extensive simulations. Our results show that, with flow size information, EPN achieves comparable FCT as pFabric that requires clean-slate switch hardware. And EPN also outperforms TCP by up to 60.5% if it bins the traffic into two priority queues according to flow size. In information-agnostic setting, EPN outperforms PIAS with two priority queues by up to 37.7%. To the best of our knowledge, EPN is the first system that provides millions of priorities for flow scheduling with commodity switches.
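
A minimal sketch of the size-based binning the evaluation against TCP describes: flows below a size threshold ride the high-priority queue, so short flows are not stuck behind long ones. The threshold and flow data are illustrative, not EPN's actual configuration.

```python
# Bin flows into two switch priority queues according to flow size.
HIGH, LOW = 0, 1
THRESHOLD = 100 * 1024                    # bytes; invented demotion point

def queue_for(flow_size):
    """Small flows get the high-priority queue to cut completion time."""
    return HIGH if flow_size < THRESHOLD else LOW

flows = {"query-rpc": 12 * 1024, "backup": 3 * 1024 * 1024}
for name, size in flows.items():
    label = "high" if queue_for(size) == HIGH else "low"
    print(f"{name}: {label}-priority queue")  # the short RPC jumps ahead
```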

Zheng, H., Zhang, X..  2017.  Optimizing Task Assignment with Minimum Cost on Heterogeneous Embedded Multicore Systems Considering Time Constraint. 2017 ieee 3rd international conference on big data security on cloud (bigdatasecurity), ieee international conference on high performance and smart computing (hpsc), and ieee international conference on intelligent data and security (ids). :225–230.
Time and cost are the most critical performance metrics for computer systems, including embedded systems, and especially for battery-based embedded systems such as PCs, mainframe computers, and smartphones. Most previous work focuses on saving energy in a deterministic way by taking the average or worst scenario into account. However, such deterministic approaches are usually inappropriate for modeling energy consumption because of uncertainties in conditional instructions on processors and time-varying external environments. By studying the relationship between energy consumption, execution time, and completion probability of tasks on heterogeneous multi-core architectures, this paper proposes an optimal energy-efficiency and system-performance model and the OTHAP (Optimizing Task Heterogeneous Assignment with Probability) algorithm to address the Processor and Voltage Assignment with Probability (PVAP) problem for data-dependent aperiodic tasks in real-time embedded systems, ensuring that all tasks can be completed under the time constraint with a guaranteed probability. We adopt a task DAG (Directed Acyclic Graph) to model the PVAP problem. We first use a processor scheduling algorithm to map the task DAG onto a set of voltage-variable processors, and then use our dynamic programming algorithm to assign a proper voltage to each task. The experimental results demonstrate that our approach outperforms state-of-the-art algorithms in this field (maximum improvement of 24.6%).
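
A toy sketch of the voltage-assignment trade-off being addressed, for a chain of tasks with per-voltage (energy, time, completion-probability) options; exhaustive enumeration stands in for the paper's dynamic programming at this size, and all numbers are invented.

```python
# Pick a voltage per task: minimize energy subject to a deadline and a
# minimum probability that the whole chain completes in time.
import itertools

tasks = [
    [(6.0, 1, 0.99), (3.0, 2, 0.95)],  # task 0: (energy, time, prob) per voltage
    [(8.0, 2, 0.99), (4.0, 3, 0.90)],
    [(5.0, 1, 0.98), (2.0, 2, 0.92)],
]
DEADLINE, P_MIN = 6, 0.85

best = None
for choice in itertools.product(*tasks):   # tiny instance: enumerate all
    energy = sum(e for e, _, _ in choice)
    total_time = sum(t for _, t, _ in choice)
    prob = 1.0
    for _, _, p in choice:
        prob *= p                          # independent completion events
    if (total_time <= DEADLINE and prob >= P_MIN
            and (best is None or energy < best[0])):
        best = (energy, choice)
print(best)                                # cheapest feasible assignment
```
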
2014-09-26
Kashyap, V., Wiedermann, B., Hardekopf, B..  2011.  Timing- and Termination-Sensitive Secure Information Flow: Exploring a New Approach. 2011 IEEE Symposium on Security and Privacy (SP). :413–428.

Secure information flow guarantees the secrecy and integrity of data, preventing an attacker from learning secret information (secrecy) or injecting untrusted information (integrity). Covert channels can be used to subvert these security guarantees; for example, timing and termination channels can, either intentionally or inadvertently, violate these guarantees by modifying the timing or termination behavior of a program based on secret or untrusted data. Attacks using these covert channels have been published and are known to work in practice; as techniques to prevent non-covert channels are becoming increasingly practical, covert channels are likely to become even more attractive for attackers to exploit. The goal of this paper is to understand the subtleties of timing- and termination-sensitive noninterference, explore the space of possible strategies for enforcing noninterference guarantees, and formalize the exact guarantees that these strategies can enforce. As a result of this effort we create a novel strategy that provides stronger security guarantees than existing work, and we clarify claims in existing work about what guarantees can be made.
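
A toy Python illustration of the timing channel being formalized: a secret-dependent branch leaks through execution time, while the balanced variant performs the work unconditionally; workload sizes are arbitrary.

```python
# Secret-dependent timing versus a balanced, (near) constant-time variant.
import time

def leaky(secret_bit):
    if secret_bit:                            # timing depends on the secret
        return sum(i * i for i in range(200_000))
    return 0

def balanced(secret_bit):
    a = sum(i * i for i in range(200_000))    # do the work unconditionally
    return a if secret_bit else 0             # select without a costly branch

for f in (leaky, balanced):
    for bit in (0, 1):
        t0 = time.perf_counter()
        f(bit)
        print(f.__name__, bit, f"{time.perf_counter() - t0:.4f}s")
```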