Bibliography
In this paper, we propose a security- and cost-aware scheduling heuristic for real-time workflow jobs that process Internet of Things (IoT) data with various security requirements. The environment under study is a four-tier architecture consisting of IoT, mist, fog, and cloud layers. The resources in the mist, fog, and cloud tiers are considered to be heterogeneous. The proposed scheduling approach is compared to a baseline strategy that is security-aware but not cost-aware. The performance evaluation of both heuristics is conducted via simulation, under different values of security level probabilities for the initial IoT input data of the entry tasks of the workflow jobs.
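To make the idea of a security- and cost-aware placement rule concrete, the sketch below shows one plausible form such a heuristic could take: among the mist/fog/cloud resources that satisfy a task's security level and deadline, pick the cheapest. The field names, tie-breaking rule, and resource model are assumptions for illustration, not the paper's actual algorithm.

```python
from dataclasses import dataclass

# Minimal sketch of a security- and cost-aware placement rule (illustrative only;
# the fields and tie-breaking are assumptions, not the paper's algorithm).

@dataclass
class Resource:
    name: str
    tier: str            # "mist", "fog", or "cloud"
    security_level: int  # highest security level the resource can satisfy
    cost_per_sec: float  # monetary cost per second of use
    speed: float         # relative processing speed

@dataclass
class Task:
    name: str
    length: float        # work, in relative units
    security_level: int  # security level required by its IoT input data
    deadline: float      # latest allowed finish time, in relative units

def place(task: Task, resources: list[Resource]) -> Resource | None:
    """Pick the cheapest resource that satisfies security and deadline; None if infeasible."""
    feasible = [r for r in resources
                if r.security_level >= task.security_level
                and task.length / r.speed <= task.deadline]
    if not feasible:
        return None
    # Cost-aware choice: minimize monetary cost, break ties by finishing earlier.
    return min(feasible, key=lambda r: (task.length / r.speed * r.cost_per_sec,
                                        task.length / r.speed))

if __name__ == "__main__":
    pool = [Resource("mist-1", "mist", 1, 0.01, 1.0),
            Resource("fog-1", "fog", 2, 0.05, 4.0),
            Resource("cloud-1", "cloud", 3, 0.10, 10.0)]
    t = Task("entry-task", length=8.0, security_level=2, deadline=3.0)
    print(place(t, pool))  # cloud-1 wins here: it is both feasible and cheaper overall
```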
To meet the practical needs of operating system localization and of high-security operating systems, this paper proposes an inter-core communication mechanism for multi-core embedded high-security operating systems that is centered on per-core private memory and builds on the cache mechanism of DSP processors such as the Feiteng design. To make the mechanism applicable to a multi-core embedded high-security operating system, the paper also analyzes the determinism of inter-core communication in combination with the priority scheduling scheme used in the design of our actual operating system. The analysis shows that, under this communication mechanism, the end-to-end delay has an upper bound, so the determinism of the communication mechanism is guaranteed and it can be applied to multi-core high-security embedded operating systems.
Cloud computing (CC) systems have become widespread computational paradigms for offering highly scalable and elastic services. Computing resources in a cloud environment should be scheduled so that providers utilize their resources efficiently while users obtain low-cost applications. The most prominent need in job scheduling is to ensure Quality of Service (QoS) to the user. Because scheduling takes place within the boundary of a third party, assuring its security is an important requirement. The main objective of our work is to offer QoS, i.e., cost, makespan, and minimized task migration, with security enforcement; moreover, the proposed algorithm guarantees that admitted requests are executed without violating the service level agreement (SLA). These objectives are attained by the proposed Fuzzy Ant Bee Colony algorithm. The experimental outcome confirms that the proposed algorithm achieves the secured job scheduling objective with assured QoS.
Resource scheduling in a computing system addresses the problem of packing tasks with multi-dimensional resource requirements and non-functional constraints. The heterogeneity of workload and server characteristics exhibited in Cloud-scale or Internet-scale systems adds further complexity and new challenges to the problem. Compared with existing solutions based on ad-hoc heuristics, Machine Learning (ML) has the potential to further improve the efficiency of resource management in large-scale systems. In this paper we describe and discuss how ML could be used to automatically understand both workloads and environments, and to help cope with scheduling-related challenges such as consolidating co-located workloads, handling resource requests, guaranteeing applications' QoS, and mitigating tail stragglers. We introduce a generalized ML-based solution to large-scale resource scheduling and demonstrate its effectiveness through a case study that deals with performance-centric node classification and straggler mitigation. We believe that an ML-based method will help to achieve architectural optimization and efficiency improvement.
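As an illustration of the performance-centric node classification mentioned in the case study, the sketch below labels nodes "fast" or "slow" from historical task runtimes. The feature (mean runtime), threshold, and labels are assumptions chosen for illustration, not the paper's model.

```python
from statistics import mean

# Illustrative node classification from historical task runtimes, one ingredient
# of straggler mitigation (feature, threshold, and labels are assumptions).

def classify_nodes(history: dict[str, list[float]], slow_factor: float = 1.5) -> dict[str, str]:
    """Label a node 'slow' if its mean task runtime exceeds slow_factor x the cluster mean."""
    cluster_mean = mean(rt for runtimes in history.values() for rt in runtimes)
    return {node: ("slow" if mean(runtimes) > slow_factor * cluster_mean else "fast")
            for node, runtimes in history.items()}

if __name__ == "__main__":
    observed = {"node-a": [1.0, 1.2, 0.9],
                "node-b": [1.1, 1.0, 1.3],
                "node-c": [2.8, 3.1, 2.9]}   # likely straggler host
    print(classify_nodes(observed))
    # A scheduler could then avoid placing latency-critical tasks on 'slow' nodes
    # or launch speculative copies of tasks already running there.
```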
The high penetration of third-party intellectual property (3PIP) brings a high risk of malicious inclusions and data leakage in products due to planted hardware Trojans, and system-level security constraints have recently been proposed to protect MPSoCs against hardware Trojans. However, secret communication can still be established in the context of the proposed security constraints, and thus another type of security constraint is introduced to fully prevent such malicious inclusions. In addition, fulfilling the security constraints incurs a serious schedule-length overhead, so a two-stage performance-constrained task scheduling algorithm is proposed to maintain most of the security constraints. In the first stage, the schedule length is iteratively reduced by assigning sets of adjacent tasks to the same core after computing the maximum weight independent set of a graph consisting of all timing-critical paths. In the second stage, tasks are assigned to proper IP vendors and scheduled to time periods while minimizing the number of cores required. The experimental results show that our work reduces the schedule length of a task graph while only a small number of security constraints are violated.
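The first stage relies on a maximum weight independent set computation over a conflict graph built from timing-critical paths. The sketch below shows a simple greedy approximation of that step; the conflict-graph construction, vertex meaning, and weights are assumptions for illustration and do not reproduce the paper's exact method.

```python
# Greedy sketch of a maximum-weight independent set step (illustrative only).
# Vertices could stand for sets of adjacent tasks on timing-critical paths,
# weighted by the schedule-length reduction from merging them onto one core.

def greedy_mwis(weights: dict[str, float], edges: set[frozenset[str]]) -> set[str]:
    """Greedily keep high-weight vertices that are not adjacent to an already chosen one."""
    chosen: set[str] = set()
    for v in sorted(weights, key=weights.get, reverse=True):
        if all(frozenset({v, u}) not in edges for u in chosen):
            chosen.add(v)
    return chosen

if __name__ == "__main__":
    w = {"A": 5.0, "B": 3.0, "C": 4.0, "D": 2.0}
    e = {frozenset({"A", "C"}), frozenset({"B", "D"})}
    print(greedy_mwis(w, e))  # {'A', 'B'}: highest total weight with no conflicting pair
```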
Data analytics and telemetry have become paramount to monitoring and maintaining quality of service, in addition to business analytics. Stream processing, a model in which a network of operators receives and processes continuously arriving discrete elements, is well suited for these needs. Current and previous studies and frameworks have focused on continuity of operations and aggregate performance metrics. However, real-time performance and tail latency are also important. Timing errors caused by either performance faults or failed communication also affect real-time performance more drastically than aggregate metrics. In this paper, we introduce redundancy in the stream data to improve real-time performance and resiliency to timing errors caused by either performance faults or failed communication. We also address limitations in previous solutions using a fine-grained acknowledgment tracking scheme to both increase the effectiveness of resiliency to performance faults and enable resiliency to failed communication faults. Our results show that fine-grained acknowledgment schemes can improve the tail and mean latencies by approximately 30%. We also show that these schemes can improve resiliency to performance faults compared to existing work. Our improvements result in 47.4% to 92.9% fewer missed deadlines, compared to 17.3% to 50.6% for comparable topologies and redundancy levels in the state of the art. Finally, we show that redundancies of 25% to 100% can reduce the number of data elements that miss their deadline constraints by 0.76% to 14.04% for applications with high fan-out and by 7.45% up to 50% for applications with no fan-out.
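To illustrate the idea of fine-grained, per-element acknowledgment tracking over redundant stream copies, the sketch below completes an element on the first acknowledgment from any copy and reports elements that remain unacknowledged past their deadline. The class, data model, and ack protocol are assumptions for illustration and are not the framework's actual implementation.

```python
import time

# Minimal sketch of fine-grained acknowledgment tracking for redundant stream
# elements (illustrative only; names and protocol are assumptions).

class AckTracker:
    def __init__(self) -> None:
        self.pending: dict[int, float] = {}  # element_id -> deadline (epoch seconds)

    def emit(self, element_id: int, deadline: float) -> None:
        """Register an element (and its redundant copies) awaiting acknowledgment."""
        self.pending[element_id] = deadline

    def ack(self, element_id: int) -> None:
        """The first acknowledgment from any redundant copy completes the element."""
        self.pending.pop(element_id, None)

    def missed_deadlines(self, now: float | None = None) -> list[int]:
        """Elements still unacknowledged past their deadline (timing errors)."""
        now = time.time() if now is None else now
        return [eid for eid, d in self.pending.items() if d < now]

if __name__ == "__main__":
    tracker = AckTracker()
    tracker.emit(1, deadline=time.time() + 0.05)
    tracker.ack(1)                                # acknowledged in time by some copy
    tracker.emit(2, deadline=time.time() - 0.01)  # already past its deadline
    print(tracker.missed_deadlines())             # [2]
```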
Fog computing provides a new architecture for implementing the Internet of Things (IoT), connecting sensor nodes to the cloud through the edge of the network. This structure improves latency and energy consumption compared to the cloud. In this heterogeneous and distributed environment, resource allocation is very important; hence, scheduling is a challenge for increasing productivity and allocating resources appropriately to tasks. Programs that run in this environment should be protected from intruders. We consider three parameters, authentication, integrity, and confidentiality, to maintain security in fog devices. These parameters carry time and computational overhead. In the proposed approach, we schedule the modules to run on fog devices using heuristic algorithms based on a data mining technique. The objective function includes CPU utilization, bandwidth, and security overhead. We compare the proposed algorithm with several heuristic algorithms. The results show that our proposed algorithm improves average energy consumption by 63.27% and cost by 44.71% relative to the PSO, ACO, and SA algorithms.
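One plausible way to combine CPU utilization, bandwidth, and security overhead into a single objective is a weighted sum, sketched below. The weights, normalization, and minimization direction are assumptions made for illustration; the paper's exact objective is not reproduced here.

```python
# Illustrative weighted-sum objective over CPU utilization, bandwidth use, and
# security overhead (weights and normalization are assumptions).

def objective(cpu_util: float, bandwidth_use: float, security_overhead: float,
              weights: tuple[float, float, float] = (0.4, 0.3, 0.3)) -> float:
    """Lower is better; each term is assumed normalized to [0, 1]."""
    w_cpu, w_bw, w_sec = weights
    return w_cpu * cpu_util + w_bw * bandwidth_use + w_sec * security_overhead

if __name__ == "__main__":
    # Compare two candidate placements of a module on fog devices.
    print(objective(0.7, 0.5, 0.2))  # 0.49
    print(objective(0.4, 0.6, 0.4))  # 0.46 -> preferred under these weights
```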
Cloud computing is an extension of parallel and distributed computing. The technology of cloud computing is becoming more and more widely used, and one of the fundamental issues in this cloud environment is task scheduling. However, scheduling in cloud environments is a difficult problem since it is basically NP-complete. Thus, many variants based on approximation techniques, especially those inspired by Swarm Intelligence (SI), have been proposed. This paper proposes a machine learning algorithm that guides the cloud in choosing a scheduling technique, using multi-criteria decision making to optimize performance. The main contribution of our work is to minimize the makespan of a given task set. The new strategy is simulated using the CloudSim toolkit, where the impact of the algorithm is checked with different numbers of VMs varying from 2 to 50 and different task sizes between 30 bytes and 2700 bytes. Experimental results show that the proposed algorithm reduces the execution time and the makespan by between 7% and 75%, and improves the performance of load balancing scheduling.
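For context on the makespan objective, the sketch below shows one standard candidate technique such a selector might choose among: greedy longest-task-first assignment to the earliest-available VM, which approximately minimizes makespan. This is an illustrative baseline, not the paper's algorithm.

```python
import heapq

# Greedy longest-processing-time-first assignment to the earliest-free VM,
# an approximate makespan minimizer (illustrative baseline only).

def greedy_makespan(task_lengths: list[float], num_vms: int) -> float:
    """Assign each task (longest first) to the VM that frees up earliest; return the makespan."""
    heap = [(0.0, vm) for vm in range(num_vms)]  # (finish_time, vm_id)
    heapq.heapify(heap)
    for length in sorted(task_lengths, reverse=True):
        finish, vm = heapq.heappop(heap)
        heapq.heappush(heap, (finish + length, vm))
    return max(finish for finish, _ in heap)

if __name__ == "__main__":
    print(greedy_makespan([270.0, 120.0, 90.0, 30.0, 200.0], num_vms=2))  # 360.0
```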
Ideally, minimizing the flow completion time (FCT) requires millions of priorities supported by the underlying network so that each flow has its own unique priority. However, in production datacenters, the switch priority queues available for flow scheduling are very limited (merely 2 or 3). This practical constraint seriously degrades the performance of previous approaches. In this paper, we introduce Explicit Priority Notification (EPN), a novel scheduling mechanism that emulates fine-grained priorities (i.e., desired priorities, or DP) using only two switch priority queues. EPN can support various flow scheduling disciplines with or without flow size information. We have implemented EPN on commodity switches and evaluated its performance with both testbed experiments and extensive simulations. Our results show that, with flow size information, EPN achieves FCT comparable to pFabric, which requires clean-slate switch hardware. EPN also outperforms TCP by up to 60.5% when it bins the traffic into two priority queues according to flow size. In the information-agnostic setting, EPN outperforms PIAS with two priority queues by up to 37.7%. To the best of our knowledge, EPN is the first system that provides millions of priorities for flow scheduling with commodity switches.
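The size-based binning mentioned above can be illustrated by a simple threshold rule that sends short flows to the high-priority queue and long flows to the low-priority queue. The threshold value and flow representation are assumptions for illustration; EPN's actual emulation of fine-grained priorities is more involved than this.

```python
# Illustrative two-queue binning of flows by size (threshold is an assumption).

HIGH_PRIORITY, LOW_PRIORITY = 0, 1

def bin_by_size(flow_size_bytes: int, threshold_bytes: int = 100 * 1024) -> int:
    """Short flows go to the high-priority queue; long flows to the low-priority queue."""
    return HIGH_PRIORITY if flow_size_bytes <= threshold_bytes else LOW_PRIORITY

if __name__ == "__main__":
    for size in (4 * 1024, 64 * 1024, 10 * 1024 * 1024):
        print(size, "->", "high" if bin_by_size(size) == HIGH_PRIORITY else "low")
```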
Secure information flow guarantees the secrecy and integrity of data, preventing an attacker from learning secret information (secrecy) or injecting untrusted information (integrity). Covert channels can be used to subvert these security guarantees; for example, timing and termination channels can, either intentionally or inadvertently, violate these guarantees by modifying the timing or termination behavior of a program based on secret or untrusted data. Attacks using these covert channels have been published and are known to work in practice; as techniques to prevent non-covert channels are becoming increasingly practical, covert channels are likely to become even more attractive for attackers to exploit. The goal of this paper is to understand the subtleties of timing- and termination-sensitive noninterference, explore the space of possible strategies for enforcing noninterference guarantees, and formalize the exact guarantees that these strategies can enforce. As a result of this effort we create a novel strategy that provides stronger security guarantees than existing work, and we clarify claims in existing work about what guarantees can be made.