Bibliography
The operating system is extremely important for both "Made in China 2025" and the ubiquitous electric power Internet of Things. By investigating five key requirements for the ubiquitous electric power Internet of Things at the OS level (performance, ecosystem, information security, functional safety, and developer framework), this paper introduces the intelligent NARI microkernel operating system and its innovative schemes. It is implemented with a microkernel architecture based on trusted computing. Technologies such as a process-based fine-grained real-time scheduling algorithm, an efficient sigma0 message channel, and service-process binding on multicore are applied to improve system performance. For better ecosystem expansion, the POSIX standard API is supported, along with Linux containers, embedded virtualization, and intelligent interconnection technology. A native process sandbox and mimicry defense are considered in the security mechanism design. Multi-level exception handling and multidimensional partition isolation are adopted to provide high reliability. Theorem-proving tools based on Isabelle/HOL are used to verify the design and implementation of the NARI microkernel OS. A developer framework including tools, kits, and specifications is discussed for developing both system software and user software on this IoT OS.
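As a rough, generic illustration of fixed-priority scheduling at process granularity (not NARI's actual implementation; all names and fields here are hypothetical), the sketch below keeps ready processes in a priority heap and carries a core-affinity hint in the spirit of service-process binding on multicore:

```python
# Generic sketch: fixed-priority ready queue at process granularity.
import heapq

class Process:
    def __init__(self, pid, priority, core_affinity=None):
        # core_affinity: hypothetical hint for binding a service process to a core
        self.pid, self.priority, self.core_affinity = pid, priority, core_affinity
    def __lt__(self, other):              # heapq pops the lowest value first
        return self.priority < other.priority

ready = []
for p in [Process("net_svc", 1, core_affinity=0), Process("logger", 5),
          Process("motor_ctl", 0, core_affinity=1)]:
    heapq.heappush(ready, p)
print("next to run:", heapq.heappop(ready).pid)   # motor_ctl (priority 0 = highest)
```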
Use-After-Free (UAF) vulnerabilities are caused by a program operating on a dangling pointer and can be exploited to compromise critical software systems. While there have been many tools to mitigate UAF vulnerabilities, UAF remains one of the most common attack vectors. UAF is particularly difficult to detect in concurrent programs, in which a UAF may occur only under rare thread schedules. In this paper, we present a novel technique, UFO, that can precisely predict UAFs based on a single observed execution trace, with a provably higher detection capability than existing techniques and no false positives. The key technical advancement of UFO is an extended maximal thread causality model that captures the largest possible set of feasible traces that can be inferred from a given multithreaded execution trace. By formulating UAF detection as a constraint-solving problem atop this model, we can explore a much larger thread-scheduling space than classical happens-before-based techniques. We have evaluated UFO on several large, complex, real-world C/C++ programs, including Chromium and Firefox. UFO scales to real-world systems with hundreds of millions of events in their executions and has detected a large number of real concurrency UAFs.
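A minimal sketch of how UAF prediction can be phrased as constraint solving, in the spirit of the model described above (a hypothetical four-event trace encoded with the z3 solver, not UFO's actual engine): each trace event gets an integer position variable, program order and lock mutual exclusion become constraints, and the solver searches for any feasible schedule that places the free before the racing use, even if the observed trace did not.

```python
from z3 import Int, Solver, Or, sat

# Hypothetical trace: thread 1 frees p inside a critical section; thread 2
# uses p inside another critical section on the same lock. Simplification:
# each critical section ends at its last listed event.
e = {name: Int(name) for name in ["t1_lock", "t1_free", "t2_lock", "t2_use"]}

s = Solver()
s.add(e["t1_lock"] < e["t1_free"])        # program order, thread 1
s.add(e["t2_lock"] < e["t2_use"])         # program order, thread 2
# Critical sections on the same lock must not interleave.
s.add(Or(e["t1_free"] < e["t2_lock"], e["t2_use"] < e["t1_lock"]))
# UAF condition: some feasible schedule runs the free before the use.
s.add(e["t1_free"] < e["t2_use"])

if s.check() == sat:
    print("feasible schedule exposing the UAF:", s.model())
```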
The advent of smart grids offers us the opportunity to better manage the electricity grid. One of the most interesting challenges in modern grids is consumer demand management. Indeed, developments in Information and Communication Technologies (ICTs) encourage the development of demand-side management systems. In this paper, we propose a distributed energy demand scheduling approach that uses minimal interactions between consumers to optimize the energy demand. We formulate consumption scheduling as a constrained optimization problem and use game theory to solve it. On the one hand, the proposed approach aims to reduce the total energy cost of a building's consumers, which requires cooperation among all the consumers to achieve the collective goal. On the other hand, the privacy of each user must be protected, which means that our distributed approach must operate with minimal information exchange. The performance evaluation shows that the proposed approach reduces the total energy cost, each consumer's individual cost, and the peak-to-average ratio.
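The following sketch illustrates one common way such a game is solved (a hypothetical quadratic price and round-robin best responses, not necessarily this paper's exact formulation): each consumer water-fills its demand against the aggregate load of the others, so only the aggregate, not individual schedules, is ever exchanged.

```python
import numpy as np

H, N = 24, 5                          # hours, consumers
rng = np.random.default_rng(0)
demand = rng.uniform(1.0, 3.0, N)     # each consumer's daily energy need (kWh)
x = np.outer(demand / H, np.ones(H))  # start from flat schedules

for _ in range(50):                   # round-robin best responses
    for i in range(N):
        others = x.sum(axis=0) - x[i]
        # Best response to a quadratic price sum(a * L_h^2): water-filling.
        # Solve min sum (others_h + x_h)^2  s.t.  sum x_h = demand_i, x_h >= 0.
        lam_lo, lam_hi = others.min(), others.max() + demand[i]
        for _ in range(60):           # bisection on the water level
            lam = (lam_lo + lam_hi) / 2
            if np.maximum(lam - others, 0.0).sum() > demand[i]:
                lam_hi = lam
            else:
                lam_lo = lam
        x[i] = np.maximum(lam - others, 0.0)

total = x.sum(axis=0)
print("total cost:", np.sum(0.1 * total ** 2), "PAR:", total.max() / total.mean())
```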
Scheduling in the cloud is a complex task due to the number and variety of available resources and the volatility of resource usage patterns, given that the resource configuration rests with the service provider. This complexity is compounded further when security issues and Quality of Service (QoS) are also factored in. The aim of this paper is to describe a model based on security as a key element on which cloud services rely and which affects the performance, cost, and time concerns within the security constraints of the cloud service. We define the Scheduling Security Model (SSM) and evaluate it through a worked example, showing that it can meet customer requirements for cost and quality of service in the required time.
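A toy illustration of security-constrained scheduling in the spirit of SSM (all names, sizes, and weights below are hypothetical): for each task, pick the cheapest resource that satisfies both the task's minimum security level and its deadline.

```python
tasks = [{"id": "t1", "deadline": 10.0, "min_security": 2},
         {"id": "t2", "deadline": 4.0,  "min_security": 1}]
resources = [{"id": "vm1", "speed": 1.0, "cost_per_s": 0.02, "security": 1},
             {"id": "vm2", "speed": 2.0, "cost_per_s": 0.09, "security": 3}]
work = {"t1": 8.0, "t2": 6.0}          # task sizes in abstract compute units

for t in tasks:
    feasible = [r for r in resources
                if r["security"] >= t["min_security"]
                and work[t["id"]] / r["speed"] <= t["deadline"]]
    # cheapest resource among those meeting security and deadline constraints
    best = min(feasible, key=lambda r: work[t["id"]] / r["speed"] * r["cost_per_s"])
    print(t["id"], "->", best["id"])
```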
To improve customer experience, datacenter operators offer support for simplifying application and resource management. For example, running workloads of workflows on behalf of customers is desirable, but requires increasingly sophisticated autoscaling policies, that is, policies that dynamically provision resources for the customer. Although selecting and tuning autoscaling policies is a challenging task for datacenter operators, so far relatively few studies have investigated the performance of autoscaling for workloads of workflows. Complementing previous knowledge, in this work we conduct the first comprehensive performance study in the field. Using trace-based simulation, we compare state-of-the-art autoscaling policies across multiple application domains, workload arrival patterns (e.g., burstiness), and system utilization levels. We further investigate the interplay between autoscaling and regular allocation policies, and the complexity cost of autoscaling. Our quantitative study focuses not only on traditional performance metrics and state-of-the-art elasticity metrics, but also on time- and memory-related autoscaling-complexity metrics. Our main results give strong quantitative evidence of previously unreported operational behavior, for example, that autoscaling policies perform differently across application domains and that allocation and provisioning policies should be co-designed.
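For readers unfamiliar with the policy class under study, here is a minimal trace-driven sketch of a threshold autoscaling policy (the trace values and thresholds are hypothetical), the kind of policy such studies compare:

```python
trace = [3, 5, 9, 14, 20, 18, 11, 6, 4, 2]   # queued tasks per interval
capacity_per_node, nodes = 4, 1
history = []
for queued in trace:
    utilization = queued / (nodes * capacity_per_node)
    if utilization > 0.8:                    # scale out when hot
        nodes += 1
    elif utilization < 0.3 and nodes > 1:    # scale in when cold
        nodes -= 1
    history.append(nodes)
print(history)                               # provisioned nodes over time
```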
Fog computing provides a new architecture for implementing the Internet of Things (IoT), connecting sensor nodes to the cloud through the edge of the network. This structure improves latency and energy consumption relative to the cloud. In this heterogeneous and distributed environment, resource allocation is very important, so scheduling is a challenge: productivity must increase while resources are allocated appropriately to tasks. Programs that run in this environment must also be protected from intruders. We consider three security parameters, authentication, integrity, and confidentiality, to maintain security on fog devices; these parameters carry time and computational overhead. In the proposed approach, we schedule modules to run on fog devices using heuristic algorithms based on data-mining techniques. The objective function includes CPU utilization, bandwidth, and security overhead. We compare the proposed algorithm with several heuristic algorithms. The results show that our algorithm improves average energy consumption by 63.27% and cost by 44.71% relative to the PSO, ACO, and SA algorithms.
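A hypothetical sketch of the kind of weighted objective described above (the overhead values and weights are illustrative, not the paper's): score a module-to-device assignment by CPU utilization, bandwidth, and the extra cost of the chosen security services.

```python
# Relative time overhead of each security service (hypothetical values).
SECURITY_OVERHEAD = {"authentication": 0.05, "integrity": 0.12, "confidentiality": 0.20}

def objective(cpu_util, bandwidth_mbps, services, w=(0.5, 0.3, 0.2)):
    sec = sum(SECURITY_OVERHEAD[s] for s in services)
    # Lower is better: a busy CPU and security overhead cost, bandwidth helps.
    return w[0] * cpu_util + w[1] * (1.0 / bandwidth_mbps) + w[2] * sec

print(objective(0.6, 20.0, ["authentication", "integrity"]))
```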
The controller vulnerabilities introduced by the separation of control and forwarding expose SDN to threats in which an attacker can gain remote access to the controller. This work proposes a controller architecture called the secure control plane (SCP) that enhances security and increases the difficulty of attack through the rotation of multiple heterogeneous controllers. Specifically, a dynamic scheduling method based on Bayesian Stackelberg games is put forward to maximize the defender's security reward during each migration. Second, a self-cleaning mechanism combined with the game strategy is introduced to raise the security level and form a closed-loop defense. Finally, experiments show quantitatively that the defender gains more security from the game-based strategy than from traditional strategies (pure and random), and that the self-cleaning mechanism keeps the control plane at a higher level of security.
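As a simplified illustration of game-based rotation (a zero-sum maximin reduction, not the paper's full Bayesian Stackelberg formulation; the payoff matrix is hypothetical), the defender can compute a mixed strategy over which heterogeneous controller variant to run next by linear programming:

```python
import numpy as np
from scipy.optimize import linprog

# R[d][a] = defender reward when variant d is live and the attacker plays a.
R = np.array([[ 1.0, -0.5,  0.8],
              [-0.3,  1.0,  0.6],
              [ 0.7,  0.2, -0.4]])
D, A = R.shape

# Maximize v subject to x . R[:, a] >= v for all a, sum(x) = 1, x >= 0.
c = np.zeros(D + 1); c[-1] = -1.0                  # linprog minimizes, so use -v
A_ub = np.hstack([-R.T, np.ones((A, 1))])          # v - x . R[:, a] <= 0
b_ub = np.zeros(A)
A_eq = np.array([[1.0] * D + [0.0]]); b_eq = [1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * D + [(None, None)])
print("rotation distribution:", res.x[:D].round(3), "game value:", -res.fun)
```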
Cloud computing is an extension of parallel and distributed computing. Cloud computing technology is becoming ever more widely used, and one of the fundamental issues in the cloud environment is task scheduling. However, scheduling in cloud environments is a difficult problem, since it is essentially NP-complete. Thus, many variants based on approximation techniques, especially those inspired by Swarm Intelligence (SI), have been proposed. This paper proposes a machine-learning algorithm that guides the cloud in choosing a scheduling technique, using multi-criteria decision making to optimize performance. The main contribution of our work is to minimize the makespan of a given task set. The new strategy is simulated using the CloudSim toolkit, where the impact of the algorithm is checked with the number of VMs varying from 2 to 50 and task sizes between 30 bytes and 2700 bytes. Experimental results show that the proposed algorithm reduces execution time and makespan by between 7% and 75%, and improves the performance of load-balancing scheduling.
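A hypothetical sketch of the core idea (the training data and technique labels are invented for illustration): learn from past runs which scheduling technique yielded the lowest makespan for a given workload shape, then let a simple classifier pick the technique for new workloads.

```python
from sklearn.neighbors import KNeighborsClassifier

# Features: (number of VMs, number of tasks, mean task size in bytes);
# label: the technique that achieved the best makespan on that workload.
past_runs = [((2, 30, 300), "FCFS"), ((10, 200, 900), "PSO"),
             ((50, 500, 2700), "ACO"), ((25, 300, 1500), "PSO"),
             ((5, 60, 450), "FCFS"), ((40, 400, 2400), "ACO")]
X = [features for features, _ in past_runs]
y = [label for _, label in past_runs]

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([(30, 350, 2000)]))   # technique chosen for a new workload
```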
Ideally, minimizing the flow completion time (FCT) requires millions of priorities supported by the underlying network so that each flow has its unique priority. However, in production datacenters, the switch priority queues available for flow scheduling are very limited (merely 2 or 3). This practical constraint seriously degrades the performance of previous approaches. In this paper, we introduce Explicit Priority Notification (EPN), a novel scheduling mechanism that emulates fine-grained priorities (i.e., desired priorities or DP) using only two switch priority queues. EPN can support various flow scheduling disciplines with or without flow size information. We have implemented EPN on commodity switches and evaluated its performance with both testbed experiments and extensive simulations. Our results show that, with flow size information, EPN achieves an FCT comparable to pFabric, which requires clean-slate switch hardware. EPN also outperforms TCP by up to 60.5% when it bins traffic into two priority queues according to flow size. In the information-agnostic setting, EPN outperforms PIAS with two priority queues by up to 37.7%. To the best of our knowledge, EPN is the first system that provides millions of priorities for flow scheduling with commodity switches.
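To make the two-queue idea concrete, here is a sketch of flow-size binning in the PIAS style mentioned above (the demotion threshold is hypothetical, and this shows only the binning logic, not EPN's explicit priority notification mechanism): flows start in the high-priority queue and are demoted once they have sent more than a threshold.

```python
DEMOTION_THRESHOLD = 100 * 1024            # bytes sent before demotion (hypothetical)

class FlowState:
    def __init__(self):
        self.bytes_sent = 0
    def queue_for_next_packet(self, pkt_len):
        q = 0 if self.bytes_sent < DEMOTION_THRESHOLD else 1   # 0 = high priority
        self.bytes_sent += pkt_len
        return q

f = FlowState()
for pkt in [1500] * 80:                    # a 120 KB flow crosses the threshold
    q = f.queue_for_next_packet(pkt)
print("final queue:", q)                   # 1: long flow demoted to low priority
```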
With the development of Software Defined Networking, its software programmability and openness bring new ideas to network security, and many Software Defined Security (SDS) architectures have emerged. Software Defined Security decouples the security control plane from the security data plane. In SDS architectures, underlying security devices are abstracted as security resources in a resource pool, and intelligent, automated security business management and orchestration can be realized through software programming in the security control plane. However, network management has become extremely complicated due to expanding network scale, diverse network devices, lack of abstraction, and, especially, network heterogeneity. New, open security devices are therefore needed in the SDS architecture for unified management, so that they can be conveniently abstracted as security resources in the resource pool. This paper first analyses why open security devices are needed in the SDS architecture and proposes a method for opening security devices. Since this new architecture requires a new security scheduling mechanism, the paper then proposes a security resource scheduling algorithm for managing and scheduling the security resources in the resource pool according to users' security demands. The algorithm aims to allocate each security protection task to a suitable security resource in the pool, thereby improving security protection efficiency. In the algorithm, we use a BP neural network to predict the execution time of security tasks in order to improve performance. Simulation results show that the algorithm performs well. Finally, a usage scenario illustrates the role of security resource scheduling in a software defined security architecture.
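A sketch of the prediction-then-assignment idea with a BP (backpropagation) neural network, on synthetic data (the feature choice, resource names, and data are hypothetical): predict a task's execution time on each candidate resource and assign it to the fastest one.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Features: (task size, resource speed); target: execution time = size/speed + noise.
X = rng.uniform([1, 1], [100, 10], size=(500, 2))
y = X[:, 0] / X[:, 1] + rng.normal(0, 0.1, 500)
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000).fit(X, y)

task_size, resources = 40.0, {"fw_pool_a": 2.0, "fw_pool_b": 8.0}
pred = {r: model.predict([[task_size, speed]])[0] for r, speed in resources.items()}
print("assign to:", min(pred, key=pred.get), pred)   # fastest predicted resource
```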
This paper suggests a robust method for a network-layer Moving Target Defense (MTD) using symmetric packet scheduling rules. The MTD is implemented and tested on a Supervisory Control and Data Acquisition (SCADA) network testbed. The method is shown to be efficient while providing security benefits against the issues raised by the static nature of SCADA networks. The proposed method is an automated tool that can provide defense in depth when used in conjunction with other MTDs and traditional security devices.
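The general symmetric-scheduling idea can be sketched as follows (the paper's exact rules are not reproduced here; the key, slot length, and port derivation are hypothetical): both SCADA endpoints derive the same pseudo-random port for each time slot from a shared secret, so the traffic "moves" without any coordination messages.

```python
import hashlib, time

SHARED_KEY = b"pre-shared-secret"          # hypothetical pre-shared key
SLOT_SECONDS = 30                          # hypothetical hopping interval

def port_for_slot(key: bytes, slot: int) -> int:
    digest = hashlib.sha256(key + slot.to_bytes(8, "big")).digest()
    return 1024 + int.from_bytes(digest[:2], "big") % 64000   # 1024..65023

slot = int(time.time()) // SLOT_SECONDS    # both ends compute the same slot
print("current port:", port_for_slot(SHARED_KEY, slot))
```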
Wireless Sensor Networks (WSNs) are becoming more and more popular for supporting a wide range of Internet of Things (IoT) applications. Time-Slotted Channel Hopping (TSCH) is a technique for enabling ultra-reliable and ultra-low-power wireless multi-hop networks. TSCH consists of a channel-hopping scheme that sends link-layer frames in different time slots and on different frequencies in order to efficiently combat external interference and multi-path fading. The keystone of TSCH is the scheduling algorithm, which determines, for every node, at which opportunity (a combination of time slot and channel) it is allowed to send. However, current scheduling algorithms are not suited for dense deployments and have important scalability limitations. In this paper, we investigate TSCH's scheduling performance in dense deployments and show how the scheduling can be improved for such environments. We performed an extensive analysis of the scalability of different scheduling approaches, showing how performance drops as the number of nodes increases. Moreover, we propose a novel Decentralized Broadcast-based Scheduling algorithm called DeBraS, based on selective broadcasting to inform nodes about each other's schedules. Through extensive simulations, we show that DeBraS is far more scalable than centralized solutions and that it outperforms the current decentralized 6TiSCH algorithms by up to 88.5% in throughput for large network sizes.
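The core DeBraS idea can be sketched as follows (slotframe size and data structures are hypothetical): a node collects the (timeslot, channel) cells its neighbors advertise in broadcasts and picks a free cell, reducing collisions without a central scheduler.

```python
import random

SLOTS, CHANNELS = 101, 16                  # hypothetical slotframe dimensions

def pick_cell(neighbor_schedules):
    taken = set().union(*neighbor_schedules) if neighbor_schedules else set()
    free = [(t, c) for t in range(SLOTS) for c in range(CHANNELS)
            if (t, c) not in taken]
    return random.choice(free)             # any cell not claimed by a neighbor

neighbors = [{(3, 5), (40, 2)}, {(3, 5), (77, 9)}]   # cells heard in broadcasts
print("chosen cell:", pick_cell(neighbors))
```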
Within a few years, Cloud computing has emerged as the most promising IT business model. Thanks to its various technical and financial advantages, Cloud computing continues to attract new users from scientific and industrial sectors every day. To satisfy the various users' requirements, Cloud providers must maximize the performance of their IT resources to ensure the best service at the lowest cost. Performance optimization in the Cloud can be pursued at different levels and aspects. In the present paper, we propose to introduce a fuzzy-logic process into the scheduling strategy for the public Cloud in order to improve response time, processing time, and total cost. Indeed, fuzzy logic has proven its ability to solve optimization problems in several fields, such as data mining, image processing, and networking.
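A tiny fuzzy-inference sketch (the membership functions, rules, and weights are hypothetical, not the paper's): map a VM's load and cost to a crisp scheduling score via Mamdani-style rule firing and weighted-average defuzzification.

```python
def tri(x, a, b, c):                       # triangular membership function
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def schedule_score(load, cost):            # load, cost normalized to [0, 1]
    low_load, high_load = tri(load, -0.5, 0.0, 0.6), tri(load, 0.4, 1.0, 1.5)
    low_cost, high_cost = tri(cost, -0.5, 0.0, 0.6), tri(cost, 0.4, 1.0, 1.5)
    # Rules: (firing strength, output level); higher score = more preferred VM.
    rules = [(min(low_load, low_cost), 1.0),
             (max(high_load, high_cost), 0.2)]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den                       # weighted-average defuzzification

print(schedule_score(load=0.2, cost=0.3))  # lightly loaded, cheap VM scores high
```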
Metaheuristics comprise a wide range of optimization algorithms. Some of them are very well known and of proven value, as they successfully solve many instances of combinatorial NP-hard problems. Examples of metaheuristics are Genetic Algorithms (GA), Simulated Annealing (SA), and Ant Colony Optimization (ACO). Our company is devoted to making steel and is the biggest steelmaker in the world. Combining several industrial processes to produce 84.6 million tonnes (official public data for 2015) involves a huge effort. Metaheuristics are applied to different scenarios inside our operations to optimize different areas: logistics, production scheduling, and resource assignment, saving costs and helping to reach the operational excellence that is critical for our survival in a globalized world. Rather than obtaining the globally optimal solution, the main interest of an industrial company is to have "good solutions", close to the optimum, within a very short response time, and this latter requirement is the main difference with respect to the traditional research approach of the academic world. Production is continuous and cannot be stopped or wait for calculations; moreover, reducing production speed decreases productivity and makes the facilities less competitive. Disruptions are common events, making rescheduling imperative while foremen wait for new instructions to operate. This position paper explains the problem of response time in our industrial environment, the solutions we have investigated, and some results already achieved.
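The response-time requirement described above can be made concrete with a sketch (a toy permutation-scheduling objective, not the company's actual models): a simulated annealing loop that always returns its best-so-far answer within a hard wall-clock budget.

```python
import math, random, time

def total_completion(perm, durations):     # toy objective: sum of completion times
    t = total = 0.0
    for j in perm:
        t += durations[j]
        total += t
    return total

def anneal(durations, budget_s=0.5, temp=10.0, cooling=0.999):
    cur = list(range(len(durations)))
    random.shuffle(cur)
    cur_cost = total_completion(cur, durations)
    best_cost = cur_cost
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:     # hard response-time cutoff
        i, j = random.sample(range(len(cur)), 2)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]
        cost = total_completion(cand, durations)
        if cost < cur_cost or random.random() < math.exp((cur_cost - cost) / temp):
            cur, cur_cost = cand, cost
            best_cost = min(best_cost, cost)
        temp *= cooling
    return best_cost                       # best solution found within the budget

jobs = [random.uniform(1, 9) for _ in range(20)]
print("best cost within budget:", anneal(jobs))
```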
In order to be resilient to attacks, a cyber-physical system (CPS) must be able to detect attacks before they can cause significant damage. To achieve this, intrusion detection systems (IDS) may be deployed, which can detect attacks and alert human operators, who can then intervene. However, the resource-constrained nature of many CPS poses a challenge, since reliable IDS can be computationally expensive. Consequently, computational nodes may not be able to perform intrusion detection continuously, which means that we have to devise a schedule for performing intrusion detection. While a uniformly random schedule may be optimal in a purely cyber system, an optimal schedule for protecting a CPS must also take into account the physical properties of the system, since the set of adversarial actions and their consequences depends on the physical system. Here, in the context of water distribution networks, we study IDS scheduling problems in two settings, under constraints on the available battery supplies. In the first problem, the objective is to design, for a given duration of time $T$, scheduling schemes for the IDS so that the probability of detecting an attack within that duration is maximized. We propose efficient heuristic algorithms for this general problem and evaluate them on various networks. In the second problem, the objective is to design scheduling schemes for the IDS so that the overall lifetime of the network is maximized while ensuring that an intruder attack is always detected. Various strategies for this problem are presented and evaluated for various networks.
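A greedy sketch in the spirit of the second problem (the network, batteries, and zone coverage below are hypothetical): in every time slot, run the IDS on the sensor with the most remaining battery among those covering each zone, so detection stays continuous while the lifetime is stretched.

```python
batteries = {"s1": 10, "s2": 7, "s3": 4}        # remaining IDS activations per sensor
zones = {"zoneA": ["s1", "s2"], "zoneB": ["s2", "s3"]}

lifetime = 0
while True:
    picks = {}
    for zone, sensors in zones.items():
        alive = [s for s in sensors if batteries[s] > 0]
        if not alive:                           # a zone can no longer be watched
            picks = None
            break
        picks[zone] = max(alive, key=batteries.get)   # spare the weakest sensor
    if picks is None:
        break
    for s in set(picks.values()):               # one battery unit per active sensor
        batteries[s] -= 1
    lifetime += 1
print("network lifetime:", lifetime, "slots")
```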
In this paper, we propose SAFE (Security-Aware FlexRay scheduling Engine) to provide a problem definition and a design framework for FlexRay static segment scheduling that addresses the new challenges of security. From a high-level specification of the application, the architecture and communication middleware are synthesized to satisfy security requirements, in addition to extensibility, costs, and end-to-end latencies. The proposed design process is applied to two industrial case studies consisting of a set of active safety functions and an X-by-wire system, respectively.
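One concrete scheduling concern in this setting can be sketched as follows (the slot size, tag length, and signal set are hypothetical, not SAFE's actual parameters): when signals are secured, their message-authentication tags consume static-slot payload, so the schedule synthesis must check that each secured frame still fits its slot.

```python
SLOT_PAYLOAD_BYTES = 16                    # hypothetical static-slot payload size
MAC_TAG_BYTES = 4                          # truncated MAC appended to secured frames

signals = [("steer_angle", 6, True), ("wheel_speed", 8, False),
           ("brake_cmd", 10, True)]        # (name, bytes, needs_authentication)

slots = []
for name, size, secured in signals:
    total = size + (MAC_TAG_BYTES if secured else 0)
    assert total <= SLOT_PAYLOAD_BYTES, f"{name} does not fit a static slot"
    slots.append((name, total))
print(slots)                               # one signal per static slot, with MAC cost
```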