Biblio

Filters: Keyword is scheduling
2023-04-28
Li, Zhiyu, Zhou, Xiang, Weng, Wenbin.  2022.  Operator Partitioning and Parallel Scheduling Optimization for Deep Learning Compiler. 2022 IEEE 5th International Conference on Automation, Electronics and Electrical Engineering (AUTEEE). :205–211.
TVM (Tensor Virtual Machine) is a deep learning compiler that converts machine learning models into TVM IR (intermediate representation) and optimises the generation of high-performance machine code for various hardware platforms. While the traditional approach is to parallelise the loop transformations of operators, in this paper we partition the implementations of operators in TVM and run them under parallel scheduling to derive faster running times for the operators. An optimisation algorithm for partitioning and parallel scheduling is designed for TVM, in which operators such as two-dimensional convolutions are partitioned into multiple smaller implementations, several partitioned operators are run under parallel scheduling, and the best partitioning and scheduling decisions are derived by performance estimation. To evaluate the effectiveness of the algorithm, the two-dimensional convolution, average pooling, maximum pooling, and ReLU activation operators were tested with multiple input sizes on a CPU platform; the experiments show that the performance of these operators is improved and that they run faster.
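To make the partition-and-estimate loop concrete, here is a minimal Python sketch; it is not TVM's algorithm or the paper's method: the operator is a trivially partitionable ReLU, the scheduler is a plain thread pool, the partition counts tried are arbitrary assumptions, and "performance estimation" is a direct timing run.

```python
# Toy operator partitioning with performance-estimated parallel scheduling.
# Illustration only: TVM's IR, cost model, and schedule search are not used.
import time
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def relu_chunk(chunk):
    # NumPy ufuncs release the GIL, so chunks genuinely run concurrently.
    return np.maximum(chunk, 0.0)

def partitioned_relu(x, parts, pool):
    chunks = np.array_split(x, parts)            # partition the operator
    return np.concatenate(list(pool.map(relu_chunk, chunks)))

def best_partitioning(x, candidates=(1, 2, 4, 8)):
    """Pick the partition count with the lowest measured runtime."""
    timings = {}
    with ThreadPoolExecutor() as pool:
        for parts in candidates:
            t0 = time.perf_counter()
            partitioned_relu(x, parts, pool)     # performance-estimation run
            timings[parts] = time.perf_counter() - t0
    return min(timings, key=timings.get), timings

x = np.random.randn(8, 1 << 20).astype(np.float32)
best, timings = best_partitioning(x)
print("measured:", timings, "-> chosen partition count:", best)
```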
2023-02-24
Coleman, Jared, Kiamari, Mehrdad, Clark, Lillian, D'Souza, Daniel, Krishnamachari, Bhaskar.  2022.  Graph Convolutional Network-based Scheduler for Distributing Computation in the Internet of Robotic Things. MILCOM 2022 - 2022 IEEE Military Communications Conference (MILCOM). :1070–1075.
Existing solutions for scheduling arbitrarily complex distributed applications on networks of computational nodes are insufficient for scenarios where the network topology is changing rapidly. New Internet of Things (IoT) domains like the Internet of Robotic Things (IoRT) and the Internet of Battlefield Things (IoBT) demand solutions that are robust and efficient in environments that experience constant and/or rapid change. In this paper, we demonstrate how recent advancements in machine learning (in particular, in graph convolutional neural networks) can be leveraged to solve the task scheduling problem with decent performance and in much less time than traditional algorithms.
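For readers unfamiliar with the building block, the sketch below is one graph-convolution layer with the standard symmetric normalization, applied to a toy task graph; the features, sizes, and weights are placeholders, and the paper's actual scheduler architecture and training procedure are not reproduced.

```python
# One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

A = np.array([[0, 1, 1, 0],                      # toy 4-task dependency graph
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = np.random.randn(4, 3)                        # 3 features per task
W = np.random.randn(3, 8)                        # 8 hidden units
print(gcn_layer(A, H, W).shape)                  # (4, 8): one embedding per task
```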
2023-02-17
Liu, Xuanyu, Cheng, Guozhen, Wang, Yawen, Zhang, Shuai.  2022.  Overview of Scientific Workflow Security Scheduling in Clouds. 2021 International Conference on Advanced Computing and Endogenous Security. :1–6.
With the development of cloud computing technology, more and more researchers choose to deliver scientific workflow tasks to public cloud platforms for execution. This mode effectively reduces research costs but also brings serious security risks. In response, this article summarizes the security issues currently facing cloud scientific workflows and analyzes the importance of studying them. It then analyzes, summarizes and compares current cloud scientific workflow security methods from three perspectives: system architecture, security model, and security strategy. Finally, it offers an outlook on future development directions.
2022-10-20
Torquato, Matheus, Maciel, Paulo, Vieira, Marco.  2020.  Security and Availability Modeling of VM Migration as Moving Target Defense. 2020 IEEE 25th Pacific Rim International Symposium on Dependable Computing (PRDC). :50–59.
Moving Target Defense (MTD) is a defensive mechanism based on dynamic system reconfiguration to prevent or thwart cyberattacks. In recent years, considerable progress has been made regarding MTD approaches for virtualized environments, and Virtual Machine (VM) migration is the core of most of these approaches. However, VM migration produces system downtime, meaning that each MTD reconfiguration affects system availability. Therefore, a method for a combined evaluation of availability and security is of utmost importance for VM migration-based MTD design. In this paper, we propose a Stochastic Reward Net (SRN) for the probability of attack success and availability evaluation of an MTD based on VM migration scheduling. We study the MTD system under different conditions regarding 1) VM migration scheduling, 2) VM migration failure probability, and 3) attack success rate. Our results highlight the tradeoff between availability and security when applying MTD based on VM migration. The approach and results may provide inputs for designing and evaluating MTD policies based on VM migration.
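The availability/security tradeoff the authors quantify with an SRN can be previewed with a crude Monte-Carlo sketch; the exponential attack model, rates, and downtimes below are invented assumptions, not values from the paper.

```python
# Monte-Carlo toy of VM-migration MTD: frequent migration interrupts attacks
# but each migration costs downtime. Rates/durations are assumptions.
import random

def simulate(interval, downtime=5.0, attack_rate=1 / 600.0,
             horizon=100_000.0, seed=1):
    rng = random.Random(seed)
    t, down, attacks, wins = 0.0, 0.0, 0, 0
    while t < horizon:
        attacks += 1
        # Attack succeeds if its required dwell time fits inside one cycle;
        # a migration resets any in-progress attack (the MTD effect).
        if rng.expovariate(attack_rate) < interval:
            wins += 1
        t += interval + downtime
        down += downtime
    return 1.0 - down / t, wins / attacks

for interval in (60, 300, 1800):
    a, p = simulate(interval)
    print(f"migrate every {interval:>4}s: availability={a:.4f} "
          f"attack-success={p:.3f}")
```

Shorter intervals depress the attack success probability but also the availability, which is exactly the tradeoff the SRN captures analytically.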
2022-08-10
Zhan, Zhi-Hui, Wu, Sheng-Hao, Zhang, Jun.  2021.  A New Evolutionary Computation Framework for Privacy-Preserving Optimization. 2021 13th International Conference on Advanced Computational Intelligence (ICACI). :220–226.
Evolutionary computation (EC) is a kind of advanced computational intelligence (CI) and artificial intelligence (AI) algorithm. EC algorithms have been widely studied for solving optimization and scheduling problems in various real-world applications, and they act as one of the Big Three in CI and AI, together with fuzzy systems and neural networks. Even though EC has developed rapidly in recent years, there is an assumption that the algorithm designer can obtain the objective function of the optimization problem, so that they can calculate the fitness values of the individuals and follow the “survival of the fittest” principle of natural selection. However, in real-world application scenarios there is a kind of problem in which the objective function is private, so that the algorithm designer cannot obtain the fitness values of the individuals directly. This is the privacy-preserving optimization problem (PPOP), where the assumption of an available objective function does not hold. How to solve the PPOP is a new emerging frontier that has seldom been studied and is a challenging research topic in the EC community. This paper proposes a rank-based cryptographic function (RCF) to protect the fitness value information. Specifically, the RCF is adopted by the algorithm user to encrypt the fitness values of all the individuals as ranks, so that the algorithm designer does not know the exact fitness information but only the rank information. Nevertheless, the RCF protects the privacy of the algorithm user while still providing sufficient information to the algorithm designer to drive the EC algorithm. We have applied the RCF privacy-preserving method to two typical EC algorithms, particle swarm optimization (PSO) and differential evolution (DE). Experimental results show that the RCF-based privacy-preserving PSO and DE can solve the PPOP without performance loss.
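The core interaction is easy to mimic: the user evaluates a private objective and releases only a ranking, which suffices for selection. The sketch below uses a simple (mu, lambda) evolution strategy and an assumed sphere objective; it is not the paper's RCF, PSO, or DE.

```python
# Rank-only fitness feedback: the designer never sees fitness values.
import random

def user_rank_oracle(population):
    """User side: evaluates the PRIVATE objective, releases only ranks."""
    def private_objective(x):                    # hidden from the designer
        return sum(v * v for v in x)             # assumed sphere function
    order = sorted(range(len(population)),
                   key=lambda i: private_objective(population[i]))
    ranks = [0] * len(population)
    for rank, i in enumerate(order):
        ranks[i] = rank                          # 0 = best individual
    return ranks

def evolve(dim=5, mu=5, lam=20, generations=100, seed=7):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(lam)]
    for _ in range(generations):
        ranks = user_rank_oracle(pop)            # designer side: ranks only
        parents = [pop[i]
                   for i in sorted(range(lam), key=ranks.__getitem__)[:mu]]
        pop = [[v + rng.gauss(0, 0.2) for v in rng.choice(parents)]
               for _ in range(lam)]
    return pop[user_rank_oracle(pop).index(0)]

print(evolve())  # drifts toward the all-zeros optimum without exposing fitness
```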
2022-05-24
Huang, Yudong, Wang, Shuo, Feng, Tao, Wang, Jiasen, Huang, Tao, Huo, Ru, Liu, Yunjie.  2021.  Towards Network-Wide Scheduling for Cyclic Traffic in IP-based Deterministic Networks. 2021 4th International Conference on Hot Information-Centric Networking (HotICN). :117–122.
The emerging time-sensitive applications, such as industrial automation, smart grids, and telesurgery, pose strong demands for enabling large-scale IP-based deterministic networks. The IETF DetNet working group has recently proposed a Cycle Specified Queuing and Forwarding (CSQF) solution. However, CSQF only specifies an underlying device-level primitive, while how to achieve network-wide flow scheduling remains undefined. Previous scheduling mechanisms are mostly oriented to local area networks, making them inapplicable to cyclic traffic in wide area networks. In this paper, we design the Cycle Tags Planning (CTP) mechanism, a first mathematical model enabling network-wide scheduling of cyclic traffic in large-scale deterministic networks. Then, a novel scheduling algorithm named flow offset and cycle shift (FO-CS) is designed to compute the flows' cycle tags. The FO-CS algorithm is evaluated on long-distance network topologies in remote industrial control scenarios. Compared with a naive algorithm that does not use FO-CS, simulation results demonstrate that FO-CS increases the number of scheduled flows by 31.2% within a few seconds.
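A drastically simplified flavour of cycle-tag planning: each periodic flow receives an offset so that no cycle's load exceeds link capacity. The greedy search, capacity model, and flow set are assumptions for illustration; FO-CS itself is not reproduced.

```python
# Greedy offset assignment for cyclic flows over one hypercycle.
def assign_offsets(flows, hypercycle, capacity):
    """flows: list of (period_in_cycles, size). Returns offsets or None."""
    load = [0] * hypercycle
    offsets = []
    for period, size in flows:
        best = None
        for off in range(period):                # try every legal offset
            peak = max(load[t] + size
                       for t in range(off, hypercycle, period))
            if peak <= capacity and (best is None or peak < best[1]):
                best = (off, peak)
        if best is None:
            return None                          # flow cannot be scheduled
        for t in range(best[0], hypercycle, period):
            load[t] += size
        offsets.append(best[0])
    return offsets

flows = [(2, 3), (4, 2), (4, 2), (8, 5)]         # (period, per-cycle size)
print(assign_offsets(flows, hypercycle=8, capacity=8))   # e.g. [0, 1, 3, 1]
```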
2022-05-06
Saravanan, M., Pratap Sircar, Rana.  2021.  Quantum Evolutionary Algorithm for Scheduling Resources in Virtualized 5G RAN Environment. 2021 IEEE 4th 5G World Forum (5GWF). :111–116.
Radio is the most important part of any wireless network. The Radio Access Network (RAN) has been virtualized and disaggregated into different functions whose location is best defined by the requirements and economics of the use case. This Virtualized RAN (vRAN) architecture separates network functions from the underlying hardware, so 5G can leverage virtualization of the RAN to implement these functions. The easy expandability and manageability of the vRAN support expanding network capacity and deploying new features and algorithms for streamlining resource usage. In this paper, we address the problem of scheduling a 5G vRAN with mid-haul network capacity constraints as a combinatorial optimization problem. We transformed it into a Quadratic Unconstrained Binary Optimization (QUBO) problem, solved it using a newly proposed quantum-based algorithm, and compared our implementation with existing classical algorithms. This work demonstrates the advantage of quantum computers in solving a particular optimization problem in the telecommunication domain and paves the way for solving critical real-world problems faster and better using quantum computers.
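To show what "transforming scheduling into QUBO" means mechanically, the toy below encodes a 3-user/2-cell assignment with one-hot and congestion penalty terms and solves it by exhaustive search, which stands in for a quantum sampler. The utilities and penalty weights are invented; the paper's actual vRAN model is richer.

```python
# Tiny QUBO: minimize x^T Q x over binary x (user-to-cell assignment).
import itertools
import numpy as np

n_users, n_cells = 3, 2
gain = np.array([[5, 3], [4, 4], [2, 6]])        # assumed assignment utility
P, C = 10.0, 2.0                                 # one-hot / congestion weights
n = n_users * n_cells
idx = lambda u, c: u * n_cells + c
Q = np.zeros((n, n))

for u in range(n_users):
    for c in range(n_cells):
        # Reward for serving, plus the linear part of P*(sum_c x - 1)^2.
        Q[idx(u, c), idx(u, c)] += -gain[u, c] - P
    for c1, c2 in itertools.combinations(range(n_cells), 2):
        Q[idx(u, c1), idx(u, c2)] += 2 * P       # quadratic one-hot part
for c in range(n_cells):
    for u1, u2 in itertools.combinations(range(n_users), 2):
        Q[idx(u1, c), idx(u2, c)] += C           # cell congestion penalty

# Brute force stands in for a quantum annealer / QAOA sampler.
energy, x = min((np.array(b) @ Q @ np.array(b), b)
                for b in itertools.product((0, 1), repeat=n))
print("energy:", energy, "bits:", x)
```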
2022-04-20
Deschamps, Henrick, Cappello, Gerlando, Cardoso, Janette, Siron, Pierre.  2017.  Toward a Formalism to Study the Scheduling of Cyber-Physical Systems Simulations. 2017 IEEE/ACM 21st International Symposium on Distributed Simulation and Real Time Applications (DS-RT). :1–8.
This paper presents ongoing work on a formalism for Cyber-Physical Systems (CPS) simulations. These systems are distributed real-time systems, and their simulations may or may not be distributed. In this paper, we propose a model to describe the modular components forming a simulation of a CPS. The main goal is to introduce a model of a generic distributed simulation architecture on which we are able to execute a logical simulation architecture. This simulation architecture allows the expression of structural and behavioural constraints on the simulation, abstracting its execution. We propose two implementations of the execution architecture based on generic distributed simulation architectures:
- The High Level Architecture (HLA), an IEEE standard for distributed simulation, and one of its open-source RunTime Infrastructure (RTI) implementations: CERTI.
- The Distributed Simulation Scheduler (DSS), an Airbus framework for scheduling predefined models.
Finally, we present initial results obtained by applying our formalism to the open-source ROSACE case study.
2021-09-16
Konjaang, J. Kok, Xu, Lina.  2020.  Cost Optimised Heuristic Algorithm (COHA) for Scientific Workflow Scheduling in IaaS Cloud Environment. 2020 IEEE 6th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS). :162–168.
Cloud computing, a multipurpose, high-performance internet-based computing paradigm, can model and transform a large range of application requirements into a set of workflow tasks. It allows users to represent their computational needs conveniently for data retrieval, reformatting, and analysis. However, workflow applications are big data applications and often take long hours to finish executing due to their nature and data size. In this paper, we study cost-optimised scheduling algorithms in the cloud and propose a novel task-splitting algorithm named Cost Optimised Heuristic Algorithm (COHA) for the cloud scheduler to optimise execution cost. In this algorithm, large tasks are split into sub-tasks to reduce their execution time. The design purpose is to enable all tasks to adequately meet their deadlines. We have carefully tested the performance of COHA with a list of workflow inputs. The simulation results convincingly demonstrate that COHA can effectively perform VM allocation and deployment and handle randomly arriving tasks. It can efficiently reduce execution costs while allowing all tasks to finish before their deadlines. Overall, the improvements in our algorithm reduce execution cost by 32.5% for Sipht, 3.9% for Montage, and 1.2% for CyberShake workflows when compared to the state of the art.
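The splitting idea can be shown in a few lines: if a task cannot meet its deadline on a VM, divide it into enough parallel subtasks that each part finishes in time, then keep the cheapest feasible plan. VM speeds, prices, and the linear cost model are assumptions, not COHA's actual model.

```python
# Toy deadline-driven task splitting with cheapest-feasible VM selection.
import math

VM_TYPES = [{"name": "small", "speed": 1.0, "cost_per_h": 0.05},
            {"name": "large", "speed": 4.0, "cost_per_h": 0.30}]

def plan(task_len, deadline):
    """Return (cost, vm name, subtask count) of the cheapest feasible plan."""
    best = None
    for vm in VM_TYPES:
        # Fewest subtasks such that each parallel part meets the deadline.
        parts = max(1, math.ceil(task_len / (vm["speed"] * deadline)))
        runtime = task_len / (vm["speed"] * parts)
        cost = parts * runtime / 3600 * vm["cost_per_h"]
        if runtime <= deadline and (best is None or cost < best[0]):
            best = (round(cost, 4), vm["name"], parts)
    return best

print(plan(task_len=10_000, deadline=900))       # e.g. (0.1389, 'small', 12)
```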
2021-05-05
Samriya, Jitendra Kumar, Kumar, Narander.  2020.  Fuzzy Ant Bee Colony For Security And Resource Optimization In Cloud Computing. 2020 5th International Conference on Computing, Communication and Security (ICCCS). :1–5.

Cloud computing (CC) systems prevail as the widespread computational paradigm for offering immensely scalable and elastic services. Computing resources in a cloud environment should be scheduled so that providers utilize resources effectively and users obtain low-cost applications. The most prominent need in job scheduling is to ensure Quality of Service (QoS) for the user. Because scheduling takes place within the boundary of a third party, assuring its security is a significant requirement. The main objective of our work is to offer QoS, i.e. cost, makespan, and minimized task migration, with security enforcement; moreover, the proposed algorithm guarantees that admitted requests are executed without violating the service level agreement (SLA). These objectives are attained by the proposed Fuzzy Ant Bee Colony algorithm. The experimental outcome confirms that the secured job scheduling objective with assured QoS is attained by the proposed algorithm.

2021-04-08
Lin, X., Zhang, Z., Chen, M., Sun, Y., Li, Y., Liu, M., Wang, Y., Liu, M..  2020.  GDGCA: A Gene Driven Cache Scheduling Algorithm in Information-Centric Network. 2020 IEEE 3rd International Conference on Information Systems and Computer Aided Education (ICISCAE). :167–172.
The disadvantages and poor extensibility of the traditional network call for more novel thinking about the future network architecture. The Information-Centric Network (ICN) is an information-centered, self-caching network deeply rooted in the 5G era, whose concept is user-centered and content-centered. Although ICN enables cache replacement of content, an information distribution scheduling algorithm is still needed to allocate resources properly due to limited cache capacity. This paper starts with data popularity, information epilepsy and other data-related attributes in the ICN environment, analyzes the factors affecting the cache, and proposes the concept and calculation method of the Gene value. Since ICN is still in a theoretical state, this paper describes an ICN scenario close to reality and proposes a greedy caching algorithm named GDGCA (Gene Driven Greedy Caching Algorithm). GDGCA tries to design an optimal simulation model based on the ideas of throughput balance and satisfaction degree (SSD), and is compared with regular distributed scheduling algorithms from related research fields on QoE indexes and satisfaction degree under different Poisson data volumes and cycles. The final simulation results prove that GDGCA performs better in cache scheduling at the ICN edge router, especially with the aid of the Information Gene value.
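Although the paper's Gene value formula is not reproduced here, the greedy mechanism it drives looks like the following sketch: score each content item, then fill the edge cache in value-density order. The weights and catalog are invented.

```python
# Greedy cache admission driven by an assumed per-content "gene" score.
def gene_value(popularity, freshness, size, w=(0.6, 0.3, 0.1)):
    return w[0] * popularity + w[1] * freshness - w[2] * size

def greedy_cache(catalog, capacity):
    """catalog: list of (name, popularity, freshness, size)."""
    ranked = sorted(catalog,
                    key=lambda c: gene_value(c[1], c[2], c[3]) / c[3],
                    reverse=True)                # value density, knapsack-style
    cached, used = [], 0
    for name, _, _, size in ranked:
        if used + size <= capacity:
            cached.append(name)
            used += size
    return cached

catalog = [("a", 0.9, 0.8, 2), ("b", 0.7, 0.9, 1),
           ("c", 0.4, 0.2, 3), ("d", 0.8, 0.1, 2)]
print(greedy_cache(catalog, capacity=4))         # e.g. ['b', 'a']
```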
2021-03-29
Salim, M. N., Hutahaean, I. W., Susanti, B. H..  2020.  Fixed Point Attack on Lin et al.’s Modified Hash Function Scheme based on SMALLPRESENT-[8] Algorithm. 2020 International Conference on ICT for Smart Society (ICISS). CFP2013V-ART:1–7.
Lin et al.'s scheme is a block-cipher-based hash function Message Authentication Code (MAC) scheme composed of a compression function. Fixed point messages have been found for the SMALLPRESENT-[s] algorithm. The vulnerability of a block cipher algorithm to fixed point attacks can affect the security of block-cipher-based hash function schemes. This paper applies a fixed point attack against Lin et al.'s modified scheme based on the SMALLPRESENT-[8] algorithm. The fixed point attack was carried out using fixed point messages from the SMALLPRESENT-[8] algorithm, used as Initial Values (IVs) on the scheme's branches. The attack results show that eight fixed point messages are successfully discovered on the B1 branch. The fixed point messages discovered on the B1 and B2 branches form 18 fixed point messages on Lin et al.'s modified scheme with different IVs and keys. The discovery of fixed point messages shows that Lin et al.'s modified scheme is vulnerable to fixed point attack.
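The mechanics of a fixed-point attack are easy to demonstrate on a toy Davies-Meyer compression function h' = E_m(h) XOR h: because E_m is a permutation, every message m has exactly one chaining value h with E_m(h) = 0, and that h is a fixed point. The 8-bit cipher below is an invented stand-in, not SMALLPRESENT-[8] or Lin et al.'s scheme.

```python
# Fixed points of a toy Davies-Meyer compression function.
def toy_encrypt(key, block, rounds=4):
    """A tiny keyed permutation on 8-bit values (illustrative only)."""
    x = block
    for r in range(rounds):
        x ^= (key + r) & 0xFF                    # round-key addition
        x = ((x << 3) | (x >> 5)) & 0xFF         # rotation for diffusion
        x = (x * 5 + 113) & 0xFF                 # affine mix (odd mult: bijective)
    return x

def compress(message, h):
    return toy_encrypt(message, h) ^ h           # Davies-Meyer construction

def fixed_point_iv(message):
    """The unique h with E_m(h) = 0, hence compress(m, h) == h."""
    return next(h for h in range(256) if toy_encrypt(message, h) == 0)

for m in (0x00, 0x3C, 0xF1):
    h = fixed_point_iv(m)
    assert compress(m, h) == h                   # verify the fixed point
    print(f"message {m:#04x}: fixed-point IV {h:#04x}")
```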
2020-12-21
Portaluri, G., Giordano, S..  2020.  Gambling on fairness: a fair scheduler for IIoT communications based on the shell game. 2020 IEEE 25th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD). :1–6.
The Industrial Internet of Things (IIoT) paradigm is nowadays the cornerstone of industrial automation, since it has introduced new features and services for different environments and has granted the connection of industrial machine sensors and actuators both to local processing and to the Internet. One of the most advanced network protocol stacks developed for IoT-IIoT networks is 6LoWPAN, which supports IPv6 on top of Low-power Wireless Personal Area Networks (LoWPANs). 6LoWPAN is usually coupled with the IEEE 802.15.4 low-bitrate, low-energy MAC protocol, which relies on the time-slotted channel hopping (TSCH) technique. In TSCH networks, a coordinator node synchronizes all end-devices and specifies whether (and when) they can transmit, in order to improve their energy efficiency. In this scenario, the scheduling strategy adopted by the coordinator plays a crucial role that dramatically impacts network performance. In this paper, we present a novel scheduling strategy for time-slot allocation in IIoT communications which aims at improving overall network fairness. The proposed strategy mimics the well-known shell game, turning the totally unfair mechanics of this game into a fair scheduling strategy. We compare our proposal with three allocation strategies and evaluate the fairness of each scheduler, showing that our allocator outperforms the others.
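The gist, as a sketch: the coordinator reshuffles which node sits under which slot "shell" every slotframe, so no node can monopolize favourable slots, and fairness can be checked with Jain's index. Node count, frame length, and the uniform shuffle are assumptions; the paper's exact mechanics and baselines are not reproduced.

```python
# Shell-game-style slot allocation plus Jain's fairness index.
import random

def shell_game_schedule(nodes, slots_per_frame, frames, seed=3):
    rng = random.Random(seed)
    grants = {n: 0 for n in nodes}
    for _ in range(frames):
        shells = list(nodes)
        rng.shuffle(shells)                      # move the shells each frame
        for slot in range(slots_per_frame):
            grants[shells[slot % len(shells)]] += 1
    return grants

def jain_index(xs):
    """1.0 = perfectly fair; 1/n = one node hoards everything."""
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))

grants = shell_game_schedule(list("ABCDE"), slots_per_frame=7, frames=200)
print(grants, "Jain:", round(jain_index(list(grants.values())), 4))
```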
2020-12-17
Staschulat, J., Lütkebohle, I., Lange, R..  2020.  The rclc Executor: Domain-specific deterministic scheduling mechanisms for ROS applications on microcontrollers: work-in-progress. 2020 International Conference on Embedded Software (EMSOFT). :18–19.

Robots are networks of a variety of computing devices, such as powerful computing platforms but also tiny microcontrollers. The Robot Operating System (ROS) is the dominant framework for powerful computing devices. While ROS version 2 adds important features like quality of service and security, it cannot be directly applied to microcontrollers because of its large memory footprint. The micro-ROS project has ported the ROS 2 API to microcontrollers. However, the standard ROS 2 concepts are not enough for real-time performance: In the ROS 2 release “Foxy”, the standard ROS 2 Executor, which is the central component responsible for handling timers and incoming message data, is neither real-time capable nor deterministic. Domain-specific requirements of mobile robots, like sense-plan-act control loops, cannot be addressed with the standard ROS 2 Executor. In this paper, we present an advanced Executor for the ROS 2 C API which provides deterministic scheduling and supports domain-specific requirements. A proof-of-concept is demonstrated on a 32-bit microcontroller.
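Conceptually, the deterministic part can be pictured as a fixed, user-ordered processing loop rather than arrival-order dispatch; the Python sketch below mimics that idea only and bears no resemblance to the actual rclc C API.

```python
# Toy deterministic executor: callbacks run in a fixed sense-plan-act order
# once per cycle, regardless of message arrival order.
from collections import deque

class DeterministicExecutor:
    def __init__(self):
        self.handles = []                        # (name, queue, callback) in order

    def add(self, name, callback):
        q = deque()
        self.handles.append((name, q, callback))
        return q                                 # producers push messages here

    def spin_once(self):
        for _name, q, cb in self.handles:        # fixed processing order
            if q:
                cb(q.popleft())                  # at most one message per cycle

ex = DeterministicExecutor()
sense_q = ex.add("sense", lambda m: print("sense:", m))
plan_q = ex.add("plan", lambda m: print("plan:", m))
act_q = ex.add("act", lambda m: print("act:", m))

act_q.append("cmd-1"); sense_q.append("scan-1")  # arrival order differs
ex.spin_once()                                   # still runs sense before act
```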

2020-12-02
Niz, D. de, Andersson, B., Klein, M., Lehoczky, J., Vasudevan, A., Kim, H., Moreno, G..  2019.  Mixed-Trust Computing for Real-Time Systems. 2019 IEEE 25th International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA). :1–11.

Verifying complex Cyber-Physical Systems (CPS) is increasingly important given the push to deploy safety-critical autonomous features. Unfortunately, traditional verification methods do not scale to the complexity of these systems and do not provide systematic methods to protect verified properties when not all the components can be verified. To address these challenges, this paper proposes a real-time mixed-trust computing framework that combines verification and protection. The framework introduces a new task model, where an application task can have both an untrusted and a trusted part. The untrusted part allows complex computations supported by a full OS with a real-time scheduler running in a VM hosted by a trusted hypervisor. The trusted part is executed by another scheduler within the hypervisor and is thus protected from the untrusted part. If the untrusted part fails to finish by a specific time, the trusted part is activated to preserve safety (e.g., prevent a crash) including its timing guarantees. This framework is the first allowing the use of untrusted components for CPS critical functions while preserving logical and timing guarantees, even in the presence of malicious attackers. We present the framework design and implementation along with the schedulability analysis and the coordination protocol between the trusted and untrusted parts. We also present our Raspberry Pi 3 implementation along with experiments showing the behavior of the system under failures of untrusted components, and a drone application to demonstrate its practicality.
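The timing contract at the heart of the framework can be sketched as a budgeted call with a verified fallback. In the paper the trusted part runs inside the hypervisor with real preemption; the thread-and-timeout version below is only an analogy (Python cannot actually preempt the untrusted thread), and all names and durations are invented.

```python
# Budgeted untrusted computation with a trusted safe fallback.
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def untrusted_controller(state):
    time.sleep(state["work"])                    # stands in for complex logic
    return {"thrust": 0.8}

def trusted_fallback(state):
    return {"thrust": 0.0}                       # simple, verified safe action

def mixed_trust_step(state, budget=0.05):
    pool = ThreadPoolExecutor(max_workers=1)
    fut = pool.submit(untrusted_controller, state)
    try:
        out = (fut.result(timeout=budget), "untrusted")
    except TimeoutError:
        out = (trusted_fallback(state), "trusted fallback")
    pool.shutdown(wait=False)                    # a hypervisor would preempt
    return out

print(mixed_trust_step({"work": 0.01}))          # fast enough: untrusted used
print(mixed_trust_step({"work": 0.20}))          # overrun: safe action instead
```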

2020-12-01
Yang, R., Ouyang, X., Chen, Y., Townend, P., Xu, J..  2018.  Intelligent Resource Scheduling at Scale: A Machine Learning Perspective. 2018 IEEE Symposium on Service-Oriented System Engineering (SOSE). :132–141.

Resource scheduling in a computing system addresses the problem of packing tasks with multi-dimensional resource requirements and non-functional constraints. The exhibited heterogeneity of workload and server characteristics in Cloud-scale or Internet-scale systems adds further complexity and new challenges to the problem. Compared with existing solutions based on ad-hoc heuristics, Machine Learning (ML) has the potential to further improve the efficiency of resource management in large-scale systems. In this paper we describe and discuss how ML could be used to understand automatically both workloads and environments, and to help cope with scheduling-related challenges such as consolidating co-located workloads, handling resource requests, guaranteeing applications' QoS, and mitigating tailed stragglers. We introduce a generalized ML-based solution to large-scale resource scheduling and demonstrate its effectiveness through a case study concerning performance-centric node classification and straggler mitigation. We believe that an ML-based method will help to achieve architectural optimization and efficiency improvement.

2020-11-30
Cheng, D., Zhou, X., Ding, Z., Wang, Y., Ji, M..  2019.  Heterogeneity Aware Workload Management in Distributed Sustainable Datacenters. IEEE Transactions on Parallel and Distributed Systems. 30:375–387.
The tremendous growth of cloud computing and large-scale data analytics highlight the importance of reducing datacenter power consumption and environmental impact of brown energy. While many Internet service operators have at least partially powered their datacenters by green energy, it is challenging to effectively utilize green energy due to the intermittency of renewable sources, such as solar or wind. We find that the geographical diversity of internet-scale services can be carefully scheduled to improve the efficiency of applying green energy in datacenters. In this paper, we propose a holistic heterogeneity-aware cloud workload management approach, sCloud, that aims to maximize the system goodput in distributed self-sustainable datacenters. sCloud adaptively places the transactional workload to distributed datacenters, allocates the available resource to heterogeneous workloads in each datacenter, and migrates batch jobs across datacenters, while taking into account the green power availability and QoS requirements. We formulate the transactional workload placement as a constrained optimization problem that can be solved by nonlinear programming. Then, we propose a batch job migration algorithm to further improve the system goodput when the green power supply varies widely at different locations. Finally, we extend sCloud by integrating a flexible batch job manager to dynamically control the job execution progress without violating the deadlines. We have implemented sCloud in a university cloud testbed with real-world weather conditions and workload traces. Experimental results demonstrate sCloud can achieve near-to-optimal system performance while being resilient to dynamic power availability. sCloud with the flexible batch job management approach outperforms a heterogeneity-oblivious approach by 37 percent in improving system goodput and 33 percent in reducing QoS violations.
2020-11-16
Huyck, P..  2019.  Safe and Secure Data Fusion — Use of MILS Multicore Architecture to Reduce Cyber Threats. 2019 IEEE/AIAA 38th Digital Avionics Systems Conference (DASC). :1–9.
Data fusion, as a means to improve aircraft and air traffic safety, is a recent focus of some researchers and system developers. Increases in data volume and processing demands necessitate more powerful hardware and more flexible software architectures. Such improvements in processed data also mean the overall system becomes more complex, resulting in a potentially significantly larger cyber-attack space. Today's multicore processors are one means of satisfying the increased computational needs of data fusion-based systems. When coupled with a real-time operating system (RTOS) capable of flexible core and application scheduling, large cabinets of (power-hungry) single-core processors may be avoided. The functional and assurance capabilities of such an RTOS can be critical elements in providing the application isolation, constrained data flows, and restricted hardware access (including covert channel prevention) necessary to reduce the overall cyber-attack space. This paper examines fundamental considerations of a multiple independent levels of security (MILS) architecture when supported by a multicore-based real-time operating system. The paper draws upon assurance activities and functional properties associated with a previous Common Criteria evaluation assurance level (EAL) 6+ / High-Robustness Separation Kernel certification effort and contrasts those with activities performed as part of a MILS multicore related project. The paper discusses key characteristics and functional capabilities necessary to achieve overall system security and safety. It defines architectural considerations essential for scheduling applications on a multicore processor to reduce security risks. For civil aircraft systems, the paper discusses the applicability of the security assurance and architecture configurations to system providers looking to increase their resilience to cyber threats.
2020-11-02
Wang, Nan, Yao, Manting, Jiang, Dongxu, Chen, Song, Zhu, Yu.  2018.  Security-Driven Task Scheduling for Multiprocessor System-on-Chips with Performance Constraints. 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). :545–550.

The high penetration of third-party intellectual property (3PIP) brings a high risk of malicious inclusions and data leakage in products due to planted hardware Trojans, and system-level security constraints have recently been proposed to protect MPSoCs against hardware Trojans. However, secret communication can still be established in the context of the proposed security constraints, and thus another type of security constraint is introduced to fully prevent such malicious inclusions. In addition, fulfilling the security constraints incurs serious schedule-length overhead, so a two-stage performance-constrained task scheduling algorithm is proposed to maintain most of the security constraints. In the first stage, the schedule length is iteratively reduced by assigning sets of adjacent tasks to the same core after calculating the maximum weight independent set of a graph consisting of all timing-critical paths. In the second stage, tasks are assigned to proper IP vendors and scheduled into time periods while minimizing the number of cores required. The experimental results show that our work reduces the schedule length of a task graph while violating only a small number of security constraints.
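The first stage leans on a maximum weight independent set computation; exact MWIS is NP-hard, so the sketch below uses the classic weight/degree greedy heuristic on an invented conflict graph of timing-critical paths, purely to show the building block.

```python
# Greedy maximum-weight independent set on a small conflict graph.
def greedy_mwis(weights, edges):
    """weights: {node: weight}; edges: iterable of frozenset node pairs."""
    adj = {v: set() for v in weights}
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    chosen, banned = set(), set()
    # Prefer heavy, low-degree nodes; chosen nodes ban their neighbours.
    for v in sorted(weights, key=lambda v: weights[v] / (len(adj[v]) + 1),
                    reverse=True):
        if v not in banned:
            chosen.add(v)
            banned |= adj[v] | {v}
    return chosen

w = {"p1": 9, "p2": 7, "p3": 6, "p4": 4}         # weight = path criticality
e = [frozenset(("p1", "p2")), frozenset(("p2", "p3")),
     frozenset(("p3", "p4"))]
print(greedy_mwis(w, e))                         # {'p1', 'p3'}
```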

2020-10-06
Zaman, Tarannum Shaila, Han, Xue, Yu, Tingting.  2019.  SCMiner: Localizing System-Level Concurrency Faults from Large System Call Traces. 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE). :515–526.

Localizing concurrency faults that occur in production is hard because, (1) detailed field data, such as user input, file content and interleaving schedule, may not be available to developers to reproduce the failure; (2) it is often impractical to assume the availability of multiple failing executions to localize the faults using existing techniques; (3) it is challenging to search for buggy locations in an application given limited runtime data; and, (4) concurrency failures at the system level often involve multiple processes or event handlers (e.g., software signals), which cannot be handled by existing tools for diagnosing intra-process (thread-level) failures. To address these problems, we present SCMiner, a practical online bug diagnosis tool to help developers understand how a system-level concurrency fault happens based on the logs collected by the default system audit tools. SCMiner achieves online bug diagnosis to obviate the need for offline bug reproduction. SCMiner does not require code instrumentation on the production system or rely on the assumption of the availability of multiple failing executions. Specifically, after the system call traces are collected, SCMiner uses data mining and statistical anomaly detection techniques to identify the failure-inducing system call sequences. It then maps each abnormal sequence to specific application functions. We have conducted an empirical study on 19 real-world benchmarks. The results show that SCMiner is both effective and efficient at localizing system-level concurrency faults.
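The statistical core, stripped of everything else, is rare-sequence detection: compare system-call n-gram frequencies in the failing trace against a baseline of passing runs. The n-gram length, threshold, and traces below are assumptions; SCMiner's actual mining and mapping back to application functions are more involved.

```python
# Flag system-call n-grams that are rare relative to passing-run baselines.
from collections import Counter

def ngrams(trace, n=3):
    return [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]

def suspicious_ngrams(passing_traces, failing_trace, n=3, min_ratio=5.0):
    baseline = Counter()
    for t in passing_traces:
        baseline.update(ngrams(t, n))
    flagged = []
    for g, count in Counter(ngrams(failing_trace, n)).items():
        expected = baseline[g] / max(len(passing_traces), 1)
        if count > min_ratio * max(expected, 0.1):   # rare in the baseline
            flagged.append(g)
    return flagged

ok_runs = [["open", "read", "write", "close"]] * 20
failing = ["open", "read", "kill", "read", "write", "close"]
print(suspicious_ngrams(ok_runs, failing))       # n-grams around "kill"
```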

Baruah, Sanjoy, Burns, Alan.  2019.  Incorporating Robustness and Resilience into Mixed-Criticality Scheduling Theory. 2019 IEEE 22nd International Symposium on Real-Time Distributed Computing (ISORC). :155–162.

Mixed-criticality scheduling theory (MCSh) was developed to allow for more resource-efficient implementation of systems comprising different components that need to have their correctness validated at different levels of assurance. As originally defined, MCSh deals exclusively with pre-runtime verification of such systems; hence many mixed-criticality scheduling algorithms that have been developed tend to exhibit rather poor survivability characteristics during run-time. (E.g., MCSh allows for less-important (“LO-criticality”) workloads to be completely discarded in the event that run-time behavior is not compliant with the assumptions under which the correctness of the LO-criticality workload should be verified.) Here we seek to extend MCSh to incorporate survivability considerations, by proposing quantitative metrics for the robustness and resilience of mixed-criticality scheduling algorithms. Such metrics allow us to make quantitative assertions regarding the survivability characteristics of mixed-criticality scheduling algorithms, and to compare different algorithms from the perspective of their survivability. We propose that MCSh seek to develop scheduling algorithms that possess superior survivability characteristics, thereby obtaining algorithms with better survivability properties than current ones (which, since they have been developed within a survivability-agnostic framework, tend to focus exclusively on pre-runtime verification and ignore survivability issues entirely).

2020-07-20
Guelton, Serge, Guinet, Adrien, Brunet, Pierrick, Martinez, Juan Manuel, Dagnat, Fabien, Szlifierski, Nicolas.  2018.  [Research Paper] Combining Obfuscation and Optimizations in the Real World. 2018 IEEE 18th International Working Conference on Source Code Analysis and Manipulation (SCAM). :24–33.
Code obfuscation is the de facto standard to protect intellectual property when delivering code in an unmanaged environment. It relies on additive layers of code tangling techniques, white-box encryption calls and platform-specific or tool-specific countermeasures to make it harder for a reverse engineer to access critical pieces of data or to understand core algorithms. The literature provides plenty of different obfuscation techniques that can be used at compile time to transform data or control flow in order to provide some kind of protection against different reverse engineering scenarios. Scheduling code transformations to optimize a given metric is known as the pass scheduling problem, a problem known to be NP-hard, but solved in a practical way using hard-coded sequences that are generally satisfactory. Adding code obfuscation to the problem introduces two new dimensions. First, as a code obfuscator needs to find a balance between obfuscation and performance, pass scheduling becomes a multi-criteria optimization problem. Second, obfuscation passes transform their inputs in unconventional ways, which means some pass combinations may not be desirable or even valid. This paper highlights several issues met when blindly chaining different kinds of obfuscation and optimization passes, emphasizing the need for a formal model to combine them. It proposes a non-intrusive formalism to leverage sequential pass management techniques. The model is validated on real-world scenarios gathered during the development of an industrial-strength obfuscator on top of the LLVM compiler infrastructure.
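A miniature of the multi-criteria pass-scheduling problem: enumerate orders that respect pairwise validity constraints, score each order on (performance, obfuscation) with an order-dependent interaction, and keep the Pareto-optimal ones. Pass names, scores, and the forbidden pair are invented; this is not the paper's formalism.

```python
# Valid pass orders, scored on two criteria, filtered to the Pareto front.
from itertools import permutations

PASSES = {"inline": (2, 0), "flatten-cfg": (-1, 3),
          "opaque-pred": (-1, 2), "dce": (1, 0)}  # (perf delta, obf delta)
FORBIDDEN = {("flatten-cfg", "inline")}           # invalid adjacent combination

def valid(order):
    return all((a, b) not in FORBIDDEN for a, b in zip(order, order[1:]))

def score(order):
    perf, obf = 0.0, 0.0
    for p in order:
        dp, do = PASSES[p]
        obf += do * (0.5 if perf > 1 else 1.0)    # order-dependent interaction
        perf += dp
    return perf, obf

candidates = [(score(o), o) for o in permutations(PASSES) if valid(o)]
pareto = [(s, o) for s, o in candidates
          if not any(t[0] >= s[0] and t[1] >= s[1] and t != s
                     for t, _ in candidates)]
print(len(candidates), "valid orders,", len(pareto), "Pareto-optimal")
```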
Stroup, Ronald L., Niewoehner, Kevin R..  2019.  Application of Artificial Intelligence in the National Airspace System – A Primer. 2019 Integrated Communications, Navigation and Surveillance Conference (ICNS). :1–14.

The National Airspace System (NAS), as a portion of the US transportation system, has not yet begun to model or adopt integration of Artificial Intelligence (AI) technology. However, users of the NAS, i.e., air transport operators, UAS operators, etc., are beginning to use this technology throughout their operations. At issue within the broader aviation marketplace is the continued search for a solution set to the persistent daily delays and schedule perturbations that occur within the NAS. Despite billions invested through the NAS Modernization Program, the delays persist in the face of reduced demand for commercial routings. Every delay represents an economic loss to commercial transport operators, passengers, freighters, and any business depending on transportation performance. Therefore, the FAA needs to begin to address, from an advanced-concepts perspective, what effects this wave of new technology will have on various operational performance parameters, including safety, security, efficiency, and resiliency. This paper is the first in a series of papers we are developing to explore the application of AI in the National Airspace System (NAS). This first paper is meant to get everyone in the aviation community on the same page, a primer if you will, to start the technical discussions. This paper defines AI, the capabilities associated with AI, current use cases within the aviation ecosystem, and how to prepare for the insertion of AI in the NAS. The next series of papers will look at NAS Operations Theory utilizing AI capabilities, eventually leading to a future intelligent NAS (iNAS) environment.

2020-07-16
Ma, Siyou, Feng, Gao, Yan, Yunqiang.  2019.  Study on Hybrid Collaborative Simulation Testing Method Towards CPS. 2019 IEEE 19th International Conference on Software Quality, Reliability and Security Companion (QRS-C). :51–56.

CPS are generally complex to study, analyze, and design. Simulation testing is an important means of ensuring the correctness of the design and implementation of a CPS, but it is difficult to fully test, verify and evaluate the components or subsystems of a CPS because their development progress is inconsistent. To address this problem, we designed a hybrid P2P-based collaborative simulation test framework composed of full physical nodes, hardware-in-the-loop (HIL) nodes and full digital nodes to simulate components or subsystems at different stages of development. Based on this framework, we then proposed a collaborative simulation control strategy comprising sliding-window-based clock synchronization, dynamic adaptive time advancement, and multi-priority task scheduling with a preemptive time threshold. Experiments showed that the hybrid collaborative simulation testing method proposed in this paper can test CPS more fully and effectively.
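The clock-synchronization ingredient can be illustrated with a conservative sliding window: every node may only advance to min(all local clocks) + window, which automatically throttles fast full-digital nodes to stay near slow HIL or physical ones. Window size and step rates are invented; this is not the paper's protocol.

```python
# Sliding-window conservative time advancement across heterogeneous nodes.
def advance_round(clocks, window, step):
    bound = min(clocks.values()) + window        # grant for this round
    return {n: min(t + step[n], bound) for n, t in clocks.items()}, bound

clocks = {"physical": 0.0, "hil": 0.0, "digital": 0.0}
step = {"physical": 1.0, "hil": 2.0, "digital": 10.0}   # speed mismatch
for _ in range(4):
    clocks, bound = advance_round(clocks, window=2.0, step=step)
    print(f"grant={bound:4.1f}  clocks={clocks}")
# The fast "digital" node is capped each round, staying within the window.
```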

2020-05-15
Wang, Jihe, Zhang, Meng, Qiu, Meikang.  2018.  A Diffusional Schedule for Traffic Reducing on Network-on-Chip. 2018 5th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/2018 4th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). :206–210.
Tasks on a NoC (Network-on-Chip) are less efficient because of long-distance data synchronization. An inefficient task scheduling strategy can lead to a large amount of remote data access that ruins the speedup of parallel execution of multiple tasks. Thus, we propose an energy-efficient task schedule that reduces task traffic with a diffusional pattern. The task mapping algorithm optimizes the traffic distribution by confining tasks to a small area to reduce NoC activity. Compared to application-layer optimization, our task mapping obtains 20% energy saving and 15% latency reduction on average.
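A toy of the diffusional idea: place communicating tasks on mesh tiles in breadth-first order around a seed tile, and compare total hop count against a random placement. Mesh size, the task graph, and the BFS rule are illustrative assumptions, not the paper's algorithm.

```python
# Diffusion-style task mapping on a mesh NoC vs. random placement.
from collections import deque
from itertools import product
import random

def diffusional_map(n_tasks, mesh=4, seed_tile=(0, 0)):
    """Assign tasks 0..n-1 to tiles spreading outward from seed_tile."""
    order, seen, q = [], {seed_tile}, deque([seed_tile])
    while q and len(order) < n_tasks:
        x, y = q.popleft()
        order.append((x, y))
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < mesh and 0 <= nxt[1] < mesh and nxt not in seen:
                seen.add(nxt)
                q.append(nxt)
    return dict(enumerate(order))

def traffic_cost(mapping, edges):                # total Manhattan hop count
    return sum(abs(mapping[a][0] - mapping[b][0]) +
               abs(mapping[a][1] - mapping[b][1]) for a, b in edges)

edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)] # toy task graph
diffused = diffusional_map(5)
tiles = list(product(range(4), repeat=2))
scattered = dict(enumerate(random.Random(0).sample(tiles, 5)))
print("diffusional:", traffic_cost(diffused, edges),
      "random:", traffic_cost(scattered, edges))
```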