Biblio

Found 7504 results

Filters: Keyword is Metrics
2023-04-28
Baksi, Rudra Prasad.  2022.  Pay or Not Pay? A Game-Theoretical Analysis of Ransomware Interactions Considering a Defender’s Deception Architecture. 2022 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks - Supplemental Volume (DSN-S). :53–54.
Malware created by Advanced Persistent Threat (APT) groups does not typically carry out an attack in a single stage. The “Cyber Kill Chain” framework developed by Lockheed Martin describes an APT through a seven-stage life cycle [5]. APT groups are generally nation-state actors [1]. They perform highly targeted attacks and do not stop until the goal is achieved [7]. Researchers continue to work toward systems and processes that create an environment safe from APT-type attacks [2]. In this paper, the threat considered is ransomware developed by APT groups. WannaCry is an example of highly sophisticated ransomware created by the Lazarus group of North Korea, and its level of sophistication is evident from the existence of a contingency plan of attack upon being discovered [3][6]. The major contribution of this research is the analysis of APT-type ransomware using game theory, presenting optimal strategies for the defender through equilibrium solutions to an APT-type ransomware attack. The goal of these equilibrium solutions is to help the defender prepare before the attack and minimize losses during and after it.
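As a side note to this entry, the kind of equilibrium analysis it describes can be illustrated with a toy two-player game. The sketch below is not the paper's model: the 2x2 defender/attacker payoff matrices are invented for illustration, and the mixed-strategy equilibrium is computed with the standard indifference conditions.

```python
import numpy as np

# Hypothetical payoffs (all values invented for illustration).
# Defender rows: 0 = deploy deception architecture, 1 = no deception
# Attacker cols: 0 = launch ransomware, 1 = hold off
D = np.array([[-2.0, -1.0],   # defender's payoffs
              [-6.0,  0.0]])
A = np.array([[-3.0,  0.0],   # attacker's payoffs
              [ 4.0,  0.0]])

# Interior mixed-strategy equilibrium of a 2x2 bimatrix game via indifference:
# the attacker's mix q makes the defender indifferent between its rows, and
# the defender's mix p makes the attacker indifferent between its columns.
q = (D[1, 1] - D[0, 1]) / (D[0, 0] - D[0, 1] - D[1, 0] + D[1, 1])
p = (A[1, 1] - A[1, 0]) / (A[0, 0] - A[1, 0] - A[0, 1] + A[1, 1])

print(f"Defender deploys deception with probability p = {p:.2f}")
print(f"Attacker launches ransomware with probability q = {q:.2f}")
```

With these invented payoffs the equilibrium is p of roughly 0.57 and q of 0.20; changing the payoffs changes the prescription, which is exactly the kind of sensitivity an equilibrium-based defense analysis explores.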
Deng, Zijie, Feng, Guocong, Huang, Qingshui, Zou, Hong, Zhang, Jiafa.  2022.  Research on Enterprise Information Security Risk Assessment System Based on Bayesian Neural Network. 2022 IEEE 2nd International Conference on Data Science and Computer Application (ICDSCA). :938–941.
Information security construction is a social issue, and the most urgent task is information risk assessment. The Bayesian neural network currently plays a vital role in enterprise information security risk assessment: it overcomes the subjective defects of traditional assessment results and operates efficiently. The risk quantification method based on fuzzy theory and a Bayesian-regularized BP neural network uses fuzzy theory to process the original data and takes the processed data as the input to the neural network, which effectively reduces the ambiguity of linguistic descriptions. At the same time, special training is applied to address the neural network's tendency to fall into local optima. Finally, the risk is verified and quantified through experimental simulation. This paper mainly discusses enterprise information security risk assessment based on a Bayesian neural network, hoping to provide strong technical support for enterprises and organizations carrying out risk rectification plans. The above method therefore offers a new approach to information security risk assessment.
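A minimal sketch of the pipeline this abstract outlines (fuzzify expert ratings, then feed them to a regularized feed-forward network) is given below. The triangular membership function, the three risk factors, and the use of scikit-learn's L2-regularized MLP in place of true Bayesian regularization are all assumptions made for illustration, not the paper's actual design.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def triangular_membership(x, low=0.0, mid=5.0, high=10.0):
    """Map a raw rating to a fuzzy degree in [0, 1] with a triangular membership function."""
    if x <= low or x >= high:
        return 0.0
    return (x - low) / (mid - low) if x <= mid else (high - x) / (high - mid)

# Hypothetical expert ratings (scale 0-10) for three risk factors and a toy risk score.
rng = np.random.default_rng(0)
raw = rng.uniform(0, 10, size=(200, 3))                # asset value, threat level, vulnerability
fuzzy = np.vectorize(triangular_membership)(raw)       # fuzzification step
risk = fuzzy.mean(axis=1) + rng.normal(0, 0.05, 200)   # invented ground-truth risk scores

# An L2-regularized MLP stands in for the paper's Bayesian-regularized BP network.
model = MLPRegressor(hidden_layer_sizes=(16,), alpha=1e-2, max_iter=2000, random_state=0)
model.fit(fuzzy, risk)
print("Predicted risk for fuzzified input [0.8, 0.4, 0.9]:",
      round(model.predict([[0.8, 0.4, 0.9]])[0], 3))
```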
Jain, Ashima, Tripathi, Khushboo, Jatain, Aman, Chaudhary, Manju.  2022.  A Game Theory based Attacker Defender Model for IDS in Cloud Security. 2022 9th International Conference on Computing for Sustainable Global Development (INDIACom). :190–194.

Cloud security has become a serious challenge due to the increasing number of attacks. An Intrusion Detection System (IDS) requires an efficient security model to improve security in the cloud. This paper proposes a game-theory-based model, named Game Theory Cloud Security Deep Neural Network (GT-CSDNN), for security in the cloud. The proposed model works with a Deep Neural Network (DNN) to classify attack and normal data. Its performance is evaluated on the CICIDS-2018 dataset. The dataset is normalized, and optimal points for normal and attack data are evaluated using the Improved Whale Algorithm (IWA). Simulation results show that the proposed model outperforms existing techniques in terms of accuracy, precision, F-score, area under the curve, False Positive Rate (FPR), and detection rate.
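The classification stage sketched in this abstract, normalizing flow features and then training a DNN to separate attack from normal traffic, can be illustrated generically as below. This is neither the GT-CSDNN model nor the Improved Whale Algorithm; the synthetic features and the small scikit-learn MLP are placeholders standing in for the CICIDS-2018 pipeline.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for flow features (the real CICIDS-2018 dataset is not bundled here).
rng = np.random.default_rng(1)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 10))
attack = rng.normal(loc=1.5, scale=1.2, size=(500, 10))
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 500)                    # 0 = normal, 1 = attack

X = MinMaxScaler().fit_transform(X)                    # dataset normalization step
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Small dense network as a placeholder for the paper's DNN classifier.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=1)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["normal", "attack"]))
```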

Dutta, Ashutosh, Hammad, Eman, Enright, Michael, Behmann, Fawzi, Chorti, Arsenia, Cheema, Ahmad, Kadio, Kassi, Urbina-Pineda, Julia, Alam, Khaled, Limam, Ahmed et al..  2022.  Security and Privacy. 2022 IEEE Future Networks World Forum (FNWF). :1–71.
The digital transformation brought on by 5G is redefining current models of end-to-end (E2E) connectivity and service reliability to include security-by-design principles necessary to enable 5G to achieve its promise. 5G trustworthiness highlights the importance of embedding security capabilities from the very beginning, while the 5G architecture is being defined and standardized. Security requirements need to overlay and permeate through the different layers of 5G systems (physical, network, and application) as well as different parts of an E2E 5G architecture within a risk-management framework that takes into account the evolving security-threat landscape. 5G presents a typical use-case of wireless communication and computer networking convergence, where 5G fundamental building blocks include components such as Software Defined Networks (SDN), Network Functions Virtualization (NFV) and the edge cloud. This convergence extends many of the security challenges and opportunities applicable to SDN/NFV and cloud to 5G networks. Thus, 5G security needs to consider additional security requirements (compared to previous generations) such as SDN controller security, hypervisor security, orchestrator security, cloud security, edge security, etc. At the same time, 5G networks offer security improvement opportunities that should be considered. Here, 5G architectural flexibility, programmability and complexity can be harnessed to improve resilience and reliability. The working group scope fundamentally addresses the following:
• 5G security considerations need to overlay and permeate through the different layers of the 5G systems (physical, network, and application) as well as different parts of an E2E 5G architecture, including a risk-management framework that takes into account the evolving security-threat landscape.
• 5G exemplifies a use-case of heterogeneous access and computer networking convergence, which extends a unique set of security challenges and opportunities (e.g., related to SDN/NFV and edge cloud) to 5G networks. Similarly, 5G networks by design offer potential security benefits and opportunities through harnessing the architectural flexibility, programmability and complexity to improve resilience and reliability.
• The IEEE FNI security WG's roadmap framework follows a taxonomic structure, differentiating the 5G functional pillars and corresponding cybersecurity risks. As part of cross collaboration, the security working group will also look into the security issues associated with other roadmap working groups within the IEEE Future Network Initiative.
ISSN: 2770-7679
Iqbal, Sarfraz.  2022.  Analyzing Initial Design Theory Components for Developing Information Security Laboratories. 2022 6th International Conference on Cryptography, Security and Privacy (CSP). :36–40.
Online information security labs intended for training and facilitating hands-on learning for distance students at master’s level are not easy to develop and administer. This research focuses on analyzing the results of a design science research (DSR) project for the design, development, and implementation of an InfoSec lab. It contributes to existing research by putting forth an initial outline of a generalized design-theory model for InfoSec labs aimed at hands-on education of students in the field of information security. The anatomy-of-design-theory framework is used to analyze the necessary components of the anticipated design theory for future InfoSec labs.
Wang, Man.  2022.  Research on Network Confrontation Information Security Protection System under Computer Deep Learning. 2022 IEEE 2nd International Conference on Data Science and Computer Application (ICDSCA). :1442–1447.
To address the limitation of a single hopping strategy in terminal-information-hopping active defense technology, a variety of heterogeneous hopping modes are introduced into the terminal information hopping system, the definition of terminal information is expanded, and an adaptive adjustment of the hopping strategy is given. A network adversarial training simulation system is researched and designed, and its subsystems are discussed from the perspective of key technologies and their implementation, including the interactive adversarial training simulation system, the adversarial training simulation support software system, the adversarial training simulation evaluation system, and the adversarial training Mock Repository. The system can provide a good environment for network confrontation theory research and network confrontation training simulation, which is of great significance.
Zhu, Yuwen, Yu, Lei.  2022.  A Modeling Method of Cyberspace Security Structure Based on Layer-Level Division. 2022 IEEE 5th International Conference on Computer and Communication Engineering Technology (CCET). :247–251.
As cyberspace structure becomes more and more complex, problems such as dynamic network topology, complex composition, large spatial scale, and a high degree of self-organization are becoming increasingly important. In this paper, we model cyberspace elements and their dependencies using graph theory. The layer dimension adopts a modeling method that combines virtual and real network space, while the level dimension adopts a spatial iteration method. Combining the layer and level models into one, this paper proposes a fast modeling method for a cyberspace security structure model that takes network connection relationships, hierarchical relationships, and vulnerability information as input. This method can clearly express both individual vulnerability constraints in cyberspace and the hierarchical relationships of the complex dependencies among network individuals. For independent network elements or independent groups of network elements, it is flexible and can greatly reduce the computational complexity in later applications.
Gao, Hongbin, Wang, Shangxing, Zhang, Hongbin, Liu, Bin, Zhao, Dongmei, Liu, Zhen.  2022.  Network Security Situation Assessment Method Based on Absorbing Markov Chain. 2022 International Conference on Networking and Network Applications (NaNA). :556–561.
This paper presents a new network security evaluation method based on an absorbing Markov chain. This method differs from other graph-theory-based network security situation assessment methods: it addresses issues such as the poor objectivity of other methods, incomplete consideration of evaluation factors, and the mismatch between evaluation results and the actual state of the network. First, the method collects the security elements in the network. Then, using graph theory combined with an absorbing Markov chain, the threat values of vulnerable nodes are calculated and sorted. Finally, the most likely attack path is obtained by combining network asset information to determine the current network security status. The experimental results show that the method fully considers vulnerability, threat-node ranking, and the specific situation of system network assets, which makes the evaluation result close to the actual network situation.
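The core computation behind an absorbing-Markov-chain assessment, ranking the vulnerable (transient) nodes by how often an attacker is expected to visit them before reaching an absorbing goal state, can be sketched as follows. The attack graph and its transition probabilities are invented; the fundamental matrix N = (I - Q)^(-1) supplies the expected visit counts used here as threat scores.

```python
import numpy as np

# Toy attack graph: 3 transient (vulnerable) nodes and 1 absorbing node (attacker goal).
# States: 0 = web server, 1 = app server, 2 = database, 3 = goal reached (absorbing).
# All transition probabilities are invented for illustration.
P = np.array([
    [0.0, 0.6, 0.3, 0.1],
    [0.2, 0.0, 0.5, 0.3],
    [0.0, 0.1, 0.0, 0.9],
    [0.0, 0.0, 0.0, 1.0],
])

Q = P[:3, :3]                        # transitions among transient states only
N = np.linalg.inv(np.eye(3) - Q)     # fundamental matrix: expected visits per transient state

# Threat score of each node = expected visits when the attack starts at the web server.
threat = N[0]
for name, score in sorted(zip(["web server", "app server", "database"], threat),
                          key=lambda t: -t[1]):
    print(f"{name}: expected visits {score:.2f}")
```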
Hu, Yuanyuan, Cao, Xiaolong, Li, Guoqing.  2022.  The Design and Realization of Information Security Technology and Computer Quality System Structure. 2022 International Conference on Artificial Intelligence in Everything (AIE). :460–464.
With the development of computer technology and information security technology, computer networks will increasingly become an important means of information exchange, permeating all areas of social life. Therefore, recognizing the vulnerabilities and potential threats of computer networks as well as the various security problems that exist in practice, designing and researching computer quality architecture, and ensuring the security of network information are issues that need to be resolved urgently. The purpose of this article is to study the design and realization of information security technology and the computer quality system structure. This article first summarizes the basic theory of information security technology and then elaborates on its core technologies. Combining the current status of the computer quality system structure and analyzing existing problems and deficiencies, it designs and researches the computer quality system structure on the basis of information security technology. The article systematically expounds the function module data, interconnection structure, and routing selection of the computer quality system structure, and uses comparative, observational, and other research methods in the design and research. Experimental research shows that when the load of the studied computer quality system structure is 0 or 100, the data loss rate for packets of different lengths is 0% and the correctness rate is 100%, which shows extremely high feasibility.
López, Hiram H., Matthews, Gretchen L., Valvo, Daniel.  2022.  Secure MatDot codes: a secure, distributed matrix multiplication scheme. 2022 IEEE Information Theory Workshop (ITW). :149–154.
This paper presents secure MatDot codes, a family of evaluation codes that support secure distributed matrix multiplication via a careful selection of evaluation points that exploit the properties of the dual code. We show that the secure MatDot codes provide security against the user by using locally recoverable codes. These new codes complement the recently studied discrete Fourier transform codes for distributed matrix multiplication schemes that also provide security against the user. There are scenarios where the associated costs are the same for both families and instances where the secure MatDot codes offer a lower cost. In addition, the secure MatDot code provides an alternative way to handle the matrix multiplication by identifying the fastest servers in advance. In this way, it can determine a product using fewer servers, specified in advance, than the MatDot codes which achieve the optimal recovery threshold for distributed matrix multiplication schemes.
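The (non-secure) MatDot construction underlying these codes is concrete enough to sketch: split A by columns and B by rows, hand each worker a polynomial evaluation of the blocks, and recover A·B from any 2K - 1 worker results by interpolation. The sketch below works over the reals rather than a finite field and omits the secret-sharing layer that provides security, so it only illustrates the recovery threshold.

```python
import numpy as np

K = 2                                    # number of column/row blocks
A = np.random.rand(4, 4)
B = np.random.rand(4, 4)

# MatDot split: A into K column blocks, B into K row blocks, so A @ B = sum_i A_i @ B_i.
A_blocks = np.hsplit(A, K)
B_blocks = np.vsplit(B, K)

# Polynomial encodings pA(x) = sum_i A_i x^i and pB(x) = sum_j B_j x^(K-1-j).
def pA(x): return sum(Ai * x**i for i, Ai in enumerate(A_blocks))
def pB(x): return sum(Bj * x**(K - 1 - j) for j, Bj in enumerate(B_blocks))

# Each worker s returns pA(x_s) @ pB(x_s); any 2K - 1 = 3 results suffice for recovery.
xs = [1.0, 2.0, 3.0]
worker_results = [pA(x) @ pB(x) for x in xs]

# pA(x) @ pB(x) has degree 2K - 2, and its coefficient of x^(K-1) equals A @ B.
V = np.vander(xs, 2 * K - 1, increasing=True)                  # interpolation matrix
coeffs = np.linalg.solve(V, np.stack(worker_results).reshape(len(xs), -1))
AB_recovered = coeffs[K - 1].reshape(A.shape[0], B.shape[1])

print("max recovery error:", np.abs(AB_recovered - A @ B).max())
```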
Zhang, Xin, Sun, Hongyu, He, Zhipeng, Gu, MianXue, Feng, Jingyu, Zhang, Yuqing.  2022.  VDBWGDL: Vulnerability Detection Based On Weight Graph And Deep Learning. 2022 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :186–190.
Vulnerability detection has always been an essential part of maintaining information security, and existing work can significantly improve detection performance. However, due to differences in representation forms and deep learning models, various methods still have limitations. To overcome this defect, we propose a vulnerability detection method, VDBWGDL, based on weight graphs and deep learning. First, it accurately locates vulnerability-sensitive keywords and generates variant code that satisfies the vulnerability trigger logic and the programmer's coding style through code-variant methods. Then, the control flow graph is sliced for vulnerable code keywords and program-critical statements. Each code block is converted into a vector containing rich semantic information by the deep learning model and fed into the weight graph, and different weights are set for each node according to specific rules. Finally, similarity is obtained through a similarity-comparison algorithm, and suspected vulnerabilities are output according to different thresholds. VDBWGDL improves accuracy and F1 score by 3.98% and 4.85%, respectively, compared with four state-of-the-art models. The experimental results prove the effectiveness of VDBWGDL.
ISSN: 2325-6664
Wang, Yiwen, Liang, Jifan, Ma, Xiao.  2022.  Local Constraint-Based Ordered Statistics Decoding for Short Block Codes. 2022 IEEE Information Theory Workshop (ITW). :107–112.
In this paper, we propose a new ordered statistics decoding (OSD) for linear block codes, referred to as local constraint-based OSD (LC-OSD). Unlike the conventional OSD, which chooses the most reliable basis (MRB) for re-encoding, the LC-OSD chooses an extended MRB on which local constraints are naturally imposed. A list of candidate codewords is then generated by performing a serial list Viterbi algorithm (SLVA) over the trellis specified by the local constraints. To terminate the SLVA early for complexity reduction, we present a simple criterion that monitors the ratio of the bound on the likelihood of the unexplored candidate codewords to the sum of the hard-decision vector's likelihood and the up-to-date optimal candidate's likelihood. Simulation results show that the LC-OSD requires far fewer test patterns than the conventional OSD while causing negligible performance loss. Comparisons with other complexity-reduced OSDs are also conducted, showing the advantages of the LC-OSD in terms of complexity.
Jiang, Zhenghong.  2022.  Source Code Vulnerability Mining Method based on Graph Neural Network. 2022 IEEE 2nd International Conference on Electronic Technology, Communication and Information (ICETCI). :1177–1180.
Vulnerability discovery is an important field of computer security research and development today. Because most current vulnerability discovery methods require large-scale manual auditing, and the code parsing process is cumbersome and time-consuming, the effectiveness of vulnerability discovery is reduced. Given the inherent uncertainty of vulnerability discovery, the most basic tool design principle is that tools should assist security analysts rather than completely replace them. The purpose of this paper is to study a source code vulnerability discovery method based on graph neural networks. This paper analyzes the three processes of the method, namely data preparation, source code vulnerability mining, and security assurance, and also examines the suspect and particular aspects of the experimental results. The empirical analysis shows that traditional source code vulnerability mining methods become more concise and convenient after applying graph neural network technology, and a survey found that more than 82% of respondents felt that design efficiency improved when graph neural networks were used in the source code vulnerability mining method.
Aladi, Ahmed, Alsusa, Emad.  2022.  A Secure Turbo Codes Design on Physical Layer Security Based on Interleaving and Puncturing. 2022 IEEE 96th Vehicular Technology Conference (VTC2022-Fall). :1–7.
Nowadays, improving the reliability and security of transmitted data has gained more attention with the increase in emerging power-limited and lightweight communication devices, and transmission also needs to meet specific latency requirements. Combining data encryption and encoding in one physical-layer block has been exploited to study the effect on security and latency compared with traditional sequential data transmission. Some current works target secure error-correcting codes that may be candidates for post-quantum computing. However, modifying popular channel coding techniques to guarantee secrecy while maintaining the same error performance and complexity at the decoder is challenging, since altering the structure of the channel coding blocks results in less optimal decoding performance. Also, the inherent redundancy of error-correcting codes complicates the encryption method. In this paper, we briefly review previously proposed security schemes for turbo codes. Then, we propose a secure turbo code design and compare it with the relevant security schemes in the literature. We show that the proposed method is more secure without adding complexity.
ISSN: 2577-2465
Zhu, Tingting, Liang, Jifan, Ma, Xiao.  2022.  Ternary Convolutional LDGM Codes with Applications to Gaussian Source Compression. 2022 IEEE International Symposium on Information Theory (ISIT). :73–78.
We present a ternary source coding scheme in this paper, which is a special class of low-density generator matrix (LDGM) codes. We prove that a ternary linear block LDGM code, whose generator matrix is randomly generated with each element independent and identically distributed, is universal for source coding in terms of the symbol-error rate (SER). To circumvent the highly complex maximum likelihood decoding, we introduce a special class of convolutional LDGM codes, called block Markov superposition transmission of repetition (BMST-R) codes, which are iteratively decodable by a sliding window algorithm. The presented BMST-R codes are then applied to construct a tandem scheme for Gaussian source compression, where a dead-zone quantizer is introduced before the ternary source coding. The main advantages of this scheme are its universality and flexibility: the dead-zone quantizer can choose a proper quantization level according to the distortion requirement, while the LDGM codes can adapt the code rate to approach the entropy of the quantized sequence. Numerical results show that the proposed scheme performs well for ternary sources over a wide range of code rates and that the distortion introduced by quantization dominates provided that the code rate is slightly greater than the discrete entropy.
ISSN: 2157-8117
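The dead-zone quantizer mentioned in the entry above is simple to illustrate: samples whose magnitude falls inside the dead zone map to 0, everything else maps to +/-1, and widening the zone trades distortion for a lower-entropy (more compressible) ternary sequence. The thresholds below are arbitrary, the LDGM/BMST-R encoding itself is not shown, and distortion is measured crudely against the quantizer outputs themselves.

```python
import numpy as np

def dead_zone_quantize(x, threshold):
    """Ternary dead-zone quantizer: 0 inside the dead zone, +/-1 outside."""
    return np.where(np.abs(x) <= threshold, 0, np.sign(x)).astype(int)

rng = np.random.default_rng(2)
source = rng.normal(0.0, 1.0, size=100_000)         # Gaussian source samples

for t in (0.3, 0.6, 1.0):                           # arbitrary thresholds for illustration
    q = dead_zone_quantize(source, t)
    probs = np.bincount(q + 1, minlength=3) / q.size
    entropy_bits = -sum(p * np.log2(p) for p in probs if p > 0)
    mse = np.mean((source - q) ** 2)
    print(f"threshold {t:.1f}: symbol probs {probs.round(3)}, "
          f"entropy {entropy_bits:.3f} bits ({entropy_bits / np.log2(3):.3f} trits), MSE {mse:.3f}")
```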
Yang, Hongna, Zhang, Yiwei.  2022.  On an extremal problem of regular graphs related to fractional repetition codes. 2022 IEEE International Symposium on Information Theory (ISIT). :1566–1571.
Fractional repetition (FR) codes are a special family of regenerating codes with the repair-by-transfer property. The constructions of FR codes are naturally related to combinatorial designs, graphs, and hypergraphs. Given the file size of an FR code, it is desirable to determine the minimum number of storage nodes needed. The problem is related to an extremal graph theory problem, which asks for the minimum number of vertices of an α-regular graph such that any subgraph with k vertices has at most δ edges. In this paper, we present a class of regular graphs for this problem to give the bounds for the minimum number of storage nodes for the FR codes.
ISSN: 2157-8117
Tang, Shibo, Wang, Xingxin, Gao, Yifei, Hu, Wei.  2022.  Accelerating SoC Security Verification and Vulnerability Detection Through Symbolic Execution. 2022 19th International SoC Design Conference (ISOCC). :207–208.
Model checking is one of the most commonly used techniques in formal verification. However, the exponentially scaling state space renders exhaustive state enumeration inefficient even for a moderate System on Chip (SoC) design. In this paper, we propose a method that leverages symbolic execution to accelerate state space search and pinpoint security vulnerabilities. We automatically convert the hardware design to functionally equivalent C++ code and utilize the KLEE symbolic execution engine to perform state exploration through heuristic search. To reduce the search space, we symbolically represent essential input signals while making non-critical inputs concrete. Experimental results demonstrate that our method can precisely identify security vulnerabilities at significantly lower computational cost.
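KLEE itself runs on LLVM bitcode compiled from C/C++, so it cannot be invoked from a short Python snippet; the sketch below instead uses the Z3 solver's Python bindings to convey the underlying idea on a toy register-interface property: keep the critical inputs symbolic, fix the non-critical ones to concrete values, and ask the solver whether any input reaches an insecure state. The signal names, address range, and bypass pattern are all invented.

```python
from z3 import BitVec, BitVecVal, Solver, And, sat

# Toy model of a security check in an SoC register interface (names and values invented).
addr = BitVec("addr", 16)           # critical input: kept symbolic
data = BitVec("data", 32)           # critical input: kept symbolic
debug_mode = BitVecVal(0, 1)        # non-critical input: made concrete

s = Solver()
# "Insecure state": a write reaches the protected key-register range even though
# debug mode is off, because a hypothetical bypass pattern is mishandled.
s.add(And(addr >= 0x1F00, addr <= 0x1F0F))           # targets the protected range
s.add(debug_mode == 0)                               # debug disabled
s.add((data & 0xFFFF0000) == 0xDEAD0000)             # invented bypass pattern

if s.check() == sat:
    m = s.model()
    print("vulnerability witness:", {d.name(): m[d] for d in m.decls()})
else:
    print("no input reaches the insecure state under these constraints")
```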
Abraham, Jacob, Ehret, Alan, Kinsy, Michel A..  2022.  A Compiler for Transparent Namespace-Based Access Control for the Zeno Architecture. 2022 IEEE International Symposium on Secure and Private Execution Environment Design (SEED). :1–10.
With memory safety and security issues continuing to plague modern systems, security is rapidly becoming a first class priority in new architectures and competes directly with performance and power efficiency. The capability-based architecture model provides a promising solution to many memory vulnerabilities by replacing plain addresses with capabilities, i.e., addresses and related metadata. A key advantage of the capability model is compatibility with existing code bases. Capabilities can be implemented transparently to a programmer, i.e., without source code changes. Capabilities leverage semantics in source code to describe access permissions but require customized compilers to translate the semantics to their binary equivalent. In this work, we introduce a complete capability-aware compiler toolchain for such secure architectures. We illustrate the compiler construction with a RISC-V capability-based architecture, called Zeno. As a security-focused, large-scale, global shared memory architecture, Zeno implements a Namespace-based capability model for accesses. Namespace IDs (NSID) are encoded with an extended addressing model to associate them with access permission metadata elsewhere in the system. The NSID extended addressing model requires custom compiler support to fully leverage the protections offered by Namespaces. The Zeno compiler produces code transparently to the programmer that is aware of Namespaces and maintains their integrity. The Zeno assembler enables custom Zeno instructions which support secure memory operations. Our results show that our custom toolchain moderately increases the binary size compared to non-Zeno compilation. We find the minimal overhead incurred by the additional NSID management instructions to be an acceptable trade-off for the memory safety and security offered by Zeno Namespaces.
Moses, William S., Narayanan, Sri Hari Krishna, Paehler, Ludger, Churavy, Valentin, Schanen, Michel, Hückelheim, Jan, Doerfert, Johannes, Hovland, Paul.  2022.  Scalable Automatic Differentiation of Multiple Parallel Paradigms through Compiler Augmentation. SC22: International Conference for High Performance Computing, Networking, Storage and Analysis. :1–18.
Derivatives are key to numerous science, engineering, and machine learning applications. While existing tools generate derivatives of programs in a single language, modern parallel applications combine a set of frameworks and languages to leverage available performance and function in an evolving hardware landscape. We propose a scheme for differentiating arbitrary DAG-based parallelism that preserves scalability and efficiency, implemented into the LLVM-based Enzyme automatic differentiation framework. By integrating with a full-fledged compiler backend, Enzyme can differentiate numerous parallel frameworks and directly control code generation. Combined with its ability to differentiate any LLVM-based language, this flexibility permits Enzyme to leverage the compiler tool chain for parallel and differentiation-specific optimizations. We differentiate nine distinct versions of the LULESH and miniBUDE applications, written in different programming languages (C++, Julia) and parallel frameworks (OpenMP, MPI, RAJA, Julia tasks, MPI.jl), demonstrating similar scalability to the original program. On benchmarks with 64 threads or nodes, we find a differentiation overhead of 3.4–6.8× on C++ and 5.4–12.5× on Julia.
Li, Zongjie, Ma, Pingchuan, Wang, Huaijin, Wang, Shuai, Tang, Qiyi, Nie, Sen, Wu, Shi.  2022.  Unleashing the Power of Compiler Intermediate Representation to Enhance Neural Program Embeddings. 2022 IEEE/ACM 44th International Conference on Software Engineering (ICSE). :2253–2265.
Neural program embeddings have demonstrated considerable promise in a range of program analysis tasks, including clone identification, program repair, code completion, and program synthesis. However, most existing methods generate neural program embeddings directly from the program source code, by learning from features such as tokens, abstract syntax trees, and control flow graphs. This paper takes a fresh look at how to improve program embeddings by leveraging compiler intermediate representation (IR). We first demonstrate simple yet highly effective methods for enhancing embedding quality by training embedding models alongside source code and LLVM IR generated by default optimization levels (e.g., -O2). We then introduce IRGEN, a framework based on genetic algorithms (GA), to identify (near-)optimal sequences of optimization flags that can significantly improve embedding quality. We use IRGEN to find optimal sequences of LLVM optimization flags by performing GA on source code datasets. We then extend a popular code embedding model, CodeCMR, by adding a new objective based on triplet loss to enable joint learning over source code and LLVM IR. We benchmark the quality of embedding using a representative downstream application, code clone detection. When CodeCMR was trained with source code and LLVM IRs optimized by the findings of IRGEN, the embedding quality was significantly improved, outperforming the state-of-the-art model, CodeBERT, which was trained only with source code. Our augmented CodeCMR also outperformed CodeCMR trained over source code and IR optimized with default optimization levels. We investigate the properties of optimization flags that increase embedding quality, demonstrate IRGEN's generalization in boosting other embedding models, and establish IRGEN's use in settings with extremely limited training data. Our research and findings demonstrate that a straightforward addition to modern neural code embedding models can provide a highly effective enhancement.
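The search loop at the heart of a genetic algorithm over optimization-flag sequences is easy to sketch. In the placeholder below, the flag pool is a small subset of LLVM passes, the fitness function (which in IRGEN would compile with the chosen flags and score the resulting embedding quality) is a dummy, and none of the hyperparameters reflect IRGEN's actual settings.

```python
import random

FLAG_POOL = ["-mem2reg", "-sroa", "-gvn", "-instcombine",
             "-simplifycfg", "-loop-unroll", "-licm", "-dce"]   # illustrative LLVM passes
SEQ_LEN, POP_SIZE, GENERATIONS = 5, 20, 30

def fitness(seq):
    """Placeholder: IRGEN would run opt with `seq`, re-embed the IR, and score the embedding."""
    return sum(FLAG_POOL.index(f) for f in seq) / (len(FLAG_POOL) * SEQ_LEN)  # dummy score

def mutate(seq):
    seq = list(seq)
    seq[random.randrange(SEQ_LEN)] = random.choice(FLAG_POOL)
    return seq

def crossover(a, b):
    cut = random.randrange(1, SEQ_LEN)
    return a[:cut] + b[cut:]

population = [[random.choice(FLAG_POOL) for _ in range(SEQ_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]                     # truncation selection
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print("best flag sequence found:", " ".join(best))
```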
Chen, Ligeng, He, Zhongling, Wu, Hao, Xu, Fengyuan, Qian, Yi, Mao, Bing.  2022.  DIComP: Lightweight Data-Driven Inference of Binary Compiler Provenance with High Accuracy. 2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER). :112–122.
Binary analysis is pervasively utilized to assess software security and test vulnerabilities without accessing source code. The validity of the analysis is heavily influenced by the ability to infer information related to the code compilation. Among this compilation information, the compiler type and optimization level, the key factors determining what binaries look like, are still difficult to infer efficiently with existing tools. In this paper, we conduct a thorough empirical study on a binary's appearance under various compilation settings and propose a lightweight binary analysis tool, called DIComP, based on the simplest machine learning methods, to infer the compiler and optimization level via the most relevant features according to our observations. Our comprehensive evaluations demonstrate that DIComP can fully recognize the compiler provenance, and it is effective in inferring optimization levels with up to 90% accuracy. It is also efficient, inferring thousands of binaries at the millisecond level with our lightweight machine learning model (1 MB).
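A lightweight version of this idea (extract cheap features from a binary and feed them to a small classifier that predicts compiler family and optimization level) can be sketched as below. The byte-histogram features, the synthetic "binaries", and the logistic-regression model are stand-ins for illustration, not DIComP's actual feature set or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def byte_histogram(binary_bytes):
    """Normalized 256-bin byte histogram: a crude stand-in for real provenance features."""
    counts = np.bincount(np.frombuffer(binary_bytes, dtype=np.uint8), minlength=256)
    return counts / max(counts.sum(), 1)

# Synthetic "binaries": byte distributions shifted per (compiler, optimization level) class.
rng = np.random.default_rng(3)
labels = ["gcc-O0", "gcc-O2", "clang-O0", "clang-O2"]
X, y = [], []
for cls, label in enumerate(labels):
    for _ in range(100):
        raw = rng.integers(0, 64 + 48 * cls, size=4096, dtype=np.uint8, endpoint=False)
        X.append(byte_histogram(raw.tobytes()))
        y.append(label)

X, y = np.array(X), np.array(y)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("toy provenance accuracy:", round(clf.score(X_te, y_te), 3))
```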
Li, Zhiyu, Zhou, Xiang, Weng, Wenbin.  2022.  Operator Partitioning and Parallel Scheduling Optimization for Deep Learning Compiler. 2022 IEEE 5th International Conference on Automation, Electronics and Electrical Engineering (AUTEEE). :205–211.
TVM (Tensor Virtual Machine) is a deep learning compiler that supports the conversion of machine learning models into TVM IR (intermediate representation) and optimises the generation of high-performance machine code for various hardware platforms. While the traditional approach is to parallelise the loop transformations of operators, in this paper we partition the implementation of operators in the deep learning compiler TVM and schedule them in parallel to derive faster-running operator implementations. An optimisation algorithm for partitioning and parallel scheduling is designed for TVM, where operators such as two-dimensional convolutions are partitioned into multiple smaller implementations and several partitioned operators are run under parallel scheduling; the best partitioning and scheduling decisions are derived by means of performance estimation. To evaluate the effectiveness of the algorithm, multiple instances of the two-dimensional convolution operator, the average pooling operator, the maximum pooling operator, and the ReLU activation operator with different input sizes were tested on a CPU platform, and the experiments showed that the performance of these operators was improved and that they ran faster.
Kudrjavets, Gunnar, Kumar, Aditya, Nagappan, Nachiappan, Rastogi, Ayushi.  2022.  The Unexplored Terrain of Compiler Warnings. 2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP). :283–284.
The authors' industry experiences suggest that compiler warnings, a lightweight version of program analysis, are valuable early bug detection tools. Significant costs are associated with patches and security bulletins for issues that could have been avoided if compiler warnings were addressed. Yet, the industry's attitude towards compiler warnings is mixed. Practices range from silencing all compiler warnings to having a zero-tolerance policy as to any warnings. Current published data indicates that addressing compiler warnings early is beneficial. However, support for this value theory stems from grey literature or is anecdotal. Additional focused research is needed to truly assess the cost-benefit of addressing warnings.
2023-04-27
Rafique, Wajid, Hafid, Abdelhakim Senhaji, Cherkaoui, Soumaya.  2022.  Complementing IoT Services Using Software-Defined Information Centric Networks: A Comprehensive Survey. IEEE Internet of Things Journal. 9:23545–23569.
IoT connects a large number of physical objects with the Internet that capture and exchange real-time information for service provisioning. Traditional network management schemes face challenges to manage vast amounts of network traffic generated by IoT services. Software-defined networking (SDN) and information-centric networking (ICN) are two complementary technologies that could be integrated to solve the challenges of different aspects of IoT service provisioning. ICN offers a clean-slate design to accommodate continuously increasing network traffic by considering content as a network primitive. It provides a novel solution for information propagation and delivery for large-scale IoT services. On the other hand, SDN allocates overall network management responsibilities to a central controller, where network elements act merely as traffic forwarding components. An SDN-enabled network supports ICN without deploying ICN-capable hardware. Therefore, the integration of SDN and ICN provides benefits for large-scale IoT services. This article provides a comprehensive survey on software-defined information-centric Internet of Things (SDIC-IoT) for IoT service provisioning. We present critical enabling technologies of SDIC-IoT, discuss its architecture, and describe its benefits for IoT service provisioning. We elaborate on key IoT service provisioning requirements and discuss how SDIC-IoT supports different aspects of IoT services. We define different taxonomies of SDIC-IoT literature based on various performance parameters. Furthermore, we extensively discuss different use cases, synergies, and advances to realize the SDIC-IoT concept. Finally, we present current challenges and future research directions of IoT service provisioning using SDIC-IoT.
Spliet, Roy, Mullins, Robert D..  2022.  Sim-D: A SIMD Accelerator for Hard Real-Time Systems. IEEE Transactions on Computers. 71:851–865.
Emerging safety-critical systems require high-performance data-parallel architectures and, problematically, ones that can guarantee tight and safe worst-case execution times. Given the complexity of existing architectures like GPUs, it is unlikely that sufficiently accurate models and algorithms for timing analysis will emerge in the foreseeable future. This motivates our work on Sim-D, a clean-slate approach to designing a real-time data-parallel architecture. Sim-D enforces a predictable execution model by isolating compute- and access resources in hardware. The DRAM controller uninterruptedly transfers tiles of data, requested by entire work-groups. This permits work-groups to be executed as a sequence of deterministic access- and compute phases, scheduling phases from up to two work-groups in parallel. Evaluation using a cycle-accurate timing model shows that Sim-D can achieve performance on par with an embedded-grade NVIDIA TK1 GPU under two conditions: applications refrain from using indirect DRAM transfers into large buffers, and Sim-D's scratchpads provide sufficient bandwidth. Sim-D's design facilitates derivation of safe WCET bounds that are tight within 12.7 percent on average, at an additional average performance penalty of ~9.2 percent caused by scheduling restrictions on phases.