Bibliography
Code churn has been successfully used to identify defect-inducing changes in software development. Our recent analysis of cross-release code churn showed that several design metrics exhibit moderate correlation with the number of defects in complex systems. The goal of this paper is to explore whether cross-release code churn can be used to identify critical design changes and contribute to the prediction of defects for software in evolution. In our case study, we used two types of data from consecutive releases of open-source projects, with and without cross-release code churn, to build standard prediction models. The prediction models were trained on earlier releases and tested on the following ones, and their performance was evaluated in terms of AUC, GM and the effort-aware measure Pop. The comparison of their performance was used to answer our research question. The obtained results show that the prediction model performs better when cross-release code churn is included. A practical implication of this research is to use cross-release code churn to aid in safe planning of the next release in software development.
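The cross-release evaluation protocol described in this abstract can be sketched as follows. This is a minimal illustration only: the synthetic data, the choice of logistic regression as the "standard prediction model", and the metric names (wmc, cbo, rfc, churn, defective) are assumptions, not details from the paper.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

def make_release(n, seed):
    # Synthetic stand-in for one release: design metrics, churn and defect labels.
    rng = np.random.default_rng(seed)
    df = pd.DataFrame(rng.uniform(0, 1, (n, 4)), columns=["wmc", "cbo", "rfc", "churn"])
    df["defective"] = (df["churn"] + df["cbo"] + rng.normal(0, 0.3, n) > 1.2).astype(int)
    return df

def evaluate(train, test, use_churn):
    feats = ["wmc", "cbo", "rfc"] + (["churn"] if use_churn else [])
    model = LogisticRegression(max_iter=1000).fit(train[feats], train["defective"])
    score = model.predict_proba(test[feats])[:, 1]
    auc = roc_auc_score(test["defective"], score)
    tn, fp, fn, tp = confusion_matrix(test["defective"], (score > 0.5).astype(int)).ravel()
    gm = np.sqrt(tp / (tp + fn) * tn / (tn + fp))  # geometric mean of recall and specificity
    return auc, gm

# Train on release N, test on release N+1 (cross-release setting).
release_n, release_n1 = make_release(200, 1), make_release(200, 2)
print("with churn   :", evaluate(release_n, release_n1, use_churn=True))
print("without churn:", evaluate(release_n, release_n1, use_churn=False))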
With so much of our daily lives relying on digital devices like personal computers and cell phones, there is a growing demand for code that not only functions properly, but is secure and keeps user data safe. However, ensuring this is not an easy task, and many developers do not have the required skills or resources to ensure their code is secure. Many code analysis tools have been written to find vulnerabilities in newly developed code, but this technology tends to produce many false positives and is still not able to identify all of the problems. Other methods of finding software vulnerabilities automatically are required. This proof-of-concept study applied natural language processing to Java bytecode to locate SQL injection vulnerabilities in a Java program. Preliminary findings show that, due to the high number of terms in the dataset, single decision trees will not produce a suitable model for locating SQL injection vulnerabilities, while random forest structures proved more promising. Still, further work is needed to determine the best classification tool.
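A minimal sketch of the classification setup follows, under stated assumptions: the bytecode has already been disassembled into token streams (shown here as toy javap-style strings), the labels are illustrative, and the TF-IDF vectorizer is one plausible way to turn the terms into features; the paper does not specify these choices.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Illustrative opcode/term streams, one string per method (toy data).
methods = [
    "aload_0 getfield ldc invokevirtual executeQuery astore_1",
    "iload_1 iload_2 iadd ireturn",
    "aload_0 new dup ldc invokespecial invokevirtual executeUpdate",
    "getstatic ldc invokevirtual println return",
]
labels = [1, 0, 1, 0]  # 1 = reaches an SQL sink, 0 = clean (toy labels)

X = TfidfVectorizer(token_pattern=r"\S+").fit_transform(methods)
for clf in (DecisionTreeClassifier(random_state=0),
            RandomForestClassifier(n_estimators=200, random_state=0)):
    clf.fit(X, labels)
    print(type(clf).__name__, "training accuracy:", clf.score(X, labels))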
The evolution of the microelectronics manufacturing industry is characterized by increased complexity, analysis, integration, distribution, data sharing and collaboration, all of which are enabled by the big data explosion. This evolution affords a number of opportunities for improved productivity and quality and reduced cost; however, it also brings a number of risks associated with maintaining the security of data systems. The International Roadmap for Devices and Systems Factory Integration International Focus Team (IRDS FI IFT) determined that a security technology roadmap for the industry is needed to better understand the needs, challenges and potential solutions for security in the microelectronics industry and its supply chain. As a first step in providing this roadmap, the IFT conducted a security survey, soliciting input from users, suppliers and OEMs. Preliminary results indicate that data partitioning with IP protection is the number one topic of concern, with the need for industry-wide standards as the second most important topic. Further, the "fear" of a security breach is considered to be a significant hindrance to Advanced Process Control efforts as well as to the use of cloud-based solutions. The IRDS FI IFT will endeavor to provide components of a security roadmap for the industry in the 2018 FI chapter, leveraging the output of the survey effort combined with follow-up discussions with users and consultations with experts.
Cybersecurity assurance plays an important role in managing trust in smart grid communication systems. In this paper, cybersecurity assurance controls for smart grid communication networks and devices are delineated from the more technical functional controls to provide insights on recent innovative risk-based approaches to cybersecurity assurance in smart grid systems. The cybersecurity assurance control baselining presented in this paper is based on the requirements and guidelines of the new family of IEC 62443 standards on network and systems security of industrial automation and control systems. The paper illustrates how key cybersecurity control baselining and tailoring concepts of the U.S. NIST SP 800-53 can be adopted in a smart grid security architecture. The paper outlines the application of IEC 62443 standards-based security zoning and the assignment of security levels to the zones in smart grid system architectures. To manage trust in the smart grid system architecture, cybersecurity assurance baselining concepts are applied per security impact level. The selection and justification of security assurance controls presented in the paper utilizes the approach common in the Security Technical Implementation Guides (STIGs) of the U.S. Defense Information Systems Agency. As shown in the paper, enhanced granularity for managing trust at both the overall system and subsystem levels of smart grid systems can be achieved by implementing the instructions of CNSSI 1253 of the U.S. Committee on National Security Systems on security categorization and control selection for national security systems.
In the realm of the Internet of Things (IoT), information security is a critical issue. Security standards, including their assessment items, are essential instruments in the evaluation of systems security. However, a key question remains open: "Which test cases are most effective for security assessment?" To create security assessment designs with suitable assessment items, we need to know the security properties and assessment dimensions covered by a standard. We propose an approach for selecting and analyzing security assessment items; its foundations come from a set of assessment heuristics, and it aims to increase the coverage of assessment dimensions and security characteristics in assessment designs. The main contribution of this paper is the definition of a core set of security assessment heuristics. We systematize the security assessment process by means of a conceptual formalization of the security assessment area. Our approach can be applied to security standards to select or prioritize assessment items with respect to 11 security properties and 6 assessment dimensions. The approach is flexible, allowing the inclusion of further dimensions and properties. Our proposal was applied to a well-known security standard (ISO/IEC 27001) and its assessment items were analyzed. The proposal is meant to support: (i) the generation of high-coverage assessment designs, which include security assessment items with assured coverage of the main security characteristics, and (ii) the evaluation of security standards with respect to the coverage of security aspects.
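The coverage-driven selection idea can be illustrated with a small sketch. The property and dimension tags attached to the ISO/IEC 27001 Annex A item identifiers below are purely illustrative, and greedy selection is only one plausible prioritization strategy; neither is taken from the paper.

ITEMS = {
    "A.9.2.1":  {"properties": {"authenticity", "accountability"}, "dimensions": {"organizational"}},
    "A.10.1.1": {"properties": {"confidentiality"},                "dimensions": {"technological"}},
    "A.12.3.1": {"properties": {"availability", "integrity"},      "dimensions": {"technological", "organizational"}},
}

def covered(selected, key):
    # Union of the properties (or dimensions) covered by the already-selected items.
    return set().union(*(ITEMS[i][key] for i in selected)) if selected else set()

# Greedy prioritization: repeatedly pick the item that adds the most
# uncovered security properties and assessment dimensions.
selected = []
while len(selected) < len(ITEMS):
    props, dims = covered(selected, "properties"), covered(selected, "dimensions")
    best = max((i for i in ITEMS if i not in selected),
               key=lambda i: len(ITEMS[i]["properties"] - props) + len(ITEMS[i]["dimensions"] - dims))
    selected.append(best)
print(selected)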
The success and widespread adoption of the Internet of Things (IoT) has increased manyfold over the last few years. Industries, technologists and home users recognise the importance of IoT in their lives. Essentially, IoT has brought a vast industrial revolution and has helped automate many processes within organisations and homes. However, the rapid growth of IoT is also a cause for significant concern. IoT is not only plagued with security, authentication and access control issues, it also does not work as well as it should with the fourth industrial revolution, commonly known as Industry 4.0. The absence of effective regulation and standards, together with weak governance, has led to a continual downward trend in the security of IoT networks and devices, as well as giving rise to a broad range of privacy issues. This paper examines the IoT industry and discusses the urgent need for standardisation, the benefits of governance and the issues affecting the IoT sector due to the absence of regulation. Additionally, through this paper, we introduce an IoT security framework (IoTSFW) for organisations to bridge the current lack of guidelines in the IoT industry. Implementation of the guidelines defined in the proposed framework will assist organisations in achieving security, privacy, sustainability and scalability within their IoT networks.
Since the past century, the digital design industry has performed an outstanding role in the development of electronics. A great variety of designs are developed daily, and these designs must be subjected to high standards of verification to ensure 100% reliability and the achievement of all design requirements. The Universal Verification Methodology (UVM) is the current industry standard for the verification process due to its reusability, scalability, time-efficiency and ability to handle high-level designs. This research proposes a functional verification framework using UVM for an AES encryption module, based on a very detailed and robust verification plan. This document describes the complete verification process as done in the industry for a popular module used in information-security applications in the field of cryptography, defining the basis for future projects. The overall results show the achievement of the high verification standards required in industry applications and highlight the advantages of UVM over SystemVerilog-based functional verification and the direct verification methodologies previously developed for the AES module.
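At the heart of such a framework is a scoreboard-style check: the DUT's output for a stimulus is compared against a golden reference model. The minimal sketch below shows only that comparison, in Python rather than SystemVerilog/UVM, using the AES-128 example vector from FIPS-197 Appendix C; the driver, monitor and scoreboard structure of an actual UVM testbench is not reproduced here.

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY       = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
PLAINTEXT = bytes.fromhex("00112233445566778899aabbccddeeff")
EXPECTED  = bytes.fromhex("69c4e0d86a7b0430d8cdb78070b4c55a")  # FIPS-197 C.1

def golden_model(key: bytes, block: bytes) -> bytes:
    # Software reference encryption of a single 128-bit block.
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def scoreboard_check(dut_output: bytes) -> bool:
    # In a UVM testbench this comparison would live in the scoreboard component.
    return dut_output == golden_model(KEY, PLAINTEXT)

assert golden_model(KEY, PLAINTEXT) == EXPECTED
print(scoreboard_check(EXPECTED))  # True when the DUT matches the reference model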
The design of optimal energy management strategies that trade off consumers' privacy and expected energy cost by using energy storage is studied. The Kullback-Leibler divergence rate is used to assess the privacy risk of unauthorized testing of consumers' behavior. We further show how this design problem can be formulated as a belief-state Markov decision process problem, so that standard tools of the Markov decision process framework can be utilized and the optimal solution can be obtained using Bellman dynamic programming. Finally, we illustrate the privacy enhancement and cost savings with numerical examples.
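The dynamic-programming machinery the abstract refers to can be sketched on a toy finite Markov decision process. This is only an illustration of Bellman value iteration: the random transition kernel and stage costs stand in for the paper's belief-state formulation, and the Kullback-Leibler privacy term is abstracted into the cost array.

import numpy as np

n_states, n_actions, gamma = 4, 2, 0.95
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s'] transition kernel
cost = rng.uniform(0, 1, size=(n_states, n_actions))              # stand-in for energy cost + privacy penalty

V = np.zeros(n_states)
for _ in range(500):                  # Bellman value iteration
    Q = cost + gamma * P @ V          # Q[s, a] = c(s, a) + gamma * E[V(s')]
    V = Q.min(axis=1)
policy = Q.argmin(axis=1)             # greedy (cost-minimizing) policy
print(V, policy)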
Industrial control systems are fundamental infrastructure of a country. Since intrusion attack methods against industrial control systems have become complex and concealed, traditional protection methods, such as vulnerability databases, virus databases and rule matching, cannot cope with attacks hidden inside the terminals of industrial control systems. In this work, we propose a control flow anomaly detection algorithm based on the control flow of the business programs. First, a basic group partition method based on key paths is proposed to reduce the performance burden caused by the tabbed-assert control flow analysis method by expanding the basic units of analysis. Second, the algorithm phases of standard path set acquisition and path matching are introduced. By judging whether the current control flow path deviates from the standard path set, abnormal operating conditions of industrial control systems can be detected. Finally, the effectiveness of the control flow anomaly detection (checking) algorithm based on path matching (CFCPM) is demonstrated by anomaly detection ability analysis and experiments.
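The path-matching phase can be illustrated with a minimal sketch. The representation of a control flow path as a tuple of basic-group identifiers and the example paths are assumptions for illustration; in the paper the standard set would be acquired from known-good executions of the business program.

# Standard path set acquired from known-good executions (illustrative identifiers).
STANDARD_PATHS = {
    ("entry", "read_sensor", "check_range", "write_actuator", "exit"),
    ("entry", "read_sensor", "alarm", "exit"),
}

def is_anomalous(observed_path):
    # A control flow path that matches no standard path is flagged as abnormal.
    return tuple(observed_path) not in STANDARD_PATHS

print(is_anomalous(["entry", "read_sensor", "check_range", "write_actuator", "exit"]))  # False: normal run
print(is_anomalous(["entry", "read_sensor", "exfiltrate", "exit"]))                     # True: deviating path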
To combat cyber crimes, electronic evidence has played an increasing role, but in judicial practice electronic evidence is not widely applied because of the natural contradiction between the epistemic uncertainty of electronic evidence and the judge's principle of discretionary evidence in court. In this paper, we put forward a layer-built method to analyze the relevancy of electronic evidence and discuss the analytical process in combination with a case study. Initial practice shows that the model is feasible and has consulting value in analyzing the relevancy of electronic evidence.
This paper presents the theoretical background of main research activity focused on the evaluation of wiping/erasure standards, which are mostly implemented in specific software products developed and programmed for data wiping. The information saved in storage devices often includes metadata and trace data. These kinds of data in particular are very important in the process of forensic analysis because they sometimes contain information about interconnections with other files. Most people save their sensitive information on local storage devices and later want to securely erase these files, but this operation is usually problematic. Secure file destruction is one of many anti-forensics methods. The outcome of this paper is to define future research activities focused on the establishment of a suitable digital environment. This environment will be prepared for testing and evaluating selected wiping standards and appropriate eraser software.
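The kind of overwrite pattern that wiping standards prescribe can be sketched briefly. This is a simplified illustration assuming a three-pass zeros/ones/random pattern; it is not any specific standard, and a simple file overwrite does not reach file-system metadata, journals or wear-levelled flash, which is exactly the gap the evaluation above is concerned with.

import os

def overwrite_file(path, passes=(b"\x00", b"\xff", None)):
    # Overwrite the file contents in place, pass by pass, then unlink it.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for pattern in passes:
            f.seek(0)
            data = os.urandom(size) if pattern is None else pattern * size
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # force the pass onto the storage device
    os.remove(path)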
The paper deals with the implementation aspects of the bilinear pairing operation over an elliptic curve on constrained devices, such as smart cards, embedded devices and smart meters. Although cryptographic constructions, such as group signatures, anonymous credentials or identity-based encryption schemes, often rely on the pairing operation, the implementation of such schemes in practical applications is not straightforward; in fact, it may become very difficult. In this paper, we show that the implementation is difficult not only due to the high computational complexity, but also due to the lack of cryptographic libraries and programming interfaces. In particular, we show how difficult it is to implement pairing-based schemes on constrained devices and present the performance of various libraries on different platforms. Furthermore, we show performance estimates for a fundamental cryptographic construction, group signatures. The purpose of this paper is to reduce the gap between cryptographic designers and developers and to give performance results that can be used to estimate the implementability and performance of novel, upcoming schemes.
We survey elliptic curve implementations from several vantage points. We perform internet-wide scans for TLS on a large number of ports, as well as for SSH and IPsec, to measure elliptic curve support and implementation behaviors, and collect passive measurements of client curve support for TLS. We also perform active measurements to estimate server vulnerability to known attacks against elliptic curve implementations, including support for weak curves, invalid curve attacks, and curve twist attacks. We estimate that 1.53% of HTTPS hosts, 0.04% of SSH hosts, and 4.04% of IKEv2 hosts that support elliptic curves do not perform curve validity checks as specified in elliptic curve standards. We describe how such vulnerabilities could be used to construct an elliptic curve parameter downgrade attack against TLS, called CurveSwap, and observe that the combinations of weak behaviors we examined do not appear to enable a feasible CurveSwap attack in the wild. We also analyze source code for elliptic curve implementations, finding that a number of libraries fail to perform point validation for JSON Web Encryption, and uncover a flaw in the Java and NSS multiplication algorithms.
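For context, the curve validity check whose absence the scans measure amounts to verifying that a received point actually lies on the negotiated curve. The sketch below shows that check for NIST P-256; a complete implementation would additionally reject the point at infinity and check the point's order, and twist-secure handling is a separate concern.

# NIST P-256 domain parameters: y^2 = x^3 + ax + b over GF(p).
P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
A = -3 % P
B = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

def on_curve(x, y):
    # Reject peer public keys whose coordinates do not satisfy the curve equation.
    return 0 <= x < P and 0 <= y < P and (y * y - (x * x * x + A * x + B)) % P == 0

Gx = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296
Gy = 0x4fe342e2fe1d7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5
print(on_curve(Gx, Gy))      # True: the standard base point
print(on_curve(Gx, Gy + 1))  # False: an invalid point an attacker might send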
Traceability has grown from being a specialized need for certain safety-critical segments of the industry to being a recognized value-add tool for the industry as a whole, one that can be utilized for manual and automated processes end to end throughout the supply chain. The perception persists that traceability data collection is a burden that provides value only when the most rare and disastrous of events take place. Disparate standards, mainly dictated by large OEM companies in the market, have evolved in the industry and create confusion as a multitude of requirements and definitions proliferate. The intent of the IPC-1782 project is to bring the whole principle of traceability up to date and enable business to move faster, increase revenue, increase productivity, and decrease costs as a result of increased trust. Traceability, as defined in this standard, will represent the most effective quality tool available, becoming an intrinsic part of best-practice operations, with the encouragement of automated data collection from existing manufacturing systems, which works well with Industry 4.0. It integrates quality, reliability, product safety, predictive (routine, preventative, and corrective) maintenance, throughput, manufacturing, engineering and supply-chain data, reducing cost of ownership as well as ensuring timeliness and accuracy all the way from a finished product back through to the initial materials and granular attributes about the processes along the way. The goal of this standard is to create a single expandable and extendable data structure that can be adopted for all levels of traceability and enable information to be easily exchanged, as appropriate, across many industries. The scope includes support for the most demanding instances of detail and integrity, such as those required by critical safety systems, all the way through to situations where only basic traceability, such as for simple consumer products, is required. A key driver for the adoption of the standard is the ability to find a relevant and achievable level of traceability that exactly meets the requirement following a risk assessment of the business. The wealth of data accessible from traceability for analysis (e.g., big data) can easily and quickly yield information that can raise expectations of very significant quality and performance improvements, as well as providing the necessary protection against the costs of issues in the market and providing very timely information to regulatory bodies, consumers and customers as appropriate. This information can also be used to quickly raise yields, drive product innovation that resonates with consumers, and help drive development tests and design requirements that are meaningful to the marketplace. Leveraging IPC-1782 can thus create the best value of component traceability for a business.
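A nested, extendable traceability record of the kind described above might look like the following sketch. The field names and nesting are illustrative assumptions, not the IPC-1782 schema, and the depth of detail recorded would vary with the traceability level chosen after risk assessment.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Material:
    part_number: str
    lot: str
    supplier: str

@dataclass
class ProcessStep:
    station: str
    timestamp: str
    parameters: Dict[str, float] = field(default_factory=dict)   # process attributes
    materials: List[Material] = field(default_factory=list)      # materials consumed at this step

@dataclass
class ProductRecord:
    serial_number: str
    steps: List[ProcessStep] = field(default_factory=list)       # finished product back to initial materials

unit = ProductRecord("SN-0001", steps=[
    ProcessStep("reflow-1", "2018-06-01T08:30:00Z",
                parameters={"peak_temp_c": 245.0},
                materials=[Material("PN-123", "LOT-77", "ACME")]),
])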
In this paper, inspired by Gatys's recent work, we propose a novel approach that transforms photos into comics using deep convolutional neural networks (CNNs). While Gatys's method, which uses a pre-trained VGG network, generally works well for transferring artistic styles such as painting from a style image to a content image, for more minimalist styles such as comics the method often fails to produce satisfactory results. To address this, we further introduce a dedicated comic style CNN, which is trained for classifying comic images and photos. This new network is effective in capturing various comic styles and thus helps to produce better comic stylization results. Even with a grayscale style image, Gatys's method can still produce colored output, which is not desirable for comics. We develop a modified optimization framework such that a grayscale image is guaranteed to be synthesized. To avoid converging to poor local minima, we further initialize the output image with the grayscale version of the content image. Various examples show that our method synthesizes better comic images than the state-of-the-art method.
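One way to guarantee a grayscale result is to optimize a single-channel image and replicate it to three channels before feeding the loss networks, initialized from the grayscale content image. The sketch below shows only that constraint mechanics; style_content_loss is a hypothetical placeholder standing in for the combined VGG and comic-classification losses, which are not reproduced here.

import torch

def to_grayscale(img):  # img: (1, 3, H, W) in [0, 1]
    w = torch.tensor([0.299, 0.587, 0.114]).view(1, 3, 1, 1)
    return (img * w).sum(dim=1, keepdim=True)

content = torch.rand(1, 3, 256, 256)  # stand-in for the content photo

# Optimize a single-channel image so the synthesized result is grayscale by
# construction; initialize it from the grayscale content image to avoid poor minima.
x = to_grayscale(content).clone().requires_grad_(True)
opt = torch.optim.LBFGS([x])

def style_content_loss(rgb):
    # Hypothetical placeholder for the combined style/content/comic losses.
    return ((rgb - content) ** 2).mean()

def closure():
    opt.zero_grad()
    rgb = x.repeat(1, 3, 1, 1)  # feed the loss networks a replicated 3-channel image
    loss = style_content_loss(rgb)
    loss.backward()
    return loss

for _ in range(5):
    opt.step(closure)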
With the rise in worth and popularity of cryptocurrencies, a new opportunity for criminal gain is being exploited, with little currently offered in the way of defence. The cost of mining (i.e., earning cryptocurrency through the CPU-intensive calculations that underpin blockchain technology) can be prohibitively expensive, with hardware costs and electrical overheads previously resulting in a loss compared to the cryptocurrency gained. Off-loading these costs onto a distributed network of machines via malware offers an instantly profitable scenario, though standard anti-virus (AV) products offer some defences against file-based threats. However, newer fileless malicious attacks, occurring through the browser on seemingly legitimate websites, can easily evade detection and surreptitiously engage the victim machine in computationally expensive cryptomining (cryptojacking). With no current academic literature on the dynamic opcode analysis of cryptomining, to the best of our knowledge, we present the first such experimental study. Indeed, this is the first such work presenting opcode analysis on non-executable files. Our results show that browser-based cryptomining within our dataset can be detected by dynamic opcode analysis, with accuracies of up to 100%. Further to this, our model can distinguish between cryptomining sites, weaponized benign sites, de-weaponized cryptomining sites and real-world benign sites. As it is process-based, our technique offers an opportunity to rapidly detect, prevent and mitigate such attacks, a novel contribution which should encourage further future work.
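A minimal sketch of a dynamic-opcode classification pipeline follows, assuming opcode traces have already been captured per browsing session; the toy traces, labels and the choice of opcode n-gram counts with a random forest are illustrative assumptions rather than the paper's exact feature set or classifier.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Illustrative per-session opcode traces (captured dynamically while a page runs).
traces = [
    "mov add cmp jne mov mul mul add",    # cryptomining-like session
    "mov push call ret mov pop cmp je",   # benign browsing session
] * 10
labels = [1, 0] * 10                      # 1 = cryptojacking, 0 = benign

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+"),  # opcode uni/bi-gram counts
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(traces, labels)
print(model.predict(["mov mul mul add add cmp jne mov"]))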
The concept of cyber-physical production systems (CPPS) is highly discussed amongst researchers and industry experts; however, the implementation options for these systems rely mainly on obsolete technologies. Despite the fact that the blockchain is most often associated with cryptocurrency, it is fundamentally wrong to deny the universality of this technology and the prospects for its application in other industries, for example in the insurance sector or in a number of identity verification services. This article discusses the deployment of a CPPS backbone network based on the Ethereum private blockchain system. The structure of the network is described, as well as how its components interact through smart contracts, based on the consumption of cryptocurrency for various operations.
Hadoop was developed as a distributed data processing platform for analyzing big data. Enterprises can use Hadoop to analyze big data containing users' sensitive information and utilize the results for their marketing. Therefore, research on data encryption has been widely conducted to protect against the leakage of sensitive data stored in Hadoop. However, existing work supports only the AES international-standard data encryption algorithm. Meanwhile, the Korean government selected the ARIA algorithm as a standard data encryption scheme for domestic use. In this paper, we propose an HDFS data encryption scheme which supports both the ARIA and AES algorithms on Hadoop. First, the proposed scheme provides an HDFS block-splitting component that performs ARIA/AES encryption and decryption in the Hadoop distributed computing environment. Second, the proposed scheme provides a variable-length data processing component that can perform encryption and decryption by adding dummy data in cases where the last data block does not contain a full 128 bits of data. Finally, we show through performance analysis that our proposed scheme is efficient for various applications, such as word counting, sorting, k-means, and hierarchical clustering.
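The split-then-pad idea can be sketched as follows, under stated assumptions: AES in ECB mode on 16-byte blocks is used purely to keep the example short (the paper's scheme also supports ARIA), "dummy data" is zero padding here, and the HDFS block size is shrunk from its real 128 MB default for illustration.

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

HDFS_BLOCK = 64   # real HDFS blocks are 128 MB; shrunk for the example
AES_BLOCK = 16    # 128 bits, the cipher block size

def encrypt_stream(data: bytes, key: bytes):
    # Split the stream into HDFS-sized blocks, pad a short last block with
    # dummy bytes to a 128-bit multiple, and encrypt each block independently.
    encrypted_blocks = []
    for off in range(0, len(data), HDFS_BLOCK):
        chunk = data[off:off + HDFS_BLOCK]
        pad = (-len(chunk)) % AES_BLOCK
        chunk += b"\x00" * pad
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        encrypted_blocks.append((enc.update(chunk) + enc.finalize(), pad))
    return encrypted_blocks

blocks = encrypt_stream(b"sensitive user records ...", key=bytes(32))
print([(len(ct), pad) for ct, pad in blocks])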