Biblio
Code optimization is an essential feature of compilers, and almost all software products are built with compiler optimizations enabled. Consequently, bugs in code optimization inevitably have a significant impact on the correctness of software systems. Locating optimization bugs in compilers is challenging because compilers typically support a large number of optimization configurations. Although prior studies have proposed locating compiler bugs by generating witness test programs, they are still time-consuming and not effective enough. To address these limitations, we propose ODFL, an automatic approach for locating compiler optimization bugs by differentiating finer-grained options. Specifically, we first disable, one at a time, the fine-grained options that are enabled by default under the bug-triggering optimization levels to identify bug-free and bug-related fine-grained options. We then configure several effective passing and failing optimization sequences based on these fine-grained options to collect multiple failing and passing compiler coverage profiles. Finally, the collected coverage is fed to Spectrum-Based Fault Localization formulae to rank suspicious compiler files. We run ODFL on 60 buggy GCC compilers from an existing benchmark. The experimental results show that ODFL significantly outperforms RecBi, the state-of-the-art compiler bug isolation approach, on all evaluated metrics, demonstrating its effectiveness. In addition, ODFL is much more efficient than RecBi, saving more than 88% of bug-localization time on average.
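A minimal sketch of the Spectrum-Based Fault Localization step described above, assuming coverage is recorded as the set of optimization-sequence runs that exercise each compiler source file. The Ochiai formula shown here is one common SBFL formula; the abstract does not specify which formulae ODFL uses, and the file names and run labels are hypothetical.

```python
# Illustrative SBFL scoring: rank compiler files by Ochiai suspiciousness
# computed from which failing/passing runs covered each file.
import math

def ochiai_ranking(coverage, failing_runs, passing_runs):
    """Rank files by Ochiai suspiciousness from run coverage sets."""
    total_failing = len(failing_runs)
    scores = {}
    for file, covered_by in coverage.items():
        ef = len(covered_by & failing_runs)   # failing runs covering the file
        ep = len(covered_by & passing_runs)   # passing runs covering the file
        denom = math.sqrt(total_failing * (ef + ep))
        scores[file] = ef / denom if denom else 0.0
    # Higher score = more suspicious compiler file.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical example: two failing and two passing optimization sequences.
coverage = {
    "tree-ssa-pre.c": {"fail1", "fail2", "pass1"},
    "gimple-fold.c":  {"pass1", "pass2"},
}
print(ochiai_ranking(coverage, {"fail1", "fail2"}, {"pass1", "pass2"}))
```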
Direct-access attacks were initially considered unrealistic threats in cyber security because an attacker could more easily mount non-computerized attacks, such as cutting a brake line. In recent years, research into direct-access attacks has been conducted, especially in the automotive field, for example on attack methods that make an ECU stop functioning via the CAN bus. The problem with existing risk quantification methods is that direct-access attacks are not recognized as serious threats. To solve this problem, we propose a new risk quantification method that applies vulnerability evaluation criteria and sets dedicated metrics. We also confirm that direct-access attacks not recognized by conventional methods can be evaluated appropriately, using an automotive system as a case study of a cyber-physical system.
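A purely illustrative sketch of the underlying idea of re-weighting an attack-vector criterion so that physically mounted (direct-access) attacks are not automatically scored as negligible. All field names and weights below are hypothetical and are not the metrics proposed in the paper.

```python
# CVSS-style weighted risk score with a revised weight for physical
# (direct-access) attack vectors. Weights and names are hypothetical.
ATTACK_VECTOR_WEIGHT = {
    "network": 0.85,
    "adjacent": 0.62,
    "local": 0.55,
    "physical_conventional": 0.20,  # conventional view: direct access ~ low risk
    "physical_revised": 0.60,       # revised view: CAN-bus attacks taken seriously
}

def risk_score(attack_vector, impact, exploitability):
    """Combine impact (0-10) and exploitability (0-1) with an attack-vector weight."""
    return round(ATTACK_VECTOR_WEIGHT[attack_vector] * impact * exploitability, 2)

# ECU shutdown over the CAN bus: same impact/exploitability, different weighting.
print(risk_score("physical_conventional", impact=9.0, exploitability=0.7))
print(risk_score("physical_revised", impact=9.0, exploitability=0.7))
```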
Finite-state machines (FSMs) are widely used as the control units of most digital designs. Many intellectual property protection and obfuscation techniques leverage the exponential number of possible states and state transitions of a large FSM to secure a physical design, on the premise that it is challenging to retrieve the FSM from its downstream design or physical implementation without knowledge of the design. In this paper, we postulate that this assumption may not hold against big data analytics. We demonstrate this by applying a data mining technique to a sufficiently large amount of data collected from a full scan design to identify its FSM state registers. An impact metric is introduced to discriminate FSM state registers from other registers. A decision tree algorithm is constructed from the scan data for regression analysis of the dependency of other registers on a chosen register to deduce its impact. Registers with greater impact are more likely to be FSM state registers. The proposed scheme is applied to several complex designs from OpenCores. The experimental results show that our scheme correctly identifies most FSM state registers with a high hit rate for a large majority of the designs.
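A minimal sketch of the register-impact idea, assuming the scan data is a 2D array of register values captured over consecutive scan-out cycles (rows = cycles, columns = registers). For a chosen register, a decision tree is fitted to predict each other register's next-cycle value, and the candidate's impact aggregates how well those values are explained. The exact metric below is a simplified stand-in, not the paper's formulation.

```python
# Register-impact sketch: registers whose values best explain the next-cycle
# values of other registers rank higher and are more likely FSM state registers.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def register_impact(scan_data, candidate):
    """Impact of `candidate`: how well it explains other registers' next-cycle values."""
    current, nxt = scan_data[:-1], scan_data[1:]
    x = current[:, [candidate]]                 # chosen register as sole predictor
    impact = 0.0
    for other in range(scan_data.shape[1]):
        if other == candidate:
            continue
        tree = DecisionTreeClassifier(max_depth=3).fit(x, nxt[:, other])
        impact += tree.score(x, nxt[:, other])  # higher score = stronger dependency
    return impact

# Hypothetical 8-register scan dump; rank registers by impact.
rng = np.random.default_rng(0)
scan = rng.integers(0, 2, size=(200, 8))
ranking = sorted(range(8), key=lambda r: register_impact(scan, r), reverse=True)
print(ranking)
```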
Living in the age of digital transformation, companies and individuals are moving to public and private clouds to store and retrieve information; hence the need to store and retrieve data is growing exponentially. Existing storage technologies such as DAS face a serious challenge in dealing with this huge amount of data, so newer technologies should be adopted. A Storage Area Network (SAN) is a distributed storage technology that aggregates data from several private nodes into a centralized, secure place. Looking at SAN from a security perspective, physical security over multiple geographically remote locations is clearly not adequate to ensure a full security solution, so a SAN security framework needs to be designed and developed. This work investigates how the SAN protocols (FC, iSCSI, FCoE) operate. It also examines other storage technologies, such as Network Attached Storage (NAS) and Direct Attached Storage (DAS), against metrics including IOPS (input/output operations per second), throughput, bandwidth, latency, and caching technologies. This research focuses on the security vulnerabilities of SAN, listing the attacks on SAN protocols and comparing them with those on NAS and DAS. Another aspect of this work is to highlight the performance factors of SAN in order to improve performance while evaluating security solutions aimed at enhancing the security level of SAN.
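A small sketch of the textbook relationships among the performance metrics named above: IOPS is roughly the number of outstanding I/Os divided by latency, and throughput is roughly IOPS times block size. The numbers are hypothetical back-of-envelope values, not measurements from this work.

```python
# Back-of-envelope storage metrics: IOPS ~= queue_depth / latency,
# throughput ~= IOPS * block_size. Values are hypothetical.
def iops(latency_ms, queue_depth=1):
    return queue_depth / (latency_ms / 1000.0)

def throughput_mb_s(iops_value, block_size_kb):
    return iops_value * block_size_kb / 1024.0

# Example: an iSCSI SAN volume with 0.5 ms latency, queue depth 32, 8 KB blocks.
ops = iops(latency_ms=0.5, queue_depth=32)
print(f"{ops:.0f} IOPS, {throughput_mb_s(ops, block_size_kb=8):.1f} MB/s")
```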
In this work, we use a subjective approach to compute cyber resilience metrics for industrial control systems (ICS). We utilize the extended form of the R4 resilience framework and span the metrics over the physical, technical, and organizational domains of resilience. We develop a qualitative cyber resilience assessment tool using this framework and a subjective questionnaire method. We ensure that the questionnaires are realistic, balanced, and pertinent to ICS by involving subject matter experts in the process and by following security guidelines and standard practices. We provide a detailed mathematical explanation of the resilience computation procedure. We discuss several uses of the qualitative tool by generating simulation results, and we provide the system architecture of the simulation engine and the validation of the tool. We believe the qualitative simulation tool gives useful insights for the overall resilience assessment and security analysis of industrial control systems.
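A hypothetical sketch of how subjective questionnaire answers can be aggregated into per-domain and overall resilience scores. The domain names follow the abstract; the questions, Likert scale, weights, and aggregation rule are illustrative assumptions, not the tool's actual questionnaire or computation.

```python
# Aggregate 1-5 Likert answers into 0-1 resilience scores per domain and overall.
# Questions, weights, and scales are hypothetical.
answers = {
    "physical":       {"redundant power": 4, "physical access control": 3},
    "technical":      {"network segmentation": 2, "backup and restore": 5},
    "organizational": {"incident response plan": 3, "security training": 4},
}
domain_weights = {"physical": 0.3, "technical": 0.4, "organizational": 0.3}

def domain_score(scores):
    """Normalize the mean Likert answer (1-5) to a 0-1 resilience score."""
    return (sum(scores.values()) / len(scores) - 1) / 4

overall = sum(domain_weights[d] * domain_score(a) for d, a in answers.items())
for d, a in answers.items():
    print(f"{d:>14}: {domain_score(a):.2f}")
print(f"{'overall':>14}: {overall:.2f}")
```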
Software Defined Networking (SDN) provides new functionalities to manage network traffic efficiently, which can be used to enhance networking capabilities and support today's growing communication demands. At the same time, it introduces new attack vectors that can be exploited by attackers. Hence, evaluating and selecting countermeasures to optimize the security of the SDN is of paramount importance. However, one should also take into account the trade-off between the security and the performance of the SDN. In this paper, we present a security optimization approach for SDN that takes this trade-off into account. We evaluate the security of the SDN using graphical security models and metrics, and we use queuing models to measure its performance. Further, we use a genetic algorithm, namely NSGA-II, to optimally select countermeasures under performance and security constraints. Our experimental analysis shows that the proposed approach can efficiently compute countermeasures that optimize the security of the SDN while satisfying the performance constraints.
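The abstract names NSGA-II and queuing models; the sketch below is a simplified stand-in that exhaustively enumerates countermeasure subsets, scores residual risk against an M/M/1 mean response time at the controller, and keeps the Pareto front. The countermeasure names, rates, and risk reductions are hypothetical, and a real deployment would use a proper NSGA-II implementation rather than enumeration.

```python
# Simplified Pareto search over countermeasure subsets: minimize residual risk
# (security objective) and M/M/1 mean response time (performance objective).
from itertools import combinations

# (name, risk reduction, extra processing load in requests/s) -- hypothetical
countermeasures = [("firewall rules", 0.30, 80), ("IDS", 0.40, 150), ("patching", 0.25, 40)]
BASE_RISK, SERVICE_RATE, BASE_LOAD = 1.0, 1000.0, 600.0  # requests/s

def evaluate(subset):
    risk, load = BASE_RISK, BASE_LOAD
    for _, reduction, extra in subset:
        risk *= (1 - reduction)
        load += extra
    if load >= SERVICE_RATE:                  # queue becomes unstable
        return risk, float("inf")
    return risk, 1.0 / (SERVICE_RATE - load)  # M/M/1 mean response time

solutions = []
for k in range(len(countermeasures) + 1):
    for subset in combinations(countermeasures, k):
        solutions.append((tuple(n for n, _, _ in subset), *evaluate(subset)))

# Keep non-dominated solutions (the security/performance Pareto front).
pareto = [s for s in solutions
          if not any(o[1] <= s[1] and o[2] <= s[2] and o != s for o in solutions)]
for names, risk, delay in sorted(pareto, key=lambda s: s[1]):
    print(f"{names}: residual risk {risk:.2f}, mean response time {delay*1000:.2f} ms")
```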
An improved algorithm of the Analytic Hierarchy Process (AHP) is proposed in this paper, realized by constructing an improved judgment matrix. Specifically, rough set theory is used to calculate the weights of the network metric data, the improved AHP nine-point scale system is then structured on top of these weights, and finally an improved AHP judgment matrix is constructed. By performing the AHP operation on the improved judgment matrix, the improved weights of the network metric data can be obtained. If only rough set theory were applied to process the network index data, objective factors would dominate the whole process; if the improved AHP algorithm is used to integrate expert scores into the measurement process, a combination of subjective and objective factors can be realized. Based on this theory, a new network attack metrics system is proposed, which uses a metric structure of "attack type - attack attribute - attack atomic operation - attack metrics", in which the measurement of attack attributes adopts AHP. The metrics of the system are comprehensive, and their judgment of frequent attacks is broadly applicable. The system was verified by an experiment on the common Smurf attack, and the experimental results show the effectiveness and applicability of the proposed measurement system.
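A minimal sketch of the standard AHP weight computation on a pairwise judgment matrix built with Saaty's nine-point scale: the priority weights come from the principal eigenvector, and a consistency ratio checks the judgments. The 3x3 matrix below is a hypothetical example of comparing attack attributes, not the paper's improved judgment matrix.

```python
# Standard AHP on a pairwise judgment matrix: principal-eigenvector weights
# plus a consistency ratio. The matrix values are hypothetical.
import numpy as np

# Pairwise comparison of three hypothetical attack attributes (Saaty 1-9 scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalized priority weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)     # consistency index
ri = 0.58                                # Saaty's random index for n = 3
print("weights:", np.round(weights, 3), "CR:", round(ci / ri, 3))
```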