Bibliography
Code optimization is an essential feature of compilers, and almost all software products are released with compiler optimizations enabled. Consequently, bugs in code optimization inevitably have a significant impact on the correctness of software systems. Locating optimization bugs in compilers is challenging because compilers typically support a large number of optimization configurations. Although prior studies have proposed to locate compiler bugs by generating witness test programs, they remain time-consuming and not effective enough. To address these limitations, this study proposes ODFL, an automatic approach for locating compiler optimization bugs by differentiating finer-grained options. Specifically, we first independently disable the fine-grained options that are enabled by default under the bug-triggering optimization levels to obtain bug-free and bug-related fine-grained options. We then configure several effective passing and failing optimization sequences based on these fine-grained options to collect coverage from multiple failing and passing compiler runs. Finally, the collected coverage is fed to Spectrum-Based Fault Localization (SBFL) formulae to rank the suspicious compiler files. We ran ODFL on 60 buggy GCC compilers from an existing benchmark. The experimental results show that ODFL significantly outperforms RecBi, the state-of-the-art compiler bug isolation approach, on all evaluated metrics, demonstrating its effectiveness. In addition, ODFL is much more efficient than RecBi, saving more than 88% of the bug-localization time on average.
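The abstract does not specify which SBFL formula ODFL uses; as a minimal sketch, the snippet below ranks compiler source files with the classic Ochiai formula, assuming each compiler run is summarized as the set of files it covered (the file names and coverage sets are hypothetical):

```python
import math

def ochiai(cov_fail, cov_pass, files):
    """Rank files by the Ochiai SBFL formula from per-run coverage sets."""
    n_fail = len(cov_fail)
    scores = {}
    for f in files:
        ef = sum(f in run for run in cov_fail)  # failing runs covering f
        ep = sum(f in run for run in cov_pass)  # passing runs covering f
        denom = math.sqrt(n_fail * (ef + ep))
        scores[f] = ef / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical coverage: each run is the set of compiler files it executed.
cov_fail = [{"gimple-ssa.c", "tree-vrp.c"}, {"tree-vrp.c"}]
cov_pass = [{"gimple-ssa.c"}, {"gimple-ssa.c", "cfgloop.c"}]
print(ochiai(cov_fail, cov_pass, {"gimple-ssa.c", "tree-vrp.c", "cfgloop.c"}))
```

Files covered mostly by failing runs (here tree-vrp.c, with score 1.0) rise to the top of the suspiciousness ranking.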
Autonomous vehicles (AVs) make driving decisions autonomously using multiple sensors and complex autonomous driving (AD) software. However, AVs introduce numerous unique security challenges that can lead to safety consequences on the road. Security mechanisms require a benchmark suite and an evaluation framework to generate comparable results. Unfortunately, AVs lack a proper benchmarking framework to evaluate attack and defense mechanisms and quantify safety measures. This paper introduces BenchAV, a security benchmark suite and evaluation framework for AVs that addresses current limitations and pressing challenges of AD security. The benchmark suite contains 12 security and performance metrics, and the evaluation framework automates the metric collection process using the CARLA simulator and the Robot Operating System (ROS).
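BenchAV's own tooling is not shown in the abstract; as a rough sketch of what automated metric collection against CARLA could look like, the following uses CARLA's public Python API to spawn a vehicle and count collision events as a simple safety metric (the host, port, and blueprint choices are assumptions, and the ROS side is omitted):

```python
import time
import carla  # CARLA simulator Python client

# Connect to a locally running CARLA server (host/port are assumptions).
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Spawn an autopilot vehicle at the first available spawn point.
bp_lib = world.get_blueprint_library()
vehicle_bp = bp_lib.filter("vehicle.*")[0]
vehicle = world.spawn_actor(vehicle_bp, world.get_map().get_spawn_points()[0])
vehicle.set_autopilot(True)

# Attach a collision sensor and record every collision event.
collision_bp = bp_lib.find("sensor.other.collision")
sensor = world.spawn_actor(collision_bp, carla.Transform(), attach_to=vehicle)
collisions = []
sensor.listen(lambda event: collisions.append(event.other_actor.type_id))

time.sleep(30)  # let the scenario run for a fixed observation window
print(f"collisions observed: {len(collisions)}", collisions)

sensor.destroy()
vehicle.destroy()
```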
This paper introduces IronMask, a new versatile verification tool for masking security. IronMask is the first to offer verification of standard simulation-based security notions in the probing model as well as recent composition and expandability notions in the random probing model. It supports any masking gadgets with linear randomness (e.g., addition, copy, and refresh gadgets) as well as quadratic gadgets (e.g., multiplication gadgets) that might include non-linear randomness (e.g., by refreshing their inputs), while providing complete verification results for both types of gadgets. We achieve this complete verifiability by introducing a new algebraic characterization for such quadratic gadgets and exhibiting a complete method to determine the sets of input shares that are necessary and sufficient to perform a perfect simulation of any set of probes. We report various benchmarks which show that IronMask is competitive with state-of-the-art verification tools in the probing model (maskVerif, scVerif, SILVER, matverif). IronMask is also several orders of magnitude faster than VRAPS, the only previous tool verifying random probing composability and expandability, as well as SILVER, the only previous tool providing complete verification for quadratic gadgets with non-linear randomness. Thanks to this completeness and increased performance, we obtain better bounds for the tolerated leakage probability of state-of-the-art random probing secure compilers.
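For intuition only (IronMask's algebraic characterization is far more general and efficient), here is a toy probing-model check: it exhaustively verifies that every single wire of a 2-share refresh gadget is distributed independently of the secret, i.e., that the gadget is 1-probing secure:

```python
from itertools import product
from collections import Counter

def wire_values(x, x0, r):
    """All wires of a 2-share refresh gadget: shares (x0, x1), fresh
    randomness r, and refreshed output shares (y0, y1) = (x0^r, x1^r)."""
    x1 = x ^ x0
    return {"x0": x0, "x1": x1, "r": r, "y0": x0 ^ r, "y1": x1 ^ r}

def distribution(x, wire):
    """Exact distribution of one probed wire over a uniform sharing of
    the secret x and uniform randomness r."""
    counts = Counter()
    for x0, r in product((0, 1), repeat=2):
        counts[wire_values(x, x0, r)[wire]] += 1
    return counts

# 1-probing security: each single wire's distribution must be identical
# for secret x = 0 and x = 1, so one probe reveals nothing about x.
for wire in ("x0", "x1", "r", "y0", "y1"):
    assert distribution(0, wire) == distribution(1, wire)
print("refresh gadget is 1-probing secure (toy check)")
```

Real tools like IronMask reason symbolically about which sets of probes can be perfectly simulated from subsets of input shares, rather than enumerating value distributions as this toy does.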