Biblio

Filters: Keyword is coverage
2021-03-15
Simon, L., Verma, A.  2020.  Improving Fuzzing through Controlled Compilation. 2020 IEEE European Symposium on Security and Privacy (EuroS&P). :34–52.

We observe that operations performed by standard compilers harm fuzzing: the optimizations and the Intermediate Representation (IR) lead to transformations that improve execution speed at the expense of fuzzing. To remedy this problem, we propose "controlled compilation", a set of techniques to automatically refactor a program's source code and cherry-pick beneficial compiler optimizations to improve fuzzing. We design, implement, and evaluate controlled compilation by building a new toolchain with Clang/LLVM. We perform an evaluation on 10 open source projects and compare the results of AFL to state-of-the-art grey-box fuzzers and concolic fuzzers. We show that when programs are compiled with this new toolchain, AFL covers 30% new code on average and finds 21 additional bugs in real-world programs. Our study reveals that controlled compilation often covers more code and finds more bugs than state-of-the-art fuzzing techniques, without the need to write a fuzzer from scratch or resort to advanced techniques. We identify two main reasons why. First, it has proven difficult for researchers to appropriately configure existing fuzzers such as AFL. To address this problem, we provide guidelines and new LLVM passes to help automate AFL's configuration, enabling researchers to perform a fairer comparison with AFL. Second, we find that current coverage-based evaluation measures (e.g., the total number of visited lines, edges, or basic blocks) are inadequate because they lose valuable information, such as which parts of a program a fuzzer actually visits and how consistently it does so. Coverage is considered a useful metric for evaluating a fuzzer's performance and devising a fuzzing strategy; however, the lack of a standard methodology for evaluating coverage remains a problem. To address this, we propose a rigorous evaluation methodology based on "qualitative coverage". Qualitative coverage uniquely identifies each program line to help understand which lines are commonly visited by different fuzzers versus which lines are visited only by a particular fuzzer. Throughout our study, we show the benefits of this new evaluation methodology. For example, we provide valuable insights into the consistency of fuzzers, i.e., their ability to cover the same code or find the same bug across multiple independent runs. Overall, our evaluation methodology based on qualitative coverage helps to understand whether a fuzzer performs better than, worse than, or complementary to another fuzzer. This helps security practitioners adjust their fuzzing strategies.
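As a minimal sketch of the qualitative-coverage idea described in the abstract (not the authors' tooling: the file names, the "file:line" record format, and the fuzzer labels are assumptions), the snippet below compares per-line coverage sets from two fuzzers to separate commonly visited lines from lines only one fuzzer reaches.

```python
# Hypothetical sketch: compare "qualitative" (per-line) coverage of two fuzzers.
# Assumes each input file lists covered lines as "path/to/file.c:123", one per line.

def load_coverage(path):
    """Read the set of uniquely identified program lines covered by one fuzzer."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def compare_coverage(cov_a, cov_b):
    """Split covered lines into common, A-only, and B-only subsets."""
    return {
        "common": cov_a & cov_b,
        "only_a": cov_a - cov_b,
        "only_b": cov_b - cov_a,
    }

if __name__ == "__main__":
    afl = load_coverage("afl_lines.txt")          # assumed file name
    other = load_coverage("concolic_lines.txt")   # assumed file name
    report = compare_coverage(afl, other)
    for key, lines in report.items():
        print(f"{key}: {len(lines)} lines")
```

Repeating this comparison across independent runs of the same fuzzer would give a rough picture of the consistency the authors highlight, i.e. how often the same lines are reached from run to run.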
2019-07-01
Rosa, F. De Franco, Jino, M., Bueno, P. Marcos Siqueira, Bonacin, R.  2018.  Coverage-Based Heuristics for Selecting Assessment Items from Security Standards: A Core Set Proposal. 2018 Workshop on Metrology for Industry 4.0 and IoT. :192–197.

In the realm of the Internet of Things (IoT), information security is a critical issue. Security standards, including their assessment items, are essential instruments in the evaluation of systems security. However, a key question remains open: "Which test cases are most effective for security assessment?" To create security assessment designs with suitable assessment items, we need to know the security properties and assessment dimensions covered by a standard. We propose an approach for selecting and analyzing security assessment items; its foundations come from a set of assessment heuristics, and it aims to increase the coverage of assessment dimensions and security characteristics in assessment designs. The main contribution of this paper is the definition of a core set of security assessment heuristics. We systematize the security assessment process by means of a conceptual formalization of the security assessment area. Our approach can be applied to security standards to select or prioritize assessment items with respect to 11 security properties and 6 assessment dimensions. The approach is flexible, allowing the inclusion of further dimensions and properties. Our proposal was applied to a well-known security standard (ISO/IEC 27001), and its assessment items were analyzed. The proposal is meant to support: (i) the generation of high-coverage assessment designs, which include security assessment items with assured coverage of the main security characteristics, and (ii) the evaluation of security standards with respect to the coverage of security aspects.
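To make the coverage-based selection idea concrete, here is a rough sketch, not the authors' heuristics: the item identifiers, their property/dimension mappings, and the greedy criterion are illustrative assumptions. It greedily picks assessment items that add the most not-yet-covered security properties and assessment dimensions.

```python
# Hypothetical sketch of a coverage-based greedy heuristic for selecting
# assessment items; the item data and the greedy criterion are illustrative only.

def greedy_core_set(items):
    """Select items that add the most uncovered properties/dimensions first."""
    covered = set()
    core = []
    remaining = dict(items)
    while remaining:
        # Pick the item contributing the most aspects not yet covered.
        best, gain = max(
            ((name, len(aspects - covered)) for name, aspects in remaining.items()),
            key=lambda pair: pair[1],
        )
        if gain == 0:
            break  # no remaining item adds new coverage
        core.append(best)
        covered |= remaining.pop(best)
    return core, covered

if __name__ == "__main__":
    # Each assessment item maps to the security properties / assessment
    # dimensions it covers (mappings invented for illustration).
    items = {
        "A.9.2.3": {"confidentiality", "access control", "organizational"},
        "A.12.3.1": {"availability", "technical"},
        "A.12.4.1": {"accountability", "technical"},
    }
    core, covered = greedy_core_set(items)
    print("core set:", core)
    print("aspects covered:", covered)
```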

2017-10-03
Bello, Oumarou Mamadou, Taiwe, Kolyang Dina.  2016.  Mesh Node Placement in Wireless Mesh Network Based on Multiobjective Evolutionary Metaheuristic. Proceedings of the International Conference on Internet of Things and Cloud Computing. :59:1–59:6.

The necessity of deploying a wireless mesh network (WMN) is determined by real-world application requirements. WMNs do not fit some applications well due to latency issues and capacity-related problems on paths with more than two hops. With promising IEEE 802.11ac-based devices, better fairness for multi-hop communications is expected to support broadband applications; the rate usually varies according to link quality and the network environment. Careful network planning can effectively improve the throughput and delay of the overall network. We provide a model for the placement of router nodes as an optimization process to improve performance. Our aim is to propose a WMN planning model based on multiobjective constraints such as coverage, reliability, and cost of deployment. Guaranteeing a bit rate therefore makes it necessary to limit the number of stations connected to an access point; to take the delay and fairness of the network into account, user behaviors are derived. We use a metaheuristic based on a multiobjective evolutionary algorithm to evaluate the performance of our proposed placement algorithm.
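As a minimal illustration of the multiobjective formulation, the sketch below (not the paper's model: the planar client layout, coverage radius, and unit cost are assumptions) evaluates a candidate router placement on two of the objectives mentioned above, client coverage and deployment cost; these are the kind of objective values a multiobjective evolutionary metaheuristic would compute for each candidate it evolves.

```python
# Hypothetical sketch: evaluate a candidate mesh-router placement on two
# objectives (client coverage and deployment cost). A real multiobjective
# evolutionary algorithm would evolve and compare many such candidates.
import math
import random

RADIUS = 100.0         # assumed router coverage radius (meters)
COST_PER_ROUTER = 1.0  # assumed unit deployment cost

def coverage(routers, clients, radius=RADIUS):
    """Fraction of clients within range of at least one router."""
    covered = sum(
        any(math.dist(c, r) <= radius for r in routers) for c in clients
    )
    return covered / len(clients)

def evaluate(routers, clients):
    """Return the objective vector (maximize coverage, minimize cost)."""
    return coverage(routers, clients), COST_PER_ROUTER * len(routers)

if __name__ == "__main__":
    random.seed(0)
    clients = [(random.uniform(0, 500), random.uniform(0, 500)) for _ in range(50)]
    candidate = [(random.uniform(0, 500), random.uniform(0, 500)) for _ in range(6)]
    cov, cost = evaluate(candidate, clients)
    print(f"coverage: {cov:.0%}, cost: {cost:.1f}")
```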