Biblio

Filters: Keyword is empirical evaluation
2021-04-29
Hayes, J. Huffman, Payne, J., Essex, E., Cole, K., Alverson, J., Dekhtyar, A., Fang, D., Bernosky, G.  2020.  Towards Improved Network Security Requirements and Policy: Domain-Specific Completeness Analysis via Topic Modeling. 2020 IEEE Seventh International Workshop on Artificial Intelligence for Requirements Engineering (AIRE). :83–86.

Network security policies contain requirements, including system and software features as well as expected and desired actions of human actors. In this paper, we present a framework for evaluating textual network security policies as requirements documents to identify areas for improvement. Specifically, our framework concentrates on completeness. We use topic modeling coupled with expert evaluation to learn the complete list of important topics that should be addressed in a network security policy. Using these topics as a checklist, we evaluate, with students serving as evaluators, a collection of network security policies for completeness, i.e., the degree to which these topics are present in the text. We developed three methods for topic recognition to identify missing or poorly addressed topics. We examine network security policies and report the results of our analysis, which indicate the preliminary success of our approach.
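To illustrate the kind of pipeline the abstract describes, the following is a minimal sketch of topic-based completeness checking using scikit-learn's LDA; the file names, topic count, and coverage threshold are illustrative assumptions, not details from the paper.

    # Hypothetical sketch: learn topics from a corpus of security policies,
    # then flag topics weakly represented in a policy under review.
    # File names, n_components, and the 0.02 threshold are assumptions.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    policy_paths = ["policy_a.txt", "policy_b.txt"]  # hypothetical corpus
    corpus = [open(p).read() for p in policy_paths]
    vec = CountVectorizer(stop_words="english", max_features=5000)
    X = vec.fit_transform(corpus)

    lda = LatentDirichletAllocation(n_components=20, random_state=0).fit(X)

    new_policy = open("policy_under_review.txt").read()
    weights = lda.transform(vec.transform([new_policy]))[0]  # topic mixture
    missing = [t for t, w in enumerate(weights) if w < 0.02]
    print("Topics possibly missing or poorly addressed:", missing)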

2021-03-15
Brauckmann, A., Goens, A., Castrillon, J.  2020.  ComPy-Learn: A toolbox for exploring machine learning representations for compilers. 2020 Forum for Specification and Design Languages (FDL). :1–4.
Deep Learning methods have been shown not only to improve software performance in compiler heuristics, but also, e.g., to improve security in vulnerability prediction and to boost developer productivity in software engineering tools. A key to the success of such methods across these use cases is the expressiveness of the representation used to abstract from the program code. Recent work has shown that different representations have unique performance advantages. However, determining the best-performing representation for a given task is often not obvious and requires empirical evaluation. Therefore, we present ComPy-Learn, a toolbox for conveniently defining, extracting, and exploring representations of program code. With syntax-level language information from the Clang compiler frontend and low-level information from the LLVM compiler backend, the tool supports the construction of linear and graph representations and enables an efficient search for the best-performing representation and model for tasks on program code.
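As a loose analogy to the representation choice the toolbox explores (using Python's own tooling rather than ComPy-Learn's actual Clang/LLVM-based API), the same program can be abstracted as a linear token sequence or as a syntax-level graph, and which one performs best for a task is an empirical question:

    # Analogy only, not the ComPy-Learn API: two representations of one program.
    import ast, io, tokenize

    src = "def f(a, b):\n    return a * b + 1\n"

    # Linear representation: the raw token stream.
    tokens = [t.string for t in tokenize.generate_tokens(io.StringIO(src).readline)]

    # Graph representation: parent -> child edges over AST node types.
    tree = ast.parse(src)
    edges = [(type(p).__name__, type(c).__name__)
             for p in ast.walk(tree) for c in ast.iter_child_nodes(p)]

    print(tokens[:8])  # e.g. ['def', 'f', '(', 'a', ',', 'b', ')', ':']
    print(edges[:4])   # e.g. [('Module', 'FunctionDef'), ...]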
2020-06-29
Liang, Xiaoyu, Znati, Taieb.  2019.  An empirical study of intelligent approaches to DDoS detection in large scale networks. 2019 International Conference on Computing, Networking and Communications (ICNC). :821–827.
Distributed Denial of Service (DDoS) attacks continue to be one of the most challenging threats to the Internet. The intensity and frequency of these attacks are increasing at an alarming rate. Numerous schemes have been proposed to mitigate the impact of DDoS attacks. This paper presents a comprehensive empirical evaluation of Machine Learning (ML)-based DDoS detection techniques, to gain a better understanding of their performance in different types of environments. To this end, a framework is developed, focusing on different attack scenarios, to investigate the performance of a class of ML-based techniques. The evaluation uses different performance metrics, including the impact of the “Class Imbalance Problem” on ML-based DDoS detection. The results of the comparative analysis show that no one technique outperforms all others in all test cases. Furthermore, the results underscore the need for a method-oriented feature selection model to enhance the capabilities of ML-based detection techniques. Finally, the results show that the class imbalance problem significantly impacts performance, underscoring the need to address it in order to enhance ML-based DDoS detection capabilities.
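A minimal sketch of the class-imbalance effect the paper studies, on synthetic data (the 1% attack rate, feature count, and model choice are assumptions for illustration, not the paper's setup); it compares detection of the rare class with and without class weighting:

    # Sketch of the class-imbalance problem in ML-based detection.
    # Synthetic data only; features and model are illustrative.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score, recall_score
    from sklearn.model_selection import train_test_split

    # ~1% "attack" class mimics the skew of real traffic traces.
    X, y = make_classification(n_samples=20000, weights=[0.99, 0.01],
                               n_features=10, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

    for cw in (None, "balanced"):
        clf = RandomForestClassifier(class_weight=cw, random_state=0).fit(Xtr, ytr)
        pred = clf.predict(Xte)
        print(cw, "recall:", recall_score(yte, pred), "F1:", f1_score(yte, pred))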
2020-01-20
Clark, Shane S., Paulos, Aaron, Benyo, Brett, Pal, Partha, Schantz, Richard.  2015.  Empirical Evaluation of the A3 Environment: Evaluating Defenses Against Zero-Day Attacks. 2015 10th International Conference on Availability, Reliability and Security. :80–89.

A3 is an execution management environment that aims to make network-facing applications and services resilient against zero-day attacks. A3 recently underwent two adversarial evaluations of its defensive capabilities. In one, A3 defended an App Store used in a Capture the Flag (CTF) tournament, and in the other, a tactically relevant network service in a red team exercise. This paper describes the A3 defensive technologies evaluated, the evaluation results, and the broader lessons learned about evaluations for technologies that seek to protect critical systems from zero-day attacks.

2017-03-07
Santoro, Donatello, Arocena, Patricia C., Glavic, Boris, Mecca, Giansalvatore, Miller, Renée J., Papotti, Paolo.  2016.  BART in Action: Error Generation and Empirical Evaluations of Data-Cleaning Systems. Proceedings of the 2016 International Conference on Management of Data. :2161–2164.

Repairing erroneous or conflicting data that violate a set of constraints is an important problem in data management. Many automatic or semi-automatic data-repairing algorithms have been proposed in the last few years, each with its own strengths and weaknesses. Bart is an open-source error-generation system conceived to support thorough experimental evaluations of these data-repairing systems. The demo is centered around three main lessons. To start, we discuss how generating errors in data is a complex problem with several facets. We introduce the important notions of detectability and repairability of an error, which stand at the core of Bart. Then, we show how, by changing the features of errors, it is possible to influence the performance of the tools quite significantly. Finally, we put five data-repairing algorithms to work on dirty data of various kinds generated using Bart, and discuss their performance.
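In the spirit of Bart's detectability notion (an illustrative sketch, not Bart's actual API; the functional dependency and table are made up), an injected error is detectable only if it violates a declared constraint:

    # Sketch: detectable vs. undetectable error injection under the
    # (hypothetical) functional dependency zip -> city.
    import pandas as pd

    df = pd.DataFrame({"zip": ["10001", "10001", "60601"],
                       "city": ["New York", "New York", "Chicago"]})

    # Changing one city for a repeated zip violates the FD: detectable.
    detectable = df.copy()
    detectable.loc[1, "city"] = "Boston"

    # Changing both zip and city of a row keeps the FD satisfied, so
    # this error is undetectable by the constraint alone.
    undetectable = df.copy()
    undetectable.loc[2, ["zip", "city"]] = ["94105", "Boston"]

    def violates_fd(t, lhs, rhs):
        return (t.groupby(lhs)[rhs].nunique() > 1).any()

    print(violates_fd(detectable, "zip", "city"))    # True
    print(violates_fd(undetectable, "zip", "city"))  # False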