Biblio

Filters: Keyword is empirical evaluation
2021-03-15
Brauckmann, A., Goens, A., Castrillon, J.  2020.  ComPy-Learn: A toolbox for exploring machine learning representations for compilers. 2020 Forum for Specification and Design Languages (FDL). :1–4.
Deep Learning methods have been shown not only to improve software performance in compiler heuristics, but also, for example, to improve security in vulnerability prediction and to boost developer productivity in software engineering tools. A key to the success of such methods across these use cases is the expressiveness of the representation used to abstract from the program code. Recent work has shown that different representations offer distinct performance advantages. However, determining the best-performing one for a given task is often not obvious and requires empirical evaluation. Therefore, we present ComPy-Learn, a toolbox for conveniently defining, extracting, and exploring representations of program code. With syntax-level language information from the Clang compiler frontend and low-level information from the LLVM compiler backend, the tool supports the construction of linear and graph representations and enables an efficient search for the best-performing representation and model for tasks on program code.
2021-04-29
Hayes, J. Huffman, Payne, J., Essex, E., Cole, K., Alverson, J., Dekhtyar, A., Fang, D., Bernosky, G.  2020.  Towards Improved Network Security Requirements and Policy: Domain-Specific Completeness Analysis via Topic Modeling. 2020 IEEE Seventh International Workshop on Artificial Intelligence for Requirements Engineering (AIRE). :83–86.

Network security policies contain requirements, including system and software features as well as expected and desired actions of human actors. In this paper, we present a framework for evaluating textual network security policies as requirements documents to identify areas for improvement. Specifically, our framework concentrates on completeness. We use topic modeling coupled with expert evaluation to learn the complete list of important topics that should be addressed in a network security policy. Using these topics as a checklist, we evaluate (with student evaluators) a collection of network security policies for completeness, i.e., the degree to which these topics are present in the text. We developed three methods for topic recognition to identify missing or poorly addressed topics. We examine network security policies and report the results of our analysis, which indicate preliminary success of our approach.
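As a rough illustration of the general approach this abstract describes (not the authors' implementation), the sketch below uses off-the-shelf LDA topic modeling from scikit-learn to extract candidate topics from a small set of policy texts and then applies a simple keyword checklist to gauge topic coverage in one policy; the policy snippets, checklist terms, and parameters are all assumptions made for illustration.

```python
# Illustrative sketch only: generic LDA topic extraction plus a simple
# checklist-style completeness check for policy text. Not the paper's code;
# the policy texts, checklist terms, and parameters below are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

policies = [
    "All remote access to internal systems must use VPN and two-factor authentication.",
    "Passwords must be rotated every 90 days and stored using approved hashing.",
    "Incident response procedures define reporting timelines and escalation contacts.",
]

# Learn topics from a corpus of policies (stand-in for an expert-curated topic list).
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(policies)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-5:]]
    print(f"topic {idx}: {top_terms}")

# Checklist-style completeness check: does a single policy mention each expected topic?
checklist = {"authentication": ["vpn", "authentication", "password"],
             "incident response": ["incident", "escalation", "reporting"]}
policy_text = policies[0].lower()
covered = [name for name, kws in checklist.items() if any(k in policy_text for k in kws)]
print(f"covered topics: {covered} ({len(covered)}/{len(checklist)})")
```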

2020-06-29
Liang, Xiaoyu, Znati, Taieb.  2019.  An empirical study of intelligent approaches to DDoS detection in large scale networks. 2019 International Conference on Computing, Networking and Communications (ICNC). :821–827.
Distributed Denial of Service (DDoS) attacks continue to be one of the most challenging threats to the Internet. The intensity and frequency of these attacks are increasing at an alarming rate. Numerous schemes have been proposed to mitigate the impact of DDoS attacks. This paper presents a comprehensive empirical evaluation of Machine Learning (ML)-based DDoS detection techniques, to gain a better understanding of their performance in different types of environments. To this end, a framework is developed, focusing on different attack scenarios, to investigate the performance of a class of ML-based techniques. The evaluation uses different performance metrics, including the impact of the “Class Imbalance Problem” on ML-based DDoS detection. The results of the comparative analysis show that no one technique outperforms all others in all test cases. Furthermore, the results underscore the need for a method-oriented feature selection model to enhance the capabilities of ML-based detection techniques. Finally, the results show that the class imbalance problem significantly impacts performance, underscoring the need to address this problem in order to enhance ML-based DDoS detection capabilities.
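To illustrate the class-imbalance issue highlighted in this evaluation (a generic example, not the paper's framework or dataset), the sketch below trains a classifier on a synthetic, heavily imbalanced "attack vs. benign" dataset and compares plain accuracy with minority-class recall and F1, with and without class weighting; all data and parameters are invented for illustration.

```python
# Illustrative sketch only: how class imbalance can distort accuracy-based
# evaluation of an ML detector. Synthetic data; not the paper's framework.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, recall_score

# Synthetic traffic: ~2% "attack" (class 1), ~98% "benign" (class 0).
X, y = make_classification(n_samples=20000, n_features=20,
                           weights=[0.98, 0.02], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for class_weight in (None, "balanced"):
    clf = RandomForestClassifier(n_estimators=100, class_weight=class_weight,
                                 random_state=0)
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    # Accuracy looks high regardless; recall/F1 on the rare attack class tells more.
    print(f"class_weight={class_weight}: "
          f"accuracy={accuracy_score(y_te, pred):.3f}, "
          f"attack recall={recall_score(y_te, pred):.3f}, "
          f"attack F1={f1_score(y_te, pred):.3f}")
```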
2017-03-07
Santoro, Donatello, Arocena, Patricia C., Glavic, Boris, Mecca, Giansalvatore, Miller, Renée J., Papotti, Paolo.  2016.  BART in Action: Error Generation and Empirical Evaluations of Data-Cleaning Systems. Proceedings of the 2016 International Conference on Management of Data. :2161–2164.

Repairing erroneous or conflicting data that violate a set of constraints is an important problem in data management. Many automatic or semi-automatic data-repairing algorithms have been proposed in recent years, each with its own strengths and weaknesses. Bart is an open-source error-generation system conceived to support thorough experimental evaluations of these data-repairing systems. The demo is centered around three main lessons. To start, we discuss how generating errors in data is a complex problem with several facets. We introduce the important notions of detectability and repairability of an error, which stand at the core of Bart. Then, we show how, by changing the features of errors, it is possible to influence the performance of the tools quite significantly. Finally, we put five data-repairing algorithms to work on dirty data of various kinds generated using Bart, and discuss their performance.

2020-01-20
Clark, Shane S., Paulos, Aaron, Benyo, Brett, Pal, Partha, Schantz, Richard.  2015.  Empirical Evaluation of the A3 Environment: Evaluating Defenses Against Zero-Day Attacks. 2015 10th International Conference on Availability, Reliability and Security. :80–89.

A3 is an execution management environment that aims to make network-facing applications and services resilient against zero-day attacks. A3 recently underwent two adversarial evaluations of its defensive capabilities. In one, A3 defended an App Store used in a Capture the Flag (CTF) tournament, and in the other, a tactically relevant network service in a red team exercise. This paper describes the A3 defensive technologies evaluated, the evaluation results, and the broader lessons learned about evaluations for technologies that seek to protect critical systems from zero-day attacks.

2015-01-13
Riaz, Maria, Breaux, Travis, Williams, Laurie, Niu, Jianwei.  2012.  On the Design of Empirical Studies to Evaluate Software Patterns: A Survey.

Software patterns are created with the goal of capturing expert knowledge so it can be efficiently and effectively shared with the software development community. However, patterns in practice may or may not achieve these goals. Empirical studies of the use of software patterns can help provide deeper insight into whether these goals have been met. The objective of this paper is to aid researchers in designing empirical studies of software patterns by summarizing the study designs of software patterns available in the literature. The important components of these study designs include the evaluation criteria and how the patterns are presented to study participants. We select and analyze 19 distinct empirical studies and identify 17 independent variables in three different categories (participant demographics, pattern presentation, and problem presentation). We also extract 10 evaluation criteria with 23 associated observable measures. Additionally, by synthesizing the reported observations, we identify challenges faced during study execution. Providing multiple domain-specific examples of pattern application, along with tool support to assist in pattern selection, helps study participants understand and complete the study task. Capturing data regarding the cognitive processes of participants can provide insights into the findings of the study.