Biblio

Filters: Author is Hoole, Alexander M.
2017-11-20
Hoole, Alexander M., Traore, Issa, Delaitre, Aurelien, de Oliveira, Charles.  2016.  Improving Vulnerability Detection Measurement: Test Suites and Software Security Assurance. Proceedings of the 20th International Conference on Evaluation and Assessment in Software Engineering. 27:1–27:10.

The Software Assurance Metrics and Tool Evaluation (SAMATE) project at the National Institute of Standards and Technology (NIST) has created the Software Assurance Reference Dataset (SARD) to provide researchers and software security assurance tool developers with a set of known security flaws. As part of an empirical evaluation of a runtime monitoring framework, two test suites were executed and monitored, revealing deficiencies which led to a collaboration with the NIST SAMATE team to provide replacements. Test Suites 45 and 46 are analyzed, discussed, and updated to improve accuracy, consistency, preciseness, and automation. Empirical results show metrics such as recall, precision, and F-Measure are all impacted by invalid base assumptions regarding the test suites.
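As a rough illustration of the abstract's closing point (not taken from the paper; the counts below are invented), recall, precision, and F-Measure are all computed against the ground truth assumed by the test suite, so correcting mislabelled test cases shifts every metric:

# Hypothetical sketch (not from the paper): how precision, recall, and
# F-Measure change when a test suite's ground-truth labels are corrected.

def precision_recall_f1(tp, fp, fn):
    """Standard detection metrics from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Made-up tool results against a suite assumed to contain 100 flawed cases.
assumed = precision_recall_f1(tp=70, fp=10, fn=30)

# If 15 of the "flawed" cases turn out not to contain the expected flaw
# (10 that the tool flagged, 5 that it missed), those hits become false
# positives and the false-negative count shrinks.
corrected = precision_recall_f1(tp=60, fp=20, fn=25)

for label, (p, r, f1) in [("assumed ground truth", assumed),
                          ("corrected ground truth", corrected)]:
    print(f"{label}: precision={p:.2f} recall={r:.2f} F-Measure={f1:.2f}")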
