Biblio

Filters: Author is Briand, Lionel C.
2018-05-09
Thomé, Julian, Shar, Lwin Khin, Bianculli, Domenico, Briand, Lionel C.  2017.  JoanAudit: A Tool for Auditing Common Injection Vulnerabilities. Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering. :1004–1008.

JoanAudit is a static analysis tool that assists security auditors in auditing Web applications and Web services for common injection vulnerabilities during software development. It automatically identifies the parts of the program code that are relevant for security and generates an HTML report that guides security auditors through the source code in a scalable way. JoanAudit is configured with various security-sensitive input sources and sinks relevant to injection vulnerabilities, as well as standard sanitization procedures that prevent these vulnerabilities. It can also automatically fix some vulnerabilities in source code, namely cases where inputs are used directly in sinks without any form of sanitization, by applying standard sanitization procedures. Our evaluation shows that with JoanAudit, security auditors need to inspect only 1% of the total code when auditing for common injection vulnerabilities. The screencast demo is available at https://github.com/julianthome/joanaudit.
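The source/sink/sanitizer idea described above can be sketched in a few lines. The following is a toy illustration only, not JoanAudit's actual analysis (which targets Java programs, not text lines): the source and sink patterns and the ESAPI-style sanitizer wrapper are hypothetical placeholders standing in for the tool's configured security-sensitive sources, sinks, and standard sanitization procedures.

```python
import re

# Hypothetical configuration: user-controlled input sources, injection-prone
# sinks, and a standard sanitization procedure (all names are illustrative).
SOURCES = {"request.getParameter"}
SINKS = {"stmt.executeQuery"}
SANITIZER = "ESAPI.encoder().encodeForSQL"

def audit(lines):
    """Flag lines where a source reaches a sink with no sanitization,
    and propose an auto-fix that wraps the source call in the sanitizer.
    Returns a list of (line_number, fixed_line) pairs."""
    findings = []
    for no, line in enumerate(lines, 1):
        has_source = any(s in line for s in SOURCES)
        has_sink = any(s in line for s in SINKS)
        if has_source and has_sink and SANITIZER not in line:
            fixed = line
            for src in SOURCES:
                # Wrap every call to the source in the sanitizer call.
                pattern = re.escape(src) + r"\([^)]*\)"
                fixed = re.sub(pattern,
                               lambda m: SANITIZER + "(" + m.group(0) + ")",
                               fixed)
            findings.append((no, fixed))
    return findings

code = [
    'String id = request.getParameter("id");',
    'ResultSet r = stmt.executeQuery("SELECT * FROM t'
    ' WHERE id=" + request.getParameter("id"));',
]
for no, fix in audit(code):
    print("line", no, "->", fix)
```

Only the second line is flagged, because the input flows directly into the query sink; the first line merely reads the input. The real tool works on program slices rather than raw text, but the report-then-fix workflow is the same.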

2017-05-22
Ceccato, Mariano, Nguyen, Cu D., Appelt, Dennis, Briand, Lionel C.  2016.  SOFIA: An Automated Security Oracle for Black-box Testing of SQL-injection Vulnerabilities. Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering. :167–177.

Security testing is a pivotal activity in engineering secure software. It consists of two phases: generating attack inputs to test the system, and assessing whether test executions expose any vulnerabilities. The latter phase is known as the security oracle problem. In this work, we present SOFIA, a Security Oracle for SQL-Injection Vulnerabilities. SOFIA is programming-language and source-code independent, and can be used with various attack generation tools. Moreover, because it does not rely on known attacks for learning, SOFIA is designed to also detect types of SQLi attacks that are unknown at learning time. The oracle challenge is recast as a one-class classification problem: we learn to characterise legitimate SQL statements so as to accurately distinguish them from SQLi attack statements. We carried out an experimental validation on six applications, two of which are large and widely used. SOFIA was used to detect real SQLi vulnerabilities with inputs generated by three attack generation tools. The results show that SOFIA is computationally fast and achieves a recall rate of 100% (i.e., missing no attacks) with a low false positive rate (0.6%).
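The one-class idea can be illustrated with a deliberately simplified sketch: train only on legitimate SQL statements, abstract each statement to a structural "shape" (keywords and operators kept, literals and identifiers collapsed), and flag any statement whose shape was never seen during training. SOFIA's actual oracle is more sophisticated (it works on parse trees with a tree-edit distance), so everything below, including the tokenizer and the shape abstraction, is a hypothetical stand-in for the learned characterisation.

```python
import re

KEYWORDS = {"select", "from", "where", "and", "or", "insert",
            "into", "values", "update", "set", "delete", "union"}

def shape(sql):
    """Abstract a SQL statement to its structural shape: SQL keywords
    and punctuation are kept, literals and identifiers are collapsed."""
    tokens = re.findall(r"[A-Za-z_]\w*|'[^']*'|\d+|[^\sA-Za-z0-9_]", sql)
    out = []
    for t in tokens:
        low = t.lower()
        if low in KEYWORDS:
            out.append(low)                 # structural keyword
        elif t.startswith("'") or t.isdigit():
            out.append("<lit>")             # string or numeric literal
        elif t[0].isalpha() or t[0] == "_":
            out.append("<id>")              # table/column identifier
        else:
            out.append(t)                   # operator or punctuation
    return tuple(out)

class OneClassOracle:
    """Trained only on legitimate statements (no attack examples);
    anything with an unseen shape is reported as a possible attack."""
    def __init__(self):
        self.known = set()

    def fit(self, legitimate):
        self.known = {shape(s) for s in legitimate}

    def is_attack(self, sql):
        return shape(sql) not in self.known

oracle = OneClassOracle()
oracle.fit(["SELECT name FROM users WHERE id = 42",
            "SELECT name FROM users WHERE id = 7"])
# Same shape as the training data: not flagged.
print(oracle.is_attack("SELECT name FROM users WHERE id = 99"))        # False
# Tautology changes the statement's structure: flagged.
print(oracle.is_attack("SELECT name FROM users WHERE id = 1 OR 1=1"))  # True
```

The key property mirrored here is that no attack ever appears in training: the oracle characterises only legitimate behaviour, which is why it can flag injection patterns that were unknown when the model was built.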