Bibliography
Web applications are typically built under time constraints and therefore often ship with security vulnerabilities. Automated vulnerability scanners are common tools among software developers for detecting such vulnerabilities; they emulate an attacker by exercising the application extensively. SQL Injection and Cross-Site Scripting (XSS) are two of the most widespread and dangerous vulnerabilities affecting users of web applications. Being able to trust the findings of web vulnerability scanning software is essential, yet without a clear idea of the accuracy and coverage of open-source tools, the results that an automated scanner provides are difficult to interpret. Comparing the key figures of automated vulnerability scanners is important because many scanners are on the market, and such a comparison helps decide which scanner performs better against SQL Injection and XSS vulnerabilities. In this paper, a method by Jose Fonseca et al. is used to compare open-source automated vulnerability scanners based on detection coverage, and a method by Yuki Makino and Vitaly Klyuev is used for precision rate. The selected vulnerabilities are injected into web applications, which are then scanned by each scanner, and the results are compared by analyzing the precision rate and detection coverage of vulnerability detection. Two leading open-source automated vulnerability scanners, OWASP ZAP and Skipfish, are evaluated. The results show that OWASP ZAP outperforms Skipfish by a factor of two in precision rate, while the two scanners achieve almost the same detection coverage, with OWASP ZAP detecting a higher number of high-severity vulnerabilities.
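As a rough illustration of how these two figures can be computed, the following Python sketch scores a scanner's report against a set of deliberately injected vulnerabilities; the vulnerability identifiers and example reports are hypothetical, not taken from the paper's benchmark.

# Minimal sketch of scoring a scanner against injected ground-truth
# vulnerabilities; all identifiers below are hypothetical.

def score_scanner(reported: set, injected: set) -> dict:
    """Compute precision and detection coverage for one scanner.

    reported: identifiers of vulnerabilities flagged by the scanner
    injected: identifiers of vulnerabilities deliberately seeded
    """
    true_positives = reported & injected
    precision = len(true_positives) / len(reported) if reported else 0.0
    coverage = len(true_positives) / len(injected) if injected else 0.0
    return {"precision": precision, "detection_coverage": coverage}

# Example: two scanners run against the same seeded application.
injected = {"sqli-01", "sqli-02", "xss-01", "xss-02"}
zap_report = {"sqli-01", "xss-01", "xss-02"}                   # 3 hits, no false alarms
skipfish_report = {"sqli-01", "xss-01", "bogus-1", "bogus-2"}  # 2 hits, 2 false alarms

for name, report in [("ZAP", zap_report), ("Skipfish", skipfish_report)]:
    print(name, score_scanner(report, injected))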
Companies like Netflix increasingly use the cloud to deploy their business processes. Those processes often involve partnerships with other companies, and can be modeled as workflows where the owner of the data at risk interacts with contractors to realize a sequence of tasks on the data to be secured. In practice, access control is an essential building block for deploying these secured workflows. This component is generally managed by administrators using high-level policies meant to represent the requirements and restrictions put on the workflow. Handling access control with a high-level scheme comes with the benefit of separating the problem of specification, i.e. defining the desired behavior of the system, from the problem of implementation, i.e. enforcing this desired behavior. However, translating such high-level policies into a deployed implementation can be error-prone. Even though semi-automatic and automatic tools have been proposed to assist this translation, policy verification remains highly challenging in practice. In this paper, our aim is to define and propose structures assisting the checking and correction of potential errors introduced on the ground by a faulty translation or a corrupted deployment. In particular, we investigate structures with formal foundations able to naturally model policies. Metagraphs, a generalized graph-theoretic structure, fulfill those requirements: their usage makes it possible to compare high-level policies to their implementation. In practice, we consider Rego, a language used by companies like Netflix and Plex for their release process, as a valuable representative of the most common policy languages. We propose a suite of tools transforming and checking policies as metagraphs, and use them in a global framework to show how policy verification can be achieved with such structures. Finally, we evaluate the performance of our verification method.
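As a loose illustration of the metagraph idea, the sketch below models a policy as edges between sets of elements and reduces verification to a structural comparison of the specification against the deployed implementation; the Edge class and the example policies are simplified assumptions for demonstration, not the paper's actual tooling.

from dataclasses import dataclass

# In a metagraph, an edge connects a set of elements (invertex) to
# another set of elements (outvertex), e.g. a set of role attributes
# to a set of permitted tasks.
@dataclass(frozen=True)
class Edge:
    invertex: frozenset
    outvertex: frozenset

# Specification: analysts may read; admins may read and delete.
spec_edges = {
    Edge(frozenset({"analyst"}), frozenset({"read"})),
    Edge(frozenset({"admin"}), frozenset({"read", "delete"})),
}

# Implementation as actually deployed (e.g. extracted from Rego rules):
# the analyst -> read edge was lost in translation.
impl_edges = {
    Edge(frozenset({"admin"}), frozenset({"read", "delete"})),
}

# Verification reduces to comparing the two edge sets.
missing = spec_edges - impl_edges    # specified but not deployed
spurious = impl_edges - spec_edges   # deployed but never specified
print("missing edges:", missing)
print("spurious edges:", spurious)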
Software developers can use diverse techniques and tools to reduce the number of vulnerabilities, but the effectiveness of existing solutions in real projects is questionable. For example, Static Analysis Tools (SATs) report potential vulnerabilities by analyzing code patterns, and Software Metrics (SMs) can be used to predict vulnerabilities based on high-level characteristics of the code. In theory, both approaches can be applied from the early stages of the development process, but it is well known that they fail to detect critical vulnerabilities and raise a large number of false alarms. This paper studies the hypothesis of using Machine Learning (ML) to combine alerts from SATs with SMs to predict vulnerabilities in a large software project (under development for many years). In practice, we use four ML algorithms, alerts from two SATs, and a large number of SMs to predict whether a source code file is vulnerable or not (binary classification) and to predict the vulnerability category (multiclass classification). Results show that one can achieve either high precision or high recall, but not both at the same time. To understand the reason, we analyze and compare snippets of source code, demonstrating that vulnerable and non-vulnerable files share similar characteristics, making it hard to distinguish vulnerable from non-vulnerable code based on SAT alerts and SMs.
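As a loose illustration of the feature-combination idea, the following sketch feeds per-file SAT alert counts and software metrics to a binary classifier; the synthetic data, feature layout, and choice of a random forest are assumptions for demonstration only, not the paper's exact setup.

# Minimal sketch of combining SAT alerts with software metrics as
# per-file features for binary vulnerability prediction; the data is
# synthetic and the model choice is an assumption.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)

# Each row is a source file: [SAT-A alerts, SAT-B alerts, LOC,
# cyclomatic complexity]; y marks files with a known vulnerability.
X = rng.integers(0, 50, size=(500, 4)).astype(float)
y = rng.integers(0, 2, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)

# The paper's observation is that tuning such a pipeline tends to
# trade one of these metrics for the other.
print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))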