Bibliography
Current evaluation of API recommendation systems focuses mainly on correctness, which is computed by matching recommended results against ground-truth APIs. However, this measurement can be misleading when a result contains more than one API. In practice, some APIs implement basic functionality (e.g., printing and log generation). Such APIs can be invoked almost anywhere, and they may contribute less to the given requirement than functionally related APIs do. To study the impact of these correct-but-useless APIs, we measure their utility. Our study covers more than 5,000 matched results generated by two specification-based API recommendation techniques. The results show that the matched APIs overlap heavily: 10% of the APIs account for more than 80% of the matched results. These 10% of APIs are all correct, but few of them are used to implement the required functionality. We further propose a heuristic approach to measure utility and conduct an online evaluation with 15 developers. Their reports confirm that matched results with higher utility scores usually involve more programming effort than those with lower scores.
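The abstract does not spell out the utility heuristic, so the following is a minimal sketch of one plausible frequency-based variant: APIs that appear in nearly every matched result (e.g., print/log helpers) receive a weight near zero, while requirement-specific APIs score higher. The API names and results are hypothetical.

```python
from collections import Counter
from math import log

# Hypothetical matched results: each entry is the list of APIs returned
# for one requirement by a specification-based recommender.
matched_results = [
    ["java.io.PrintStream.println", "java.util.List.add"],
    ["java.io.PrintStream.println", "java.util.logging.Logger.info"],
    ["java.util.Collections.sort", "java.io.PrintStream.println"],
]

# Count how many results each API appears in.
freq = Counter(api for result in matched_results for api in set(result))
n = len(matched_results)

def utility(api):
    # IDF-style weight: APIs that appear in almost every result
    # (print/log helpers) get a weight near 0.
    return log(n / freq[api])

for result in matched_results:
    score = sum(utility(a) for a in result) / len(result)
    print(f"{score:.3f}  {result}")
```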
Context: Software security is an imperative aspect of software quality. Early detection of vulnerable code during development can better ensure the security of the codebase and minimize testing efforts. Although traditional software metrics are used for early detection of vulnerabilities, they do not clearly address the granularity level needed to precisely pinpoint vulnerabilities. The goal of this study is to employ method-level traceable patterns (nano-patterns) in vulnerability prediction and empirically compare their performance with traditional software metrics. The concept of nano-patterns is similar to design patterns, but these constructs can be automatically recognized and extracted from source code. If nano-patterns can predict vulnerable methods better than software metrics can, they can be used to develop vulnerability prediction models with better accuracy. Aims: This study explores the performance of method-level patterns in vulnerability prediction. We also compare them with method-level software metrics. Method: We studied vulnerabilities reported for two major releases of Apache Tomcat (6 and 7), Apache CXF, and two stand-alone Java web applications. We used three machine learning techniques to predict vulnerabilities using nano-patterns as features. We applied the same techniques using method-level software metrics as features and compared their performance with nano-patterns. Results: We found that nano-patterns show lower false negative rates for classifying vulnerable methods (for Tomcat 6, 21% vs 34.7%) and therefore have higher recall in predicting vulnerable code than the software metrics used. On the other hand, software metrics show higher precision than nano-patterns (79.4% vs 76.6%). Conclusion: In summary, we suggest developers use nano-patterns as features for vulnerability prediction to augment existing approaches, as these code constructs outperform standard metrics in terms of prediction recall.
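As an illustration of the experimental setup, here is a hedged sketch of vulnerability prediction with binary nano-pattern features; the feature matrix, labels, and choice of random forest are placeholders, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)

# Hypothetical data: one row per method, one binary column per nano-pattern,
# and a label marking methods implicated in reported vulnerabilities.
X = rng.integers(0, 2, size=(500, 17))  # 17 nano-patterns (toy count)
y = ((X[:, 0] + X[:, 3] + rng.integers(0, 2, size=500)) >= 2).astype(int)

clf = RandomForestClassifier(random_state=0)
pred = cross_val_predict(clf, X, y, cv=5)
print("precision:", round(precision_score(y, pred), 3))
print("recall:   ", round(recall_score(y, pred), 3))
```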
Confidentiality, integrity, and availability are the principal pillars of secure software. Considering these security principles throughout the different software development phases would reduce software vulnerabilities. This paper measures the impact of different software quality metrics on confidentiality, integrity, and availability for object-oriented PHP applications with reported vulnerabilities. The National Vulnerability Database was used to provide the impact scores on confidentiality, integrity, and availability for the vulnerabilities reported against the selected applications. The paper studies these scores and their correlation with 25 code metrics computed on the vulnerable source code. The results correlate 23.7% of the variability in 'Integrity' with four metrics: Vocabulary Used in Code, Card and Agresti, Intelligent Content, and Efferent Coupling. The Length (Halstead) metric alone could predict about 24.2% of the observed variability in 'Availability'. The results indicate no significant correlation between 'Confidentiality' and the tested code metrics.
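The reported figures come from regressing NVD impact scores on code metrics. Below is a minimal sketch of that kind of analysis, with synthetic data standing in for the Halstead Length values and 'Availability' impact scores; the coefficients and sample size are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Synthetic stand-ins: Halstead Length of vulnerable code and the NVD
# availability-impact score of the matching CVE entries.
length = rng.uniform(200, 5000, size=40).reshape(-1, 1)
availability = 0.001 * length.ravel() + rng.normal(0, 1.5, size=40)

model = LinearRegression().fit(length, availability)
# The paper reports that Length alone explains ~24.2% of 'Availability'.
print(f"R^2 = {model.score(length, availability):.3f}")
```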
The complexity of software quality and testing is currently growing rapidly, bringing numerous challenges, especially when testing mission-critical applications in banking and other critical domains, or when facing new technology trends built on decentralized, non-integrated testing tools. Practical experience shows that software testing has become costly and effort-intensive, with a seemingly unlimited scope. This thesis proposes the Scalable Quality and Testing Lab (SQTL), a centralized quality and testing platform that integrates powerful manual, automation, and business intelligence tools. SQTL helps quality engineers (QEs) effectively organize, manage, and control all testing activities in one centralized lab: starting with creating test cases, then executing different testing types such as web and security testing, and finally analyzing and displaying the results of all testing activities in an interactive dashboard, which allows QEs to forecast new bugs, especially those related to security. The centralized SQTL aims to empower QEs during the testing cycle, help them achieve a higher level of software quality with minimal time, effort, and cost, and decrease the defect density metric.
Information and Communications Technology (ICT) has rationalized government services into a more efficient and transparent government. However, a large share of government services has remained manual due to the high cost of ICT. The purpose of this paper is to explore the role of e-governance and ICT in the legislative management of municipalities in the Philippines. The study adopted the phases of the Princeton Project Management Methodology (PPMM) as the approach in developing LeMTrac. The paper used a developmental-quantitative research design involving two sets of respondents: end-users and IT experts. The majority of the respondents perceived the system as "highly acceptable," with an average Likert score of 4.72 for the ISO 9126 software quality characteristic Usability. The findings also reveal that the integration of LeMTrac within the Sangguniang Bayan (SB) Office in the Municipal Local Government Units (LGUs) of Nabua and Bula, Camarines Sur provided better accessibility, security, and management of documents.
Defects in infrastructure as code (IaC) scripts can have serious consequences, for example, creating large-scale system outages. A taxonomy of IaC defects can be useful for understanding the nature of defects and identifying the activities needed to fix and prevent defects in IaC scripts. The goal of this paper is to help practitioners improve the quality of infrastructure as code (IaC) scripts by developing a defect taxonomy for IaC scripts through qualitative analysis. We develop a taxonomy of IaC defects by applying qualitative analysis to 1,448 defect-related commits collected from open source software (OSS) repositories of the Openstack organization. We conduct a survey with 66 practitioners to assess whether they agree with the identified defect categories included in our taxonomy. We quantify the frequency of the identified defect categories by analyzing 80,425 commits collected from 291 OSS repositories spanning 2005 to 2019. Our defect taxonomy for IaC consists of eight categories, including a category specific to IaC called idempotency (i.e., defects that lead to incorrect system provisioning when the same IaC script is executed multiple times). The surveyed 66 practitioners agree most with the idempotency category. The most frequent defect category is configuration data, i.e., providing erroneous configuration data in IaC scripts. Our taxonomy and the quantified frequency of the defect categories may help advance the science of IaC script quality.
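The paper categorizes commits through qualitative analysis by human raters; a keyword heuristic is only a crude proxy, but the sketch below illustrates how defect-related commit messages could be mapped onto taxonomy categories such as idempotency and configuration data. The keyword patterns are assumptions, not the paper's coding scheme.

```python
import re

# Assumed keyword map: a rough proxy for the paper's qualitative coding.
CATEGORIES = {
    "idempotency":        r"idempoten",
    "configuration data": r"wrong (value|config)|incorrect (setting|option)",
    "syntax":             r"syntax|lint|indent",
    "security":           r"credential|password|ssl|cert",
}

def categorize(commit_message):
    # Return every matching category, or "other" if none match.
    msg = commit_message.lower()
    return [name for name, pat in CATEGORIES.items() if re.search(pat, msg)] or ["other"]

print(categorize("Fix: apply() was not idempotent when rerun on provisioned nodes"))
print(categorize("Correct wrong value for rabbitmq port in nova.pp"))
```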
Security is often a critical problem in software systems, and the consequences of security failures can include substantial economic loss or extensive environmental damage. Developing secure software is challenging, and retrofitting existing systems to introduce security is even harder. In this paper, we propose an automated approach for Finding and Repairing Bugs based on security patterns (FireBugs) to repair defects that cause security vulnerabilities. To locate and fix security bugs, we apply security patterns: reusable solutions that encapsulate a large body of software design experience across many different situations. In the evaluation, we investigated 2,800 Android app repositories and applied our approach to 200 subject projects that use the javax.crypto APIs. The vision of our automated approach is to reduce software maintenance burdens where the number of outstanding software defects exceeds the available resources. Our ultimate vision is to design more security patterns that positively impact software quality by disseminating correlated sets of best security design practices and knowledge.
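FireBugs' actual pattern catalog is not given in the abstract; the following sketch only shows the general idea of pattern-based detection, flagging two well-known javax.crypto misuses (DES, AES in ECB mode) in Java source with an assumed regex-based detector rather than the paper's approach.

```python
import re

# Assumed misuse patterns: two widely documented javax.crypto anti-patterns,
# used here purely for illustration.
WEAK_TRANSFORMS = re.compile(r'Cipher\.getInstance\(\s*"(DES|AES/ECB[^"]*|RC4)"\s*\)')

java_source = '''
Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
Cipher ok = Cipher.getInstance("AES/GCM/NoPadding");
'''

for m in WEAK_TRANSFORMS.finditer(java_source):
    # A repair tool would rewrite the call; here we only report it.
    print(f'weak transformation "{m.group(1)}"; consider "AES/GCM/NoPadding"')
```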
Agile methods frequently have difficulties with quality requirements, often specifying them as stories, e.g., "As a user, I need a safe and secure system." Such projects will generally schedule several capability releases followed by safety and security releases, only to discover user-developer misunderstandings and unsecurable agile code, leading to project failure. Very large agile projects also face further difficulties with project velocity and scalability; examples include trying to use daily standup meetings, 2-week sprints, and shared tacit knowledge rather than documents, and dealing with user-developer misunderstandings. At USC, our Parallel Agile, Executable Architecture research project shows some success at mid-scale (50 developers). We also examined several large (hundreds of developers) TRW projects that succeeded with rapid, high-quality development. The paper elaborates on their common Critical Quality Factors: a concurrent three-team approach, an empowered Keeper of the Project Vision, and a management approach emphasizing qualities.
How to evaluate software reliability from the historical data of embedded software projects is a problem we face in practical engineering. We therefore establish a software reliability evaluation model based on code metrics. This evaluation technique requires aggregating software code metrics into project-level metrics. Commonly used aggregation methods include statistical-value methods, metric distribution methods, and econometric methods. Our concerns are how these methods differ in the software reliability evaluation process, and which of them can improve the accuracy of the reliability assessment model we have established. To address these concerns, we conduct an empirical study of code metric aggregation methods on actual projects. We identify the distributions of the code metrics for the projects under study and, using these distributions, optimize the aggregation of the code metrics and improve the accuracy of the software reliability evaluation model.
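To make the aggregation question concrete, here is a small sketch comparing a statistical-value aggregation (mean, median) with an econometric one (the Gini index) over a synthetic, right-skewed distribution of file-level cyclomatic complexities; the data and metric choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical file-level cyclomatic complexities for one project; code
# metrics are typically right-skewed, hence the log-normal draw.
cc = rng.lognormal(mean=1.5, sigma=0.8, size=300)

def gini(x):
    # Econometric inequality index, one possible aggregation choice.
    x = np.sort(x)
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1).dot(x) / (n * x.sum())

print(f"mean   = {cc.mean():.2f}")     # statistical-value aggregation
print(f"median = {np.median(cc):.2f}") # robust statistical value
print(f"gini   = {gini(cc):.3f}")      # econometric aggregation
```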
The purpose of this paper is to analyze cloud-based service models and the continuous integration, deployment, and delivery process, and to propose an automated continuous testing and testing-as-a-service based TestBot and metrics dashboard, integrated with existing automation, bug logging, build management, configuration, and test management tools. Organizations increasingly use the cloud to save the time, money, and effort required to set up and maintain infrastructure and platforms. Continuous integration and delivery is now common practice within agile methodology, enabling multiple software releases per day and ensuring that development, test, and production environments can be synchronized quickly. In such an agile environment, testing tools and processes must keep pace so that overall regression testing, including functional, performance, and security testing, can run alongside build deployments in real time. To support this, we researched continuous testing and worked with industry professionals involved in architecting, developing, and testing software products. Much research has been done on automating software testing so that products can be tested quickly and the overall testing process optimized. In this paper we propose the ACT TestBot tool and metrics dashboard and coin the term "4S quality metrics" to quantify the quality of the software product. ACT TestBot and the metrics dashboard integrate with continuous integration tools, bug reporting tools, test management tools, and data analytics tools to trigger automation scripts, continuously analyze application logs, open defects automatically, and generate metrics reports. A defect pattern report is created to support root cause analysis and preventive action.
Static application security testing (SAST) detects vulnerability warnings through static program analysis. Fixing these warnings tremendously improves software quality; however, SAST has not been fully utilized by developers for various reasons: difficulties in handling a large number of reported warnings, a high rate of false warnings, and a lack of guidance in fixing the reported warnings. In this paper, we collaborated with security experts from a commercial SAST product and propose a set of approaches (Priv) to help developers better utilize SAST techniques. First, Priv identifies preferred fix locations for the detected vulnerability warnings and groups the warnings that share common fix locations. Priv also leverages visualization techniques so that developers can quickly investigate warnings in groups and prioritize their quality-assurance effort. Second, Priv identifies actionable vulnerability warnings by removing SAST-specific false positives. Finally, Priv provides customized fix suggestions for vulnerability warnings. Our evaluation of Priv on six web applications highlights its accuracy and effectiveness. For 75.3% of the vulnerability warnings, the preferred fix locations found by Priv are identical to those annotated by security experts. The visualization based on shared preferred fix locations is useful for prioritizing quality-assurance efforts. Priv significantly reduces the rate of SAST-specific false positives, and it provides complete and correct fix suggestions for 75.6% of the evaluated warnings. Priv has been well received by security experts, and some of its features are already integrated into industrial practice.
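A minimal sketch of the grouping step, assuming warnings and their preferred fix locations are already computed (Priv derives the locations itself; the identifiers below are hypothetical). Fixing the largest group first resolves the most warnings per edit.

```python
from collections import defaultdict

# Hypothetical warnings: (warning id, preferred fix location).
warnings = [
    ("XSS-1",  "render.jsp:88"),
    ("XSS-2",  "render.jsp:88"),
    ("SQLI-1", "UserDao.java:42"),
    ("XSS-3",  "render.jsp:88"),
]

groups = defaultdict(list)
for wid, loc in warnings:
    groups[loc].append(wid)

# Rank shared fix locations by how many warnings they would resolve.
for loc, wids in sorted(groups.items(), key=lambda kv: -len(kv[1])):
    print(f"{loc}: {len(wids)} warnings -> {wids}")
```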
The Java programming language is considered highly important due to its extensive use in developing web, desktop, and handheld device applications. Enforcing Java coding standards on Java code is valuable because it creates consistency and, as a result, better development and maintenance. Finding bugs and standards violations matters most at an early stage of software development, rather than at a later stage when change becomes impossible or too expensive. This paper reviews tools and research work on coding standard analyzers. These tools are categorized by the type of rules they check, namely style, concurrency, exceptions, quality, security, dependency, and general methods of static code analysis. Finally, a list of Java coding standard enforcement tools is analyzed against predefined parameters limited by the scope of this paper. This review provides a basis for selecting a static code analysis tool that enforces international Java coding standards such as the Rule of Ten and the JPL Coding Standards. Such tools are especially important in the development of mission- and safety-critical systems. This work can help developers select a suitable tool for Java code analysis according to their requirements.
Defect prediction is an active topic in software quality assurance that can help developers find potential bugs and make better use of resources. To improve prediction performance, this paper introduces cross-entropy, a common measure for natural language, as a new code metric for defect prediction tasks and proposes a framework called DefectLearner for this process. We first build a recurrent neural network language model to learn regularities in source code from a software repository. Based on the trained model, the cross-entropy of each component can be calculated. To evaluate its discriminative power for defect-proneness, cross-entropy is compared with 20 widely used metrics on 12 open-source projects. The experimental results show that the cross-entropy metric is more discriminative than 50% of the traditional metrics. Moreover, we combine cross-entropy with traditional metric suites for more accurate defect prediction. With cross-entropy added, the performance of the prediction models improves by an average of 2.8% in F1-score.
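The paper trains a recurrent neural network language model; to keep the sketch short, the following uses a bigram model with add-one smoothing, which is enough to show how per-component cross-entropy is computed. The corpus and token sequences are toy examples.

```python
from collections import Counter
from math import log2

# Toy tokenized code lines standing in for the repository used to train
# the language model.
corpus = [["if", "(", "x", ")", "{"], ["if", "(", "y", ")", "{"], ["return", "x", ";"]]

bigrams, unigrams = Counter(), Counter()
for line in corpus:
    toks = ["<s>"] + line
    unigrams.update(toks[:-1])
    bigrams.update(zip(toks, toks[1:]))

def cross_entropy(tokens):
    toks = ["<s>"] + tokens
    # Add-one smoothing so unseen bigrams stay finite.
    v = len(unigrams) + 1
    lp = [log2((bigrams[(a, b)] + 1) / (unigrams[a] + v))
          for a, b in zip(toks, toks[1:])]
    return -sum(lp) / len(lp)

print(cross_entropy(["if", "(", "x", ")", "{"]))  # low: familiar code
print(cross_entropy(["goto", "fail", ";"]))       # high: surprising code
```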
Code churn has been successfully used to identify defect-inducing changes in software development. Our recent analysis of cross-release code churn showed that several design metrics exhibit moderate correlation with the number of defects in complex systems. The goal of this paper is to explore whether cross-release code churn can be used to identify critical design changes and contribute to defect prediction for software in evolution. In our case study, we used two types of data from consecutive releases of open-source projects, with and without cross-release code churn, to build standard prediction models. The prediction models were trained on earlier releases and tested on the following ones, with performance evaluated in terms of AUC, GM, and the effort-aware measure Pop. The comparison of their performance was used to answer our research question. The results show that the prediction model performs better when cross-release code churn is included. A practical implication of this research is to use cross-release code churn to aid safe planning of the next release in software development.
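A hedged sketch of the comparison: train a defect prediction model with and without a churn feature and compare AUC. The data are synthetic, and unlike the paper, the sketch evaluates on the training data for brevity rather than on a later release.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Hypothetical per-file features and defect labels.
size = rng.lognormal(5, 1, 400)
churn = rng.lognormal(3, 1, 400)  # cross-release code churn
defective = (rng.random(400) < 1 / (1 + np.exp(-(0.002 * churn - 1)))).astype(int)

def auc(features):
    X = np.column_stack(features)
    clf = LogisticRegression(max_iter=1000).fit(X, defective)
    return roc_auc_score(defective, clf.predict_proba(X)[:, 1])

print("without churn:", round(auc([size]), 3))
print("with churn:   ", round(auc([size, churn]), 3))
```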
Software insecurity has been identified as one of the leading causes of security breaches. In this paper, we revisit one strategy for addressing software insecurity: the use of software quality metrics. We used a multilayer deep feedforward network to examine whether a combination of metrics can predict the appearance of security-related bugs. We also applied traditional machine learning algorithms, such as decision trees, random forests, naïve Bayes, and support vector machines, and compared the results with those of the deep learning technique. The results demonstrate that it is possible to develop an effective predictive model that forecasts software insecurity from software metrics using deep learning. All the generated models showed an accuracy of more than sixty percent, with deep learning leading the list. This finding shows that deep learning methods combined with software metrics can be tapped to create better forecasting models, thereby aiding software developers in predicting security bugs.
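A minimal sketch of the model comparison using scikit-learn, with synthetic metric data; the network width, depth, and baselines are placeholders rather than the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Hypothetical software metrics per component and a security-bug label.
X = rng.normal(size=(300, 10))
y = (X[:, 0] + X[:, 1] ** 2 + rng.normal(0, 0.5, 300) > 1).astype(int)

models = {
    "deep feedforward": MLPClassifier(hidden_layer_sizes=(32, 32),
                                      max_iter=2000, random_state=0),
    "decision tree":    DecisionTreeClassifier(random_state=0),
    "naive Bayes":      GaussianNB(),
}
for name, m in models.items():
    acc = cross_val_score(m, X, y, cv=5).mean()  # mean accuracy over 5 folds
    print(f"{name}: {acc:.2f}")
```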
Trustworthiness is a paramount concern for users and customers selecting a software solution, especially in the context of complex and dynamic environments such as cloud and IoT. However, assessing and benchmarking trustworthiness (the worthiness of software for being trusted) is a challenging task, mainly due to the variety of application scenarios (e.g., business-critical, safety-critical), the large number of determinative quality attributes (e.g., security, performance), and, last but foremost, the subjective notion of trust and trustworthiness. In this paper, we present trustworthiness as a measurable notion in relative terms based on security attributes and propose an approach for the assessment and benchmarking of software. The main goal is to build a trustworthiness assessment model based on software metrics (e.g., Cyclomatic Complexity, CountLine, CBO) that can be used as indicators of software security. To demonstrate the proposed approach, we assessed and ranked several files and functions of the Mozilla Firefox project by their trustworthiness score and conducted a survey among software security experts to validate the obtained ranking. Results show that our approach provides a sound ranking of the benchmarked software.
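The abstract does not define the scoring function, so the sketch below assumes a simple normalize-and-average score in which lower complexity, size, and coupling yield higher trustworthiness; the file names and metric values are hypothetical.

```python
# Hypothetical per-file metrics: cyclomatic complexity, lines, coupling (CBO).
files = {
    "nsSocketTransport.cpp": {"cc": 58, "loc": 3200, "cbo": 21},
    "nsCookieService.cpp":   {"cc": 23, "loc": 1100, "cbo": 9},
    "nsPrefBranch.cpp":      {"cc": 7,  "loc": 450,  "cbo": 4},
}

def trustworthiness(metrics, maxima):
    # Assumed scoring: normalize each metric to [0, 1] and average,
    # so lower metric values produce a higher score.
    return 1 - sum(metrics[k] / maxima[k] for k in metrics) / len(metrics)

maxima = {k: max(f[k] for f in files.values()) for k in ("cc", "loc", "cbo")}
for name, m in sorted(files.items(), key=lambda kv: -trustworthiness(kv[1], maxima)):
    print(f"{trustworthiness(m, maxima):.3f}  {name}")
```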
Nowadays, honeypots are a key tool for attracting attackers and studying their activity. They help us evaluate attackers' behaviour, discover new types of attacks, and collect information and statistics associated with them. However, the gathered data cannot be interpreted directly; it must be analyzed to obtain useful information. In this paper, we present an SSH honeypot-based system designed to simulate a vulnerable server, and we propose an approach for classifying metrics derived from the data collected by the honeypot over 19 months.
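As an illustration of turning raw honeypot data into metrics, here is a sketch that extracts per-attacker login attempts and success rates from hypothetical log lines; real SSH honeypots (e.g., Cowrie) emit richer JSON events, and the paper's actual metrics and classifier are not specified in the abstract.

```python
from collections import Counter

# Hypothetical SSH-honeypot log lines: timestamp, source IP, action, credentials.
log = [
    "2023-01-04 10:02:11 203.0.113.7 login-failed root/123456",
    "2023-01-04 10:02:13 203.0.113.7 login-failed root/password",
    "2023-01-04 10:02:15 203.0.113.7 login-ok admin/admin",
    "2023-01-04 11:40:02 198.51.100.9 login-failed pi/raspberry",
]

attempts, successes = Counter(), Counter()
for line in log:
    ip, action = line.split()[2:4]
    attempts[ip] += 1
    successes[ip] += action == "login-ok"

# Simple per-attacker metrics of the kind one can aggregate over months.
for ip in attempts:
    rate = successes[ip] / attempts[ip]
    print(f"{ip}: {attempts[ip]} attempts, success rate {rate:.2f}")
```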
The article considers an approach to identifying potentially unsafe data in the program code of embedded systems, data that can lead to errors and failures in the functioning of equipment. The sources of invalid data are identified, and the process of changing the status of such data during static code analysis is shown. A mechanism for annotating functions that operate on unsafe data is described; it makes it possible to control the entire process of their use and thus improve the quality of the output code.
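A minimal sketch of the status-tracking idea, assuming statements are already reduced to (target, source) pairs and that source/validator annotations are given; this illustrates taint-style status propagation in general, not the article's specific mechanism.

```python
# Values read from annotated "unsafe" sources stay tainted through
# assignments until a function annotated as a validator clears them.
UNSAFE_SOURCES = {"read_adc", "uart_recv"}       # assumed annotations
VALIDATORS     = {"clamp_range", "checksum_ok"}  # assumed annotations

# Each statement: (target variable, source function or variable).
program = [
    ("raw",  "uart_recv"),
    ("val",  "raw"),
    ("val",  "clamp_range"),
    ("duty", "val"),
]

tainted = set()
for target, source in program:
    if source in UNSAFE_SOURCES:
        tainted.add(target)          # data enters from an unsafe source
    elif source in VALIDATORS:
        tainted.discard(target)      # validated data loses unsafe status
    elif source in tainted:
        tainted.add(target)          # taint propagates through assignment
    else:
        tainted.discard(target)      # overwritten with safe data
    print(f"{target} <- {source}: tainted = {sorted(tainted)}")
```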