Biblio

Filters: Keyword is code coverage
2019-11-12
E.V., Jaideep Varier, V., Prabakar, Balamurugan, Karthigha.  2019.  Design of Generic Verification Procedure for IIC Protocol in UVM. 2019 3rd International Conference on Electronics, Communication and Aerospace Technology (ICECA). :1146–1150.

With the growth of technology, designs have become more complex and may contain bugs, which makes verification an indispensable part of product development. UVM describes a standard method for the verification of designs that is reusable and portable. This paper verifies the IIC bus protocol using the Universal Verification Methodology. The IIC controller is designed in Verilog using Vivado. It has an APB interface, and its functional and code coverage are measured in Mentor Graphics QuestaSim 10.4e. This work achieved 83.87% code coverage and 91.11% functional coverage.

2018-02-02
Rotella, P., Chulani, S.  2017.  Predicting Release Reliability. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :39–46.

Customers need to know how reliable a new release is, and whether or not the new release has substantially different, either better or worse, reliability than the one currently in production. Customers are demanding quantitative evidence, based on pre-release metrics, to help them decide whether or not to upgrade (and thereby offer new features and capabilities to their customers). Finding ways to estimate future reliability performance is not easy: we have evaluated many pre-release development and test metrics in search of reliability predictors that are sufficiently accurate and also apply to a broad range of software products. This paper describes a successful model that has resulted from these efforts, and also presents both a functional extension and a further conceptual simplification of the extended model that enables us to better communicate key release information to internal stakeholders and customers, without sacrificing predictive accuracy or generalizability. Work remains to be done, but the results of the original model, the extended model, and the simplified version are encouraging and are currently being applied across a range of products and releases. To evaluate whether or not these early predictions are accurate, and also to compare releases that are available to customers, we use a field software reliability assessment mechanism that incorporates two types of customer experience metrics: field bug encounters normalized by usage, and field bug counts, also normalized by usage. Our 'release-over-release' strategy combines the 'maturity assessment' component (i.e., estimating reliability prior to release to the field) and the 'reliability assessment' component (i.e., gauging actual reliability after release to the field). This overall approach enables us to both predict reliability and compare reliability results for recent releases of a product.
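
The "normalized by usage" idea behind the field metrics above can be made concrete with a small sketch. The Java class below is our own illustration, not the authors' model: the field names, the choice of usage unit, and the sample numbers are all assumptions, and the point is only the arithmetic of comparing two releases on bugs per unit of usage.

```java
// Illustrative sketch (not the paper's model): comparing two releases by
// usage-normalized field bug counts, as described in the abstract.
public class ReleaseReliability {

    /** Hypothetical per-release field data. */
    static final class ReleaseData {
        final String name;
        final long fieldBugs;    // field bug count observed after release
        final double usageUnits; // assumed usage measure, e.g. install-months

        ReleaseData(String name, long fieldBugs, double usageUnits) {
            this.name = name;
            this.fieldBugs = fieldBugs;
            this.usageUnits = usageUnits;
        }

        /** Bugs normalized by usage, so releases with different adoption are comparable. */
        double bugsPerUsageUnit() {
            return fieldBugs / usageUnits;
        }
    }

    public static void main(String[] args) {
        ReleaseData current = new ReleaseData("R1", 120, 40_000); // sample numbers
        ReleaseData next    = new ReleaseData("R2",  90, 25_000);

        System.out.printf("%s: %.4f bugs per usage unit%n", current.name, current.bugsPerUsageUnit());
        System.out.printf("%s: %.4f bugs per usage unit%n", next.name, next.bugsPerUsageUnit());

        // Release-over-release comparison: a ratio > 1 means the newer release
        // is showing more field bugs per unit of usage than the older one.
        double ratio = next.bugsPerUsageUnit() / current.bugsPerUsageUnit();
        System.out.printf("R2/R1 normalized bug ratio: %.2f%n", ratio);
    }
}
```

Normalizing by usage matters because a release with more adopters will usually accumulate more raw bug reports even if it is no less reliable.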

2017-08-02
Niedermayr, Rainer, Juergens, Elmar, Wagner, Stefan.  2016.  Will My Tests Tell Me if I Break This Code? Proceedings of the International Workshop on Continuous Software Evolution and Delivery. :23–29.

Automated tests play an important role in software evolution because they can rapidly detect faults introduced during changes. In practice, code-coverage metrics are often used as criteria to evaluate the effectiveness of test suites with focus on regression faults. However, code coverage only expresses which portion of a system has been executed by tests, but not how effective the tests actually are in detecting regression faults. Our goal was to evaluate the validity of code coverage as a measure for test effectiveness. To do so, we conducted an empirical study in which we applied an extreme mutation testing approach to analyze the tests of open-source projects written in Java. We assessed the ratio of pseudo-tested methods (those tested in a way such that faults would not be detected) to all covered methods and judged their impact on the software project. The results show that the ratio of pseudo-tested methods is acceptable for unit tests but not for system tests (that execute large portions of the whole system). Therefore, we conclude that the coverage metric is only a valid effectiveness indicator for unit tests.
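
To make the notion of a pseudo-tested method concrete, the JUnit 4 sketch below (our own example, not taken from the study) shows a method that is covered by a weak test whose assertions would still pass under an extreme mutation that empties the method body, alongside a stronger test that would detect it. JUnit 4 on the classpath is assumed.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;

// Illustration of a "pseudo-tested" method: covered by a test, yet an
// extreme mutation (emptying the method body) would go undetected.
public class CacheTest {

    static class Cache {
        private final java.util.Map<String, String> entries = new java.util.HashMap<>();

        // If this body were removed by an extreme mutation, putDoesNotCrash()
        // would still pass: the method is executed, but its effect is never asserted.
        void put(String key, String value) {
            entries.put(key, value);
        }

        String get(String key) {
            return entries.get(key);
        }
    }

    @Test
    public void putDoesNotCrash() {
        Cache cache = new Cache();
        cache.put("k", "v");   // covers put(), but...
        assertNotNull(cache);  // ...asserts nothing about put()'s effect -> pseudo-tested
    }

    @Test
    public void putStoresValue() {
        Cache cache = new Cache();
        cache.put("k", "v");
        // A stronger assertion like this one would detect the extreme mutation.
        assertEquals("v", cache.get("k"));
    }
}
```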

Hirzel, Matthias, Klaeren, Herbert.  2016.  Code Coverage for Any Kind of Test in Any Kind of Transcompiled Cross-platform Applications. Proceedings of the 2nd International Workshop on User Interface Test Automation. :1–10.

Code coverage is a widely used measure to determine how thoroughly an application is tested. There are many tools available for different languages. However, to the best of our knowledge, most of them focus on unit testing and ignore end-to-end tests such as UI or web tests. Furthermore, there is no support for determining code coverage of transcompiled cross-platform applications. This kind of application is written in one language, but compiled to and executed in a different programming language; it may also run on a different platform. In this paper, we propose a new code coverage testing method that calculates the code coverage of any kind of test (unit, integration, or UI/web test) for any type of (transcompiled) application (desktop, web, or mobile). Developers obtain information about which parts of the source code are uncovered by tests. The basis of our approach is generic, as it operates on an abstract syntax tree, and may therefore be applied to numerous programming languages. We present our approach for applications of any kind developed in Java and evaluate our tool on a web application created with Google Web Toolkit, on standard desktop applications, and on some small Java applications that use the Swing library to create user interfaces. Our results show that our tool is able to judge the code coverage of any kind of test. In particular, our tool is independent of the unit or UI/web test framework in use. The runtime performance is promising, although it is not as fast as already existing tools in the area of unit testing.
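
The core idea of source-level instrumentation that such an approach builds on can be sketched as follows. The CoverageRegistry class, the statement ids, and the hand-instrumented abs method below are hypothetical illustrations of ours; the actual tool derives the instrumentation from the abstract syntax tree so that the marker calls survive transcompilation to another language or platform.

```java
import java.util.BitSet;

// Hypothetical sketch of source-level coverage instrumentation: a marker call
// is inserted before each statement, so coverage can be recorded regardless of
// which language/platform the code is eventually transcompiled to.
public class CoverageRegistry {

    private static final BitSet EXECUTED = new BitSet();

    /** Record that the statement with the given id has been executed. */
    public static void mark(int statementId) {
        EXECUTED.set(statementId);
    }

    /** Fraction of instrumented statements executed so far. */
    public static double coverage(int totalStatements) {
        return (double) EXECUTED.cardinality() / totalStatements;
    }

    // What an instrumented method might look like after the AST-based
    // transformation (statement ids would be assigned by the tool).
    static int abs(int x) {
        CoverageRegistry.mark(0);
        if (x < 0) {
            CoverageRegistry.mark(1);
            return -x;
        }
        CoverageRegistry.mark(2);
        return x;
    }

    public static void main(String[] args) {
        abs(5);                          // executes statements 0 and 2 only
        System.out.println(coverage(3)); // prints 0.666... (2 of 3 statements)
    }
}
```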

2016-12-09
Xia Zeng, Tencent, Inc., Dengfeng Li, University of Illinois at Urbana-Champaign, Wujie Zheng, Tencent, Inc., Yuetang Deng, Tencent, Inc., Wing Lam, University of Illinois at Urbana-Champaign, Wei Yang, University of Illinois at Urbana-Champaign, Tao Xie, University of Illinois at Urbana-Champaign.  2016.  Automated Test Input Generation for Android: Are We Really There Yet in an Industrial Case? 24th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE 2016).

Given the ever increasing number of research tools to automatically generate inputs to test Android applications (or simply apps), researchers recently asked the question "Are we there yet?" (in terms of the practicality of the tools). By conducting an empirical study of the various tools, the researchers found that Monkey (the most widely used tool of this category in industrial settings) outperformed all of the research tools in the study. In this paper, we present two significant extensions of that study. First, we conduct the first industrial case study of applying Monkey against WeChat, a popular messenger app with over 762 million monthly active users, and report the empirical findings on Monkey's limitations in an industrial setting. Second, we develop a new approach to address major limitations of Monkey and accomplish substantial code-coverage improvements over Monkey. We conclude the paper with empirical insights for future enhancements to both Monkey and our approach.
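
For context, Monkey injects a stream of pseudo-random UI events into the app under test, and a run of the kind studied here is typically launched via adb. The wrapper below is our own sketch, not the authors' tooling: the package name is a placeholder, adb on the PATH with a connected device is assumed, and -p, -s, --throttle, and -v are standard Monkey options.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Illustrative wrapper (not the paper's tooling) that launches a standard
// Monkey run against an app under test via adb.
public class MonkeyRunner {

    public static void main(String[] args) throws Exception {
        String packageName = "com.example.app"; // placeholder package name
        long seed = 42L;                        // fixed seed for a reproducible event stream
        int eventCount = 5000;                  // number of pseudo-random UI events
        int throttleMs = 100;                   // delay between events, in milliseconds

        Process monkey = new ProcessBuilder(
                "adb", "shell", "monkey",
                "-p", packageName,
                "-s", Long.toString(seed),
                "--throttle", Integer.toString(throttleMs),
                "-v", Integer.toString(eventCount))
                .redirectErrorStream(true)
                .start();

        // Stream Monkey's verbose output, which reports injected events and any crashes/ANRs.
        try (BufferedReader out = new BufferedReader(
                new InputStreamReader(monkey.getInputStream()))) {
            String line;
            while ((line = out.readLine()) != null) {
                System.out.println(line);
            }
        }
        System.out.println("Monkey exited with code " + monkey.waitFor());
    }
}
```

Fixing the seed makes the random event sequence repeatable, which is what makes it possible to compare coverage across runs and tools.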