Biblio

Filters: Keyword is performance testing
2020-11-02
Hamad, R. M. H., Fayoumi, M. Al.  2019.  Scalable Quality and Testing Lab (SQTL): Mission-Critical Applications Testing. 2019 International Conference on Computer and Information Sciences (ICCIS). :1–7.

Currently, the complexity of software quality assurance and testing is growing rapidly, posing numerous challenges, especially when testing mission-critical applications in banking and other critical domains, or when adopting new technology trends with decentralized, non-integrated testing tools. In practice, software testing has become costly and effort-intensive, with an almost unlimited scope. This paper proposes the Scalable Quality and Testing Lab (SQTL), a centralized quality and testing platform that integrates powerful manual, automation, and business intelligence tools. SQTL helps quality engineers (QE) effectively organize, manage, and control all testing activities in one centralized lab, starting with creating test cases, then executing different testing types such as web and security testing, and finally analyzing and displaying the results of all testing activities in an interactive dashboard, which allows QE to forecast new bugs, especially those related to security. The centralized SQTL aims to empower QE throughout the testing cycle, helping them achieve a greater level of software quality with minimum time, effort, and cost, and to decrease the defect-density metric.
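The paper does not publish SQTL's internals, but the workflow the abstract describes (organize test cases centrally, execute them per testing type, and roll the results up into a dashboard metric such as defect density) can be sketched. The following Python sketch is purely illustrative; every class, field, and function name in it is a hypothetical stand-in, not SQTL's actual API.

```python
"""Minimal sketch of a centralized test-lab workflow in the spirit of SQTL:
create test cases, execute them by testing type, and aggregate results for
a dashboard. All names here are hypothetical stand-ins."""
from dataclasses import dataclass, field


@dataclass
class TestCase:
    name: str
    test_type: str           # e.g. "web", "security", "performance"
    passed: bool | None = None


@dataclass
class TestLab:
    """One central place to organize, execute, and report on test cases."""
    cases: list[TestCase] = field(default_factory=list)

    def add(self, case: TestCase) -> None:
        self.cases.append(case)

    def execute_all(self, runner) -> None:
        # `runner` is any callable mapping a TestCase to pass/fail; in a
        # real lab it would dispatch to manual or automated test tools.
        for case in self.cases:
            case.passed = runner(case)

    def dashboard_summary(self) -> dict[str, float]:
        # Defect density per testing type: failed cases / total cases.
        summary: dict[str, float] = {}
        for ttype in {c.test_type for c in self.cases}:
            group = [c for c in self.cases if c.test_type == ttype]
            failed = sum(1 for c in group if c.passed is False)
            summary[ttype] = failed / len(group)
        return summary


if __name__ == "__main__":
    lab = TestLab()
    lab.add(TestCase("login form XSS", "security"))
    lab.add(TestCase("checkout flow", "web"))
    lab.execute_all(lambda case: case.test_type != "security")  # stub runner
    print(lab.dashboard_summary())  # e.g. {'security': 1.0, 'web': 0.0}
```

The single `TestLab` object stands in for the "one centralized lab" idea: because every test case and result flows through it, a dashboard metric like defect density can be computed in one place rather than stitched together from separate tools.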

2018-06-20
Michael, Nicolas, Ramannavar, Nitin, Shen, Yixiao, Patil, Sheetal, Sung, Jan-Lung.  2017.  CloudPerf: A Performance Test Framework for Distributed and Dynamic Multi-Tenant Environments. Proceedings of the 8th ACM/SPEC on International Conference on Performance Engineering. :189–200.

The evolution of cloud computing imposes many challenges on performance testing and requires not only a different approach and methodology of performance evaluation and analysis, but also specialized tools and frameworks to support such work. In traditional performance testing, typically a single workload was run against a static test configuration. The main metrics derived from such experiments included throughput, response times, and system utilization at steady state. While this may have been sufficient in the past, where in many cases a single application was run on dedicated hardware, this approach is no longer suitable for cloud-based deployments. Whether private or public cloud, such environments typically host a variety of applications on distributed shared hardware resources, simultaneously accessed by a large number of tenants running heterogeneous workloads. The number of tenants as well as their activity and resource needs change dynamically over time, and the cloud infrastructure reacts to this by reallocating existing or provisioning new resources. Besides metrics such as the number of tenants and overall resource utilization, performance testing in the cloud must be able to answer many more questions: How is the quality of service of a tenant impacted by the constantly changing activity of other tenants? How long does it take the cloud infrastructure to react to changes in demand, and what is the effect on tenants while it does so? How well are service level agreements met? What is the resource consumption of individual tenants? How can global performance metrics at the application and system level in a distributed system be correlated to an individual tenant's perceived performance? In this paper we present CloudPerf, a performance test framework specifically designed for distributed and dynamic multi-tenant environments, capable of answering all of the above questions, and more. CloudPerf consists of a distributed harness, a protocol-independent load generator and workload modeling framework, an extensible statistics framework with live-monitoring and post-analysis tools, interfaces for cloud deployment operations, and a rich set of both low-level and high-level workloads from different domains.
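CloudPerf's own code is not shown in the abstract, but the central measurement question it raises (how one tenant's changing activity affects another tenant's quality of service) can be illustrated with a toy model. The Python sketch below is a hypothetical single-process stand-in under assumed names; the real framework is a distributed, protocol-independent harness with live monitoring, none of which is modeled here.

```python
"""Toy model of a dynamic multi-tenant experiment: two tenants share a
resource, one tenant ramps its load mid-run, and per-tenant response times
are recorded so the "noisy neighbor" effect can be observed. Purely
illustrative; the contention model and all names are assumptions."""
import random
import statistics


def service_time(total_active_load: int) -> float:
    # Toy contention model: response time grows with the combined load
    # all tenants place on the shared resource at that moment.
    return 10.0 + 2.0 * total_active_load + random.uniform(0.0, 1.0)


def run_experiment(steps: int = 100) -> dict[str, list[float]]:
    # Tenant A holds a steady load; tenant B ramps up halfway through,
    # letting us correlate B's activity with A's perceived latency.
    latencies: dict[str, list[float]] = {"A": [], "B": []}
    for t in range(steps):
        load = {"A": 5, "B": 0 if t < steps // 2 else 20}
        total = sum(load.values())
        for tenant, reqs in load.items():
            if reqs:
                latencies[tenant].append(service_time(total))
    return latencies


if __name__ == "__main__":
    lat = run_experiment()
    half = len(lat["A"]) // 2
    print("tenant A p50 before B ramps:", statistics.median(lat["A"][:half]))
    print("tenant A p50 after  B ramps:", statistics.median(lat["A"][half:]))
```

Even this toy shows why steady-state throughput alone is insufficient for cloud environments: tenant A's median latency roughly triples once tenant B ramps up, although A's own workload never changed, which is exactly the kind of per-tenant, time-varying correlation the paper argues a cloud performance framework must capture.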