How have we evaluated software pattern application? A systematic mapping study of research design practices

Title: How have we evaluated software pattern application? A systematic mapping study of research design practices
Publication Type: Journal Article
Year of Publication: 2015
Authors: Maria Riaz, Travis Breaux, Laurie Williams
Journal: Information and Software Technology
Volume: 65
Start Page: 14
Issue: C
Pagination: 14-38
Date Published: 09/2015
Keywords: CMU, Oct'15
Abstract

Context: Software patterns encapsulate expert knowledge for constructing successful solutions to recurring problems. Although a large collection of software patterns is available in the literature, empirical evidence on how well various patterns help in problem solving is limited and inconclusive. The context of these empirical findings is also not well understood, limiting the applicability and generalizability of the findings.

Objective: To characterize the research design of empirical studies exploring software pattern application involving human participants.

Method: We conducted a systematic mapping study to identify and analyze 30 primary empirical studies on software pattern application, including 24 original studies and 6 replications. We characterize the research design in terms of the questions researchers have explored and the context of the empirical research efforts. We also classify the studies in terms of the measures used for evaluation and the threats to validity considered during study design and execution.

Results: Use of software patterns in maintenance is the most commonly investigated theme, explored in 16 studies. Object-oriented design patterns are evaluated in 14 studies, while 4 studies evaluate architectural patterns. We identified 10 different constructs with 31 associated measures used to evaluate software patterns. Measures for 'efficiency' and 'usability' are commonly used to evaluate the problem-solving process, while measures for 'completeness', 'correctness' and 'quality' are commonly used to evaluate the final artifact. Overall, 'time to complete a task' is the most frequently used measure, employed in 15 studies to measure 'efficiency'. For qualitative measures, studies do not report approaches for minimizing bias 27% of the time. Nine studies do not discuss any threats to validity.

Conclusion: Subtle differences in study design and execution can limit comparison of findings. Establishing baselines for participants' experience level, providing appropriate training, standardizing problem sets, and employing commonly used measures to evaluate performance can support replication and comparison of results across studies.

DOI: 10.1016/j.infsof.2015.04.002
Citation Key: node-30322

Other available formats:

Riaz_How_have_we_evaluated_TB.pdf (PDF document, 1.25 MB)