Biblio

2017-07-11
Tingting Yu, Witawas Srisa-an, Gregg Rothermel. 2017. An automated framework to support testing for process-level race conditions. Software Testing, Verification and Reliability.

Race conditions are difficult to detect because they usually occur only under specific execution interleavings. Numerous program analysis and testing techniques have been proposed to detect race conditions between threads on single applications. However, most of these techniques neglect races that occur at the process level due to complex system event interactions. This article presents a framework, SIMEXPLORER, that allows engineers to effectively test for process-level race conditions. SIMEXPLORER first uses dynamic analysis techniques to observe system execution, identify program locations of interest, and report faults related to oracles. Next, it uses virtualization to achieve the fine-grained controllability needed to exercise event interleavings that are likely to expose races. We evaluated the effectiveness of SIMEXPLORER on 24 real-world applications containing both known and unknown process-level race conditions. Our results show that SIMEXPLORER is effective at detecting these race conditions, while incurring an overhead that is acceptable given its effectiveness improvements.
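To illustrate the class of bug the paper targets, the sketch below (not taken from SIMEXPLORER; the file name and delay are illustrative only) shows a process-level race in which two processes perform an unsynchronized check-then-write on a shared file, so whether an update is lost depends on how the operating system interleaves them.

    import os
    import time
    from multiprocessing import Process

    SHARED_FILE = "shared_counter.txt"  # hypothetical shared resource

    def unsafe_increment():
        # Check-then-act with no cross-process locking (e.g., no fcntl lock).
        if os.path.exists(SHARED_FILE):
            with open(SHARED_FILE) as f:
                count = int(f.read())
        else:
            count = 0
        time.sleep(0.01)  # widen the race window so the interleaving is easy to hit
        with open(SHARED_FILE, "w") as f:
            f.write(str(count + 1))  # lost update if both processes read the same count

    if __name__ == "__main__":
        if os.path.exists(SHARED_FILE):
            os.remove(SHARED_FILE)
        procs = [Process(target=unsafe_increment) for _ in range(2)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        with open(SHARED_FILE) as f:
            print("final count:", f.read())  # frequently 1 instead of the expected 2

Running this repeatedly usually prints a final count of 1 rather than 2; exposing that outcome reliably is exactly the kind of interleaving control the framework aims to provide.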

2016-12-08
Supat Rattanasuksun, Tingting Yu, Witawas Srisa-an, Gregg Rothermel. 2016. RRF: A Race Reproduction Framework for Use in Debugging Process-Level Races. 27th International Symposium on Software Reliability Engineering (ISSRE).

Process-level races are endemic in modern systems. These races are difficult to debug because they are sensitive to execution events such as interrupts and scheduling. Unless a process interleaving that can result in the race can be found, the race cannot be reproduced and cannot be corrected. In practice, however, the number of interleavings that can occur among processes is large, and the patterns of interleavings can be complex. Thus, approaches for reproducing process-level races have, to date, often been ineffective. In this paper, we present RRF, a race reproduction framework that can help software engineers reproduce reported process-level races, enabling them to potentially debug these races. RRF performs a hybrid analysis by leveraging existing static program analysis tools, dynamic kernel event reporting tools, and yield points to provide the observability and controllability needed to reproduce races. We conducted an empirical study to evaluate RRF; our results show that RRF can be effective for reproducing races.
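As a rough illustration of the yield-point idea (a hypothetical sketch, not RRF's actual instrumentation; the tag name and the RACE_DELAY_TAGS variable are invented for the example), the helper below delays a process at a named program location when requested via the environment, letting a tester steer two processes toward the interleaving that triggers a reported race.

    import os
    import time

    def yield_point(tag, delay=0.05):
        # Delay only at the locations named in RACE_DELAY_TAGS, e.g.
        #   RACE_DELAY_TAGS=writer_before_commit python app.py
        requested = os.environ.get("RACE_DELAY_TAGS", "").split(",")
        if tag in requested:
            time.sleep(delay)  # hold this process back so the other side runs first

    # Example instrumentation site (the tag name is hypothetical):
    def commit_result(path, data):
        yield_point("writer_before_commit")  # force the suspect ordering here
        with open(path, "w") as f:
            f.write(data)

Because the delays are driven by configuration rather than code changes, the same binary can be replayed under different requested interleavings until the reported race manifests.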

2016-12-07
Tingting Yu, Witawas Srisa-an, Gregg Rothermel. 2014. SimRT: An Automated Framework to Support Regression Testing for Data Races. Proceedings of the 36th International Conference on Software Engineering (ICSE 2014).

Concurrent programs are prone to various classes of difficult-to-detect faults, of which data races are particularly prevalent. Prior work has attempted to increase the cost-effectiveness of approaches for testing for data races by employing race detection techniques, but to date, no work has considered cost-effective approaches for re-testing for races as programs evolve. In this paper we present SimRT, an automated regression testing framework for use in detecting races introduced by code modifications. SimRT employs a regression test selection technique, focused on sets of program elements related to race detection, to reduce the number of test cases that must be run on a changed program to detect races that occur due to code modifications, and it employs a test case prioritization technique to improve the rate at which such races are detected. Our empirical study of SimRT reveals that it is more efficient and effective for revealing races than other approaches, and that its constituent test selection and prioritization components each contribute to its performance.
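The following sketch (hypothetical, not SimRT's implementation; the test names and element identifiers are made up) illustrates the two ideas the framework combines: selecting only the tests that reach race-related program elements affected by a change, and prioritizing those tests by how many such elements they cover.

    # Hypothetical data: per-test coverage of shared-memory access sites.
    coverage = {
        "test_login":   {"acct.balance:read", "session.id:write"},
        "test_payment": {"acct.balance:read", "acct.balance:write"},
        "test_report":  {"log.buffer:write"},
    }

    # Race-related elements touched by the code modification under test.
    changed_elements = {"acct.balance:read", "acct.balance:write"}

    # Regression test selection: keep only tests that exercise a changed element.
    selected = {t: cov & changed_elements
                for t, cov in coverage.items()
                if cov & changed_elements}

    # Prioritization: run tests covering more changed, race-related elements first.
    ordered = sorted(selected, key=lambda t: len(selected[t]), reverse=True)

    print(ordered)  # ['test_payment', 'test_login']

Here test_report is excluded because it touches no change-affected element, and test_payment runs before test_login because it covers more of the changed, race-prone sites.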