Biblio

Haibing Zheng (Tencent, Inc.), Dengfeng Li (University of Illinois at Urbana-Champaign), Xia Zeng (Tencent, Inc.), Wujie Zheng (Tencent, Inc.), Yuetang Deng (Tencent, Inc.), Wing Lam (University of Illinois at Urbana-Champaign), Wei Yang (University of Illinois at Urbana-Champaign), Tao Xie (University of Illinois at Urbana-Champaign). 2017. Automated Test Input Generation for Android: Towards Getting There in an Industrial Case. In 39th International Conference on Software Engineering, Software Engineering in Practice Track (ICSE-SEIP 2017).

Monkey, a random testing tool from Google, has been widely used in industrial practice for automated test input generation for Android, thanks to its applicability to a variety of application settings, its ease of use, and its compatibility with different Android platforms. Recently, Monkey has been under the spotlight of the research community: recent studies found that none of the studied tools from academia actually outperformed Monkey when applied to a set of open-source Android apps. In our recent work, we performed the first case study of applying Monkey to WeChat, a popular messenger app with over 800 million monthly active users; the study revealed many limitations of Monkey, and we developed an improved approach to alleviate some of them. In this paper, we explore two optimization techniques to improve the effectiveness and efficiency of our previous approach. We also conduct a manual categorization of not-covered activities and apply two automatic coverage-analysis techniques to provide insightful information about the not-covered code entities. Finally, we present findings from our empirical studies of conducting automated random testing on WeChat with the preceding techniques.
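For context, Monkey is normally driven through adb from the command line. Below is a minimal, hypothetical sketch of launching such a random-testing run from Python; the package name (com.tencent.mm, assumed here for WeChat), seed, throttle, and event count are illustrative assumptions, not values taken from the paper.

```python
import subprocess

# Illustrative settings; adjust the package name, seed, and event count as needed.
PACKAGE = "com.tencent.mm"   # assumed package name of the app under test
SEED = 42                    # fixed seed so the pseudo-random event stream is reproducible
EVENT_COUNT = 10000          # number of pseudo-random UI events to inject
THROTTLE_MS = 200            # delay between injected events, in milliseconds

def run_monkey() -> int:
    """Invoke Google's Monkey tool on a connected device/emulator via adb."""
    cmd = [
        "adb", "shell", "monkey",
        "-p", PACKAGE,              # restrict generated events to the app under test
        "-s", str(SEED),
        "--throttle", str(THROTTLE_MS),
        "-v", "-v",                 # raise verbosity to log each injected event
        str(EVENT_COUNT),
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
    return result.returncode

if __name__ == "__main__":
    raise SystemExit(run_monkey())
```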