Biblio

Filters: Author is Wei Yang, University of Illinois at Urbana-Champaign
2017-07-18
Haibing Zheng, Tencent, Inc., Dengfeng Li, University of Illinois at Urbana-Champaign, Xia Zeng, Tencent, Inc., Wujie Zheng, Tencent, Inc., Yuetang Deng, Tencent, Inc., Wing Lam, University of Illinois at Urbana-Champaign, Wei Yang, University of Illinois at Urbana-Champaign, Tao Xie, University of Illinois at Urbana-Champaign.  2017.  Automated Test Input Generation for Android: Towards Getting There in an Industrial Case. 39th International Conference on Software Engineering (ICSE 2017), Software Engineering in Practice (SEIP).

Monkey, a random testing tool from Google, is widely used in industry for automatic test input generation for Android because of its broad applicability, e.g., its ease of use and compatibility with different Android platforms. Recently, Monkey has come under the spotlight of the research community: studies found that none of the academic tools examined actually outperformed Monkey when applied to a set of open-source Android apps. In our recent work, we performed the first case study of applying Monkey to WeChat, a popular messenger app with over 800 million monthly active users, revealing many of Monkey's limitations and developing an improved approach that alleviates some of them. In this paper, we explore two optimization techniques to improve the effectiveness and efficiency of our previous approach. We also conduct a manual categorization of not-covered activities and apply two automatic coverage-analysis techniques to provide insightful information about the not-covered code entities. Finally, we present the findings of our empirical studies of conducting automatic random testing on WeChat with the preceding techniques.
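
As a rough illustration of the kind of testing this paper studies, the sketch below drives Monkey from a host-side Java harness via adb. It is a minimal sketch, not the authors' tooling: the package name, event count, throttle, and seed are placeholder values, and it assumes adb is on the PATH with a device or emulator attached.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Minimal host-side harness that runs Google's Monkey tool against one app.
// Hypothetical value: "com.example.app" stands in for the app under test.
public class MonkeyHarness {
    public static void main(String[] args) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(
                "adb", "shell", "monkey",
                "-p", "com.example.app",   // restrict events to one package
                "--throttle", "100",       // 100 ms pause between events
                "-s", "42",                // fixed seed for reproducible runs
                "-v",                      // verbose event log
                "5000");                   // number of random events to inject
        pb.redirectErrorStream(true);
        Process monkey = pb.start();

        // Echo Monkey's event log so crashes and ANRs are visible on the host.
        try (BufferedReader out = new BufferedReader(
                new InputStreamReader(monkey.getInputStream()))) {
            String line;
            while ((line = out.readLine()) != null) {
                System.out.println(line);
            }
        }
        System.exit(monkey.waitFor());
    }
}
```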

2016-12-09
Xia Zeng, Tencent, Inc., Dengfeng Li, University of Illinois at Urbana-Champaign, Wujie Zheng, Tencent, Inc., Yuetang Deng, Tencent, Inc., Wing Lam, University of Illinois at Urbana-Champaign, Wei Yang, University of Illinois at Urbana-Champaign, Tao Xie, University of Illinois at Urbana-Champaign.  2016.  Automated Test Input Generation for Android: Are We Really There Yet in an Industrial Case? 24th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE 2016).

Given the ever-increasing number of research tools to automatically generate inputs to test Android applications (or simply apps), researchers recently asked the question "Are we there yet?" (in terms of the practicality of the tools). By conducting an empirical study of the various tools, the researchers found that Monkey (the most widely used tool of this category in industrial settings) outperformed all of the research tools in the study. In this paper, we present two significant extensions of that study. First, we conduct the first industrial case study of applying Monkey against WeChat, a popular messenger app with over 762 million monthly active users, and report the empirical findings on Monkey's limitations in an industrial setting. Second, we develop a new approach to address major limitations of Monkey and accomplish substantial code-coverage improvements over Monkey. We conclude the paper with empirical insights for future enhancements to both Monkey and our approach.

2015-11-17
Wei Yang, University of Illinois at Urbana-Champaign, Xusheng Xiao, NEC Laboratories America, Benjamin Andow, North Carolina State University, Sihan Li, University of Illinois at Urbana-Champaign, Tao Xie, University of Illinois at Urbana-Champaign, William Enck, North Carolina State University.  2015.  AppContext: Differentiating Malicious and Benign Mobile App Behavior Under Context. 37th International Conference on Software Engineering (ICSE 2015).

Mobile malware attempts to evade detection during app analysis by mimicking security-sensitive behaviors of benign apps that provide similar functionality (e.g., sending SMS messages), and by suppressing its payload to reduce the chance of being observed (e.g., executing the payload only at night). Since current approaches focus their analyses on the types of security-sensitive resources being accessed (e.g., network), these evasive techniques make differentiating between malicious and benign app behaviors a difficult task during app analysis. We propose that the malicious and benign behaviors within apps can be differentiated based on the contexts that trigger security-sensitive behaviors, i.e., the events and conditions that cause the security-sensitive behaviors to occur. In this work, we introduce AppContext, a static program analysis approach that extracts the contexts of security-sensitive behaviors to assist app analysis in differentiating between malicious and benign behaviors. We implement a prototype of AppContext and evaluate it on 202 malicious apps from various malware datasets and 633 benign apps from the Google Play Store. AppContext correctly identifies 192 malicious apps with 87.7% precision and 95% recall. Our evaluation results suggest that the maliciousness of a security-sensitive behavior is more closely related to the intention of the behavior (reflected via contexts) than to the type of security-sensitive resources that the behavior accesses.
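
To make the notion of "context" concrete, the hedged Java sketch below illustrates the evasion pattern the abstract describes: a security-sensitive action (sending an SMS) that fires only under a specific triggering event (device boot) and environmental condition (nighttime). It is an illustrative example of the behavior AppContext analyzes, not code from the paper; the receiver class and phone number are hypothetical.

```java
import java.util.Calendar;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.telephony.SmsManager;

// Illustrative evasive behavior: the security-sensitive call (sendTextMessage)
// is reachable only from a BOOT_COMPLETED event and only at night. A context-based
// analysis like AppContext would report this event/condition pair
// (event = boot, condition = nighttime hour) as the context of the SMS behavior.
public class BootReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // Triggering event: this receiver is registered for
        // android.intent.action.BOOT_COMPLETED in the manifest.
        int hour = Calendar.getInstance().get(Calendar.HOUR_OF_DAY);
        if (hour >= 22 || hour < 5) {  // environmental condition: nighttime only
            SmsManager.getDefault().sendTextMessage(
                    "+15550100", null, "payload", null, null); // hypothetical number
        }
    }
}
```

A benign messaging app would invoke the same sendTextMessage API, but from a user-driven context such as a button click; that difference in triggering context, rather than the resource type, is what the paper proposes as the discriminating signal.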