Biblio

Filters: Author is Tao Xie, University of Illinois at Urbana-Champaign
2017-07-18
Haibing Zheng, Tencent, Inc., Dengfeng Li, University of Illinois at Urbana-Champaign, Xia Zeng, Tencent, Inc., Wujie Zheng, Tencent, Inc., Yuetang Deng, Tencent, Inc., Wing Lam, University of Illinois at Urbana-Champaign, Wei Yang, University of Illinois at Urbana-Champaign, Tao Xie, University of Illinois at Urbana-Champaign.  2017.  Automated Test Input Generation for Android: Towards Getting There in an Industrial Case. 39th International Conference on Software Engineering (ICSE 2017), Software Engineering in Practice (SEIP).

Monkey, a random testing tool from Google, has been widely used in industrial practice for automatic test input generation for Android due to its applicability to a variety of application settings, e.g., its ease of use and compatibility with different Android platforms. Recently, Monkey has been under the spotlight of the research community: recent studies found that none of the studied tools from academia actually outperformed Monkey when applied on a set of open-source Android apps. In our recent work, we performed the first case study of applying Monkey on WeChat, a popular messenger app with over 800 million monthly active users, revealing many limitations of Monkey, and we developed an improved approach to alleviate some of these limitations. In this paper, we explore two optimization techniques to improve the effectiveness and efficiency of our previous approach. We also conduct manual categorization of not-covered activities and apply two automatic coverage-analysis techniques to provide insightful information about the not-covered code entities. Lastly, we present findings of our empirical studies of conducting automatic random testing on WeChat with the preceding techniques.

2017-07-18
Benjamin Andow, North Carolina State University, Akhil Acharya, North Carolina State University, Dengfeng Li, University of Illinois at Urbana-Champaign, William Enck, North Carolina State University, Kapil Singh, IBM Research, Tao Xie, University of Illinois at Urbana-Champaign.  2017.  UiRef: Analysis of Sensitive User Inputs in Android Applications. 10th ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec 2017).

Mobile applications frequently request sensitive data. While prior work has focused on analyzing sensitive-data uses originating from well-defined API calls in the system, the security and privacy implications of inputs requested via application user interfaces have remained largely unexplored. In this paper, our goal is to understand the broad implications of such requests in terms of the type of sensitive data being requested by applications.

To this end, we propose UiRef (User Input REsolution Framework), an automated approach for resolving the semantics of user inputs requested by mobile applications. UiRef's design includes a number of novel techniques for extracting and resolving user interface labels and addressing ambiguity in semantics, resulting in significant improvements over prior work. We apply UiRef to 50,162 Android applications from Google Play and use outlier analysis to triage applications with questionable input requests. We identify concerning developer practices, including insecure exposure of account passwords and non-consensual input disclosures to third parties. These findings demonstrate the importance of user-input semantics when protecting end users.

2016-12-09
Tao Xie, University of Illinois at Urbana-Champaign, William Enck, North Carolina State University.  2016.  Text Analytics for Security.

Invited Tutorial, Symposium and Bootcamp on the Science of Security (HotSoS 2016), April 2016.

Tao Xie, University of Illinois at Urbana-Champaign.  2016.  User Expectations in Mobile App Security.

Maintaining the security and privacy hygiene of mobile apps is a critical challenge. Unfortunately, no program analysis algorithm can determine that an application is “secure” or “malware-free.” For example, if an application records audio during a phone call, it may be malware. However, the user may want to use such an application to record phone calls for archival and benign purposes. A key challenge for automated program analysis tools is determining whether or not that behavior is actually desired by the user (i.e., user expectation). This talk presents recent research progress in exploring user expectations in mobile app security.

Presented at the ITI Joint Trust and Security/Science of Security Seminar, January 26, 2016.

2016-07-13
Sihan Li, University of Illinois at Urbana-Champaign, Xusheng Xiao, NEC Laboratories America, Blake Bassett, University of Illinois at Urbana-Champaign, Tao Xie, University of Illinois at Urbana-Champaign, Nikolai Tillmann, Microsoft Research.  2016.  Measuring Code Behavioral Similarity for Programming and Software Engineering Education. 38th International Conference on Software Engineering.

In recent years, online programming and software engineering education via information technology has gained considerable popularity. Popular courses often have hundreds or thousands of students but only a few course staff members, so tool automation is needed to maintain the quality of education. In this paper, we envision that the capability of quantifying behavioral similarity between programs is helpful for teaching and learning programming and software engineering, and propose three metrics that approximate the computation of behavioral similarity. Specifically, we leverage random testing and dynamic symbolic execution (DSE) to generate test inputs, and run programs on these test inputs to compute metric values of the behavioral similarity. We evaluate our metrics on three real-world data sets from the Pex4Fun platform (which so far has accumulated more than 1.7 million game-play interactions). The results show that our metrics provide a highly accurate approximation of the behavioral similarity. We also demonstrate a number of practical applications of our metrics, including hint generation, progress indication, and automatic grading.
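To make the similarity computation concrete, here is a minimal Python sketch (hypothetical; the paper's actual metrics and its use of DSE-generated inputs are richer than this): behavioral similarity is approximated as the fraction of generated test inputs on which two programs produce the same output.

import random

# Sketch: approximate behavioral similarity as the fraction of randomly
# generated inputs on which two programs agree. Plain random testing
# stands in here for the paper's combination of random testing and DSE.
def behavioral_similarity(prog_a, prog_b, num_tests=1000, seed=0):
    rng = random.Random(seed)
    agree = 0
    for _ in range(num_tests):
        x = rng.randint(-1000, 1000)  # one randomly generated test input
        try:
            agree += prog_a(x) == prog_b(x)
        except Exception:
            pass  # simplification: count any crash as disagreement
    return agree / num_tests

# Hypothetical example: a student attempt vs. a reference solution.
reference = lambda x: abs(x)
attempt = lambda x: x if x >= 0 else -x  # behaviorally identical
print(behavioral_similarity(attempt, reference))  # -> 1.0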

Benjamin Andow, North Carolina State University, Adwait Nadkarni, North Carolina State University, Blake Bassett, University of Illinois at Urbana-Champaign, William Enck, North Carolina State University, Tao Xie, University of Illinois at Urbana-Champaign.  2016.  A Study of Grayware on Google Play. Workshop on Mobile Security Technologies.

While there have been various studies identifying and classifying Android malware, there is limited discussion of the broader class of apps that fall in a gray area. Mobile grayware is distinct from PC grayware due to differences in operating system properties. Because of mobile grayware's subjective nature, it is difficult to identify via program analysis alone. Instead, we hypothesize that enhancing analysis with text analytics can effectively reduce human effort when triaging grayware. In this paper, we design and implement heuristics for seven main categories of grayware. We then use these heuristics to simulate grayware triage on a large set of apps from Google Play and present the results of our empirical study, demonstrating a clear problem of grayware. In doing so, we show how even relatively simple heuristics can quickly triage apps that take advantage of users in an undesirable way.
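As a loose illustration of how simple such triage heuristics can be, consider the following Python sketch; the category names and indicator phrases are hypothetical examples, not the paper's actual seven categories or heuristics.

# Toy text-analytics heuristic for grayware triage: flag an app
# description for manual review when it matches a category's
# indicator phrases (categories and phrases are made up here).
INDICATORS = {
    "ad-heavy": ["unlock by watching ads", "remove ads with purchase"],
    "impersonation": ["official version", "original app"],
}

def triage(description):
    text = description.lower()
    return [category for category, phrases in INDICATORS.items()
            if any(phrase in text for phrase in phrases)]

print(triage("The ORIGINAL app! Unlock by watching ads every level."))
# -> ['ad-heavy', 'impersonation']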

2016-12-09
Xia Zeng, Tencent, Inc., Dengfeng Li, University of Illinois at Urbana-Champaign, Wujie Zheng, Tencent, Inc., Yuetang Deng, Tencent, Inc., Wing Lam, University of Illinois at Urbana-Champaign, Wei Yang, University of Illinois at Urbana-Champaign, Tao Xie, University of Illinois at Urbana-Champaign.  2016.  Automated Test Input Generation for Android: Are We Really There Yet in an Industrial Case? 24th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE 2016).

Given the ever-increasing number of research tools to automatically generate inputs to test Android applications (or simply apps), researchers recently asked the question "Are we there yet?" (in terms of the practicality of the tools). By conducting an empirical study of the various tools, the researchers found that Monkey (the most widely used tool of this category in industrial settings) outperformed all of the research tools in the study. In this paper, we present two significant extensions of that study. First, we conduct the first industrial case study of applying Monkey against WeChat, a popular messenger app with over 762 million monthly active users, and report the empirical findings on Monkey's limitations in an industrial setting. Second, we develop a new approach to address major limitations of Monkey and accomplish substantial code-coverage improvements over Monkey. We conclude the paper with empirical insights for future enhancements to both Monkey and our approach.

2016-12-01
Pierre McCauley, University of Illinois at Urbana-Champaign, Brandon Nsiah-Ababio, University of Illinois at Urbana-Champaign, Joshua Reed, University of Illinois at Urbana-Champaign, Faramola Isiaka, University of Illinois at Urbana-Champaign, Tao Xie, University of Illinois at Urbana-Champaign.  2016.  Preliminary Analysis of Code Hunt Data Set from a Contest. 2nd International Code Hunt Workshop on Educational Software Engineering (CHESE 2016).

Code Hunt (https://www.codehunt.com/) from Microsoft Research is a web-based serious gaming platform widely used for various programming contests. In this paper, we demonstrate preliminary statistical analysis of a Code Hunt data set that contains only the programs written by students worldwide during a 48-hour contest. There are 259 users, 24 puzzles (organized into 4 sectors), and about 13,000 programs submitted by these users. Our analysis results can help improve the creation of puzzles in a future contest.

2015-11-17
Wei Yang, University of Illinois at Urbana-Champaign, Xusheng Xiao, NEC Laboratories America, Benjamin Andow, North Carolina State University, Sihan Li, University of Illinois at Urbana-Champaign, Tao Xie, University of Illinois at Urbana-Champaign, William Enck, North Carolina State University.  2015.  AppContext: Differentiating Malicious and Benign Mobile App Behavior Under Context. 37th International Conference on Software Engineering (ICSE 2015).

Mobile malware attempts to evade detection during app analysis by mimicking security-sensitive behaviors of benign apps that provide similar functionality (e.g., sending SMS messages), and by suppressing its payload to reduce the chance of being observed (e.g., executing the payload only at night). Since current approaches focus their analyses on the types of security-sensitive resources being accessed (e.g., network), these evasive techniques in malware make differentiating between malicious and benign app behaviors a difficult task during app analysis. We propose that the malicious and benign behaviors within apps can be differentiated based on the contexts that trigger security-sensitive behaviors, i.e., the events and conditions that cause the security-sensitive behaviors to occur. In this work, we introduce AppContext, a static program analysis approach that extracts the contexts of security-sensitive behaviors to assist app analysis in differentiating between malicious and benign behaviors. We implement a prototype of AppContext and evaluate it on 202 malicious apps from various malware datasets and 633 benign apps from the Google Play Store. AppContext correctly identifies 192 malicious apps with 87.7% precision and 95% recall. Our evaluation results suggest that the maliciousness of a security-sensitive behavior is more closely related to the intention of the behavior (reflected via contexts) than to the type of the security-sensitive resources that the behavior accesses.
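The notion of a context can be pictured with a small hypothetical data model (a sketch of the idea only, not AppContext's actual representation or analysis): the same security-sensitive API looks benign or suspicious depending on the triggering event and guarding conditions.

from dataclasses import dataclass

# Sketch: a context pairs a security-sensitive behavior with the event
# that activates it and the conditions guarding it.
@dataclass
class Context:
    behavior: str          # security-sensitive API, e.g., sendTextMessage
    activation_event: str  # what starts the code path
    conditions: list       # environment checks guarding the behavior

benign = Context("sendTextMessage", "user clicks Send button", [])
evasive = Context("sendTextMessage", "BOOT_COMPLETED broadcast",
                  ["current hour is between 23 and 5"])

def looks_suspicious(ctx):
    # Toy heuristic: background activation plus stealthy timing conditions.
    return "user" not in ctx.activation_event and bool(ctx.conditions)

print(looks_suspicious(benign), looks_suspicious(evasive))  # False True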

2015-11-17
Tao Xie, University of Illinois at Urbana-Champaign, Judith Bishop, Microsoft Research, Nikolai Tillmann, Microsoft Research, Jonathan de Halleux, Microsoft Research.  2015.  Gamifying Software Security Education and Training via Secure Coding Duels in Code Hunt. Symposium and Bootcamp on the Science of Security (HotSoS).

Sophistication and flexibility of software development make it easy to leave security vulnerabilities in software applications for attackers. It is critical to educate and train software engineers to avoid introducing vulnerabilities in software applications in the first place, such as by adopting secure coding mechanisms and conducting security testing. A number of websites provide training grounds to train people's hacking skills, which are highly related to security testing skills, and to train people's secure coding skills. However, there exists no interactive gaming platform for instilling gaming aspects into the education and training of secure coding. To address this issue, we propose to construct secure coding duels in Code Hunt, a high-impact serious gaming platform released by Microsoft Research. In Code Hunt, a coding duel consists of two code segments: a secret code segment and a player-visible code segment. To solve a coding duel, a player iteratively modifies the player-visible code segment to match the functional behaviors of the secret code segment. During the duel-solving process, the player is given clues in the form of a set of automatically generated test cases that characterize sample functional behaviors of the secret code segment. The game aspect in Code Hunt is to recognize a pattern from the test cases, and to re-engineer the player-visible code segment to exhibit the expected behaviors. Secure coding duels proposed in this work are coding duels that are carefully designed to train players' secure coding skills, such as sufficient input validation and access control.
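The duel mechanic can be pictured with a small Python sketch (hypothetical; Code Hunt itself runs on .NET and uses Pex to generate the clue test cases automatically): the player wins once the visible code agrees with the secret code on all generated tests.

def secret(x):
    # Hidden from the player; the target behavior to re-engineer.
    return x * x + 1

def player_visible(x):
    # The player iteratively edits this until it matches `secret`.
    return x * x

def run_duel(candidate, inputs):
    # Return the clue test cases: inputs where the candidate's output
    # diverges from the secret code segment's output.
    return [(x, candidate(x), secret(x))
            for x in inputs if candidate(x) != secret(x)]

clues = run_duel(player_visible, range(-3, 4))
print(clues)  # e.g., (-3, 9, 10): the player wins when this list is empty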

2015-11-17
Xusheng Xiao, NEC Laboratories America, Nikolai Tillmann, Microsoft Research, Manuel Fahndrich, Microsoft Research, Jonathan de Halleux, Microsoft Research, Michal Moskal, Microsoft Research, Tao Xie, University of Illinois at Urbana-Champaign.  2015.  User-Aware Privacy Control via Extended Static-Information-Flow Analysis. Automated Software Engineering Journal, 22(3).

Applications in mobile marketplaces may leak private user information without notification. Existing mobile platforms provide little information on how applications use private user data, making it difficult for experts to validate applications and for users to grant applications access to their private data. We propose a user-aware-privacy-control approach, which reveals how private information is used inside applications. We compute static information flows and classify them as safe/unsafe based on a tamper analysis that tracks whether private data is obscured before escaping through output channels. This flow information enables platforms to provide default settings that expose private data for only safe flows, thereby preserving privacy and minimizing decisions required from users. We build our approach into TouchDevelop, an application-creation environment that allows users to write scripts on mobile devices and install scripts published by other users. We evaluate our approach by studying 546 scripts published by 194 users, and the results show that our approach effectively reduces the need to make access-granting choices to only 10.1% (54) of all scripts. We also conduct a user survey that involves 50 TouchDevelop users to assess the effectiveness and usability of our approach. The results show that 90% of the users consider our approach useful in protecting their privacy, and 54% prefer our approach over other privacy-control approaches.
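As a rough illustration of the safe/unsafe classification (a sketch of the idea only; the paper performs this statically over information flows, and the operation names below are hypothetical), a flow is safe when the private data passes through an obscuring (tamper) step before reaching an output channel.

# Sketch: classify a flow from a private source to an output channel as
# safe if some operation along the way obscures the data.
OBSCURING_OPS = {"sha256", "truncate_digits", "count"}  # assumed examples

def classify_flow(steps):
    # steps: ordered operation names from source to sink.
    return "safe" if any(op in OBSCURING_OPS for op in steps) else "unsafe"

print(classify_flow(["read_location", "post_to_web"]))            # -> unsafe
print(classify_flow(["read_location", "sha256", "post_to_web"]))  # -> safe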

2016-12-01
Huoran Li, Peking University, Xuan Lu, Peking University, Xuanzhe Liu, Peking University, Tao Xie, University of Illinois at Urbana-Champaign, Kaigui Bian, Peking University, Felix Xiaozhu Lin, Purdue University, Qiaozhu Mei, University of Michigan, Feng Feng, Wandoujia Lab.  2015.  Characterizing Smartphone Usage Patterns from Millions of Android Users. 2015 Internet Measurement Conference (IMC 2015).

The prevalence of smart devices has promoted the popularity of mobile applications (a.k.a. apps) in recent years. A number of interesting and important questions remain unanswered, such as why a user likes/dislikes an app, how an app becomes popular or eventually perishes, how a user selects apps to install and interacts with them, how frequently an app is used and how much traffic it generates, etc. This paper presents an empirical analysis of app usage behaviors collected from millions of users of Wandoujia, a leading Android app marketplace in China. The dataset covers two types of user behaviors involving over 0.2 million Android apps: (1) app management activities (i.e., installation, updating, and uninstallation) of over 0.8 million unique users and (2) app network traffic from over 2 million unique users. We explore multiple aspects of such behavior data and present interesting patterns of app usage. The results provide many useful implications for the developers, users, and disseminators of mobile apps.