Biblio

Filters: Author is Mirkovic, Jelena
2018-01-23
Shi, Hao, Mirkovic, Jelena, Alwabel, Abdulla.  2017.  Handling Anti-Virtual Machine Techniques in Malicious Software. ACM Trans. Priv. Secur. 21:2:1–2:31.

Malware analysis relies heavily on the use of virtual machines (VMs) for functionality and safety. There are subtle differences in operation between virtual and physical machines. Contemporary malware checks for these differences and changes its behavior when it detects a VM presence. These anti-VM techniques hinder malware analysis. Existing research approaches to uncovering differences between VMs and physical machines use randomized testing and thus cannot guarantee completeness. In this article, we propose a detect-and-hide approach, which systematically addresses anti-VM techniques in malware. First, we propose cardinal pill testing, a modification of red pill testing that aims to enumerate the differences between a given VM and a physical machine through carefully designed tests. Cardinal pill testing finds five times more pills while running 15 times fewer tests than red pill testing. We examine the causes of pills and find that, while the majority stem from the failure of VMs to follow CPU specifications, a small number stem from under-specification of certain instructions in the Intel manual, which leads to divergent implementations across CPU and VM architectures. Cardinal pill testing successfully enumerates the differences that stem from the first cause. Finally, we propose VM Cloak, a WinDbg plug-in that hides the presence of VMs from malware. VM Cloak monitors each command the malware executes, detects potential pills, and at runtime modifies the command's outcomes to match those a physical machine would generate. We implemented VM Cloak and verified that it successfully hides VM presence from malware.
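
To make the detect-and-hide idea concrete, the following is a minimal, hypothetical sketch of the cardinal-pill-testing loop: each crafted test case is executed on a physical reference machine and on the VM under test, and any divergence in outcome is recorded as a pill. The toy executors model a VM that mishandles a carry flag; they stand in for the paper's real instruction harness, which is not shown here.

    # Toy sketch of cardinal pill testing (illustration only).
    # physical_execute/vm_execute are invented stand-ins for running one
    # carefully designed instruction test on real hardware vs. inside a VM.

    def physical_execute(insn, operands):
        # Reference outcome per the CPU specification.
        total = sum(operands)
        return {"result": total & 0xFF, "cf": int(total > 0xFF)}

    def vm_execute(insn, operands):
        # Toy VM that never sets the carry flag, mimicking the kind of
        # specification deviation that a pill exposes.
        total = sum(operands)
        return {"result": total & 0xFF, "cf": 0}

    def find_pills(test_cases):
        # A "pill" is any test whose VM outcome diverges from the physical one.
        pills = []
        for insn, operands in test_cases:
            ref = physical_execute(insn, operands)
            vm = vm_execute(insn, operands)
            if ref != vm:
                pills.append((insn, operands, ref, vm))
        return pills

    if __name__ == "__main__":
        cases = [("add", (0x7F, 0x01)), ("add", (0xFE, 0x02))]
        print(find_pills(cases))  # only the overflowing add is reported

Unlike red pill testing, which samples instructions randomly, the test cases here would be designed to cover the CPU specification systematically; that design is what lets the paper enumerate the specification-derived differences.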

Shi, Hao, Mirkovic, Jelena.  2017.  Hiding Debuggers from Malware with Apate. Proceedings of the Symposium on Applied Computing. :1703–1710.
Malware analysis uses debuggers to understand and manipulate the behaviors of stripped binaries. To circumvent analysis, malware applies a variety of anti-debugging techniques, such as self-modifying code, checking for or removing breakpoints, hijacking keyboard and mouse events, and escaping the debugger. Most state-of-the-art debuggers are vulnerable to these anti-debugging techniques. In this paper, we first systematically analyze the spectrum of possible anti-debugging techniques and compile a list of 79 attack vectors. We then propose a framework, called Apate, which detects and defeats each of these attack vectors by performing: (1) just-in-time disassembling based on single-stepping, and (2) careful monitoring of the debuggee's execution and, when needed, modification of the debuggee's state to hide the debugger's presence. We implement Apate as an extension to WinDbg and extensively evaluate it using five different datasets with known and new malware samples. Apate outperforms other debugger-hiding technologies by a wide margin, addressing 58 to 465 more attack vectors than the alternatives.
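
As described, Apate's core loop single-steps the debuggee and rewrites its state whenever an instruction would expose the debugger. The toy model below illustrates that idea using the well-known PEB BeingDebugged check as the attack vector; the instruction set, names, and state are invented for the example and do not reflect Apate's WinDbg implementation.

    # Toy model of a detect-and-hide debugging loop (illustration only).

    PEB = {"BeingDebugged": 1}  # set by the OS when a debugger attaches

    def reveals_debugger(insn):
        # One of the catalogued attack vectors: reading PEB.BeingDebugged.
        return insn[0] == "load_peb_flag"

    def single_step(insn, regs):
        # Execute one toy instruction and return the updated register state.
        if insn[0] == "load_peb_flag":
            regs["eax"] = PEB["BeingDebugged"]
        elif insn[0] == "exit_if_nonzero":
            regs["exited"] = bool(regs["eax"])
        return regs

    def debug_with_hiding(program):
        regs = {"eax": 0, "exited": False}
        for insn in program:
            regs = single_step(insn, regs)
            if reveals_debugger(insn):
                regs["eax"] = 0  # hide: rewrite the outcome the malware sees
        return regs

    if __name__ == "__main__":
        malware = [("load_peb_flag",), ("exit_if_nonzero",)]
        print(debug_with_hiding(malware))  # exited=False: the check is defeated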
2018-01-10
Deng, Xiyue, Mirkovic, Jelena.  2017.  Commoner Privacy And A Study On Network Traces. Proceedings of the 33rd Annual Computer Security Applications Conference. :566–576.
Differential privacy has emerged as a promising mechanism for privacy-safe data mining. One popular differential privacy mechanism allows researchers to pose queries over a dataset and adds random noise to all output points to protect privacy. While differential privacy produces useful data in many scenarios, the added noise may jeopardize utility for queries posed over small populations or over long-tailed datasets. Gehrke et al. proposed crowd-blending privacy, with random noise added only to those output points where fewer than k individuals (a configurable parameter) contribute to the point in the same manner. This approach offers a weaker privacy guarantee but preserves more research utility than differential privacy. We propose an even more liberal privacy goal, commoner privacy, which fuzzes (omits, aggregates, or adds noise to) only those output points where an individual's contribution is an outlier. By hiding outliers, our mechanism hides the presence or absence of an individual in a dataset. We propose one mechanism that achieves commoner privacy: interactive k-anonymity. We also discuss query composition and show how we can guarantee privacy via either a pre-sampling step or query introspection. We implement interactive k-anonymity and query introspection in a system called Patrol for network trace processing. Our evaluation shows that commoner privacy prevents common attacks while preserving orders of magnitude higher research utility than differential privacy, and 9 to 49 times the utility of crowd-blending privacy.
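
As a rough illustration of the release rule, the sketch below suppresses any output point that lacks k distinct contributors or where one individual's contribution dominates the point, a crude stand-in for the paper's outlier test. Function names and the dominance threshold are invented for the example and are not Patrol's actual policy.

    # Toy sketch of an interactive-k-anonymity-style release rule
    # (illustration only; not the Patrol implementation).
    from collections import defaultdict

    def commoner_query(records, key_fn, value_fn, k=5):
        # records: (individual_id, payload) pairs. Returns {point: total},
        # omitting points where an individual's contribution stands out.
        contrib = defaultdict(lambda: defaultdict(int))
        for ind, payload in records:
            contrib[key_fn(payload)][ind] += value_fn(payload)
        released = {}
        for point, per_ind in contrib.items():
            total = sum(per_ind.values())
            # Release only if at least k individuals contribute and no
            # single individual dominates (a crude outlier proxy).
            if len(per_ind) >= k and max(per_ind.values()) <= total / k:
                released[point] = total
        return released

    if __name__ == "__main__":
        recs = [(i % 6, {"port": 80, "bytes": 1}) for i in range(12)]
        recs.append((99, {"port": 22, "bytes": 500}))  # lone, outlying contributor
        print(commoner_query(recs, lambda p: p["port"], lambda p: p["bytes"]))
        # Port 80 is released (six similar contributors); port 22 is omitted,
        # hiding the presence of its single contributor.

Omitting the point is just one of the fuzzing options the abstract mentions; aggregation or added noise could replace the suppression branch.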