Biblio: Publications by Huang, Heqing

Shu, Xiaokui, Araujo, Frederico, Schales, Douglas L., Stoecklin, Marc Ph., Jang, Jiyong, Huang, Heqing, Rao, Josyula R.  2018.  Threat Intelligence Computing. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :1883–1898.

Cyber threat hunting is the process of proactively and iteratively formulating and validating threat hypotheses based on security-relevant observations and domain knowledge. To facilitate threat hunting tasks, this paper introduces threat intelligence computing as a new methodology that models threat discovery as a graph computation problem. It enables efficient programming for solving threat discovery problems, equipping threat hunters with a suite of potent new tools for agile codification of threat hypotheses, automated evidence mining, and interactive data inspection. A concrete realization of a threat intelligence computing platform is presented through the design and implementation of a domain-specific graph language with interactive visualization support and a distributed graph database. The platform was evaluated in a two-week DARPA competition for threat detection on a test bed comprising a wide variety of systems monitored in real time. During this period, sub-billion records were produced, streamed, and analyzed, dozens of threat hunting tasks were dynamically planned and programmed, and attack campaigns with diverse malicious intent were discovered. The platform exhibited strong detection and analytics capabilities coupled with high efficiency, resulting in a leadership position in the competition. Additional evaluations on comprehensive policy reasoning are outlined to demonstrate the versatility of the platform and the expressiveness of the language.
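
The platform itself is built around a domain-specific graph language and a distributed graph database, neither of which is shown in the abstract. Purely as an illustration of the core idea, codifying a threat hypothesis as a query over a computation graph, here is a minimal Python sketch using networkx; the entities, events, and the hypothesis are invented for the example and are not the paper's language or data model.

```python
# Illustrative only: the paper uses its own graph language and database,
# not networkx. This sketch just shows "threat hypothesis as graph query".
import networkx as nx

# Toy provenance graph: nodes are system entities, edges are observed events.
g = nx.MultiDiGraph()
g.add_node("proc:curl", type="process")
g.add_node("file:/etc/shadow", type="file")
g.add_node("net:198.51.100.7:443", type="socket")
g.add_edge("proc:curl", "file:/etc/shadow", event="read", ts=100)
g.add_edge("proc:curl", "net:198.51.100.7:443", event="send", ts=105)

def exfiltration_hypothesis(graph):
    """Hypothesis: a process reads a sensitive file, then sends data out."""
    hits = []
    for proc, attrs in graph.nodes(data=True):
        if attrs.get("type") != "process":
            continue
        edges = graph.out_edges(proc, data=True)
        reads = [(v, e) for _, v, e in edges if e["event"] == "read"]
        sends = [(v, e) for _, v, e in edges if e["event"] == "send"]
        hits += [(proc, f, s) for f, re in reads for s, se in sends
                 if re["ts"] < se["ts"]]  # the read precedes the send
    return hits

print(exfiltration_hypothesis(g))
# [('proc:curl', 'file:/etc/shadow', 'net:198.51.100.7:443')]
```
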
Zhang, Jialong, Gu, Zhongshu, Jang, Jiyong, Wu, Hui, Stoecklin, Marc Ph., Huang, Heqing, Molloy, Ian.  2018.  Protecting Intellectual Property of Deep Neural Networks with Watermarking. Proceedings of the 2018 Asia Conference on Computer and Communications Security. :159–172.

Deep learning technologies, which are key components of state-of-the-art Artificial Intelligence (AI) services, have shown great success in providing human-level capabilities for a variety of tasks, such as visual analysis, speech recognition, and natural language processing. Building a production-level deep learning model is a non-trivial task, which requires a large amount of training data, powerful computing resources, and human expertise. Illegitimate reproduction, distribution, and derivation of proprietary deep learning models can therefore lead to copyright infringement and economic harm to model creators, so it is essential to devise a technique to protect the intellectual property of deep learning models and enable external verification of model ownership. In this paper, we generalize the "digital watermarking" concept from multimedia ownership verification to deep neural network (DNN) models. We investigate three DNN-applicable watermark generation algorithms, propose a watermark implanting approach to infuse watermarks into deep learning models, and design a remote verification mechanism to determine model ownership. By extending the intrinsic generalization and memorization capabilities of deep neural networks, we enable the models to learn specially crafted watermarks during training and to activate with pre-specified predictions when observing the watermark patterns at inference. We evaluate our approach with two image recognition benchmark datasets. Our framework accurately (100%) and quickly verifies the ownership of all the remotely deployed deep learning models without affecting model accuracy on normal input data. In addition, the embedded watermarks in DNN models are robust and resilient to different counter-watermark mechanisms, such as fine-tuning, parameter pruning, and model inversion attacks.
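
The abstract outlines the mechanism: implant specially crafted watermark inputs with pre-specified target labels at training time, then verify ownership remotely by querying the deployed model. The PyTorch sketch below illustrates that general pattern only; the noise-based watermark generator, target label, and verification threshold are assumptions made for the example, not the paper's algorithms.

```python
# Illustrative sketch of watermarking-by-training (not the authors' code).
import torch

def make_watermark_set(n, shape=(1, 28, 28), target_label=7, seed=0):
    """Craft watermark inputs; fixed random noise is an assumption here."""
    gen = torch.Generator().manual_seed(seed)
    xs = torch.rand((n, *shape), generator=gen)
    ys = torch.full((n,), target_label, dtype=torch.long)
    return xs, ys

def embed(model, train_loader, wm_x, wm_y, opt, loss_fn, epochs=1):
    """Train on normal data plus watermark samples so the model memorizes
    the watermark without hurting accuracy on normal inputs."""
    model.train()
    for _ in range(epochs):
        for x, y in train_loader:
            x = torch.cat([x, wm_x])
            y = torch.cat([y, wm_y])
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

def verify(model, wm_x, wm_y, threshold=0.95):
    """Remote, black-box verification: query the suspect model on the
    watermark inputs and check for the pre-specified predictions."""
    model.eval()
    with torch.no_grad():
        acc = (model(wm_x).argmax(dim=1) == wm_y).float().mean().item()
    return acc >= threshold  # high match rate supports the ownership claim
```
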

Chen, Xin, Huang, Heqing, Zhu, Sencun, Li, Qing, Guan, Quanlong.  2017.  SweetDroid: Toward a Context-Sensitive Privacy Policy Enforcement Framework for Android OS. Proceedings of the 2017 Workshop on Privacy in the Electronic Society. :75–86.

Android privacy control is an important but difficult problem to solve. Prior research focused either on extending the Android permission model with better policies or on modifying the Android framework for fine-grained access control. In this work, we take an integrated approach by designing and implementing SweetDroid, a calling-context-sensitive privacy policy enforcement framework. SweetDroid combines automated policy generation with automated policy enforcement. The automatically generated policies in SweetDroid are based on the calling contexts of privacy-sensitive APIs; hence, SweetDroid is able to tell whether a particular API (e.g., getLastKnownLocation) under a certain execution path is leaking private information. Policy enforcement in SweetDroid is also fine-grained: it operates at the individual API level, not at the permission level. We implement and evaluate the system on thousands of Android apps, including apps from a third-party market and malicious apps from VirusTotal. Our experimental results show that SweetDroid can successfully distinguish and enforce different privacy policies based on calling contexts, and the current design is both hassle-free for developers and transparent to users. SweetDroid is also efficient, introducing only small storage and computational overhead.
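
SweetDroid itself instruments Android apps and enforces automatically generated policies inside the app; none of that machinery is reproduced here. The short Python sketch below only illustrates what a calling-context-sensitive decision looks like, i.e., the same sensitive API being allowed on one execution path and denied on another. All API names, package names, policy entries, and the default-deny rule are hypothetical.

```python
# Illustrative only: a calling-context-sensitive policy lookup, not SweetDroid.
SENSITIVE_APIS = {"getLastKnownLocation", "getDeviceId"}

# Policies keyed by (api, calling context): the same API may be allowed on
# one execution path (a navigation feature) and denied on another (an ad
# library). Both entries here are invented for the example.
POLICY = {
    ("getLastKnownLocation", "com.example.maps.NavActivity.onResume"): "allow",
    ("getLastKnownLocation", "com.adlib.Tracker.collect"): "deny",
}

def enforce(api, call_stack):
    """Pick the policy for the innermost calling context that has one."""
    if api not in SENSITIVE_APIS:
        return "allow"
    for frame in call_stack:  # innermost caller first
        decision = POLICY.get((api, frame))
        if decision is not None:
            return decision
    return "deny"  # default-deny for unknown contexts (an assumption)

# The same API call yields different decisions depending on who invoked it:
print(enforce("getLastKnownLocation",
              ["com.example.maps.NavActivity.onResume"]))  # allow
print(enforce("getLastKnownLocation",
              ["com.adlib.Tracker.collect"]))              # deny
```
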

Song, Linhai, Huang, Heqing, Zhou, Wu, Wu, Wenfei, Zhang, Yiying.  2016.  Learning from Big Malwares. Proceedings of the 7th ACM SIGOPS Asia-Pacific Workshop on Systems. :12:1–12:8.

This paper calls for attention to investigating real-world malware at large scale by examining the largest real malware repository, VirusTotal. As a first step, we analyzed two fundamental characteristics of Windows executable malware from VirusTotal. We designed offline and online tools for this analysis. Our results show that malware appears in bursts and that malware distributions are highly skewed.
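
As a toy illustration of the two measurements the abstract reports (bursts over time and a highly skewed family distribution), the sketch below computes both from a handful of made-up (first-seen day, family) records; the burst rule, the data, and the family names are all invented and are not the paper's tools or results.

```python
# Illustrative only: toy versions of the two measurements, with made-up data.
from collections import Counter

# (first_seen_day, family) pairs as they might come from repository metadata.
samples = [(1, "zbot"), (1, "zbot"), (1, "zbot"), (2, "allaple"),
           (9, "zbot"), (9, "virut"), (9, "zbot"), (9, "zbot")]

# Bursts: days whose submission count is well above the per-day mean.
per_day = Counter(day for day, _ in samples)
mean = sum(per_day.values()) / len(per_day)
bursts = [d for d, n in per_day.items() if n >= 1.5 * mean]  # crude rule
print("burst days:", sorted(bursts))

# Skew: a few families dominate the sample population.
per_family = Counter(fam for _, fam in samples)
top_family, top_count = per_family.most_common(1)[0]
print(f"top family {top_family} accounts for "
      f"{top_count / len(samples):.0%} of samples (skew)")
```
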