Biblio

Filters: Author is Liu, Ling
Liu, Ling, Zhang, Shengli, Ling, Cong.  2021.  Set Reconciliation for Blockchains with Slepian-Wolf Coding: Deletion Polar Codes. 2021 13th International Conference on Wireless Communications and Signal Processing (WCSP). :1–5.
In this paper, we propose a polar-coding-based scheme for set reconciliation between two network nodes. The system is modeled as the well-known Slepian-Wolf setting induced by a fixed number of deletions. The set reconciliation process is divided into two phases: 1) a deletion polar code is employed to help one node identify the possible deletion indices, which may be more numerous than the genuine deletions; 2) a lossless compression polar code is then designed to feed back those indices with minimum overhead. Our scheme can be viewed as a generalization of polar codes to emerging network-based applications such as package synchronization in blockchains. The total overhead is linear in the number of packages and independent of the package size.
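
To make the two-phase protocol flow concrete, the following minimal Python sketch uses a toy digest comparison as a stand-in for the deletion polar code (phase 1) and a sorted delta encoding as a stand-in for the lossless-compression feedback code (phase 2). It illustrates only the exchange structure, not the paper's coding construction; all names and data here are illustrative.

```python
# Schematic of the two-phase exchange: phase 1 flags candidate deletion
# indices, phase 2 feeds them back compactly. Digests and delta encoding are
# placeholders for the deletion polar code and the compression polar code.
import hashlib

def digest(pkg: bytes) -> str:
    return hashlib.sha256(pkg).hexdigest()[:8]

def phase1_candidate_deletions(full_digests, local_pkgs):
    """The node with deletions flags candidate deletion indices; as in the
    abstract, this set may be larger than the number of genuine deletions."""
    have = {digest(p) for p in local_pkgs}
    return [i for i, d in enumerate(full_digests) if d not in have]

def phase2_feedback(indices):
    """Feed the candidate indices back with small overhead (delta encoding
    here; the paper designs a lossless-compression polar code for this)."""
    prev, deltas = 0, []
    for i in sorted(indices):
        deltas.append(i - prev)
        prev = i
    return deltas

# Toy usage: node A holds all packages, node B is missing two of them.
A = [b"tx0", b"tx1", b"tx2", b"tx3", b"tx4"]
B = [b"tx0", b"tx2", b"tx4"]
missing = phase1_candidate_deletions([digest(p) for p in A], B)
print(missing, phase2_feedback(missing))      # [1, 3] [1, 2]
```
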
Wei, Wenqi, Liu, Ling, Loper, Margaret, Chow, Ka-Ho, Gursoy, Mehmet Emre, Truex, Stacey, Wu, Yanzhao.  2020.  Adversarial Deception in Deep Learning: Analysis and Mitigation. 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :236–245.
The burgeoning success of deep learning has raised security and privacy concerns as more and more tasks involve sensitive data. Adversarial attacks in deep learning have emerged as one of the dominant security threats to a range of mission-critical deep learning systems and applications. This paper takes a holistic view of adversarial examples in deep learning by studying their adverse effects and presents an attack-independent countermeasure with three original contributions. First, we provide a general formulation of adversarial examples and elaborate on the basic principles of adversarial attack algorithm design. Then, we evaluate 15 adversarial attacks with a variety of evaluation metrics to study their adverse effects and costs. We further conduct three case studies to analyze the effectiveness of adversarial examples and to demonstrate their divergence across attack instances. We take advantage of this instance-level divergence and propose a strategic input-transformation teaming defense. The proposed defense methodology is attack-independent and capable of auto-repairing and auto-verifying the prediction made on an adversarial input. We show that the strategic input-transformation teaming defense achieves high defense success rates and is more robust, with high attack-prevention success rates and low benign false-positive rates, compared to existing representative defense methods.
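
The following is a minimal sketch of the input-transformation teaming idea as described in the abstract: diverse transformed copies of the input are classified, the majority label auto-repairs the prediction, and agreement among team members auto-verifies it. The specific transformations, the agreement threshold, and the `model` interface are illustrative assumptions, not the paper's exact design.

```python
# Input-transformation teaming sketch: classify several transformed copies,
# take the majority label (auto-repair), and accept only if enough team
# members agree (auto-verify). All parameter choices are assumptions.
from collections import Counter
import numpy as np

def quantize(x, levels=16):
    # Coarse color-depth reduction as a cheap stand-in for JPEG-style compression.
    return np.round(x * (levels - 1)) / (levels - 1)

def add_noise(x, sigma=0.02):
    return np.clip(x + np.random.normal(0.0, sigma, x.shape), 0.0, 1.0)

def hflip(x):
    return x[:, ::-1, ...]        # horizontal flip of an HxWxC image

TEAM = [lambda x: x, quantize, add_noise, hflip]

def teaming_defense(model, x, agree_ratio=0.75):
    """model: callable mapping an image array to a class label."""
    votes = Counter(model(t(x)) for t in TEAM)
    label, count = votes.most_common(1)[0]
    verified = count / len(TEAM) >= agree_ratio
    return label, verified        # repaired prediction + verification flag
```
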
Gursoy, M. Emre, Rajasekar, Vivekanand, Liu, Ling.  2020.  Utility-Optimized Synthesis of Differentially Private Location Traces. 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :30–39.
Differentially private location trace synthesis (DPLTS) has recently emerged as a solution to protect mobile users' privacy while enabling the analysis and sharing of their location traces. A key challenge in DPLTS is to best preserve the utility of location trace datasets, which is non-trivial considering the high dimensionality, complexity and heterogeneity of the datasets, as well as the diverse types and notions of utility. In this paper, we present OptaTrace: a utility-optimized and targeted approach to DPLTS. Given a real trace dataset D, the differential privacy parameter ε controlling the strength of privacy protection, and the utility/error metric Err of interest, OptaTrace uses Bayesian optimization to tune the trace synthesis such that the output error (measured in terms of the given metric Err) is minimized while ε-differential privacy is satisfied. In addition, OptaTrace introduces a utility module that contains several built-in error metrics for utility benchmarking and for choosing Err, as well as a front-end web interface for accessible and interactive DPLTS service. Experiments show that OptaTrace's optimized output can yield substantial utility improvement and error reduction compared to previous work.
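
The sketch below illustrates the shape of such an optimization loop under stated assumptions: the toy 1-D data, the `toy_synthesize` mechanism, and the `err_l1` metric are stand-ins for a real ε-DP trace synthesizer and the chosen metric Err, and scikit-optimize's gp_minimize is just one possible Bayesian-optimization backend; none of this is OptaTrace's actual code.

```python
# Hedged sketch: fix the DP budget, then tune a synthesizer hyperparameter
# (here, histogram bin count) to minimize a chosen error metric.
import numpy as np
from skopt import gp_minimize
from skopt.space import Integer

rng = np.random.default_rng(0)
EPSILON = 1.0                                   # DP budget, fixed up front
real = rng.beta(2, 5, size=2000)                # toy 1-D "location" data

def toy_synthesize(data, epsilon, bins):
    """Toy ε-DP mechanism: Laplace-noised histogram, then resampling."""
    counts, _ = np.histogram(data, bins=bins, range=(0, 1))
    noisy = np.maximum(counts + rng.laplace(0, 1.0 / epsilon, bins), 0)
    probs = noisy / noisy.sum()
    cells = rng.choice(bins, size=len(data), p=probs)
    return (cells + rng.random(len(data))) / bins

def err_l1(a, b, bins=32):
    """Error metric Err: mean L1 distance between the two densities."""
    ha, _ = np.histogram(a, bins=bins, range=(0, 1), density=True)
    hb, _ = np.histogram(b, bins=bins, range=(0, 1), density=True)
    return np.abs(ha - hb).mean()

def objective(params):
    (bins,) = params                            # tunable hyperparameter
    return err_l1(real, toy_synthesize(real, EPSILON, bins))

result = gp_minimize(objective, [Integer(4, 256)], n_calls=15, random_state=0)
print("best bin count:", result.x[0], "achieved error:", result.fun)
```
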
Wei, Wenqi, Liu, Ling, Loper, Margaret, Chow, Ka-Ho, Gursoy, Emre, Truex, Stacey, Wu, Yanzhao.  2020.  Cross-Layer Strategic Ensemble Defense Against Adversarial Examples. 2020 International Conference on Computing, Networking and Communications (ICNC). :456–460.

Deep neural networks (DNNs) have demonstrated success in multiple domains. However, DNN models are inherently vulnerable to adversarial examples, which are generated by adding adversarial perturbations to benign inputs in order to fool the model into misclassifying them. In this paper, we present a cross-layer strategic ensemble framework and a suite of robust defense algorithms, which are attack-independent and capable of auto-repairing and auto-verifying the target model under attack. Our strategic ensemble approach makes three original contributions. First, we employ input-transformation diversity to design the input-layer strategic transformation ensemble algorithms. Second, we utilize model-disagreement diversity to develop the output-layer strategic model ensemble algorithms. Finally, we create an input-output cross-layer strategic ensemble defense that strengthens defensibility by combining diverse input-transformation-based model ensembles with diverse output-verification model ensembles. Evaluated over 10 attacks on the ImageNet dataset, we show that our strategic ensemble defense algorithms achieve high defense success rates and are more robust, with high attack-prevention success rates and low benign false-negative rates, compared to existing representative defenses.
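
As a minimal illustration of the input-output cross-layer idea, the sketch below combines an input-layer team of transformations with an output-layer team of disagreement-diverse models and accepts a prediction only when enough (transformation, model) pairs agree. The voting rule and threshold are illustrative assumptions, not the paper's exact algorithms.

```python
# Cross-layer ensemble sketch: every input transformation is paired with
# every model in the output-layer team, and the majority label is accepted
# only if the agreement ratio clears an (assumed) threshold.
from collections import Counter

def cross_layer_ensemble(input_team, model_team, x, accept_ratio=0.6):
    """input_team: list of input transformations;
       model_team: list of diverse classifiers mapping an input to a label."""
    votes = Counter(m(t(x)) for t in input_team for m in model_team)
    label, count = votes.most_common(1)[0]
    accepted = count / (len(input_team) * len(model_team)) >= accept_ratio
    return label, accepted     # ensemble prediction + cross-layer verification

# Usage sketch (names hypothetical):
# label, ok = cross_layer_ensemble([identity, quantize], [model_a, model_b], img)
```
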

Chow, Ka-Ho, Wei, Wenqi, Wu, Yanzhao, Liu, Ling.  2019.  Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks. 2019 IEEE International Conference on Big Data (Big Data). :1282–1291.
Deep neural networks (DNNs) have demonstrated impressive performance on many challenging machine learning tasks. However, DNNs are vulnerable to adversarial inputs generated by adding maliciously crafted perturbations to benign inputs. As a growing number of attacks have been reported to generate adversarial inputs of varying sophistication, the defense-attack arms race has accelerated. In this paper, we present MODEF, a cross-layer model diversity ensemble framework. MODEF intelligently combines an unsupervised model-denoising ensemble with a supervised model-verification ensemble by quantifying model diversity, aiming to boost the robustness of the target model against adversarial examples. Evaluated using eleven representative attacks on popular benchmark datasets, we show that MODEF achieves remarkable defense success rates compared with existing defense methods, and provides a superior capability for repairing adversarial inputs and making correct predictions with high accuracy in the presence of black-box attacks.
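
The following is a hedged sketch of the two-stage pipeline the abstract describes: an unsupervised denoising ensemble repairs the input, and a supervised verification ensemble cross-checks the target model's prediction. The denoiser/model interfaces and the simple voting rule are assumptions for illustration, not MODEF's actual algorithms.

```python
# MODEF-style pipeline sketch: denoise-and-vote to repair, then cross-check
# against a verification ensemble; disagreement flags the input as suspicious.
from collections import Counter

def modef_predict(target_model, denoisers, verifiers, x):
    # Layer 1: denoising ensemble produces repaired candidates of the input.
    candidates = [x] + [d(x) for d in denoisers]
    # Target model votes over the candidates (repair step).
    repaired = Counter(target_model(c) for c in candidates).most_common(1)[0][0]
    # Layer 2: diverse supervised verification models vote on the input.
    check = Counter(v(x) for v in verifiers).most_common(1)[0][0]
    if check == repaired:
        return repaired            # prediction verified
    return None                    # flag as suspicious / reject
```
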
Gursoy, Mehmet Emre, Liu, Ling, Truex, Stacey, Yu, Lei, Wei, Wenqi.  2018.  Utility-Aware Synthesis of Differentially Private and Attack-Resilient Location Traces. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :196–211.
As mobile devices and location-based services become increasingly ubiquitous, the privacy of mobile users' location traces continues to be a major concern. Traditional privacy solutions rely on perturbing each position in a user's trace and replacing it with a fake location. However, recent studies have shown that such point-based perturbation of locations is susceptible to inference attacks and suffers from serious utility losses, because it disregards the moving trajectory and continuity in full location traces. In this paper, we argue that privacy-preserving synthesis of complete location traces can be an effective solution to this problem. We present AdaTrace, a scalable location trace synthesizer with three novel features: provable statistical privacy, deterministic attack resilience, and strong utility preservation. AdaTrace builds a generative model from a given set of real traces through a four-phase synthesis process consisting of feature extraction, synopsis learning, privacy- and utility-preserving noise injection, and generation of differentially private synthetic location traces. The output traces crafted by AdaTrace preserve utility-critical information existing in real traces, and are robust against known location trace attacks. We validate the effectiveness of AdaTrace by comparing it with three state-of-the-art approaches (ngram, DPT, and SGLT) using real location trace datasets (Geolife and Taxi) as well as a simulated dataset of 50,000 vehicles in Oldenburg, Germany. AdaTrace offers up to a 3-fold improvement in trajectory utility, and is orders of magnitude faster than previous work, while preserving differential privacy and attack resilience.
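
The skeleton below mirrors the four phases named in the abstract: feature extraction, synopsis learning, noise injection, and synthetic trace generation. Only the phase names come from the abstract; the toy statistics, the Laplace noise calibration, and the sampling step are placeholder assumptions and do not reproduce AdaTrace's attack-resilience or utility-preservation machinery.

```python
# Four-phase pipeline skeleton with toy placeholders for each phase.
import numpy as np

def extract_features(traces, grid=16):
    """Phase 1: feature extraction (visit counts on a grid, trip lengths)."""
    pts = np.concatenate(traces)
    counts, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=grid,
                                  range=[[0, 1], [0, 1]])
    return {"counts": counts, "lengths": np.array([len(t) for t in traces])}

def learn_synopsis(features):
    """Phase 2: synopsis learning (here just compact summaries)."""
    return {"counts": features["counts"],
            "mean_len": features["lengths"].mean()}

def add_dp_noise(synopsis, epsilon):
    """Phase 3: noise injection; Laplace noise on counts as a toy calibration."""
    noisy = np.maximum(synopsis["counts"] +
                       np.random.laplace(0, 1.0 / epsilon,
                                         synopsis["counts"].shape), 0)
    return {"probs": noisy / noisy.sum(), "mean_len": synopsis["mean_len"]}

def generate_traces(noisy_synopsis, n, grid=16):
    """Phase 4: sample synthetic traces from the noisy synopsis."""
    out, probs = [], noisy_synopsis["probs"].flatten()
    for _ in range(n):
        steps = max(2, int(np.random.poisson(noisy_synopsis["mean_len"])))
        cells = np.random.choice(len(probs), size=steps, p=probs)
        cx, cy = np.unravel_index(cells, (grid, grid))
        out.append(np.column_stack([cx, cy]) / grid)
    return out

# Toy usage: 100 random traces of 10 (x, y) points each in the unit square.
real_traces = [np.random.random((10, 2)) for _ in range(100)]
synthetic = generate_traces(
    add_dp_noise(learn_synopsis(extract_features(real_traces)), epsilon=1.0),
    n=200)
```
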
Zhang, Rui, Xue, Rui, Yu, Ting, Liu, Ling.  2016.  Dynamic and Efficient Private Keyword Search over Inverted Index–Based Encrypted Data. ACM Trans. Internet Technol. 16:21:1–21:20.

Querying over encrypted data is gaining increasing popularity in cloud-based data hosting services. Security and efficiency are recognized as two important and yet conflicting requirements for querying over encrypted data. In this article, we propose an efficient private keyword search (EPKS) scheme that supports binary search and extend it to dynamic settings (called DEPKS) for inverted index-based encrypted data. First, we describe our approach to constructing a searchable symmetric encryption (SSE) scheme that supports binary search. Second, we present a novel framework for EPKS and provide its formal security definitions in terms of plaintext privacy and predicate privacy by modifying Shen et al.'s security notions [Shen et al. 2009]. Third, built on the proposed framework, we design an EPKS scheme whose complexity is logarithmic in the number of keywords. The scheme is based on groups of prime order and enjoys strong notions of security, namely statistical plaintext privacy and statistical predicate privacy. Fourth, we extend the EPKS scheme to support dynamic keyword and document updates. The extended scheme not only maintains logarithmic-time search efficiency, plaintext privacy, and predicate privacy, but also requires fewer rounds of communication for updates than existing dynamic searchable encryption schemes. We experimentally evaluate the proposed EPKS and DEPKS schemes and show that they are significantly more efficient, in terms of both keyword search complexity and communication complexity, than existing randomized SSE schemes.
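
To illustrate only the "binary search over an inverted index, logarithmic in the number of keywords" idea, the toy sketch below replaces keywords with deterministic HMAC tokens, keeps the token list sorted, and answers a query by binary search. The actual EPKS/DEPKS construction is built on groups of prime order and provides plaintext and predicate privacy; none of that cryptography is reproduced here, and the key and index are hypothetical.

```python
# Toy logarithmic keyword lookup: HMAC tokens + sorted list + bisect.
import hmac, hashlib
from bisect import bisect_left

KEY = b"shared-secret-key"                    # hypothetical search key

def token(keyword: str) -> bytes:
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).digest()

def build_index(inverted_index):
    """inverted_index: dict mapping keyword -> list of document ids."""
    entries = sorted((token(kw), docs) for kw, docs in inverted_index.items())
    tokens = [t for t, _ in entries]
    postings = [d for _, d in entries]
    return tokens, postings

def search(tokens, postings, keyword):
    """O(log n) lookup in the number of keywords n."""
    t = token(keyword)
    i = bisect_left(tokens, t)
    return postings[i] if i < len(tokens) and tokens[i] == t else []

tokens, postings = build_index({"blockchain": [1, 4], "privacy": [2], "polar": [3]})
print(search(tokens, postings, "privacy"))    # -> [2]
```
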