Biblio

Filters: Author is Liu, Jiqiang
Chen, Tong, Xiang, Yingxiao, Li, Yike, Tian, Yunzhe, Tong, Endong, Niu, Wenjia, Liu, Jiqiang, Li, Gang, Alfred Chen, Qi.  2021.  Protecting Reward Function of Reinforcement Learning via Minimal and Non-catastrophic Adversarial Trajectory. 2021 40th International Symposium on Reliable Distributed Systems (SRDS). :299–309.
Reward functions are critical hyperparameters with commercial value for individual or distributed reinforcement learning (RL), as slightly different reward functions can result in significantly different performance. However, existing inverse reinforcement learning (IRL) methods can approximate reward functions simply by observing collected expert trajectories. In a real RL process, the key issue is therefore how to generate a polluted trajectory that mounts an adversarial attack on IRL and thereby protects the reward function. Meanwhile, considering the actual RL cost, generated adversarial trajectories should be minimal and non-catastrophic so that normal RL performance is preserved. In this work, we propose a novel approach that crafts adversarial trajectories disguised as expert ones, decreasing IRL performance and realizing an anti-IRL capability. First, we design a reward clustering-based metric that integrates the advantages of both fine- and coarse-grained IRL assessment, namely expected value difference (EVD) and mean reward loss (MRL). Based on this metric, we then develop an adversarial attack built on agglomerative nesting (AGNES) clustering and determine targeted states to serve as starting states for reward perturbation. Finally, we employ an intrinsic fear model to predict the probability of imminent catastrophe, which supports generating non-catastrophic adversarial trajectories. Extensive experiments with 7 state-of-the-art IRL algorithms on the Object World benchmark demonstrate that our approach (a) decreases IRL performance and (b) produces minimal, non-catastrophic adversarial trajectories.
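To make the targeted-state selection step concrete, below is a minimal Python sketch of AGNES-style (agglomerative) clustering over per-state reward estimates, returning candidate starting states for reward perturbation. The function names, the use of scipy, and the toy data are illustrative assumptions, not the authors' implementation; the EVD/MRL clustering-based metric and the intrinsic fear model are omitted.

```python
# Hypothetical sketch of AGNES-style reward clustering to pick target
# states for trajectory perturbation (not the paper's actual code).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def select_target_states(state_rewards, n_clusters=3):
    """Cluster per-state reward estimates with agglomerative nesting and
    return indices of states in the highest-reward cluster, which serve
    as candidate starting states for reward perturbation."""
    rewards = np.asarray(state_rewards).reshape(-1, 1)
    # 'average' linkage corresponds to classic agglomerative nesting (AGNES).
    tree = linkage(rewards, method="average")
    labels = fcluster(tree, t=n_clusters, criterion="maxclust")
    # Pick the cluster with the largest mean reward as the target.
    means = {c: rewards[labels == c].mean() for c in np.unique(labels)}
    target = max(means, key=means.get)
    return np.where(labels == target)[0]

# Example: 10 states with noisy rewards; states 7-9 stand out.
rng = np.random.default_rng(0)
rewards = np.concatenate([rng.normal(0, 0.1, 7), rng.normal(1, 0.1, 3)])
print(select_target_states(rewards))  # e.g. [7 8 9]
```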
Li, Qiang, Song, Jinke, Tan, Dawei, Wang, Haining, Liu, Jiqiang.  2021.  PDGraph: A Large-Scale Empirical Study on Project Dependency of Security Vulnerabilities. 2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :161–173.
The reuse of libraries in software development has become prevalent as a way to improve development efficiency and software quality. However, security vulnerabilities in reused libraries propagate through project dependencies and pose a severe security threat, yet they have not been well studied. In this paper, we present the first large-scale empirical study of project dependencies with respect to security vulnerabilities. We developed PDGraph, an innovative approach for analyzing publicly known security vulnerabilities among numerous project dependencies, which provides a new perspective for assessing security risks in the wild. Our large-scale collection of dependency data covers 337,415 projects and 1,385,338 dependency relations. PDGraph generates a project dependency graph in which each node is a project and each edge indicates a dependency relationship. We conducted experiments to validate the efficacy of PDGraph and characterized its features for security analysis. We found that 1,014 projects have publicly disclosed vulnerabilities and that more than 67,806 projects depend on them directly. Among these, 42,441 projects still manifest 67,581 insecure dependency relationships, meaning they are built on vulnerable versions of reused libraries even though the vulnerabilities are publicly known. During our eight-month observation period, only 1,266 insecure edges were fixed by updating the corresponding vulnerable libraries to secure versions. Furthermore, we uncovered four underlying dependency risks that can significantly reduce the difficulty of compromising systems, and we conducted a quantitative analysis of these risks on PDGraph.
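As a sketch of the core data structure, the following hypothetical Python snippet builds a small PDGraph-style directed dependency graph with networkx and queries which projects depend, directly or transitively, on vulnerable ones. The toy edge list and variable names are assumptions for illustration, not data or code from the paper.

```python
# Hypothetical PDGraph-style dependency graph: nodes are projects,
# and a directed edge A -> B means "A depends on B".
import networkx as nx

deps = [
    ("app-frontend", "lib-json"),
    ("app-backend", "lib-json"),
    ("app-backend", "lib-crypto"),
    ("lib-crypto", "lib-math"),
]
vulnerable = {"lib-json"}  # projects with publicly disclosed vulnerabilities

g = nx.DiGraph(deps)

# Projects with an insecure *direct* dependency edge.
direct = {p for v in vulnerable for p in g.predecessors(v)}
print(direct)  # {'app-frontend', 'app-backend'}

# Projects affected transitively (any dependency path to a vulnerable node).
transitive = {p for v in vulnerable for p in nx.ancestors(g, v)}
print(transitive)
```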
Qiu, Shuo, Wang, Boyang, Li, Ming, Victors, Jesse, Liu, Jiqiang, Shi, Yanfeng, Wang, Wei.  2016.  Fast, Private and Verifiable: Server-aided Approximate Similarity Computation over Large-Scale Datasets. Proceedings of the 4th ACM International Workshop on Security in Cloud Computing. :29–36.

Computing similarity, especially Jaccard similarity, between two datasets is a fundamental building block in big data analytics, with extensive applications including genome matching, plagiarism detection, and social networking. Increasing user privacy concerns over the release of sensitive data have made it desirable and necessary for two users to evaluate the Jaccard similarity of their datasets in a privacy-preserving manner. In this paper, we propose two efficient and secure protocols to compute the Jaccard similarity of two users' private sets with the help of a server that is not fully trusted. Specifically, to boost efficiency, we run the MinHash algorithm on encrypted data, so the output of our protocols is guaranteed to be a close approximation of the exact value. In both protocols, only the approximate similarity result is leaked to the server and the users. The first protocol is secure against a semi-honest server, while the second, with a novel consistency-check mechanism, further achieves result verifiability against a malicious server that cheats during execution. Experimental results show that our first protocol computes the approximate Jaccard similarity of two billion-element sets within only 6 minutes (under 256-bit security in parallel mode). To the best of our knowledge, our consistency-check mechanism is the first to realize efficient verification of approximate similarity computation.
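For intuition about the MinHash building block, here is a minimal plaintext sketch (no encryption and no server): each party condenses its set into k salted-hash minima, and the fraction of positions where the two signatures agree estimates the Jaccard similarity. The salting scheme and the choice k=128 are illustrative assumptions; the paper's protocols additionally carry out this computation over encrypted data.

```python
# Minimal MinHash sketch in the clear (illustrative, not the paper's
# encrypted protocol): the agreement rate of k per-salt hash minima
# is an unbiased estimator of the Jaccard similarity.
import hashlib

def minhash_signature(items, k=128):
    """Return k minima of independently salted hashes of the set."""
    sig = []
    for i in range(k):
        salt = str(i).encode()
        sig.append(min(
            hashlib.sha256(salt + str(x).encode()).hexdigest()
            for x in items))
    return sig

def approx_jaccard(sig_a, sig_b):
    """Fraction of positions where the two signatures agree."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = set(range(0, 1000))
b = set(range(200, 1200))
true_j = len(a & b) / len(a | b)  # 800 / 1200 = 0.667
est_j = approx_jaccard(minhash_signature(a), minhash_signature(b))
print(true_j, est_j)  # estimate should be close to the true value
```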