Biblio

Filters: Author is Tang, Qiang
2019-10-15
Pejo, Balazs, Tang, Qiang, Biczók, Gergely.  2018.  The Price of Privacy in Collaborative Learning. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :2261–2263.

Machine learning algorithms have reached mainstream status and are widely deployed in many applications. The accuracy of such algorithms depends significantly on the size of the underlying training dataset; in reality, a small or medium-sized organization often does not have enough data to train a reasonably accurate model. For such organizations, a realistic solution is to train machine learning models on a joint dataset (the union of the individual ones). Unfortunately, privacy concerns prevent them from straightforwardly doing so. While a number of privacy-preserving solutions exist for collaborating organizations to securely aggregate parameters while training the models, we are not aware of any work that provides a rational framework for the participants to precisely balance the privacy loss and accuracy gain of their collaboration. In this paper, we model the collaborative training process as a two-player game where each player aims to achieve higher accuracy while preserving the privacy of its own dataset. We introduce the notion of Price of Privacy, a novel approach for measuring the impact of privacy protection on accuracy in the proposed framework. Furthermore, we develop a game-theoretical model for different player types, and then either find or prove the existence of a Nash Equilibrium with regard to the strength of privacy protection for each player.
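As a rough illustration of how such a metric might be computed, the sketch below scores the relative accuracy a player gives up when training collaboratively under privacy protection rather than without it. The function name and the exact formula are illustrative assumptions, not the paper's formal definition of Price of Privacy.

```python
# Illustrative sketch only: the paper's formal Price of Privacy may be defined differently.

def price_of_privacy(acc_no_privacy: float, acc_with_privacy: float) -> float:
    """Relative accuracy lost by enabling privacy protection (assumed definition).

    acc_no_privacy:   model accuracy when parameters are shared without protection
    acc_with_privacy: model accuracy when each player perturbs or limits its contribution
    """
    if acc_no_privacy <= 0:
        raise ValueError("baseline accuracy must be positive")
    return max(0.0, (acc_no_privacy - acc_with_privacy) / acc_no_privacy)

# Example: collaboration yields 0.92 accuracy without protection, 0.88 with it.
print(price_of_privacy(0.92, 0.88))  # ~0.043, i.e. about 4.3% of accuracy given up for privacy
```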

Wang, Jun, Arriaga, Afonso, Tang, Qiang, Ryan, Peter Y.A..  2018.  Facilitating Privacy-Preserving Recommendation-as-a-Service with Machine Learning. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :2306–2308.

Machine-Learning-as-a-Service has become increasingly popular, with Recommendation-as-a-Service as one of the representative examples. In such services, providing privacy protection for users is an important topic. A review of the privacy-preserving solutions proposed in the past decade shows that privacy and machine learning are often treated as two competing goals. Although improving cryptographic primitives (e.g., secure multi-party computation (SMC) or homomorphic encryption (HE)) and devising sophisticated secure protocols have achieved remarkable progress, combining them with state-of-the-art recommender systems often yields far-from-practical solutions. We tackle this problem from the direction of machine learning: we aim to design crypto-friendly recommendation algorithms, so that efficient solutions can be obtained by directly using existing cryptographic tools. In particular, we propose an HE-friendly recommender system, referred to as CryptoRec, which (1) decouples user features from the latent feature space, avoiding training the recommendation model on encrypted data; and (2) relies only on addition and multiplication operations, making the model straightforwardly compatible with HE schemes. These properties turn recommendation computation into a simple matrix multiplication. To further improve efficiency, we introduce a sparse-quantization-reuse method which reduces the recommendation-computation time by a factor of 9 (compared to using CryptoRec directly), without compromising accuracy. We demonstrate the efficiency and accuracy of CryptoRec on three real-world datasets. CryptoRec allows a server to estimate a user's preferences on thousands of items within a few seconds on a single PC, with the user's data homomorphically encrypted, while its prediction accuracy remains competitive with state-of-the-art recommender systems computing over clear data. Our solution enables Recommendation-as-a-Service on large datasets at a near-real-time (seconds) level.
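To illustrate why a model built only from additions and multiplications is HE-friendly, here is a minimal numpy sketch in which a user's rating vector is multiplied by pre-trained item weights to produce predictions for every item. The variable names and the particular item-to-item formulation are illustrative assumptions, not CryptoRec's actual parameterization.

```python
import numpy as np

# Illustrative sketch: predictions as a single matrix multiplication.
# W is an item-to-item weight matrix assumed to be trained offline by the server on clear
# data; CryptoRec's actual model may be parameterized differently.
n_items = 1000
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(n_items, n_items))

# The user's rating vector (zeros for unrated items). Under an HE scheme this vector
# would be encrypted; since the computation below uses only additions and
# multiplications, it could be evaluated homomorphically over the ciphertexts.
r_user = np.zeros(n_items)
r_user[[3, 42, 317]] = [5.0, 3.0, 4.0]

predictions = r_user @ W              # one matrix-vector product yields scores for all items
top10 = np.argsort(predictions)[::-1][:10]
print(top10)
```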

2019-04-01
He, Songlin, Tang, Qiang, Wu, Chase Q..  2018.  Censorship Resistant Decentralized IoT Management Systems. Proceedings of the 15th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services. :454–459.

Blockchain technology has been increasingly used for decentralizing cloud-based Internet of Things (IoT) architectures to address some limitations faced by centralized systems. While many existing efforts successfully leverage blockchain for decentralization with multiple servers (full nodes) to handle faulty nodes, an important issue arises: external clients (also called lightweight clients) have to rely on a relay node to communicate with the full nodes in the blockchain. Compromise of such relay nodes may result in a security breach and may even block IoT sensors from the network. We propose censorship-resistant decentralized IoT management systems, which include a "diffusion" function to deliver all messages from sensors to all full nodes and an augmented consensus protocol to check for data loss, replicate processing outcomes, and facilitate opportunistic outcome delivery. We also leverage the cryptographic tool of aggregate signatures to reduce the complexity of communication and signature verification.
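The core of the "diffusion" idea is that a sensor's message reaches every full node rather than passing through a single relay, so no single censoring node can suppress it. The toy sketch below (plain Python, with networking and the cryptographic signing omitted) only illustrates that principle; the class and method names are illustrative assumptions, not the paper's protocol.

```python
# Toy sketch of the diffusion step (no real networking or signatures; names are illustrative).

class FullNode:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.received = []          # messages delivered directly by sensors

    def deliver(self, message: dict):
        self.received.append(message)


def diffuse(message: dict, full_nodes: list) -> None:
    """Send the sensor's message to every full node instead of a single relay,
    so a censoring relay cannot silently drop it."""
    for node in full_nodes:
        node.deliver(message)


nodes = [FullNode(f"node{i}") for i in range(4)]
diffuse({"sensor": "s1", "seq": 7, "reading": 21.5}, nodes)

# In the augmented consensus step, full nodes would compare what they received to
# detect data loss; here, any node missing the message would reveal censorship.
assert all(any(m["seq"] == 7 for m in n.received) for n in nodes)
```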

2018-09-05
Pejo, Balazs, Tang, Qiang.  2017.  To Cheat or Not to Cheat: A Game-Theoretic Analysis of Outsourced Computation Verification. Proceedings of the Fifth ACM International Workshop on Security in Cloud Computing. :3–10.

In the cloud computing era, many organizations tend to outsource their computations to third-party cloud servers in order to avoid computational burdens. To protect service quality, the integrity of the computation results needs to be guaranteed. In this paper, we develop a game-theoretic framework which helps the outsourcer maximize its payoff while ensuring the desired level of integrity for the outsourced computation. We define two Stackelberg games and analyze the sensitivity of the optimal setting to the parameters of the model.
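As a back-of-the-envelope illustration of the trade-off such a framework captures, the sketch below computes the smallest verification probability that makes cheating unprofitable for a rational server in a textbook inspection game. This is not the paper's Stackelberg formulation, and all parameter names are assumptions.

```python
# Textbook inspection-game sketch (illustrative; not the paper's actual model).

def min_verification_probability(cheating_gain: float, penalty: float) -> float:
    """Smallest verification probability p such that a rational server does not cheat.

    A cheating server saves `cheating_gain`, but if the result is verified (probability p)
    and found wrong, it loses `penalty`. Cheating is unprofitable once p * penalty >= cheating_gain.
    """
    if penalty <= 0:
        raise ValueError("penalty must be positive to deter cheating")
    return min(1.0, cheating_gain / penalty)


def outsourcer_expected_cost(p: float, verification_cost: float, payment: float) -> float:
    """Outsourcer's expected cost per job when verifying with probability p."""
    return payment + p * verification_cost


p_star = min_verification_probability(cheating_gain=2.0, penalty=10.0)
print(p_star)                                        # 0.2: verify one job in five
print(outsourcer_expected_cost(p_star, 1.5, 8.0))    # 8.3 expected cost per job
```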

2018-03-05
Tang, Qiang, Yung, Moti.  2017.  Cliptography: Post-Snowden Cryptography. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :2615–2616.

This tutorial will present a systematic overview of kleptography: stealing information subliminally from black-box cryptographic implementations; and cliptography: defense mechanisms that clip the power of kleptographic attacks via specification re-designs (without altering the underlying algorithms). Despite the laudable history of development of modern cryptography, applying cryptographic tools to reliably provide security and privacy in practice is notoriously difficult. One fundamental practical challenge remains: guaranteeing security and privacy without explicit trust in the algorithms and implementations that underlie basic security infrastructure. While the dangers of entertaining adversarial implementations of cryptographic primitives seem obvious, the ramifications of such attacks are surprisingly dire: it turns out that, in wide generality, adversarial implementations of cryptographic algorithms (both deterministic and randomized) may leak private information while producing output that is statistically indistinguishable from that of a faithful implementation. Such attacks were formally studied in kleptography. The Snowden revelations have shown how security and privacy can be lost at a very large scale, even when traditional cryptography seems to be used to protect Internet communication, because kleptography was not taken into consideration. We will first explain how the above-mentioned kleptographic attacks can be carried out in various settings. We will then introduce several simple but rigorous immunizing strategies, inspired by folklore practical wisdom, to protect different algorithms from implementation subversion. These strategies can be applied to ensure the security of most fundamental cryptographic primitives, such as PRGs, digital signatures, and public-key encryption, against kleptographic attacks when they are implemented accordingly. Our new design principles may suggest new standardization methods that help reduce the threat of subverted implementations. We also hope our tutorial will stimulate community-wide efforts to further tackle the fundamental challenge mentioned at the beginning.

Russell, Alexander, Tang, Qiang, Yung, Moti, Zhou, Hong-Sheng.  2017.  Generic Semantic Security Against a Kleptographic Adversary. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :907–922.

Notable recent security incidents have generated intense interest in adversaries which attempt to subvert, perhaps covertly, cryptographic algorithms. In this paper we develop (IND-CPA) semantically secure encryption in this challenging setting. This fundamental encryption primitive has been previously studied in the "kleptographic setting," though existing results must relax the model by introducing trusted components or otherwise constraining the subversion power of the adversary: designing a public-key system that is kleptographically semantically secure (with minimal trust) has remained elusive to date. In this work, we finally achieve such systems, even when all relevant cryptographic algorithms are subject to adversarial (kleptographic) subversion. To this end we exploit novel inter-component randomized cryptographic checking techniques (with an offline checking component), combined with common and simple software-engineering modular programming techniques (applied at the system's black-box specification level). Moreover, our methodology yields a strong generic technique for preserving any semantically secure cryptosystem when it is incorporated into the strong kleptographic adversary setting.

2018-01-16
Tang, Qiang, Wang, Husen.  2017.  Privacy-preserving Hybrid Recommender System. Proceedings of the Fifth ACM International Workshop on Security in Cloud Computing. :59–66.

Privacy issues in recommender systems have attracted the attention of researchers for many years, and a number of solutions have been proposed so far. Unfortunately, most of them are far from practical, as they either degrade the utility or are very inefficient. In this paper, we aim at a more practical solution by proposing a privacy-preserving hybrid recommender system which consists of an incremental matrix factorization (IMF) component and a user-based collaborative filtering (UCF) component. The IMF component provides the fundamental utility while allowing the service provider to efficiently learn feature vectors in the plaintext domain, and the UCF component improves the utility while allowing users to carry out their computations offline. Leveraging somewhat homomorphic encryption (SWHE) schemes, we provide privacy-preserving candidate instantiations for both components. Our experiments demonstrate that the hybrid solution is much more efficient than existing solutions.
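For a sense of why the IMF component can be trained efficiently in the clear, here is a minimal sketch of a single incremental (SGD-style) matrix-factorization update for one newly observed rating. The hyperparameters and variable names are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Illustrative incremental matrix-factorization update for one observed rating.
# The service provider holds the latent factors in plaintext; in the hybrid design,
# encrypted computations arise in the privacy-preserving instantiations, not here.
rng = np.random.default_rng(0)
k, lr, reg = 16, 0.01, 0.02                 # latent dimension, learning rate, regularization
P = rng.normal(scale=0.1, size=(100, k))    # user latent factors
Q = rng.normal(scale=0.1, size=(500, k))    # item latent factors

def imf_update(u: int, i: int, rating: float) -> float:
    """Update user u's and item i's factors from a single new rating; return the error."""
    err = rating - P[u] @ Q[i]
    p_old = P[u].copy()
    P[u] += lr * (err * Q[i] - reg * P[u])
    Q[i] += lr * (err * p_old - reg * Q[i])
    return err

print(imf_update(u=7, i=42, rating=4.0))    # prediction error before the update
```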