Biblio

Filters: Author is Liu, Huan
2023-03-31
Moraffah, Raha, Liu, Huan.  2022.  Query-Efficient Target-Agnostic Black-Box Attack. 2022 IEEE International Conference on Data Mining (ICDM). :368–377.
Adversarial attacks have recently been proposed to scrutinize the security of deep neural networks. Most black-box adversarial attacks, which have partial access to the target through queries, are target-specific; e.g., they require a well-trained surrogate that accurately mimics a given target. In contrast, target-agnostic black-box attacks are developed to attack any target; e.g., they learn a generalized surrogate that can adapt to any target via fine-tuning on samples queried from the target. Despite their success, current state-of-the-art target-agnostic attacks require a tremendous number of fine-tuning steps and consequently an immense number of queries to the target to generate successful attacks. The high query complexity of these attacks makes them easily detectable and thus defendable. We propose a novel query-efficient target-agnostic attack that trains a generalized surrogate network to output the adversarial directions w.r.t. the inputs and equips it with an effective fine-tuning strategy that only fine-tunes the surrogate when it fails to provide useful directions to generate the attacks. In particular, we show that to effectively adapt to any target and generate successful attacks, it is sufficient to fine-tune the surrogate with informative samples that help the surrogate get out of the failure mode with additional information on the target’s local behavior. Extensive experiments on the CIFAR-10 and CIFAR-100 datasets demonstrate that the proposed target-agnostic approach can generate highly successful attacks for any target network with very few fine-tuning steps and thus a significantly smaller number of queries (reduced by several orders of magnitude) compared to the state-of-the-art baselines.
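
For illustration only, the sketch below outlines the "fine-tune the surrogate only on failure" query loop the abstract describes; it is not the authors' implementation. It assumes a PyTorch classifier surrogate, a target_query function that returns the black-box target's logits, an L∞ budget epsilon, and a distillation-style update standing in for the paper's exact fine-tuning objective.

    import torch
    import torch.nn.functional as F

    def adversarial_direction(surrogate, x, y):
        # FGSM-style direction from the surrogate: sign of the input gradient.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(surrogate(x), y)
        loss.backward()
        return x.grad.sign()

    def query_efficient_attack(x, y, surrogate, target_query,
                               epsilon=8 / 255, max_steps=20, lr=1e-4):
        # x: (1, C, H, W) input in [0, 1]; y: true-label tensor of shape (1,).
        # target_query(x) is the only access to the black-box target (assumed to return logits).
        opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
        for _ in range(max_steps):
            # 1. The surrogate proposes an adversarial direction w.r.t. the input.
            direction = adversarial_direction(surrogate, x, y)
            candidate = (x + epsilon * direction).clamp(0.0, 1.0)

            # 2. One query to the target checks whether the candidate succeeds.
            target_logits = target_query(candidate)
            if target_logits.argmax(dim=1).item() != y.item():
                return candidate, True  # success: stop, no fine-tuning needed

            # 3. Failure mode: fine-tune the surrogate on this informative sample,
            #    pulling its local behavior toward the target's response.
            opt.zero_grad()
            distill_loss = F.kl_div(F.log_softmax(surrogate(candidate), dim=1),
                                    F.softmax(target_logits.detach(), dim=1),
                                    reduction="batchmean")
            distill_loss.backward()
            opt.step()
        return x, False  # no successful attack within the query budget

Because the surrogate is updated only on failed candidates, queries are spent mainly on the samples that carry new information about the target's local behavior, which is the source of the query savings claimed in the abstract.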
2019-11-04
Beigi, Ghazaleh, Shu, Kai, Zhang, Yanchao, Liu, Huan.  2018.  Securing Social Media User Data: An Adversarial Approach. Proceedings of the 29th on Hypertext and Social Media. :165–173.
Social media users generate tremendous amounts of data. To better serve users, user-related data often needs to be shared with researchers, advertisers, and application developers. Publishing such data, however, raises concerns about user privacy. To encourage data sharing and mitigate user privacy concerns, a number of anonymization and de-anonymization algorithms have been developed to help protect the privacy of social media users. In this work, we propose a new adversarial attack specialized for social media data. We further provide a principled way to assess the effectiveness of anonymizing different aspects of social media data. Our work sheds light on new privacy risks in social media data that arise from the innate heterogeneity of user-generated data, which requires striking a balance between sharing user data and protecting user privacy.