Biblio

Filters: Keyword is data poisoning attacks
2023-06-22
Shams, Sulthana, Leith, Douglas J.  2022.  Improving Resistance of Matrix Factorization Recommenders To Data Poisoning Attacks. 2022 Cyber Research Conference - Ireland (Cyber-RCI). :1–4.
In this work, we conduct a systematic study of data poisoning attacks on Matrix Factorisation (MF) based Recommender Systems (RS), in which a determined attacker injects fake users with false user-item feedback with the objective of promoting a target item by increasing its rating. We explore the capability of an MF-based approach to reduce the impact of the attack on the targeted item in the system. We develop and evaluate multiple techniques to update the user and item feature matrices when incorporating new ratings. We also study the effectiveness of the attack as the number of filler items increases and under different choices of target item. Our experimental results on two real-world datasets show that the observations from the study could be used to design a more robust MF-based RS.
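
As a rough Python sketch (not the authors' implementation) of the fake-user injection attack the abstract describes, one might append fake user rows that give the target item the maximum rating plus a few random filler ratings; the function name, rating scale, and toy matrix below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def inject_fake_users(R, target_item, n_fake=50, n_filler=10, max_rating=5.0):
        # Append fake user rows that rate the target item with the maximum rating
        # and a handful of random filler items (0 denotes an unobserved rating).
        n_items = R.shape[1]
        fake = np.zeros((n_fake, n_items))
        other_items = np.delete(np.arange(n_items), target_item)
        for u in range(n_fake):
            fake[u, target_item] = max_rating
            fillers = rng.choice(other_items, size=n_filler, replace=False)
            fake[u, fillers] = rng.integers(1, 6, size=n_filler)
        return np.vstack([R, fake])

    # Toy ratings matrix: 100 genuine users x 50 items, roughly 10% observed.
    R = np.where(rng.random((100, 50)) < 0.1, rng.integers(1, 6, size=(100, 50)), 0.0)
    R_poisoned = inject_fake_users(R, target_item=7)
    print(R.shape, "->", R_poisoned.shape)  # (100, 50) -> (150, 50)
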
2023-01-06
Erbil, Pinar, Gursoy, M. Emre.  2022.  Detection and Mitigation of Targeted Data Poisoning Attacks in Federated Learning. 2022 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :1–8.
Federated learning (FL) has emerged as a promising paradigm for distributed training of machine learning models. In FL, several participants train a global model collaboratively by only sharing model parameter updates while keeping their training data local. However, FL was recently shown to be vulnerable to data poisoning attacks, in which malicious participants send parameter updates derived from poisoned training data. In this paper, we focus on defending against targeted data poisoning attacks, where the attacker’s goal is to make the model misbehave for a small subset of classes while the rest of the model is relatively unaffected. To defend against such attacks, we first propose a method called MAPPS for separating malicious updates from benign ones. Using MAPPS, we propose three methods for attack detection: MAPPS + X-Means, MAPPS + VAT, and their Ensemble. Then, we propose an attack mitigation approach in which a "clean" model (i.e., a model that is not negatively impacted by an attack) can be trained despite the existence of a poisoning attempt. We empirically evaluate all of our methods using popular image classification datasets. Results show that we can achieve >95% true positive rates while incurring only <2% false positive rates. Furthermore, the clean models trained using our proposed methods have accuracy comparable to models trained in an attack-free scenario.
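
A generic sketch of the underlying detection idea, clustering the flattened client updates and treating the minority cluster as suspect, is given below; this is not the paper's MAPPS, X-Means, or VAT method, and the synthetic updates and two-cluster assumption are illustrative only.

    import numpy as np
    from sklearn.cluster import KMeans

    def flag_suspicious_updates(updates):
        # updates: (n_clients, n_params) array of flattened model parameter updates.
        # Cluster into two groups and flag the smaller group as potentially poisoned.
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(updates)
        minority = np.argmin(np.bincount(labels))
        return labels == minority

    rng = np.random.default_rng(1)
    benign = rng.normal(0.0, 0.1, size=(18, 1000))   # benign client updates
    poisoned = rng.normal(0.8, 0.1, size=(2, 1000))  # shifted, targeted updates
    mask = flag_suspicious_updates(np.vstack([benign, poisoned]))
    print("flagged clients:", np.where(mask)[0])     # expected: [18 19]
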
2022-07-15
Fan, Wenqi, Derr, Tyler, Zhao, Xiangyu, Ma, Yao, Liu, Hui, Wang, Jianping, Tang, Jiliang, Li, Qing.  2021.  Attacking Black-box Recommendations via Copying Cross-domain User Profiles. 2021 IEEE 37th International Conference on Data Engineering (ICDE). :1583–1594.
Recommender systems, which aim to suggest personalized lists of items to users, have drawn a lot of attention. In fact, many state-of-the-art recommender systems are built on deep neural networks (DNNs). Recent studies have shown that these deep neural networks are vulnerable to attacks, such as data poisoning, which generate fake users to promote a selected set of items. Correspondingly, effective defense strategies have been developed to detect such generated users with fake profiles. Thus, new strategies for creating more ‘realistic’ user profiles to promote a set of items should be investigated to further understand the vulnerability of DNN-based recommender systems. In this work, we present CopyAttack, a novel reinforcement-learning-based black-box attack framework that harnesses real users from a source domain by copying their profiles into the target domain with the goal of promoting a subset of items. CopyAttack is constructed to both efficiently and effectively learn policy gradient networks that first select, then further refine/craft, user profiles from the source domain, and ultimately copy them into the target domain. CopyAttack’s goal is to maximize the hit ratio of the targeted items in the Top-k recommendation lists of users in the target domain. We conducted experiments on two real-world datasets and empirically verified the effectiveness of the proposed framework. The implementation of CopyAttack is available at https://github.com/wenqifan03/CopyAttack.
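
The attack objective mentioned in the abstract, the hit ratio of the targeted items in users' Top-k lists, can be computed as in the short sketch below; the toy lists are stand-ins, not the paper's data or code.

    def hit_ratio_at_k(topk_lists, target_items, k=10):
        # Fraction of users whose Top-k recommendation list contains a targeted item.
        hits = sum(1 for topk in topk_lists if set(topk[:k]) & set(target_items))
        return hits / len(topk_lists)

    # Toy example: 4 users, targeted items {42, 99}.
    topk_lists = [[1, 42, 3], [5, 6, 7], [99, 2, 4], [8, 9, 10]]
    print(hit_ratio_at_k(topk_lists, target_items={42, 99}, k=3))  # 0.5
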
2020-11-04
Khalid, F., Hanif, M. A., Rehman, S., Ahmed, R., Shafique, M.  2019.  TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks. 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design (IOLTS). :188–193.

Most data manipulation attacks on deep neural networks (DNNs) during the training stage introduce a perceptible noise that can be countered by preprocessing during inference or identified during the validation phase. Therefore, data poisoning attacks during inference (e.g., adversarial attacks) are becoming more popular. However, many of them do not consider the imperceptibility factor in their optimization algorithms, and can be detected by correlation and structural similarity analysis, or are noticeable (e.g., by humans) in multi-level security systems. Moreover, the majority of inference attacks rely on some knowledge about the training dataset. In this paper, we propose a novel methodology that automatically generates imperceptible attack images by using the back-propagation algorithm on pre-trained DNNs, without requiring any information about the training dataset (i.e., completely training data-unaware). We present a case study on traffic sign detection using VGGNet trained on the German Traffic Sign Recognition Benchmark dataset in an autonomous driving use case. Our results demonstrate that the generated attack images successfully perform misclassification while remaining imperceptible in both “subjective” and “objective” quality tests.
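
A minimal PyTorch sketch of the general idea, crafting a small norm-bounded perturbation from the back-propagated gradients of a pre-trained network without touching any training data, is given below; it is not the exact TrISec procedure, and the stand-in network, input, and perturbation budget are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Stand-in classifier; the paper uses a pre-trained VGGNet on the 43-class GTSRB.
    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 43)).eval()

    x = torch.rand(1, 3, 32, 32)              # stand-in input image
    delta = torch.zeros_like(x, requires_grad=True)
    eps, lr, steps = 2 / 255, 1 / 255, 20     # small budget keeps the change imperceptible

    orig_label = model(x).argmax(dim=1)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), orig_label)
        loss.backward()                       # gradients come from back-propagation only
        with torch.no_grad():
            delta += lr * delta.grad.sign()   # step away from the original class
            delta.clamp_(-eps, eps)           # bound the perturbation magnitude
            delta.grad.zero_()

    print("prediction changed:", bool(model(x + delta).argmax(dim=1) != orig_label))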

2018-07-06
Zhang, R., Zhu, Q.  2017.  A game-theoretic defense against data poisoning attacks in distributed support vector machines. 2017 IEEE 56th Annual Conference on Decision and Control (CDC). :4582–4587.

With a large number of sensors and control units in networked systems, distributed support vector machines (DSVMs) play a fundamental role in scalable and efficient multi-sensor classification and prediction tasks. However, DSVMs are vulnerable to adversaries who can modify and generate data to deceive the system into misclassification and misprediction. This work aims to design defense strategies for the DSVM learner against a potential adversary. We use a game-theoretic framework to capture the conflicting interests between the DSVM learner and the attacker. The Nash equilibrium of the game allows predicting the outcome of learning algorithms in adversarial environments and enhancing the resilience of the machine learning system through dynamic distributed algorithms. We develop a secure and resilient DSVM algorithm with a rejection method, and show its resiliency against an adversary with numerical experiments.
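
As a much-simplified illustration of classification with rejection (the mechanism this defense builds on, not the paper's game-theoretic DSVM algorithm itself), the sketch below abstains whenever the SVM decision margin is small; the dataset, classifier, and threshold are assumptions.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)
    clf = LinearSVC(C=1.0, max_iter=5000).fit(X, y)

    def predict_with_rejection(clf, X, margin_threshold=0.5):
        # Predict normally, but return -1 (reject) for points close to the decision
        # boundary, where poisoned data is most likely to have flipped the outcome.
        scores = clf.decision_function(X)
        preds = (scores > 0).astype(int)
        preds[np.abs(scores) < margin_threshold] = -1
        return preds

    preds = predict_with_rejection(clf, X)
    print("rejected fraction:", np.mean(preds == -1))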