Title | Link Prediction Adversarial Attack Via Iterative Gradient Attack |
Publication Type | Journal Article |
Year of Publication | 2020 |
Authors | Chen, J., Lin, X., Shi, Z., Liu, Y. |
Journal | IEEE Transactions on Computational Social Systems |
Volume | 7 |
Pagination | 1081–1094 |
ISSN | 2329-924X |
Keywords | adversarial attack, adversarial graph, Attack Graphs, composability, data privacy, deep models, deep neural networks, defense, GAE, gradient attack (GA), gradient attack strategy, gradient information, gradient methods, graph autoencoder, graph evolved tasks, graph theory, iterative gradient attack, learning (artificial intelligence), Link prediction, link prediction adversarial attack problem, neural nets, node classification, Perturbation methods, Prediction algorithms, Predictive Metrics, Predictive models, privacy, pubcrawl, real-world graphs, Resiliency, Robustness, security of data, security problem, Task Analysis, trained graph autoencoder model |
Abstract | Deep neural networks are increasingly applied to graph-related tasks such as node classification and link prediction. However, the vulnerability of these deep models can be exposed by carefully crafted adversarial examples generated by various attack methods. To explore this security problem, we define the link prediction adversarial attack problem and propose a novel iterative gradient attack (IGA) strategy that exploits the gradient information of a trained graph autoencoder (GAE) model. Not surprisingly, the GAE can be fooled by an adversarial graph in which only a few links are perturbed relative to the clean one. Comprehensive experiments on different real-world graphs indicate that most deep models, GAE included, and even state-of-the-art link prediction algorithms cannot escape the adversarial attack. The attack can also serve as an efficient privacy-protection tool against unwanted link prediction that would violate privacy. Conversely, the adversarial attack provides a robustness metric for evaluating how defensible current link prediction algorithms are. |
DOI | 10.1109/TCSS.2020.3004059 |
Citation Key | chen_link_2020 |
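
Note: The sketch below is an illustrative reconstruction of the gradient-based link-flipping idea summarized in the abstract, not the authors' released code. The toy GAE (one GCN layer with an inner-product decoder), the helper names (`normalize_adj`, `iga_flip`, `ToyGAE`), and the perturbation budget are all assumptions introduced here for clarity.

```python
# Hypothetical PyTorch sketch of one step of an iterative gradient attack:
# flip the single link whose gradient most reduces a trained toy GAE's
# predicted score for a target link, then repeat for a small budget.
import torch

def normalize_adj(a):
    # Symmetrically normalize A + I, as in a standard GCN layer.
    a_hat = a + torch.eye(a.size(0))
    d_inv_sqrt = a_hat.sum(dim=1).clamp(min=1e-8).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)

class ToyGAE(torch.nn.Module):
    """Minimal graph autoencoder: one GCN layer plus inner-product decoder."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, hid_dim, bias=False)

    def forward(self, a, x):
        z = torch.relu(normalize_adj(a) @ self.lin(x))  # node embeddings
        return torch.sigmoid(z @ z.t())                 # predicted link probabilities

def iga_flip(model, adj, feats, target):
    """Return the adjacency matrix with the single most damaging link flipped."""
    a = adj.clone().requires_grad_(True)
    score = model(a, feats)[target]          # score of the target link
    score.backward()
    # Effect of flipping entry (i, j) on the score:
    # roughly -grad for existing links, +grad for absent ones.
    effect = a.grad * (1.0 - 2.0 * adj)
    effect.fill_diagonal_(float("inf"))      # never flip self-loops
    i, j = divmod(effect.argmin().item(), adj.size(0))
    perturbed = adj.clone()
    perturbed[i, j] = perturbed[j, i] = 1.0 - perturbed[i, j]  # keep symmetry
    return perturbed

# Usage: iterate the flip step under a small perturbation budget.
n, d = 8, 4
adj = (torch.rand(n, n) > 0.7).float()
adj = torch.triu(adj, 1)
adj = adj + adj.t()
feats = torch.randn(n, d)
model = ToyGAE(d, 16)                        # assume the GAE is already trained
target = (0, 1)                              # link whose prediction we attack
for _ in range(3):                           # budget of 3 flipped links
    adj = iga_flip(model, adj, feats, target)
```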