Title | Adversarial Attacks to API Recommender Systems: Time to Wake Up and Smell the Coffee? |
Publication Type | Conference Paper |
Year of Publication | 2021 |
Authors | Nguyen, Phuong T., Di Sipio, Claudio, Di Rocco, Juri, Di Penta, Massimiliano, Di Ruscio, Davide |
Conference Name | 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE) |
Keywords | adversarial attacks, Adversarial Machine Learning, API mining, codes, Human Behavior, Open Source Software, pubcrawl, recommender systems, resilience, Resiliency, Scalability, software engineering, Task Analysis, Training data |
Abstract | Recommender systems in software engineering provide developers with a wide range of valuable items to help them complete their tasks. Among others, API recommender systems have gained momentum in recent years as they have become increasingly successful at suggesting API calls or code snippets. While these systems have proven effective in terms of prediction accuracy, less attention has been paid to their resilience against adversarial attempts. In fact, by manipulating the recommenders' learning material, e.g., data from large open-source software (OSS) repositories, hostile users may succeed in injecting malicious data, putting at risk the software clients that adopt API recommender systems. In this paper, we present an empirical investigation of adversarial machine learning techniques and their possible influence on recommender systems. The evaluation performed on three state-of-the-art API recommender systems reveals a worrying outcome: none of them is immune to malicious data. This result highlights the need for effective countermeasures to protect recommender systems against hostile attacks disguised in training data.
DOI | 10.1109/ASE51524.2021.9678946 |
Citation Key | nguyen_adversarial_2021 |