Adversarial Attacks to API Recommender Systems: Time to Wake Up and Smell the Coffee?

Title: Adversarial Attacks to API Recommender Systems: Time to Wake Up and Smell the Coffee?
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Nguyen, Phuong T., Di Sipio, Claudio, Di Rocco, Juri, Di Penta, Massimiliano, Di Ruscio, Davide
Conference Name: 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE)
Keywords: adversarial attacks, Adversarial Machine Learning, API mining, codes, Human Behavior, Open Source Software, pubcrawl, recommender systems, resilience, Resiliency, Scalability, software engineering, Task Analysis, Training data
Abstract: Recommender systems in software engineering provide developers with a wide range of valuable items to help them complete their tasks. Among others, API recommender systems have gained momentum in recent years as they have become more successful at suggesting API calls or code snippets. While these systems have proven effective in terms of prediction accuracy, less attention has been paid to their resilience against adversarial attempts. In fact, by crafting the recommenders' learning material, e.g., data from large open-source software (OSS) repositories, hostile users may succeed in injecting malicious data, putting at risk the software clients that adopt API recommender systems. In this paper, we present an empirical investigation of adversarial machine learning techniques and their possible influence on recommender systems. The evaluation performed on three state-of-the-art API recommender systems reveals a worrying outcome: none of them is immune to malicious data. This result triggers the need for effective countermeasures to protect recommender systems against hostile attacks disguised in training data.
DOI: 10.1109/ASE51524.2021.9678946
Citation Key: nguyen_adversarial_2021