Detection of Induced False Negatives in Malware Samples

Title: Detection of Induced False Negatives in Malware Samples
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Wood, Adrian; Johnstone, Michael N.
Conference Name: 2021 18th International Conference on Privacy, Security and Trust (PST)
Keywords: Adversarial Machine Learning, false trust, Heuristic algorithms, machine learning, Malware, poisoning attacks, Poisoning Defences, policy-based governance, Predictive models, privacy, pubcrawl, resilience, Resiliency, Scalability, Training, Training data, Zero day Malware
Abstract: Malware detection is an important area of cyber security. Computer systems rely on malware detection applications to prevent malware attacks from succeeding. Malware detection is not a straightforward task, as new variants of malware are generated at an increasing rate. Machine learning (ML) has been utilised to generate predictive classification models that identify new malware variants which conventional malware detection methods may not detect. ML has, however, been found to be vulnerable to several types of adversarial attack, in which an attacker is able to degrade the classification ability of the ML model. Several defensive measures against adversarial poisoning attacks have been developed, but they often rely on a trusted clean dataset to identify and remove adversarial examples from the training dataset. The defence in this paper does not require a trusted clean dataset; instead, it identifies intentional false negatives (zero-day malware classified as benign) at the testing stage by examining the activation weights of the ML model. The defence identified 94.07% of the successful targeted poisoning attacks.
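The abstract describes flagging suspicious classifications by inspecting the model's internal activations at test time. The paper's actual algorithm is not reproduced here; the following is a hypothetical, minimal sketch of the general idea, assuming access to one hidden layer's activation vector per sample and using random numbers as stand-ins for real activations. All names (`activation_zscore`, `THRESHOLD`, the layer width) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's exact method): flag possible
# induced false negatives by comparing a test sample's hidden-layer
# activations against the typical activation profile observed for
# benign samples during validation.
import random
import statistics

random.seed(0)

DIM = 16  # assumed width of the monitored hidden layer

# Assumed stand-ins for activation vectors of a trained classifier:
# a benign validation set, one anomalous sample, and one typical sample.
benign_acts = [[random.gauss(0.0, 1.0) for _ in range(DIM)] for _ in range(500)]
suspect_act = [random.gauss(4.0, 1.0) for _ in range(DIM)]  # shifted activations
typical_act = [random.gauss(0.0, 1.0) for _ in range(DIM)]

# Per-unit mean and standard deviation of the benign activation profile.
mu = [statistics.fmean(col) for col in zip(*benign_acts)]
sd = [statistics.pstdev(col) + 1e-8 for col in zip(*benign_acts)]

def activation_zscore(act):
    """Mean absolute z-score of a sample's activations vs. the benign profile."""
    return sum(abs((a - m) / s) for a, m, s in zip(act, mu, sd)) / len(act)

THRESHOLD = 2.0  # illustrative cutoff; a real defence would tune this

def looks_like_induced_false_negative(act):
    """True if activations deviate enough from benign norms to warrant review."""
    return activation_zscore(act) > THRESHOLD
```

A sample classified as benign whose activation pattern nonetheless sits far from the benign profile would be queued for further inspection rather than trusted outright.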
DOI: 10.1109/PST52912.2021.9647787
Citation Key: wood_detection_2021