Secure Kernel Machines Against Evasion Attacks

Title: Secure Kernel Machines Against Evasion Attacks
Publication Type: Conference Paper
Year of Publication: 2016
Authors: Russu, Paolo, Demontis, Ambra, Biggio, Battista, Fumera, Giorgio, Roli, Fabio
Conference Name: Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security
Publisher: ACM
Conference Location: New York, NY, USA
ISBN Number: 978-1-4503-4573-6
Keywords: Adversarial Machine Learning, artificial intelligence security, Collaboration, composability, evasion attacks, game theoretic security, Human Behavior, kernel methods, Metrics, pubcrawl, Resiliency, Scalability, secure learning, spam detection, Support vector machines
Abstract

Machine learning is widely used in security-sensitive settings like spam and malware detection, although it has been shown that malicious data can be carefully modified at test time to evade detection. To overcome this limitation, adversary-aware learning algorithms have been developed, exploiting robust optimization and game-theoretical models to incorporate knowledge of potential adversarial data manipulations into the learning algorithm. Although these techniques have been shown to be effective in some adversarial learning tasks, their adoption in practice is hindered by several factors, including the difficulty of meeting specific theoretical requirements, the complexity of implementation, and scalability issues in terms of the computational time and space required during training. In this work, we aim to develop secure kernel machines against evasion attacks that are not computationally more demanding than their non-secure counterparts. In particular, leveraging recent work on robustness and regularization, we show that the security of a linear classifier can be drastically improved by selecting a proper regularizer, depending on the kind of evasion attack, as well as by unbalancing the cost of classification errors. We then discuss the security of nonlinear kernel machines, and show that a proper choice of the kernel function is crucial. We also show that unbalancing the cost of classification errors and varying some kernel parameters can further improve classifier security, yielding decision functions that better enclose the legitimate data. Our results on spam and PDF malware detection corroborate our analysis.
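The abstract's two knobs for nonlinear kernel machines, unbalancing the cost of classification errors and tuning a kernel parameter so the decision function wraps more tightly around the legitimate class, can be sketched with an off-the-shelf SVM. This is a minimal illustrative sketch, not the authors' implementation: the toy dataset, the RBF bandwidth `gamma=2.0`, and the class weights are assumptions chosen only to show the mechanism.

```python
# Sketch: an RBF-kernel SVM whose error costs are unbalanced toward the
# malicious class and whose kernel bandwidth is increased, so the decision
# region encloses the legitimate cluster more tightly. All parameter values
# here are illustrative assumptions, not the paper's experimental setup.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy data: legitimate samples (label 0) in a tight cluster,
# malicious samples (label 1) spread around them.
X_legit = rng.normal(loc=0.0, scale=0.5, size=(100, 2))
X_malic = rng.normal(loc=2.0, scale=1.5, size=(100, 2))
X = np.vstack([X_legit, X_malic])
y = np.array([0] * 100 + [1] * 100)

# Baseline: balanced error costs, default RBF bandwidth.
baseline = SVC(kernel="rbf", gamma="scale").fit(X, y)

# "Secured" variant (illustrative): penalize errors on the malicious
# class more heavily via class_weight, and raise gamma so the decision
# function hugs the legitimate cluster.
secured = SVC(kernel="rbf", gamma=2.0,
              class_weight={0: 1.0, 1: 5.0}).fit(X, y)

# An evasion-style probe: a malicious point nudged toward the
# legitimate region at test time.
evading = np.array([[0.9, 0.9]])
print("baseline prediction:", baseline.predict(evading))
print("secured prediction: ", secured.predict(evading))
```

The intuition being illustrated is that a tighter, cost-unbalanced decision boundary leaves less room for an attacker to move a malicious sample into the region classified as legitimate; the paper formalizes when such regularizer, cost, and kernel choices improve security without extra training cost.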

URL: http://doi.acm.org/10.1145/2996758.2996771
DOI: 10.1145/2996758.2996771
Citation Key: russu_secure_2016