Biblio

Filters: Keyword is support vector data description
2022-05-19
Deng, Xiaolei, Zhang, Chunrui, Duan, Yubing, Xie, Jiajun, Deng, Kai.  2021.  A Mixed Method For Internal Threat Detection. 2021 IEEE 5th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). 5:748–756.
In recent years, the development of deep learning has brought new ideas to internal threat detection. In this paper, three common deep learning algorithms for threat detection are optimized and extended: feature embedding, drift detection, and sample weighting are introduced into a fully connected neural network (FCNN); an adaptive multi-iteration method is introduced into Support Vector Data Description (SVDD); and a dynamic threshold adjustment mechanism is introduced into a variational autoencoder (VAE). In threat detection, the three methods are used to detect abnormal user behavior, and the intersection of their outputs is taken as the final basis for the threat judgment. Experiments on the CERT r6.2 dataset show that this method can significantly reduce the false positive rate.
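The intersection step the abstract describes, reporting only samples flagged by all three detectors, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the score arrays and the shared 0.5 threshold are hypothetical stand-ins for the FCNN, SVDD, and VAE outputs.

```python
import numpy as np

def flag_anomalies(scores, threshold):
    """Return the set of sample indices whose anomaly score exceeds the threshold."""
    return set(np.flatnonzero(scores > threshold))

def intersect_detectors(alert_sets):
    """Final alerts = intersection of the per-detector alert sets."""
    alerts = alert_sets[0]
    for s in alert_sets[1:]:
        alerts &= s
    return alerts

# Toy per-sample anomaly scores standing in for the three detectors (hypothetical).
fcnn_scores = np.array([0.1, 0.9, 0.8, 0.2])
svdd_scores = np.array([0.2, 0.8, 0.3, 0.9])
vae_scores  = np.array([0.3, 0.95, 0.7, 0.1])

alerts = intersect_detectors([
    flag_anomalies(fcnn_scores, 0.5),
    flag_anomalies(svdd_scores, 0.5),
    flag_anomalies(vae_scores, 0.5),
])
# Only sample 1 is flagged by all three detectors, so only it is reported.
```

Requiring agreement of all detectors trades recall for precision, which is consistent with the abstract's stated goal of reducing the false positive rate.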
2018-07-06
Kloft, Marius, Laskov, Pavel.  2012.  Security Analysis of Online Centroid Anomaly Detection. Journal of Machine Learning Research. 13:3681–3724.

Security issues are crucial in a number of machine learning applications, especially in scenarios dealing with human activity rather than natural phenomena (e.g., information ranking, spam detection, malware detection, etc.). In such cases, learning algorithms may have to cope with manipulated data aimed at hampering decision making. Although some previous work addressed the issue of handling malicious data in the context of supervised learning, very little is known about the behavior of anomaly detection methods in such scenarios. In this contribution, we analyze the performance of a particular method, online centroid anomaly detection, in the presence of adversarial noise. Our analysis addresses the following security-related issues: formalization of learning and attack processes, derivation of an optimal attack, and analysis of attack efficiency and limitations. We derive bounds on the effectiveness of a poisoning attack against centroid anomaly detection under different conditions: attacker's full or limited control over the traffic and bounded false positive rate. Our bounds show that whereas a poisoning attack can be effectively staged in the unconstrained case, it can be made arbitrarily difficult (a strict upper bound on the attacker's gain) if external constraints are properly used. Our experimental evaluation, carried out on real traces of HTTP and exploit traffic, confirms the tightness of our theoretical bounds and the practicality of our protection mechanisms.
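The scheme the abstract analyzes can be sketched in a few lines. This is an assumed minimal form, not the paper's exact formulation: a running-mean centroid with a fixed radius threshold `r`, where accepted points update the centroid. The update rule is exactly what a poisoning attacker exploits, feeding points just inside the boundary to drift the center toward a target region.

```python
import numpy as np

class OnlineCentroid:
    """Minimal sketch of online centroid anomaly detection:
    a running mean of accepted points and a fixed radius threshold r."""

    def __init__(self, init_points, r):
        self.center = np.mean(init_points, axis=0)
        self.n = len(init_points)
        self.r = r

    def is_anomalous(self, x):
        # A point is anomalous if it falls outside the hypersphere of radius r.
        return np.linalg.norm(x - self.center) > self.r

    def update(self, x):
        # Only points inside the hypersphere update the centroid; a poisoning
        # attacker can submit boundary points to shift the center step by step.
        if not self.is_anomalous(x):
            self.n += 1
            self.center += (x - self.center) / self.n
            return True
        return False

detector = OnlineCentroid(np.zeros((2, 2)), r=1.0)
detector.update(np.array([0.9, 0.0]))   # accepted: drifts the centroid slightly
far = detector.is_anomalous(np.array([5.0, 5.0]))  # well outside the sphere
```

Each accepted boundary point moves the center by at most r/n, which hints at why bounding the update (an external constraint, in the paper's terms) limits the attacker's gain.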