Biblio

Filters: Keyword is false positive classification
2021-03-29
Johanyák, Z. C.  2020.  Fuzzy Logic based Network Intrusion Detection Systems. 2020 IEEE 18th World Symposium on Applied Machine Intelligence and Informatics (SAMI). :15–16.

Plenary Talk. Our everyday life is more and more dependent on electronic communication and network connectivity. However, the threats of attacks and different types of misuse increase exponentially with the expansion of computer networks. In order to alleviate the problem and to identify malicious activities as early as possible, Network Intrusion Detection Systems (NIDSs) have been developed and intensively investigated, and several approaches have been proposed and applied for them so far. A common challenge in this field is that there are often no crisp boundaries between normal and abnormal network traffic, the data may be noisy or inaccurate, and the investigated traffic could therefore represent both attack and normal communication. Fuzzy logic based solutions can be advantageous owing to their capability to define membership levels in different classes and to combine the resulting degrees in ways that reduce false positive and false negative classifications compared to other approaches. In this presentation, after a short introduction to NIDSs, typical fuzzy logic based solutions are surveyed, followed by a detailed description of a fuzzy rule interpolation based IDS. The whole development process, i.e., the data preprocessing, feature extraction, and rule base generation steps, is covered as well.
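
To make the membership idea concrete, here is a minimal sketch: triangular membership functions over a single traffic feature and two toy rules that yield membership degrees in both the "normal" and "attack" classes instead of a crisp label. The feature (connections per second), the breakpoints, and the rules are illustrative assumptions, not taken from the talk.

```python
def trimf(x, a, b, c):
    """Triangular membership function: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzy sets over one traffic feature: connections per second.
# All breakpoints below are illustrative, not from the talk.
def conn_rate_low(x):    return trimf(x, -1.0, 0.0, 80.0)
def conn_rate_medium(x): return trimf(x, 40.0, 120.0, 250.0)
def conn_rate_high(x):   return trimf(x, 150.0, 500.0, 10_000.0)

def classify(conn_rate):
    """Return membership degrees in 'normal' and 'attack' rather than a crisp label."""
    low = conn_rate_low(conn_rate)
    med = conn_rate_medium(conn_rate)
    high = conn_rate_high(conn_rate)
    # Two toy rules: IF rate is low THEN normal; IF rate is high THEN attack.
    # A medium rate contributes partially to both, reflecting the fuzzy boundary.
    normal = max(low, 0.5 * med)
    attack = max(high, 0.5 * med)
    return {"normal": normal, "attack": attack}

print(classify(60.0))   # borderline flow: partial membership in both classes
print(classify(400.0))  # predominantly 'attack'
```

Because the output is a pair of degrees rather than a hard decision, a borderline flow can be flagged for closer inspection instead of being forced into one class, which is the property the abstract credits with reducing false positive and false negative classifications.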

2018-07-06
Mozaffari-Kermani, M., Sur-Kolay, S., Raghunathan, A., Jha, N. K.  2015.  Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare. IEEE Journal of Biomedical and Health Informatics. 19:1893–1905.

Machine learning is being used in a wide range of application domains to discover patterns in large datasets. Increasingly, the results of machine learning drive critical decisions in applications related to healthcare and biomedicine. Such health-related applications are often sensitive, and thus any security breach would be catastrophic. Naturally, the integrity of the results computed by machine learning is of great importance. Recent research has shown that some machine-learning algorithms can be compromised by augmenting their training datasets with malicious data, leading to a new class of attacks called poisoning attacks. Hindering a diagnosis may have life-threatening consequences and could cause distrust in the system. Conversely, a false diagnosis may not only prompt users to distrust the machine-learning algorithm and even abandon the entire system, but such a false positive classification may also cause patient distress. In this paper, we present a systematic, algorithm-independent approach for mounting poisoning attacks across a wide range of machine-learning algorithms and healthcare datasets. The proposed attack procedure generates input data which, when added to the training set, can either cause the results of machine learning to have targeted errors (e.g., increase the likelihood of classification into a specific class) or simply introduce arbitrary errors (incorrect classification). These attacks may be applied to both fixed and evolving datasets. They can be applied even when only statistics of the training dataset are available or, in some cases, even without access to the training dataset, although at a lower efficacy. We establish the effectiveness of the proposed attacks using a suite of six machine-learning algorithms and five healthcare datasets. Finally, we present countermeasures against the proposed generic attacks that are based on tracking and detecting deviations in various accuracy metrics, and we benchmark their effectiveness.
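
As a rough illustration of the two sides described in the abstract, the sketch below substitutes a simpler stand-in attack (random label flipping on a synthetic dataset) for the paper's input-generating procedure, and pairs it with a defense in the spirit of the proposed countermeasure: retraining and flagging large deviations in a held-out accuracy metric. The dataset, model, and threshold are assumptions made for illustration, not values from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a healthcare dataset (the paper uses five real ones).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)

def poison_labels(y, fraction, rng):
    """Toy poisoning: flip the labels of a random fraction of training points.
    The paper's attack instead *generates* new malicious inputs; this simpler
    label-flipping variant just illustrates the effect on accuracy."""
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

# Baseline accuracy on clean data, used as the reference for the defense.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
baseline_acc = accuracy_score(y_val, clean_model.predict(X_val))

# Defense in the spirit of the paper's countermeasure: retrain as the
# training set evolves and flag a large deviation in a held-out metric.
THRESHOLD = 0.05  # illustrative tolerance, not a value from the paper
for fraction in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000).fit(
        X_train, poison_labels(y_train, fraction, rng))
    acc = accuracy_score(y_val, model.predict(X_val))
    flagged = baseline_acc - acc > THRESHOLD
    print(f"poisoned fraction={fraction:.1f}  val accuracy={acc:.3f}  "
          f"deviation flagged={flagged}")
```

Monitoring a held-out metric catches indiscriminate degradation, but, as the abstract's distinction between targeted and arbitrary errors suggests, a targeted attack that shifts only one class may require tracking several accuracy metrics (e.g., per-class rates) rather than a single aggregate score.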