Biblio

Filters: Keyword is noisy data
2021-03-29
Johanyák, Z. C.  2020.  Fuzzy Logic based Network Intrusion Detection Systems. 2020 IEEE 18th World Symposium on Applied Machine Intelligence and Informatics (SAMI). :15–16.

Plenary Talk. Our everyday life is more and more dependent on electronic communication and network connectivity. However, the threats of attacks and different types of misuse increase exponentially with the expansion of computer networks. In order to alleviate the problem and to identify malicious activities as early as possible, Network Intrusion Detection Systems (NIDSs) have been developed and intensively investigated, and several approaches have been proposed and applied so far for these systems. A common challenge in this field is that there are often no crisp boundaries between normal and abnormal network traffic, the data are noisy or inaccurate, and the investigated traffic could therefore represent both attack and normal communication. Fuzzy logic based solutions can be advantageous owing to their ability to assign membership levels in different classes and to combine those memberships in ways that reduce false positive and false negative classifications compared to other approaches. In this presentation, after a short introduction to NIDSs, a survey of typical fuzzy logic based solutions is given, followed by a detailed description of a fuzzy rule interpolation based IDS. The whole development process, i.e. the data preprocessing, feature extraction, and rule base generation steps, is covered as well.
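As a rough illustration of the membership-level idea mentioned in the abstract (and not of the fuzzy rule interpolation method described in the talk), the sketch below scores a traffic flow's membership in an "attack" class from two hypothetical features, using ramp-shaped fuzzy sets and min/max as fuzzy AND/OR. The feature names, breakpoints, and rules are invented for illustration only.

```python
# Illustrative only: fuzzy membership scoring of a traffic flow.
# The features, breakpoints, and rules below are hypothetical.

def ramp_up(x, a, b):
    """Membership in a 'high' fuzzy set: 0 below a, 1 above b, linear in between."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def attack_membership(conn_rate, failed_login_ratio):
    high_rate = ramp_up(conn_rate, 50.0, 300.0)          # connections per second
    high_fail = ramp_up(failed_login_ratio, 0.2, 0.8)    # share of failed logins
    rule1 = min(high_rate, high_fail)   # high rate AND high failure ratio -> attack
    rule2 = 0.5 * high_rate             # high rate alone -> weakly suspicious
    return max(rule1, rule2)            # aggregate rules with fuzzy OR

if __name__ == "__main__":
    for rate, fail in [(10, 0.05), (250, 0.10), (350, 0.90)]:
        print(f"rate={rate:>3}, fail={fail:.2f} -> attack membership "
              f"{attack_membership(rate, fail):.2f}")
```

A flow could then be flagged when its attack membership exceeds a tunable threshold, which is where the false positive/false negative trade-off mentioned in the abstract comes into play.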

2020-06-15
Biradar, Shivleela, Sasi, Smitha.  2018.  Design and Implementation of Secure and Encoded Data Transmission Using Turbo Codes. 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–7.
The general idea behind error detection and correction is to add extra bits to the original message, which the receiver can use to check the integrity of the delivered message and to recover data corrupted by noise. Turbo codes are a forward error correction method capable of approaching the channel capacity, operating close to the Shannon limit; in this work, encoding and decoding of text and images are performed. The methods and their operation are explained in this paper. Errors are also introduced, and their detection and correction are achieved. The transmission is made secure, protecting the information from theft.
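To make the redundancy idea concrete, here is a toy sketch of forward error correction using a 3x repetition code with majority-vote decoding. It only illustrates how adding extra bits lets a receiver detect and correct channel errors; it is far simpler than the turbo encoder/decoder used in the paper.

```python
# Toy forward error correction: 3x repetition code with majority-vote decoding.
# Much simpler than a turbo code, but shows how redundant bits let the receiver
# detect and correct errors introduced by a noisy channel.
import random

def encode(bits):
    return [b for b in bits for _ in range(3)]            # repeat every bit three times

def noisy_channel(bits, flip_prob=0.05):
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(coded):
    # Majority vote per 3-bit block corrects any single flipped bit in that block.
    return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

if __name__ == "__main__":
    random.seed(0)
    message = [random.randint(0, 1) for _ in range(64)]
    received = decode(noisy_channel(encode(message)))
    print("residual bit errors:", sum(m != r for m, r in zip(message, received)))
```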
2019-03-06
Hess, S., Satam, P., Ditzler, G., Hariri, S.  2018.  Malicious HTML File Prediction: A Detection and Classification Perspective with Noisy Data. 2018 IEEE/ACS 15th International Conference on Computer Systems and Applications (AICCSA). :1–7.

Cybersecurity plays a critical role in protecting sensitive information and the structural integrity of networked systems. As networked systems continue to expand in number as well as in complexity, so does the threat of malicious activity and the necessity for advanced cybersecurity solutions. Furthermore, both the quantity and quality of available data on malicious content, as well as the fact that malicious activity continuously evolves, make automated protection systems for this type of environment particularly challenging. Not only is the data quality a concern, but the volume of the data can be quite small for some of the classes. This creates a class imbalance in the data used to train a classifier; however, many classifiers are not well equipped to deal with class imbalance. One such example is detecting malicious HTML files from static features. Unfortunately, collecting malicious HTML files is extremely difficult, and the resulting data can be quite noisy because HTML files are sometimes mislabeled. This paper evaluates a specific application that is afflicted by these modern cybersecurity challenges: detection of malicious HTML files. Previous work presented a general framework for malicious HTML file classification, which we modify in this work to use a chi-squared (χ²) feature selection technique and the synthetic minority oversampling technique (SMOTE). We experiment with different classifiers (i.e., AdaBoost, GentleBoost, RobustBoost, RUSBoost, and Random Forest) and a pure detection model (i.e., Isolation Forest). We benchmark the different classifiers using SMOTE on a real dataset that contains a limited number of malicious files (40) relative to the normal files (7,263). The modified framework was found to perform better than the previous framework. However, additional evidence implies that algorithms which train on both the normal and malicious samples are likely overtraining to the malicious distribution. We demonstrate the likely overtraining by determining that a subset of the malicious files, while suspicious, did not come from a malicious source.
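A minimal sketch of the imbalance-handling pipeline the abstract describes, chi-squared feature selection followed by SMOTE and a Random Forest, using scikit-learn and imbalanced-learn on synthetic data. The feature counts, class ratio, and hyperparameters are placeholders, not the paper's HTML feature set or settings.

```python
# Sketch only: chi-squared selection + SMOTE + Random Forest on synthetic,
# heavily imbalanced data standing in for static HTML features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from imblearn.over_sampling import SMOTE

# Imbalanced toy data: roughly 2% "malicious" samples.
X, y = make_classification(n_samples=4000, n_features=40, n_informative=10,
                           weights=[0.98, 0.02], random_state=0)
X = MinMaxScaler().fit_transform(X)          # chi2 requires non-negative features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

selector = SelectKBest(chi2, k=15).fit(X_tr, y_tr)   # keep 15 highest-scoring features
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

# Oversample only the training split so the test set keeps its natural imbalance.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr_sel, y_tr)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te_sel), digits=3))
```

Note that oversampling is applied after the train/test split; applying SMOTE before splitting would leak synthetic copies of minority samples into the test set and inflate the reported scores.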

2017-05-16
Chen, Di, Zhang, Qin.  2016.  Streaming Algorithms for Robust Distinct Elements. Proceedings of the 2016 International Conference on Management of Data. :1433–1447.

We study the problem of estimating distinct elements in the data stream model, which has a central role in traffic monitoring, query optimization, data mining, and data integration. Different from all previous work, we study the problem in the noisy data setting, where two different-looking items in the stream may reference the same entity (as determined by a distance function and a threshold value), and the goal is to estimate the number of distinct entities in the stream. In this paper, we formalize the problem of robust distinct elements and develop space- and time-efficient streaming algorithms for datasets in Euclidean space, using a novel technique we call bucket sampling. We also extend our algorithmic framework to other metric spaces by establishing a connection between bucket sampling and the theory of locality sensitive hashing. Moreover, we formally prove that our algorithms are still effective under small distinct-elements ambiguity. Our experiments demonstrate the practicality of our algorithms.
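As a toy illustration of the robust distinct elements setting (not the paper's bucket sampling algorithm), the sketch below snaps each streamed 2-D point to a grid cell of side r, a crude locality-sensitive bucketing, and counts distinct occupied cells. Entities straddling a cell border can be split across cells, which is exactly the kind of ambiguity the paper's technique handles more carefully.

```python
# Toy version of the robust distinct elements setting (not the paper's
# bucket sampling algorithm): points within distance ~r are treated as the
# same entity by snapping them to a grid of cell width r and counting cells.
import math
import random

def robust_distinct(stream, r):
    cells = set()
    for x, y in stream:                                   # single pass over the stream
        cells.add((math.floor(x / r), math.floor(y / r)))
    return len(cells)

if __name__ == "__main__":
    random.seed(1)
    # Three well-separated entities, each observed 100 times with small noise.
    centers = [(1.0, 1.0), (11.0, 11.0), (25.0, 5.0)]
    stream = [(cx + random.uniform(-0.3, 0.3), cy + random.uniform(-0.3, 0.3))
              for cx, cy in centers for _ in range(100)]
    random.shuffle(stream)
    print("estimated distinct entities:", robust_distinct(stream, r=2.0))  # 3
```

Keeping only the set of occupied cells already makes the count insensitive to duplicates; one natural next step toward the streaming space bounds the paper targets would be to replace the exact set with a classical distinct-count sketch over the cell identifiers.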