Biblio

Filters: Keyword is machine learning approaches
2021-03-29
Moti, Z., Hashemi, S., Jahromi, A. N. 2020. A Deep Learning-based Malware Hunting Technique to Handle Imbalanced Data. 2020 17th International ISC Conference on Information Security and Cryptology (ISCISC). :48–53.
With the increasing use of computers and the Internet, more people are exposed to cyber-security threats. According to antivirus companies, malware is one of the most common threats of using the Internet, so providing a practical solution is critical. Current methods use machine learning approaches to classify malware samples automatically. Despite their success, the accuracy and efficiency of these techniques are still inadequate, especially for multi-class classification problems and imbalanced training data sets. To mitigate this problem, we use deep learning-based algorithms both to classify malware and to generate new malware samples. Our model is based on opcode sequences, which are given to the model without any pre-processing. In addition, we use a novel generative adversarial network to generate new opcode sequences for oversampling minority classes, and we propose a model that combines a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) to classify malware samples: the CNN captures short-term dependencies between features, while the LSTM captures longer-term dependencies. The experimental results show that our method classifies malware into their corresponding families effectively, achieving 98.99% validation accuracy.
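A minimal sketch of the kind of CNN + LSTM opcode classifier the abstract describes, assuming TensorFlow/Keras; the opcode vocabulary size, sequence length, layer widths, and number of families are illustrative placeholders, not values from the paper:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 256     # assumed number of distinct opcodes
SEQ_LEN = 1000       # assumed fixed opcode-sequence length
NUM_FAMILIES = 9     # assumed number of malware families

model = tf.keras.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB_SIZE, 64),
    # the convolution captures short-term dependencies between nearby opcodes
    layers.Conv1D(128, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # the LSTM captures longer-term dependencies across the sequence
    layers.LSTM(128),
    layers.Dense(NUM_FAMILIES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# random stand-ins for (possibly GAN-oversampled) opcode sequences and labels
x = np.random.randint(0, VOCAB_SIZE, size=(64, SEQ_LEN))
y = np.random.randint(0, NUM_FAMILIES, size=(64,))
model.fit(x, y, epochs=1, batch_size=16, validation_split=0.25)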
2021-03-09
Hossain, M. D., Ochiai, H., Doudou, F., Kadobayashi, Y. 2020. SSH and FTP brute-force Attacks Detection in Computer Networks: LSTM and Machine Learning Approaches. 2020 5th International Conference on Computer and Communication Systems (ICCCS). :491–497.

Network traffic anomaly detection is of critical importance in cybersecurity due to the massive and rapid growth of sophisticated computer network attacks. Indeed, the more new Internet-related technologies are created, the more elaborate the attacks become. Among contemporary high-level attacks, dictionary-based brute-force attacks (BFA) present one of the most difficult challenges to overcome, and effective methods are needed to detect and mitigate them in real time. In this paper, we investigate SSH and FTP brute-force attack detection using the Long Short-Term Memory (LSTM) deep learning approach. We also applied machine learning (ML) classifiers for additional detection: J48, naive Bayes (NB), decision table (DT), random forest (RF), and k-nearest-neighbor (k-NN). Using the well-known labelled CICIDS2017 dataset, we evaluated the effectiveness of the LSTM and ML algorithms and compared their performance. Our results show that the LSTM model outperforms the ML algorithms, with an accuracy of 99.88%.
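A rough sketch of the comparison the abstract describes, assuming TensorFlow/Keras and scikit-learn: an LSTM binary detector over flow features next to two of the classical ML baselines (RF and k-NN). The feature values here are synthetic placeholders; the paper uses labelled CICIDS2017 flows.

import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20)).astype("float32")    # 20 flow features (placeholder)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype("int32")    # synthetic benign/attack label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# LSTM over each flow's feature vector treated as a length-1 sequence
lstm = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1, X.shape[1])),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
lstm.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
lstm.fit(X_tr[:, None, :], y_tr, epochs=3, batch_size=64, verbose=0)
print("LSTM accuracy:", lstm.evaluate(X_te[:, None, :], y_te, verbose=0)[1])

# two of the classical ML baselines named in the abstract (RF and k-NN)
for name, clf in [("RF", RandomForestClassifier(n_estimators=100, random_state=0)),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))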

2021-03-01
Davis, B., Glenski, M., Sealy, W., Arendt, D. 2020. Measure Utility, Gain Trust: Practical Advice for XAI Researchers. 2020 IEEE Workshop on TRust and EXpertise in Visual Analytics (TREX). :1–8.
Research into the explanation of machine learning models, i.e., explainable AI (XAI), has seen a commensurate exponential growth alongside deep artificial neural networks throughout the past decade. For historical reasons, explanation and trust have been intertwined. However, the focus on trust is too narrow, and has led the research community astray from tried and true empirical methods that produced more defensible scientific knowledge about people and explanations. To address this, we contribute a practical path forward for researchers in the XAI field. We recommend researchers focus on the utility of machine learning explanations instead of trust. We outline five broad use cases where explanations are useful and, for each, we describe pseudo-experiments that rely on objective empirical measurements and falsifiable hypotheses. We believe that this experimental rigor is necessary to contribute to scientific knowledge in the field of XAI.
2020-08-17
Paudel, Ramesh, Muncy, Timothy, Eberle, William. 2019. Detecting DoS Attack in Smart Home IoT Devices Using a Graph-Based Approach. 2019 IEEE International Conference on Big Data (Big Data). :5249–5258.
The use of Internet of Things (IoT) devices has surged in recent years. However, due to the lack of substantial security, IoT devices are vulnerable to cyber-attacks such as Denial-of-Service (DoS) attacks. Most current security solutions are either computationally expensive or unscalable, as they require known attack signatures or full packet inspection. In this paper, we introduce a novel Graph-based Outlier Detection in Internet of Things (GODIT) approach that (i) represents smart-home IoT traffic as a real-time graph stream, (ii) efficiently processes graph data, and (iii) detects DoS attacks in real time. Experimental results on real-world data collected from an IoT-equipped smart home show that GODIT is more effective than traditional machine learning approaches and outperforms current graph-stream anomaly detection approaches.
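The GODIT algorithm itself is not detailed in the abstract; the generic sketch below only illustrates the graph-stream idea, building one graph per time window from (src, dst) pairs and flagging a device whose degree jumps far above its running history, as a DoS burst would. Field names and thresholds are assumptions.

from collections import defaultdict, deque
import networkx as nx

history = defaultdict(lambda: deque(maxlen=20))   # per-device degree history

def window_outliers(packets, z_threshold=3.0):
    """packets: iterable of (src, dst) pairs seen in one time window."""
    g = nx.DiGraph()
    g.add_edges_from(packets)
    outliers = []
    for node in g.nodes:
        deg = g.degree(node)
        past = history[node]
        if len(past) >= 5:
            mean = sum(past) / len(past)
            var = sum((d - mean) ** 2 for d in past) / len(past)
            std = max(var ** 0.5, 1e-6)
            if (deg - mean) / std > z_threshold:
                outliers.append(node)
        past.append(deg)
    return outliers

# Example: a sudden flood of connections from one device looks anomalous
normal = [("cam", "hub"), ("bulb", "hub")]
for _ in range(10):
    window_outliers(normal)               # warm up the degree history
flood = normal + [("cam", f"victim{i}") for i in range(50)]
print(window_outliers(flood))             # -> ['cam']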
2019-06-10
Kalash, M., Rochan, M., Mohammed, N., Bruce, N. D. B., Wang, Y., Iqbal, F. 2018. Malware Classification with Deep Convolutional Neural Networks. 2018 9th IFIP International Conference on New Technologies, Mobility and Security (NTMS). :1–5.

In this paper, we propose a deep learning framework for malware classification. There has been a huge increase in the volume of malware in recent years, which poses a serious security threat to financial institutions, businesses, and individuals. To combat the proliferation of malware, new strategies are essential for quickly identifying and classifying malware samples so that their behavior can be analyzed. Machine learning approaches are becoming popular for classifying malware; however, most existing machine learning methods for malware classification use shallow learning algorithms (e.g., SVM). Recently, Convolutional Neural Networks (CNNs), a deep learning approach, have shown superior performance compared to traditional learning algorithms, especially in tasks such as image classification. Motivated by this success, we propose a CNN-based architecture to classify malware samples. We convert malware binaries to grayscale images and subsequently train a CNN for classification. Experiments on two challenging malware classification datasets, Malimg and Microsoft malware, demonstrate that our method outperforms the state of the art, achieving 98.52% and 99.97% accuracy on the Malimg and Microsoft datasets, respectively.
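A minimal sketch of the binary-to-grayscale-image step and a small CNN classifier, assuming NumPy and TensorFlow/Keras; the fixed image size and network shape are illustrative choices, not the paper's configuration:

import numpy as np
import tensorflow as tf

def bytes_to_grayscale(raw, width=256, height=256):
    """Interpret a malware binary's raw bytes as a fixed-size grayscale image."""
    data = np.frombuffer(raw, dtype=np.uint8)
    data = data[: width * height]                         # truncate long files
    data = np.pad(data, (0, width * height - data.size))  # zero-pad short ones
    return data.reshape(height, width, 1).astype("float32") / 255.0

# e.g. image = bytes_to_grayscale(open("sample.bin", "rb").read())
image = bytes_to_grayscale(np.random.bytes(300_000))      # random stand-in bytes

NUM_CLASSES = 25   # e.g. Malimg has 25 families
cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(256, 256, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
# cnn.fit(images, family_labels, ...) would then train on images built from the datasets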

Nathezhtha, T., Yaidehi, V. 2018. Cloud Insider Attack Detection Using Machine Learning. 2018 International Conference on Recent Trends in Advance Computing (ICRTAC). :60–65.

Security has always been a major issue in the cloud. Data is the most valuable and most vulnerable asset, and the one attackers aim to steal; if data is lost, the privacy and security of every cloud user are compromised. Even though a cloud network is secured externally, the threat of an internal attacker remains. Internal attackers compromise a vulnerable user node to gain access to the system; connected to the cloud network internally, they launch attacks while pretending to be trusted users. Machine learning approaches are widely used for cloud security issues, but existing machine-learning-based security approaches classify a node as misbehaving based on short-term behavioral data and do not differentiate whether a misbehaving node is malicious or merely broken. To address this problem, this paper proposes an Improvised Long Short-Term Memory (ILSTM) model which learns a user's behavior, trains itself automatically, and stores the behavioral data. The model can easily classify user behavior as normal or abnormal. The proposed ILSTM not only identifies an anomalous node but also determines, using a calculated trust factor, whether a misbehaving node is a broken node, a new user node, or a compromised node. The proposed model not only detects attacks accurately but also reduces false alarms in the cloud network.
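Neither the ILSTM architecture nor the trust-factor formula is given in the abstract, so the sketch below is purely illustrative: an LSTM scores a user's recent action sequence as normal or abnormal, and a hypothetical exponential moving average of the "normal" scores stands in for the trust factor (Keras assumed).

import numpy as np
import tensorflow as tf

SEQ_LEN, NUM_ACTIONS = 50, 32     # assumed window length and action vocabulary

behavior_model = tf.keras.Sequential([
    tf.keras.layers.Embedding(NUM_ACTIONS, 16),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(abnormal)
])
# behavior_model.fit(...) would be trained on labelled user-action windows

def update_trust(trust, p_abnormal, alpha=0.1):
    """Hypothetical trust factor: exponential moving average of 'normal' scores."""
    return (1 - alpha) * trust + alpha * (1.0 - p_abnormal)

# Example: score one user's action window and update that user's trust
actions = np.random.randint(0, NUM_ACTIONS, size=(1, SEQ_LEN))
p_abnormal = float(behavior_model(actions, training=False).numpy()[0, 0])
trust = update_trust(trust=0.9, p_abnormal=p_abnormal)
print(p_abnormal, trust)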

2017-03-07
Kannao, Raghvendra, Guha, Prithwijit. 2016. Generic TV Advertisement Detection Using Progressively Balanced Perceptron Trees. Proceedings of the Tenth Indian Conference on Computer Vision, Graphics and Image Processing. :8:1–8:8.

Automatic detection of TV advertisements is of paramount importance for various media monitoring agencies. Existing works in this domain have mostly focused on news channels using news-specific features, and most commercial products use near-copy detection algorithms instead of generic advertisement classification. A generic detector needs to handle the inter-class and intra-class imbalances present in the data due to variability in content aired across channels and frequent repetition of advertisements. These imbalances bias classifiers towards one of the classes and thus require special treatment. We propose a tree of perceptrons to solve this problem. The training data available at each perceptron node is balanced using cluster-based over-sampling and Tomek-link cleaning as we traverse the tree downwards; the trained perceptron node then passes the original unbalanced data to its children, and this process is repeated recursively until we reach the leaf nodes. We call this new algorithm the "Progressively Balanced Perceptron Tree". We have also contributed a TV advertisement dataset consisting of 250 hours of video recorded from five non-news TV channels of different genres. Experiments on this dataset show that the proposed approach achieves superior and more balanced performance compared to six baseline methods, generalizes well across channels and training-data sizes, and achieves a top F1-score of 97% in detecting advertisements.
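A sketch of the per-node balancing step only, not the full perceptron tree: over-sample the minority class and clean Tomek links before training a perceptron node. Plain SMOTE from imbalanced-learn stands in here for the paper's cluster-based over-sampler (KMeansSMOTE would be the closer analogue); all names and data are illustrative.

from sklearn.datasets import make_classification
from sklearn.linear_model import Perceptron
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import TomekLinks

def train_balanced_node(X, y, random_state=0):
    # balance only the data used to fit this node
    X_bal, y_bal = SMOTE(random_state=random_state).fit_resample(X, y)
    X_bal, y_bal = TomekLinks().fit_resample(X_bal, y_bal)
    node = Perceptron(random_state=random_state).fit(X_bal, y_bal)
    # the trained node routes the *original* unbalanced data to its children,
    # which would repeat the same balancing step recursively
    split = node.predict(X) == 1
    return node, (X[split], y[split]), (X[~split], y[~split])

# toy stand-in for ad/non-ad features with roughly 10% advertisements
X, y = make_classification(n_samples=500, n_features=8, weights=[0.9, 0.1],
                           random_state=0)
node, ads_branch, other_branch = train_balanced_node(X, y)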