Biblio

Found 765 results

Filters: Keyword is Training
2020-04-10
Baral, Gitanjali, Arachchilage, Nalin Asanka Gamagedara.  2019.  Building Confidence not to be Phished Through a Gamified Approach: Conceptualising User's Self-Efficacy in Phishing Threat Avoidance Behaviour. 2019 Cybersecurity and Cyberforensics Conference (CCC). :102—110.

Phishing attacks are prevalent, and humans are central to this online identity theft, which aims to steal victims' sensitive and personal information such as usernames, passwords, and online banking details. Many anti-phishing tools have been developed to thwart phishing attacks. Since humans are the weakest link in phishing, it is important to educate them to detect and avoid phishing attacks. One can argue that self-efficacy, which is correlated with knowledge, is one of the most important determinants of an individual's motivation in phishing threat avoidance behaviour. The proposed research focuses on the user's self-efficacy in order to enhance the individual's phishing threat avoidance behaviour through motivation. Using social cognitive theory, we explore how various knowledge attributes such as observational (vicarious) knowledge, heuristic knowledge, and structural knowledge contribute to the individual's self-efficacy to enhance phishing threat prevention behaviour. A theoretical framework is then developed depicting the mechanism that links knowledge attributes, self-efficacy, and threat avoidance motivation to users' threat avoidance behaviour. Finally, a gaming prototype is designed incorporating the knowledge elements identified in this research, aimed at enhancing the individual's self-efficacy in phishing threat avoidance behaviour.

2020-04-06
Chen, Chia-Mei, Wang, Shi-Hao, Wen, Dan-Wei, Lai, Gu-Hsin, Sun, Ming-Kung.  2019.  Applying Convolutional Neural Network for Malware Detection. 2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST). :1—5.

Failure to detect malware at its very inception leaves room for it to pose a significant threat and cost to cyber security, not only for individuals and organizations but also for society and the nation. However, the rapid growth in the volume and diversity of malware renders conventional detection techniques that rely on feature extraction and comparison insufficient, making it very difficult for well-trained network administrators to identify malware, let alone regular internet users. The challenge in malware detection is exacerbated because malware types and structures have grown dramatically more complex in recent years, spanning source code, binary files, shell scripts, Perl scripts, instructions, settings, and more. Such increased complexity raises the risk of misjudgment. In order to increase malware detection efficiency and accuracy given the large volume and many types of malware, this research adopts Convolutional Neural Networks (CNN), one of the most successful deep learning techniques. The experiments show an accuracy rate of over 90% in identifying malicious and benign code. They also show that CNN is effective in detecting both source code and binary code and can further identify malware embedded in benign code, leaving malware no place to hide. This research proposes a feasible solution for network administrators to efficiently identify malware at its inception in today's hostile network environment, so that information technology personnel can take protective action in a timely manner and prepare for potential follow-up cyber-attacks.
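
The abstract above describes feeding code samples to a CNN but gives no implementation details. As a rough, hypothetical illustration of one common byte-to-image CNN setup (not the authors' architecture; the file name, image size, and layer sizes are assumptions):

```python
# Minimal sketch of a byte-image CNN malware classifier (assumed design,
# not the paper's implementation): raw bytes -> 64x64 grayscale -> small CNN.
import numpy as np
import torch
import torch.nn as nn

def bytes_to_image(raw: bytes, side: int = 64) -> torch.Tensor:
    """Pad/truncate raw bytes and reshape them into a 1 x side x side image."""
    buf = np.frombuffer(raw, dtype=np.uint8)[: side * side]
    buf = np.pad(buf, (0, side * side - len(buf)))
    img = buf.reshape(side, side).astype(np.float32) / 255.0
    return torch.from_numpy(img).unsqueeze(0)           # shape: (1, side, side)

class MalwareCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)     # benign vs. malicious

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = MalwareCNN()
# "suspect.bin" is a hypothetical path; train the model with CrossEntropyLoss.
sample = bytes_to_image(open("suspect.bin", "rb").read()).unsqueeze(0)  # batch of 1
print(model(sample).softmax(dim=1))
```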

2020-04-03
Song, Liwei, Shokri, Reza, Mittal, Prateek.  2019.  Membership Inference Attacks Against Adversarially Robust Deep Learning Models. 2019 IEEE Security and Privacy Workshops (SPW). :50—56.
In recent years, the research community has increasingly focused on understanding the security and privacy challenges posed by deep learning models. However, the security domain and the privacy domain have typically been considered separately. It is thus unclear whether the defense methods in one domain will have any unexpected impact on the other. In this paper, we take a step towards enhancing our understanding of deep learning models when the two domains are combined. We do this by measuring the success of membership inference attacks against two state-of-the-art adversarial defense methods that mitigate evasion attacks: adversarial training and provable defense. On the one hand, membership inference attacks aim to infer an individual's participation in the target model's training dataset and are known to be correlated with the target model's overfitting. On the other hand, adversarial defense methods aim to enhance the robustness of target models by ensuring that model predictions are unchanged in a small area around each sample in the training dataset. Intuitively, adversarial defenses may rely more on the training dataset and be more vulnerable to membership inference attacks. By performing empirical membership inference attacks on both adversarially robust models and corresponding undefended models, we find that the adversarial training method is indeed more susceptible to membership inference attacks, and that the privacy leakage is directly correlated with model robustness. We also find that the provable defense approach does not lead to enhanced success of membership inference attacks; however, this is achieved by significantly sacrificing the accuracy of the model on benign data points, indicating that privacy, security, and prediction accuracy are not jointly achieved in these two approaches.
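
For readers unfamiliar with the attack family being measured, the following is a minimal sketch of a standard confidence-thresholding membership-inference baseline, not the specific attack instantiation used in the paper; the threshold value and feature choice are assumptions:

```python
# Confidence-thresholding membership inference baseline (illustrative only).
import numpy as np

def membership_guess(probs: np.ndarray, labels: np.ndarray, tau: float = 0.9):
    """probs: (N, C) softmax outputs of the target model; labels: (N,) true labels.
    Returns a boolean guess per sample: True = "was in the training set"."""
    confidence_in_true_label = probs[np.arange(len(labels)), labels]
    return confidence_in_true_label >= tau

def attack_advantage(guess_on_members, guess_on_nonmembers):
    """Gap over random guessing, given guesses on known members/non-members."""
    tpr = guess_on_members.mean()        # members correctly flagged
    fpr = guess_on_nonmembers.mean()     # non-members wrongly flagged
    return tpr - fpr
```
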
Sattar, Naw Safrin, Arifuzzaman, Shaikh, Zibran, Minhaz F., Sakib, Md Mohiuddin.  2019.  An Ensemble Approach for Suspicious Traffic Detection from High Recall Network Alerts. 2019 IEEE International Conference on Big Data (Big Data). :4299—4308.
Web services from large-scale systems are prevalent all over the world. However, these systems are naturally vulnerable and prone to intrusion by adversaries seeking illegal benefits. To detect anomalous events, previous works focus on inspecting raw system logs by identifying outliers in workflows or by relying on machine learning methods. Though those works successfully identify anomalies, their models require large training sets and must process whole system logs. To reduce the quantity of logs that need to be processed, high-recall suspicious network alert systems can be applied to preprocess system logs, so that only the logs that trigger alerts are retrieved for further use. Given the universal use of network traffic alerts in Security Operations Centers, the anomaly detection problem can be transformed into classifying truly suspicious network traffic alerts from false alerts. In this work, we propose an ensemble model to distinguish truly suspicious alerts from false alerts. Our model consists of two sub-models with different feature extraction strategies to ensure diversity and generalization. We use decision-tree-based boosters and deep neural networks to build ensemble models for classification. Finally, we evaluate our approach on the suspicious network alerts dataset provided by the 2019 IEEE BigData Cup: Suspicious Network Event Recognition. Under the AUC metric, our model achieves 0.9068 on the whole testing set.
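
The abstract names the two sub-model families (decision-tree boosters and deep neural networks) but not their configuration. A minimal sketch in that spirit, with assumed scikit-learn estimators and hyperparameters, could look like this:

```python
# Minimal two-model ensemble for true/false alert classification (illustrative).
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier

def train_ensemble(X_tree, X_nn, y):
    """X_tree / X_nn: two different feature views of the same alerts (for diversity)."""
    gbm = GradientBoostingClassifier().fit(X_tree, y)
    mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X_nn, y)
    return gbm, mlp

def predict_suspicious(gbm, mlp, X_tree, X_nn):
    """Average the two models' probabilities that an alert is truly suspicious."""
    p = 0.5 * gbm.predict_proba(X_tree)[:, 1] + 0.5 * mlp.predict_proba(X_nn)[:, 1]
    return p   # score in [0, 1]; threshold or rank as needed
```
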
2020-03-23
Qin, Peng, Tan, Cheng, Zhao, Lei, Cheng, Yueqiang.  2019.  Defending against ROP Attacks with Nearly Zero Overhead. 2019 IEEE Global Communications Conference (GLOBECOM). :1–6.
Return-Oriented Programming (ROP) is a sophisticated exploitation technique that is able to drive target applications to perform arbitrary unintended operations by constructing a gadget chain that reuses existing small code sequences (gadgets) collected across the entire code space. In this paper, we propose to address ROP attacks from a different angle: shrinking the available code space at runtime. We present ROPStarvation, a generic and transparent ROP countermeasure that defends against all types of ROP attacks with almost zero run-time overhead. ROPStarvation does not aim to completely stop ROP attacks; instead, it attempts to significantly raise the bar by decreasing the possibility of launching a successful ROP exploit in reality. Moreover, shrinking the available code space at runtime is lightweight, which makes ROPStarvation practical to deploy under high performance requirements. Results show that ROPStarvation successfully reduces the code space of target applications by 85%. With the reduced code segments, ROPStarvation decreases the probability of building a valid ROP gadget chain by 100% and 83%, respectively, depending on whether the adversary knows that the vulnerable applications are protected by ROPStarvation. Evaluations on the SPEC CPU2006 benchmark show that ROPStarvation introduces nearly zero (0.2% on average) run-time performance overhead.
Xu, Yilin, Ge, Weimin, Li, Xiaohong, Feng, Zhiyong, Xie, Xiaofei, Bai, Yude.  2019.  A Co-Occurrence Recommendation Model of Software Security Requirement. 2019 International Symposium on Theoretical Aspects of Software Engineering (TASE). :41–48.
To guarantee the quality of software, specifying security requirements (SRs) is essential when developing systems, especially security-critical software systems. However, using security threats to determine detailed SRs is quite difficult under the Common Criteria (CC), which are too confusing and technical for non-security specialists. In this paper, we propose a Co-occurrence Recommendation Model (CoRM) to automatically recommend software SRs. In this model, the security threats of a product are extracted from the software's security target documents, in which the related security requirements are tagged. In order to establish relationships between software security threats and security requirements, semantic similarities between different security threats are calculated with the Skip-Thoughts model. To evaluate our CoRM model, over 1000 security target documents covering 9 types of software products are used. The results suggest that building a CoRM model via semantic similarity is feasible and reliable.
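
The recommendation step rests on sentence-level semantic similarity. As a hypothetical sketch of that idea using cosine similarity over precomputed sentence embeddings (the Skip-Thoughts encoder is assumed to be available separately and is not shown; `threat_to_srs` and `top_k` are illustrative names):

```python
# Recommend SRs for a new threat by cosine similarity to known, tagged threats.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def recommend_srs(new_threat_vec, known_threat_vecs, threat_to_srs, top_k=3):
    """known_threat_vecs: list of (threat_id, embedding) pairs;
    threat_to_srs: dict mapping threat_id -> list of tagged security requirements."""
    ranked = sorted(known_threat_vecs,
                    key=lambda kv: cosine(new_threat_vec, kv[1]), reverse=True)
    recommendations = []
    for threat_id, _ in ranked[:top_k]:
        recommendations.extend(threat_to_srs.get(threat_id, []))
    return recommendations
```
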
2020-03-12
Salmani, Hassan, Hoque, Tamzidul, Bhunia, Swarup, Yasin, Muhammad, Rajendran, Jeyavijayan JV, Karimi, Naghmeh.  2019.  Special Session: Countering IP Security Threats in Supply Chain. 2019 IEEE 37th VLSI Test Symposium (VTS). :1–9.

The continuing decrease in the feature size of integrated circuits, together with the increasing complexity and cost of design and fabrication, has led to outsourcing the design and fabrication of integrated circuits to third parties across the globe, which in turn has introduced several security vulnerabilities. Adversaries in the supply chain can pirate integrated circuits, overproduce these circuits, perform reverse engineering, and/or insert hardware Trojans into these circuits. Developing countermeasures against such security threats is crucial. Accordingly, this paper first develops a learning-based trust verification framework to detect hardware Trojans. To tackle Trojan insertion, IP piracy, and overproduction, logic locking schemes, and in particular stripped-functionality logic locking, are discussed and their resiliency against state-of-the-art attacks is investigated.

Park, Sean, Gondal, Iqbal, Kamruzzaman, Joarder, Zhang, Leo.  2019.  One-Shot Malware Outbreak Detection Using Spatio-Temporal Isomorphic Dynamic Features. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :751–756.

Fingerprinting malware by its behavioural signature has been an attractive approach for malware detection due to the homogeneity of dynamic execution patterns across different variants of similar families. Although previous research shows reasonably good performance in dynamic detection using machine learning techniques on a large corpus of training data, in many practical defence scenarios decisions must be made based on a scarce number of observable samples. This paper demonstrates the effectiveness of a generative adversarial autoencoder for dynamic malware detection under outbreak situations where, in most cases, only a single sample is available for training the machine learning algorithm to detect similar samples in the wild.

2020-03-09
Cao, Yuan, Zhao, Yongli, Li, Jun, Lin, Rui, Zhang, Jie, Chen, Jiajia.  2019.  Reinforcement Learning Based Multi-Tenant Secret-Key Assignment for Quantum Key Distribution Networks. 2019 Optical Fiber Communications Conference and Exhibition (OFC). :1–3.
We propose a reinforcement learning based online multi-tenant secret-key assignment algorithm for quantum key distribution networks, capable of reducing the tenant-request blocking probability by more than half compared to the benchmark heuristics.
2020-02-26
Wang, Yuze, Han, Tao, Han, Xiaoxia, Liu, Peng.  2019.  Ensemble-Learning-Based Hardware Trojans Detection Method by Detecting the Trigger Nets. 2019 IEEE International Symposium on Circuits and Systems (ISCAS). :1–5.

With the globalization of integrated circuit (IC) design and manufacturing, malicious third-party vendors can easily insert hardware Trojans into their intellectual property (IP) cores during the IC design phase, threatening the security of IC systems. Hardware-Trojan detection methods are therefore strongly required, especially for the IC design phase. Exploiting the particularity of Trigger nets in Trojan circuits, in this paper we propose an ensemble-learning-based hardware-Trojan detection method that detects Trigger nets at the gate level. We extract the Trigger-net features for each net from known netlists and use ensemble learning to train two detection models according to the Trojan types. The detection models are used to identify suspicious Trigger nets in an unknown netlist under inspection and give a suspiciousness value for each net. By flagging the top n% most suspicious nets of each detection model based on these suspiciousness values, the proposed method achieves, on average, an 88% true positive rate, a 90% true negative rate, and 90% accuracy.
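
To make the final flagging step concrete, here is a simplified sketch (a single model rather than the paper's two type-specific models; the feature set and classifier choice are assumptions) that scores each net and flags the top n% most suspicious ones:

```python
# Score nets with a learned model, then flag the top n% as suspicious (sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def flag_top_percent(train_X, train_y, test_X, n_percent=5.0):
    """train_*: labeled Trigger-net features from known netlists;
    test_X: features of nets from the netlist under inspection."""
    model = RandomForestClassifier(n_estimators=200).fit(train_X, train_y)
    suspiciousness = model.predict_proba(test_X)[:, 1]      # P(net is a Trigger net)
    k = max(1, int(len(suspiciousness) * n_percent / 100))
    flagged = np.argsort(suspiciousness)[::-1][:k]           # indices of top n% nets
    return flagged, suspiciousness
```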

Han, Tao, Wang, Yuze, Liu, Peng.  2019.  Hardware Trojans Detection at Register Transfer Level Based on Machine Learning. 2019 IEEE International Symposium on Circuits and Systems (ISCAS). :1–5.

To accurately detect hardware Trojans in the integrated circuit design process, a machine-learning-based detection method at the register transfer level (RTL) is proposed. In this method, circuit features are extracted from the RTL source code and a training database is built using circuits in a hardware-Trojan library. The training database is used to train an efficient detection model based on the gradient boosting algorithm. In order to expand the hardware-Trojan library to detect new types of hardware Trojans and to update the detection model in time, a server-client mechanism is used. The proposed method achieves a 100% true positive rate and an 89% true negative rate, on average, on benchmarks from Trust-Hub.

2020-02-18
Huang, Yonghong, Verma, Utkarsh, Fralick, Celeste, Infantec-Lopez, Gabriel, Kumar, Brajesh, Woodward, Carl.  2019.  Malware Evasion Attack and Defense. 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :34–38.

Machine learning (ML) classifiers are vulnerable to adversarial examples. An adversarial example is an input sample that is slightly modified to induce misclassification in an ML classifier. In this work, we investigate white-box and grey-box evasion attacks against an ML-based malware detector and conduct performance evaluations in a real-world setting. We compare defense approaches for mitigating the attacks, and we propose a framework for deploying grey-box and black-box attacks against malware detection systems.

Nasr, Milad, Shokri, Reza, Houmansadr, Amir.  2019.  Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-Box Inference Attacks against Centralized and Federated Learning. 2019 IEEE Symposium on Security and Privacy (SP). :739–753.

Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants, in the federated learning setting, can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies.
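
One building block that a white-box attack of this kind can exploit is a per-sample gradient feature. The sketch below (PyTorch, illustrative only and not the paper's full attack) computes the norm of the loss gradient with respect to the model parameters for a single labeled sample; for a well-fitted model this norm tends to be smaller on training members:

```python
# Per-sample gradient-norm feature for white-box membership inference (sketch).
import torch
import torch.nn as nn

def grad_norm_feature(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> float:
    """Norm of d(loss)/d(parameters) for one labeled sample (x, y)."""
    model.zero_grad()
    loss = nn.functional.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    loss.backward()
    sq = sum((p.grad ** 2).sum() for p in model.parameters() if p.grad is not None)
    return float(sq.sqrt())   # combine with activations, loss, etc. in the attack model
```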

Chen, Jiefeng, Wu, Xi, Rastogi, Vaibhav, Liang, Yingyu, Jha, Somesh.  2019.  Towards Understanding Limitations of Pixel Discretization Against Adversarial Attacks. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :480–495.

Wide adoption of artificial neural networks in various domains has led to increasing interest in defending them against adversarial attacks. Preprocessing defense methods such as pixel discretization are particularly attractive in practice due to their simplicity, low computational overhead, and applicability to various systems. It has been observed that such methods work well on simple datasets like MNIST but break on more complicated ones like ImageNet under recently proposed strong white-box attacks. To understand the conditions for success and the potential for improvement, we study the pixel discretization defense method, including more sophisticated variants that take into account the properties of the dataset being discretized. Our results again show poor resistance against the strong attacks. We analyze our results in a theoretical framework and offer strong evidence that pixel discretization is unlikely to work on all but the simplest of datasets. Furthermore, our arguments offer insights into why some other preprocessing defenses may be insecure.
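
For reference, the simplest form of the defense under study is uniform pixel discretization, i.e., snapping each pixel to the nearest codeword; the dataset-aware variants analyzed in the paper differ mainly in how the codebook is chosen. A minimal sketch:

```python
# Uniform pixel discretization: snap each pixel to the nearest of k codewords.
import numpy as np

def discretize(images: np.ndarray, k: int = 2) -> np.ndarray:
    """images: float array in [0, 1]. With k=2 this is simple binarization."""
    codewords = np.linspace(0.0, 1.0, k)                        # e.g. [0.0, 1.0]
    idx = np.argmin(np.abs(images[..., None] - codewords), axis=-1)
    return codewords[idx]

# Usage: preprocess both training and test inputs before feeding the classifier,
# e.g. x_defended = discretize(x, k=2)
```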

2020-02-17
Khalil, Kasem, Eldash, Omar, Kumar, Ashok, Bayoumi, Magdy.  2019.  Self-Healing Approach for Hardware Neural Network Architecture. 2019 IEEE 62nd International Midwest Symposium on Circuits and Systems (MWSCAS). :622–625.
Neural networks are used in many applications, and guarding their performance against faults is a research challenge. A self-healing neural network is a promising concept for achieving reliability, that is, the ability to detect and fix a fault in the system automatically. Most current self-healing neural networks are based on replication of hardware nodes, which causes significant area overhead. The proposed self-healing approach results in a modest area overhead and is suitable for complex neural networks. The method is based on a shared operation and a spare node in each layer that compensates for any faulty node in the layer. Each faulty node is compensated for by its neighbor node, which performs the faulty node's operations as well as its own, sequentially. If the neighbor is itself faulty, the spare node compensates for it. The proposed method is implemented in VHDL and simulation results are obtained using an Altira 10 GX FPGA for different numbers of nodes. The area overhead is very small for a complex network. The reliability of the proposed method is studied and compared with a traditional neural network.
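
The compensation rule is described in prose only; the following behavioral sketch (plain Python with a hypothetical node interface, not the VHDL design) illustrates the routing logic of neighbor-plus-spare compensation:

```python
# Behavioral sketch of the per-layer compensation rule (not the hardware design).
def layer_outputs(nodes, spare, inputs):
    """nodes: list of objects with a .faulty flag and a .compute(x) method;
    spare: one extra node per layer. A faulty node's operation is re-run on
    its neighbor, or on the spare when the neighbor is also faulty."""
    outputs = []
    for i, node in enumerate(nodes):
        if not node.faulty:
            outputs.append(node.compute(inputs))
            continue
        neighbor = nodes[(i + 1) % len(nodes)]
        backup = neighbor if not neighbor.faulty else spare
        # The backup node performs the faulty node's operation in addition
        # to (and sequentially with) its own work.
        outputs.append(backup.compute(inputs))
    return outputs
```
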
Wang, Xinda, Sun, Kun, Batcheller, Archer, Jajodia, Sushil.  2019.  Detecting "0-Day" Vulnerability: An Empirical Study of Secret Security Patch in OSS. 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :485–492.
Security patches in open source software (OSS) not only provide security fixes for identified vulnerabilities, but also make the vulnerable code public to attackers. Armored attackers may therefore misuse this information to launch N-day attacks on unpatched OSS versions. The best practice for preventing this type of N-day attack is to upgrade the software to the latest version as soon as possible. However, due to concerns about reputation and ease of software development management, software vendors may choose to secretly patch their vulnerabilities in a new version without reporting them to CVE or even providing any explicit description in their change logs. When those secretly patched vulnerabilities are identified by armored attackers, they can be turned into powerful "0-day" attacks, which can be exploited to compromise not only unpatched versions of the same software, but also similar types of OSS (e.g., SSL libraries) that may contain the same vulnerability due to code cloning or similar design/implementation logic. Therefore, it is critical to identify secret security patches and downgrade the risk of those "0-day" attacks to at least "n-day" attacks. In this paper, we develop a defense system and implement a toolset to automatically identify secret security patches in open source software. To distinguish security patches from other patches, we first build a security patch database that contains more than 4700 security patches mapped to records in the CVE list. Next, we identify a set of features to help distinguish security patches from non-security ones using machine learning approaches. Finally, we use code clone identification mechanisms to discover similar patches or vulnerabilities in similar types of OSS. The experimental results show that our approach can achieve good detection performance. A case study on OpenSSL, LibreSSL, and BoringSSL discovers 12 secret security patches.
Ullah, Imtiaz, Mahmoud, Qusay H..  2019.  A Two-Level Hybrid Model for Anomalous Activity Detection in IoT Networks. 2019 16th IEEE Annual Consumer Communications Networking Conference (CCNC). :1–6.
In this paper we propose a two-level hybrid anomalous activity detection model for intrusion detection in IoT networks. The level-1 model uses flow-based anomaly detection, which is capable of classifying network traffic as normal or anomalous. The flow-based features are extracted from the CICIDS2017 and UNSW-15 datasets. If anomalous activity is detected, the flow is forwarded to the level-2 model to determine the category of the anomaly by deeply examining the contents of the packet. The level-2 model uses Recursive Feature Elimination (RFE) to select significant features, with the Synthetic Minority Over-Sampling Technique (SMOTE) for oversampling and Edited Nearest Neighbors (ENN) for cleaning the CICIDS2017 and UNSW-15 datasets. The precision, recall, and F score of our level-1 model were measured at 100% for the CICIDS2017 dataset and 99% for the UNSW-15 dataset, while for the level-2 model they were measured at 100% for the CICIDS2017 dataset and 97% for the UNSW-15 dataset. The predictor we introduce in this paper provides a solid framework for the development of malicious activity detection in IoT networks.
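
The level-2 preprocessing combines three standard components (RFE, SMOTE, and ENN). One plausible way to wire them together, with assumed estimator choices, uses scikit-learn and imbalanced-learn:

```python
# Level-2 preprocessing sketch: RFE feature selection, then SMOTE + ENN resampling.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from imblearn.combine import SMOTEENN   # SMOTE oversampling + ENN cleaning

def prepare_level2(X, y, n_features=20):
    selector = RFE(RandomForestClassifier(n_estimators=100),
                   n_features_to_select=n_features).fit(X, y)
    X_sel = selector.transform(X)
    X_bal, y_bal = SMOTEENN().fit_resample(X_sel, y)      # balance anomaly categories
    clf = RandomForestClassifier(n_estimators=200).fit(X_bal, y_bal)
    return selector, clf   # at test time: clf.predict(selector.transform(X_new))
```
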
Ying, Huan, Ouyang, Xuan, Miao, Siwei, Cheng, Yushi.  2019.  Power Message Generation in Smart Grid via Generative Adversarial Network. 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). :790–793.
As the next generation of the power system, the smart grid is developing towards automation and intelligence. Along with the benefits brought by smart grids, e.g., improved energy conversion rate, power utilization rate, and power supply quality, come security challenges. One of the most important issues in smart grids is ensuring reliable communication between secondary equipment. The state-of-the-art method for ensuring smart grid security is to detect cyber attacks with deep learning. However, due to the small number of negative samples, the performance of such detection systems is limited. In this paper, we propose a novel approach that utilizes a Generative Adversarial Network (GAN) to generate abundant negative samples, which helps to improve the performance of the state-of-the-art detection system. The evaluation results demonstrate that the proposed method can effectively improve the performance of the detection system by 4%.
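
The paper does not specify the GAN architecture; the following minimal fully connected GAN (PyTorch, with assumed feature and noise dimensions) illustrates how synthetic negative samples could be generated to augment a scarce attack class:

```python
# Minimal GAN sketch for synthesizing attack-message feature vectors (illustrative).
import torch
import torch.nn as nn

DIM, NOISE = 32, 16            # feature and noise dimensions (assumed)
G = nn.Sequential(nn.Linear(NOISE, 64), nn.ReLU(), nn.Linear(64, DIM))
D = nn.Sequential(nn.Linear(DIM, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def train_step(real_batch):                      # real_batch: (B, DIM) attack features
    b = real_batch.size(0)
    fake = G(torch.randn(b, NOISE))
    # Discriminator step: real -> 1, fake -> 0
    opt_d.zero_grad()
    d_loss = bce(D(real_batch), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    d_loss.backward(); opt_d.step()
    # Generator step: try to fool the discriminator
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(b, 1))
    g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# After training, G(torch.randn(n, NOISE)) yields n synthetic negative samples.
```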
2020-02-10
Schneeberger, Tanja, Scholtes, Mirella, Hilpert, Bernhard, Langer, Markus, Gebhard, Patrick.  2019.  Can Social Agents elicit Shame as Humans do? 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII). :164–170.
This paper presents a study that examines whether social agents can elicit the social emotion of shame as humans do. For that, we use job interviews, which are highly evaluative situations per se. We vary the interview style (shame-eliciting vs. neutral) and the job interviewer (human vs. social agent). Our dependent variables include observational data regarding the social signals of shame and shame regulation, as well as self-assessment questionnaires regarding felt uneasiness and discomfort in the situation. Our results indicate that social agents can elicit shame to the same extent as humans. This provides insights into the impact of social agents on users and the emotional connection between them.
Zojaji, Sahba, Peters, Christopher.  2019.  Towards Virtual Agents for Supporting Appropriate Small Group Behaviors in Educational Contexts. 2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games). :1–2.
The verbal and non-verbal behaviors we use to communicate effectively with other people are vital for success in our daily lives. Despite the importance of social skills, creating standardized methods for training them and supporting that training is challenging. Information and Communications Technology (ICT) may have good potential to support social and emotional learning (SEL) through virtual social demonstration games. This paper presents initial work involving the design of a pedagogical scenario to facilitate teaching of socially appropriate and inappropriate behaviors when entering and standing in a small group of people, a common occurrence in collaborative social situations. This is achieved through the use of virtual characters and, initially, virtual reality (VR) environments for supporting situated learning in multiple contexts. We describe work done thus far on the demonstrator scenario and the anticipated potential, pitfalls, and challenges of the approach.
Hu, Taifeng, Wu, Liji, Zhang, Xiangmin, Yin, Yanzhao, Yang, Yijun.  2019.  Hardware Trojan Detection Combine with Machine Learning: an SVM-based Detection Approach. 2019 IEEE 13th International Conference on Anti-counterfeiting, Security, and Identification (ASID). :202–206.
As integrated circuits (ICs) appear in all aspects of life, whether an IC is secure and reliable has become an increasing worry, and addressing it is of significant necessity. An attacker can achieve malicious ends by adding or removing modules, so-called hardware Trojans (HTs). In this paper, we use side-channel analysis (SCA) and a support vector machine (SVM) classifier to determine whether there is a Trojan in a circuit. We use a SAKURA-G circuit board with a Xilinx SPARTAN-6 to conduct our experiment. Results show that the Trojan detection rate is up to 93% and the classification accuracy is up to 91.8475%.
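
As an assumed setup rather than the authors' exact flow, an SCA-plus-SVM pipeline of this kind can be sketched with scikit-learn: label each power trace as Trojan-free or Trojan-infected, train an SVM on the trace features, and report accuracy:

```python
# SVM classification of power-trace features: Trojan-free vs. Trojan-infected (sketch).
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def train_sca_svm(traces, labels):
    """traces: (N, D) per-run power-trace features; labels: 0 = clean, 1 = Trojan."""
    X_tr, X_te, y_tr, y_te = train_test_split(traces, labels,
                                              test_size=0.3, stratify=labels)
    scaler = StandardScaler().fit(X_tr)
    clf = SVC(kernel="rbf", C=1.0).fit(scaler.transform(X_tr), y_tr)
    acc = accuracy_score(y_te, clf.predict(scaler.transform(X_te)))
    return clf, scaler, acc
```
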
Cheng, Xiao, Wang, Haoyu, Hua, Jiayi, Zhang, Miao, Xu, Guoai, Yi, Li, Sui, Yulei.  2019.  Static Detection of Control-Flow-Related Vulnerabilities Using Graph Embedding. 2019 24th International Conference on Engineering of Complex Computer Systems (ICECCS). :41–50.

Static vulnerability detection has shown its effectiveness in detecting well-defined low-level memory errors. However, high-level control-flow-related (CFR) vulnerabilities, such as insufficient control flow management (CWE-691), business logic errors (CWE-840), and program behavioral problems (CWE-438), which are often caused by a wide variety of bad programming practices, pose a great challenge for existing general static analysis solutions. This paper presents a new deep-learning-based graph embedding approach for the accurate detection of CFR vulnerabilities. Our approach makes a new attempt by applying a recent graph convolutional network to embed code fragments in a compact and low-dimensional representation that preserves the high-level control-flow information of a vulnerable program. We have conducted experiments on 8,368 real-world vulnerable programs, comparing our approach with several traditional static vulnerability detectors and state-of-the-art machine-learning-based approaches. The experimental results show the effectiveness of our approach in terms of both accuracy and recall. Our research sheds light on the promising direction of combining program analysis with deep learning techniques to address general static analysis challenges.

Chen, Yige, Zang, Tianning, Zhang, Yongzheng, Zhou, Yuan, Wang, Yipeng.  2019.  Rethinking Encrypted Traffic Classification: A Multi-Attribute Associated Fingerprint Approach. 2019 IEEE 27th International Conference on Network Protocols (ICNP). :1–11.

With the unprecedented prevalence of mobile network applications, cryptographic protocols such as the Secure Socket Layer/Transport Layer Security (SSL/TLS) are widely used in mobile network applications for communication security. Proven methods for encrypted video stream classification or encrypted protocol detection are unsuitable for SSL/TLS traffic. Consequently, networking and security services based on application-level traffic classification face severe challenges in effectiveness. Existing encrypted traffic classification methods exhibit unsatisfactory accuracy for applications with similar state characteristics. In this paper, we propose a multiple-attribute-based encrypted traffic classification system named Multi-Attribute Associated Fingerprints (MAAF). We develop MAAF based on two key insights: the DNS traces generated during the application runtime contain classification guidance information, and the handshake certificates in the encrypted flows can provide classification clues. In addition to exploiting these key insights, MAAF employs the context of the encrypted traffic to overcome the attribute-lacking problem during classification. Our experimental results demonstrate that MAAF achieves 98.69% accuracy on a real-world traceset consisting of 16 applications, supports early prediction, and is robust to the scale of the training traceset. Besides, MAAF is superior to the state-of-the-art methods in terms of both accuracy and robustness.

Eshmawi, Ala', Nair, Suku.  2019.  The Roving Proxy Framework for SMS Spam and Phishing Detection. 2019 2nd International Conference on Computer Applications Information Security (ICCAIS). :1–6.

This paper presents the details of the roving proxy framework for SMS spam and SMS phishing (SMishing) detection. The framework aims to protect organizations and enterprises from the danger of SMishing attacks. Feasibility and functionality studies of the framework are presented, along with an update-process study to define the minimum requirements for the system to adapt to the latest spam and SMishing trends.

Elakkiya, E, Selvakumar, S.  2019.  Initial Weights Optimization Using Enhanced Step Size Firefly Algorithm for Feed Forward Neural Network Applied to Spam Detection. TENCON 2019 - 2019 IEEE Region 10 Conference (TENCON). :942–946.

Spam messages are unsolicited and unnecessary messages that may contain harmful code or links that activate malicious viruses and spyware. The increasing popularity of social networks attracts spammers to perform malicious activities in social networks, so an efficient spam detection method is necessary. In this paper, a spam detection model based on a feed-forward neural network with back propagation is proposed. The quality of the learning process is improved by tuning the initial weights of the feed-forward neural network using the proposed enhanced step-size firefly algorithm, which reduces the time needed to find optimal weights during the learning process. The model is applied to a Twitter dataset, and the experimental results show that the proposed model performs well in terms of accuracy and detection rate and has a lower false positive rate.
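
The enhanced step-size rule is the paper's contribution and is not reproduced here; the sketch below shows a plain firefly search (NumPy, assumed parameters) over candidate initial weight vectors, minimizing a user-supplied network loss before back-propagation training begins:

```python
# Basic firefly search over candidate initial weight vectors (illustrative only;
# the paper's enhanced step-size variant is not reproduced here).
import numpy as np

def firefly_init(loss_fn, dim, n_fireflies=20, iters=50,
                 alpha=0.2, beta0=1.0, gamma=1.0, rng=np.random.default_rng(0)):
    """loss_fn(w) evaluates the network loss for a flattened weight vector w."""
    pos = rng.uniform(-1, 1, size=(n_fireflies, dim))
    brightness = np.array([loss_fn(w) for w in pos])        # lower loss = brighter
    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if brightness[j] < brightness[i]:            # j is brighter: move i toward j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += beta * (pos[j] - pos[i]) + alpha * rng.normal(size=dim)
                    brightness[i] = loss_fn(pos[i])
        alpha *= 0.97                                        # simple step-size decay
    return pos[int(np.argmin(brightness))]                   # best initial weights found
```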