Biblio

Filters: Author is Jia, J.
2019-02-25
Peng, W., Huang, L., Jia, J., Ingram, E.  2018.  Enhancing the Naive Bayes Spam Filter Through Intelligent Text Modification Detection. 2018 17th IEEE International Conference on Trust, Security and Privacy in Computing and Communications / 12th IEEE International Conference on Big Data Science and Engineering (TrustCom/BigDataSE). :849–854.

Spam emails have been a chronic issue in computer security. They are very costly economically and extremely dangerous for computers and networks. Despite the emergence of social networks and other Internet-based information exchange venues, dependence on email communication has increased over the years, and this dependence has resulted in an urgent need to improve spam filters. Although many spam filters have been created to help prevent spam emails from entering a user's inbox, there is a lack of research focusing on text modifications. Currently, Naive Bayes is one of the most popular methods of spam classification because of its simplicity and efficiency. Naive Bayes is also very accurate; however, it is unable to correctly classify emails when they contain leetspeak or diacritics. Thus, in this paper, we implement a novel algorithm for enhancing the accuracy of the Naive Bayes spam filter so that it can detect text modifications and correctly classify an email as spam or ham. Our Python algorithm combines semantic-based, keyword-based, and machine learning techniques to increase the accuracy of Naive Bayes compared to SpamAssassin by over two hundred percent. Additionally, we have discovered a relationship between the length of the email and the spam score, indicating that Bayesian poisoning, a controversial topic, is a real phenomenon that is utilized by spammers.
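
The core idea in the abstract, normalizing leetspeak and diacritics before Bayesian scoring, can be illustrated with a minimal Python sketch. This is not the authors' implementation: the leetspeak mapping, the class and function names, and the bag-of-words Naive Bayes with Laplace smoothing are all assumptions made for illustration.

    import math
    import unicodedata
    from collections import Counter

    # Hypothetical leetspeak substitutions; the paper's actual rules are not given in the abstract.
    LEET_MAP = str.maketrans({"0": "o", "1": "l", "3": "e", "4": "a", "5": "s", "7": "t", "@": "a", "$": "s"})

    def normalize(text):
        """Strip diacritics and undo common leetspeak substitutions before tokenizing."""
        text = unicodedata.normalize("NFKD", text)
        text = "".join(ch for ch in text if not unicodedata.combining(ch))
        return text.lower().translate(LEET_MAP)

    class NaiveBayesSpamFilter:
        def __init__(self):
            self.word_counts = {"spam": Counter(), "ham": Counter()}
            self.class_counts = {"spam": 0, "ham": 0}

        def train(self, text, label):
            self.class_counts[label] += 1
            self.word_counts[label].update(normalize(text).split())

        def score(self, text):
            """Log-odds of spam versus ham under a bag-of-words model with Laplace smoothing."""
            words = normalize(text).split()
            total = sum(self.class_counts.values())
            vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ham"]))
            result = 0.0
            for label, sign in (("spam", 1), ("ham", -1)):
                prior = math.log((self.class_counts[label] + 1) / (total + 2))
                n = sum(self.word_counts[label].values())
                likelihood = sum(math.log((self.word_counts[label][w] + 1) / (n + vocab + 1)) for w in words)
                result += sign * (prior + likelihood)
            return result

A positive score suggests spam. The point of normalize() is that obfuscated tokens such as "fr33" map to the same token as "free", so simple text modification no longer bypasses the trained word statistics.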

2018-04-02
Jia, J., Chen, L.  2017.  (l, m, d)-Anonymity: A Resisting Similarity Attack Model for Multiple Sensitive Attributes. 2017 IEEE 2nd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). :756–760.

Preserving privacy is extremely important in data publishing. Existing privacy-preserving models are mostly oriented to a single sensitive attribute and cannot be applied to situations with multiple sensitive attributes. Moreover, they do not consider the semantic similarity between sensitive attribute values and may be vulnerable to similarity attacks. In this paper, we propose an (l, m, d)-anonymity model that resists similarity attacks on multiple sensitive attributes, where m is the number of sensitive attributes. The model uses a semantic hierarchical tree to analyze and compute the semantic dissimilarity between sensitive attribute values, and each equivalence class must contain at least l sensitive attribute values that are d-different on each sensitive attribute dimension. Meanwhile, in order to keep the published data highly usable, the model adopts a distance-based measurement method to divide the equivalence classes. We carry out extensive experiments to verify that the (l, m, d)-anonymity model can significantly reduce the probability of sensitive information leakage and protect individual privacy more effectively.
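
The condition described in the abstract can be sketched in Python as a check over one equivalence class. This is an illustrative approximation, not the authors' algorithm: the path-length dissimilarity over a parent-map hierarchy and the greedy selection of d-different values are assumptions, and the greedy pass only gives a sufficient (not exact) test for the at-least-l requirement.

    def tree_dissimilarity(a, b, parents):
        """Hypothetical semantic dissimilarity: path length between two values in a
        semantic hierarchy, where `parents` maps each node to its parent (root -> None)."""
        def ancestors(x):
            chain = []
            while x is not None:
                chain.append(x)
                x = parents.get(x)
            return chain
        pa, pb = ancestors(a), ancestors(b)
        common = set(pa) & set(pb)
        depth_a = next(i for i, x in enumerate(pa) if x in common)
        depth_b = next(i for i, x in enumerate(pb) if x in common)
        return depth_a + depth_b

    def satisfies_lmd(equivalence_class, hierarchies, l, d):
        """equivalence_class: list of records, each a tuple of m sensitive attribute values.
        hierarchies: one parent map per sensitive attribute dimension.
        Greedily keeps values that are at least d-dissimilar from all values kept so far,
        and requires at least l such values on every dimension."""
        for dim, parents in enumerate(hierarchies):
            kept = []
            for value in {record[dim] for record in equivalence_class}:
                if all(tree_dissimilarity(value, other, parents) >= d for other in kept):
                    kept.append(value)
            if len(kept) < l:
                return False
        return True

In this sketch, a publisher would run satisfies_lmd on every equivalence class produced by the distance-based partitioning step; any class that fails would have to be merged or re-divided before the data is released.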