Biblio

Filters: Keyword is visual similarity
2023-02-03
Alkawaz, Mohammed Hazim, Steven, Stephanie Joanne, Mohammad, Omar Farook, Md Johar, Md Gapar.  2022.  Identification and Analysis of Phishing Website based on Machine Learning Methods. 2022 IEEE 12th Symposium on Computer Applications & Industrial Electronics (ISCAIE). :246–251.
People are increasingly sharing their personal details online as internet usage grows, giving fraudsters access to a massive amount of information and financial activity. Attackers create web pages that mimic reputable sites and deliver malicious content to victims to trick them into disclosing sensitive information. Prevailing phishing security measures are inadequate for detecting new phishing attacks. The first objective of this research is therefore to analyze and compare phishing and legitimate websites using data collected from open-source platforms through a survey. The second objective is to propose a method for detecting fake sites using Decision Tree and Random Forest approaches. The survey was carried out with 30 participants using Microsoft Forms. The majority of participants have poor awareness of phishing attacks and do not observe the features of the interface before accessing a site through the browser. The collected data supports the goal of identifying the best phishing-website detector, for which Decision Tree and Random Forest classifiers were trained and tested. In terms of feature-importance detection and accuracy, the results demonstrate that Random Forest outperforms Decision Tree in phishing website detection.
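
As a rough illustration of the comparison this abstract describes, the sketch below trains both classifiers with scikit-learn on a tabular feature set; the file name and column names are hypothetical stand-ins, not taken from the paper.

    # Minimal sketch, assuming a CSV of website features with a binary
    # "phishing" label (file and column names are hypothetical).
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    df = pd.read_csv("phishing_features.csv")  # hypothetical dataset
    X, y = df.drop(columns=["phishing"]), df["phishing"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    for name, clf in [("Decision Tree", DecisionTreeClassifier(random_state=0)),
                      ("Random Forest", RandomForestClassifier(n_estimators=100, random_state=0))]:
        clf.fit(X_tr, y_tr)
        print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
        # Both models expose feature_importances_, the basis for the
        # paper's feature-importance comparison.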
2020-07-30
Perez, Claudio A., Estévez, Pablo A., Galdames, Francisco J., Schulz, Daniel A., Perez, Juan P., Bastías, Diego, Vilar, Daniel R.  2018.  Trademark Image Retrieval Using a Combination of Deep Convolutional Neural Networks. 2018 International Joint Conference on Neural Networks (IJCNN). :1–7.
Trademarks are recognizable images and/or words used to distinguish various products or services. They become associated with the reputation, innovation, quality, and warranty of the products. Countries around the world have offices for industrial/intellectual property (IP) registration. A new trademark image in an application for registration should be distinct from all registered trademarks. Due to the volume of trademark registration applications and the size of the databases containing existing trademarks, it is impossible for humans to make all the comparisons visually; therefore, technological tools are essential for this task. In this work we use a pre-trained, publicly available Convolutional Neural Network (CNN), VGG19, trained on the ImageNet database. We adapted VGG19 for the trademark image retrieval (TIR) task by fine-tuning the network on two different databases: VGG19v was trained on a database of trademark images organized by visual similarity, and VGG19c was trained on trademarks organized by conceptual similarity. The database for VGG19v was built from trademarks downloaded from the web and organized by visual similarity according to experts from the IP office. The database for VGG19c was built from trademark images from the United States Patent and Trademark Office, organized according to the Vienna conceptual protocol. TIR performance was assessed using the normalized average rank on a test set from the METU database, which contains 922,926 trademark images. We computed the normalized average ranks for VGG19v, VGG19c, and a combination of both networks. Our method achieved significantly better results on the METU database than those published previously.
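
The normalized average rank the authors report can be computed as sketched below; this follows the common formulation of Müller et al. for retrieval evaluation, and the paper's exact variant may differ slightly. Ranks are the 1-based positions of the relevant trademarks in the retrieved list.

    # Minimal sketch of the normalized average rank (0 ≈ perfect retrieval,
    # ~0.5 chance level, 1 worst); a common formulation, assumed here.
    def normalized_average_rank(relevant_ranks, collection_size):
        n_rel = len(relevant_ranks)
        return (sum(relevant_ranks) - n_rel * (n_rel - 1) / 2) / (collection_size * n_rel)

    # E.g., three relevant trademarks retrieved at ranks 1, 3, and 10
    # out of the 922,926-image METU collection:
    print(normalized_average_rank([1, 3, 10], collection_size=922926))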
2017-03-07
Wang, Xi, Sun, Zhenfeng, Zhang, Wenqiang, Zhou, Yu, Jiang, Yu-Gang.  2016.  Matching User Photos to Online Products with Robust Deep Features. Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval. :7–14.

This paper focuses on a practically important problem: matching a real-world product photo to exactly the same item(s) on online shopping sites. The task is extremely challenging because the user photos (i.e., the queries in this scenario) are often captured in uncontrolled environments, while the product images in online shops are mostly taken by professionals with clean backgrounds and perfect lighting. To tackle the problem, we study deep network architectures and training schemes, with the goal of learning a robust deep feature representation that can bridge the domain gap between the user photos and the online product images. Our contributions are two-fold. First, we propose an alternative to the popular contrastive loss used in Siamese deep networks, namely a robust contrastive loss, in which we "relax" the penalty on positive pairs to alleviate over-fitting. Second, a multi-task fine-tuning approach is introduced to learn a better feature representation, which not only incorporates knowledge from the provided training photo pairs, but also exploits additional information from the large ImageNet dataset to regularize the fine-tuning procedure. Experiments on two challenging real-world datasets demonstrate that both the robust contrastive loss and the multi-task fine-tuning approach are effective, leading to very promising results with a time cost suitable for real-time retrieval.
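
The abstract only says the positive-pair penalty is "relaxed"; one plausible reading, sketched below in PyTorch, adds a slack margin so that positive pairs already closer than the slack incur no loss. The margin names and values are illustrative assumptions, not the paper's actual formulation.

    # A hedged sketch of a contrastive loss with a relaxed positive term,
    # in the spirit of the paper's robust contrastive loss (exact form is
    # in the paper; pos_slack and neg_margin are assumed names/values).
    import torch

    def relaxed_contrastive_loss(f1, f2, y, pos_slack=0.1, neg_margin=1.0):
        d = torch.norm(f1 - f2, dim=1)                      # L2 distance per pair
        pos = y * torch.clamp(d - pos_slack, min=0).pow(2)  # positives free within slack
        neg = (1 - y) * torch.clamp(neg_margin - d, min=0).pow(2)
        return (pos + neg).mean()

    # Toy usage: 8 feature pairs of dimension 128, y=1 marks matching pairs.
    f1, f2 = torch.randn(8, 128), torch.randn(8, 128)
    y = torch.randint(0, 2, (8,)).float()
    print(relaxed_contrastive_loss(f1, f2, y))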