Biblio

Filters: Keyword is Uniform resource locators
2017-04-20
Wang, C. H., Zhou, Y. S..  2016.  A New Cross-Site Scripting Detection Mechanism Integrated with HTML5 and CORS Properties by Using Browser Extensions. 2016 International Computer Symposium (ICS). :264–269.
Cross-site scripting (XSS) is a common attack today, and attack patterns built on newer technologies such as HTML5 make the detection task increasingly difficult. In this paper, we focus on a browser-side detection mechanism that integrates HTML5 and CORS properties to detect XSS attacks with a rule-based filter implemented as a browser extension. We also present a composition-pattern estimation system that can judge whether an intercepted request carries a malicious attempt. The experimental results show that our approach reaches a high detection rate after tuning the system on frequently used attack sentences and testing it against the popular tool-kit XSSer, developed under OWASP.
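As a rough illustration of the rule-based filtering idea only (not the authors' extension, which runs inside the browser and also uses HTML5/CORS properties and composition-pattern estimation), the Python sketch below checks the query-string values of an intercepted request URL against a small, hypothetical set of regular-expression rules for common XSS payloads.

    import re
    from urllib.parse import urlparse, parse_qs

    # Hypothetical rule set; real deployments tune rules against frequently used attack sentences.
    XSS_RULES = [
        re.compile(r"<\s*script", re.IGNORECASE),
        re.compile(r"on\w+\s*=", re.IGNORECASE),           # inline event handlers
        re.compile(r"javascript\s*:", re.IGNORECASE),
        re.compile(r"<\s*(iframe|svg|img)[^>]*>", re.IGNORECASE),
    ]

    def request_is_suspicious(url: str) -> bool:
        """Return True if any query-string value matches a known XSS pattern."""
        params = parse_qs(urlparse(url).query)
        for values in params.values():
            for value in values:
                if any(rule.search(value) for rule in XSS_RULES):
                    return True
        return False

    print(request_is_suspicious("http://example.com/search?q=<script>alert(1)</script>"))  # True
    print(request_is_suspicious("http://example.com/search?q=hello"))                      # False
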
Mhana, Samer Attallah, Din, Jamilah Binti, Atan, Rodziah Binti.  2016.  Automatic generation of Content Security Policy to mitigate cross site scripting. 2016 2nd International Conference on Science in Information Technology (ICSITech). :324–328.

Content Security Policy (CSP) is a powerful client-side security layer that helps in mitigating and detecting a wide range of Web attacks, including cross-site scripting (XSS). However, utilizing CSP is a fallible process for site administrators and may require significant changes in Web application code. In this paper, we propose an approach to help site administrators overcome these limitations in order to utilize the full benefits of the CSP mechanism, leading to sites that are more immune to XSS. The algorithm is implemented as a plugin. It does not interfere with the Web application's original code, and the plugin can be “installed” on any other Web application with minimal effort. The algorithm can be implemented as part of the Web server layer rather than the business logic layer, and it can be extended to support generating CSP for content that is modified by JavaScript after loading. The current approach inspects the static contents of URLs.
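A minimal sketch of the underlying idea — deriving a CSP whitelist from the static content of a page — is shown below. The directive set, class and function names, and heuristics are illustrative assumptions; the paper's plugin additionally operates at the Web-server layer without touching application code.

    from html.parser import HTMLParser
    from urllib.parse import urlparse

    class SourceCollector(HTMLParser):
        """Collect the origins of external scripts, stylesheets and images in a page."""

        def __init__(self):
            super().__init__()
            self.sources = {"script-src": set(), "style-src": set(), "img-src": set()}

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "script" and attrs.get("src"):
                self._add("script-src", attrs["src"])
            elif tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
                self._add("style-src", attrs["href"])
            elif tag == "img" and attrs.get("src"):
                self._add("img-src", attrs["src"])

        def _add(self, directive, url):
            parsed = urlparse(url)
            if parsed.netloc:  # only external resources need to be whitelisted
                self.sources[directive].add(f"{parsed.scheme or 'https'}://{parsed.netloc}")

    def generate_csp(html: str) -> str:
        """Build a Content-Security-Policy header value from static page content."""
        collector = SourceCollector()
        collector.feed(html)
        directives = ["default-src 'self'"]
        for directive, origins in collector.sources.items():
            directives.append(" ".join([directive, "'self'"] + sorted(origins)))
        return "; ".join(directives)

    page = '<script src="https://cdn.example.com/app.js"></script><img src="/logo.png">'
    print(generate_csp(page))
    # default-src 'self'; script-src 'self' https://cdn.example.com; style-src 'self'; img-src 'self'
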

Chaudhary, P., Gupta, B. B., Yamaguchi, S..  2016.  XSS detection with automatic view isolation on online social network. 2016 IEEE 5th Global Conference on Consumer Electronics. :1–5.

Online Social Networks (OSNs) continuously suffer from the negative impact of cross-site scripting (XSS) vulnerabilities. This paper describes a novel framework for mitigating XSS attacks on OSN-based platforms. It is based entirely on request authentication and a view-isolation approach. It detects XSS attacks by validating string values extracted from vulnerable checkpoints in the web page, using a string examination algorithm supported by an XSS attack-vector repository. Any similarity (i.e. a string that fails validation) indicates the presence of malicious code injected by the attacker, and the framework then removes the script code to mitigate the XSS attack. To assess the defending ability of our designed model, we tested it on an OSN-based web application, HumHub. The experimental results revealed that our model discovers XSS attack vectors with low false-negative and false-positive rates and a tolerable performance overhead.
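The string-examination step can be pictured as comparing each value taken from a vulnerable checkpoint against a repository of known attack vectors. The sketch below is an illustrative approximation (a tiny hypothetical repository, simple decoding, and difflib similarity), not the authors' algorithm or their HumHub integration.

    from difflib import SequenceMatcher
    from html import unescape
    from urllib.parse import unquote

    # Hypothetical attack-vector repository; real repositories contain thousands of entries.
    ATTACK_VECTORS = [
        "<script>alert(1)</script>",
        "<img src=x onerror=alert(1)>",
        "<svg/onload=alert(1)>",
    ]

    def normalize(value: str) -> str:
        """Undo common encodings attackers use to slip past naive filters."""
        return unescape(unquote(value)).lower()

    def is_malicious(extracted: str, threshold: float = 0.7) -> bool:
        """Flag a checkpoint string if it is similar to any known attack vector."""
        candidate = normalize(extracted)
        return any(
            SequenceMatcher(None, candidate, normalize(vector)).ratio() >= threshold
            for vector in ATTACK_VECTORS
        )

    print(is_malicious("%3Cscript%3Ealert(1)%3C/script%3E"))  # True
    print(is_malicious("just a harmless comment"))            # False
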

2017-03-07
Nirmal, K., Janet, B., Kumar, R..  2015.  Phishing - the threat that still exists. 2015 International Conference on Computing and Communications Technologies (ICCCT). :139–143.

Phishing is an online security attack in which the attacker aims to harvest sensitive information such as passwords and credit card details from users by making them believe that what they see is legitimate. This threat has existed for over a decade, and there have been continuous developments in countering it. However, statistical studies reveal that phishing remains a major threat as the online era booms. In this paper, we look into the art of phishing and present a practical analysis of how state-of-the-art anti-phishing systems fail to prevent it. With the loopholes identified in these systems, we pave the roadmap for the kind of system that will counter this online security threat more effectively.

Wazzan, M. A., Awadh, M. H..  2015.  Towards Improving Web Attack Detection: Highlighting the Significant Factors. 2015 5th International Conference on IT Convergence and Security (ICITCS). :1–5.

Nowadays, with the rapid development of the Internet, use of the Web is increasing and Web applications have become a substantial part of people's daily life (e.g. e-government, e-health and e-learning), as they permit users to seamlessly access and manage information. The main security concern for e-business is Web application security. Web applications have many vulnerabilities such as injection, broken authentication and session management, and cross-site scripting (XSS). Consequently, Web applications have become targets of hackers, and many cyber attacks have emerged that aim to block the services of these applications (denial-of-service attacks). Developers are often unaware of these vulnerabilities and do not have enough time to secure their applications. Therefore, there is a significant need to study and improve attack detection for Web applications by determining the most significant factors for detection. To the best of our knowledge, no existing research summarizes the factors that influence the detection of Web attacks. In this paper, the author studies state-of-the-art techniques and research related to Web attack detection, analyses and compares different methods of Web attack detection, and summarizes the most important factors for Web attack detection independent of the type of vulnerability. Finally, the author gives recommendations for building a framework for Web application protection.

Burnap, P., Javed, A., Rana, O. F., Awan, M. S..  2015.  Real-time classification of malicious URLs on Twitter using machine activity data. 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). :970–977.

Massive online social networks with hundreds of millions of active users are increasingly being used by cyber criminals to spread malicious software (malware) that exploits vulnerabilities on users' machines for personal gain. Twitter is particularly susceptible to such activity as, with its 140-character limit, it is common for people to include URLs in their tweets to link to more detailed information, evidence, news reports and so on. URLs are often shortened, so the endpoint is not obvious before a person clicks the link. Cyber criminals can exploit this to propagate malicious URLs on Twitter, for which the endpoint is a malicious server that performs unwanted actions on the person's machine. This is known as a drive-by download. In this paper we develop a machine classification system to distinguish between malicious and benign URLs within seconds of the URL being clicked (i.e. in `real time'). We train the classifier using machine activity logs created while interacting with URLs extracted from Twitter data collected during a large global event - the Superbowl - and test it using data from another large sporting event - the Cricket World Cup. The results show that machine activity logs produce precision of up to 0.975 on training data from the first event and 0.747 on test data from the second event. Furthermore, we examine the properties of the learned model to explain the relationship between machine activity and malicious software behaviour, and build a learning curve for the classifier to illustrate that very small samples of training data can be used with only a small detriment to performance.
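The train-on-one-event, test-on-another pipeline the abstract describes could be sketched as follows. The feature names, the random-forest learner, and the synthetic random data are placeholders, since the actual machine-activity logs and classifier are not specified here.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import precision_score

    # Hypothetical machine-activity features logged while a sandbox visits each URL:
    FEATURES = ["cpu_usage", "bytes_sent", "bytes_received", "processes_created", "files_written"]

    rng = np.random.default_rng(0)
    X_train = rng.random((200, len(FEATURES)))   # stand-in for logs from the first event
    y_train = rng.integers(0, 2, 200)            # 1 = malicious, 0 = benign
    X_test = rng.random((80, len(FEATURES)))     # stand-in for logs from the second event
    y_test = rng.integers(0, 2, 80)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print("precision:", precision_score(y_test, clf.predict(X_test)))
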

Johnson, R., Kiourtis, N., Stavrou, A., Sritapan, V..  2015.  Analysis of content copyright infringement in mobile application markets. 2015 APWG Symposium on Electronic Crime Research (eCrime). :1–10.

As mobile devices gain larger displays and become more reliable at delivering paid entertainment and video content, we also see a rise in mobile applications that attempt to profit by streaming pirated content to unsuspecting end-users. These applications are both paid and free, and in the case of free applications the source of funding appears to be advertisements displayed while the content is streamed to the device. In this paper, we assess the extent of content copyright infringement for mobile markets that span multiple platforms (iOS, Android, and Windows Mobile) and cover both official and unofficial mobile markets located across the world. Using a set of search keywords that point to titles of paid streaming content, we discovered 8,592 Android, 5,550 iOS, and 3,910 Windows mobile applications that matched our search criteria. Of those applications, hundreds had links to either locally or remotely stored pirated content and were not developed, endorsed, or, in many cases, known to the owners of the copyrighted content. We also revealed the network locations of 856,717 Uniform Resource Locators (URLs) pointing to back-end servers and cyber-lockers used to deliver the pirated content to the mobile applications.

Lakhita, Yadav, S., Bohra, B., Pooja.  2015.  A review on recent phishing attacks in Internet. 2015 International Conference on Green Computing and Internet of Things (ICGCIoT). :1312–1315.

The development of the Internet has been accompanied by another domain: cyber-crime. Users can unknowingly be exposed to illegal activity, so it has become important to make the technology reliable. Phishing techniques frequently operate through email messages: a phishing email hosts a link to a phishing website, where a click on the URL, or embedded malware code, executes actions prepared through socially engineered messages. Lexically analyzing the URLs can enhance performance and help to differentiate a legitimate email from a phishing URL. As assessed in this study, textual analysis of phishing URLs combined with email classification is successful and results in highly precise anti-phishing.
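A hedged sketch of the lexical URL analysis mentioned above: the handful of features and the token list below are illustrative only, but they show the kind of string-level signals that can separate a phishing URL from a legitimate one before feeding an email/URL classifier.

    from urllib.parse import urlparse

    SUSPICIOUS_TOKENS = ("login", "verify", "secure", "account", "update")  # illustrative list

    def lexical_features(url: str) -> dict:
        """Extract simple lexical features often used to characterise phishing URLs."""
        parsed = urlparse(url)
        host = parsed.netloc
        return {
            "url_length": len(url),
            "num_dots_in_host": host.count("."),
            "has_ip_host": host.replace(".", "").isdigit(),
            "has_at_symbol": "@" in url,
            "num_hyphens": url.count("-"),
            "suspicious_token_count": sum(tok in url.lower() for tok in SUSPICIOUS_TOKENS),
        }

    print(lexical_features("http://paypal.com.secure-login.example.net/verify?id=1"))
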

2017-02-27
Li-xiong, Z., Xiao-lin, X., Jia, L., Lu, Z., Xuan-chen, P., Zhi-yuan, M., Li-hong, Z..  2015.  Malicious URL prediction based on community detection. 2015 International Conference on Cyber Security of Smart Cities, Industrial Control System and Communications (SSIC). :1–7.

Traditional anti-virus technology is primarily based on static analysis and dynamic monitoring. However, both techniques depend heavily on application files, which increases the risk of being attacked and wastes time and network bandwidth. In this study, we propose a new graph-based method through which we can preliminarily detect malicious URLs without the application file. First, the relationships between URLs are found through the relationships between people and URLs, and association rules are mined together with the confidence of each frequent URL. Second, a network of URLs is built from the association rules. Once the network is complete, we cluster the data by modularity to detect communities, where each community represents a different type of URL. We suppose that if a URL is associated with such a community, the URL is probably malicious. In our experiments, we successfully captured 82% of the malicious samples, a higher capture rate than that of traditional methods.
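The pipeline can be approximated in a few lines: mine co-occurring URL pairs from per-user visit sets, keep pairs above a confidence threshold as edges, and run modularity-based community detection. The visit data, the confidence formula, and the 0.5 threshold below are assumptions for illustration, with networkx used for the clustering step.

    from collections import Counter
    from itertools import combinations

    from networkx import Graph
    from networkx.algorithms.community import greedy_modularity_communities

    # Hypothetical logs: each entry is the set of URLs one user requested.
    visits = [
        {"a.com", "b.com", "evil1.net"},
        {"a.com", "evil1.net", "evil2.net"},
        {"b.com", "c.com"},
        {"evil1.net", "evil2.net", "evil3.net"},
    ]

    url_counts = Counter()
    pair_counts = Counter()
    for urls in visits:
        url_counts.update(urls)
        pair_counts.update(combinations(sorted(urls), 2))

    # Keep URL pairs whose (symmetric) confidence clears a threshold and use them as weighted edges.
    graph = Graph()
    for (u, v), n in pair_counts.items():
        confidence = n / min(url_counts[u], url_counts[v])
        if confidence >= 0.5:
            graph.add_edge(u, v, weight=confidence)

    # Modularity-based clustering; a community containing known-malicious seed URLs
    # marks its other members as probably malicious.
    for i, community in enumerate(greedy_modularity_communities(graph, weight="weight")):
        print(f"community {i}: {sorted(community)}")
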

2017-02-21
H. S. Jeon, H. Jung, W. Chun.  2015.  "ID Based Web Browser with P2P Property". 2015 9th International Conference on Future Generation Communication and Networking (FGCN). :41-44.

The main usage pattern of the Internet is shifting from the traditional host-to-host model to a content-dissemination model, which has led to rapid growth in Internet content. CDN and P2P are the two mainstream technologies for providing streaming content services in the current Internet. In recent years, some researchers have begun to focus on CDN-P2P-hybrid architectures and ISP-friendly P2P content delivery technology. Web applications have become one of the fundamental Internet services, and effectively supporting popular browser-based web applications is one of the keys to success for future-Internet projects. This paper proposes an ID-based browser with caching in IDNet. IDNet consists of an ID/locator separation scheme and a domain-insulated autonomous network architecture (DIANA), which redesigns the future Internet on a clean-slate basis. Experiments show that an ID web browser with a caching function can support content dissemination and can find the closest network in IDNet holding identical content.

2015-05-05
Buja, G., Bin Abd Jalil, K., Bt Hj Mohd Ali, F., Rahman, T.F.A..  2014.  Detection model for SQL injection attack: An approach for preventing a web application from the SQL injection attack. Computer Applications and Industrial Electronics (ISCAIE), 2014 IEEE Symposium on. :60-64.

Over the past 20 years, the use of the Web in daily life has kept increasing and has now become the norm. As the use of the Web grows, so does the use of web applications. Most of the web applications that exist today have vulnerabilities that could be exploited by an unauthorized person. Some well-known web application vulnerabilities are Structured Query Language (SQL) injection, cross-site scripting (XSS) and cross-site request forgery (CSRF). By exploiting these vulnerabilities, an attacker can gain information about users and damage the reputation of the respective organization. Developers of web applications usually do not realize that their applications are vulnerable; they only find out when the code is attacked or manipulated. This is understandable, as a web application contains thousands of lines of code, which makes loopholes hard to detect. Nowadays, as hacking tools and tutorials are easy to obtain, many new attackers emerge. Even though SQL injection is easy to protect against, a large number of systems on the Internet remain vulnerable to this type of attack because a few subtle conditions can go undetected. Therefore, in this paper we propose a detection model for detecting and recognizing SQL injection vulnerabilities based on defined and identified criteria. In addition, the proposed detection model can generate a report on the vulnerability level of the web application. As a consequence, the proposed model should decrease the possibility of SQL injection attacks being launched against the web application.
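To make the idea of criteria-based detection concrete, the sketch below scores an input value against a few illustrative SQL injection signatures and maps the score to a vulnerability level for the report. The patterns, severity weights, and thresholds are assumptions, not the criteria defined in the paper.

    import re

    # Illustrative detection criteria: (pattern, description, severity weight).
    SQLI_CRITERIA = [
        (re.compile(r"(')|(--)|(#)"), "quote / comment injection", 1),
        (re.compile(r"\b(or|and)\b\s+\d+\s*=\s*\d+", re.IGNORECASE), "tautology (e.g. OR 1=1)", 3),
        (re.compile(r"\bunion\b.+\bselect\b", re.IGNORECASE), "UNION-based injection", 4),
        (re.compile(r"\b(sleep|benchmark|waitfor)\b", re.IGNORECASE), "time-based blind injection", 4),
    ]

    def assess(value: str) -> dict:
        """Score one input value against the criteria and report a vulnerability level."""
        findings = [(desc, weight) for pattern, desc, weight in SQLI_CRITERIA if pattern.search(value)]
        score = sum(weight for _, weight in findings)
        level = "high" if score >= 4 else "medium" if score >= 2 else "low" if score else "none"
        return {"input": value, "findings": [desc for desc, _ in findings], "level": level}

    print(assess("admin' OR 1=1 --"))   # flagged as high
    print(assess("plain username"))     # level "none"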