Biblio

Filters: Keyword is Web pages
2023-04-14
Turnip, Togu Novriansyah, Aruan, Hotma, Siagian, Anita Lasmaria, Siagian, Leonardo.  2022.  Web Browser Extension Development of Structured Query Language Injection Vulnerability Detection Using Long Short-Term Memory Algorithm. 2022 IEEE International Conference of Computer Science and Information Technology (ICOSNIKOM). :1—5.
Structured Query Language Injection (SQLi) is a client-side application vulnerability that allows attackers to inject malicious SQL queries with harmful intents, including stealing sensitive information, bypassing authentication, and even executing illegal operations to cause more catastrophic damage to users of the web application. According to OWASP, SQL Injection is among the top 10 most harmful attacks against web applications, and based on data reports from the UK's National Fraud Authority, SQL Injection is responsible for 97% of data exposures. Therefore, in order to prevent SQL Injection attacks, an SQLi detection system is essential. The contribution of this research is securing web applications by developing a browser extension for Google Chrome using Long Short-Term Memory (LSTM), a special kind of RNN capable of learning long-term dependencies such as those found in SQL Injection payloads. The trained model is deployed for static analysis in the browser extension, and the LSTM learns to identify injectable URLs using Damn Vulnerable Web Application (DVWA) as the sample web application under test. Experimental results show that the proposed LSTM-based SQLi detection model achieves an accuracy rate of 99.97%, meaning a reliable client-side extension can effectively detect whether the URL being accessed contains an SQLi attack.
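As an illustration of the classification core (not the authors' exact architecture or extension code), a minimal character-level LSTM classifier for URLs might look like the following sketch; the vocabulary size, sequence length, and layer widths are assumptions.

```python
# Minimal sketch of a character-level LSTM classifier for SQLi-bearing URLs.
# Hyperparameters (VOCAB_SIZE, MAX_LEN, layer sizes) are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 128   # printable ASCII code points
MAX_LEN = 200      # URLs padded/truncated to a fixed length

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 32),       # character embeddings
    layers.LSTM(64),                        # captures long-range dependencies in the query string
    layers.Dense(1, activation="sigmoid"),  # P(URL carries an SQLi payload)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

def encode(url: str) -> list[int]:
    """Map a URL to a fixed-length sequence of ASCII code points."""
    codes = [min(ord(c), VOCAB_SIZE - 1) for c in url[:MAX_LEN]]
    return codes + [0] * (MAX_LEN - len(codes))

# model.fit(...) would follow, given labeled benign/malicious URLs (e.g., gathered via DVWA).
```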
Mingsheng, Xu, Chunxia, Li, Wenhui, Du.  2022.  Research and Development of Dual-Core Browser-Based Compatibility and Security. 2022 IEEE 8th International Conference on Computer and Communications (ICCC). :1697—1701.
To address the difficulties that enterprise employees encounter in daily work when web compatibility issues disrupt business systems, this paper designs and develops a dual-core secure browser, after summarizing the current state of multi-core browser development and the key difficulties and challenges in the field. Based on the Chromium open-source project, a dual-core auto-adaptation method is designed: first, dual-core encapsulation is implemented; next, the core auto-adaptation algorithm is studied; then, a cross-core cookie-sharing function is developed based on Hook technology. In addition, the browser's security is reinforced by a cookie manager, behavior-monitoring functions, and unified platform control, enhancing confidentiality and providing a safe and secure interface for employees' work and ubiquitous IoT access. While taking security into account, the browser meets the need for a single client compatible with all of the enterprise's business-system web pages, improving the user experience. Finally, possible future research directions in this field are summarized.
Umar, Mohammad, Ayyub, Shaheen.  2022.  Intrinsic Decision based Situation Reaction CAPTCHA for Better Turing Test. 2022 International Conference on Industry 4.0 Technology (I4Tech). :1–6.
In this modern era, web security must guard against fraudulent activities. Hackers build programs that interact with web pages automatically, attempting to breach data or make junk entries that can hang web servers. To stop junk entries, CAPTCHA is a solution through which bots can be identified and machine-based programs denied access. CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. Over the progression of CAPTCHA, several methods have appeared, such as distorted text, picture recognition, math solving, and game-based CAPTCHAs. Game-based Turing tests are popular nowadays, but because such games are not intellectual, several methods exist to crack them; hence, an intrinsic CAPTCHA is required. The proposed system is based on an Intrinsic Decision based Situation Reaction Challenge. It better classifies humans and bots through intrinsic problems: humans are far more capable of dealing with real-life situations, while machines are poor at understanding a situation and how to resolve it. The proposed system therefore poses simple situations that are easy for a human but almost impossible for bots; a human needs only common sense and can solve the problem within a few seconds.
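A toy sketch of the idea follows; the challenge bank and the exact-match verification rule are invented for illustration and do not reproduce the paper's challenge set.

```python
import random

# Hypothetical bank of situation-reaction challenges: answering requires everyday
# common sense, which scripted bots lack.
CHALLENGES = [
    ("You see smoke coming from the toaster. What should you unplug?",
     {"toaster", "the toaster"}),
    ("It starts raining on your walk home. What should you open?",
     {"umbrella", "an umbrella", "my umbrella"}),
]

def issue_challenge():
    """Pick a random situation-reaction question and its accepted answers."""
    return random.choice(CHALLENGES)

def verify(response: str, accepted: set[str]) -> bool:
    # Simple normalized string match; a real system would accept paraphrases.
    return response.strip().lower() in accepted

question, accepted = issue_challenge()
print(question)                      # shown to the user
# verify(user_response, accepted) then decides human vs. bot
```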
2023-02-03
Alkawaz, Mohammed Hazim, Joanne Steven, Stephanie, Mohammad, Omar Farook, Gapar Md Johar, Md.  2022.  Identification and Analysis of Phishing Website based on Machine Learning Methods. 2022 IEEE 12th Symposium on Computer Applications & Industrial Electronics (ISCAIE). :246–251.
People are increasingly sharing their details online as internet usage grows, so fraudsters have access to a massive amount of information and financial activity. Attackers create web pages that look like reputable sites and transmit malevolent content to victims to trick them into providing sensitive information. Prevailing phishing security measures are inadequate for detecting new phishing attacks. The first objective of this research is therefore to analyze and compare phishing and legitimate websites using data collected from open-source platforms through a survey; the second is to propose a method for detecting fake sites using Decision Tree and Random Forest approaches. Microsoft Forms was used to carry out the survey with 30 participants, the majority of whom have poor awareness of phishing attacks and do not observe the features of the interface before accessing a browser. Using the collected data, Decision Tree and Random Forest classifiers were trained and tested to identify the better phishing-website detector. In terms of feature-importance detection and accuracy rate, the results demonstrate that Random Forest outperforms Decision Tree in phishing website detection.
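A sketch of the classifier comparison with scikit-learn, using an invented toy feature set (the paper's survey-derived features are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy stand-in for the real dataset: four illustrative URL/interface features.
feature_names = ["url_length", "num_dots", "has_at_symbol", "uses_https"]
X = np.array([[54, 3, 0, 1], [120, 7, 1, 0], [30, 2, 0, 1], [95, 6, 1, 0]] * 25)
y = np.array([0, 1, 0, 1] * 25)  # 1 = phishing, 0 = legitimate

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, rf.predict(X_test)))

# Feature importances show which cues dominate the decision.
for name, score in zip(feature_names, rf.feature_importances_):
    print(f"{name}: {score:.3f}")
```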
Philomina, Josna, Fahim Fathima, K A, Gayathri, S, Elias, Glory Elizabeth, Menon, Abhinaya A.  2022.  A comparitative study of machine learning models for the detection of Phishing Websites. 2022 International Conference on Computing, Communication, Security and Intelligent Systems (IC3SIS). :1–7.
Global cybersecurity threats have grown as a result of the evolving digital transformation, which gives cybercriminals more opportunities. Initially, cyberthreats take the form of phishing in order to gain confidential user credentials. As cyber-attacks get more and more sophisticated, the cybersecurity industry faces the problem of utilising cutting-edge technology and techniques to combat ever-present hostile threats. Hackers use phishing to persuade customers to grant them access to a company's digital assets and networks; as technology progressed, phishing attempts became more sophisticated, necessitating tools to detect phishing. Machine learning is one of the most powerful weapons in the fight against these threats. This study discusses the features used for phishing detection and the machine learning approaches employed with them. In this light, the study's major goal is to propose a unique, robust ensemble machine learning model architecture that gives the highest prediction accuracy with the lowest error rate, while also recommending a few alternative robust machine learning models. The Random Forest algorithm alone attained a maximum accuracy of 96.454 percent, but a hybrid model combining three classifiers (Decision Tree, Random Forest, and Gradient Boosting) increases the accuracy to 98.4 percent.
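One plausible reading of the hybrid model is a soft-voting ensemble of the three named classifiers; the sketch below uses synthetic data and assumed hyperparameters, not the paper's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)  # synthetic stand-in
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

hybrid = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier(random_state=0)),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    voting="soft",  # average the three models' class probabilities
)
hybrid.fit(X_train, y_train)
print("hybrid accuracy:", hybrid.score(X_test, y_test))
```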
Zheng, Jiahui, Li, Junjian, Li, Chao, Li, Ran.  2022.  A SQL Blind Injection Method Based on Gated Recurrent Neural Network. 2022 7th IEEE International Conference on Data Science in Cyberspace (DSC). :519–525.
Security is undoubtedly the most serious problem for Web applications, and SQL injection (SQLi) attacks are among the most damaging. Detecting SQL blind injection vulnerabilities is very important but, unfortunately, slow: time-based SQL blind injection lacks web page feedback, so a delay function must be set artificially and success judged by observing the page's response time. Moreover, the brute-force and binary-search methods used during injection require many web requests, so obtaining database information through SQL blind injection takes a long time. This paper proposes a gated recurrent neural network-based SQL blind injection technique to generate the predicted characters in SQL blind injection. Using a deep-learning-based neural language model and character sequence prediction, the method learns the regularities of common database information, predicts the next possible character from the database information obtained so far, and ranks candidates by probability. The trained model is evaluated, and experiments on a cyber range compare the proposed method with sqlmap (currently the most advanced SQLi test-automation tool). The experimental results show that the proposed method significantly outperforms sqlmap in time-based SQL blind injection: it obtains the target site's database information with fewer requests and runs faster.
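The character-prediction core could be sketched as follows; the charset, context window, and layer sizes are assumptions, and the paper's training corpus of database identifiers is not shown.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

CHARSET = "abcdefghijklmnopqrstuvwxyz0123456789_"   # assumed identifier alphabet
IDX = {c: i for i, c in enumerate(CHARSET)}
WINDOW = 8                                           # characters of context

model = tf.keras.Sequential([
    layers.Embedding(len(CHARSET), 16),
    layers.GRU(128),
    layers.Dense(len(CHARSET), activation="softmax"),  # distribution over the next character
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

def ranked_next_chars(prefix: str):
    """Rank candidate next characters by model probability (after training)."""
    x = np.array([[IDX[c] for c in prefix[-WINDOW:]]])
    probs = model.predict(x, verbose=0)[0]
    return sorted(zip(CHARSET, probs), key=lambda p: -p[1])

# Trying high-probability characters first shrinks the number of blind-injection
# requests compared with exhaustive brute force or plain binary search.
```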
2022-12-20
Song, Suhwan, Hur, Jaewon, Kim, Sunwoo, Rogers, Philip, Lee, Byoungyoung.  2022.  R2Z2: Detecting Rendering Regressions in Web Browsers through Differential Fuzz Testing. 2022 IEEE/ACM 44th International Conference on Software Engineering (ICSE). :1818–1829.
A rendering regression is a bug introduced by a web browser where a web page no longer functions as users expect. Such rendering bugs critically harm the usability of web browsers as well as web applications. The unique aspect of rendering bugs is that they affect the presented visual appearance of web pages, but those web pages have no pre-defined correct appearance. Therefore, it is challenging to automatically detect errors in their appearance. In practice, web browser vendors rely on non-trivial and time-prohibitive manual analysis to detect and handle rendering regressions. This paper proposes R2Z2, an automated tool to find rendering regressions. R2Z2 uses the differential fuzz testing approach, which repeatedly compares the rendering results of two different versions of a browser while providing the same HTML as input. If the rendering results are different, R2Z2 further performs cross browser compatibility testing to check if the rendering difference is indeed a rendering regression. After identifying a rendering regression, R2Z2 will perform an in-depth analysis to aid in fixing the regression. Specifically, R2Z2 performs a delta-debugging-like analysis to pinpoint the exact browser source code commit causing the regression, as well as inspecting the rendering pipeline stages to pinpoint which pipeline stage is responsible. We implemented a prototype of R2Z2 particularly targeting the Chrome browser. So far, R2Z2 found 11 previously undiscovered rendering regressions in Chrome, all of which were confirmed by the Chrome developers. Importantly, in each case, R2Z2 correctly reported the culprit commit. Moreover, R2Z2 correctly pinpointed the culprit rendering pipeline stage in all but one case.
ISSN: 1558-1225
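The differential core reduces to comparing renderings of the same HTML input; a minimal sketch (assuming screenshots have already been captured, e.g., with two headless Chrome builds) might be:

```python
from PIL import Image, ImageChops  # pip install pillow

def rendering_differs(shot_a: str, shot_b: str) -> bool:
    """Pixel-compare screenshots of one HTML input rendered by two browser versions."""
    a = Image.open(shot_a).convert("RGB")
    b = Image.open(shot_b).convert("RGB")
    if a.size != b.size:
        return True
    # getbbox() returns None when the difference image is entirely black (identical).
    return ImageChops.difference(a, b).getbbox() is not None

# A difference is only a *candidate* regression; R2Z2 additionally runs
# cross-browser checks to filter out intentional rendering changes.
```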
2022-10-13
Barlow, Luke, Bendiab, Gueltoum, Shiaeles, Stavros, Savage, Nick.  2020.  A Novel Approach to Detect Phishing Attacks using Binary Visualisation and Machine Learning. 2020 IEEE World Congress on Services (SERVICES). :177—182.
Protecting sensitive data and preventing it from being used inappropriately has become a challenging task. Even a small mistake in securing data can be exploited by phishing attacks to release private information such as passwords or financial details to a malicious actor. Phishing has proven so successful that it is now the number-one attack vector. Many approaches have been proposed to protect against this type of cyber-attack, from additional staff training and enriched spam filters to large collaborative databases of known threats such as PhishTank and OpenPhish. However, these mostly rely upon a user falling victim to an attack and manually adding the new threat to the shared pool, a constant disadvantage in the fight against phishing. In this paper, we propose a novel approach to protect against phishing attacks using binary visualisation and machine learning. Unlike previous work in this field, our approach uses an automated detection process and requires no further user interaction, allowing a faster and more accurate detection process. The experiment results show that our approach has a high detection rate.
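The binary-visualisation step can be illustrated by mapping a page's raw bytes onto an image; the row-major grayscale mapping below is a simplification for illustration, not a reproduction of the paper's scheme.

```python
import numpy as np
from PIL import Image

def bytes_to_image(data: bytes, width: int = 64) -> Image.Image:
    """Lay file/page bytes out as a grayscale grid for an image classifier."""
    arr = np.frombuffer(data, dtype=np.uint8)
    height = -(-len(arr) // width)                 # ceiling division
    grid = np.zeros(height * width, dtype=np.uint8)
    grid[: len(arr)] = arr
    return Image.fromarray(grid.reshape(height, width), mode="L")

img = bytes_to_image(b"<html><body><form action='login.php'>...</form></body></html>")
img.save("page.png")  # fed to a model trained on phishing vs. legitimate page images
```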
Singh, Shweta, Singh, M.P., Pandey, Ramprakash.  2020.  Phishing Detection from URLs Using Deep Learning Approach. 2020 5th International Conference on Computing, Communication and Security (ICCCS). :1—4.
Today, the Internet spans the globe, and people all over the world prefer e-commerce platforms to buy or sell their products. Cybercrime has therefore become the center of attraction for attackers in cyberspace. Phishing is one such technique, in which the unidentified structure of the Internet is used by attackers/criminals intending to deceive users with illusory websites and emails to obtain their credentials (such as account numbers, passwords, and PINs). Consequently, identifying a phishing or legitimate web page is a challenging issue due to its semantic structure. In this paper, a phishing detection system is implemented using deep learning techniques to prevent such attacks. The system works on URLs, applying a convolutional neural network (CNN) to detect phishing web pages. The model proposed in [19] achieved 97.98% accuracy, whereas our proposed system achieves 98.00%, improving on the earlier model. The system also requires no feature engineering, as the CNN extracts features from the URLs automatically through its hidden layers; this is another advantage over the system reported in [19], since feature engineering is a very time-consuming task.
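A minimal character-level CNN over URLs in the spirit of the paper (layer sizes and vocabulary are assumptions), showing why no manual feature engineering is needed:

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, MAX_LEN = 128, 200  # assumed ASCII vocabulary and padded URL length

model = tf.keras.Sequential([
    layers.Embedding(VOCAB, 32),
    layers.Conv1D(64, kernel_size=5, activation="relu"),  # learns n-gram-like URL motifs
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),                # P(phishing)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()  # the convolutional filters replace hand-crafted URL features
```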
A.A., Athulya, K., Praveen.  2020.  Towards the Detection of Phishing Attacks. 2020 4th International Conference on Trends in Electronics and Informatics (ICOEI)(48184). :337—343.
Phishing is the act of creating a website similar to a legitimate website with the motive of stealing a user's confidential information. Phishing fraud may be the most popular cybercrime: it is a risk that originated years ago but is still prevalent. This paper discusses various phishing attacks, some of the latest phishing evasion techniques used by attackers, and anti-phishing approaches. The review raises awareness of those phishing strategies and helps users practice phishing prevention. A hybrid approach to phishing detection, with fast response time and high accuracy, is also described.
2022-10-12
Faris, Humam, Yazid, Setiadi.  2021.  Phishing Web Page Detection Methods: URL and HTML Features Detection. 2020 IEEE International Conference on Internet of Things and Intelligence System (IoTaIS). :167—171.
Phishing is a type of Internet fraud in the form of fake web pages that mimic original web pages to trick users into sending sensitive information to the phisher. Statistics presented by APWG and PhishTank show that the number of phishing websites tended to increase continuously from 2015 to 2020. To overcome this problem, several studies have detected phishing web pages using various web page features and methods. Unfortunately, many of these methods are not really effective, because their design and evaluation focus only on achieving detection accuracy in research and do not represent real-world application, whereas a security detection tool should be effective, performant, and deployable. In this study the authors evaluate several methods and propose a rules-based application that can detect phishing more efficiently.
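A few classic, cheap URL rules of the kind such a deployable detector might apply; this list is illustrative, not the authors' rule set.

```python
import re
from urllib.parse import urlparse

def url_flags(url: str) -> dict:
    """Lightweight URL heuristics; each True flag raises phishing suspicion."""
    host = urlparse(url).netloc
    return {
        "ip_as_host": bool(re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}(:\d+)?", host)),
        "at_symbol": "@" in url,                 # can hide the real destination
        "overlong": len(url) > 75,
        "many_subdomains": host.count(".") > 3,  # e.g., paypal.com.secure.example.tld
        "no_https": not url.startswith("https://"),
    }

print(url_flags("http://192.168.0.1/login@secure-bank.example/verify"))
```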
2022-09-09
Saini, Anu, Sri, Manepalli Ratna, Thakur, Mansi.  2021.  Intrinsic Plagiarism Detection System Using Stylometric Features and DBSCAN. 2021 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS). :13—18.
Plagiarism is the act of using someone else's words or ideas without giving them due credit and representing them as one's own work. In today's world it is very easy to plagiarize others' work due to advancements in technology, especially the Internet, as well as offline sources such as books or magazines. Plagiarism detection can be classified into two broad categories: extrinsic and intrinsic. Extrinsic plagiarism detection compares a document against a given reference dataset, whereas intrinsic plagiarism detection relies on variation in writing style without using any reference corpus. Although many approaches exist for detecting extrinsic plagiarism, few are available for intrinsic plagiarism detection. In this paper, a simplified approach is proposed for developing an intrinsic plagiarism detector that works even when no reference corpus is available. The approach identifies the writing styles of authors in a document using stylometric features and Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The proposed system has an easy-to-use interactive interface: the user uploads a text document to be checked for plagiarism, the result is displayed on the web page itself, and the user can also see an analysis of the document in the form of graphs.
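A condensed sketch of the pipeline: per-sentence stylometric vectors clustered with DBSCAN, with noise points flagged as style outliers. The three features and the eps/min_samples values are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

def style_vector(sentence: str) -> list[float]:
    """Tiny stylometric profile: length, mean word length, punctuation habit."""
    words = sentence.split()
    return [
        float(len(words)),
        float(np.mean([len(w) for w in words])) if words else 0.0,
        float(sum(ch in ",;:" for ch in sentence)),
    ]

sentences = [
    "The cat sat quietly.", "The dog barked loudly.", "Birds sang at dawn.",
    "Notwithstanding the aforementioned considerations; the epistemological ramifications persist.",
]
X = StandardScaler().fit_transform([style_vector(s) for s in sentences])
labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(X)
# Label -1 marks sentences whose style deviates from the document's dominant author.
print([s for s, lab in zip(sentences, labels) if lab == -1])
```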
2022-04-18
Kang, Ji, Sun, Yi, Xie, Hui, Zhu, Xixi, Ding, Zhaoyun.  2021.  Analysis System for Security Situation in Cyberspace Based on Knowledge Graph. 2021 7th International Conference on Big Data and Information Analytics (BigDIA). :385–392.
With the booming of Internet technology, the continuous emergence of new technologies and algorithms greatly expands the application boundaries of cyberspace. While enjoying the convenience brought by informatization, society also faces increasingly severe threats to cyberspace security. In cyber security defense, cyberspace operators rely on discovered vulnerabilities, attack patterns, TTPs, and other knowledge to observe, analyze, and determine the current threats and security situation in cyberspace, and then make corresponding decisions. However, most such open-source knowledge is distributed across different data sources as text or web pages, which hampers understanding, querying, and correlation analysis by cyberspace operators. In this paper, a knowledge graph for cyber security is constructed to solve this problem. First, in obtaining security data from multi-source heterogeneous cyberspaces, we adopt an efficient crawler to collect the required data, paving the way for knowledge graph building. To establish the ontology required by the knowledge graph, we abstract the overall framework of security data sources in cyberspace and depict in detail the correlations among the various data sources. Then, based on the OWL + SWRL language, we construct the cyber security knowledge graph. On this basis, we design an analysis system for the situation in cyberspace based on the knowledge graph and the Snort intrusion detection system (IDS), and study the rules in Snort. The system integrates and links various public resources from the Internet, including key information such as general platforms, vulnerabilities, weaknesses, attack patterns, tactics, and techniques in real cyberspace, providing comprehensive, systematic, and rich cyber security knowledge to security researchers and professionals, with the expectation of offering a useful reference for cyber security defense.
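A toy fragment showing the linking idea with networkx; the node identifiers and relations below are illustrative examples, not data from the paper's graph.

```python
import networkx as nx

g = nx.DiGraph()
# Illustrative edges tying an IDS rule to vulnerability, weakness, and attack-pattern context.
g.add_edge("snort:sid-58722", "CVE-2021-44228", relation="detects")
g.add_edge("CVE-2021-44228", "CWE-502", relation="has_weakness")
g.add_edge("CAPEC-586", "CWE-502", relation="exploits")

# When Snort raises the rule, walk the graph to enrich the alert with context.
alert = "snort:sid-58722"
for _, cve, data in g.out_edges(alert, data=True):
    print(f"{alert} -{data['relation']}-> {cve}")
    for _, cwe, d2 in g.out_edges(cve, data=True):
        print(f"  {cve} -{d2['relation']}-> {cwe}")
```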
2022-04-12
Evangelatos, Pavlos, Iliou, Christos, Mavropoulos, Thanassis, Apostolou, Konstantinos, Tsikrika, Theodora, Vrochidis, Stefanos, Kompatsiaris, Ioannis.  2021.  Named Entity Recognition in Cyber Threat Intelligence Using Transformer-based Models. 2021 IEEE International Conference on Cyber Security and Resilience (CSR). :348—353.
The continuous increase in sophistication of threat actors over the years has made the use of actionable threat intelligence a critical part of the defence against them. Such Cyber Threat Intelligence is published daily on several online sources, including vulnerability databases, CERT feeds, and social media, as well as on forums and web pages from the Surface and the Dark Web. Named Entity Recognition (NER) techniques can be used to extract the aforementioned information in an actionable form from such sources. In this paper we investigate how the latest advances in the NER domain, and in particular transformer-based models, can facilitate this process. To this end, the dataset for NER in Threat Intelligence (DNRTI), containing more than 300 threat intelligence reports from open-source threat intelligence websites, is used. Our experimental results demonstrate that transformer-based techniques are very effective in extracting cybersecurity-related named entities, considerably outperforming the previous state-of-the-art approaches tested with DNRTI.
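The extraction step can be sketched with the transformers pipeline API; the public general-domain checkpoint below is assumed as a stand-in for a model fine-tuned on DNRTI.

```python
from transformers import pipeline

# dslim/bert-base-NER is a general-purpose NER checkpoint used here as a stand-in;
# the paper fine-tunes transformer models on the DNRTI threat-intelligence dataset.
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

report = ("APT29 delivered Cobalt Strike beacons via spear-phishing emails "
          "targeting government agencies in Europe.")
for ent in ner(report):
    print(ent["entity_group"], "|", ent["word"], "|", round(float(ent["score"]), 2))
```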
Dalvi, Ashwini, Siddavatam, Irfan, Thakkar, Viraj, Jain, Apoorva, Kazi, Faruk, Bhirud, Sunil.  2021.  Link Harvesting on the Dark Web. 2021 IEEE Bombay Section Signature Conference (IBSSC). :1—5.
In this information age, web crawling is a prime source for data collection on the internet. With the surface web already dominated by giants like Google and Microsoft, much attention has turned to the Dark Web. While research on crawling approaches is generally available, a considerable gap exists for URL extraction on the dark web. Most literature uses regular expressions or built-in parsers; the problem with these methods is the higher number of false positives generated on the Dark Web, which makes the crawler less efficient. This paper proposes a dedicated-parsers methodology for extracting URLs from the dark web, which proves better than the regular expression methodology when the two are compared. Factors that make link harvesting on the Dark Web a challenge are also discussed.
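A toy comparison of the two methodologies (the onion addresses are invented and shortened; real v3 addresses are 56 base32 characters):

```python
import re
from bs4 import BeautifulSoup  # pip install beautifulsoup4

html = (
    '<a href="http://exampleaddr234abcdefgh.onion/market">enter</a>'
    "<p>do not visit: brokenexampleaddr567zzz.onion, it is a scam</p>"
)

# Regex over raw text grabs non-hyperlinked mentions and trailing punctuation/markup.
regex_hits = re.findall(r"[a-z2-7]{16,56}\.onion\S*", html)

# A dedicated parser harvests only real anchors, cutting false positives.
soup = BeautifulSoup(html, "html.parser")
parser_hits = [a["href"] for a in soup.find_all("a", href=True) if ".onion" in a["href"]]

print("regex:", regex_hits)    # noisy: prose mention plus trailing junk
print("parser:", parser_hits)  # precise: only the actual link
```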
Nair, Viswajit Vinod, van Staalduinen, Mark, Oosterman, Dion T..  2021.  Template Clustering for the Foundational Analysis of the Dark Web. 2021 IEEE International Conference on Big Data (Big Data). :2542—2549.
The rapid rise of the Dark Web and its supporting technologies has served as the backbone facilitating online illegal activity worldwide. These illegal activities, supported by anonymisation technologies such as Tor, have become increasingly elusive to law enforcement agencies, and despite several successful law enforcement operations, illegal activity on the Dark Web is still growing. There are approaches to monitor, mine, and research the Dark Web, all with varying degrees of success. Given the complexity and dynamics of the services offered, we recognize the need for in-depth analysis of the Dark Web with regard to its infrastructures, actors, types of abuse, and their relationships. This involves the challenging task of information extraction from the very heterogeneous collection of web pages that make up the Dark Web. Most providers build their services on top of standard frameworks such as WordPress, Simple Machines Forum, and phpBB, and as a result publish significant numbers of pages based on similar structural and stylistic templates. We propose an efficient, scalable, repeatable, and accurate approach to cluster Dark Web pages based on those structural and stylistic features. Extracting relevant information from those clusters should make in-depth Dark Web analysis feasible. This paper presents our clustering algorithm to accelerate information extraction and, as a result, improve the attribution of digital traces to infrastructures or individuals in the fight against cyber crime.
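The structural idea can be sketched by reducing each page to its tag sequence and clustering n-gram counts of that sequence; the toy pages and the choice of agglomerative clustering are assumptions, as the paper's feature set and algorithm are richer.

```python
from bs4 import BeautifulSoup
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import CountVectorizer

def tag_sequence(html: str) -> str:
    """Strip content, keep structure: the ordered sequence of tag names."""
    return " ".join(t.name for t in BeautifulSoup(html, "html.parser").find_all(True))

pages = [
    "<html><body><div><h1>Shop A</h1><p>item</p></div></body></html>",
    "<html><body><div><h1>Shop B</h1><p>other item</p></div></body></html>",  # same template
    "<html><body><table><tr><td>forum post</td></tr></table></body></html>",
]
X = CountVectorizer(ngram_range=(1, 2)).fit_transform(map(tag_sequence, pages))
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X.toarray())
print(labels)  # the two template-sharing pages get one label; the forum page stands apart
```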
Mahor, Vinod, Rawat, Romil, Kumar, Anil, Chouhan, Mukesh, Shaw, Rabindra Nath, Ghosh, Ankush.  2021.  Cyber Warfare Threat Categorization on CPS by Dark Web Terrorist. 2021 IEEE 4th International Conference on Computing, Power and Communication Technologies (GUCON). :1—6.
The Industrial Internet of Things (IIoT), also referred to as Cyber-Physical Systems (CPS), comprises critical elements expected to play a key role in Industry 4.0, and it has always been vulnerable to cyber-attacks. Terrorists use cyber vulnerabilities as weapons for mass destruction, and the dark web's strong anonymity and hard-to-track systems offer a safe haven for criminal activity. A wide variety of illicit material is posted regularly on the dark web (DW). Traditional DW categorization uses large-scale web pages for supervised training, but new research is hampered by the difficulty of gathering sufficient illicit DW material and the time spent manually tagging web pages. We propose a system for accurately classifying criminal activity on the DW. Rather than depending on a vast DW training corpus, we used authoritative regulatory descriptions of various types of illicit activity to train Machine Learning (ML) classifiers and obtained appreciable categorization results. Espionage, sabotage, attacks on the electrical power grid, propaganda, and economic disruption are the cyber warfare motivations considered; we chose appropriate data from open-source links for supervised learning and ran a categorization experiment on illicit material obtained from the actual DW. The results show that, in the experimental setting, using TF-IDF feature extraction and an AdaBoost classifier, we achieved an accuracy of 0.942. Our method enables researchers and regulatory agencies to verify whether their DW corpus includes such illicit activity under the applicable rules for the illicit categories they are interested in, allowing them to identify and track possibly illicit websites in real time. Because a broad training set and expert-supplied seed keywords are not required, this categorization approach offers another option for identifying illicit activities on the DW.
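The reported pipeline, TF-IDF features into an AdaBoost classifier, is straightforward to sketch with scikit-learn; the four training snippets and their labels are invented placeholders.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

texts = [
    "scada credentials for power substation access",    # invented examples
    "handmade candles free shipping this week",
    "firmware exploit disables grid protection relays",
    "vintage vinyl records for collectors",
]
labels = ["sabotage", "benign", "sabotage", "benign"]

clf = make_pipeline(TfidfVectorizer(), AdaBoostClassifier(n_estimators=100))
clf.fit(texts, labels)
print(clf.predict(["selling substation control exploit kit"]))  # expected: ['sabotage']
```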
2022-01-31
Sjösten, Alexander, Hedin, Daniel, Sabelfeld, Andrei.  2021.  EssentialFP: Exposing the Essence of Browser Fingerprinting. 2021 IEEE European Symposium on Security and Privacy Workshops (EuroS PW). :32—48.
Web pages aggressively track users for a variety of purposes from targeted advertisements to enhanced authentication. As browsers move to restrict traditional cookie-based tracking, web pages increasingly move to tracking based on browser fingerprinting. Unfortunately, the state-of-the-art to detect fingerprinting in browsers is often error-prone, resorting to imprecise heuristics and crowd-sourced filter lists. This paper presents EssentialFP, a principled approach to detecting fingerprinting on the web. We argue that the pattern of (i) gathering information from a wide browser API surface (multiple browser-specific sources) and (ii) communicating the information to the network (network sink) captures the essence of fingerprinting. This pattern enables us to clearly distinguish fingerprinting from similar types of scripts like analytics and polyfills. We demonstrate that information flow tracking is an excellent fit for exposing this pattern. To implement EssentialFP we leverage, extend, and deploy JSFlow, a state-of-the-art information flow tracker for JavaScript, in a browser. We illustrate the effectiveness of EssentialFP to spot fingerprinting on the web by evaluating it on two categories of web pages: one where the web pages perform analytics, use polyfills, and show ads, and one where the web pages perform authentication, bot detection, and fingerprinting-enhanced Alexa top pages.
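A toy rendering of the source-to-sink pattern follows; EssentialFP itself is built on JSFlow information-flow tracking inside the browser, so this Python stand-in merely scores hypothetical recorded API-to-sink events.

```python
# Hypothetical per-script trace of (browser_api_read, data_sink) events,
# of the kind an information flow tracker might record.
FP_SOURCES = {"navigator.userAgent", "navigator.plugins", "screen.width",
              "canvas.toDataURL", "AudioContext.createOscillator"}

def looks_like_fingerprinting(events, min_sources: int = 3) -> bool:
    """Flag scripts that funnel many distinct browser-API reads to the network."""
    leaked = {api for api, sink in events if api in FP_SOURCES and sink == "network"}
    return len(leaked) >= min_sources

trace = [("navigator.userAgent", "network"), ("screen.width", "network"),
         ("canvas.toDataURL", "network"), ("document.title", "dom")]
print(looks_like_fingerprinting(trace))  # True: wide API surface flowing to a network sink
```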
Squarcina, Marco, Calzavara, Stefano, Maffei, Matteo.  2021.  The Remote on the Local: Exacerbating Web Attacks Via Service Workers Caches. 2021 IEEE Security and Privacy Workshops (SPW). :432—443.
Service workers boost the user experience of modern web applications by taking advantage of the Cache API to improve responsiveness and support offline usage. In this paper, we present the first security analysis of the threats posed by this programming practice, identifying an attack with major security implications. In particular, we show how a traditional XSS attack can abuse the Cache API to escalate into a person-in-the-middle attack against cached content, thus compromising its confidentiality and integrity. Remarkably, this attack enables new threats which are beyond the scope of traditional XSS. After defining the attack, we study its prevalence in the wild, finding that the large majority of the sites which register service workers using the Cache API are vulnerable as long as a single webpage in the same origin of the service worker is affected by an XSS. Finally, we propose a browser-side countermeasure against this attack, and we analyze its effectiveness and practicality in terms of security benefits and backward compatibility with existing web applications.
Mabe, Abigail, Nelson, Michael L., Weigle, Michele C..  2021.  Extending Chromium: Memento-Aware Browser. 2021 ACM/IEEE Joint Conference on Digital Libraries (JCDL). :310—311.
Users rely on their web browser to provide information about the websites they are visiting, such as the security state of the web page they are viewing. Current browsers do not differentiate between the live Web and the past Web: if a user loads an archived web page, known as a memento, they have to rely on user interface (UI) elements within the page itself to learn that the page they are viewing is not the live Web. Memento-awareness extends beyond recognizing a page that has already been archived; the browser should also give users the ability to easily archive live web pages as they browse. This report presents a proof-of-concept memento-aware browser created by extending Google's open-source web browser Chromium.
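Protocol-wise, a memento announces itself via the Memento-Datetime HTTP response header (RFC 7089), which is what a memento-aware browser can key on; a minimal check:

```python
import requests

def is_memento(url: str) -> bool:
    """True if the response carries the Memento-Datetime header (RFC 7089)."""
    resp = requests.head(url, allow_redirects=True, timeout=10)
    return "Memento-Datetime" in resp.headers

# An Internet Archive capture is served with Memento-Datetime; a live page is not.
print(is_memento("https://web.archive.org/web/20210101000000/https://example.com/"))
print(is_memento("https://example.com/"))
```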
Varshney, Gaurav, Shah, Naman.  2021.  A DNS Security Policy for Timely Detection of Malicious Modification on Webpages. 2021 28th International Conference on Telecommunications (ICT). :1—5.
End users consider the data available through the web to be unmodified. Yet even when the web is secured by HTTPS, data can be tampered with in numerous tactical ways, reducing trust in the integrity of data at the client's end. One way web pages can be modified is via client-side browser extensions, which can transparently modify web pages at the client's end and inject new data into them with minimal permissions. Clever modifications include the addition of fake news, a fake advertisement, or a link to a phishing website. We have identified through experimentation that such attacks are possible and have potential for serious damage. To prevent and detect such modifications, we present a novel domain-expressiveness-based approach that uses DNS (Domain Name System) TXT records to express the hash of important web pages, which browsers verify in order to detect and thwart modifications to the content launched via client-side malicious browser extensions or via cross-site scripting. Initial experimentation suggests that the technique has the potential to be used and deployed.
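A sketch of the verification side with dnspython; the `_pagehash` record naming and the hex hash encoding are assumptions for illustration, not the paper's exact policy format.

```python
import hashlib
import requests
import dns.resolver  # pip install dnspython

def page_matches_dns(url: str, txt_name: str) -> bool:
    """Compare the SHA-256 of the fetched page with hashes published in DNS TXT."""
    body = requests.get(url, timeout=10).content
    digest = hashlib.sha256(body).hexdigest()
    published = {
        part.decode()
        for rdata in dns.resolver.resolve(txt_name, "TXT")
        for part in rdata.strings
    }
    return digest in published

# e.g.: page_matches_dns("https://example.com/index.html", "_pagehash.example.com")
# A browser performing an equivalent check on its rendered content could flag
# pages mutated by a malicious extension.
```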
2021-06-01
Abhinav, P Y, Bhat, Avakash, Joseph, Christina Terese, Chandrasekaran, K.  2020.  Concurrency Analysis of Go and Java. 2020 5th International Conference on Computing, Communication and Security (ICCCS). :1—6.
There has been tremendous progress in the past few decades towards developing applications that receive and send data concurrently, and in such an age there is a requirement for a language that performs optimally in these environments. Currently, the two most popular languages in that respect are Go and Java. In this paper, we analyze the concurrency features of Go and Java through a complete programming-language performance analysis, looking at their compile times, run times, binary sizes, and unique concurrency features. This is done by experimenting with the two languages using the matrix multiplication and PageRank algorithms; to the best of our knowledge, this is the first work to use the PageRank algorithm to analyse concurrency. Considering the results of this paper, application developers and researchers can hypothesize about an appropriate language for their concurrent programming activity. Results show that Go performs better for smaller numbers of computations but is soon overtaken by Java as the number of computations increases drastically; the trend reverses when thread creation and management are considered, where Java performs better with fewer computations but Go does better later on. Regarding concurrency features, both Java, with its Executor Service library, and Go have their own advantages that make them better suited to specific applications.
2021-04-09
Mir, N., Khan, M. A. U..  2020.  Copyright Protection for Online Text Information : Using Watermarking and Cryptography. 2020 3rd International Conference on Computer Applications Information Security (ICCAIS). :1—4.
Information and security are interdependent elements, and information security has evolved into a matter of global interest; achieving it requires tools, policies, and assurance of technologies against relevant security risks. The influx of the Internet, while providing a flexible and economical means of sharing information online, has rapidly attracted countless writers. Text being an important constituent of online information sharing, there is huge demand for intellectual copyright protection of text and of the web itself. Various visible watermarking techniques have been studied for text documents, but few for web-based text. In this paper, web page watermarking and cryptography for online content copyright protection are proposed, utilizing semantic and syntactic rules of HTML (Hypertext Markup Language), and tested for the English and Arabic languages.
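As one simple illustration of invisible text watermarking (a zero-width-character scheme chosen for brevity; the paper's actual method combines HTML semantic and syntactic rules with cryptography):

```python
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed(text: str, owner_id: str) -> str:
    """Append an owner identifier encoded as invisible zero-width characters."""
    bits = "".join(f"{ord(c):08b}" for c in owner_id)
    return text + "".join(ZW0 if bit == "0" else ZW1 for bit in bits)

def extract(marked: str) -> str:
    """Recover the hidden identifier from the zero-width characters."""
    bits = "".join("0" if c == ZW0 else "1" for c in marked if c in (ZW0, ZW1))
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

marked = embed("Copyright-protected article text.", "AUTHOR-42")
print(marked == "Copyright-protected article text.")  # False, yet visually identical
print(extract(marked))                                # AUTHOR-42
```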
2021-02-10
Anagandula, K., Zavarsky, P..  2020.  An Analysis of Effectiveness of Black-Box Web Application Scanners in Detection of Stored SQL Injection and Stored XSS Vulnerabilities. 2020 3rd International Conference on Data Intelligence and Security (ICDIS). :40—48.
Black-box web application scanners are used to detect vulnerabilities in a web application without any knowledge of its source code. Recent research has shown their poor performance in detecting stored Cross-Site Scripting (XSS) and stored SQL Injection (SQLI). This paper analyzes the detection efficiency of four black-box scanners on two testbeds, WackoPicko and the custom testbed Scanit (obtained from [5]). The analysis shows that the scanners need to be improved for better detection of multi-step stored XSS and stored SQLI. The study examines the interaction between the selected scanners and the web application to measure how efficiently they insert proper attack vectors in appropriate fields. The results indicate that there is not much difference in performance between the open-source and commercial black-box scanners used in this research, although the choice may depend on the policies and trust requirements of the companies using them. Possible recommendations are provided to improve the detection rate of stored SQLI and stored XSS vulnerabilities. The study concludes that the 2020 state of the art in automated black-box web application scanning needs improvement to detect stored XSS and stored SQLI more effectively.
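For intuition, detecting stored XSS requires a multi-step probe: inject on one page, then observe another. A minimal sketch against a hypothetical testbed (the base URL, endpoints, and form field are invented):

```python
import requests

BASE = "http://testbed.local"  # hypothetical vulnerable testbed
MARKER = "<script>alert('xss-probe-9f2c')</script>"  # unique, searchable payload

with requests.Session() as s:
    # Step 1: store the payload via an input field (e.g., a guestbook comment).
    s.post(f"{BASE}/guestbook", data={"comment": MARKER}, timeout=10)
    # Step 2: fetch the *other* page where stored content is rendered.
    shown = s.get(f"{BASE}/guestbook/view", timeout=10).text
    print("stored XSS suspected" if MARKER in shown else "payload escaped or dropped")
```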