Biblio

Filters: Keyword is web security
2023-05-11
Qbea'h, Mohammad, Alrabaee, Saed, Alshraideh, Mohammad, Sabri, Khair Eddin.  2022.  Diverse Approaches Have Been Presented To Mitigate SQL Injection Attack, But It Is Still Alive: A Review. 2022 International Conference on Computer and Applications (ICCA). :1–5.
The volume of stored and transferred data is expanding rapidly, so managing and securing the many diverse applications that handle it should be a high priority. The Structured Query Language Injection Attack (SQLIA) remains one of the most common and dangerous threats in the world. A large number of approaches and models, most based on static, dynamic, hybrid, or machine-learning techniques, have been presented to mitigate, detect, or prevent SQL injection, yet the attack is still alive: several studies from 2021 rank it as the highest risk among web application security threats. In this paper, we review the latest research on SQL injection attacks and their types, and survey the most recent techniques, models, and approaches used to mitigate, detect, or prevent this dangerous class of attack. We then explain the weaknesses of these techniques and highlight the critical points they miss. We conclude that more effort is still needed to build a genuinely novel and comprehensive solution able to cover all kinds of malicious SQL commands. Finally, we provide guidelines for mitigating such attacks, which we believe will help developers, decision makers, researchers, and even governments innovate solutions in future research to stop SQLIA.
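The baseline mitigation that reviews like this one consistently recommend is parameterized queries, which keep attacker-controlled input from ever being interpreted as SQL syntax. A minimal sketch using Python's standard `sqlite3` module (the table and values are hypothetical examples, not from the paper):

```python
import sqlite3

# In-memory demo database with one hypothetical row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is concatenated into the query.
    return conn.execute(
        "SELECT role FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(name):
    # Safe: the driver binds the value as data, never as SQL syntax.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # tautology injection returns every row
print(find_user_safe(payload))    # returns no rows: payload treated as data
```

The same classic `' OR '1'='1` payload dumps the table through the concatenated query but matches nothing when bound as a parameter.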
2022-12-20
Hassanshahi, Behnaz, Lee, Hyunjun, Krishnan, Paddy.  2022.  Gelato: Feedback-driven and Guided Security Analysis of Client-side Web Applications. 2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER). :618–629.
Modern web applications are getting more sophisticated by using frameworks that make development easy but pose challenges for security analysis tools. New analysis techniques are needed to handle such frameworks as they grow in number and popularity. In this paper, we describe Gelato, which addresses the most crucial challenges for a security-aware client-side analysis of highly dynamic web applications. In particular, we use a feedback-driven and state-aware crawler that can analyze complex framework-based applications automatically and is guided to maximize coverage of security-sensitive parts of the program. Moreover, we propose a new lightweight client-side taint analysis that outperforms state-of-the-art tools, requires no modification to browsers, and reports non-trivial taint flows on modern JavaScript applications. Gelato reports vulnerabilities with higher accuracy than existing tools and achieves significantly better coverage on 12 applications, three of which are used in production.
ISSN: 1534-5351
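Gelato's taint analysis targets JavaScript in the browser; as a language-neutral illustration of the underlying idea, the sketch below marks attacker-controlled strings as tainted, propagates the taint through concatenation, and rejects tainted values at a security-sensitive sink. All names here are hypothetical, and this is far simpler than any real taint engine:

```python
class Tainted(str):
    """A string marked as attacker-controlled; taint survives concatenation."""
    def __add__(self, other):
        return Tainted(str.__add__(self, other))
    def __radd__(self, other):
        # Handles plain_str + Tainted so left-concatenation keeps the taint.
        return Tainted(str.__add__(str(other), str(self)))

def sink(markup):
    # A security-sensitive sink (think innerHTML): refuse tainted input.
    if isinstance(markup, Tainted):
        raise ValueError("tainted value reached a sink")
    return markup

user_input = Tainted("<img onerror=alert(1)>")
fragment = user_input + "</div>"   # taint propagates through concatenation
try:
    sink(fragment)
except ValueError as exc:
    print("blocked:", exc)
```

A real analysis additionally tracks flows through DOM APIs, string methods, and asynchronous callbacks; this toy only shows the source-propagation-sink structure.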
2023-04-14
Raavi, Rupendra, Alqarni, Mansour, Hung, Patrick C.K.  2022.  Implementation of Machine Learning for CAPTCHAs Authentication Using Facial Recognition. 2022 IEEE International Conference on Data Science and Information System (ICDSIS). :1–5.
Web-based technologies are evolving day by day, becoming more interactive and secure. The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is one of the security features that helps detect automated bots on the Web. Early CAPTCHAs were complex text-based designs, but optical-character-recognition algorithms can be used to crack them, which is why CAPTCHA systems moved to images. With the arrival of strong image-recognition algorithms, however, image-based CAPTCHAs can now be cracked as well. In this paper, we propose a new CAPTCHA system that can be used to differentiate real humans from bots on the Web. We use advanced deep layers with pre-trained machine-learning models for CAPTCHA authentication using a facial recognition system.
Umar, Mohammad, Ayyub, Shaheen.  2022.  Intrinsic Decision based Situation Reaction CAPTCHA for Better Turing Test. 2022 International Conference on Industry 4.0 Technology (I4Tech). :1–6.
In this modern era, web security must guard against fraudulent activities. Many attackers build programs that interact with web pages automatically to breach data or submit so many junk entries that web servers hang. CAPTCHA, which stands for Completely Automated Public Turing test to tell Computers and Humans Apart, stops such junk entries by identifying bots and denying machine-based programs the ability to intervene. Over the evolution of CAPTCHA, several methods have appeared, such as distorted text, picture recognition, math solving, and game-based CAPTCHAs. Game-based Turing tests are very popular nowadays, but such games can be cracked in several ways because the game itself is not intellectual, so an intrinsic CAPTCHA is required. The proposed system is based on an intrinsic decision-based situation-reaction challenge, which better classifies humans and bots through intrinsically human problems. Humans are more capable of dealing with real-life problems, while machines are poor at understanding a situation or how a problem can be solved. The proposed system therefore presents simple situations that are easy for a human but almost impossible for a bot: a human needs only common sense and can solve the problem within a few seconds.
Raut, Yash, Pote, Shreyash, Boricha, Harshank, Gunjgur, Prathmesh.  2022.  A Robust Captcha Scheme for Web Security. 2022 6th International Conference On Computing, Communication, Control And Automation (ICCUBEA). :1–6.
The internet has grown increasingly important in everyone's everyday life due to the availability of numerous web services such as email, cloud storage, video streaming, music streaming, and search engines. On the other hand, attacks by computer programmes such as bots are a common hazard to these internet services. A Captcha is a computer program that helps a server-side company determine whether or not a real user is requesting access; it is a security feature that prevents unauthorised access to a user's account by protecting restricted areas from automated programmes, bots, and hackers. Many websites utilise Captchas to prevent spam and other hazardous assaults when visitors log in. In recent years, however, solving Captchas has become difficult for humans too, making them less user friendly. To solve this, we propose a Captcha that is both simple and engaging for people while also robust enough to protect sensitive data from bots and hackers on the internet. The suggested scheme employs animated artifacts, rotation, and variable fonts as resistance techniques. It proves effective against OCR bots, which achieve less than 15% accuracy, while remaining easy for human users to solve, with more than 98% accuracy.
ISSN: 2771-1358
2023-03-31
Shahid, Jahanzeb, Muhammad, Zia, Iqbal, Zafar, Khan, Muhammad Sohaib, Amer, Yousef, Si, Weisheng.  2022.  SAT: Integrated Multi-agent Blackbox Security Assessment Tool using Machine Learning. 2022 2nd International Conference on Artificial Intelligence (ICAI). :105–111.
The widespread adoption of eCommerce, iBanking, and eGovernment institutions has resulted in an exponential rise in the use of web applications. Due to their large number of users, web applications have become a prime target of cybercriminals who want to steal Personally Identifiable Information (PII) and disrupt business activities. Hence, there is a dire need to audit websites and ensure information security. In this regard, several web vulnerability scanners are employed for vulnerability assessment of web applications, but attacks are still increasing day by day. Therefore, a considerable amount of research has been carried out to measure the effectiveness and limitations of publicly available web scanners, and most have been found to possess weaknesses and fail to generate the desired results. In this paper, publicly available web vulnerability scanners are evaluated against the top ten OWASP (Open Web Application Security Project) vulnerabilities, and their performance is measured on the precision of their results. Based on these results, we propose an Integrated Multi-Agent Blackbox Security Assessment Tool (SAT) for the security assessment of web applications. Our research shows that the vulnerability assessment results of the SAT are more extensive and accurate.
2023-03-03
Ajvazi, Grela, Halili, Festim.  2022.  SOAP messaging to provide quality of protection through Kerberos Authentication. 2022 29th International Conference on Systems, Signals and Image Processing (IWSSIP). CFP2255E-ART:1–4.
Service-oriented architecture (SOA) is a widely adopted architecture that uses web services, which have become increasingly important in the development and integration of applications. Its purpose is to allow information system technologies to interact by exchanging messages between sender and recipient using the Simple Object Access Protocol (SOAP), an XML document format, over the HTTP protocol. We provide an overview and analysis of standards in the field of web service security, specifically SOAP messages, using Kerberos authentication, a computer network security protocol that provides users with strong security for requests between two or more hosts located in an unreliable environment such as the internet. Everything related to Kerberos involves systems that rely on data authentication.
ISSN: 2157-8702
2023-02-03
Zheng, Jiahui, Li, Junjian, Li, Chao, Li, Ran.  2022.  A SQL Blind Injection Method Based on Gated Recurrent Neural Network. 2022 7th IEEE International Conference on Data Science in Cyberspace (DSC). :519–525.
Security is undoubtedly the most serious problem for Web applications, and SQL injection (SQLi) attacks are among the most damaging. Detecting SQL blind injection vulnerabilities is very important but, unfortunately, slow. Time-based SQL blind injection lacks web-page feedback, so a delay function must be set artificially and success judged by observing the response time of the page. Moreover, the brute-force and binary-search methods used during injection require many web requests, so obtaining database information through SQL blind injection takes a long time. In this paper, a gated recurrent neural network-based SQL blind injection technique is proposed to generate the predicted characters in SQL blind injection. Using a deep-learning neural language model and character-sequence prediction, the proposed method learns the regularities of common database information, so it can predict the next possible character from the database information obtained so far and sort the candidates by probability. We evaluate the trained model and carry out experiments on a test range, comparing our method with sqlmap (currently the most advanced SQLi test-automation tool). The experimental results show that our method significantly outperforms sqlmap for time-based SQL blind injection: it obtains the target site's database information with fewer requests and runs faster.
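The paper's request-saving trick is ordering next-character guesses by a language model's probabilities instead of brute-forcing the alphabet. As a dependency-free stand-in for the GRU, here is a bigram character model over a hypothetical corpus of common database identifiers; the ranking idea is the same even though the model is far weaker:

```python
from collections import Counter, defaultdict

# Hypothetical training strings standing in for "common database information".
corpus = ["information_schema", "users", "username", "user_id",
          "password", "sessions", "session_token"]

# Count bigram transitions: counts[a][b] = times character b followed a.
counts = defaultdict(Counter)
for word in corpus:
    for a, b in zip(word, word[1:]):
        counts[a][b] += 1

def ranked_guesses(prefix):
    """Given the characters recovered so far, return candidate next
    characters ordered most-probable first."""
    if not prefix or prefix[-1] not in counts:
        return []
    return [c for c, _ in counts[prefix[-1]].most_common()]

# After recovering "use", try 'r' before rarer continuations, so the
# blind-injection oracle is queried fewer times on average.
print(ranked_guesses("use"))
```

In a real exploitation loop each guess costs one timed HTTP request, which is why probability-ordered guessing translates directly into fewer requests.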
2023-04-14
Turnip, Togu Novriansyah, Aruan, Hotma, Siagian, Anita Lasmaria, Siagian, Leonardo.  2022.  Web Browser Extension Development of Structured Query Language Injection Vulnerability Detection Using Long Short-Term Memory Algorithm. 2022 IEEE International Conference of Computer Science and Information Technology (ICOSNIKOM). :1–5.
Structured Query Language Injection (SQLi) is a web application vulnerability that allows attackers to inject malicious SQL queries with harmful intent, including stealing sensitive information, bypassing authentication, and even executing illegal operations that cause more catastrophic damage to users of the web application. According to OWASP, SQL Injection is among the top ten most harmful attacks against web applications, and data reported by the UK's National Fraud Authority attributes 97% of data exposures to SQL Injection. A detection system for SQLi is therefore essential. The contribution of this research is securing web applications by developing a browser extension for Google Chrome using Long Short-Term Memory (LSTM), a special kind of RNN capable of learning long-term dependencies such as those in SQL Injection attacks. The trained model is deployed for static analysis in the browser extension, and the LSTM learns to identify injected URLs, with Damn Vulnerable Web Application (DVWA) serving as the sample web application for testing. Experimental results show that the proposed LSTM-based SQLi detection model achieves an accuracy of 99.97%, meaning a reliable client-side detector can effectively determine whether the URL being accessed contains a SQLi attack.
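Before any model sees a URL, an extension like this must decode the query string and inspect each parameter value. The sketch below shows that preprocessing step; the keyword heuristic at the end merely stands in for the paper's trained LSTM and is an illustration only (URLs and pattern list are hypothetical):

```python
import re
from urllib.parse import parse_qsl, unquote_plus, urlparse

# Toy stand-in for the classifier: a few well-known SQLi fragments.
SQLI_TOKENS = re.compile(
    r"(\bunion\b|\bselect\b|\bor\b\s+1=1|--|/\*|'\s*or\s*')", re.IGNORECASE)

def looks_injected(url):
    """Decode each query-string value and flag SQLi-looking content."""
    query = urlparse(url).query
    for _, value in parse_qsl(query, keep_blank_values=True):
        if SQLI_TOKENS.search(unquote_plus(value)):
            return True
    return False

print(looks_injected("http://dvwa.local/vuln.php?id=1' OR 1=1--"))  # True
print(looks_injected("http://dvwa.local/vuln.php?id=42"))           # False
```

In the paper's design the decoded, tokenized parameter values would be fed to the LSTM rather than matched against a fixed pattern list.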
2023-02-03
Roobini, M.S., Srividhya, S.R., Sugnaya, Vennela, Kannekanti, Nikhila, Guntumadugu.  2022.  Detection of SQL Injection Attack Using Adaptive Deep Forest. 2022 International Conference on Communication, Computing and Internet of Things (IC3IoT). :1–6.
Injection attack is one of the top ten security risks declared by OWASP, and SQL injection is one of its main forms. Because of their diverse and fast-changing nature, SQL injections can detrimentally affect a system, leading to broken and publicly exposed data on the site. This article therefore presents a deep forest-based technique for recognizing complex SQL attacks. Our research shows that the approach resolves the problem of the forest expanding and degrading from its original state. We present an AdaBoost-based deep forest algorithm that uses the error rate to update the weight of each item in the classification; in other words, different weights are assigned during training according to the impact of the results on different items. Our model can adjust the size of the trees quickly and handle many issues so as to avoid degradation. The results of the study show that the proposed method performs better than both traditional and advanced machine-learning training methods.
2023-06-22
Barlas, Efe, Du, Xin, Davis, James C..  2022.  Exploiting Input Sanitization for Regex Denial of Service. 2022 IEEE/ACM 44th International Conference on Software Engineering (ICSE). :883–895.
Web services use server-side input sanitization to guard against harmful input. Some web services publish their sanitization logic to make their client interface more usable, e.g., allowing clients to debug invalid requests locally. However, this usability practice poses a security risk. Specifically, services may share the regexes they use to sanitize input strings - and regex-based denial of service (ReDoS) is an emerging threat. Although prominent service outages caused by ReDoS have spurred interest in this topic, we know little about the degree to which live web services are vulnerable to ReDoS. In this paper, we conduct the first black-box study measuring the extent of ReDoS vulnerabilities in live web services. We apply the Consistent Sanitization Assumption: that client-side sanitization logic, including regexes, is consistent with the sanitization logic on the server-side. We identify a service's regex-based input sanitization in its HTML forms or its API, find vulnerable regexes among these regexes, craft ReDoS probes, and pinpoint vulnerabilities. We analyzed the HTML forms of 1,000 services and the APIs of 475 services. Of these, 355 services publish regexes; 17 services publish unsafe regexes; and 6 services are vulnerable to ReDoS through their APIs (6 domains; 15 subdomains). Both Microsoft and Amazon Web Services patched their web services as a result of our disclosure. Since these vulnerabilities were from API specifications, not HTML forms, we proposed a ReDoS defense for a popular API validation library, and our patch has been merged. To summarize: in client-visible sanitization logic, some web services advertise ReDoS vulnerabilities in plain sight. Our results motivate short-term patches and long-term fundamental solutions. "Make measurable what cannot be measured." -Galileo Galilei
ISSN: 1558-1225
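The "find vulnerable regexes" step can be sketched with a crude static heuristic: a quantified group whose body itself contains a quantifier, e.g. `(a+)+`, is a classic source of catastrophic backtracking. Real ReDoS detectors of the kind this study builds on are far more sophisticated; this illustration only catches the textbook shape:

```python
import re

# Heuristic: a parenthesized group containing + or *, itself followed by a
# quantifier. This misses many vulnerable patterns and flags some safe
# ones; it exists only to make the nested-quantifier idea concrete.
NESTED_QUANTIFIER = re.compile(r"\([^)]*[+*][^)]*\)[+*{]")

def maybe_redos(pattern):
    """Flag regex source strings with a nested-quantifier shape."""
    return bool(NESTED_QUANTIFIER.search(pattern))

print(maybe_redos(r"^(a+)+$"))   # flagged: super-linear backtracking shape
print(maybe_redos(r"^[a-z]+$"))  # not flagged: single linear quantifier
```

Against a backtracking engine, a flagged pattern like `^(a+)+$` takes time exponential in the length of an almost-matching input such as `"a" * n + "!"`, which is exactly what a crafted ReDoS probe exploits.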
2022-12-20
Van Goethem, Tom, Joosen, Wouter.  2022.  Towards Improving the Deprecation Process of Web Features through Progressive Web Security. 2022 IEEE Security and Privacy Workshops (SPW). :20–30.
To keep up with the continuous modernization of web applications and to facilitate their development, a large number of new features are introduced to the web platform every year. Although new web features typically undergo a security review, issues affecting the privacy and security of users can still surface at a later stage, requiring the deprecation and removal of affected APIs. Furthermore, as the web evolves, so do the expectations in terms of security and privacy, and legacy features might need to be replaced with improved alternatives. Currently, this process of deprecating and removing features is an ad-hoc effort that is largely uncoordinated between the different browser vendors. This causes a discrepancy in terms of compatibility and can eventually deter the removal of an API, prolonging potential security threats. In this paper, we propose a progressive security mechanism that aims to facilitate and standardize the deprecation and removal of features that pose a risk to users' security, and the introduction of features that aim to provide additional security guarantees.
ISSN: 2770-8411
2022-04-19
Farea, Abdulgbar A. R., Wang, Chengliang, Farea, Ebraheem, Ba Alawi, Abdulfattah.  2021.  Cross-Site Scripting (XSS) and SQL Injection Attacks Multi-classification Using Bidirectional LSTM Recurrent Neural Network. 2021 IEEE International Conference on Progress in Informatics and Computing (PIC). :358–363.
E-commerce, ticket booking, banking, and other web-based applications that deal with sensitive information, such as passwords, payment information, and financial data, are widespread, and web developers have varying levels of understanding of how to secure an online application. Two of the vulnerabilities identified by the Open Web Application Security Project (OWASP) in its 2017 Top Ten list are SQL injection and Cross-site Scripting (XSS); through these two flaws, an attacker can launch harmful web-based actions. Many published articles concentrate on binary classification of these attacks. This article develops a new approach for detecting SQL injection and XSS attacks using deep learning. SQL injection and XSS payload datasets are combined into a single dataset, and a word-embedding technique is used to convert each word's text into a vector. Our model uses a BiLSTM for automatic feature extraction and is trained and tested on the payload dataset, classifying payloads into three classes: XSS, SQL injection, and normal. The results show excellent performance in classifying payloads into the three classes, with the BiLSTM reaching an accuracy of 99.26%.
2022-06-30
Jadhav, Mohit, Kulkarni, Nupur, Walhekar, Omkar.  2021.  Doodling Based CAPTCHA Authentication System. 2021 Asian Conference on Innovation in Technology (ASIANCON). :1–5.
CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a widely used challenge measure for telling humans and automated computer programs apart. Several existing CAPTCHAs are reliable for normal users, whereas visually impaired users face many problems with the CAPTCHA authentication process. CAPTCHAs such as Google reCAPTCHA alternatively provide an audio CAPTCHA, but many users find it difficult to decipher due to noise, language barriers, and the accent of the audio. Existing CAPTCHA systems also lack user satisfaction on smartphones, limiting their use. Our proposed system potentially solves the problems faced by visually impaired users during CAPTCHA authentication, and it makes the authentication process generic across users as well as platforms.
2022-04-19
Wang, Pei, Bangert, Julian, Kern, Christoph.  2021.  If It’s Not Secure, It Should Not Compile: Preventing DOM-Based XSS in Large-Scale Web Development with API Hardening. 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). :1360–1372.
Despite enormous effort spent on its mitigation, cross-site scripting (XSS) remains one of the most prevalent security threats on the internet. Decades of exploitation and remediation have demonstrated that code inspection and testing alone do not eliminate XSS vulnerabilities in complex web applications with a high degree of confidence. This paper introduces Google's secure-by-design engineering paradigm that effectively prevents DOM-based XSS vulnerabilities in large-scale web development. Our approach, named API hardening, enforces a series of company-wide secure coding practices. We provide a set of secure APIs to replace native DOM APIs that are prone to XSS vulnerabilities. Through a combination of type contracts and appropriate validation and escaping, the secure APIs ensure that applications based on them are free of XSS vulnerabilities. We deploy a simple yet capable compile-time checker to guarantee that developers exclusively use our hardened APIs to interact with the DOM. We have made various efforts to scale this approach to tens of thousands of engineers without significant productivity impact. By offering rigorous tooling and consultant support, we help developers adopt the secure coding practices as seamlessly as possible. We present empirical results showing how API hardening has helped reduce the occurrences of XSS vulnerabilities in Google's enormous code base over the course of a two-year deployment.
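Google's hardened APIs are DOM wrappers enforced by a compile-time check; the type-contract idea itself is language-neutral. A minimal Python sketch of the same pattern (all names here are hypothetical, not Google's APIs): a `SafeHtml` type that can only be built through an escaping constructor, and a sink that refuses anything else:

```python
import html

_SAFE = object()  # private capability token guarding the constructor

class SafeHtml:
    """Type contract: instances exist only via the escaping constructor."""
    def __init__(self, value, _token=None):
        if _token is not _SAFE:
            raise TypeError("use from_text() to build SafeHtml")
        self.value = value

def from_text(text):
    # The only public constructor: escapes before wrapping, so every
    # SafeHtml instance is XSS-free by construction.
    return SafeHtml(html.escape(text), _token=_SAFE)

def set_inner_html(element, content):
    # Hardened sink: rejects raw strings, mirroring the compile-time check
    # that bans direct use of the native DOM API.
    if not isinstance(content, SafeHtml):
        raise TypeError("sink requires SafeHtml, got raw string")
    element["innerHTML"] = content.value

el = {}
set_inner_html(el, from_text("<script>alert(1)</script>"))
print(el["innerHTML"])  # &lt;script&gt;alert(1)&lt;/script&gt;
```

In the paper's setting the "reject raw strings" rule is enforced statically at compile time rather than at runtime, which is what lets the guarantee scale across a large code base.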
2022-01-31
Squarcina, Marco, Calzavara, Stefano, Maffei, Matteo.  2021.  The Remote on the Local: Exacerbating Web Attacks Via Service Workers Caches. 2021 IEEE Security and Privacy Workshops (SPW). :432–443.
Service workers boost the user experience of modern web applications by taking advantage of the Cache API to improve responsiveness and support offline usage. In this paper, we present the first security analysis of the threats posed by this programming practice, identifying an attack with major security implications. In particular, we show how a traditional XSS attack can abuse the Cache API to escalate into a person-in-the-middle attack against cached content, thus compromising its confidentiality and integrity. Remarkably, this attack enables new threats which are beyond the scope of traditional XSS. After defining the attack, we study its prevalence in the wild, finding that the large majority of the sites which register service workers using the Cache API are vulnerable as long as a single webpage in the same origin of the service worker is affected by an XSS. Finally, we propose a browser-side countermeasure against this attack, and we analyze its effectiveness and practicality in terms of security benefits and backward compatibility with existing web applications.
2022-08-01
Husa, Eric, Tourani, Reza.  2021.  Vibe: An Implicit Two-Factor Authentication using Vibration Signals. 2021 IEEE Conference on Communications and Network Security (CNS). :236–244.
The increased need for online account security and the prominence of smartphones in today's society have led to smartphone-based two-factor authentication schemes, in which the second factor is a code received on the user's smartphone. Evolving two-factor authentication mechanisms suggest using the proximity of the user's devices as the second authentication factor, avoiding the inconvenience of user-device interaction. These mechanisms often use low-range communication technologies or the similarities of devices' environments to prove device proximity and user authenticity. However, such mechanisms are vulnerable to co-located adversaries. This paper proposes Vibe, an implicit two-factor authentication mechanism that uses a vibration communication channel to prove users' authenticity in a secure and non-intrusive manner. Vibe's design provides security at the physical layer, reducing the attack surface to the physical surface shared between devices. As a result, it protects users' security even in the presence of co-located adversaries, the primary drawback of existing systems. We prototyped Vibe and assessed its performance using commodity hardware in different environments. Our results show an equal error rate of 0.0175 with an end-to-end authentication latency of approximately 3.86 seconds.
2022-01-31
Gurjar, Neelam Singh, S R, Sudheendra S, Kumar, Chejarla Santosh, K. S, Krishnaveni.  2021.  WebSecAsst - A Machine Learning based Chrome Extension. 2021 6th International Conference on Communication and Electronics Systems (ICCES). :1631–1635.
A browser extension, also known as a plugin or an addon, is a small software application that adds functionality to a web browser. However, security threats are always linked with such software, where data can be compromised and ultimately trust is broken. The proposed research work has developed a security model named WebSecAsst, a Chrome plugin relying on the machine-learning model XGBoost and on VirusTotal to detect malicious websites visited by the user and to determine whether files downloaded from the internet are malicious or safe. During this detection, the proposed model preserves the privacy of the user's data to a greater extent than existing commercial Chrome extensions.
2021-12-20
Wang, Pei, Guðmundsson, Bjarki Ágúst, Kotowicz, Krzysztof.  2021.  Adopting Trusted Types in Production Web Frameworks to Prevent DOM-Based Cross-Site Scripting: A Case Study. 2021 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :60–73.
Cross-site scripting (XSS) is a common security vulnerability found in web applications. DOM-based XSS, one of the variants, is becoming particularly more prevalent with the boom of single-page applications where most of the UI changes are achieved by modifying the DOM through in-browser scripting. It is very easy for developers to introduce XSS vulnerabilities into web applications since there are many ways for user-controlled, unsanitized input to flow into a Web API and get interpreted as HTML markup and JavaScript code. An emerging Web API proposal called Trusted Types aims to prevent DOM XSS by making Web APIs secure by default. Different from other XSS mitigations that mostly focus on post-development protection, Trusted Types direct developers to write XSS-free code in the first place. A common concern when adopting a new security mechanism is how much effort is required to refactor existing code bases. In this paper, we report a case study on adopting Trusted Types in a well-established web framework. Our experience can help the web community better understand the benefits of making web applications compatible with Trusted Types, while also getting to know the related challenges and resolutions. We focused our work on Angular, which is one of the most popular web development frameworks available on the market.
2022-06-14
Singh, A K, Goyal, Navneet.  2021.  Detection of Malicious Webpages Using Deep Learning. 2021 IEEE International Conference on Big Data (Big Data). :3370–3379.
Malicious webpages have been a serious threat on the Internet for the past few years. As per the latest Google Transparency reports, they remain top-ranked among online threats. Various techniques have been used to date to identify malicious sites, including static heuristics, honey clients, and machine learning. Recently, with the rapid rise of deep learning, interest has arisen in exploring deep-learning techniques for detecting malicious webpages, and in this paper deep learning is utilized for such classification. The model proposed in this research uses a Deep Neural Network (DNN) with two hidden layers to distinguish between malicious and benign webpages. This DNN model achieved a high accuracy of 99.81% with very low false positives (FP) and false negatives (FN), and with near real-time response on the test sample. The model outperformed earlier machine-learning solutions in accuracy, precision, recall, and time-performance metrics.
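Classifiers of this kind consume feature vectors extracted from each page or its URL. As a hedged sketch, the function below computes a handful of lexical URL features of the sort commonly used in malicious-URL detection; the feature set is a generic illustration, not the paper's actual inputs:

```python
import re
from urllib.parse import urlparse

def url_features(url):
    """Illustrative lexical features often fed to malicious-URL
    classifiers (generic example, not the paper's feature set)."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return {
        "length": len(url),                       # long URLs are suspicious
        "num_dots": url.count("."),               # many subdomains/levels
        "num_digits": sum(ch.isdigit() for ch in url),
        "has_ip_host": bool(re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host)),
        "uses_https": parsed.scheme == "https",
        "path_depth": parsed.path.count("/"),
    }

print(url_features("http://192.168.0.1/login/update/secure.php"))
```

A DNN like the one described would take such a vector (typically alongside content- and host-based features) and output a malicious/benign score.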
2021-12-20
Sun, Jingxue, Huang, Zhiqiu, Yang, Ting, Wang, Wengjie, Zhang, Yuqing.  2021.  A System for Detecting Third-Party Tracking through the Combination of Dynamic Analysis and Static Analysis. IEEE INFOCOM 2021 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :1–6.
With the continuous development of Internet technology, people pay more and more attention to privacy and security. In particular, third-party tracking is a major factor affecting privacy. So far, the most effective way to prevent third-party tracking is to create a blacklist. However, blacklist generation and maintenance need to be carried out manually, which is inefficient and difficult to sustain. In order to generate blacklists more quickly and accurately in this era of big data, this paper proposes a machine learning system, MFTrackerDetector, against third-party tracking. The system is based on the theory of structural holes and detects only third-party trackers. The system consists of two subsystems, DMTrackerDetector and DFTrackerDetector. DMTrackerDetector is a JavaScript-based subsystem and DFTrackerDetector is a Flash-based subsystem. Because tracking code and non-tracking code often call different APIs, DMTrackerDetector builds a classifier using all the APIs in JavaScript as features and extracts the API features in JavaScript through dynamic analysis. Unlike static analysis, the dynamic analysis method can effectively avoid code obfuscation. DMTrackerDetector eventually generates a JavaScript-based third-party tracker list named Jlist. DFTrackerDetector constructs a classifier using all the APIs in ActionScript as features and extracts the API features in Flash scripts through static analysis. DFTrackerDetector finally generates a Flash-based third-party tracker list named Flist. DFTrackerDetector achieved 92.98% accuracy on the Flash test set and DMTrackerDetector achieved 90.79% accuracy on the JavaScript test set. MFTrackerDetector eventually generates a list of third-party trackers, which is a combination of Jlist and Flist.
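The feature representation the abstract describes, scripts encoded as counts of the APIs they call, can be sketched as follows. This is a hedged illustration: the API list is a hypothetical, heavily shortened stand-in for "all the APIs in JavaScript", and DMTrackerDetector obtains its counts from dynamic execution traces (which resist obfuscation), not from the naive textual matching used here for brevity.

```python
import re
from collections import Counter

# Hypothetical, abbreviated set of APIs often associated with tracking.
TRACKING_APIS = [
    "document.cookie",
    "localStorage.setItem",
    "navigator.userAgent",
    "canvas.toDataURL",
]

def api_feature_vector(script_source):
    """Encode a script as a fixed-length vector of API call counts."""
    counts = Counter()
    for api in TRACKING_APIS:
        counts[api] = len(re.findall(re.escape(api), script_source))
    return [counts[api] for api in TRACKING_APIS]
```

A classifier trained on such vectors, one per script, would then separate tracking scripts from non-tracking ones.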
2021-05-18
Chen, Haibo, Chen, Junzuo, Chen, Jinfu, Yin, Shang, Wu, Yiming, Xu, Jiaping.  2020.  An Automatic Vulnerability Scanner for Web Applications. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :1519–1524.
With the progressive development of web applications and the urgent requirement of web security, the vulnerability scanner has been particularly emphasized, as it is regarded as a fundamental component of web security assurance. Various scanners have been developed with the intention of discovering possible vulnerabilities in advance to avoid malicious attacks. However, most of them focus only on vulnerability detection against a single target, which fails to satisfy users' efficiency demands. In this paper, an effective web vulnerability scanner that integrates information collection with vulnerability detection is proposed to verify whether the target web application is vulnerable or not. The experimental results show that, by guiding the detection process with the useful collected information, our tool achieves strong web vulnerability detection capability over a large scanning scope.
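The coupling of information collection and detection can be sketched in miniature: the collection phase yields URLs with query parameters, and the detection phase mutates each parameter with a probe payload. This is an assumed, simplified pipeline, not the paper's tool; a real scanner would send each probe request and analyze the response, whereas this sketch only generates the probe URLs.

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

# Illustrative probe payload mixing SQLi and XSS metacharacters.
PROBE = "'\"<xss>"

def probe_urls(collected_url):
    """Yield one mutated URL per query parameter of a collected URL."""
    parts = urlsplit(collected_url)
    params = parse_qsl(parts.query)
    for i, (name, _) in enumerate(params):
        mutated = list(params)
        mutated[i] = (name, PROBE)  # inject the probe into one parameter
        yield urlunsplit(parts._replace(query=urlencode(mutated)))
```

Guiding detection with collected information, as the paper proposes, means the scanner probes only parameters it actually observed, instead of blindly fuzzing every target.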
2020-12-14
Yu, L., Chen, L., Dong, J., Li, M., Liu, L., Zhao, B., Zhang, C..  2020.  Detecting Malicious Web Requests Using an Enhanced TextCNN. 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC). :768–777.
This paper proposes an approach that combines a deep learning-based method and a traditional machine learning-based method to efficiently detect malicious requests received by Web servers. The first few layers of the Convolutional Neural Network for Text Classification (TextCNN) are used to automatically extract powerful semantic features, and in the meantime transferable statistical features are defined to boost the detection ability, specifically against Web request parameter tampering. The semantic features from TextCNN and the artificially designed transferable statistical features are grouped together and fed into a Support Vector Machine (SVM), which replaces the last layer of TextCNN for classification. To facilitate the understanding of the abstract features extracted by TextCNN in the form of numerical vectors, this paper designs trace-back functions that map max-pooling outputs back to words in Web requests. After investigating the currently available datasets for Web attack detection, HTTP Dataset CSIC 2010 is selected to test and verify the proposed approach. Compared with other deep learning models, the experimental results demonstrate that the approach proposed in this paper is competitive with the state-of-the-art.
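The hand-crafted side of the hybrid, the "transferable statistical features", can be sketched as simple statistics over a request's parameters. The feature names here are illustrative assumptions, not the paper's exact set; in the full system such a vector would be concatenated with TextCNN's learned semantic features and passed to the SVM that replaces TextCNN's final layer.

```python
import re

def statistical_features(request_params):
    """Hand-crafted statistics over HTTP request parameters that help
    flag parameter tampering (illustrative feature set)."""
    joined = "&".join(f"{k}={v}" for k, v in request_params.items())
    return {
        "length": len(joined),
        "digit_ratio": sum(c.isdigit() for c in joined) / max(len(joined), 1),
        "special_chars": len(re.findall(r"[<>'\";()]", joined)),
        "sql_keywords": len(re.findall(r"(?i)\b(select|union|drop)\b", joined)),
    }
```

Because these statistics are defined independently of any training corpus, they transfer across datasets, which is what lets them complement the corpus-specific features TextCNN learns.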