Bibliography
Two-factor authentication (2FA) typically works by verifying something the user knows (a password) and something she possesses (a token, commonly instantiated as a smartphone). Conventional 2FA systems require extra interaction, such as typing a verification code, which is not user-friendly. For improved user experience, recent work aims at zero-effort 2FA, in which a smartphone placed close to a computer (where the user enters her username/password into a browser to log into a server) automatically assists with the authentication. To prove possession of the smartphone, the user needs to prove the phone is at the login spot, which reduces zero-effort 2FA to co-presence detection. In this paper, we propose SoundAuth, a secure zero-effort 2FA mechanism based on two kinds of ambient audio signals. SoundAuth looks for signs of proximity by having the browser and the smartphone compare both their surrounding sounds and certain unpredictable near-ultrasounds; if significant distinguishability is found, SoundAuth rejects the login request. We treat the comparison of the ambient signals as a classification problem and employ a machine learning technique to analyze the audio signals. Experiments with real login attempts show that SoundAuth is not only comparable to existing schemes in terms of utility but also outperforms them in resilience to attacks. SoundAuth can be easily deployed, as it is readily supported by most smartphones and major browsers.
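Co-presence from ambient sound is commonly scored by how strongly two recordings correlate once time-aligned. The following is a minimal TypeScript sketch of one such feature, maximum normalized cross-correlation; the lag bound and the choice of this particular feature are illustrative assumptions, not SoundAuth's actual feature set, which feeds a machine-learning classifier.

```typescript
// Sketch: score similarity of two ambient-audio windows by the maximum
// normalized cross-correlation over a small range of time lags.
// A score near 1 suggests the recordings share the same ambient scene.
function maxNormalizedCrossCorrelation(a: number[], b: number[], maxLag = 1000): number {
  const norm = (x: number[]) => Math.sqrt(x.reduce((s, v) => s + v * v, 0));
  const na = norm(a), nb = norm(b);
  let best = 0;
  for (let lag = -maxLag; lag <= maxLag; lag++) {
    let dot = 0;
    for (let i = 0; i < a.length; i++) {
      const j = i + lag;
      if (j >= 0 && j < b.length) dot += a[i] * b[j];
    }
    best = Math.max(best, dot / (na * nb || 1)); // guard against empty input
  }
  return best;
}
```

Such a score would be one input feature among several; a trained classifier, not a fixed threshold, makes the final co-presence decision.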
Browser extensions are a way for third-party developers to provide additional functionality on top of a browser's traditional capabilities. It has been shown that the browser extension platform can be used by attackers to carry out sophisticated attacks, including phishing, spying, DDoS, email spamming, affiliate fraud, malvertising, and payment fraud. In this paper, we demonstrate the vulnerability of current browsers to these attacks, taking Google Chrome as the case study because of its popularity. The paper also discusses the technical reasons that make it possible for attackers to launch such attacks via browser extensions, and presents a set of suggestions and solutions that can thwart these attacks.
The advent of HTML5 has revived cross-site scripting (XSS) attacks on the web. Cross-document messaging, local storage, attribute abuse, input validation, inline multimedia, and SVG emerge as likely targets for serious threats. The various new tags and attributes introduced by HTML5 can be manipulated to exploit the data on a dynamic website. The XSS attack has retained a spot in every OWASP Top 10 list of security risks released over the past decade, ranking seventh in the OWASP Top 10 of 2017. XSS executes scripts with untrusted data, without proper validation, in the victim's browser, where they can hijack user sessions, deface websites, or redirect the user to a malicious site. This paper focuses on the development of an extension for the popular Google Chromium browser that keeps track of various attack vectors, primarily HTML5 tags and attributes that may be used maliciously. The extension alerts the user whenever a possible XSS attack is discovered on a visited website.
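An extension of this kind typically runs a content script that inspects the DOM for tag/attribute combinations known to serve as XSS vectors. Below is a minimal sketch of such a check; the selector list is a small illustrative subset, not the plugin's actual rule set.

```typescript
// Sketch: scan a document for HTML5 elements carrying event-handler
// attributes commonly abused as XSS vectors. Matches are only
// *possible* vectors and would be surfaced to the user as alerts.
const RISKY_SELECTORS = [
  "svg[onload]",
  "video[onerror]",
  "audio[onerror]",
  "input[onfocus][autofocus]",
];

function scanForXssVectors(doc: Document): Element[] {
  return RISKY_SELECTORS.flatMap((sel) => Array.from(doc.querySelectorAll(sel)));
}
```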
Today, maintaining the security of web applications is of great importance. Cross-Site Scripting (XSS) is a security flaw that can affect web applications: it allows an attacker to add malicious code to HTML pages that are displayed to the user. Upon execution of the malicious code, the behavior of the system or website can be completely changed. Attackers use the XSS vulnerability to steal web browser resources such as cookies and identity information by adding malicious JavaScript code to the victim's web applications. Because web browsers support the execution of commands embedded in web pages to enable dynamic content, attackers can abuse this feature to force malicious code into a user's web browser. This work proposes a technique to detect and prevent such manipulation of web sites and thereby block XSS attacks. In addition, applications that detect XSS vulnerabilities were developed in different languages, including ASP.NET, PHP, and Ruby, to compare how XSS attacks are detected in the environments provided by different programming languages.
In this paper, we examine the recent trend towards in-browser mining of cryptocurrencies; in particular, the mining of Monero through Coinhive and similar codebases. In this model, a user visiting a website downloads JavaScript code that executes client-side in her browser, mines a cryptocurrency (typically without her consent or knowledge), and pays out the seigniorage to the website. Websites may consciously employ this as an alternative to, or supplement for, advertisement revenue; may offer premium content in exchange for mining; or may be unwittingly serving the code as a result of a breach (in which case the seigniorage is collected by the attacker). The cryptocurrency Monero is preferred seemingly for its unfriendliness to large-scale ASIC mining, which would drive browser-based efforts out of the market, as well as for its purported privacy features. We survey this landscape, conduct measurements to establish its prevalence and profitability, outline an ethical framework for considering whether it should be classified as an attack or a business opportunity, and make suggestions for the detection, mitigation, and/or prevention of browser-based mining for non-consenting users.
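One simple client-side detection heuristic is to flag script URLs matching known miner codebases. The sketch below illustrates the idea; the pattern list is a tiny illustrative sample, and real deployments combine such blocklists with behavioral signals (e.g., sustained CPU usage).

```typescript
// Sketch: flag external scripts whose URLs match known in-browser
// miner codebases such as Coinhive. A match is grounds for blocking
// the script or warning the non-consenting user.
const MINER_PATTERNS = [/coinhive/i, /cryptoloot/i, /coin-?imp/i];

function findMinerScripts(doc: Document): string[] {
  return Array.from(doc.querySelectorAll("script[src]"))
    .map((s) => (s as HTMLScriptElement).src)
    .filter((src) => MINER_PATTERNS.some((p) => p.test(src)));
}
```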
Currently, no major browser fully checks for TLS/SSL certificate revocations. This is largely due to the fact that the deployed mechanisms for disseminating revocations (CRLs, OCSP, OCSP Stapling, CRLSet, and OneCRL) are each either incomplete, insecure, inefficient, slow to update, not private, or some combination thereof. In this paper, we present CRLite, an efficient and easily deployable system for proactively pushing all TLS certificate revocations to browsers. CRLite servers aggregate revocation information for all known, valid TLS certificates on the web and store it in a space-efficient filter cascade data structure. Browsers periodically download and use this data to check for revocations of observed certificates in real time. CRLite does not require any additional trust beyond the existing PKI, and it allows clients to adopt a fail-closed security posture even in the face of network errors or attacks that make revocation information temporarily unavailable. We present a prototype of CRLite that processes TLS certificates gathered by Rapid7, the University of Michigan, and Google's Certificate Transparency on the server side, with a Firefox extension on the client side. Comparing CRLite to an idealized browser that performs correct CRL/OCSP checking, we show that CRLite reduces latency and eliminates privacy concerns. Moreover, CRLite has low bandwidth costs: it can represent all certificates with an initial download of 10 MB (less than 1 byte per revocation) followed by daily updates of 580 KB on average. Taken together, our results demonstrate that complete TLS/SSL revocation checking is within reach for all clients.
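A filter cascade resolves Bloom-filter false positives with alternating levels: the first level encodes the revoked set, the next encodes the valid certificates that collide with it, and so on. The lookup below is a minimal sketch of that walk, assuming a generic Bloom-filter interface rather than CRLite's actual wire format.

```typescript
// Sketch: membership query against a Bloom-filter cascade.
// Even-indexed levels encode revoked certificates; odd-indexed levels
// encode the valid certificates that are false positives one level up.
interface BloomFilter {
  has(key: string): boolean;
}

function isRevoked(cascade: BloomFilter[], certId: string): boolean {
  for (let level = 0; level < cascade.length; level++) {
    if (!cascade[level].has(certId)) {
      // Bloom filters have no false negatives, so absence is decisive:
      // absent at an even level => not revoked; at an odd level => revoked.
      return level % 2 === 1;
    }
  }
  // The construction guarantees the last level is exact for known
  // certificates, so presence there is decided by its parity.
  return cascade.length % 2 === 1;
}
```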
Cryptography and encryption form a topic whose complexity makes it difficult for most of the public to grasp. Our research focuses on SSL technology involving certificate authorities (CAs), a centralized system that manages and issues certificates to web servers and computers for validating identity. We first explain how a certificate provides a secure connection, creating trust between two parties who wish to communicate with one another over the internet. The paper then examines what happens when that trust is compromised and how information in transit could fall into the wrong hands. We propose a browser plugin, Certificate Authority Rescue Engine (CAre), to serve as an added source of security with simplicity and visibility. To see why CAre will benefit both average and technical users of the internet, one must understand what website security entails; therefore, this paper dives deep into website security through public key infrastructure and its core components: certificates, certificate authorities, and their relationship with web browsers.
Active authentication is the problem of continuously verifying the identity of a person based on behavioral aspects of their interaction with a computing device. In this paper, we collect and analyze behavioral biometrics data from 200 subjects, each using their personal Android mobile device for a period of at least 30 days. This data set is novel in the context of active authentication due to its size, duration, number of modalities, and absence of restrictions on tracked activity. The geographical colocation of the subjects in the study is representative of a large closed-world environment such as an organization where the unauthorized user of a device is likely to be an insider threat: coming from within the organization. We consider four biometric modalities: 1) text entered via soft keyboard, 2) applications used, 3) websites visited, and 4) physical location of the device as determined from GPS (when outdoors) or WiFi (when indoors). We implement and test a classifier for each modality and organize the classifiers as a parallel binary decision fusion architecture. We are able to characterize the performance of the system with respect to intruder detection time and to quantify the contribution of each modality to the overall performance.
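The parallel binary fusion described above can be made concrete with a Chair-Varshney-style rule: each modality casts a binary vote weighted by its known error rates. The sketch below is illustrative; the interface, the equal-prior threshold of zero, and the error-rate fields are assumptions, not the paper's exact formulation.

```typescript
// Sketch: parallel binary decision fusion across modality classifiers.
// Each decision contributes a log-likelihood-ratio term weighted by
// that classifier's measured false-alarm and miss rates.
interface ModalityClassifier {
  falseAlarmRate: number; // P(decide "intruder" | legitimate user)
  missRate: number;       // P(decide "legitimate" | intruder)
  decide(sample: unknown): boolean; // true = "intruder"
}

function fuseDecisions(classifiers: ModalityClassifier[], sample: unknown): boolean {
  let llr = 0;
  for (const c of classifiers) {
    if (c.decide(sample)) {
      llr += Math.log((1 - c.missRate) / c.falseAlarmRate); // evidence for intruder
    } else {
      llr += Math.log(c.missRate / (1 - c.falseAlarmRate)); // evidence against
    }
  }
  return llr > 0; // zero threshold assumes equal priors
}
```

Accurate modalities thus dominate the fused decision, while weak ones contribute little, which is what lets the system quantify each modality's share of overall performance.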
In recent years, with advances in JavaScript engines and the adoption of HTML5 APIs, web applications have begun to shift functionality from the server side to the client side, resulting in dense and complex interactions with HTML documents through the Document Object Model (DOM). As a consequence, client-side vulnerabilities have become more and more prevalent. In this paper, we focus on DOM-sourced cross-site scripting (XSS), a severe but not well-studied kind of vulnerability appearing in browser extensions. Compared with conventional DOM-based XSS, DOM-sourced XSS introduces a new attack surface: the DOM itself can become a vulnerable source, in addition to common sources such as URLs and form inputs. To discover such vulnerabilities, we propose a detection framework employing hybrid analysis in two phases. The first phase is a lightweight static analysis, consisting of a text filter and an abstract syntax tree parser, which produces potentially vulnerable candidates. The second phase is dynamic symbolic execution with an additional component named shadow DOM, which generates a document as a proof-of-concept exploit. In our large-scale real-world experiment, 58 previously unknown DOM-sourced XSS vulnerabilities were discovered in user scripts of the popular browser extension Greasemonkey.
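The point of the cheap first phase is to discard scripts that cannot possibly exhibit a DOM-to-sink flow before the expensive symbolic execution runs. A minimal sketch of such a text filter follows; the source/sink patterns are illustrative assumptions, not the framework's actual rules.

```typescript
// Sketch: lightweight text filter for the static phase. A script is a
// candidate only if it mentions both a DOM-derived source and a
// dangerous sink; candidates then proceed to AST parsing and, later,
// dynamic symbolic execution with a shadow DOM.
const SOURCE_PATTERN = /\b(innerText|textContent|getAttribute)\b/;
const SINK_PATTERN = /\b(innerHTML|eval|document\.write)\b/;

function isPotentialCandidate(scriptSource: string): boolean {
  return SOURCE_PATTERN.test(scriptSource) && SINK_PATTERN.test(scriptSource);
}
```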
With the rapid growth of Internet content, the main usage pattern of the internet is shifting from the traditional host-to-host model to a content dissemination model. To support content distribution, content delivery networks (CDNs) provide an ad-hoc solution, and some future-internet projects suggest a clean-slate design. Web applications have become one of the fundamental internet services, and effectively supporting the popular browser-based web application is one of the keys to success for future-internet projects. This paper proposes IDNet-based web applications. IDNet consists of an id/locator separation scheme and the domain-insulated autonomous network architecture (DIANA), which redesigns the future internet on a clean-slate basis. We design and develop an IDNet browser based on the open-source Qt framework. The browser enables ID fetching and rendering through both `idp:/'-scheme URIDs (Universal Resource Identifiers) and `http:/'-scheme URIs in HTML. Experiments show that it is well applicable to the IDNet test topology.
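Supporting both schemes essentially amounts to dispatching on the resource prefix: identifiers go to an ID resolver, while conventional URIs use the host-based stack. The sketch below illustrates this; the resolver and fetcher interfaces are hypothetical placeholders, not the IDNet browser's actual API.

```typescript
// Sketch: scheme dispatch in an ID-capable browser. `idp:/` resources
// are fetched by identifier (location-independent), while `http:/`
// resources fall back to conventional host-based fetching.
function dispatch(
  resource: string,
  resolveId: (id: string) => Promise<Uint8Array>,
  fetchHttp: (uri: string) => Promise<Uint8Array>
): Promise<Uint8Array> {
  if (resource.startsWith("idp:/")) {
    return resolveId(resource.slice("idp:/".length));
  }
  return fetchHttp(resource);
}
```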
The main usage pattern of the internet is shifting from the traditional host-to-host model to a content dissemination model, leading to rapid growth in Internet content. CDN and P2P are the two mainstream technologies for providing streaming content services in the current Internet, and in recent years some researchers have begun to focus on CDN-P2P-hybrid architectures and ISP-friendly P2P content delivery. Web applications have become one of the fundamental internet services, and effectively supporting the popular browser-based web application is one of the keys to success for future-internet projects. This paper proposes an ID-based browser with caching in IDNet. IDNet consists of an id/locator separation scheme and the domain-insulated autonomous network architecture (DIANA), which redesigns the future internet on a clean-slate basis. Experiments show that the ID web browser with caching can support content dissemination and find the closest network in IDNet holding identical content.
This paper presents the design and implementation of an information flow tracking framework based on code rewriting to prevent sensitive information leaks in browsers, combining the ideas of taint and information flow analysis. Our system has two main stages. First, it abstracts the semantics of JavaScript code and converts it into a general intermediate representation based on the JavaScript abstract syntax tree. Second, the abstract intermediate representation is processed by a special taint engine to analyze tainted information flow. Our approach ensures fine-grained isolation for both the confidentiality and integrity of information. We have implemented a proof-of-concept prototype, named JSTFlow, and have deployed it as a browser proxy to rewrite web applications at runtime. The experimental results show that JSTFlow can guarantee the security of sensitive data and detect XSS attacks with about 3x performance overhead. Because it does not involve any modifications to the target system, our system is readily deployable in practice.
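At its core, a taint engine over an intermediate representation propagates taint along assignments and reports when a tainted value reaches a sink. The sketch below illustrates that loop; the IR shapes are illustrative assumptions, not JSTFlow's actual format.

```typescript
// Sketch: forward taint propagation over a linear IR. Taint flows from
// any tainted source operand to the assignment target; a tainted value
// reaching a sink (e.g., a network send) is reported as a potential leak.
type Instruction =
  | { kind: "assign"; target: string; sources: string[] }
  | { kind: "sink"; name: string; args: string[] };

function analyze(ir: Instruction[], taintedInputs: Set<string>): string[] {
  const tainted = new Set(taintedInputs);
  const leaks: string[] = [];
  for (const ins of ir) {
    if (ins.kind === "assign") {
      if (ins.sources.some((s) => tainted.has(s))) tainted.add(ins.target);
    } else if (ins.args.some((a) => tainted.has(a))) {
      leaks.push(ins.name);
    }
  }
  return leaks;
}
```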
One of the major threats against web applications is cross-site scripting (XSS). The final target of XSS attacks is the client running a particular web browser. Over the last decade, several competing web browsers (IE, Netscape, Chrome, Firefox) have evolved to support new features. In this paper, we explore whether the evolution of web browsers is accompanied by systematic security regression testing. Beginning with an analysis of their current degree of exposure to XSS, we extend the empirical study to a decade of versions of the most popular web browsers. We use XSS attack vectors as unit test cases and propose a new method, supported by a tool, to address this XSS vector testing issue. The analysis of a decade of releases of the most popular web browsers, including mobile ones, shows an urgent need for XSS regression testing. We advocate the use of a shared security testing benchmark as good practice and propose a first set of publicly available XSS vectors as a basis for ensuring that security is not sacrificed when a new version is delivered.
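Treating XSS vectors as unit tests means loading each vector into the browser under test and recording whether its payload executed. Below is a minimal sketch of such a harness; the `renderInBrowser` callback and the three sample vectors are hypothetical placeholders, not the paper's benchmark.

```typescript
// Sketch: an XSS regression suite. Each vector is rendered in the
// browser under test; a vector whose payload executes marks a
// regression in that browser version's defenses.
const XSS_VECTORS = [
  `<img src=x onerror=alert(1)>`,
  `<svg onload=alert(1)>`,
  `<iframe src="javascript:alert(1)"></iframe>`,
];

async function runRegressionSuite(
  renderInBrowser: (html: string) => Promise<{ scriptExecuted: boolean }>
): Promise<string[]> {
  const failures: string[] = [];
  for (const vector of XSS_VECTORS) {
    const result = await renderInBrowser(vector);
    if (result.scriptExecuted) failures.push(vector);
  }
  return failures;
}
```

Running the same shared vector set against every release is what makes results comparable across browsers and versions.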
Due to the frequent use of online web applications for day-to-day activities, web applications have become a favorite target for attackers. Cross-site scripting (XSS) is one of the most prominent web-based attacks; it can lead to compromise of the whole browser rather than just the web application from which the attack originated. Securing web applications using server-side solutions alone is insufficient, as developers are not necessarily security-aware. Therefore, browser vendors have tried to evolve client-side filters to defend against these attacks. This paper shows that even the most prevalent XSS filters, deployed in the latest versions of the most widely used web browsers, do not provide adequate defense. We evaluate three browsers (Internet Explorer 11, Google Chrome 32, and Mozilla Firefox 27) against reflected XSS attacks across different types of vulnerabilities. We find that none of them is able to defend against all possible types of reflected XSS vulnerabilities. Further, we evaluate Firefox after installing an add-on named XSS-Me, which is widely used for testing reflected XSS vulnerabilities. Experimental results show that this client-side solution can shield against a greater percentage of vulnerabilities than the other browsers. It would be even more promising if this add-on were integrated into the browser instead of being enforced as an extension.
Cross-site scripting (XSS) is a common attack technique that lets attackers inject code into the output of a web page delivered to a visitor's browser, where the injected code executes automatically and steals sensitive information. To protect users from XSS attacks, many client-side solutions have been implemented; most of them are filters that sanitize malicious input. However, many of these filters do not prevent newly designed, sophisticated attacks such as multiple points of injection and injection into script context. This paper proposes and implements an approach, based on encoding unfiltered reflections, for detecting web applications vulnerable to the sophisticated attacks mentioned above. Results show that the proposed approach achieves a high detection rate for exploits. In addition, an implementation that blocks the execution of malicious scripts has been contributed to XSS-Me, an open-source Mozilla Firefox security extension that detects reflected XSS vulnerabilities; it would be an even more effective solution if integrated inside the browser rather than being enforced as an extension.
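Detecting an unfiltered reflection typically means sending a distinctive probe containing HTML metacharacters and checking whether it comes back intact. The sketch below illustrates the idea; the probe string and the fetch-based harness are illustrative assumptions, not the paper's exact method.

```typescript
// Sketch: probe a request parameter for unfiltered reflection. If the
// probe's metacharacters survive the round trip un-encoded, the
// parameter is reflected without sanitization or output encoding and
// is a candidate for reflected XSS.
const PROBE = `"'><xssprobe>`;

async function isReflectedUnfiltered(url: string, param: string): Promise<boolean> {
  const target = `${url}?${param}=${encodeURIComponent(PROBE)}`;
  const body = await (await fetch(target)).text();
  return body.includes(PROBE);
}
```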
We propose a novel phishing detection architecture based on transparent virtualization technologies and isolation of its own components. The architecture can be deployed as a security extension for virtual machines (VMs) running in the cloud. It uses fine-grained VM introspection (VMI) to extract, filter, and scale a color-based fingerprint of web pages processed by a browser from the VM's memory. By analyzing the human-perceptual similarity between the fingerprints, the architecture can reveal and mitigate phishing attacks based on redirection to spoofed web pages, and it can also detect "Man-in-the-Browser" (MitB) attacks. To the best of our knowledge, this is the first anti-phishing solution leveraging virtualization technologies. We describe the design and implementation in detail and present the results of an evaluation with real-world data.
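One simple way to compare color-based fingerprints is normalized histogram intersection: visually similar pages have similar color distributions. The sketch below uses that as a stand-in for perceptual similarity; the fingerprint format and the 0.9 cutoff are assumptions, not the paper's exact metric.

```typescript
// Sketch: perceptual comparison of color fingerprints via histogram
// intersection. A score of 1.0 means identical color distributions.
type ColorFingerprint = number[]; // normalized color-histogram bins

function similarity(a: ColorFingerprint, b: ColorFingerprint): number {
  let score = 0;
  for (let i = 0; i < Math.min(a.length, b.length); i++) {
    score += Math.min(a[i], b[i]);
  }
  return score;
}

function looksSpoofed(rendered: ColorFingerprint, legitimate: ColorFingerprint): boolean {
  // A page that looks like the legitimate one but is served from an
  // unexpected origin is a phishing candidate; 0.9 is an illustrative cutoff.
  return similarity(rendered, legitimate) > 0.9;
}
```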