Biblio

Filters: Keyword is Web browsers
2020-04-17
Oest, Adam, Safaei, Yeganeh, Doupé, Adam, Ahn, Gail-Joon, Wardman, Brad, Tyers, Kevin.  2019.  PhishFarm: A Scalable Framework for Measuring the Effectiveness of Evasion Techniques against Browser Phishing Blacklists. 2019 IEEE Symposium on Security and Privacy (SP). :1344–1361.

Phishing attacks have reached record volumes in recent years. Simultaneously, modern phishing websites are growing in sophistication by employing diverse cloaking techniques to avoid detection by security infrastructure. In this paper, we present PhishFarm: a scalable framework for methodically testing the resilience of anti-phishing entities and browser blacklists to attackers' evasion efforts. We use PhishFarm to deploy 2,380 live phishing sites (on new, unique, and previously-unseen .com domains), each using one of six different HTTP request filters based on real phishing kits. We reported subsets of these sites to 10 distinct anti-phishing entities and measured both the occurrence and timeliness of native blacklisting in major web browsers to gauge the effectiveness of protection ultimately extended to victim users and organizations. Our experiments revealed shortcomings in current infrastructure, which allow some phishing sites to go unnoticed by the security community while remaining accessible to victims. We found that simple cloaking techniques representative of real-world attacks, including those based on geolocation, device type, or JavaScript, were effective in reducing the likelihood of blacklisting by over 55% on average. We also discovered that blacklisting did not function as intended in popular mobile browsers (Chrome, Safari, and Firefox), which left users of these browsers particularly vulnerable to phishing attacks. Following disclosure of our findings, anti-phishing entities are now better able to detect and mitigate several cloaking techniques (including those that target mobile users), and blacklisting has also become more consistent between desktop and mobile platforms, but work remains to be done by anti-phishing entities to ensure users are adequately protected. Our PhishFarm framework is designed for continuous monitoring of the ecosystem and can be extended to test future state-of-the-art evasion techniques used by malicious websites.
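To make the evasion behaviour concrete, the sketch below shows a minimal, hypothetical server-side cloaking filter of the kind the deployed sites emulate: a request handler that returns a benign page unless the visitor's User-Agent looks like a mobile device. It is an illustration written in Python/Flask under assumed names (the looks_like_mobile helper and /login route), not PhishFarm code or a real phishing kit.

```python
# Illustrative sketch of device-type cloaking (assumed names, not PhishFarm code):
# crawlers and desktop browsers receive a harmless page, so blacklisting
# infrastructure that fetches the URL from a desktop vantage point sees nothing.
from flask import Flask, request

app = Flask(__name__)

BENIGN_PAGE = "<html><body>Under construction.</body></html>"
LURE_PAGE = "<html><body>(fake login form shown only to targeted visitors)</body></html>"

def looks_like_mobile(user_agent: str) -> bool:
    """Crude device-type heuristic similar to those found in real phishing kits."""
    ua = user_agent.lower()
    return any(token in ua for token in ("iphone", "android", "mobile"))

@app.route("/login")
def login():
    # The cloaking decision: only requests that look like mobile victims
    # ever see the lure content.
    if looks_like_mobile(request.headers.get("User-Agent", "")):
        return LURE_PAGE
    return BENIGN_PAGE
```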

Almousa, May, Anwar, Mohd.  2019.  Detecting Exploit Websites Using Browser-based Predictive Analytics. 2019 17th International Conference on Privacy, Security and Trust (PST). :1–3.
The popularity of Web-based computing has given rise to browser-based cyberattacks. These cyberattacks use websites that exploit various web browser vulnerabilities. To help regular users avoid exploit websites and engage in safe online activities, we propose a methodology for building a machine learning-powered predictive analytical model that will measure the risk of attacks and privacy breaches associated with visiting different websites and performing online activities using web browsers. The model will learn risk levels from historical data and metadata scraped from web browsers.
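As a rough sketch of what such a predictive model could look like, the snippet below trains a classifier on hypothetical browser-derived features; the feature names, the CSV file, and the choice of a random forest are all assumptions for illustration rather than the authors' methodology.

```python
# Sketch of a risk-scoring model over browser-derived features.
# browser_history_features.csv, its columns, and the model choice are
# illustrative assumptions, not the paper's actual pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical columns: url_length, num_redirects, uses_obfuscated_js,
# has_hidden_iframe, domain_age_days, label (1 = exploit website).
data = pd.read_csv("browser_history_features.csv")
X, y = data.drop(columns=["label"]), data["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The per-site prediction (or predicted probability) serves as the risk level.
print(classification_report(y_test, model.predict(X_test)))
```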
2019-10-14
Tymburibá, M., Sousa, H., Pereira, F..  2019.  Multilayer ROP Protection Via Microarchitectural Units Available in Commodity Hardware. 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :315–327.

This paper presents a multilayer protection approach to guard programs against Return-Oriented Programming (ROP) attacks. Upper layers validate most of a program's control flow at a low computational cost, thus not compromising runtime; lower layers provide strong enforcement guarantees to handle more suspicious flows, thus enhancing security. Our multilayer system combines techniques already described in the literature with verifications that we introduce in this paper. We argue that modern versions of x86 processors already provide the microarchitectural units necessary to implement our technique. We demonstrate the effectiveness of our multilayer protection on an extensive suite of benchmarks, which includes SPEC CPU2006; the three most popular web browsers; 209 benchmarks distributed with LLVM; and four well-known systems shown to be vulnerable to ROP exploits. Our experiments indicate that we can protect programs with almost no overhead in practice, combining the good performance of lightweight security techniques with the high dependability of heavyweight approaches.

2018-05-09
Jillepalli, A. A., Leon, D. C. d, Steiner, S., Sheldon, F. T., Haney, M. A..  2017.  Hardening the Client-Side: A Guide to Enterprise-Level Hardening of Web Browsers. 2017 IEEE 15th Intl Conf on Dependable, Autonomic and Secure Computing, 15th Intl Conf on Pervasive Intelligence and Computing, 3rd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress(DASC/PiCom/DataCom/CyberSciTech). :687–692.
Today, web browsers are a major avenue for cyber-compromise and data breaches. Web browser hardening, through high-granularity and least-privilege tailored configurations, can help prevent or mitigate many of these attack avenues. For example, on a classic client desktop infrastructure, an enforced configuration that lets users use one browser to connect to critical and trusted websites and a different browser for untrusted sites, with the former restricted to trusted sites and the latter with JavaScript and plugins disabled by default, may help prevent most JavaScript- and plugin-based attacks against critical enterprise sites. However, most organizations today still allow web browsers to run with their default configurations and allow users to use the same browser to connect to trusted and untrusted sites alike. In this article, we present detailed steps for remotely hardening multiple web browsers, Internet Explorer and Google Chrome, in a Windows-based enterprise. We hope that system administrators will use this guide to jump-start an enterprise-wide strategy for implementing high-granularity and least-privilege browser hardening. This will help secure enterprise systems at the front end in addition to the network perimeter.
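As a small taste of what such hardening can look like in practice, the sketch below writes two standard Google Chrome enterprise policies into the Windows registry with Python: JavaScript is blocked by default and re-enabled only for a placeholder trusted URL. In a real deployment these policies would normally be pushed via Group Policy as the article describes; the URL and the use of winreg here are illustrative assumptions.

```python
# Sketch: set Chrome enterprise policies on a Windows host by writing under
# HKLM\SOFTWARE\Policies\Google\Chrome (requires administrator rights).
# intranet.example.com is a placeholder for a trusted enterprise site.
import winreg

CHROME_POLICY_KEY = r"SOFTWARE\Policies\Google\Chrome"

with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, CHROME_POLICY_KEY) as key:
    # DefaultJavaScriptSetting: 1 = allow, 2 = block. Block everywhere by default.
    winreg.SetValueEx(key, "DefaultJavaScriptSetting", 0, winreg.REG_DWORD, 2)

# JavaScriptAllowedForUrls is a list policy: entries are string values named
# "1", "2", ... under a subkey of the same name.
with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE,
                      CHROME_POLICY_KEY + r"\JavaScriptAllowedForUrls") as key:
    winreg.SetValueEx(key, "1", 0, winreg.REG_SZ, "https://intranet.example.com")
```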
2017-12-20
Wazan, A. S., Laborde, R., Chadwick, D. W., Barrere, F., Benzekri, A..  2017.  TLS Connection Validation by Web Browsers: Why do Web Browsers Still Not Agree? 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC). 1:665–674.
The TLS protocol is the primary technology used for securing web transactions. It is based on X.509 certificates that are used for binding the identity of web servers' owners to their public keys. Web browsers perform the validation of X.509 certificates on behalf of Web users. Our previous research in 2009 showed that the validation process of Web browsers is inconsistent and flawed. We showed how this situation might have a negative impact on Web users. From 2009 until now, many new X.509-related standards have been created or updated. In this paper, we performed an expanded set of experiments compared to our 2009 study in order to highlight the improvements and/or regressions in Web browsers' behaviours.
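For readers who want to see what certificate validation involves outside the browser, here is a minimal Python sketch that performs chain and hostname validation against the system trust store with the standard ssl module; browsers layer many additional checks (revocation, Certificate Transparency, pinning) on top of this, and the hostname used is only an example.

```python
# Minimal sketch of X.509 chain and hostname validation using Python's
# standard ssl module and the system CA store. Browsers perform these
# checks plus many more (revocation, CT, pinning, policy constraints).
import socket
import ssl

def validate_tls(hostname: str, port: int = 443) -> dict:
    context = ssl.create_default_context()   # loads the default trust store
    context.check_hostname = True
    context.verify_mode = ssl.CERT_REQUIRED
    with socket.create_connection((hostname, port), timeout=5) as sock:
        # A successful handshake means the chain built to a trusted root
        # and the certificate matched the requested hostname.
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

if __name__ == "__main__":
    print(validate_tls("example.org"))
```
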
Narvekar, A. N., Joshi, K. K..  2017.  Security sandbox model for modern web environment. 2017 International Conference on Nascent Technologies in Engineering (ICNTE). :1–6.
Creating automated tests that exploit browser vulnerabilities requires very good technical knowledge, usually a combination of technical abilities and a set of specific tools. Security is of prime importance when it comes to web browsers. Attacks during browsing, while executing downloaded files, and during transmission are very frequent these days, and hence all browsers need to be hardened to ensure security. A sandbox is an environment in which new or untrusted applications are executed, preventing malicious applications from running directly on the hardware. Many leading web browsers are trying their best to implement sandboxing. In this paper, we describe the basic necessity of a sandbox, review current implementations in different web browsers, and present a self-proposed approach.
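To illustrate the general idea of confining an untrusted program, the toy sketch below runs a child process under hard CPU, memory, and file-size limits using Python's resource module (POSIX only). A real browser sandbox is far stricter, combining separate processes, seccomp filters, and brokered system calls; the script name below is a placeholder.

```python
# Toy confinement sketch (POSIX only): run an untrusted program in a child
# process with hard CPU, memory, and file-size limits. Real browser sandboxes
# add process isolation, seccomp filters, and a broker; this only shows the idea.
import resource
import subprocess

def limit_resources():
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))               # 2 seconds of CPU
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20,) * 2)    # 256 MiB address space
    resource.setrlimit(resource.RLIMIT_FSIZE, (2**20, 2**20))     # 1 MiB max file size
    resource.setrlimit(resource.RLIMIT_CORE, (0, 0))              # no core dumps

result = subprocess.run(
    ["python3", "untrusted_script.py"],        # placeholder payload
    preexec_fn=limit_resources,
    capture_output=True,
    timeout=10,
)
print(result.returncode, result.stdout[:200])
```
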
Dolnák, I., Litvik, J..  2017.  Introduction to HTTP security headers and implementation of HTTP strict transport security (HSTS) header for HTTPS enforcing. 2017 15th International Conference on Emerging eLearning Technologies and Applications (ICETA). :1–4.

This article presents an introduction to HTTP security headers, a relatively new security topic in communication over the Internet. It is emphasized that the HTTPS protocol and SSL/TLS certificates alone do not offer a sufficient level of security for communication among people and devices. In the world of web applications and the Internet of Things (IoT), it is vital to bring communication security to a higher level, which can be achieved in a few simple steps. HTTP response headers, used for different purposes in the past, are now an effective way to propagate security policies from servers to clients (from web servers to web browsers). The first improvement is enforcing the HTTPS protocol for communication everywhere possible and promoting it as the first and only option for secure connections over the Internet. It is emphasized that plain HTTP is no longer suitable for communication.
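As a concrete example of the HSTS enforcement the paper discusses, the minimal Flask sketch below redirects plain-HTTP requests to HTTPS and attaches a Strict-Transport-Security header to every response; the one-year max-age and includeSubDomains directive are common choices rather than values prescribed by the paper.

```python
# Minimal sketch of HSTS enforcement in a Flask application: redirect plain
# HTTP to HTTPS, then tell browsers (via Strict-Transport-Security) to use
# HTTPS only for the next year, including subdomains.
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts_header(response):
    response.headers["Strict-Transport-Security"] = (
        "max-age=31536000; includeSubDomains"
    )
    return response
```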

2017-02-23
Jia, L., Sen, S., Garg, D., Datta, A..  2015.  A Logic of Programs with Interface-Confined Code. 2015 IEEE 28th Computer Security Foundations Symposium. :512–525.

Interface-confinement is a common mechanism that secures untrusted code by executing it inside a sandbox. The sandbox limits (confines) the code's interaction with key system resources to a restricted set of interfaces. This practice is seen in web browsers, hypervisors, and other security-critical systems. Motivated by these systems, we present a program logic, called System M, for modeling and proving safety properties of systems that execute adversary-supplied code via interface-confinement. In addition to using computation types to specify effects of computations, System M includes a novel invariant type to specify the properties of interface-confined code. The interpretation of the invariant type includes terms whose effects satisfy an invariant. We construct a step-indexed model built over traces and prove the soundness of System M relative to the model. System M is the first program logic that allows proofs of safety for programs that execute adversary-supplied code without forcing the adversarial code to be available for deep static analysis. System M can be used to model and verify protocols as well as system designs. We demonstrate the reasoning principles of System M by verifying the state integrity property of the design of Memoir, a previously proposed trusted computing system.

2015-05-05
Mewara, B., Bairwa, S., Gajrani, J..  2014.  Browser's defenses against reflected cross-site scripting attacks. Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on. :662-667.

Due to the frequent use of online web applications for various day-to-day activities, web applications are becoming a prime target for attackers. Cross-Site Scripting (XSS) is one of the most prominent web-based attacks and can lead to compromise of the whole browser rather than just the web application from which the attack originated. Securing web applications using server-side solutions alone is not sufficient, as developers are not necessarily security-aware. Therefore, browser vendors have tried to evolve client-side filters to defend against these attacks. This paper shows that even the most prevalent XSS filters, deployed by the latest versions of the most widely used web browsers, do not provide adequate defense. We evaluate three browsers, Internet Explorer 11, Google Chrome 32, and Mozilla Firefox 27, against reflected XSS attacks targeting different types of vulnerabilities. We find that none of them is able to defend against all possible types of reflected XSS vulnerabilities. Further, we evaluate Firefox after installing an add-on named XSS-Me, which is widely used for testing reflected XSS vulnerabilities. Experimental results show that this client-side solution can shield against a greater percentage of vulnerabilities than the other browsers. It would likely be even more effective if this add-on were integrated inside the browser instead of being enforced as an extension.
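To make the class of flaw concrete, the sketch below shows a reflected XSS hole in a minimal Flask endpoint and the straightforward fix of encoding untrusted output; the route and parameter names are illustrative, and the example is independent of the browser filters evaluated in the paper.

```python
# Minimal sketch of a reflected XSS flaw and its fix through output encoding.
# The routes and the "q" parameter are illustrative only.
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/search-vulnerable")
def search_vulnerable():
    q = request.args.get("q", "")
    # The parameter is reflected verbatim, so a request such as
    #   /search-vulnerable?q=<script>alert(1)</script>
    # runs script in the victim's browser unless a client-side filter steps in.
    return f"<h1>Results for {q}</h1>"

@app.route("/search-fixed")
def search_fixed():
    q = request.args.get("q", "")
    # HTML-encoding the untrusted input neutralises any injected markup.
    return f"<h1>Results for {escape(q)}</h1>"
```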
 

2015-05-01
Abd Aziz, N., Udzir, N.I., Mahmod, R..  2014.  Performance analysis for extended TLS with mutual attestation for platform integrity assurance. Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2014 IEEE 4th Annual International Conference on. :13-18.

A web service is a web-based application accessed over the Internet. Common web-based applications are deployed using web browsers and web servers. However, Web Service security is a major concern, since it was not widely studied and integrated at the design stage of the Web Service standards; security mechanisms are add-on modules rather than well-defined solutions within the standards. Consequently, various web services security solutions have been defined in order to protect interactions over a network. Remote attestation is an authentication technique proposed by the Trusted Computing Group (TCG) that enables verification of a platform's trusted environment and assures that the reported information is accurate. To incorporate this method into the web services framework in order to guarantee the trustworthiness and security of web-based applications, a new framework called TrustWeb is proposed. The TrustWeb framework integrates remote attestation into the SSL/TLS protocol to provide integrity information about the involved endpoint platforms. The framework enhances the TLS protocol with a mutual attestation mechanism, which can help address the weaknesses of transferring sensitive computations and offers a practical way to solve the remote trust issue in a client-server environment. In this paper, we describe the work of designing and building a framework prototype in which the attestation mechanism is integrated into the Mozilla Firefox browser and the Apache web server. We also present results showing the framework's efficiency.