Biblio

Found 146 results

Filters: Keyword is Browsers
2017-04-20
Rao, K. S., Jain, N., Limaje, N., Gupta, A., Jain, M., Menezes, B..  2016.  Two for the price of one: A combined browser defense against XSS and clickjacking. 2016 International Conference on Computing, Networking and Communications (ICNC). :1–6.
Cross Site Scripting (XSS) and clickjacking have been ranked among the top web application threats in recent times. This paper introduces XBuster - our client-side defence against XSS, implemented as an extension to the Mozilla Firefox browser. XBuster splits each HTTP request parameter into HTML and JavaScript contexts and stores them separately. It searches for both contexts in the HTTP response and handles each context type differently. It defends against all XSS attack vectors including partial script injection, attribute injection and HTML injection. Also, existing XSS filters may inadvertently disable frame busting code used in web pages as a defence against clickjacking. However, XBuster has been designed to detect and neutralize such attempts.
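
The context-splitting idea lends itself to a compact sketch. The following Python fragment is a simplified illustration, not the authors' Firefox extension; the function names, the character class, and the length threshold are ours. It splits a request parameter into HTML and JavaScript contexts, as XBuster does, and checks the response for verbatim reflections:

    import re
    from urllib.parse import parse_qsl, urlsplit

    # Characters that carry meaning in a script context (illustrative set).
    SCRIPT_CHARS = re.compile(r"[<>\"'();=]")

    def split_contexts(value):
        # Separate a request parameter into an HTML context (tag-like
        # fragments) and a JavaScript context (code-like fragments).
        html_ctx = re.findall(r"<[^>]*>?", value)
        js_ctx = [tok for tok in SCRIPT_CHARS.split(value) if tok.strip()]
        return html_ctx, js_ctx

    def response_reflects_request(url, response_body):
        # Flag a response in which any context fragment of a request
        # parameter reappears verbatim (a possible reflected XSS).
        for _, value in parse_qsl(urlsplit(url).query):
            html_ctx, js_ctx = split_contexts(value)
            if any(len(f) > 3 and f in response_body for f in html_ctx + js_ctx):
                return True
        return False
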
2017-03-07
Nirmal, K., Janet, B., Kumar, R..  2015.  Phishing - the threat that still exists. 2015 International Conference on Computing and Communications Technologies (ICCCT). :139–143.

Phishing is an online security attack in which the attacker aims to harvest sensitive information such as passwords and credit card details from users by making them believe that what they see is genuine. This threat has been in existence for over a decade, and there have been continuous developments in countering it. However, statistical studies reveal that phishing remains a major threat in today's world as the online era booms. In this paper, we look into the art of phishing and present a practical analysis of how state-of-the-art anti-phishing systems fail to prevent it. Using the loopholes identified in these systems, we pave the roadmap for the kind of system that will counter this online security threat more effectively.

Botas, Á, Rodríguez, R. J., Väisänen, T., Zdzichowski, P..  2015.  Counterfeiting and Defending the Digital Forensic Process. 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing. :1966–1971.

In recent years, criminals have become aware of how the digital evidence that leads them to court and jail is collected and analyzed. Hence, they have started to develop anti-forensic techniques to evade, hamper, or nullify such evidence. Nowadays, these techniques are broadly used by criminals, causing forensic analysis to fall into a state of decay. To defeat these techniques, forensic analysts need to first identify them and then somehow mitigate their effects. In this paper, we review anti-forensic techniques and propose a new taxonomy that relates each technique to the initial phase of the forensic process it mainly affects. Furthermore, we introduce mitigation techniques for these anti-forensic techniques, considering both the chance of overcoming an anti-forensic technique and the difficulty of applying the mitigation.

Lakhita, Yadav, S., Bohra, B., Pooja.  2015.  A review on recent phishing attacks in Internet. 2015 International Conference on Green Computing and Internet of Things (ICGCIoT). :1312–1315.

The development of the internet has come with another domain: cyber-crime. Users can be exposed to illegal activity without realizing it, so it has become important to make the technology reliable. Phishing techniques centre on the domain of email messages. Phishing emails are socially engineered messages that link to a hosted phishing website, where a click on the URL executes malware code or triggers other actions. Lexically analyzing URLs can enhance performance and help to differentiate between a legitimate email and a phishing URL. As assessed in this study, email classification combined with textual analysis of the phishing URL is successful and results in highly precise anti-phishing.
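
As a rough illustration of such lexical URL analysis, the following minimal Python sketch extracts the kind of surface features this line of work relies on; the feature set and the token list are illustrative, not the ones used in the paper:

    from urllib.parse import urlsplit

    # Illustrative list of words that often appear in phishing URLs.
    SUSPICIOUS_TOKENS = ("login", "verify", "update", "secure", "account")

    def lexical_features(url):
        # Compute simple lexical features of a URL for a phishing classifier.
        parts = urlsplit(url)
        host = parts.hostname or ""
        return {
            "url_length": len(url),
            "num_dots_in_host": host.count("."),
            "has_ip_host": host.replace(".", "").isdigit(),
            "num_hyphens": url.count("-"),
            "has_at_symbol": "@" in url,
            "suspicious_words": sum(t in url.lower() for t in SUSPICIOUS_TOKENS),
        }
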

2017-02-21
Jeon, H. S., Jung, H., Chun, W..  2015.  An extended web browser for id/locator separation network. 2015 International Conference on Information and Communication Technology Convergence (ICTC). :749-754.

With the rapid growth of Internet content, the main usage pattern of the internet is shifting from the traditional host-to-host model to a content dissemination model. To support content distribution, content delivery networks (CDNs) give an ad-hoc solution, and some future internet projects suggest a clean-slate design. Web applications have become one of the fundamental internet services, and effectively supporting the popular browser-based web application is one of the keys to success for future internet projects. This paper proposes IDNet-based web applications. IDNet consists of an id/locator separation scheme and a domain-insulated autonomous network architecture (DIANA), which redesigns the future internet on a clean-slate basis. We design and develop an IDNet browser based on the open-source Qt framework. The IDNet browser enables ID fetching and rendering for both `idp:/'-scheme URIDs (Universal Resource Identifiers) and `http:/'-scheme URIs in HTML. The experiment shows that it is well applicable to the IDNet test topology.
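
How a browser might route the two URI schemes side by side can be sketched as follows. This is a hypothetical Python outline of the dispatch only; the IDNet browser itself is built on Qt, and its actual resolution protocol is not reproduced here, so both resolver functions are labeled stand-ins:

    from urllib.parse import urlsplit

    def resolve_by_id(urid):
        # Hypothetical stand-in: consult the id/locator mapping system for
        # the identifier, then fetch the content from a returned locator.
        raise NotImplementedError("IDNet resolution goes here")

    def fetch_by_locator(uri):
        # Hypothetical stand-in for the classic host-based HTTP fetch.
        raise NotImplementedError("ordinary HTTP fetch goes here")

    def fetch(resource):
        # Route 'idp:' identifiers to ID-based resolution and ordinary
        # 'http(s):' URIs to locator-based retrieval, mirroring the dual
        # fetching the IDNet browser supports for pages that mix both.
        scheme = urlsplit(resource).scheme
        if scheme == "idp":
            return resolve_by_id(resource)
        if scheme in ("http", "https"):
            return fetch_by_locator(resource)
        raise ValueError("unsupported scheme: " + scheme)
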

Jeon, H. S., Jung, H., Chun, W..  2015.  ID Based Web Browser with P2P Property. 2015 9th International Conference on Future Generation Communication and Networking (FGCN). :41-44.

The main usage pattern of the internet is shifting from the traditional host-to-host model to a content dissemination model, which has led to rapid growth in Internet content. CDN and P2P are the two mainstream technologies for providing streaming content services in the current Internet. In recent years, some researchers have begun to focus on CDN-P2P-hybrid architectures and ISP-friendly P2P content delivery technology. Web applications have become one of the fundamental internet services, and effectively supporting the popular browser-based web application is one of the keys to success for future internet projects. This paper proposes an ID-based browser with caching in IDNet. IDNet consists of an id/locator separation scheme and a domain-insulated autonomous network architecture (DIANA), which redesigns the future internet on a clean-slate basis. Experiments show that an ID web browser with a caching function can support content dissemination and find the closest network in IDNet holding identical content.
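
The property that makes ID-based caching attractive, that the same identifier names the same content wherever it is stored, can be sketched minimally (illustrative Python; the eviction policy and capacity are arbitrary choices of ours):

    class IDCache:
        # A minimal content cache keyed by identifier rather than by host
        # and path: in an id/locator-split network a cache hit never needs
        # the locator at all. Eviction here is a plain FIFO sketch.
        def __init__(self, capacity=128):
            self.capacity = capacity
            self.store = {}  # maps content ID -> content bytes

        def get(self, content_id):
            return self.store.get(content_id)

        def put(self, content_id, content):
            if len(self.store) >= self.capacity:
                self.store.pop(next(iter(self.store)))  # evict oldest entry
            self.store[content_id] = content
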

2015-05-06
Zhang, X., Persson, M., Nyberg, M., Mokhtari, B., Einarson, A., Linder, H., Westman, J., Chen, D., Torngren, M..  2014.  Experience on applying software architecture recovery to automotive embedded systems. Software Maintenance, Reengineering and Reverse Engineering (CSMR-WCRE), 2014 Software Evolution Week - IEEE Conference on. :379-382.

The importance and potential advantages of a comprehensive product architecture description are well described in the literature. However, developing such a description takes additional resources, and it is difficult to maintain consistency with evolving implementations. This paper presents an approach and industrial experience based on architecture recovery from source code at the truck manufacturer Scania CV AB. The extracted representation of the architecture is presented in several views and verified at the CAN signal level. Lessons learned are discussed.

Sayed, B., Traore, I..  2014.  Protection against Web 2.0 Client-Side Web Attacks Using Information Flow Control. Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on. :261-268.

The dynamic nature of Web 2.0 and the heavy obfuscation of web-based attacks complicate the job of traditional protection systems such as firewalls, anti-virus solutions, and IDSs. Using ready-made toolkits, cyber-criminals can launch sophisticated attacks such as cross-site scripting (XSS), cross-site request forgery (CSRF), and botnets, to name a few. In recent years, cyber-criminals have targeted legitimate websites and social networks to inject malicious scripts that compromise the security of the visitors of such websites, performing actions through the victim's browser without the victim's permission. This poses the need for effective mechanisms to protect against Web 2.0 attacks that mainly target the end user. In this paper, we address these challenges from an information flow control perspective by developing a framework that restricts the flow of information on the client side to legitimate channels. The proposed model tracks sensitive information flow and prevents information leakage. When applied in the context of client-side web-based attacks, it is expected to provide a more secure browsing environment for the end user.
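
The core restriction, that sensitive values may only flow to legitimate channels, can be illustrated with a toy sketch. This is a Python stand-in for the client-side model; the whitelist, class names, and propagation rule are ours, not the paper's framework:

    # Legitimate channels to which sensitive data may flow (illustrative).
    ALLOWED_SINKS = {"https://bank.example/api"}

    class Labeled:
        # Wrap a value with a sensitivity label; any value derived from a
        # sensitive one stays sensitive (simple taint-style propagation).
        def __init__(self, value, sensitive=False):
            self.value, self.sensitive = value, sensitive

        def concat(self, other):
            sens = self.sensitive or getattr(other, "sensitive", False)
            return Labeled(str(self.value) + str(getattr(other, "value", other)), sens)

    def send(destination, data):
        # Deny any network send that would leak sensitive data to a channel
        # outside the whitelist, the core restriction in the paper's model.
        if isinstance(data, Labeled) and data.sensitive and destination not in ALLOWED_SINKS:
            raise PermissionError("blocked flow of sensitive data to " + destination)
        # ... perform the actual request here ...

For example, a cookie wrapped as Labeled(cookie, sensitive=True) would be rejected by send() toward any attacker-controlled destination, while the same call to a whitelisted sink proceeds.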

2015-05-05
Blankstein, A., Freedman, M.J..  2014.  Automating Isolation and Least Privilege in Web Services. Security and Privacy (SP), 2014 IEEE Symposium on. :133-148.

In many client-facing applications, a vulnerability in any part can compromise the entire application. This paper describes the design and implementation of Passe, a system that protects a data store from unintended data leaks and unauthorized writes even in the face of application compromise. Passe automatically splits (previously shared-memory-space) applications into sandboxed processes. It limits communication between those components and the types of accesses each component can make to shared storage, such as a backend database. To limit components to their least privilege, Passe uses dynamic analysis on developer-supplied end-to-end test cases to learn data- and control-flow relationships between database queries and previous query results, and it then strongly enforces those relationships. Our prototype of Passe acts as a drop-in replacement for the Django web framework. By running eleven unmodified, off-the-shelf applications in Passe, we demonstrate its ability to provide strong security guarantees (Passe correctly enforced 96% of the applications' policies) with little additional overhead. Additionally, in the web-specific setting of the prototype, we also mitigate the cross-component effects of cross-site scripting (XSS) attacks by combining browser HTML5 sandboxing techniques with our automatic component separation.
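
The learn-then-enforce idea can be reduced to a toy sketch. This is not Passe's actual mechanism, which infers dependencies between database queries and prior query results; the class name and the provenance strings below are illustrative:

    from collections import defaultdict

    class QueryGuard:
        # Learn, from trusted end-to-end test runs, which provenances a
        # query's arguments are ever drawn from, then enforce the learned
        # relationship in production.
        def __init__(self):
            self.learned = defaultdict(set)  # query template -> argument sources

        def observe(self, template, source):
            # Training phase: record that this query's argument came from,
            # e.g., "session.user_id" or "prev_query.result.id".
            self.learned[template].add(source)

        def check(self, template, source):
            # Enforcement phase: reject an argument whose provenance was
            # never seen during the trusted runs.
            if source not in self.learned[template]:
                raise PermissionError("query argument violates learned data-flow")
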

Xiao, W., Sun, J., Chen, H., Xu, X..  2014.  Preventing Client Side XSS with Rewrite Based Dynamic Information Flow. Parallel Architectures, Algorithms and Programming (PAAP), 2014 Sixth International Symposium on. :238-243.

This paper presents the design and implementation of an information flow tracking framework based on code rewriting to prevent sensitive information leaks in browsers, combining the ideas of taint tracking and information flow analysis. Our system has two main processes. First, it abstracts the semantics of JavaScript code and converts it to a general form of intermediate representation on the basis of the JavaScript abstract syntax tree. Second, the abstract intermediate representation is processed by a special taint engine to analyze tainted information flow. Our approach ensures fine-grained isolation for both the confidentiality and integrity of information. We have implemented a proof-of-concept prototype, named JSTFlow, and have deployed it as a browser proxy to rewrite web applications at runtime. The experimental results show that JSTFlow can guarantee the security of sensitive data and detect XSS attacks with about 3x performance overhead. Because it does not involve any modifications to the target system, our system is readily deployable in practice.
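
The taint engine's core loop over an abstract intermediate representation can be illustrated in miniature. The opcode set and tuple layout below are hypothetical, chosen for the sketch; JSTFlow's real IR is derived from the JavaScript AST:

    # Each IR instruction is (op, dest, sources); 'source' ops introduce
    # taint (e.g. reading document.cookie) and 'sink' ops must not receive it.
    def run_taint(ir):
        tainted = set()
        for op, dest, srcs in ir:
            if op == "source":
                tainted.add(dest)
            elif op == "assign":
                if any(s in tainted for s in srcs):
                    tainted.add(dest)   # taint propagates through derivation
                else:
                    tainted.discard(dest)
            elif op == "sink":
                if any(s in tainted for s in srcs):
                    raise RuntimeError("tainted value reaches sink: %s" % (srcs,))
        return tainted

    # Example: a cookie flows through a concatenation into a network sink,
    # so run_taint(program) raises at the final instruction.
    program = [
        ("source", "c", []),            # c = document.cookie
        ("assign", "msg", ["c", "u"]),  # msg = c + url
        ("sink",   None,  ["msg"]),     # send(msg)  -> flagged
    ]
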

You, J., Guo, F..  2014.  Improved CSRFGuard for CSRF attacks defense on Java EE platform. Computer Science Education (ICCSE), 2014 9th International Conference on. :1115-1120.

CSRFGuard is a tool running on the Java EE platform to defend against Cross-Site Request Forgery (CSRF) attacks, but it has some shortcomings: scripts must be inserted manually, dynamically created requests cannot be handled effectively, and the defense can be bypassed through Cross-Site Scripting (XSS). Corresponding improvements were made to address these shortcomings. A Servlet filter is used to intercept responses, and page source code is stored by a custom response wrapper class that adds script tags, so that scripts are inserted automatically. The JavaScript event delegation mechanism binds forms to onfocus and onsubmit events, so that dynamically created requests are handled effectively. A token dynamically added when the event is triggered effectively prevents the defense from being bypassed through XSS. The experimental results show that the improved CSRFGuard can effectively defend against CSRF attacks.
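
The filter-plus-event-delegation idea translates to a short sketch. The paper's implementation is a Java EE Servlet filter; what follows is an analogous Python wrapper with names of our choosing, showing the same two steps: buffer the response to inject a script tag, and delegate the submit event so dynamically created forms also carry the token:

    import secrets
    from html import escape

    def csrf_filter(handler):
        # Wrap a response handler the way the improved CSRFGuard wraps
        # responses: buffer the page, then append a script that adds the
        # session token to every form submission.
        def wrapped(session, request):
            token = session.setdefault("csrf_token", secrets.token_hex(16))
            body = handler(session, request)
            inject = ("<script>document.addEventListener('submit', function (e) {"
                      " var f = e.target, i = document.createElement('input');"
                      " i.type = 'hidden'; i.name = 'csrf_token';"
                      " i.value = '%s'; f.appendChild(i); }, true);</script>"
                      % escape(token))
            # Delegating on 'submit' at the document level also covers forms
            # created after page load, the dynamic-request case the paper
            # addresses.
            return body.replace("</body>", inject + "</body>")
        return wrapped
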

Abgrall, E., le Traon, Y., Gombault, S., Monperrus, M..  2014.  Empirical Investigation of the Web Browser Attack Surface under Cross-Site Scripting: An Urgent Need for Systematic Security Regression Testing. Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on. :34-41.

One of the major threats against web applications is Cross-Site Scripting (XSS). The final target of XSS attacks is the client running a particular web browser. During the last decade, several competing web browsers (IE, Netscape, Chrome, Firefox) have evolved to support new features. In this paper, we explore whether the evolution of web browsers is accompanied by systematic security regression testing. Beginning with an analysis of their current degree of exposure to XSS, we extend the empirical study to a decade of the most popular web browser versions. We use XSS attack vectors as unit test cases and propose a new method, supported by a tool, to address this XSS vector testing issue. The analysis of a decade of releases of the most popular web browsers, including mobile ones, shows an urgent need for XSS regression testing. We advocate the use of a shared security testing benchmark as a good practice and propose a first set of publicly available XSS vectors as a basis to ensure that security is not sacrificed when a new version is delivered.
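
Using attack vectors as unit test cases across browser versions can be sketched as a toy Python harness. In a real run each "browser" would be an actual browser build driven by automation; here, purely for illustration, a build is a stand-in sanitizing function and a vector counts as executed if it survives unchanged:

    def regression_run(browsers, vectors):
        # For each browser build, record which XSS vectors still execute;
        # a vector passing in an old build but not a new one (or vice
        # versa) is exactly the regression signal the paper calls for.
        results = {}
        for name, sanitize in browsers.items():
            results[name] = [v for v in vectors if sanitize(v) == v]
        return results

    # Toy example: a build that filters nothing vs one stripping <script>.
    browsers = {
        "build-old": lambda s: s,
        "build-new": lambda s: s.replace("<script>", ""),
    }
    vectors = ["<script>alert(1)</script>", "<img src=x onerror=alert(1)>"]
    print(regression_run(browsers, vectors))
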

Mewara, B., Bairwa, S., Gajrani, J..  2014.  Browser's defenses against reflected cross-site scripting attacks. Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on. :662-667.

Due to the frequent use of online web applications for various day-to-day activities, web applications are becoming the most suitable target for attackers. Cross-Site Scripting (XSS) is one of the most prominent defacing web-based attacks; it can lead to compromise of the whole browser rather than just the web application from which the attack originated. Securing web applications using server-side solutions alone is not sufficient, as developers are not necessarily security-aware. Therefore, browser vendors have tried to evolve client-side filters to defend against these attacks. This paper shows that even the foremost prevailing XSS filters deployed in the latest versions of the most widely used web browsers do not provide appropriate defense. We evaluate three browsers (Internet Explorer 11, Google Chrome 32, and Mozilla Firefox 27) against reflected XSS attacks on different types of vulnerabilities. We find that none of them is completely able to defend against all possible types of reflected XSS vulnerabilities. Further, we evaluate Firefox after installing an add-on named XSS-Me, which is widely used for testing reflected XSS vulnerabilities. Experimental results show that this client-side solution can shield against a greater percentage of vulnerabilities than the other browsers. It would be even more effective if this add-on were integrated inside the browser instead of being enforced as an extension.

Mewara, B., Bairwa, S., Gajrani, J., Jain, V..  2014.  Enhanced browser defense for reflected Cross-Site Scripting. Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2014 3rd International Conference on. :1-6.

Cross-Site Scripting (XSS) is a common attack technique that lets attackers insert code into the output of a web page, which is then delivered to the visitor's web browser, where the inserted code executes automatically and steals sensitive information. To protect users from XSS attacks, many client-side solutions have been implemented; most of those in use are filters that sanitize malicious input. However, many of these filters do not protect against newly designed, sophisticated attacks such as multiple points of injection, injection into script, etc. This paper proposes and implements an approach based on encoding unfiltered reflections for detecting vulnerable web applications that can be exploited using the above-mentioned sophisticated attacks. Results show that the proposed approach provides a higher and more accurate detection rate for exploits. In addition, an implementation that blocks the execution of malicious scripts has been contributed to XSS-Me, an open-source Mozilla Firefox security extension that detects reflected XSS vulnerabilities and can be considered an effective solution if integrated inside the browser rather than enforced as an extension.
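
The encoding-based detection idea, distinguishing a verbatim reflection from an encoded one, can be sketched briefly in Python; the probe string and the verdict labels are ours, not the paper's:

    from html import escape

    # A marker containing characters that a safe sink must encode.
    PROBE = "xss'\"<probe>"

    def classify_reflection(page):
        # If the probe comes back verbatim, the reflection is unfiltered and
        # likely exploitable; if only its encoded form appears, the sink
        # encoded the output; otherwise the input is not reflected at all.
        if PROBE in page:
            return "unencoded reflection: likely vulnerable"
        if escape(PROBE, quote=True) in page:
            return "encoded reflection: safe"
        return "not reflected"
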

Rocha, T.S., Souto, E..  2014.  ETSSDetector: A Tool to Automatically Detect Cross-Site Scripting Vulnerabilities. Network Computing and Applications (NCA), 2014 IEEE 13th International Symposium on. :306-309.

The inappropriate use of features intended to improve the usability and interactivity of web applications has resulted in the emergence of various threats, including Cross-Site Scripting (XSS) attacks. In this work, we developed ETSSDetector, a generic and modular web vulnerability scanner that automatically analyzes web applications to find XSS vulnerabilities. ETSSDetector is able to identify and analyze all data entry points of the application and generate specific code injection tests for each one. The results show that correctly filling the input fields with only valid information ensures better test effectiveness, increasing the detection rate of XSS attacks.
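
The valid-filling strategy is easy to sketch (illustrative Python; the field names and valid values are hypothetical): for each entry point, one test request carries the injection payload while every other field holds a plausibly valid value, which is what the paper found raises test effectiveness.

    def build_test_requests(form_fields, payload):
        # Generate one test per entry point: the targeted field carries the
        # payload, the rest are filled with valid-looking values so the
        # request passes input validation and actually reaches the sink.
        valid_value = {"email": "user@example.com", "age": "30", "name": "alice"}
        tests = []
        for target in form_fields:
            data = {f: valid_value.get(f, "test") for f in form_fields}
            data[target] = payload
            tests.append(data)
        return tests

    # e.g. build_test_requests(["name", "email", "comment"],
    #                          "<script>alert(1)</script>")
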

Gupta, M.K., Govil, M.C., Singh, G..  2014.  A context-sensitive approach for precise detection of cross-site scripting vulnerabilities. Innovations in Information Technology (INNOVATIONS), 2014 10th International Conference on. :7-12.

Currently, dependence on web applications is increasing rapidly for social communication, health services, financial transactions, and many other purposes. Unfortunately, the presence of cross-site scripting vulnerabilities in these applications allows malicious users to steal sensitive information, install malware, and perform various other malicious operations. Researchers have proposed various approaches and developed tools to detect XSS vulnerabilities in the source code of web applications. However, existing approaches and tools are not free from false positives and false negatives. In this paper, we propose an HTML context-sensitive approach, based on taint analysis and defensive programming, for the precise detection of XSS vulnerabilities in the source code of PHP web applications. It also provides automatic suggestions to improve the vulnerable source code. Preliminary experiments and results on test subjects show that the proposed approach is more efficient than existing ones.
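
The context-sensitive rule, that each output context needs its matching sanitizer, can be sketched as follows. This Python illustration simplifies the idea; the context names and required sanitizers are assumptions of ours, not the paper's exact rule set:

    from dataclasses import dataclass, field

    # Map each HTML output context to the sanitizer that neutralizes it; a
    # sanitizer adequate for one context may be useless in another.
    REQUIRED_SANITIZER = {
        "html_body": "htmlspecialchars",
        "attribute": "htmlspecialchars",
        "javascript": "json_encode",
    }

    @dataclass
    class Var:
        name: str
        tainted: bool = False                       # came from user input
        sanitized_by: set = field(default_factory=set)  # sanitizers on all paths

    def check_sink(var, context):
        # A tainted variable echoed into a context is only safe if the
        # sanitizer matching that context was applied on every path here.
        needed = REQUIRED_SANITIZER[context]
        if var.tainted and needed not in var.sanitized_by:
            return "XSS: %s echoed in %s context without %s" % (
                var.name, context, needed)
        return None

    # e.g. check_sink(Var("$_GET['q']", tainted=True), "html_body")
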

2015-05-04
Sah, S.K., Shakya, S., Dhungana, H..  2014.  A security management for Cloud based applications and services with Diameter-AAA. Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on. :6-11.

Cloud computing offers various services and web-based applications over the internet. With the tremendous growth in the development of cloud-based services, security is the main challenge and a central concern for cloud service providers today. This paper describes the management of security issues based on Diameter AAA mechanisms for authentication, authorization, and accounting (AAA), as demanded by cloud service providers, and focuses on the integration of Diameter AAA into the cloud system architecture.

2015-05-01
Abd Aziz, N., Udzir, N.I., Mahmod, R..  2014.  Performance analysis for extended TLS with mutual attestation for platform integrity assurance. Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2014 IEEE 4th Annual International Conference on. :13-18.

A web service is a web-based application accessed over the internet. Common web-based applications are deployed using web browsers and web servers. However, the security of web services is a major concern, since it was not widely studied and integrated at the design stage of the web service standards: security mechanisms are add-on modules rather than well-defined solutions within the standards. Consequently, various web services security solutions have been defined in order to protect interactions over a network. Remote attestation is an authentication technique proposed by the Trusted Computing Group (TCG) that enables the verification of the trusted environment of platforms and assures that the information they report is accurate. To incorporate this method into the web services framework and guarantee the trustworthiness and security of web-based applications, a new framework called TrustWeb is proposed. The TrustWeb framework integrates remote attestation into the SSL/TLS protocol to provide integrity information about the involved endpoint platforms. The framework enhances the TLS protocol with a mutual attestation mechanism, which can help to address the weaknesses of transferring sensitive computations, and offers a practical way to solve the remote trust issue in the client-server environment. In this paper, we describe the work of designing and building a framework prototype in which the attestation mechanism is integrated into the Mozilla Firefox browser and the Apache web server. We also present the framework solution and show an improvement in efficiency.
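
The mutual-attestation exchange layered on TLS can be sketched abstractly. In this Python illustration a real TPM quote, signed by an attestation identity key, is replaced by an HMAC stand-in, and the known-good measurement table is illustrative; the sketch only shows the nonce binding and verification that each endpoint performs over the established channel:

    import hashlib, hmac

    # Illustrative table of trusted platform measurements.
    KNOWN_GOOD = {"platform-A": hashlib.sha256(b"trusted-config").hexdigest()}

    def make_quote(platform_id, measurement, nonce, tpm_key):
        # Stand-in for a TPM quote: binds the platform measurement to the
        # peer's fresh nonce so a stale quote cannot be replayed.
        mac = hmac.new(tpm_key, (platform_id + measurement + nonce).encode(),
                       hashlib.sha256).hexdigest()
        return {"id": platform_id, "measurement": measurement,
                "nonce": nonce, "mac": mac}

    def verify_quote(quote, nonce, tpm_key):
        # Check integrity of the quote, freshness of the nonce, and that
        # the reported measurement matches a known-good platform state.
        expected = hmac.new(tpm_key,
                            (quote["id"] + quote["measurement"] + nonce).encode(),
                            hashlib.sha256).hexdigest()
        return (hmac.compare_digest(expected, quote["mac"])
                and quote["nonce"] == nonce
                and KNOWN_GOOD.get(quote["id"]) == quote["measurement"])

    # Run once in each direction over the established TLS channel to obtain
    # mutual attestation of both endpoint platforms.
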

2015-04-30
Biedermann, S., Ruppenthal, T., Katzenbeisser, S..  2014.  Data-centric phishing detection based on transparent virtualization technologies. Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on. :215-223.

We propose a novel phishing detection architecture based on transparent virtualization technologies and isolation of its own components. The architecture can be deployed as a security extension for virtual machines (VMs) running in the cloud. It uses fine-grained VM introspection (VMI) to extract, filter, and scale a color-based fingerprint of web pages processed by a browser from the VM's memory. By analyzing the human-perceptual similarity between the fingerprints, the architecture can reveal and mitigate phishing attacks based on redirection to spoofed web pages, and it can also detect “Man-in-the-Browser” (MitB) attacks. To the best of our knowledge, this architecture is the first anti-phishing solution leveraging virtualization technologies. We explain details of the design and implementation, and we show the results of an evaluation with real-world data.
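
One simple way such a color-based fingerprint and similarity measure could work is sketched below (illustrative Python; the paper's actual extraction and scaling via VMI is more involved, and the quantization depth here is an arbitrary choice):

    from collections import Counter

    def color_fingerprint(pixels, bits=2):
        # Quantize each RGB pixel to a coarse color bucket and build a
        # normalized histogram: layout details wash out while the overall
        # color impression, which a spoofed page must copy, remains.
        hist = Counter((r >> (8 - bits), g >> (8 - bits), b >> (8 - bits))
                       for r, g, b in pixels)
        total = sum(hist.values())
        return {bucket: n / total for bucket, n in hist.items()}

    def similarity(fp_a, fp_b):
        # Histogram intersection: 1.0 for identical color distributions.
        return sum(min(fp_a.get(k, 0.0), fp_b.get(k, 0.0))
                   for k in set(fp_a) | set(fp_b))

A rendered page whose fingerprint is close to a protected site's, yet served from a different origin, would then be flagged as a likely spoof.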