Biblio

Filters: Keyword is World Wide Web
2021-02-10
Kerschbaumer, C., Ritter, T., Braun, F..  2020.  Hardening Firefox against Injection Attacks. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :653–663.
Web browsers display content in the form of HTML, CSS and JavaScript retrieved from the world wide web. The loaded content is subject to the web security model and considered untrusted and potentially malicious. To complicate security matters, Firefox uses the same technologies to render its user interface as it does to render untrusted web content, which blurs the distinction between the two privilege levels. Getting interactions between the two correct turns out to be complicated and has led to numerous real-world security vulnerabilities. We study those vulnerabilities to discover common threats and explain how we address them systematically to harden Firefox.
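
The hardening described in this abstract centres on removing injection sinks such as inline scripts and eval() from privileged pages. As a rough illustration of that idea, and not the authors' tooling, the following Python sketch flags Content-Security-Policy directives that would still permit script injection:

```python
# Illustrative sketch (not the paper's tooling): flag CSP directives that
# would permit script injection, similar in spirit to auditing the policies
# applied to privileged browser pages.
RISKY_SOURCES = {"'unsafe-inline'", "'unsafe-eval'", "data:", "*"}

def audit_csp(policy: str) -> list[str]:
    """Return a list of warnings for script-related directives."""
    warnings = []
    for directive in policy.split(";"):
        parts = directive.strip().split()
        if not parts:
            continue
        name, sources = parts[0], set(parts[1:])
        if name in ("script-src", "default-src"):
            for risky in RISKY_SOURCES & sources:
                warnings.append(f"{name} allows {risky}")
    return warnings

if __name__ == "__main__":
    print(audit_csp("default-src 'none'; script-src 'self' 'unsafe-inline'"))
    # -> ["script-src allows 'unsafe-inline'"]
```
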
2021-02-08
Saleh, A. H., Yousif, A. S., Ahmed, F. Y. H..  2020.  Information Hiding for Text Files by Adopting the Genetic Algorithm and DNA Coding. 2020 IEEE 10th Symposium on Computer Applications & Industrial Electronics (ISCAIE). :220–223.
Information hiding is the process of concealing data within different digital media such as images, audio, video, and text. Many techniques exist for hiding information in images; in this paper, a new method is proposed for hiding data in the form of a text file. A transposition cipher is first employed for encryption, which builds an encrypted text and increases security against possible attacks while it is sent over the World Wide Web. A genetic algorithm is then applied to adjust the encoded text, and DNA coding is used to create an encrypted text that is difficult to detect before it is embedded in the image, with little effect on the image's visual quality. The proposed method outperforms the state of the art in terms of efficiently retrieving the embedded messages. The performance evaluation recorded high visual quality scores for SNR (signal-to-noise ratio), PSNR (peak signal-to-noise ratio), and MSE (mean square error).
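
The abstract reports image quality in terms of MSE and PSNR. As a minimal sketch of those metrics, assuming 8-bit images represented as NumPy arrays (this is not the authors' code), they can be computed as follows:

```python
# Minimal sketch of the quality metrics mentioned in the abstract (MSE, PSNR),
# assuming 8-bit grayscale images given as NumPy arrays; not the authors' code.
import numpy as np

def mse(original: np.ndarray, stego: np.ndarray) -> float:
    """Mean square error between cover and stego image."""
    diff = original.astype(np.float64) - stego.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, stego: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means less visible distortion."""
    error = mse(original, stego)
    if error == 0:
        return float("inf")
    return 10.0 * np.log10((max_value ** 2) / error)

if __name__ == "__main__":
    cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    stego = cover.copy()
    stego[0, 0] ^= 1  # flip one least-significant bit, as an embedding might
    print(f"MSE={mse(cover, stego):.6f}, PSNR={psnr(cover, stego):.2f} dB")
```
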
2020-11-20
Lavrenovs, A., Melón, F. J. R..  2018.  HTTP security headers analysis of top one million websites. 2018 10th International Conference on Cyber Conflict (CyCon). :345–370.
We present research on the security of the most popular websites, ranked according to Alexa's top one million list, based on an HTTP response headers analysis. For each of the domains included in the list, we made four different requests: an HTTP/1.1 request to the domain itself and to its "www" subdomain, and two more equivalent HTTPS requests. Redirections were always followed. A detailed discussion of the request process and main outcomes is presented, including X.509 certificate issues and a comparison of results with equivalent HTTP/2 requests. The body of the responses was discarded, and the HTTP response header fields were stored in a database. We analysed the prevalence of the most important response headers related to web security aspects. In particular, we took into account Strict-Transport-Security, Content-Security-Policy, X-XSS-Protection, X-Frame-Options, Set-Cookie (for session cookies) and X-Content-Type-Options. We also reviewed the contents of response HTTP headers that could potentially reveal unwanted information, such as Server (and related headers), Date and Referrer-Policy. This research offers an up-to-date survey of the current prevalence of web security policies implemented through HTTP response headers and concludes that the most popular sites tend to implement them noticeably more often than less popular ones. Equally, HTTPS sites seem to be far more eager to implement those policies than HTTP-only websites. A comparison with previous works shows that web security policies based on HTTP response headers are continuously growing, but adoption is still far from satisfactory.
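
The survey methodology amounts to requesting each domain, following redirects, and recording which security-related response headers are present. A hedged sketch of such a check, using the third-party requests library and a placeholder domain list rather than the Alexa data, could look like this:

```python
# Hedged sketch of a header survey in the spirit of the study (not the authors'
# crawler), using the third-party `requests` library; the domain list is a placeholder.
import requests

SECURITY_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-XSS-Protection",
    "X-Frame-Options",
    "X-Content-Type-Options",
]

def check_headers(domain: str, scheme: str = "https") -> dict:
    """Fetch the domain (following redirects) and report which security headers are set."""
    response = requests.get(f"{scheme}://{domain}/", timeout=10, allow_redirects=True)
    return {name: response.headers.get(name) for name in SECURITY_HEADERS}

if __name__ == "__main__":
    for domain in ["example.com", "www.example.com"]:  # placeholder list
        print(domain, check_headers(domain))
```
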
2020-06-01
Mohd Ariffin, Noor Afiza, Mohd Sani, Noor Fazlida.  2018.  A Multi-factor Biometric Authentication Scheme Using Attack Recognition and Key Generator Technique for Security Vulnerabilities to Withstand Attacks. 2018 IEEE Conference on Application, Information and Network Security (AINS). :43–48.
Security plays an important role in many authentication applications. In the modern era, information sharing is boundless and much easier to access with the introduction of the Internet and the World Wide Web. Although this can be considered a good thing, issues such as privacy and data integrity arise due to the lack of control and authority. For this reason, the concept of data security was introduced. Data security can be categorized into two areas, secrecy and authentication; this research focuses on the authentication side of data security. There has been substantial research discussing multi-factor authentication schemes, but most of it does not entirely protect data against all types of attacks. Most current research also focuses only on improving the security of authentication while neglecting other important aspects such as the accuracy and efficiency of the system; current multi-factor authentication schemes were simply not designed with security, accuracy, and efficiency as their main focus. To overcome this issue, this research proposes a new multi-factor authentication scheme that can withstand external attacks exploiting known security vulnerabilities as well as attacks based on user behavior, while still maintaining an optimum level of accuracy and efficiency. The experimental results show that the proposed scheme withstands these attacks, owing to the implementation of the attack recognition and key generator technique together with the use of multiple factors in the proposed scheme.
2020-04-17
Szabo, Roland, Gontean, Aurel.  2019.  The Creation Process of a Secure and Private Mobile Web Browser with no Ads and no Popups. 2019 IEEE 25th International Symposium for Design and Technology in Electronic Packaging (SIITME). :232–235.
The aim of this work is to create a new style of web browser. Other web browsers can have security issues, show many ads and popups, and fill up the cache by logging a large history of visited web pages. This app is a lightweight web browser that is both secure and private, with no ads and no popups, showing just the plain Internet in full screen. The app does not store user data, so web pages are navigated in incognito mode. The app was made to open any new HTML5 web page in a secure and private mode, with a strong focus on the loading speed of the web pages.
2020-04-10
Yadollahi, Mohammad Mehdi, Shoeleh, Farzaneh, Serkani, Elham, Madani, Afsaneh, Gharaee, Hossein.  2019.  An Adaptive Machine Learning Based Approach for Phishing Detection Using Hybrid Features. 2019 5th International Conference on Web Research (ICWR). :281–286.

Nowadays, phishing is one of the most common web threats, given the significant growth of the World Wide Web over time. Phishing attackers always use new (zero-day) and sophisticated techniques to deceive online customers. Hence, it is necessary that an anti-phishing system be real-time and fast and also leverage an intelligent phishing detection solution. Here, we develop a reliable detection system that can adaptively match the changing environment and phishing websites. Our method is an online and feature-rich machine learning technique to discriminate between phishing and legitimate websites. Since the proposed approach extracts different types of discriminative features from URLs and webpage source code, it is an entirely client-side solution and does not require any service from a third party. The experimental results highlight the robustness and competitiveness of our anti-phishing system in distinguishing phishing from legitimate websites.
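
The described approach extracts discriminative features from URLs and page source and trains an online, adaptive classifier. A minimal sketch of that pipeline, with hypothetical hand-crafted URL features and scikit-learn's incrementally trainable SGDClassifier standing in for the paper's model, is shown below:

```python
# Minimal client-side sketch (not the paper's system): hand-crafted URL features
# fed to an online (incrementally trained) linear classifier from scikit-learn.
# Feature choices, URLs, and labels are illustrative assumptions.
from urllib.parse import urlparse
import numpy as np
from sklearn.linear_model import SGDClassifier

def url_features(url: str) -> list[float]:
    parsed = urlparse(url)
    host = parsed.netloc
    return [
        float(len(url)),                 # long URLs are a weak phishing signal
        float(host.count(".")),          # many subdomains
        float("@" in url),               # credentials trick in the URL
        float("-" in host),              # hyphenated look-alike domains
        float(parsed.scheme != "https"), # no TLS
    ]

urls = ["https://example.com/login", "http://secure-paypa1.example.net/@verify"]
labels = [0, 1]  # 0 = legitimate, 1 = phishing (toy labels)

clf = SGDClassifier(random_state=0)
X = np.array([url_features(u) for u in urls])
clf.partial_fit(X, labels, classes=[0, 1])  # can keep adapting as new pages arrive
print(clf.predict(np.array([url_features("http://login-bank.example.org/@x")])))
```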

2020-04-03
Fawaz, Kassem, Linden, Thomas, Harkous, Hamza.  2019.  Invited Paper: The Applications of Machine Learning in Privacy Notice and Choice. 2019 11th International Conference on Communication Systems & Networks (COMSNETS). :118–124.
For more than two decades since the rise of the World Wide Web, the “Notice and Choice” framework has been the governing practice for the disclosure of online privacy practices. The emergence of new forms of user interaction, such as voice, and the enforcement of new regulations, such as the EU's recent General Data Protection Regulation (GDPR), promise to change this privacy landscape drastically. This paper discusses the challenges of providing privacy stakeholders with privacy awareness and control in this changing landscape. We also present our recent research on utilizing machine learning to analyze privacy policies and settings.
2019-12-16
Marashdih, Abdalla Wasef, Zaaba, Zarul Fitri, Suwais, Khaled.  2018.  Cross Site Scripting: Investigations in PHP Web Application. 2018 International Conference on Promising Electronic Technologies (ICPET). :25–30.

Web applications are now considered one of the most common platforms for presenting data and delivering services throughout the World Wide Web. A number of the most commonly utilised frameworks for web applications are written in PHP, and they have become main targets because a vast number of servers throughout the world run these applications. This increase in web application utilisation has made them more attractive to both users and hackers. According to the latest web security reports and research, cross site scripting (XSS) is the most popular vulnerability in PHP web applications. XSS is considered an injection type of attack, which results in the theft of sensitive data, cookies, and sessions. Several tools and approaches have focused on detecting this kind of vulnerability in PHP source code, yet it remains a current problem in PHP web applications. This paper describes the popularity of PHP technology among other technologies and highlights the approaches used to detect the most common vulnerability in PHP web applications, namely XSS. In addition, a discussion and conclusion with future directions for research within this domain are presented.
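
As a rough illustration of the kind of source-level detection the surveyed approaches perform (this is not one of those tools), a naive Python scan might flag PHP statements that echo request data without an escaping call:

```python
# Rough illustration only (not one of the surveyed tools): a naive scan of PHP
# source for statements that echo request data without an escaping call, one of
# the classic reflected-XSS patterns. Real detectors do far more precise analysis.
import re

# Matches e.g.  echo $_GET['q'];  or  print $_REQUEST["name"];
SINK = re.compile(r'\b(echo|print)\b[^;]*\$_(GET|POST|REQUEST|COOKIE)\b', re.IGNORECASE)
ESCAPED = re.compile(r'htmlspecialchars|htmlentities', re.IGNORECASE)

def scan_php(source: str) -> list[tuple[int, str]]:
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if SINK.search(line) and not ESCAPED.search(line):
            findings.append((lineno, line.strip()))
    return findings

sample = 'echo "Hello, " . $_GET["name"];\necho htmlspecialchars($_GET["name"]);'
print(scan_php(sample))  # reports only the unescaped first line
```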

2019-01-31
Abou-Zahra, Shadi, Brewer, Judy, Cooper, Michael.  2018.  Artificial Intelligence (AI) for Web Accessibility: Is Conformance Evaluation a Way Forward? Proceedings of the Internet of Accessible Things. :20:1–20:4.

The term "artificial intelligence" is a buzzword today and is heavily used to market products, services, research, conferences, and more. It is scientifically disputed which types of products and services actually qualify as "artificial intelligence" versus simply advanced computer technologies mimicking aspects of natural intelligence. Yet it is undisputed that, despite often inflationary use of the term, there are mainstream products and services today that for decades were only thought to be science fiction. They range from industrial automation, to self-driving cars, robotics, and consumer electronics for smart homes, workspaces, education, and many more contexts. Several technological advances enable what is commonly referred to as "artificial intelligence", including connected computers and the Internet of Things (IoT), open and big data, low-cost computing and storage, and many more. Regardless of the definition of the term, technological advancements in this area provide immense potential, especially for people with disabilities. In this paper we explore some of this potential in the context of web accessibility. We review some existing products and services and their support for web accessibility, and we propose accessibility conformance evaluation as one potential way forward to accelerate the uptake of artificial intelligence for improving web accessibility.
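
To make the notion of accessibility conformance evaluation concrete, a hypothetical sketch of one small automated check, flagging images without text alternatives using BeautifulSoup, might look like this (it is not a tool from the paper):

```python
# Hypothetical sketch of one small conformance check (WCAG: images need text
# alternatives), using the third-party BeautifulSoup library; not from the paper.
from bs4 import BeautifulSoup

def images_missing_alt(html: str) -> list[str]:
    """Return the src of every <img> with no alt attribute. An empty alt is
    allowed for decorative images and is therefore not flagged here."""
    soup = BeautifulSoup(html, "html.parser")
    return [img.get("src", "?") for img in soup.find_all("img") if img.get("alt") is None]

html = '<img src="logo.png" alt="Company logo"><img src="chart.png">'
print(images_missing_alt(html))  # ['chart.png']
```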

2018-09-12
Rahayuda, I. G. S., Santiari, N. P. L..  2017.  Crawling and cluster hidden web using crawler framework and fuzzy-KNN. 2017 5th International Conference on Cyber and IT Service Management (CITSM). :1–7.
Today almost everyone uses the internet for daily activities, whether social, academic, work, or business. However, only a few of us are aware that we generally access only a small part of the internet. The Internet, or the World Wide Web, is divided into several levels, such as the surface web, the deep web, and the dark web, and accessing the deep or dark web is a dangerous thing. This research examines surface web content and deep web content. For a faster and safer search, this research uses a crawler framework. The search process yields various kinds of data that are stored in a database, and a classification process is then applied to the database to determine the level of each website. Classification is performed using the fuzzy-KNN method, which classifies the crawling results contained in the database. The crawler framework generates data in the form of URL addresses, page information, and other attributes. The crawled data are compared with predefined sample data, and the fuzzy-KNN classification yields the web level of each entry based on the values of the words specified in the sample data. From the tests conducted on several datasets, the results were 20% surface web, 7.5% bergie web, 20% deep web, 22.5% charter web, and 30% dark web. Since the research was performed only on limited test data, more data need to be added in order to obtain better results. Better crawler frameworks can speed up crawling, especially at certain web levels, because not all crawler frameworks can work at a particular web level; the Tor browser can be used, but the crawler framework sometimes cannot work with it.
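
The classification step relies on fuzzy-KNN, in which class membership is a distance-weighted vote of the k nearest labelled samples. A minimal sketch in that style (in the spirit of Keller et al.'s algorithm, not the authors' implementation), with toy features and labels rather than the paper's crawled data, follows:

```python
# Minimal fuzzy-KNN sketch: class membership is a distance-weighted vote of the
# k nearest labelled samples. Features and labels below are toy placeholders,
# not the paper's crawled data.
import numpy as np

def fuzzy_knn(X_train, y_train, x, k=3, m=2.0, classes=None, eps=1e-9):
    classes = sorted(set(y_train)) if classes is None else classes
    dists = np.linalg.norm(np.asarray(X_train) - np.asarray(x), axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] ** (2.0 / (m - 1.0)) + eps)
    memberships = {}
    for c in classes:
        same = np.array([1.0 if y_train[i] == c else 0.0 for i in nearest])
        memberships[c] = float(np.sum(weights * same) / np.sum(weights))
    return memberships  # e.g. {'dark': 0.1, 'surface': 0.9}

X = [[0.1, 0.0], [0.2, 0.1], [0.9, 0.8], [1.0, 0.9]]
y = ["surface", "surface", "dark", "dark"]
print(fuzzy_knn(X, y, [0.15, 0.05], k=3))
```
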
2018-02-06
Berkowsky, J. A., Hayajneh, T..  2017.  Security Issues with Certificate Authorities. 2017 IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON). :449–455.

The current state of the internet relies heavily on SSL/TLS and the certificate authority model. This model has systematic problems, both in its design and in its implementation. There are problems with certificate revocation, certificate authority governance, breaches, poor security practices, single points of failure, and root stores. This paper begins with a general introduction to SSL/TLS and a description of the role of certificates, certificate authorities and root stores in the current model. It then explores problems with the current model and describes work being done to help mitigate these problems.
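
The trust decisions the paper examines ultimately rest on the certificate chain a server presents and the root store used to validate it. As a hedged sketch (not taken from the paper), the Python standard library can retrieve and summarize a server's certificate like this, with the hostname as a placeholder:

```python
# Hedged sketch (not from the paper): fetch a server's certificate with the
# standard library and print issuer and validity, the pieces a client trusts
# the CA model to get right. The hostname is a placeholder.
import socket
import ssl

def inspect_certificate(hostname: str, port: int = 443) -> dict:
    context = ssl.create_default_context()  # validates against the platform root store
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return {
        "subject": cert.get("subject"),
        "issuer": cert.get("issuer"),
        "notBefore": cert.get("notBefore"),
        "notAfter": cert.get("notAfter"),
    }

if __name__ == "__main__":
    print(inspect_certificate("example.com"))
```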

2017-12-20
Abdelhamid, N., Thabtah, F., Abdel-jaber, H..  2017.  Phishing detection: A recent intelligent machine learning comparison based on models content and features. 2017 IEEE International Conference on Intelligence and Security Informatics (ISI). :72–77.

In the last decade, numerous fake websites have been developed on the World Wide Web to mimic trusted websites, with the aim of stealing financial assets from users and organizations. This form of online attack is called phishing, and it has cost the online community and the various stakeholders hundreds of millions of dollars. Therefore, effective countermeasures that can accurately detect phishing are needed. Machine learning (ML) is a popular tool for data analysis and recently has shown promising results in combating phishing when contrasted with classic anti-phishing approaches, including awareness workshops, visualization, and legal solutions. This article investigates the applicability of ML techniques to detecting phishing attacks and describes their pros and cons. In particular, different types of ML techniques have been investigated to reveal the suitable options that can serve as anti-phishing tools. More importantly, we experimentally compare a large number of ML techniques on real phishing datasets and with respect to different metrics. The purpose of the comparison is to reveal the advantages and disadvantages of ML predictive models and to show their actual performance when it comes to phishing attacks. The experimental results show that Covering approach models are more appropriate as anti-phishing solutions, especially for novice users, because of their simple yet effective knowledge bases in addition to their good phishing detection rate.
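
The experimental comparison amounts to scoring many classifiers on phishing datasets under common metrics. A minimal, hypothetical harness in that spirit, using scikit-learn models and placeholder data instead of the real datasets, could look like this:

```python
# Illustrative comparison harness (not the paper's experiments): score several
# scikit-learn classifiers with cross-validation. X and y are random placeholders
# standing in for a real phishing feature matrix and labels.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                  # placeholder features
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # placeholder labels

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "logistic regression": LogisticRegression(max_iter=1000),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name:>20}: {scores.mean():.3f} +/- {scores.std():.3f}")
```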