Biblio
Nowadays, phishing is one of the most common web threats, owing to the significant growth of the World Wide Web over time. Phishing attackers continually use new (zero-day) and sophisticated techniques to deceive online customers. Hence, an anti-phishing system must be real-time and fast, and must also leverage an intelligent phishing detection solution. Here, we develop a reliable detection system that can adaptively match the changing environment and evolving phishing websites. Our method is an online, feature-rich machine learning technique for discriminating between phishing and legitimate websites. Since the proposed approach extracts different types of discriminative features from URLs and webpage source code, it is an entirely client-side solution and does not require any third-party service. The experimental results highlight the robustness and competitiveness of our anti-phishing system in distinguishing phishing from legitimate websites.
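To make the feature-extraction step concrete, the sketch below shows the kind of client-side, URL-based features such a system might compute using only the Python standard library; the specific features, thresholds, and example URL are illustrative assumptions, not the authors' actual feature set.

    # Minimal sketch of client-side URL feature extraction for phishing detection.
    # The features (URL length, '@' symbol, IP-address host, hyphenated domain,
    # dot count, HTTPS use) are common in the phishing literature but are only
    # assumptions here, not the feature set used in the cited work.
    import re
    from urllib.parse import urlparse

    def extract_url_features(url: str) -> dict:
        parsed = urlparse(url)
        host = parsed.netloc
        return {
            "url_length": len(url),                      # long URLs often hide the real domain
            "has_at_symbol": "@" in url,                  # '@' can redirect to a different host
            "host_is_ip": bool(re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host)),
            "num_dots": host.count("."),                  # many subdomains can mimic trusted brands
            "has_hyphen_in_host": "-" in host,
            "uses_https": parsed.scheme == "https",
        }

    print(extract_url_features("http://paypal.com-secure-login.example.net/@verify"))

A full system would feed such feature vectors, together with features drawn from the page source code, into an online learner.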
Web applications are now one of the most common platforms for presenting data and delivering services over the World Wide Web. Many of the most widely used web application frameworks are written in PHP, and they have become prime targets because a vast number of servers worldwide run these applications. This growth in web application use has made them more attractive to both users and attackers. According to recent web security reports and research, cross-site scripting (XSS) is the most prevalent vulnerability in PHP web applications. XSS is an injection-type attack that can result in the theft of sensitive data, cookies, and sessions. Several tools and approaches focus on detecting this kind of vulnerability in PHP source code, yet it remains a persistent problem in PHP web applications. This paper describes the popularity of PHP relative to other technologies and surveys the approaches used to detect the most common vulnerability in PHP web applications, XSS. A discussion, conclusions, and future research directions within this domain are also presented.
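As a rough illustration of what static XSS detection in PHP source can look like, the sketch below flags lines that echo request parameters without an escaping call. Real detectors perform taint analysis over parsed code; this regex heuristic, including the sample PHP snippet, is an assumed example for illustration only and will both miss and mis-flag cases.

    # Naive static scan of PHP source for reflected XSS: flag output sinks
    # (echo/print) fed by request parameters without an escaping function.
    import re

    TAINTED = r"\$_(GET|POST|REQUEST|COOKIE)"
    SINK = re.compile(r"\b(echo|print)\b[^;]*" + TAINTED)
    SANITIZED = re.compile(r"(htmlspecialchars|htmlentities)\s*\([^;]*" + TAINTED)

    def scan_php_source(source: str):
        # Report (line number, line) pairs where tainted data reaches a sink unescaped.
        findings = []
        for lineno, line in enumerate(source.splitlines(), start=1):
            if SINK.search(line) and not SANITIZED.search(line):
                findings.append((lineno, line.strip()))
        return findings

    php_code = '''<?php
    echo "Hello, " . $_GET["name"];                    // reflected XSS: unescaped output
    echo htmlspecialchars($_GET["name"], ENT_QUOTES);  // escaped output, not flagged
    '''

    for lineno, line in scan_php_source(php_code):
        print(f"possible XSS at line {lineno}: {line}")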
The term "artificial intelligence" is a buzzword today and is heavily used to market products, services, research, conferences, and more. It is scientifically disputed which types of products and services do actually qualify as "artificial intelligence" versus simply advanced computer technologies mimicking aspects of natural intelligence. Yet it is undisputed that, despite often inflationary use of the term, there are mainstream products and services today that for decades were only thought to be science fiction. They range from industrial automation, to self-driving cars, robotics, and consumer electronics for smart homes, workspaces, education, and many more contexts. Several technological advances enable what is commonly referred to as "artificial intelligence". It includes connected computers and the Internet of Things (IoT), open and big data, low cost computing and storage, and many more. Yet regardless of the definition of the term artificial intelligence, technological advancements in this area provide immense potential, especially for people with disabilities. In this paper we explore some of these potential in the context of web accessibility. We review some existing products and services, and their support for web accessibility. We propose accessibility conformance evaluation as one potential way forward, to accelerate the uptake of artificial intelligence, to improve web accessibility.
The current state of the internet relies heavily on SSL/TLS and the certificate authority model. This model has systemic problems in both its design and its implementation: problems with certificate revocation, certificate authority governance, breaches, poor security practices, single points of failure, and root stores. This paper begins with a general introduction to SSL/TLS and a description of the role of certificates, certificate authorities, and root stores in the current model. It then explores problems with the current model and describes work being done to help mitigate them.
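For context, the sketch below shows how an ordinary TLS client depends on the certificate authority model: Python's default SSL context trusts the operating system's root store, and the handshake succeeds only if the server's certificate chain leads back to one of those roots. The host name is a placeholder, and the snippet is illustrative rather than taken from the paper.

    # Inspect the certificate a server presents; validation against the
    # platform's root store happens automatically during the handshake.
    import socket
    import ssl

    def inspect_certificate(host: str, port: int = 443):
        context = ssl.create_default_context()        # loads the platform's trusted root store
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()               # already validated against the root store
                print("subject:", dict(item[0] for item in cert["subject"]))
                print("issuer: ", dict(item[0] for item in cert["issuer"]))
                print("expires:", cert["notAfter"])

    inspect_certificate("example.org")

If the chain is broken, revoked roots are still trusted, or the issuing authority has been compromised, this client-side check offers no protection, which is exactly the class of problem the paper examines.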
In the last decade, numerous fake websites have been created on the World Wide Web to mimic trusted websites, with the aim of stealing financial assets from users and organizations. This form of online attack is called phishing, and it has cost the online community and its various stakeholders hundreds of millions of dollars. Therefore, effective countermeasures that can accurately detect phishing are needed. Machine learning (ML) is a popular tool for data analysis and has recently shown promising results in combating phishing when contrasted with classic anti-phishing approaches such as awareness workshops, visualization, and legal solutions. This article investigates the applicability of ML techniques to detecting phishing attacks and describes their pros and cons. In particular, different types of ML techniques are investigated to reveal which are suitable as anti-phishing tools. More importantly, we experimentally compare a large number of ML techniques on real phishing datasets with respect to different metrics. The purpose of the comparison is to reveal the advantages and disadvantages of ML predictive models and to show their actual performance against phishing attacks. The experimental results show that covering approach models are the most appropriate anti-phishing solutions, especially for novice users, because of their simple yet effective knowledge bases in addition to their good phishing detection rate.
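The sketch below mirrors the style of comparison the article describes: several classifiers evaluated on the same phishing feature matrix under a common metric. The feature matrix X and labels y are assumed to come from a labelled phishing dataset (e.g. URL and page features, 1 = phishing), and since scikit-learn ships no covering (rule-induction) learner such as RIPPER, an interpretable decision tree stands in for that family; the model choices and settings are assumptions, not those of the article.

    # Compare several classifiers on one phishing dataset with 10-fold cross-validation.
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier

    def compare_models(X, y):
        models = {
            "decision tree": DecisionTreeClassifier(max_depth=5),   # interpretable stand-in
            "random forest": RandomForestClassifier(n_estimators=100),
            "naive Bayes": GaussianNB(),
            "k-NN": KNeighborsClassifier(n_neighbors=5),
        }
        for name, model in models.items():
            scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
            print(f"{name:>14}: {scores.mean():.3f} +/- {scores.std():.3f}")

Swapping in other metrics (precision, recall, F1) via the scoring argument reproduces the kind of multi-metric comparison the article reports.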