Bibliography
In this paper, we present an overview of the problems associated with cross-site scripting (XSS) in the graphical content of web applications. A brief analysis of vulnerabilities in graphics files and of the factors that make SVG images vulnerable to XSS attacks is presented. XML treatment methods are described and tested in practice. As a result, a set of rules for protecting the graphical content of websites and preventing XSS vulnerabilities is proposed.
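As a hedged illustration of the kind of XML treatment this abstract refers to, the sketch below strips script elements, event-handler attributes, and javascript: links from an SVG before it is served. The blocked tag and attribute lists are illustrative assumptions, not the paper's rule set.

```python
# Minimal sketch of sanitizing an SVG against script-based XSS.
# The blocked tags/attributes are illustrative, not the paper's exact rules.
import xml.etree.ElementTree as ET

BLOCKED_TAGS = {"script", "foreignObject"}  # common SVG script carriers
BLOCKED_ATTR_PREFIX = "on"                  # onload, onclick, ...

def sanitize_svg(svg_text: str) -> str:
    root = ET.fromstring(svg_text)
    for parent in list(root.iter()):
        for child in list(parent):
            # ElementTree tags carry a namespace, e.g. '{http://www.w3.org/2000/svg}script'
            if child.tag.rsplit("}", 1)[-1] in BLOCKED_TAGS:
                parent.remove(child)
        for attr in list(parent.attrib):
            local = attr.rsplit("}", 1)[-1].lower()
            if local.startswith(BLOCKED_ATTR_PREFIX):
                del parent.attrib[attr]       # drop event handlers
            elif local == "href" and parent.attrib[attr].strip().lower().startswith("javascript:"):
                del parent.attrib[attr]       # drop script URLs
    return ET.tostring(root, encoding="unicode")

payload = ('<svg xmlns="http://www.w3.org/2000/svg" onload="alert(1)">'
           '<script>alert(2)</script><circle r="5"/></svg>')
print(sanitize_svg(payload))  # the circle survives; the script element and onload are gone
```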
As the number of web users has grown exponentially, so has the number of attacks that target them for malicious purposes. One of the most commonly exploited vulnerabilities is known as cross-site scripting (XSS). Cross-site scripting refers to a client-side code injection attack whereby a malicious user executes malicious scripts (also commonly referred to as a malicious payload) in a legitimate website or web-based application. XSS is among the most rampant of web application vulnerabilities and occurs when a web-based application makes use of unvalidated or unencoded user input within the output it generates. In such instances, the victim is unaware that their data is being transferred from a website that he/she trusts to a different site controlled by the malicious user. In this paper we focus on type 1, or "non-persistent", cross-site scripting. With non-persistent cross-site scripting, malicious code or script is embedded in a web request and then partially or entirely echoed (or "reflected") by the web server, without encoding or validation, in the web response. The malicious code or script is then executed in the client's web browser, which can lead to several negative outcomes, such as the theft of session data and access to sensitive data within cookies. For this type of cross-site scripting to succeed, a malicious user must coerce a victim into clicking a link that triggers the non-persistent cross-site scripting attack. This is usually done through an email that encourages the user to click on a provided malicious link, or to visit a website that is fraught with malicious links. This paper discusses and elaborates how attack surfaces related to type 1, or non-persistent, cross-site scripting attacks can be reduced using secure development life cycle practices and techniques.
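A minimal sketch of the reflected flow described above, using Flask purely for illustration (the routes and parameter name are hypothetical): the first handler echoes the request parameter verbatim, the second applies output encoding.

```python
# Reflected (type 1) XSS in miniature: a request parameter echoed into the
# response without encoding, and the same endpoint with the fix applied.
import html
from flask import Flask, request

app = Flask(__name__)

@app.route("/search-vulnerable")
def search_vulnerable():
    q = request.args.get("q", "")
    # Echoed verbatim, so /search-vulnerable?q=<script>stealCookies()</script>
    # executes in the victim's browser.
    return f"<p>Results for: {q}</p>"

@app.route("/search-fixed")
def search_fixed():
    q = request.args.get("q", "")
    # Output encoding turns the payload into inert text.
    return f"<p>Results for: {html.escape(q)}</p>"

if __name__ == "__main__":
    app.run()
```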
Phishing, one of the most well-known cybercrime activities, is a deception of online users to steal their personal or confidential information by impersonating a legitimate website. Several machine learning-based strategies have been proposed to detect phishing websites. These techniques depend on the features extracted from website samples. However, few studies have actually considered efficient feature selection for detecting phishing attacks. In this work, we investigate an agreement on the definitive features which should be used in phishing detection. We apply Fuzzy Rough Set (FRS) theory as a tool to select the most effective features from three benchmarked data sets. The selected features are fed into three commonly used classifiers for phishing detection. To evaluate the FRS feature selection in developing a generalizable phishing detection, the classifiers are trained on a separate out-of-sample data set of 14,000 website samples. The maximum F-measure gained by FRS feature selection is 95% using Random Forest classification. Also, there are 9 universal features selected by FRS across all three data sets. The F-measure value using this universal feature set is approximately 93%, which is comparable to the full FRS performance. Since the universal feature set contains no features from third-party services, this finding implies that, with no inquiry from external sources, we can achieve faster phishing detection which is also robust toward zero-day attacks.
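A sketch of the evaluation pipeline shape, under stated assumptions: sklearn's SelectKBest stands in for the paper's Fuzzy Rough Set selection, synthetic data stands in for the website samples, and k=9 mirrors the size of the universal feature set.

```python
# Pipeline sketch: feature selection -> Random Forest -> F-measure, mirroring the
# evaluation in the abstract. SelectKBest is a stand-in for FRS selection.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Placeholder data in lieu of the benchmarked phishing data sets.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=9, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

selector = SelectKBest(mutual_info_classif, k=9).fit(X_tr, y_tr)  # 9 ~ universal set size
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(selector.transform(X_tr), y_tr)
print("F-measure:", f1_score(y_te, clf.predict(selector.transform(X_te))))
```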
Phishing is a security attack that acquires personal information such as passwords, credit card details or other account details of a user by means of websites or emails. Phishing websites look similar to the legitimate ones, which makes it difficult for a layman to differentiate between them. As per the reports of the Anti-Phishing Working Group (APWG) published in December 2018, phishing against banking services and payment processors was high. Almost all phishing URLs use HTTPS and use redirects to avoid detection. This paper presents a focused literature survey of methods available to detect phishing websites. A comparative study of in-use anti-phishing tools was carried out and their limitations were acknowledged. We analyzed the URL-based features used in the past and improved their definitions as per the current scenario, which is our major contribution. Also, a step-wise procedure for designing an anti-phishing model is discussed, to construct an efficient framework, which adds to our contribution. Observations made from this study are stated along with recommendations on existing systems.
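A hedged sketch of URL-based feature extraction of the kind the survey analyzes; the feature list below is a small illustrative subset, not the paper's improved definitions.

```python
# Extract a few classic URL-based phishing features. The set shown here is a
# small illustrative subset of those discussed in the literature.
import re
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return {
        "uses_https": parsed.scheme == "https",   # phishers increasingly use HTTPS too
        "url_length": len(url),
        "has_at_symbol": "@" in url,              # userinfo trick to hide the real host
        "host_is_ip": bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host)),
        "num_dots_in_host": host.count("."),
        "has_redirect_token": "redirect" in parsed.query.lower() or "//" in parsed.path,
    }

print(url_features("https://secure-login.example.com@203.0.113.7/pay?redirect=http://evil.test"))
```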
Many government organizations in Libya have started transferring traditional government services to e-government. These e-services will benefit a wide range of the public. However, the deployment of e-government brings many new security issues. Attackers take advantage of vulnerabilities in these e-services and conduct cyber attacks that result in data loss, service interruptions, privacy loss, financial loss, and other significant losses. The number of vulnerabilities in e-services has increased due to the complexity of e-service systems, a lack of secure programming practices, misconfiguration of systems, web application vulnerabilities, or not staying up to date with security patches. Unfortunately, there is a lack of studies assessing the current security level of Libyan government websites. Therefore, this study aims to assess the current security of 16 Libyan government websites using a penetration testing framework. In this assessment, no exploits were committed or attempted on the websites. The penetration testing framework (pen test) has five main phases: Reconnaissance, Scanning, Enumeration, Vulnerability Assessment, and SSL encryption evaluation. The aim of a security assessment is to discover vulnerabilities that could be exploited by attackers. We also conducted a Content Analysis phase for all websites, in which we searched the government websites for information on the implementation of security and privacy policies. The aim is to determine whether the websites are aware of currently accepted standards for security and privacy. From our security assessment results of the 16 Libyan government websites, we compared the websites based on the number of vulnerabilities found and the level of security policies. We found only 9 websites with high and medium vulnerabilities. Many of these vulnerabilities are due to outdated software and systems, misconfiguration of systems, and not applying the latest security patches. These vulnerabilities could be used by cyber attackers to attack the systems and cause damage. Also, we found that 5 websites did not implement any SSL encryption for data transactions. Lastly, only 2 websites had published security and privacy policies on their websites. This seems to indicate that these websites were not concerned with current standards in security and privacy. Finally, we classify the 16 websites into 4 safety categories: highly unsafe, unsafe, somewhat unsafe, and safe. We found only 1 website with a highly unsafe ranking. Based on our findings, we conclude that the security level of the Libyan government websites is adequate, but can be further improved. However, immediate actions need to be taken to mitigate possible cyber attacks by fixing the vulnerabilities and implementing SSL encryption. The websites also need to publish their security and privacy policies so that users can trust them.
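As an illustration of the SSL-evaluation phase, the sketch below connects to a host and reports whether TLS is offered and which protocol version is negotiated; the host name is a placeholder, not one of the assessed sites.

```python
# Sketch of a TLS check of the kind such an assessment performs: does the site
# offer TLS at all, and which version/cipher is negotiated?
import socket
import ssl

def check_tls(host: str, port: int = 443, timeout: float = 5.0) -> dict:
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return {"tls": True, "version": tls.version(), "cipher": tls.cipher()[0]}
    except (OSError, ssl.SSLError) as exc:
        return {"tls": False, "error": str(exc)}

print(check_tls("example.gov.ly"))  # hypothetical host
```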
Bitcoin provides freshness properties by forming a blockchain where each block is associated with its timestamp and the previous block. Due to these properties, the Bitcoin protocol is being used as a decentralized, trusted, and secure timestamping service. Although Bitcoin participants that create new blocks cannot modify their order, they can manipulate timestamps almost undetected. This undermines the Bitcoin protocol as a reliable timestamping service. In particular, a newcomer that synchronizes the entire blockchain has little guarantee about the timestamps of the blocks. In this paper, we present a simple yet powerful mechanism that increases the reliability of Bitcoin timestamps. Our protocol can provide evidence that a block was created within a certain time range. The protocol is efficient, backward compatible, and, surprisingly, currently deployed SSL/TLS servers can act as reference time sources. The protocol has many applications and can be used for detecting various attacks against the Bitcoin protocol.
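For context on why such manipulation is possible: Bitcoin consensus only requires a block's timestamp to exceed the median of the previous 11 blocks' timestamps and to be at most about two hours ahead of network-adjusted time. A minimal sketch of that check (the sample timestamps are hypothetical):

```python
# Why Bitcoin timestamps are malleable: consensus bounds them only loosely.
import statistics

MAX_FUTURE_DRIFT = 2 * 60 * 60  # seconds the timestamp may run ahead of network time

def timestamp_is_consensus_valid(block_time: int, prev_11_times: list[int], network_time: int) -> bool:
    median_time_past = statistics.median(prev_11_times[-11:])
    return median_time_past < block_time <= network_time + MAX_FUTURE_DRIFT

# Hypothetical prior block times, one every 10 minutes. A block claiming a time
# half an hour *behind* its predecessor still validates:
prev = [1_700_000_000 + i * 600 for i in range(11)]
print(timestamp_is_consensus_valid(prev[-1] - 1800, prev, prev[-1] + 600))  # True
```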
Phishing has increased tremendously over the last few years and has become a serious threat to global security and the economy. Existing literature dealing with the problem of phishing is scarce. Phishing is a deception technique that uses a combination of technology and social engineering to acquire sensitive information such as online banking passwords, credit card or bank account details [2]. Phishing can be carried out through emails and websites to collect confidential information. Phishers design fraudulent websites which look similar to the legitimate websites and lure the user into visiting the malicious website. Therefore, users must be aware of malicious websites to protect their sensitive data [1]. But it is very difficult to distinguish between a legitimate and a fake website, especially for non-technical users [4]. Moreover, phishing sites are growing rapidly. The aim of this paper is to demonstrate phishing detection using fuzzy logic and to interpret the results using different defuzzification methods.
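A minimal sketch of the final fuzzy step, assuming illustrative triangular memberships: two defuzzification methods produce different crisp phishing scores for the same aggregated fuzzy output, which is exactly the kind of comparison the stated aim implies.

```python
# Two defuzzification methods applied to the same aggregated fuzzy output.
# Membership shapes and clipping levels are illustrative assumptions.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

x = np.linspace(0, 1, 501)
# Aggregated output set: 'legitimate' clipped at 0.2, 'phishing' clipped at 0.8.
mu = np.maximum(np.minimum(tri(x, 0.0, 0.2, 0.5), 0.2),
                np.minimum(tri(x, 0.5, 0.8, 1.0), 0.8))

centroid = np.sum(x * mu) / np.sum(mu)      # centre-of-gravity defuzzification
mean_of_max = x[mu == mu.max()].mean()      # mean-of-maximum defuzzification
print(f"centroid={centroid:.3f}, mean_of_maximum={mean_of_max:.3f}")
```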
Text-based CAPTCHAs are still commonly used to attempt to prevent automated access to web services. By displaying an image of distorted text, they attempt to create a challenge image that OCR software cannot interpret correctly, but to which a human user can easily determine the correct response. This work focuses on a CAPTCHA used by a popular Chinese-language question-and-answer website and how resilient it is to modern machine learning methods. While the majority of text-based CAPTCHAs focus on transcription tasks, the CAPTCHA solved in this work is based on localization of inverted symbols in a distorted image. A convolutional neural network (CNN) was created to evaluate the likelihood that a region of the image belongs to an inverted character. It is used with a feature map and clustering to identify potential locations of inverted characters. Training of the CNN was performed using curriculum learning and compared to other potential training methods. The proposed method was able to determine the correct response in 95.2% of cases on a simulated CAPTCHA and 67.6% on a set of real CAPTCHAs. Potential methods to increase the difficulty of the CAPTCHA and the success rate of the automated solver are considered.
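A sketch of the patch-classifier idea in PyTorch; the architecture, patch size, and output interpretation are illustrative guesses, not the paper's exact network.

```python
# A small CNN that scores an image patch for the likelihood of containing an
# inverted character. Architecture and 32x32 patch size are illustrative.
import torch
import torch.nn as nn

class InvertedCharCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 8 * 8, 1))

    def forward(self, patch):  # patch: (N, 1, 32, 32) grayscale
        return torch.sigmoid(self.head(self.features(patch)))  # P(inverted)

model = InvertedCharCNN()
scores = model(torch.rand(4, 1, 32, 32))  # four random patches
print(scores.squeeze(1))
```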
There are over 1 billion websites today, and most of them are designed using content management systems. Cybersecurity is one of the most discussed topics when it comes to web applications, and protecting the confidentiality and integrity of data has become paramount. SQLi is one of the techniques most commonly used by hackers to exploit a security vulnerability in a web application. In this paper, we compared SQLi vulnerabilities found on the three most commonly used content management systems using a vulnerability scanner called Nikto, followed by SQLMAP for penetration testing. This was carried out on default WordPress, Drupal and Joomla website pages installed on a LAMP server (localhost). Results showed that none of the content management systems was susceptible to SQLi attacks, but the scans gave warnings about other vulnerabilities that could be exploited. We also suggest practices that could be implemented to prevent SQL injections.
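A minimal sketch of driving both tools from a script, using only well-known flags (nikto -h, sqlmap -u/--batch); the target URL is a placeholder for the local CMS install, and both tools must already be installed.

```python
# Drive the two scanners from the described workflow against a local CMS page.
import subprocess

target = "http://localhost/wordpress/"

subprocess.run(["nikto", "-h", target])                       # general vulnerability scan
subprocess.run(["sqlmap", "-u", target + "?p=1", "--batch"])  # non-interactive SQLi test
```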
SQL injection is a well-known method of executing SQL queries and retrieving sensitive information from a website's connected database. It poses a threat to poorly coded applications in today's world, and SQL injection is still ranked among the top 10 vulnerabilities in 2018. To keep track of the vulnerabilities each website faces, we employ a tool called Acunetix, which allows us to find the vulnerabilities of a specific website and also suggests preventive measures. Using this implementation, we discover vulnerabilities in an actual website. Such a real-world implementation would be useful for instructional use in a foundational cybersecurity course.
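A self-contained demonstration of the attack class itself (not of Acunetix, which is a commercial scanner), using an in-memory SQLite database and the classic tautology payload; the table and query are illustrative.

```python
# Demonstration of SQL injection via string-built queries.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"  # classic tautology payload
query = f"SELECT * FROM users WHERE name = '{attacker_input}'"
print(db.execute(query).fetchall())  # returns every row despite the bogus name
```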
Secret passwords are very widely used for user authentication to websites, despite their known shortcomings. Most websites using passwords also implement password recovery to allow users to re-establish a shared secret if the existing value is forgotten; many such systems involve sending a password recovery email to the user, e.g. containing a secret link. The security of password recovery, and hence the entire user-website relationship, depends on the email being acted upon correctly; unfortunately, as we show, such emails are not always designed to maximise security and can introduce vulnerabilities into recovery. To better understand this serious practical security problem, we surveyed password recovery emails for 50 of the top English language websites. We investigated a range of security and usability issues for such emails, covering their design, structure and content (including the nature of the user instructions), the techniques used to recover the password, and variations in email content from one web service to another. Many well-known web services, including Facebook, Dropbox, and Microsoft, suffer from recovery email design, structure and content issues. This is, to our knowledge, the first study of its type reported in the literature. This study has enabled us to formulate a set of recommendations for the design of such emails.
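A hedged sketch of the secret-link pattern the study examines: a high-entropy, single-use recovery token with a short expiry. The URL, storage, and lifetime below are illustrative choices, not recommendations from the paper.

```python
# Generate and redeem a secret-link style recovery token.
import secrets
import time

TOKEN_LIFETIME = 30 * 60  # 30 minutes; illustrative choice
pending = {}              # token -> (user, expiry); stand-in for server-side storage

def issue_recovery_link(user: str) -> str:
    token = secrets.token_urlsafe(32)  # ~256 bits of entropy
    pending[token] = (user, time.time() + TOKEN_LIFETIME)
    return f"https://example.com/recover?token={token}"

def redeem(token: str):
    user, expiry = pending.pop(token, (None, 0))  # pop makes the token single-use
    return user if time.time() < expiry else None

link = issue_recovery_link("alice")
print(link)
```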
The advent of HTML5 revives the life of the cross-site scripting attack (XSS) on the web. Cross-document messaging, local storage, attribute abuse, input validation, inline multimedia and SVG emerge as likely targets for serious threats. The various new tags and attributes introduced can potentially be manipulated to exploit the data on a dynamic website. The XSS attack has retained a spot in every OWASP Top 10 security-risk list released over the past decade and was placed seventh in the OWASP Top 10 of 2017. XSS executes scripts containing untrusted data, without proper validation, between websites. XSS executes scripts in the victim's browser which can hijack user sessions, deface websites, or redirect the user to a malicious site. This paper focuses on the development of a browser extension for the popular Google Chromium browser that keeps track of various attack vectors. These vectors primarily include HTML5 tags and attributes that may be used maliciously. The developed plugin alerts users whenever a possible XSS attack is discovered while the user accesses a particular website.
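A sketch of the kind of check such an extension performs, written here in Python for illustration rather than as extension code; the pattern list is a small illustrative subset of real-world HTML5 attack vectors.

```python
# Flag HTML5 tags and attributes that are common XSS carriers.
import re

SUSPICIOUS = [
    r"<svg[^>]*\bonload\s*=",      # SVG event-handler abuse
    r"<video[^>]*\bonerror\s*=",   # inline multimedia abuse
    r"<iframe[^>]*\bsrcdoc\s*=",   # HTML5 srcdoc injection
    r"\bformaction\s*=",           # attribute abuse on buttons/inputs
    r"javascript:",                # script URLs
]

def flag_vectors(document: str) -> list[str]:
    return [p for p in SUSPICIOUS if re.search(p, document, re.IGNORECASE)]

page = '<video src=x onerror="alert(1)"></video>'
print(flag_vectors(page))  # a non-empty result would trigger the user alert
```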
Today, maintaining the security of web applications is of great importance. Cross-site scripting (XSS) is a security flaw that can affect web applications. This flaw allows an attacker to add their own malicious code to HTML pages that are displayed to the user. Upon execution of the malicious code, the behavior of the system or website can be completely changed. The XSS vulnerability is used by attackers to steal the resources of a web browser, such as cookies, identity information, etc., by adding malicious JavaScript code to the victim's web applications. Attackers can force malicious code onto a user's web browser because web browsers support the execution of commands embedded in web pages to enable dynamic content. This work proposes a technique to detect and prevent manipulation that may occur in websites, and thus to prevent XSS attacks. Additionally, applications that detect XSS vulnerabilities were developed in four different languages, including ASP.NET, PHP, and Ruby, and the differences in detecting XSS attacks in the environments provided by different programming languages were examined.
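One complementary prevention worth illustrating here (not necessarily the paper's technique) is a Content-Security-Policy header, which makes the browser refuse injected inline scripts even when a server-side filter misses them; Flask is used only for illustration.

```python
# Attach a CSP header to every response so inline <script> payloads are blocked.
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_csp(response):
    # Allow scripts only from our own origin; injected inline scripts are refused.
    response.headers["Content-Security-Policy"] = "default-src 'self'; script-src 'self'"
    return response
```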
To add functionality and enhance the usability of web applications, JavaScript (JS) is frequently used. Despite the many advantages and usefulness of JS, an annoying fact is that many recent cyberattacks, such as drive-by-download attacks, exploit vulnerabilities through JS code. In general, malicious JS code is not easy to detect, because it sneakily exploits vulnerabilities of browsers and plugin software and attacks visitors of a website unknowingly. To protect users from such threats, the development of an accurate detection system for malicious JS is needed. Conventional approaches often employ signature- and heuristic-based methods, which are prone to zero-day attacks, i.e., they cause many false negatives and/or false positives. To address this problem, this paper adopts a machine-learning approach to feature learning called Doc2Vec, a neural network model that can learn contextual information from texts. The extracted features are given to a classifier model (e.g., SVM or neural networks), which judges the maliciousness of a JS code. In the performance evaluation, we use the D3M dataset (Drive-by-Download Data by Marionette) for malicious JS code and JSUPACK for benign code, for both training and testing. We then compare the performance to other feature-learning methods. Our experimental results show that the proposed Doc2Vec features provide better accuracy and faster classification in malicious JS code detection than conventional approaches.
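A sketch of the pipeline shape, assuming toy data: gensim's Doc2Vec embeds tokenized JS sources and an SVM judges maliciousness. The corpus, labels, and hyperparameters are placeholders for the D3M/JSUPACK training described above.

```python
# Doc2Vec feature learning -> SVM classification, on a toy stand-in corpus.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.svm import SVC

corpus = [("var x = eval ( payload )", 1),                     # toy 'malicious' sample
          ("function add ( a , b ) { return a + b ; }", 0)] * 50
docs = [TaggedDocument(code.split(), [i]) for i, (code, _) in enumerate(corpus)]

d2v = Doc2Vec(docs, vector_size=50, window=5, min_count=1, epochs=40)
X = [d2v.infer_vector(code.split()) for code, _ in corpus]
y = [label for _, label in corpus]

clf = SVC().fit(X, y)
print(clf.predict([d2v.infer_vector("eval ( document . cookie )".split())]))
```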
In this paper, we examine the recent trend towards in-browser mining of cryptocurrencies; in particular, the mining of Monero through Coinhive and similar codebases. In this model, a user visiting a website will download a JavaScript code that executes client-side in her browser, mines a cryptocurrency - typically without her consent or knowledge - and pays out the seigniorage to the website. Websites may consciously employ this as an alternative or to supplement advertisement revenue, may offer premium content in exchange for mining, or may be unwittingly serving the code as a result of a breach (in which case the seigniorage is collected by the attacker). The cryptocurrency Monero is preferred seemingly for its unfriendliness to large-scale ASIC mining that would drive browser-based efforts out of the market, as well as for its purported privacy features. In this paper, we survey this landscape, conduct some measurements to establish its prevalence and profitability, outline an ethical framework for considering whether it should be classified as an attack or business opportunity, and make suggestions for the detection, mitigation and/or prevention of browser-based mining for non-consenting users.
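A hedged sketch of signature-based detection of such miners; coinhive.min.js and CoinHive.Anonymous were real artifacts of the Coinhive service, while the scaffolding around them is illustrative.

```python
# Signature-based check for in-browser miners of the Coinhive type.
import re

MINER_SIGNATURES = [
    r"coinhive\.min\.js",    # script include used by Coinhive
    r"CoinHive\.Anonymous",  # miner constructor in page scripts
    r"cryptonight",          # Monero PoW routine often named in miner code
]

def looks_like_mining(page_source: str) -> bool:
    return any(re.search(sig, page_source, re.IGNORECASE) for sig in MINER_SIGNATURES)

html = '<script src="https://coinhive.com/lib/coinhive.min.js"></script>'
print(looks_like_mining(html))  # True
```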
The new Tor network (version 6.0.5) can help domestic users easily "climb over the wall" (bypass censorship), and of course criminals may also use it to visit deep and dark websites. The paper analyzes the core technology of the new Tor network: the new flow obfuscation technology based on the meek plug-in, and a real instance is used to verify the new Tor network's fast connectivity. On the basis of analyzing the traffic obfuscation mechanism and Tor-based network crime, it puts forward some measures to prevent the use of the Tor network to commit network crime.
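A hedged sketch of what a meek-based client configuration looks like, using the stem library; the bridge line, front domain, and binary path are illustrative assumptions, since real meek bridge details change over time and require tor and meek-client to be installed.

```python
# Launch a Tor client configured to use the meek pluggable transport.
# Bridge address and front domain are placeholders, not working bridge details.
import stem.process

tor = stem.process.launch_tor_with_config(config={
    "SocksPort": "9050",
    "UseBridges": "1",
    "ClientTransportPlugin": "meek exec /usr/bin/meek-client",
    # meek tunnels Tor traffic inside ordinary HTTPS to a fronted CDN domain:
    "Bridge": "meek 0.0.2.0:2 url=https://meek.example.net/ front=cdn.example.com",
})
tor.kill()
```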
Computing innovations and the development of the web reduce the effort required for various processes. Among the sectors that benefit most are electronic systems, banking, marketing, e-commerce and so on. These systems mostly involve continuous data exchange from one host to another. During this exchange there are many points where the confidentiality of the data and of the user can be lost; areas with a higher likelihood of attack are known as vulnerable zones. A web-based system is one such place, where many users perform their tasks according to the privileges allotted to them by the administrator. Here the attacker makes use of open areas, such as login forms or other input points, through which malicious script is injected into the system. These scripts aim at compromising the security constraints designed for the system. Among the attacks based on user-injected scripts in web communications are SQL injection and cross-site scripting (XSS). Such attacks must be detected and removed before they affect the security and confidentiality of the data. Over the last few years various solutions have been incorporated into systems to resolve such security issues in time. Input validation is one of the well-known approaches, but it suffers from performance drops and limited matching. Other mechanisms, such as sanitization and tainting, produce high false-report rates, indicating misclassified patterns. At their core, both involve string evaluation and mutation analysis of untrusted sources to fully interpret the impact and depth of an attack. This work proposes an improved rule-based attack detection scheme over selected message fields for effectively identifying malicious scripts. The approach blocks ordinary access for malicious sources using robust rule matching against a centralized repository that is regularly updated. At the initial level of evaluation, the work appears to provide a solid base for further research.
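A minimal sketch of the proposed scheme's shape under stated assumptions: per-field rules drawn from a registry (centralized and regularly refreshed in the proposal, a static dict here) are matched against selected message fields. The rules and field names are illustrative placeholders.

```python
# Rule-based detection over selected message fields.
import re

# In the proposed design this registry is centralized and updated regularly;
# here it is a static dict for illustration.
RULE_REGISTRY = {
    "username": [r"<\s*script", r"['\"]\s*or\s+['\"]?1['\"]?\s*=\s*['\"]?1"],
    "comment":  [r"<\s*script", r"on\w+\s*=", r"javascript:"],
}

def detect(fields: dict) -> list[tuple[str, str]]:
    hits = []
    for name, value in fields.items():
        for rule in RULE_REGISTRY.get(name, []):
            if re.search(rule, value, re.IGNORECASE):
                hits.append((name, rule))  # flag field + offending rule
    return hits

print(detect({"username": "admin' OR '1'='1", "comment": "nice post"}))
```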
It is a well-known fact that nowadays access to sensitive information is performed through the use of a three-tier architecture. Web applications have become a handy interface between users and data. As database-driven web applications are being used more and more every day, web applications are seen as a good target for attackers aiming to access sensitive data. If an organization fails to deploy effective data protection systems, it might be open to various attacks. Governmental organizations, in particular, should think beyond traditional security policies in order to achieve proper data protection. It is, therefore, imperative to perform security testing and make sure that there are no holes in the system before an attack happens. One of the most commonly used web application attacks is the insertion of an SQL query from the client side of the application. This attack is called SQL injection. Since an SQL injection vulnerability could affect any website or web application that makes use of an SQL-based database, the vulnerability is one of the oldest, most prevalent and most dangerous of web application vulnerabilities. To overcome SQL injection problems, there is a need to use different security systems. In this paper, we use 3 different scenarios for testing security systems. Using penetration testing techniques, we try to find out which is the best solution for protecting sensitive data within the government network of Kosovo.
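A minimal sketch of the standard defense such testing scenarios evaluate against: parameterized queries, shown on SQLite for self-containment, so a tautology payload from the client side is treated strictly as data. The table and data are illustrative.

```python
# Parameterized queries neutralize client-side SQL injection.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE citizens (name TEXT, tax_id TEXT)")
db.execute("INSERT INTO citizens VALUES ('alice', 'K-1001')")

user_input = "' OR '1'='1"

# The driver binds the input strictly as data, so the tautology payload
# matches nothing instead of dumping the table.
rows = db.execute("SELECT * FROM citizens WHERE name = ?", (user_input,)).fetchall()
print(rows)  # []
```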
Wikipedia is one of the most popular information platforms on the Internet. The user access pattern to Wikipedia pages depends on their relevance in the current worldwide social discourse. We use publicly available statistics about the top-1000 most popular pages on each day to estimate the efficiency of caches for support of the platform. While the data volumes are moderate, the main goal of Wikipedia caches is to reduce access times for page views and edits. We study the impact of the most popular pages on the achievable cache hit rate in comparison to Zipf request distributions, and we include daily dynamics in popularity.
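A small simulation in the spirit of that comparison, with illustrative parameters: requests drawn from a Zipf-like popularity law hitting an LRU cache; catalogue size, exponent, and cache size are assumptions, not values from the paper.

```python
# Measure the hit rate of an LRU cache under Zipf-distributed page popularity.
import random
from collections import OrderedDict

N, ALPHA, CACHE_SIZE, REQUESTS = 100_000, 0.9, 1000, 200_000
weights = [1 / (rank ** ALPHA) for rank in range(1, N + 1)]  # Zipf popularity law

cache, hits = OrderedDict(), 0
for page in random.choices(range(N), weights=weights, k=REQUESTS):
    if page in cache:
        hits += 1
        cache.move_to_end(page)        # LRU: refresh recency on a hit
    else:
        cache[page] = True
        if len(cache) > CACHE_SIZE:
            cache.popitem(last=False)  # evict the least recently used page
print(f"LRU hit rate: {hits / REQUESTS:.2%}")
```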