Biblio

Found 100 results

Filters: Keyword is Web sites
2020-01-20
Huang, Yongjie, Yang, Qiping, Qin, Jinghui, Wen, Wushao.  2019.  Phishing URL Detection via CNN and Attention-Based Hierarchical RNN. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :112–119.
Phishing websites have long been a serious threat to cyber security. For decades, many researchers have been devoted to developing novel techniques to detect phishing websites automatically. While state-of-the-art solutions can achieve superior performance, they require substantial manual feature engineering and are not adept at detecting newly emerging phishing attacks. Therefore, developing techniques that can detect phishing websites automatically and handle zero-day phishing attacks swiftly is still an open challenge in this area. In this work, we propose PhishingNet, a deep learning-based approach for timely detection of phishing Uniform Resource Locators (URLs). Specifically, we use a Convolutional Neural Network (CNN) module to extract character-level spatial feature representations of URLs; meanwhile, we employ an attention-based hierarchical Recurrent Neural Network (RNN) module to extract word-level temporal feature representations of URLs. We then fuse these feature representations via a three-layer CNN to build accurate feature representations of URLs, on which we train a phishing URL classifier. Extensive experiments on a verified dataset collected from the Internet demonstrate that the automatically extracted feature representations improve the generalization ability of our approach on newly emerging URLs, allowing our approach to achieve competitive performance against other state-of-the-art approaches.
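
As a rough, simplified illustration of the two-branch design described in this abstract, the sketch below builds a character-level CNN branch and a word-level recurrent branch in tf.keras and fuses them for binary URL classification. The layer sizes, vocabulary limits, and the use of a plain LSTM in place of the attention-based hierarchical RNN are assumptions for illustration, not the authors' exact PhishingNet architecture.

```python
# A minimal two-branch sketch loosely following the PhishingNet idea:
# a character-level CNN branch plus a word-level RNN branch, fused for
# binary classification. Layer sizes, vocabulary handling and the plain
# LSTM (instead of an attention-based hierarchical RNN) are simplifications.
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_CHARS, CHAR_VOCAB = 200, 100   # assumed limits for URL characters
MAX_WORDS, WORD_VOCAB = 30, 10000  # assumed limits for URL tokens (split on . / - ?)

# Character-level branch: embeddings + 1-D convolutions capture spatial n-gram patterns.
char_in = layers.Input(shape=(MAX_CHARS,), name="char_ids")
c = layers.Embedding(CHAR_VOCAB, 32)(char_in)
c = layers.Conv1D(64, 5, activation="relu")(c)
c = layers.GlobalMaxPooling1D()(c)

# Word-level branch: embeddings + a recurrent layer capture token order.
word_in = layers.Input(shape=(MAX_WORDS,), name="word_ids")
w = layers.Embedding(WORD_VOCAB, 64)(word_in)
w = layers.LSTM(64)(w)

# Fuse both representations and classify phishing vs. benign.
merged = layers.Concatenate()([c, w])
out = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(1, activation="sigmoid")(out)

model = Model(inputs=[char_in, word_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```
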
2019-12-16
Zubarev, Dmytro, Skarga-Bandurova, Inna.  2019.  Cross-Site Scripting for Graphic Data: Vulnerabilities and Prevention. 2019 10th International Conference on Dependable Systems, Services and Technologies (DESSERT). :154–160.

In this paper, we present an overview of the problems associated with cross-site scripting (XSS) in the graphical content of web applications. A brief analysis of vulnerabilities in graphical files and of the factors that make SVG images vulnerable to XSS attacks is given. XML treatment methods are described and tested in practice. As a result, a set of rules for protecting the graphical content of websites and preventing XSS vulnerabilities is proposed.
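
To make the idea of rule-based protection for graphical content concrete, here is a minimal, hedged sketch of an SVG sanitizer that strips <script> elements, on* event-handler attributes, and javascript: URLs using only the Python standard library. The rules are an illustrative subset, not the paper's proposed rule set, and a vetted sanitizer library should be preferred in production.

```python
# A minimal allowlist-style SVG sanitizer: drop <script> elements,
# event-handler attributes and javascript: URLs before serving
# user-supplied SVG. Illustrative only.
import xml.etree.ElementTree as ET

def sanitize_svg(svg_text: str) -> str:
    root = ET.fromstring(svg_text)
    for parent in list(root.iter()):
        for child in list(parent):
            # Strip embedded scripts regardless of namespace prefix.
            if child.tag.split('}')[-1].lower() == 'script':
                parent.remove(child)
    for elem in list(root.iter()):
        for attr in list(elem.attrib):
            local = attr.split('}')[-1].lower()
            value = elem.attrib[attr].strip().lower()
            # Remove on* event handlers (e.g. onload) and javascript: links.
            if local.startswith('on') or value.startswith('javascript:'):
                del elem.attrib[attr]
    return ET.tostring(root, encoding='unicode')

if __name__ == '__main__':
    evil = ('<svg xmlns="http://www.w3.org/2000/svg" onload="alert(1)">'
            '<script>alert(2)</script><rect width="10" height="10"/></svg>')
    print(sanitize_svg(evil))
```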

Bukhari, Syed Nisar, Ahmad Dar, Muneer, Iqbal, Ummer.  2018.  Reducing attack surface corresponding to Type 1 cross-site scripting attacks using secure development life cycle practices. 2018 Fourth International Conference on Advances in Electrical, Electronics, Information, Communication and Bio-Informatics (AEEICB). :1–4.

As the number of web users has increased exponentially, so has the quantity of attacks that attempt to use the web for malicious purposes. One of the most commonly exploited vulnerabilities is cross-site scripting (XSS). Cross-site scripting (XSS) refers to a client-side code injection attack whereby a malicious user executes malicious scripts (also commonly referred to as a malicious payload) in a legitimate website or web-based application. XSS is among the most rampant of web application vulnerabilities and occurs when a web-based application uses unvalidated or unencoded user input within the output it generates. In such instances, the victim is unaware that their data is being transferred from a site that he/she trusts to another site controlled by the malicious user. In this paper we focus on Type 1 or "non-persistent" cross-site scripting. With non-persistent cross-site scripting, malicious code or script is embedded in a web request and then partially or entirely echoed (or "reflected") by the web server, without encoding or validation, in the web response. The malicious code or script is then executed in the client's web browser, which can lead to several negative outcomes, such as the theft of session data and access to sensitive data within cookies. For this type of cross-site scripting to be successful, a malicious user must coerce a victim into clicking a link that triggers the non-persistent cross-site scripting attack. This is usually done through an email that encourages the user to click on a provided malicious link, or to visit a website that is fraught with malicious links. This paper discusses and elaborates how the attack surface related to Type 1 or "non-persistent" cross-site scripting attacks can be reduced using secure development life cycle practices and techniques.
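
As a concrete illustration of the core mitigation for reflected (Type 1) XSS discussed above, the following standard-library sketch shows a request parameter being HTML-encoded before it is echoed back; the endpoint and parameter names are invented for the example.

```python
# A minimal sketch of output encoding for reflected (Type 1) XSS:
# never echo request parameters into HTML without encoding.
import html
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class SearchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        term = query.get('q', [''])[0]
        # html.escape() encodes <, >, &, and quotes, so a payload such as
        # <script>alert(1)</script> is reflected as inert text.
        safe_term = html.escape(term, quote=True)
        body = f"<html><body>Results for: {safe_term}</body></html>".encode()
        self.send_response(200)
        self.send_header('Content-Type', 'text/html; charset=utf-8')
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    HTTPServer(('127.0.0.1', 8080), SearchHandler).serve_forever()
```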

2019-11-26
Zabihimayvan, Mahdieh, Doran, Derek.  2019.  Fuzzy Rough Set Feature Selection to Enhance Phishing Attack Detection. 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–6.

Phishing, as one of the most well-known cybercrime activities, is a deception of online users to steal their personal or confidential information by impersonating a legitimate website. Several machine learning-based strategies have been proposed to detect phishing websites. These techniques depend on the features extracted from website samples. However, few studies have actually considered efficient feature selection for detecting phishing attacks. In this work, we investigate an agreement on the definitive features which should be used in phishing detection. We apply Fuzzy Rough Set (FRS) theory as a tool to select the most effective features from three benchmark data sets. The selected features are fed into three often-used classifiers for phishing detection. To evaluate the FRS feature selection in developing a generalizable phishing detection, the classifiers are trained by a separate out-of-sample data set of 14,000 website samples. The maximum F-measure gained by FRS feature selection is 95% using Random Forest classification. Also, there are 9 universal features selected by FRS over all three data sets. The F-measure using this universal feature set is approximately 93%, which is comparable to the full FRS performance. Since the universal feature set contains no features from third-party services, this finding implies that, with no inquiry to external sources, we can achieve faster phishing detection which is also robust toward zero-day attacks.
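
The following sketch illustrates the flavour of fuzzy-rough feature selection the paper applies: a greedy, QuickReduct-style search that adds the feature maximising a fuzzy-rough dependency degree. The similarity measure and implicator used here are common defaults, not necessarily the authors' exact choices, and the data is synthetic.

```python
# A compact fuzzy-rough feature selection sketch in the QuickReduct style:
# greedily add the feature that maximises the fuzzy-rough dependency degree.
# Similarity max(0, 1-|diff|) and the Kleene-Dienes implicator are common
# defaults. Feature matrix X is assumed scaled to [0, 1].
import numpy as np

def dependency(X, y, features):
    # Fuzzy similarity per selected feature, combined with the min t-norm.
    diffs = np.abs(X[:, None, features] - X[None, :, features])   # (n, n, |B|)
    sim = np.clip(1.0 - diffs, 0.0, 1.0).min(axis=2)              # R_B(x, y)
    diff_class = y[:, None] != y[None, :]
    # Lower-approximation membership: 1 - max similarity to any other-class sample.
    masked = np.where(diff_class, sim, 0.0)
    pos = 1.0 - masked.max(axis=1)
    return pos.mean()

def quickreduct(X, y):
    selected, best = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining:
        scores = {f: dependency(X, y, selected + [f]) for f in remaining}
        f_best = max(scores, key=scores.get)
        if scores[f_best] <= best + 1e-6:
            break
        selected.append(f_best)
        remaining.remove(f_best)
        best = scores[f_best]
    return selected, best

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    X = rng.random((200, 10))
    y = (X[:, 0] + 0.1 * rng.random(200) > 0.5).astype(int)  # feature 0 is informative
    print(quickreduct(X, y))
```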

Patil, Srushti, Dhage, Sudhir.  2019.  A Methodical Overview on Phishing Detection along with an Organized Way to Construct an Anti-Phishing Framework. 2019 5th International Conference on Advanced Computing Communication Systems (ICACCS). :588–593.

Phishing is a security attack to acquire personal information like passwords, credit card details or other account details of a user by means of websites or emails. Phishing websites look similar to the legitimate ones, which makes it difficult for a layman to differentiate between them. As per the reports of the Anti-Phishing Working Group (APWG) published in December 2018, phishing against banking services and payment processors was high. Almost all phishing URLs use HTTPS and use redirects to avoid being detected. This paper presents a focused literature survey of methods available to detect phishing websites. A comparative study of in-use anti-phishing tools was carried out and their limitations were acknowledged. We analyzed the URL-based features used in the past and improved their definitions as per the current scenario, which is our major contribution. Also, a stepwise procedure for designing an anti-phishing model is discussed to construct an efficient framework, which adds to our contribution. Observations made from this study are stated along with recommendations on existing systems.
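
To illustrate the kind of URL-based features such surveys analyze, the snippet below extracts a handful of common indicators (URL length, IP-address hosts, '@' symbols, HTTPS use, subdomain depth) with the standard library. The feature names and thresholds are illustrative rather than the paper's exact definitions.

```python
# A small illustration of common URL-based phishing features.
import re
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    parsed = urlparse(url)
    host = parsed.hostname or ''
    return {
        'url_length': len(url),
        'uses_https': parsed.scheme == 'https',
        'has_ip_host': bool(re.fullmatch(r'(\d{1,3}\.){3}\d{1,3}', host)),
        'has_at_symbol': '@' in url,
        'num_dots_in_host': host.count('.'),
        'has_double_slash_redirect': '//' in parsed.path,
        'has_hyphen_in_host': '-' in host,
        'num_query_params': len(parsed.query.split('&')) if parsed.query else 0,
    }

if __name__ == '__main__':
    print(url_features('https://secure-login.example.com@198.51.100.7/refresh//next?u=1'))
```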

2019-10-23
Ali, Abdullah Ahmed, Zamri Murah, Mohd.  2018.  Security Assessment of Libyan Government Websites. 2018 Cyber Resilience Conference (CRC). :1–4.

Many government organizations in Libya have started transferring traditional government services to e-government. These e-services will benefit a wide range of the public. However, the deployment of e-government brings many new security issues. Attackers take advantage of vulnerabilities in these e-services and conduct cyber attacks that result in data loss, service interruptions, privacy loss, financial loss, and other significant losses. The number of vulnerabilities in e-services has increased due to the complexity of e-service systems, a lack of secure programming practices, misconfiguration of systems, web application vulnerabilities, and failure to stay up to date with security patches. Unfortunately, there is a lack of studies assessing the current security level of Libyan government websites. Therefore, this study aims to assess the current security of 16 Libyan government websites using a penetration testing framework. In this assessment, no exploits were committed or attempted on the websites. The penetration testing (pen test) framework comprises four main phases: reconnaissance, scanning, enumeration, and vulnerability assessment, plus SSL encryption evaluation. The aim of a security assessment is to discover vulnerabilities that could be exploited by attackers. We also conducted a content analysis phase for all websites, in which we searched the government websites for information about the implementation of security and privacy policies, to determine whether the websites are aware of currently accepted standards for security and privacy. From our security assessment of the 16 Libyan government websites, we compared the websites based on the number of vulnerabilities found and the level of security policies. We found only 9 websites with high- and medium-severity vulnerabilities. Many of these vulnerabilities are due to outdated software and systems, misconfiguration of systems, and failure to apply the latest security patches. These vulnerabilities could be used by cyber attackers to attack the systems and cause damage. Also, we found that 5 websites did not implement any SSL encryption for data transactions. Lastly, only 2 websites have published security and privacy policies, which seems to indicate that the remaining websites are not concerned with current standards in security and privacy. Finally, we classify the 16 websites into 4 safety categories: highly unsafe, unsafe, somewhat unsafe, and safe. We found only 1 website with a highly unsafe ranking. Based on our findings, we conclude that the security level of the Libyan government websites is adequate but can be further improved. However, immediate actions need to be taken to mitigate possible cyber attacks by fixing the vulnerabilities and implementing SSL encryption. The websites also need to publish their security and privacy policies so that users can trust them.
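
As a small, hedged example of the SSL-evaluation step of such an assessment, the snippet below opens a verified TLS connection and reports the negotiated protocol, cipher, and certificate expiry. The hostname is a placeholder, and checks like this should only be run against systems you are authorised to test.

```python
# A sketch of a basic TLS check: connect, verify the certificate chain,
# and report protocol, cipher and certificate expiry.
import socket
import ssl

def check_tls(host: str, port: int = 443, timeout: float = 5.0) -> dict:
    context = ssl.create_default_context()        # verifies chain and hostname
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            return {
                'protocol': tls.version(),        # e.g. 'TLSv1.3'
                'cipher': tls.cipher()[0],
                'not_after': cert.get('notAfter'),
                'subject': cert.get('subject'),
            }

if __name__ == '__main__':
    try:
        print(check_tls('example.com'))
    except (ssl.SSLError, OSError) as exc:
        print(f'TLS check failed: {exc}')
```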

Szalachowski, Pawel.  2018.  (Short Paper) Towards More Reliable Bitcoin Timestamps. 2018 Crypto Valley Conference on Blockchain Technology (CVCBT). :101–104.

Bitcoin provides freshness properties by forming a blockchain where each block is associated with its timestamp and the previous block. Due to these properties, the Bitcoin protocol is being used as a decentralized, trusted, and secure timestamping service. Although Bitcoin participants that create new blocks cannot modify their order, they can manipulate timestamps almost undetected. This undermines the Bitcoin protocol as a reliable timestamping service. In particular, a newcomer that synchronizes the entire blockchain has little guarantee about the timestamps of all blocks. In this paper, we present a simple yet powerful mechanism that increases the reliability of Bitcoin timestamps. Our protocol can provide evidence that a block was created within a certain time range. The protocol is efficient, backward compatible, and surprisingly, currently deployed SSL/TLS servers can act as reference time sources. The protocol has many applications and can be used for detecting various attacks against the Bitcoin protocol.

2019-05-01
Shirsat, S. D..  2018.  Demonstrating Different Phishing Attacks Using Fuzzy Logic. 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT). :57–61.

Phishing has increased tremendously over the last few years and has become a serious threat to global security and the economy. Existing literature dealing with the problem of phishing is scarce. Phishing is a deception technique that uses a combination of technology and social engineering to acquire sensitive information such as online banking passwords, credit card or bank account details [2]. Phishing can be carried out through emails and websites to collect confidential information. Phishers design fraudulent websites which look similar to legitimate websites and lure the user into visiting the malicious website. Therefore, users must be aware of malicious websites to protect their sensitive data [1]. But it is very difficult to distinguish between legitimate and fake websites, especially for non-technical users [4]. Moreover, phishing sites are growing rapidly. The aim of this paper is to demonstrate phishing detection using fuzzy logic and to interpret the results using different defuzzification methods.
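
A toy sketch of the approach is given below: two URL indicators are fuzzified, combined with simple Mamdani-style rules, and the aggregated output is defuzzified with two different methods (centroid and mean of maxima) so the results can be compared. The membership functions and rules are invented for illustration and are not taken from the paper.

```python
# A toy Mamdani-style fuzzy phishing-risk estimator with two
# defuzzification methods (centroid vs. mean of maxima).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def phishing_risk(url_len_norm, has_at):
    # Fuzzify inputs (both assumed normalised to [0, 1]).
    long_url = tri(url_len_norm, 0.4, 1.0, 1.6)
    short_url = tri(url_len_norm, -0.6, 0.0, 0.6)
    # Rule strengths: long URL OR '@' present -> high risk; short AND no '@' -> low risk.
    r_high = max(long_url, has_at)
    r_low = min(short_url, 1.0 - has_at)
    # Aggregate clipped output sets over a discretised risk universe.
    risk = np.linspace(0.0, 1.0, 101)
    high_set = np.minimum(tri(risk, 0.5, 1.0, 1.5), r_high)
    low_set = np.minimum(tri(risk, -0.5, 0.0, 0.5), r_low)
    agg = np.maximum(high_set, low_set)
    centroid = float((risk * agg).sum() / (agg.sum() + 1e-9))
    mean_of_max = float(risk[agg >= agg.max() - 1e-9].mean())
    return centroid, mean_of_max

if __name__ == '__main__':
    print(phishing_risk(url_len_norm=0.9, has_at=1.0))  # long URL with '@'
    print(phishing_risk(url_len_norm=0.2, has_at=0.0))  # short, clean URL
```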

2019-04-01
Stein, G., Peng, Q..  2018.  Low-Cost Breaking of a Unique Chinese Language CAPTCHA Using Curriculum Learning and Clustering. 2018 IEEE International Conference on Electro/Information Technology (EIT). :0595–0600.

Text-based CAPTCHAs are still commonly used to attempt to prevent automated access to web services. By displaying an image of distorted text, they attempt to create a challenge image that OCR software cannot interpret correctly but that a human user can easily respond to. This work focuses on a CAPTCHA used by a popular Chinese-language question-and-answer website and how resilient it is to modern machine learning methods. While the majority of text-based CAPTCHAs focus on transcription tasks, the CAPTCHA solved in this work is based on localization of inverted symbols in a distorted image. A convolutional neural network (CNN) was created to evaluate the likelihood of a region in the image belonging to an inverted character. It is used with a feature map and clustering to identify potential locations of inverted characters. Training of the CNN was performed using curriculum learning and compared to other potential training methods. The proposed method was able to determine the correct response in 95.2% of cases on a simulated CAPTCHA and 67.6% on a set of real CAPTCHAs. Potential methods to increase the difficulty of the CAPTCHA and the success rate of the automated solver are considered.

Li, Z., Liao, Q..  2018.  CAPTCHA: Machine or Human Solvers? A Game-Theoretical Analysis. 2018 5th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/2018 4th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). :18–23.
CAPTCHAs have become a ubiquitous defense used to protect open web resources from being exploited at scale. Traditionally, attackers have developed automatic programs known as CAPTCHA solvers to bypass the mechanism. With the presence of cheap labor in developing countries, hackers now have the option to use human solvers. In this research, we develop a game-theoretical framework to model the interactions between the defender and the attacker regarding the design and countermeasures of a CAPTCHA system. With the results of the equilibrium analysis, both parties can determine the optimal allocation of software-based or human-based CAPTCHA solvers. Counterintuitively, instead of following the traditional wisdom of making CAPTCHAs harder and harder, it may be in the best interest of the defender to make CAPTCHAs easier. We further suggest a welfare-improving CAPTCHA business model involving decentralized cryptocurrency computation.
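
The toy computation below illustrates the kind of equilibrium analysis the paper performs: for each CAPTCHA difficulty the attacker best-responds by splitting a budget between machine and human solvers, and the defender then picks the difficulty that maximises its own payoff. All cost and success-rate functions are invented solely to show the mechanics, not the authors' model.

```python
# A toy defender/attacker game over CAPTCHA difficulty. The defender moves
# first (difficulty), the attacker best-responds with a machine/human split.
import numpy as np

difficulties = np.linspace(0.0, 1.0, 21)      # defender's strategy grid
human_share = np.linspace(0.0, 1.0, 21)       # attacker's strategy grid
BUDGET = 1000.0
MACHINE_COST, HUMAN_COST = 0.001, 0.01        # cost per attempted CAPTCHA

def solved(d, h):
    """Expected CAPTCHAs solved by the attacker at difficulty d, human share h."""
    machine_attempts = (1.0 - h) * BUDGET / MACHINE_COST
    human_attempts = h * BUDGET / HUMAN_COST
    machine_rate = max(0.0, 0.9 - d)          # machines collapse as d grows
    human_rate = 0.9 - 0.1 * d                # humans barely notice
    return machine_attempts * machine_rate + human_attempts * human_rate

def defender_payoff(d, h):
    usability_penalty = 400000.0 * d          # legitimate users abandon hard CAPTCHAs
    return -(solved(d, h) + usability_penalty)

best_d, best_h, best_val = None, None, -np.inf
for d in difficulties:
    # Attacker best-responds to each difficulty.
    h_star = max(human_share, key=lambda h: solved(d, h))
    val = defender_payoff(d, h_star)
    if val > best_val:
        best_d, best_h, best_val = d, h_star, val

print(f"defender difficulty = {best_d:.2f}, attacker human share = {best_h:.2f}")
```
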
2019-02-25
Ojagbule, O., Wimmer, H., Haddad, R. J..  2018.  Vulnerability Analysis of Content Management Systems to SQL Injection Using SQLMAP. SoutheastCon 2018. :1–7.

There are over 1 billion websites today, and most of them are designed using content management systems. Cybersecurity is one of the most discussed topics when it comes to web applications, and protecting the confidentiality and integrity of data has become paramount. SQLi is one of the most commonly used techniques that hackers use to exploit a security vulnerability in a web application. In this paper, we compared SQLi vulnerabilities found on the three most commonly used content management systems using a vulnerability scanner called Nikto, followed by SQLMAP for penetration testing. This was carried out on default WordPress, Drupal and Joomla website pages installed on a LAMP server (localhost). Results showed that none of the content management systems was susceptible to SQLi attacks, but the scans gave warnings about other vulnerabilities that could be exploited. Also, we suggest practices that could be implemented to prevent SQL injection.

Vyamajala, S., Mohd, T. K., Javaid, A..  2018.  A Real-World Implementation of SQL Injection Attack Using Open Source Tools for Enhanced Cybersecurity Learning. 2018 IEEE International Conference on Electro/Information Technology (EIT). :0198–0202.

SQL injection is a well-known method of executing SQL queries and retrieving sensitive information from a database connected to a website. This process poses a threat to applications which are poorly coded in today's world. SQL injection is considered one of the top 10 vulnerabilities even in 2018. To keep track of the vulnerabilities that each website faces, we employ a tool called Acunetix which allows us to find the vulnerabilities of a specific website. This tool also suggests preventive measures. Using this implementation, we discover vulnerabilities in an actual website. Such a real-world implementation would be useful for instructional use in a foundational cybersecurity course.
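
For instructional purposes in the same spirit, the short sqlite3 example below contrasts a string-built query, which the classic ' OR '1'='1 payload subverts, with a parameterised query that treats the payload as literal data; the schema and data are invented.

```python
# A minimal sqlite3 illustration of the flaw such scanners look for:
# string-built SQL versus a parameterised query.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"

# Vulnerable: the payload rewrites the WHERE clause and matches every row.
vulnerable = f"SELECT name FROM users WHERE name = '{attacker_input}'"
print('vulnerable query returns:', conn.execute(vulnerable).fetchall())

# Safe: placeholders keep the payload as literal data, so nothing matches.
safe = "SELECT name FROM users WHERE name = ?"
print('parameterised query returns:', conn.execute(safe, (attacker_input,)).fetchall())
```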

2019-02-14
Maqbali, F. A., Mitchell, C. J..  2018.  Email-Based Password Recovery - Risking or Rescuing Users? 2018 International Carnahan Conference on Security Technology (ICCST). :1–5.

Secret passwords are very widely used for user authentication to websites, despite their known shortcomings. Most websites using passwords also implement password recovery to allow users to re-establish a shared secret if the existing value is forgotten; many such systems involve sending a password recovery email to the user, e.g. containing a secret link. The security of password recovery, and hence the entire user-website relationship, depends on the email being acted upon correctly; unfortunately, as we show, such emails are not always designed to maximise security and can introduce vulnerabilities into recovery. To understand better this serious practical security problem, we surveyed password recovery emails for 50 of the top English language websites. We investigated a range of security and usability issues for such emails, covering their design, structure and content (including the nature of the user instructions), the techniques used to recover the password, and variations in email content from one web service to another. Many well-known web services, including Facebook, Dropbox, and Microsoft, suffer from recovery email design, structure and content issues. This is, to our knowledge, the first study of its type reported in the literature. This study has enabled us to formulate a set of recommendations for the design of such emails.
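
A minimal sketch of the recovery-flow hygiene such recommendations point towards is shown below: a single-use, short-lived, high-entropy token delivered over an HTTPS link, with only a hash stored server-side. The function names, TTL, and in-memory store are illustrative assumptions.

```python
# A sketch of recovery-link hygiene: single-use, short-lived, unguessable
# token; only its hash is stored server-side.
import hashlib
import secrets
import time

TOKEN_TTL_SECONDS = 15 * 60
_pending = {}  # token_hash -> (username, expires_at); stands in for a database table

def issue_recovery_link(username: str) -> str:
    token = secrets.token_urlsafe(32)                      # ~256 bits of randomness
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    _pending[token_hash] = (username, time.time() + TOKEN_TTL_SECONDS)
    # The raw token appears only in the emailed HTTPS link, never in storage or logs.
    return f"https://example.com/reset?token={token}"

def redeem(token: str):
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    record = _pending.pop(token_hash, None)                # single use
    if record is None or record[1] < time.time():
        return None
    return record[0]                                       # username allowed to reset

if __name__ == '__main__':
    link = issue_recovery_link('alice')
    print(link)
    print(redeem(link.split('token=')[1]))   # works once
    print(redeem(link.split('token=')[1]))   # second attempt fails
```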

2019-01-16
Jia, Z., Cui, X., Liu, Q., Wang, X., Liu, C..  2018.  Micro-Honeypot: Using Browser Fingerprinting to Track Attackers. 2018 IEEE Third International Conference on Data Science in Cyberspace (DSC). :197–204.
Web attacks have proliferated across the whole Internet in recent years. To protect websites, security vendors and researchers collect attack information using web honeypots. However, web attackers can hide themselves by using stepping stones (e.g., VPNs, encrypted proxies) or anonymous networks (e.g., the Tor network). Conventional web honeypots lack an effective way to gather information about an attacker's identity, which is a big obstacle for cybercrime traceability and forensics. Traditional forensics methods are based on traffic analysis, which requires that defenders gain access to the entire network; this is not suitable for honeypots. In this paper, we present the design, implementation, and deployment of the Micro-Honeypot, which aims to use browser fingerprinting techniques to track a web attacker. A traditional honeypot lures attackers and records their activity. The Micro-Honeypot is deployed within a honeypot; it runs and gathers identity information when an attacker visits the honeypot. Our preliminary results show that the Micro-Honeypot can collect more information and track attackers even though they might have used proxies or anonymous networks to hide themselves.
Sivanesan, A. P., Mathur, A., Javaid, A. Y..  2018.  A Google Chromium Browser Extension for Detecting XSS Attack in HTML5 Based Websites. 2018 IEEE International Conference on Electro/Information Technology (EIT). :0302–0304.

The advent of HTML5 revives cross-site scripting (XSS) attacks on the web. Cross Document Messaging, Local Storage, Attribute Abuse, Input Validation, Inline Multimedia and SVG emerge as likely targets for serious threats. The introduction of various new tags and attributes can be potentially manipulated to exploit the data on a dynamic website. The XSS attack has retained a spot in every OWASP Top 10 security risk list released over the past decade and is placed in the seventh spot in the OWASP Top 10 of 2017. It is known that XSS attempts to execute scripts with untrusted data without proper validation between websites. XSS executes scripts in the victim's browser which can hijack user sessions, deface websites, or redirect the user to a malicious site. This paper focuses on the development of a browser extension for the popular Google Chromium browser that keeps track of various attack vectors. These vectors primarily include tags and attributes of HTML5 that may be used maliciously. The developed plugin alerts users whenever the possibility of an XSS attack is discovered when a user accesses a particular website.
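
As a rough server-side analogue of what such an extension tracks, the sketch below scans HTML for a small, illustrative subset of HTML5-era XSS vectors (inline event handlers, javascript: URLs, srcdoc/formaction attributes, risky tags) using the standard-library HTML parser; the vector list is not the plugin's actual rule set.

```python
# Flag HTML5 tags and attributes often abused for XSS. Illustrative subset only.
from html.parser import HTMLParser

RISKY_TAGS = {'script', 'svg', 'iframe', 'embed', 'object', 'video', 'audio'}

class XSSVectorScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag in RISKY_TAGS:
            self.findings.append(f'risky tag <{tag}>')
        for name, value in attrs:
            if name.startswith('on'):
                self.findings.append(f'event handler {name}= on <{tag}>')
            elif name in ('srcdoc', 'formaction'):
                self.findings.append(f'risky attribute {name}= on <{tag}>')
            elif value and value.strip().lower().startswith('javascript:'):
                self.findings.append(f'javascript: URL in {name}= on <{tag}>')

if __name__ == '__main__':
    scanner = XSSVectorScanner()
    scanner.feed('<svg onload="alert(1)"></svg><a href="javascript:alert(2)">x</a>')
    print(scanner.findings)
```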

Baykara, M., Güçlü, S..  2018.  Applications for detecting XSS attacks on different web platforms. 2018 6th International Symposium on Digital Forensic and Security (ISDFS). :1–6.

Today, maintaining the security of web applications is of great importance. Cross-site scripting (XSS) is a security flaw that can affect web applications. This flaw allows an attacker to add their own malicious code to HTML pages that are displayed to the user. Upon execution of the malicious code, the behavior of the system or website can be completely changed. The XSS vulnerability is used by attackers to steal web browser resources such as cookies, identity information, etc. by adding malicious JavaScript code to the victim's web applications. Since web browsers support the execution of commands embedded in web pages to enable dynamic pages, attackers can exploit this feature to force malicious code into a user's web browser. This work proposes a technique to detect and prevent manipulation that may occur in websites, and thus to prevent XSS attacks. In addition, applications that detect XSS vulnerabilities were developed in four different languages, including ASP.NET, PHP and Ruby, and the differences in detecting XSS attacks in the environments provided by different programming languages are examined.

2018-12-10
Ndichu, S., Ozawa, S., Misu, T., Okada, K..  2018.  A Machine Learning Approach to Malicious JavaScript Detection using Fixed Length Vector Representation. 2018 International Joint Conference on Neural Networks (IJCNN). :1–8.

To add more functionality and enhance the usability of web applications, JavaScript (JS) is frequently used. Despite its many advantages, an annoying fact is that many recent cyberattacks, such as drive-by-download attacks, exploit vulnerabilities in JS code. In general, malicious JS code is not easy to detect, because it sneakily exploits vulnerabilities of browsers and plugin software and attacks visitors of a web site unknowingly. To protect users from such threats, the development of an accurate detection system for malicious JS is needed. Conventional approaches often employ signature- and heuristic-based methods, which are prone to suffer from zero-day attacks, i.e., causing many false negatives and/or false positives. To address this problem, this paper adopts a machine-learning approach to feature learning called Doc2Vec, which is a neural network model that can learn context information of texts. The extracted features are given to a classifier model (e.g., SVM or neural networks), which judges the maliciousness of a JS code sample. In the performance evaluation, we use the D3M dataset (Drive-by-Download Data by Marionette) for malicious JS code and JSUPACK for benign code, for both training and test purposes. We then compare the performance to other feature learning methods. Our experimental results show that the proposed Doc2Vec features provide better accuracy and fast classification in malicious JS code detection compared to conventional approaches.
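
A condensed sketch of this pipeline is shown below using gensim's Doc2Vec and scikit-learn's SVM; the tiny tokenised snippets stand in for the D3M and JSUPACK corpora, which are not reproduced here, and the hyperparameters are arbitrary.

```python
# Doc2Vec embeddings of tokenised JavaScript fed to an SVM classifier.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.svm import SVC

# Toy examples: (tokenised JS snippet, label) with 1 = malicious, 0 = benign.
samples = [
    ("eval ( unescape ( payload ) )".split(), 1),
    ("document . write ( decoded )".split(), 1),
    ("function add ( a , b ) { return a + b ; }".split(), 0),
    ("console . log ( 'hello' )".split(), 0),
]

corpus = [TaggedDocument(words=toks, tags=[i]) for i, (toks, _) in enumerate(samples)]
d2v = Doc2Vec(vector_size=50, min_count=1, epochs=40)
d2v.build_vocab(corpus)
d2v.train(corpus, total_examples=d2v.corpus_count, epochs=d2v.epochs)

X = [d2v.infer_vector(toks) for toks, _ in samples]
y = [label for _, label in samples]

clf = SVC(kernel='rbf').fit(X, y)
test = "eval ( unescape ( x ) )".split()
print('malicious' if clf.predict([d2v.infer_vector(test)])[0] == 1 else 'benign')
```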

2018-11-19
Eskandari, S., Leoutsarakos, A., Mursch, T., Clark, J..  2018.  A First Look at Browser-Based Cryptojacking. 2018 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :58–66.

In this paper, we examine the recent trend towards in-browser mining of cryptocurrencies; in particular, the mining of Monero through Coinhive and similar codebases. In this model, a user visiting a website will download a JavaScript code that executes client-side in her browser, mines a cryptocurrency - typically without her consent or knowledge - and pays out the seigniorage to the website. Websites may consciously employ this as an alternative or to supplement advertisement revenue, may offer premium content in exchange for mining, or may be unwittingly serving the code as a result of a breach (in which case the seigniorage is collected by the attacker). The cryptocurrency Monero is preferred seemingly for its unfriendliness to large-scale ASIC mining that would drive browser-based efforts out of the market, as well as for its purported privacy features. In this paper, we survey this landscape, conduct some measurements to establish its prevalence and profitability, outline an ethical framework for considering whether it should be classified as an attack or business opportunity, and make suggestions for the detection, mitigation and/or prevention of browser-based mining for non-consenting users.
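
A very simple detection heuristic in the spirit of the paper's measurements is sketched below: fetch a page and search it for signatures associated with in-browser miners. The pattern list (the historical Coinhive script name and API, the CryptoNight proof-of-work) is illustrative and far from complete.

```python
# Fetch a page and look for signatures of in-browser mining scripts.
# The pattern list is illustrative and incomplete.
import re
import urllib.request

MINER_PATTERNS = [
    r'coinhive(\.min)?\.js',
    r'CoinHive\.Anonymous',
    r'cryptonight',          # the proof-of-work Monero used at the time
    r'webassembly.*miner',
]

def looks_like_cryptojacking(url: str) -> list:
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode('utf-8', errors='replace')
    return [p for p in MINER_PATTERNS if re.search(p, body, re.IGNORECASE)]

if __name__ == '__main__':
    hits = looks_like_cryptojacking('https://example.com/')
    print('possible in-browser miner:', hits if hits else 'none found')
```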

2018-09-12
Lin, Z., Tong, L., Zhijie, M., Zhen, L..  2017.  Research on Cyber Crime Threats and Countermeasures about Tor Anonymous Network Based on Meek Confusion Plug-in. 2017 International Conference on Robots Intelligent System (ICRIS). :246–249.

The new Tor network (version 6.0.5) can help domestic users easily "climb over the wall" (bypass Internet censorship), and of course criminals may also use it to visit deep and dark websites. The paper analyzes the core technology of the new Tor network: the new traffic obfuscation technique based on the meek plug-in, and a real instance is used to verify the new Tor network's fast connectivity. On the basis of analyzing the traffic obfuscation mechanism and Tor-based network crime, measures are put forward to prevent the use of the Tor network to commit network crime.

Rahayuda, I. G. S., Santiari, N. P. L..  2017.  Crawling and cluster hidden web using crawler framework and fuzzy-KNN. 2017 5th International Conference on Cyber and IT Service Management (CITSM). :1–7.
Today almost everyone uses the Internet for daily activities, whether social, academic, work or business. But only a few of us are aware that we generally access only a small part of the Internet. The Internet, or World Wide Web, is divided into several levels, such as the surface web, deep web and dark web. Accessing the deep or dark web is a dangerous thing. This research examines both surface web content and deep content. For a faster and safer search, a crawler framework is used. From the search process, various kinds of data are obtained and stored in a database. A classification process is then applied to the database to determine the level of each website. The classification is done using the fuzzy-KNN method, which classifies the results of the crawler framework contained in the database. The crawler framework generates data in the form of URL addresses, page information and more. The crawled data is compared with predefined sample data. The fuzzy-KNN classification yields the web level of each entry based on the values of the terms specified in the sample data. From the tests conducted on several data sets, about 20% of the pages were surface web, 7.5% Bergie web, 20% deep web, 22.5% Charter web and 30% dark web. The research was only done on some test data; more data needs to be added to obtain better results. Better crawler frameworks can speed up crawling, especially at certain web levels, because not all crawler frameworks can work at a particular web level; the Tor browser can be used, but the crawler framework sometimes cannot work there.
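
The sketch below shows a Keller-style fuzzy-KNN classifier of the kind used to assign crawled pages to web levels; the two-dimensional feature vectors and the level labels are placeholders, since the paper's actual features come from the crawler output.

```python
# A small Keller-style fuzzy-KNN classifier returning per-class memberships.
import numpy as np

def fuzzy_knn(X_train, y_train, x, k=3, m=2.0):
    """Return per-class membership degrees for query x."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(dists)[:k]
    # Distance-based weights; the small epsilon avoids division by zero.
    weights = 1.0 / (dists[nn] ** (2.0 / (m - 1.0)) + 1e-9)
    memberships = {}
    for c in np.unique(y_train):
        crisp = (y_train[nn] == c).astype(float)   # crisp memberships of the neighbours
        memberships[c] = float((weights * crisp).sum() / weights.sum())
    return memberships

if __name__ == '__main__':
    X = np.array([[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8], [0.5, 0.9]])
    y = np.array(['surface', 'surface', 'dark', 'dark', 'deep'])
    query = np.array([0.7, 0.85])
    result = fuzzy_knn(X, y, query, k=3)
    print(result, '->', max(result, key=result.get))
```
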
Rafiuddin, M. F. B., Minhas, H., Dhubb, P. S..  2017.  A dark web story in-depth research and study conducted on the dark web based on forensic computing and security in Malaysia. 2017 IEEE International Conference on Power, Control, Signals and Instrumentation Engineering (ICPCSI). :3049–3055.
The following is research conducted on the dark web to study and identify its ins and outs: what the dark web is all about, the various methods available to access it, and more. The researchers have also included the steps and precautions taken before the dark web was accessed. Apart from that, the findings and the website links/URLs are included, along with a description of the sites. The primary usage of the dark web and some of the researchers' experiences are further documented in this research paper.
2018-06-07
Dikhit, A. S., Karodiya, K..  2017.  Result evaluation of field authentication based SQL injection and XSS attack exposure. 2017 International Conference on Information, Communication, Instrumentation and Control (ICICIC). :1–6.

Computing innovations and the development of the web reduce the effort required for various processes. Among the industries that have benefited most are electronic systems, banking, marketing, e-commerce and so on. These systems mostly involve continuous data exchange from one host to another. During this exchange there are many places where the confidentiality of the data and the user can be lost. Typically, the areas with a greater likelihood of attack occurrence are known as vulnerable zones. Web-based system interaction is one such place, where many users perform their tasks according to the privileges allotted to them by the administrator. Here the attacker makes use of open areas, such as login forms or other places from which malicious scripts can be inserted into the system. These scripts aim at compromising the security constraints designed for the system. Among the attacks related to user-inserted scripts in web communications are SQL injection and cross-site scripting (XSS). Such attacks must be detected and removed before they have an impact on the security and confidentiality of the data. Over the last few years, various solutions have been incorporated into systems to resolve such security issues in time. Input validation is one of the well-known approaches but suffers from performance drops and limited matching. Other mechanisms, such as sanitization and tainting, produce high false-report rates indicating misclassified patterns. At their core, both involve string evaluation and taint analysis of untrusted sources to fully determine the impact and depth of an attack. This work proposes an improved rule-based attack detection scheme with selected message fields for effectively identifying malicious scripts. The work blocks normal access for malicious sources using robust rule matching against a centralized repository which is regularly updated. At the initial level of evaluation, the work appears to provide a solid basis for further research.
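
To make the rule-based idea concrete, the sketch below checks selected request fields against a small set of SQL injection and XSS detection rules; the regular expressions are a tiny illustrative subset rather than a complete or authoritative signature repository.

```python
# A minimal rule-based filter: selected message fields are checked against
# a small rule set before the request reaches the application.
import re

RULES = {
    'sql_injection': [
        r"(?i)\bunion\b.+\bselect\b",
        r"(?i)\bor\b\s+'?\d+'?\s*=\s*'?\d+'?",
        r"--|;\s*drop\s+table",
    ],
    'xss': [
        r"(?i)<\s*script\b",
        r"(?i)\bon\w+\s*=",
        r"(?i)javascript\s*:",
    ],
}

def inspect_fields(fields: dict) -> list:
    """Return (field, rule_category) pairs that match any detection rule."""
    findings = []
    for name, value in fields.items():
        for category, patterns in RULES.items():
            if any(re.search(p, value) for p in patterns):
                findings.append((name, category))
    return findings

if __name__ == '__main__':
    request = {'username': "admin' OR '1'='1", 'comment': '<script>steal()</script>'}
    print(inspect_fields(request))   # both fields should be flagged
```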

2018-05-24
Maraj, A., Rogova, E., Jakupi, G., Grajqevci, X..  2017.  Testing Techniques and Analysis of SQL Injection Attacks. 2017 2nd International Conference on Knowledge Engineering and Applications (ICKEA). :55–59.

It is a well-known fact that nowadays access to sensitive information is performed through the use of a three-tier architecture. Web applications have become a handy interface between users and data. As database-driven web applications are used more and more every day, web applications are seen as a good target for attackers aiming to access sensitive data. If an organization fails to deploy effective data protection systems, it might be open to various attacks. Governmental organizations, in particular, should think beyond traditional security policies in order to achieve proper data protection. It is, therefore, imperative to perform security testing and make sure that there are no holes in the system before an attack happens. One of the most commonly used web application attacks is the insertion of an SQL query from the client side of the application. This attack is called SQL injection. Since an SQL injection vulnerability could affect any website or web application that makes use of an SQL-based database, the vulnerability is one of the oldest, most prevalent and most dangerous of web application vulnerabilities. To overcome SQL injection problems, there is a need to use different security systems. In this paper, we use 3 different scenarios for testing security systems. Using the penetration testing technique, we try to find out which is the best solution for protecting sensitive data within the government network of Kosovo.

2018-05-09
Jillepalli, A. A., Leon, D. C. d, Steiner, S., Sheldon, F. T., Haney, M. A..  2017.  Hardening the Client-Side: A Guide to Enterprise-Level Hardening of Web Browsers. 2017 IEEE 15th Intl Conf on Dependable, Autonomic and Secure Computing, 15th Intl Conf on Pervasive Intelligence and Computing, 3rd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress(DASC/PiCom/DataCom/CyberSciTech). :687–692.
Today, web browsers are a major avenue for cyber-compromise and data breaches. Web browser hardening, through high-granularity and least privilege tailored configurations, can help prevent or mitigate many of these attack avenues. For example, on a classic client desktop infrastructure, an enforced configuration that enables users to use one browser to connect to critical and trusted websites and a different browser for un-trusted sites, with the former restricted to trusted sites and the latter with JavaScript and Plugins disabled by default, may help prevent most JavaScript and Plugin-based attacks to critical enterprise sites. However, most organizations, today, still allow web browsers to run with their default configurations and allow users to use the same browser to connect to trusted and un-trusted sites alike. In this article, we present detailed steps for remotely hardening multiple web browsers in a Windows-based enterprise, for Internet Explorer and Google Chrome. We hope that system administrators use this guide to jump-start an enterprise-wide strategy for implementing high-granularity and least privilege browser hardening. This will help secure enterprise systems at the front-end in addition to the network perimeter.
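
A hedged sketch of one way to push such a hardened configuration on Linux is shown below: writing a managed-policy JSON file for Google Chrome that disables JavaScript by default and re-enables it only for trusted sites. The policy names and the managed-policy path should be verified against current Chrome enterprise documentation, and the trusted-site list is a placeholder.

```python
# Write a managed-policy JSON file for Google Chrome on Linux.
# Policy names (DefaultJavaScriptSetting, JavaScriptAllowedForUrls,
# URLAllowlist) and the path should be checked against current Chrome
# enterprise documentation before use; the site list is a placeholder.
import json
import pathlib

POLICY_DIR = pathlib.Path('/etc/opt/chrome/policies/managed')   # Linux managed-policy path
TRUSTED_SITES = ['https://intranet.example.com', 'https://erp.example.com']

policy = {
    'DefaultJavaScriptSetting': 2,                 # 2 = block JavaScript by default
    'JavaScriptAllowedForUrls': TRUSTED_SITES,     # re-enable it only on trusted sites
    'URLAllowlist': TRUSTED_SITES,                 # restrict this browser to trusted sites
}

if __name__ == '__main__':
    POLICY_DIR.mkdir(parents=True, exist_ok=True)  # requires root
    (POLICY_DIR / 'hardening.json').write_text(json.dumps(policy, indent=2))
    print('wrote', POLICY_DIR / 'hardening.json')
```
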
2018-03-26
Hasslinger, G., Kunbaz, M., Hasslinger, F., Bauschert, T..  2017.  Web Caching Evaluation from Wikipedia Request Statistics. 2017 15th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt). :1–6.

Wikipedia is one of the most popular information platforms on the Internet. The user access pattern to Wikipedia pages depends on their relevance in the current worldwide social discourse. We use publicly available statistics about the top-1000 most popular pages on each day to estimate the efficiency of caches for support of the platform. While the data volumes are moderate, the main goal of Wikipedia caches is to reduce access times for page views and edits. We study the impact of the most popular pages on the achievable cache hit rate in comparison to Zipf request distributions, and we include daily dynamics in popularity.
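
The following small simulation illustrates the comparison the abstract describes: requests drawn from a Zipf popularity law are fed to an LRU cache and the hit rate is measured for a few exponents. The catalogue size, exponents, and cache size are arbitrary choices, not the paper's parameters.

```python
# Simulate requests from a Zipf popularity law and measure LRU hit rate.
from collections import OrderedDict
import numpy as np

def simulate_lru_hit_rate(n_pages=10000, alpha=0.8, cache_size=1000, n_requests=200000, seed=0):
    rng = np.random.default_rng(seed)
    ranks = np.arange(1, n_pages + 1)
    probs = ranks ** (-alpha)
    probs /= probs.sum()                      # Zipf(alpha) popularity over page ranks
    requests = rng.choice(n_pages, size=n_requests, p=probs)

    cache, hits = OrderedDict(), 0
    for page in requests:
        if page in cache:
            hits += 1
            cache.move_to_end(page)           # mark as most recently used
        else:
            cache[page] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)     # evict the least recently used page
    return hits / n_requests

if __name__ == '__main__':
    for a in (0.6, 0.8, 1.0):
        print(f'alpha={a}: LRU hit rate = {simulate_lru_hit_rate(alpha=a):.3f}')
```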