Bibliography
Legacy security defense mechanisms cannot keep pace now that emerging sophisticated threats such as zero-day exploits and malware campaigns have profoundly changed the dimensions of cyber-attacks. Recent studies indicate that cyber threat intelligence plays a crucial role in implementing proactive defense operations. It provides a knowledge-sharing platform that not only increases security awareness and readiness but also enables collaborative defense to diminish the effectiveness of potential attacks. In this paper, we propose a secure distributed model to facilitate cyber threat intelligence sharing among diverse participants. The proposed model uses blockchain technology to ensure tamper-proof record-keeping and smart contracts to guarantee immutable logic. We implement the blockchain application on Hyperledger Fabric, an open-source permissioned blockchain platform. We also integrate the flexibility and management capabilities of Software-Defined Networking with the proposed sharing platform to strengthen the system's defenses against threats. Finally, collaborative DDoS attack mitigation is presented as a case study to demonstrate our approach.
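For intuition, a minimal sketch of the tamper-evident record-keeping idea behind such a ledger, written as a simple hash-chained log in Python rather than actual Hyperledger Fabric chaincode; the record fields, class name, and example indicators are illustrative assumptions, not the paper's data model.

```python
import hashlib
import json
import time


def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous entry's hash to chain entries."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()


class ThreatIntelLedger:
    """Append-only, hash-chained log of shared threat indicators (conceptual stand-in for a blockchain ledger)."""

    def __init__(self):
        self.entries = []          # list of (record, hash) tuples
        self.last_hash = "0" * 64  # genesis value

    def append(self, indicator: str, reporter: str) -> str:
        record = {"indicator": indicator, "reporter": reporter, "ts": time.time()}
        h = record_hash(record, self.last_hash)
        self.entries.append((record, h))
        self.last_hash = h
        return h

    def verify(self) -> bool:
        """Recompute the chain; any tampered entry breaks every later hash."""
        prev = "0" * 64
        for record, h in self.entries:
            if record_hash(record, prev) != h:
                return False
            prev = h
        return True


ledger = ThreatIntelLedger()
ledger.append("203.0.113.7 participating in DDoS", reporter="org-A")
ledger.append("malicious-domain.example", reporter="org-B")
print(ledger.verify())  # True unless an entry was altered after the fact
```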
Malicious domain names are constantly changing, and keeping blacklists of them up to date is challenging because of the time lag between a domain's creation and its detection. Even a website that is itself clean may be used as a pivot point to redirect users to malicious destinations. To address this issue, this paper demonstrates how to use linkage analysis and open-source threat intelligence to visualize the relationships among malicious domain names while verifying their categories, e.g., drive-by download, unwanted software, etc. Built around a graph-based model that presents the inter-connectivity of malicious domain names in a dynamic fashion, the proposed approach proves helpful for revealing the group patterns of different kinds of malicious domain names. When applied to a blacklisted set of URLs in a real enterprise network, it showed better effectiveness than traditional methods and yielded a clearer view of the common patterns in the data.
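A minimal sketch of the graph-based linkage idea, assuming hypothetical redirect observations between domains; it uses networkx to build the linkage graph and report connected groups, which is only a rough stand-in for the paper's model and category verification.

```python
# Sketch: build a domain linkage graph from observed redirects and group related domains.
# The edge list below is hypothetical; real input would come from crawling or threat feeds.
import networkx as nx

redirects = [
    ("landing1.example", "dropper.example"),
    ("landing2.example", "dropper.example"),
    ("dropper.example", "payload-cdn.example"),
    ("adware-portal.example", "unwanted-sw.example"),
]

G = nx.DiGraph()
G.add_edges_from(redirects)

# Connected components (ignoring edge direction) expose clusters of related domains.
for group in nx.weakly_connected_components(G):
    hubs = sorted(group, key=lambda d: G.degree(d), reverse=True)
    print(f"group of {len(group)} domains, most-connected: {hubs[0]}")
```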
Modern browsers have become sophisticated applications, providing a portal to the web. Browsers host a complex mix of interpreters, such as HTML and JavaScript, enabling not only useful functionality but also malicious activities known as browser-hijacking. These attacks can be particularly difficult to detect, as they usually operate within the scope of normal browser behaviour. CryptoJacking is a form of browser-hijacking that has emerged from the increased popularity and profitability of cryptocurrencies and the introduction of new cryptocurrencies that promote CPU-based mining. This paper proposes MANiC (Multi-step AssessmeNt for Crypto-miners), a system to detect CryptoJacking websites. It uses regular expressions compiled in accordance with the API structure of different miner families, allowing the detection of crypto-mining scripts and the extraction of parameters that could be used to detect suspicious behaviour associated with CryptoJacking. When MANiC was used to analyse the Alexa top 1M websites, it detected 887 malicious URLs containing miners from 11 different families and demonstrated favourable results compared to related CryptoJacking research. We demonstrate that MANiC can be used to provide insights into this new threat, to identify new potential features of interest, and to establish a ground-truth dataset, assisting future research.
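A toy illustration of regex-based detection of in-browser mining scripts: the patterns below cover a couple of well-known miner families (e.g., Coinhive, CryptoLoot) and are assumptions for illustration only, not MANiC's actual rule set or parameter extraction.

```python
import re

# Illustrative signatures keyed by miner family; real rules would track each family's API.
MINER_PATTERNS = {
    "coinhive":   re.compile(r"coinhive(\.min)?\.js|CoinHive\.Anonymous\s*\(", re.I),
    "cryptoloot": re.compile(r"crypta\.js|CRLT\.[Aa]nonymous\s*\(", re.I),
    "jsecoin":    re.compile(r"load\.jsecoin\.com", re.I),
}

# Hypothetical site-key extraction for pages that instantiate a miner inline.
SITE_KEY = re.compile(r"Anonymous\s*\(\s*['\"]([A-Za-z0-9]{16,})['\"]")


def scan_page(html: str):
    """Return (family, site_key or None) for each miner signature found in the page."""
    hits = []
    for family, pattern in MINER_PATTERNS.items():
        if pattern.search(html):
            key = SITE_KEY.search(html)
            hits.append((family, key.group(1) if key else None))
    return hits


page = '<script src="https://coinhive.com/lib/coinhive.min.js"></script>' \
       '<script>var m = new CoinHive.Anonymous("AbCdEfGhIjKlMnOpQrSt");</script>'
print(scan_page(page))  # [('coinhive', 'AbCdEfGhIjKlMnOpQrSt')]
```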
Phishing attacks have reached record volumes in recent years. Simultaneously, modern phishing websites are growing in sophistication by employing diverse cloaking techniques to avoid detection by security infrastructure. In this paper, we present PhishFarm: a scalable framework for methodically testing the resilience of anti-phishing entities and browser blacklists to attackers' evasion efforts. We use PhishFarm to deploy 2,380 live phishing sites (on new, unique, and previously-unseen .com domains), each using one of six different HTTP request filters based on real phishing kits. We reported subsets of these sites to 10 distinct anti-phishing entities and measured both the occurrence and timeliness of native blacklisting in major web browsers to gauge the effectiveness of protection ultimately extended to victim users and organizations. Our experiments revealed shortcomings in current infrastructure, which allows some phishing sites to go unnoticed by the security community while remaining accessible to victims. We found that simple cloaking techniques representative of real-world attacks, including those based on geolocation, device type, or JavaScript, were effective in reducing the likelihood of blacklisting by over 55% on average. We also discovered that blacklisting did not function as intended in popular mobile browsers (Chrome, Safari, and Firefox), which left users of these browsers particularly vulnerable to phishing attacks. Following disclosure of our findings, anti-phishing entities are now better able to detect and mitigate several cloaking techniques (including those that target mobile users), and blacklisting has also become more consistent between desktop and mobile platforms, but work remains to be done by anti-phishing entities to ensure users are adequately protected. Our PhishFarm framework is designed for continuous monitoring of the ecosystem and can be extended to test future state-of-the-art evasion techniques used by malicious websites.
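For intuition, a bare-bones version of the kind of server-side request filter PhishFarm emulates when testing blacklists: benign content is served to requests that look like security crawlers, come from outside a targeted region, or use the wrong device type. The rules, lists, and function name are made-up placeholders, not the six filters used in the study.

```python
# Sketch of a cloaking-style request filter of the kind PhishFarm deploys and measures.
# Decision logic and lists are illustrative placeholders.
CRAWLER_AGENTS = ("googlebot", "phishtank", "safebrowsing")
TARGET_COUNTRIES = {"US", "CA"}            # hypothetical victim region


def serve_deceptive_page(user_agent: str, country: str, is_mobile: bool) -> bool:
    """Return True to show the deceptive page, False to cloak with benign content."""
    ua = user_agent.lower()
    if any(bot in ua for bot in CRAWLER_AGENTS):
        return False                       # hide from known crawlers
    if country not in TARGET_COUNTRIES:
        return False                       # geolocation-based cloaking
    if not is_mobile:
        return False                       # device-type cloaking (mobile-only lure)
    return True


print(serve_deceptive_page("Mozilla/5.0 (iPhone)", "US", is_mobile=True))   # True
print(serve_deceptive_page("Googlebot/2.1", "US", is_mobile=True))          # False
```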
Nowadays, phishing is one of the most common web threats, owing to the significant growth of the World Wide Web over time. Phishing attackers constantly use new (zero-day) and sophisticated techniques to deceive online customers. Hence, an anti-phishing system must operate in real time, be fast, and leverage an intelligent phishing detection solution. Here, we develop a reliable detection system that can adaptively match the changing environment and evolving phishing websites. Our method is an online, feature-rich machine learning technique for discriminating between phishing and legitimate websites. Since the proposed approach extracts different types of discriminative features from URLs and webpage source code, it is an entirely client-side solution and does not require any third-party services. The experimental results highlight the robustness and competitiveness of our anti-phishing system in distinguishing phishing from legitimate websites.
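A small sketch of the kind of client-side features such a system can compute from the URL string and raw page source alone, with no third-party lookups; the particular features and the example inputs are illustrative assumptions rather than the paper's feature set.

```python
import re
from urllib.parse import urlparse


def extract_features(url: str, html: str) -> dict:
    """Client-side features computed from the URL string and raw page source only."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return {
        "url_length": len(url),
        "num_dots_in_host": host.count("."),
        "has_ip_host": bool(re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host)),
        "has_at_symbol": "@" in url,
        "uses_https": parsed.scheme == "https",
        "num_iframes": html.lower().count("<iframe"),
        "has_password_field": 'type="password"' in html.lower(),
        "num_external_scripts": len(re.findall(r"<script[^>]+src=", html, re.I)),
    }


feats = extract_features(
    "http://203.0.113.9/login@secure-update",
    '<form><input type="password"></form>',
)
print(feats)
```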
Stealing confidential information from a database has become a severe vulnerability issue for web applications. Such attacks can be prevented by defining a whitelist of SQL queries issued by web applications and detecting queries not in that list. For large-scale web applications, the whitelist must be generated automatically, because manually defining numerous query patterns is impractical for developers. Conventional methods for automated generation cannot detect attacks immediately because of the long time required to collect legitimate queries, and they require application-specific implementations that reduce their versatility. We therefore propose a method that generates the whitelist automatically from queries issued during web application tests. Because it relies only on queries generated during testing and is independent of specific applications, it yields improved timeliness against attacks and versatility across multiple applications.
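A rough sketch of the whitelisting idea, assuming a simple normalization that replaces literals with placeholders so structurally identical queries collapse to one entry; the normalization rules and helper names are illustrative, not the paper's algorithm.

```python
import re


def normalize(query: str) -> str:
    """Replace string and numeric literals with placeholders so structurally
    identical queries map to a single whitelist entry."""
    q = re.sub(r"'(?:[^']|'')*'", "?", query)      # string literals
    q = re.sub(r"\b\d+\b", "?", q)                 # numeric literals
    return re.sub(r"\s+", " ", q).strip().lower()


# Phase 1: build the whitelist from queries observed during application tests.
test_queries = [
    "SELECT * FROM users WHERE id = 42",
    "SELECT * FROM users WHERE name = 'alice'",
]
whitelist = {normalize(q) for q in test_queries}


# Phase 2: at runtime, flag any query whose normalized form is not whitelisted.
def is_allowed(query: str) -> bool:
    return normalize(query) in whitelist


print(is_allowed("SELECT * FROM users WHERE id = 7"))             # True
print(is_allowed("SELECT * FROM users WHERE id = 7 OR '1'='1'"))  # False
```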
Phishing is a security attack that acquires personal information, such as passwords, credit card details, or other account details of a user, by means of websites or emails. Phishing websites look similar to legitimate ones, which makes it difficult for a layperson to differentiate between them. According to the reports of the Anti-Phishing Working Group (APWG) published in December 2018, phishing against banking services and payment processors was high. Almost all phishing URLs use HTTPS and rely on redirects to avoid being detected. This paper presents a focused literature survey of methods available to detect phishing websites. A comparative study of in-use anti-phishing tools was carried out and their limitations were identified. We analyzed the URL-based features used in the past and improved their definitions to reflect the current scenario, which is our major contribution. A step-wise procedure for designing an anti-phishing model is also discussed, toward constructing an efficient framework, which further adds to our contribution. Observations made from this study are stated along with recommendations on existing systems.
Telephone spam has become an increasingly prevalent problem in many countries around the world. For example, the cumulative number of spam/scam call complaints to the US Federal Trade Commission's (FTC) National Do Not Call Registry reached 30.9 million submissions in 2016. Naturally, telephone carriers can play an important role in the fight against spam. However, due to the extremely large volume of calls that transit across large carrier networks, it is challenging to mine their vast amounts of call detail records (CDRs) to accurately detect and block spam phone calls. This is because CDRs contain only high-level metadata (e.g., source and destination numbers, call start time, call duration, etc.) related to each phone call. In addition, ground truth about both benign and spam-related phone numbers is often very scarce (only a tiny fraction of all phone numbers can be labeled). More importantly, telephone carriers are extremely sensitive to false positives, as they need to avoid blocking any non-spam calls, making the detection of spam-related numbers even more challenging. In this paper, we present a novel detection system that aims to discover telephone numbers involved in spam campaigns. Given a small seed of known spam phone numbers, our system uses a combination of unsupervised and supervised machine learning methods to mine new, previously unknown spam numbers from large datasets of CDRs. Our objective is not to detect all possible spam phone calls crossing a carrier's network, but rather to expand the list of known spam numbers while aiming for zero false positives, so that the newly discovered numbers may be added to a phone blacklist, for example. To evaluate our system, we conducted experiments over a large dataset of real-world CDRs provided by a leading telephony provider in China, while tuning the system to produce no false positives. The experimental results show that our system is able to expand the initial seed of known spam numbers by up to about 250%.
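A compressed sketch of the general pipeline shape described above: aggregate per-number behavioural features from CDRs, train a classifier seeded with the known spam numbers, and rank the remaining numbers so that only near-certain candidates would be added to a blacklist. The column names, features, classifier choice, and tiny inline data are assumptions for illustration, not the paper's actual system.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical CDR rows: caller, call duration in seconds, whether the call was answered.
cdrs = pd.DataFrame({
    "caller":   ["111", "111", "111", "222", "222", "333", "333", "333"],
    "duration": [3, 2, 4, 120, 300, 5, 2, 3],
    "answered": [0, 0, 1, 1, 1, 0, 0, 1],
})

# Per-number behavioural features (spam campaigns tend to be short, rarely answered, high volume).
feats = cdrs.groupby("caller").agg(
    calls=("duration", "size"),
    avg_duration=("duration", "mean"),
    answer_rate=("answered", "mean"),
)

seed_spam = {"111"}                       # small seed of known spam numbers
labels = feats.index.to_series().isin(seed_spam).astype(int)

clf = GradientBoostingClassifier().fit(feats, labels)
scores = pd.Series(clf.predict_proba(feats)[:, 1], index=feats.index)

# Precision-first expansion: rank unknown numbers; a real deployment would only
# accept scores above a very strict threshold to keep false positives near zero.
candidates = scores[~scores.index.isin(seed_spam)].sort_values(ascending=False)
print(candidates)
```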
Application whitelisting software allows only examined and trusted applications to run on a user's machine. Since many malicious files do not require administrative privileges to be executed, whitelisting can be the only way to block the execution of unauthorized applications in an enterprise environment and thus prevent infection or data breach. To assess the current state of such solutions, licenses for three whitelisting solutions were obtained in order to test their effectiveness against different modern types of ransomware found in the wild. The study was conducted in a virtual environment with Windows Server and Enterprise editions installed. The objective of this paper is not to evaluate each vendor or make recommendations for purchasing specific software, but rather to assess the ability of application control solutions to block the execution of ransomware files, as well as to assess the potential for future research. The results of the research show the promise and effectiveness of whitelisting solutions.
Phishing is a major concern on the Internet today, and many users are falling victim to criminals' deceitful tactics. Blacklisting is still the most common defence users have against phishing websites, but it is failing to cope with their increasing number. In recent years, researchers have devised modern ways of detecting such websites using machine learning. One such method is to build machine-learnt models of URL features to classify whether URLs are phishing. However, opinions vary on the best choice of features and algorithms. In this paper, the objective is to evaluate the performance of the Random Forest algorithm on a lexical-only dataset. The performance is benchmarked against other machine learning algorithms and, additionally, against results reported in the literature. Initial experimental results indicate that the Random Forest algorithm performs best, yielding 86.9% accuracy.
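A condensed sketch of the experimental setup described: lexical-only features derived from the URL string feeding a Random Forest, evaluated with a simple train/test split. The tiny inline dataset and the specific feature choices are placeholders for illustration, not the paper's data or exact feature set.

```python
from urllib.parse import urlparse

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def lexical_features(url: str):
    """Features computed from the URL string alone (no page content, no external lookups)."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return [
        len(url),
        len(host),
        host.count("."),
        url.count("-"),
        url.count("@"),
        url.count("//"),
        sum(c.isdigit() for c in url),
        int(parsed.scheme == "https"),
    ]


# Tiny illustrative dataset; label 1 = phishing, 0 = legitimate.
urls = [
    ("https://www.example.com/index.html", 0),
    ("https://accounts.example.org/login", 0),
    ("http://paypal.com.secure-update.example-login.tk/verify@id=77", 1),
    ("http://192.0.2.10/webscr/signin-bank-update", 1),
] * 25   # repeated so there are enough rows for a split

X = [lexical_features(u) for u, _ in urls]
y = [label for _, label in urls]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```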
We propose the first verifier-local revocation scheme for privacy-enhancing attribute-based credentials (PABCs) that is practically usable in large-scale applications, such as national eID cards, public transportation, and physical access control systems. By using our revocation scheme together with existing PABCs, it is possible to prove attribute ownership in constant time and to verify the proof and the revocation status in time logarithmic in the number of revoked users, independently of the number of all valid users in the system. Proofs can be efficiently generated using only offline constrained devices, such as existing smart cards. These features are achieved by using a new construction called $n$-times unlinkable proofs. We give the full cryptographic description of the scheme, prove its security, discuss parameters influencing scalability, and provide details on implementation aspects. As a side result of independent interest, we design a more efficient proof of knowledge of weak Boneh-Boyen signatures that does not require any pairing computation on the prover side.
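As background for the last point, the standard weak Boneh-Boyen (wBB) signature on which such a proof of knowledge operates can be stated in a few lines; this is the textbook definition only, not the paper's new pairing-free proof protocol.

```latex
% Weak Boneh-Boyen signatures over a bilinear group (G_1, G_2, G_T) with pairing e
% and generators g_1, g_2.
% KeyGen: secret key x chosen at random from Z_p, public key X = g_2^x.
% Sign(m): for a message m in Z_p (with x + m != 0), output sigma as below.
% Verify(m, sigma): accept iff the pairing equation below holds.
\[
  \sigma = g_1^{\frac{1}{x+m}}, \qquad
  e\!\left(\sigma,\; X \cdot g_2^{\,m}\right) = e(g_1, g_2).
\]
```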