Bibliography
Nowadays it is becoming trivial to run multiple virtual machines in parallel on hardware platforms with high processing power. This cost-effective approach can be found at Internet service providers, in cloud service providers' environments, in research and development lab testing environments (for example, university student labs), in virtual applications for security evaluation, and in many other places. In the aforementioned cases, it is often necessary to start and/or stop virtual machines on the fly. At cloud service providers, all creation and tear-down actions are triggered by a customer request and cannot be postponed or delayed for later evaluation. When a new virtual machine is created, it is imperative to assign unique IP addresses to all network interfaces, as well as Domain Name System (DNS) records that contain text-based data, IP addresses, etc. Even worse, if a virtual machine has to be stopped or torn down, critical network resources such as IP addresses and DNS records have to be carefully controlled in order to avoid IP address conflicts and name resolution problems between an old virtual machine and a newly created one. This paper proposes a provisioning mechanism to avoid both DNS record and IP address conflicts due to human misconfiguration, problems that can cause network operation service disruptions.
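The paper's mechanism itself is not reproduced in the abstract; as a hedged illustration of the provisioning step it protects, the Python sketch below (using dnspython) allocates an address from a pool and registers a matching DNS A record via a dynamic update, releasing both on tear-down. The zone lab.example.com, the server 192.0.2.53, and the 10.0.0.0/24 pool are hypothetical placeholders.

    # Minimal sketch of VM provisioning: allocate a free IP and register
    # a matching DNS A record; release both on tear-down. Zone, server,
    # and pool are placeholders, not values from the paper.
    import ipaddress
    import dns.update
    import dns.query

    POOL = ipaddress.ip_network("10.0.0.0/24")
    allocated = set()   # in practice this state must be shared and persistent

    def provision_vm(hostname):
        ip = next(h for h in POOL.hosts() if h not in allocated)
        allocated.add(ip)
        update = dns.update.Update("lab.example.com")
        update.add(hostname, 300, "A", str(ip))
        dns.query.tcp(update, "192.0.2.53")
        return ip

    def deprovision_vm(hostname, ip):
        update = dns.update.Update("lab.example.com")
        update.delete(hostname, "A")   # remove the record before reusing the IP
        dns.query.tcp(update, "192.0.2.53")
        allocated.discard(ip)

The conflicts the paper targets arise exactly when one of these two steps is skipped or performed out of order, leaving a stale record or a double-allocated address.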
The Domain Name System (DNS) is the Internet's system for converting alphabetic names into numeric IP addresses. It is one of the earliest and most vulnerable network protocols, with several security loopholes that have been exploited repeatedly over the years. The clustering task for the automatic recognition of these attacks uses machine learning approaches based on semi-supervised learning. A family of bio-inspired algorithms, well known as Swarm Intelligence (SI) methods, has recently emerged to meet the requirements of the clustering task and has been successfully applied to various real-world clustering problems. In this paper, Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), and K-means, one of the most popular clustering algorithms, have been applied. Furthermore, hybrid algorithms combining K-means with PSO and K-means with ABC have been proposed for the clustering process. The Canadian Institute for Cybersecurity (CIC) data set has been used for this investigation. In addition, different measures of clustering performance have been used to compare the algorithms.
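As a minimal sketch of the baseline step in such a comparison, the snippet below runs plain K-means and scores it with a common internal quality measure; the PSO/ABC hybrids and the CIC data set are not reproduced, and the synthetic features are placeholders.

    # Baseline clustering step: plain K-means scored with the silhouette
    # coefficient; the features below are synthetic stand-ins for per-query
    # DNS attributes, not the CIC data set.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))   # stand-in for per-query DNS features

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print("silhouette:", silhouette_score(X, km.labels_))
    # A K-means+PSO hybrid typically uses swarm search to choose the
    # initial centroids, then hands them to KMeans via the init= parameter.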
Load balancing and IP anycast are traffic routing techniques used to speed up the delivery of Domain Name System responses. In case of a DDoS attack or an overload condition, these mechanisms are critical, as they can provide intrinsic DDoS mitigation through failover alternatives. In this paper, we present a methodology for predicting the next DNS response in light of a potential redirection to less busy servers, in order to mitigate the impact of the attack. Our experiments were conducted using data from the November 2015 attack on the Root DNS servers, with Logistic Regression, k-Nearest Neighbors, Support Vector Machines, and Random Forest as our primary classifiers. The models were able to successfully predict up to 83% of responses for Root Letters that operated on a small number of sites and consequently suffered the most during the attacks. On the other hand, for DNS requests coming from more distributed Root servers, the models demonstrated lower accuracy. Our analysis showed a correlation between the True Positive Rate metric and the number of sites, as well as a clear need for intelligent management of traffic in load balancing practices.
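A hedged sketch of the prediction setup, using Random Forest (one of the four classifiers the paper compares); the features and labels below are synthetic placeholders, not the root-server traces from the November 2015 event.

    # Toy version of the response-prediction task: train one of the paper's
    # classifiers on synthetic features and report held-out accuracy.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 6))   # e.g. query rate, RTT, site count, ...
    y = (X[:, 0] + rng.normal(size=2000) > 0).astype(int)  # answered vs. dropped

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
    clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))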
Top-level domains (TLDs) play an important role in the domain name system, so close attention should be paid to their security. In this paper, we found many configuration anomalies in top-level domains by analyzing their resource records. We obtained resource records of top-level domains from the root name servers and from the authoritative servers of the top-level domains themselves. By comparing these resource records, we observed anomalies in top-level domains. For example, there are 8 servers shared by more than one hundred top-level domains; some TTL or SERIAL fields of resource records obtained from different NS servers of the same top-level domain were inconsistent; and some authoritative servers of top-level domains were unreachable. These anomalies may affect the availability of top-level domains. We hope they will draw top-level domain administrators' attention to the security of their domains.
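A minimal sketch of one such consistency check, assuming the dnspython library: query the SOA record on every NS server of a zone and compare the SERIAL and TTL fields. The zone com. is an arbitrary example, not one of the anomalous TLDs from the paper.

    # Query each NS server of a TLD zone directly and print the SOA serial
    # and TTL it returns; divergent values indicate the inconsistencies the
    # paper reports.
    import dns.resolver
    import dns.message
    import dns.query

    zone = "com."
    for ns in dns.resolver.resolve(zone, "NS"):
        ns_ip = dns.resolver.resolve(ns.target, "A")[0].address
        resp = dns.query.udp(dns.message.make_query(zone, "SOA"), ns_ip, timeout=5)
        for rrset in resp.answer:
            for rr in rrset:
                print(ns.target, "serial:", rr.serial, "ttl:", rrset.ttl)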
Over the years cybercriminals have misused the Domain Name System (DNS) - a critical component of the Internet - to gain profit. Despite this persisting trend, little empirical information about the security of Top-Level Domains (TLDs) and of the overall 'health' of the DNS ecosystem exists. In this paper, we present security metrics for this ecosystem and measure the operational values of such metrics using three representative phishing and malware datasets. We benchmark entire TLDs against the rest of the market. We explicitly distinguish these metrics from the idea of measuring security performance, because the measured values are driven by multiple factors, not just by the performance of the particular market player. We consider two types of security metrics: occurrence of abuse and persistence of abuse. In conjunction, they provide a good understanding of the overall health of a TLD. We demonstrate that attackers abuse a variety of free services with good reputation, affecting not only the reputation of those services, but of entire TLDs. We find that, when normalized by size, old TLDs like .com host more bad content than new generic TLDs. We propose a statistical regression model to analyze how the different properties of TLD intermediaries relate to abuse counts. We find that, next to TLD size, abuse is positively associated with domain pricing (i.e., registries that provide free domain registrations witness more abuse). Last but not least, we observe a negative relation between the DNSSEC deployment rate and the count of phishing domains.
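The paper's exact regression specification is not given in the abstract; under the assumption of a count model (a Poisson GLM here), a sketch of this style of analysis might look as follows. The data are entirely synthetic, with coefficients chosen only to mirror the reported directions of association.

    # Toy count regression: abuse counts per TLD modeled against size,
    # free-registration policy, and DNSSEC deployment rate.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 300
    size = rng.uniform(3, 8, n)      # log10 of registered domains per TLD
    free = rng.integers(0, 2, n)     # 1 if free registrations are offered
    dnssec = rng.uniform(0, 1, n)    # DNSSEC deployment rate
    lam = np.exp(0.8 * size + 0.6 * free - 0.9 * dnssec - 2.0)
    abuse = rng.poisson(lam)         # synthetic abuse counts per TLD

    X = sm.add_constant(np.column_stack([size, free, dnssec]))
    model = sm.GLM(abuse, X, family=sm.families.Poisson()).fit()
    print(model.params)   # on this toy data, signs match the paper's findings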
This paper proposes a method to detect two primary means of using the Domain Name System (DNS) for malicious purposes. We develop machine learning models to detect information exfiltration from compromised machines and the establishment of command & control (C&C) servers via tunneling. We validate our approach with experiments in which we successfully detect malware used in several recent Advanced Persistent Threat (APT) attacks [1]. The novelty of our method lies in its robustness, simplicity, scalability, and ease of deployment in a production environment.
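The paper's actual feature set and models are not reproduced here; as an illustration, the sketch below computes per-query features commonly used for detecting DNS tunneling and exfiltration (query length, subdomain entropy, label count).

    # Per-query features often used in DNS tunneling detection; the helper
    # names are illustrative, not the paper's implementation.
    import math
    from collections import Counter

    def entropy(s):
        counts = Counter(s)
        return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

    def features(qname):
        labels = qname.rstrip(".").split(".")
        sub = ".".join(labels[:-2])   # everything left of the 2nd-level domain
        return {
            "qname_len": len(qname),
            "subdomain_len": len(sub),
            "entropy": entropy(sub) if sub else 0.0,
            "label_count": len(labels),
        }

    print(features("aGVsbG8gd29ybGQ.exfil.example.com."))
    # Long, high-entropy subdomains are typical of encoded payloads.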
The Domain Name System (DNS) has been recognized as an indispensable and fundamental infrastructure of the current Internet. However, due to its original design philosophy and easy-access principle, one can conveniently wiretap DNS requests and responses. This phenomenon is a serious threat to user privacy, especially when an insider attack takes place. Motivated by these circumstances, we propose a port distribution management solution to mitigate potential information leakage inside local DNS. Users are able to utilize pre-assigned port numbers instead of the default port 53. The selection method for port numbers at the server side and the interactive process with the corresponding end host are investigated. The necessary implementation steps, including modification of the destination port field, extension option usage, etc., are also discussed. A mathematical model is presented to further evaluate the performance; both the possible blocking probability and the port utilization are illustrated. We expect this solution to be beneficial not only for users, through security enhancement, but also for DNS servers, through resource optimization.
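The abstract does not specify the mathematical model; one plausible reading, assumed here purely for illustration, treats the pre-assigned port pool as an M/M/c/c loss system, in which case blocking probability follows the Erlang B formula:

    # Erlang B blocking probability, computed with the standard numerically
    # stable recurrence; this loss-system reading is an assumption, not the
    # paper's stated model.
    def erlang_b(c, a):
        """Blocking probability with c ports and offered load a (Erlangs)."""
        b = 1.0
        for k in range(1, c + 1):
            b = a * b / (k + a * b)
        return b

    ports, load = 100, 80.0
    blocking = erlang_b(ports, load)
    print(f"blocking: {blocking:.4f}, "
          f"utilization: {load * (1 - blocking) / ports:.4f}")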
This paper illuminates the problem of non-secure DNS dynamic updates, which allow a miscreant to manipulate DNS entries in the zone files of authoritative name servers. We refer to this type of attack as zone poisoning. This paper presents the first measurement study of the vulnerability. We analyze a random sample of 2.9 million domains and the Alexa top 1 million domains and find that at least 1,877 (0.065%) and 587 (0.062%) of the domains, respectively, are vulnerable. Among the vulnerable domains are governments, health care providers, and banks, demonstrating that the threat impacts important services. Via this study and subsequent notifications to affected parties, we aim to improve the security of the DNS ecosystem.
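A minimal sketch of the unauthenticated UPDATE (RFC 2136) at the heart of zone poisoning, assuming the dnspython library; the zone example.com and server 192.0.2.53 are placeholders, and such probes should only be run against infrastructure one is authorized to test.

    # Send an unauthenticated dynamic update; a misconfigured authoritative
    # server that answers NOERROR is vulnerable. The test record is removed
    # immediately afterwards.
    import dns.update
    import dns.query
    import dns.rcode

    update = dns.update.Update("example.com")
    update.add("zonepoisoning-test", 300, "A", "192.0.2.1")
    resp = dns.query.tcp(update, "192.0.2.53", timeout=5)
    if resp.rcode() == dns.rcode.NOERROR:
        print("server accepted an unauthenticated update: vulnerable")
        cleanup = dns.update.Update("example.com")
        cleanup.delete("zonepoisoning-test", "A")
        dns.query.tcp(cleanup, "192.0.2.53", timeout=5)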
Software-Defined Networking (SDN) has emerged as a promising direction for next-generation network design. Due to its clean-slate and highly flexible design, it is believed to be the foundational principle for designing network architectures and improving their flexibility, resilience, reliability, and security. As the technology matures, research in both industry and academia has produced a considerable number of tools to scale software-defined networks, in preparation for wide deployment in wide-area networks. In this paper, we survey the mechanisms that can be used to address the scalability issues in software-defined wide-area networks. Starting from a successful distributed system, the Domain Name System, we discuss the essential elements that make a large-scale network infrastructure scalable. Then, the existing technologies proposed in the literature are reviewed in three categories: scaling out the data plane, scaling up the data plane, and scaling the control plane. We conclude with possible research directions towards scaling software-defined wide-area networks.
The Domain Name System (DNS) is widely seen as a vital protocol of the modern Internet. For example, popular services like load balancers and Content Delivery Networks rely heavily on DNS. Because of its important role, DNS is also a desirable target for malicious activities such as spamming, phishing, and botnets. To protect networks against these attacks, a number of DNS-based security approaches have been proposed. The key goal of our study is to measure the effectiveness of security approaches that rely on DNS in large-scale networks. For this purpose, we answer the following questions: How often is DNS used? Are most Internet flows established after contacting DNS? In this study, we collected data from the University of Auckland campus network, with more than 33,000 Internet users, and processed it to find out how DNS is being used. Moreover, we studied the flows that were established with and without contacting DNS. Our results show that less than 5 percent of the observed flows use DNS. Therefore, we argue that security approaches that depend solely on DNS are not sufficient to protect large-scale networks.
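A hedged sketch of the matching step such a study implies: counting flows whose destination IP appeared in a DNS answer to the same client shortly beforehand. The record formats and the 60-second freshness window are assumptions for illustration, not the paper's methodology.

    # Fraction of flows preceded by a matching DNS answer to the same client.
    from collections import defaultdict

    WINDOW = 60.0   # seconds a resolved IP is considered "fresh" (assumed)

    def dns_preceded_fraction(dns_log, flow_log):
        # dns_log: (timestamp, client_ip, answer_ip)
        # flow_log: (timestamp, client_ip, dst_ip)
        seen = defaultdict(list)   # (client, answer_ip) -> answer timestamps
        for ts, client, answer in dns_log:
            seen[(client, answer)].append(ts)
        hits = sum(
            any(0 <= ts - t <= WINDOW for t in seen[(client, dst)])
            for ts, client, dst in flow_log
        )
        return hits / len(flow_log)

    dns_log = [(0.0, "10.0.0.1", "198.51.100.7")]
    flows = [(1.0, "10.0.0.1", "198.51.100.7"), (2.0, "10.0.0.2", "203.0.113.9")]
    print(dns_preceded_fraction(dns_log, flows))   # -> 0.5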