Biblio

Found 5938 results

Filters: First Letter Of Last Name is S
2017-03-07
Muthusamy, Vinod, Slominski, Aleksander, Ishakian, Vatche, Khalaf, Rania, Reason, Johnathan, Rozsnyai, Szabolcs.  2016.  Lessons Learned Using a Process Mining Approach to Analyze Events from Distributed Applications. Proceedings of the 10th ACM International Conference on Distributed and Event-based Systems. :199–204.

The execution of distributed applications is captured by the events generated by their individual components. However, understanding the behavior of these applications from their event logs can be a complex and error-prone task, compounded by the fact that applications change continuously, rendering any accumulated knowledge obsolete. We describe our experiences applying a suite of process-aware analytic tools to a number of real-world scenarios, and distill our lessons learned. For example, we have seen that these tools are used iteratively, where insights gained at one stage inform the configuration decisions made at an earlier stage. We have also observed that data onboarding, where the raw data is cleaned and transformed, is the most critical stage in the pipeline and requires the most manual effort and domain knowledge. In particular, missing, inconsistent, and low-resolution event time stamps are recurring problems that require better solutions. The experiences and insights presented here will assist practitioners applying process analytic tools to real scenarios, and reveal to researchers some of the more pressing challenges in this space.

2017-05-30
Kothari, Suresh, Tamrawi, Ahmed, Sauceda, Jeremías, Mathews, Jon.  2016.  Let's Verify Linux: Accelerated Learning of Analytical Reasoning Through Automation and Collaboration. Proceedings of the 38th International Conference on Software Engineering Companion. :394–403.

We describe our experiences in the classroom using the internet to collaboratively verify a significant safety and security property across the entire Linux kernel. With 66,609 instances to check across three versions of Linux, the naive approach of simply dividing up the code and assigning it to students does not scale, and does little to educate. However, by teaching and applying analytical reasoning, the instances can be categorized effectively, the problems of scale can be managed, and students can collaborate and compete with one another to achieve an unprecedented level of verification. We refer to our approach as Evidence-Enabled Collaborative Verification (EECV). A key aspect of this approach is the use of visual software models, which provide mathematically rigorous and critical evidence for verification. The visual models make analytical reasoning interactive, interesting and applicable to large software. Visual models are generated automatically using a tool we have developed called L-SAP [14]. This tool generates an Instance Verification Kit (IVK) for each instance, which contains all of the verification evidence for the instance. The L-SAP tool is implemented on a software graph database platform called Atlas [6]. This platform comes with a powerful query language and interactive visualization to build and apply visual models for software verification. The course project is based on three recent versions of the Linux operating system with altogether 37 MLOC and 66,609 verification instances. The instances are accessible through a website [2] for students to collaborate and compete. The Atlas platform, the L-SAP tool, the structured labs for the project, and the lecture slides are available upon request for academic use.

2018-02-02
Moyer, T., Chadha, K., Cunningham, R., Schear, N., Smith, W., Bates, A., Butler, K., Capobianco, F., Jaeger, T., Cable, P..  2016.  Leveraging Data Provenance to Enhance Cyber Resilience. 2016 IEEE Cybersecurity Development (SecDev). :107–114.

Building secure systems used to mean ensuring a secure perimeter, but that is no longer the case. Today's systems are ill-equipped to deal with attackers who are able to pierce perimeter defenses. Data provenance is a critical technology for building resilient systems that can recover from attackers who manage to overcome the "hard-shell" defenses. In this paper, we provide background information on data provenance, along with details on provenance collection, analysis, and storage techniques and their challenges. Data provenance is well positioned to address the challenging problem of allowing a system to "fight through" an attack, and we help to identify the work necessary to ensure that future systems are resilient.

2017-09-19
Washha, Mahdi, Qaroush, Aziz, Sedes, Florence.  2016.  Leveraging Time for Spammers Detection on Twitter. Proceedings of the 8th International Conference on Management of Digital EcoSystems. :109–116.

Twitter is one of the most popular microblogging social systems, providing a set of distinctive posting services that operate in real time. The flexibility of these services has attracted unethical individuals, so-called "spammers", who aim to spread malicious, phishing, and misleading information. Unfortunately, the existence of spam results in non-negligible problems for search quality and users' privacy. In the battle against spam, various detection methods have been designed that automate the detection process by combining the "features" concept with machine learning methods. However, the existing features are not effective enough to adapt to spammers' tactics, because they are easy to manipulate. Graph features, despite the high performance obtainable when applying them, are likewise not suitable for Twitter-based applications. In this paper, going beyond simple statistical features such as the number of hashtags and the number of URLs, we examine the time property, both by advancing the design of features used in the literature and by proposing new time-based features. The new design of features is divided between robust advanced statistical features that incorporate the time attribute explicitly, and behavioral features that identify posting behavior patterns. The experimental results show that the new form of features is able to correctly classify the majority of spammers with an accuracy higher than 93% when using the Random Forest learning algorithm, applied to a collected and annotated data-set. The results obtained outperform the accuracy of the state-of-the-art features by about 6%, proving the significance of leveraging time in detecting spam accounts.
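
One family of time-based behavioral features of the kind described above can be sketched in a few lines: the variance of gaps between consecutive posts, since automated accounts often post at near-constant intervals. This is an illustrative stand-in, not the paper's exact feature set, and the example accounts are invented.

```python
import statistics

def interarrival_stats(timestamps):
    """Hypothetical time-based feature: mean and variance of the gaps
    (in seconds) between consecutive posts. Automated spammers often
    post at near-constant intervals, giving a very low gap variance."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.mean(gaps), statistics.pvariance(gaps)

# A bot posting exactly every 60s vs. a human posting irregularly.
bot = [0, 60, 120, 180, 240, 300]
human = [0, 45, 400, 460, 2000, 2100]
print(interarrival_stats(bot))    # zero variance: suspicious regularity
print(interarrival_stats(human))  # large variance: organic behavior
```

A downstream classifier such as Random Forest would consume features like these alongside the simple statistical ones.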

2017-08-02
Sultana, Nik, Kohlweiss, Markulf, Moore, Andrew W..  2016.  Light at the Middle of the Tunnel: Middleboxes for Selective Disclosure of Network Monitoring to Distrusted Parties. Proceedings of the 2016 Workshop on Hot Topics in Middleboxes and Network Function Virtualization. :1–6.

Network monitoring is vital to the administration and operation of networks, but it requires privileged access that only highly trusted parties are granted. This severely limits the opportunity for external parties, such as service or equipment providers, auditors, or even clients, to measure the health or operation of a network in which they are stakeholders, but do not have access to its internal structure. In this position paper we propose the use of middleboxes to open up network monitoring to external parties using privacy-preserving technology. This will allow distrusted parties to make more inferences about the network state than currently possible, without learning any precise information about the network or the data that crosses it. Thus the state of the network will be more transparent to external stakeholders, who will be empowered to verify claims made by network operators. Network operators will be able to provide more information about their network without compromising security or privacy.

2017-10-18
Luger, Ewa, Sellen, Abigail.  2016.  "Like Having a Really Bad PA": The Gulf Between User Expectation and Experience of Conversational Agents. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. :5286–5297.

The past four years have seen the rise of conversational agents (CAs) in everyday life. Apple, Microsoft, Amazon, Google and Facebook have all embedded proprietary CAs within their software and, increasingly, conversation is becoming a key mode of human-computer interaction. Whilst we have long been familiar with the notion of computers that speak, the investigative concern within HCI has been upon multimodality rather than dialogue alone, and there is no sense of how such interfaces are used in everyday life. This paper reports the findings of interviews with 14 users of CAs in an effort to understand the current interactional factors affecting everyday use. We find user expectations dramatically out of step with the operation of the systems, particularly in terms of known machine intelligence, system capability and goals. Using Norman's 'gulfs of execution and evaluation' [30] we consider the implications of these findings for the design of future systems.

2017-04-03
Urbina, David I., Giraldo, Jairo A., Cardenas, Alvaro A., Tippenhauer, Nils Ole, Valente, Junia, Faisal, Mustafa, Ruths, Justin, Candell, Richard, Sandberg, Henrik.  2016.  Limiting the Impact of Stealthy Attacks on Industrial Control Systems. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1092–1105.

While attacks on information systems have for most practical purposes binary outcomes (information was manipulated/eavesdropped, or not), attacks manipulating the sensor or control signals of Industrial Control Systems (ICS) can be tuned by the attacker to cause a continuous spectrum of damages. Attackers that want to remain undetected can attempt to hide their manipulation of the system by following closely the expected behavior of the system, while injecting just enough false information at each time step to achieve their goals. In this work, we study whether attack detection can limit the impact of such stealthy attacks. We start with a comprehensive review of related work on attack detection schemes in the security and control systems communities. We then show that many of those works use detection schemes that do not limit the impact of stealthy attacks. We propose a new metric to measure the impact of stealthy attacks and how it relates to our selection of an upper bound on false alarms. We finally show that the impact of such attacks can be mitigated in several cases by the proper combination and configuration of detection schemes. We demonstrate the effectiveness of our algorithms through simulations and experiments using real ICS testbeds and real ICS systems.
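
A minimal sketch of the kind of stateful detector this line of work studies (a CUSUM-style accumulator) shows why stealthy attacks are bounded: a small but persistent residual bias accumulates until an alarm fires, forcing the attacker to trade impact against detection time. The `bias` and `threshold` values are illustrative, not taken from the paper.

```python
def cusum(residuals, bias=0.1, threshold=1.0):
    """CUSUM-style detector over sensor residuals: accumulate the
    part of each residual exceeding `bias`; alarm when the running
    sum crosses `threshold`. Returns the alarm time step, or None."""
    s = 0.0
    for t, r in enumerate(residuals):
        s = max(0.0, s + abs(r) - bias)
        if s > threshold:
            return t  # alarm raised at step t
    return None

# A stealthy attacker injecting a constant 0.3 offset is caught in a
# few steps; an offset under the bias never trips the alarm.
print(cusum([0.3] * 20))
print(cusum([0.05] * 20))
```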

2017-03-07
Schild, Christopher-J., Schultz, Simone.  2016.  Linking Deutsche Bundesbank Company Data Using Machine-Learning-Based Classification: Extended Abstract. Proceedings of the Second International Workshop on Data Science for Macro-Modeling. :10:1–10:3.

We present a process for linking various Deutsche Bundesbank data sources on companies based on semi-automatic classification. The linkage process involves data cleaning and harmonization, blocking, construction of comparison features, as well as training and testing a statistical classification model on a "ground-truth" subset of known matches and non-matches. The evaluation of our method shows that the process limits the need for manual classification to a small percentage of ambiguously classified match candidates.
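
One comparison feature of the kind such linkage pipelines compute can be sketched with the standard library: a string-similarity score over normalized company names. The normalization rules and the names below are invented for illustration and are not the Bundesbank's actual feature set.

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """One comparison feature for record linkage: string similarity
    on normalized company names, with common German legal-form
    suffixes stripped before comparison (illustrative rules)."""
    def norm(s):
        s = s.lower()
        for suffix in (" gmbh", " ag", " kg", " se"):
            s = s.removesuffix(suffix)
        return s.strip()
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

print(name_similarity("Musterfirma GmbH", "MUSTERFIRMA AG"))  # high: likely match
print(name_similarity("Musterfirma GmbH", "Beispielbank SE")) # low: non-match
```

A trained classifier would combine several such features (name, address, industry code) to label candidate pairs as match or non-match.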

2017-08-22
Hintze, Daniel, Koch, Eckhard, Scholz, Sebastian, Mayrhofer, René.  2016.  Location-based Risk Assessment for Mobile Authentication. Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct. :85–88.

Mobile devices offer access to our digital lives and thus need to be protected against the risk of unauthorized physical access by applying strong authentication, which in turn adversely affects usability. The actual risk, however, depends on dynamic factors like day and time. In this paper we discuss the idea of using location-based risk assessment in combination with multi-modal biometrics to adjust the level of authentication necessary to the situational risk of unauthorized access.
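
The idea of mapping situational risk to a required authentication level can be sketched as a toy scoring rule; the locations, hours, weights, and level names below are invented for illustration, not taken from the paper.

```python
def required_auth_level(location, hour):
    """Toy location- and time-based risk model: familiar places and
    daytime hours lower the risk score, so a weaker (more usable)
    authentication mechanism suffices. All weights are invented."""
    risk = 1
    if location not in {"home", "office"}:
        risk += 2  # unfamiliar place: higher risk of unauthorized access
    if hour < 7 or hour > 22:
        risk += 1  # unusual time of day
    levels = {1: "none", 2: "PIN", 3: "PIN", 4: "PIN + biometric"}
    return levels.get(risk, "PIN + biometric")

print(required_auth_level("home", 14))    # low risk: no prompt
print(required_auth_level("airport", 3))  # high risk: strong multi-modal auth
```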

2017-05-22
Bloom, Gedare, Parmer, Gabriel, Simha, Rahul.  2016.  LockDown: An Operating System for Achieving Service Continuity by Quarantining Principals. Proceedings of the 9th European Workshop on System Security. :7:1–7:6.

This paper introduces quarantine, a new security primitive for an operating system to use in order to protect information and isolate malicious behavior. Quarantine's core feature is the ability to fork a protection domain on-the-fly to isolate a specific principal's execution of untrusted code without risk of a compromise spreading. Forking enables the OS to ensure service continuity by permitting even high-risk operations to proceed, albeit subject to greater scrutiny and constraints. Quarantine even partitions executing threads that share resources into isolated protection domains. We discuss the design and implementation of quarantine within the LockDown OS, a security-focused evolution of the Composite component-based microkernel OS. Initial performance results for quarantine show that about 98% of the overhead comes from the cost of copying memory to the new protection domain.

2017-05-18
Hsu, Daniel, Sabato, Sivan.  2016.  Loss Minimization and Parameter Estimation with Heavy Tails. J. Mach. Learn. Res. 17:543–582.

This work studies applications and generalizations of a simple estimation technique that provides exponential concentration under heavy-tailed distributions, assuming only bounded low-order moments. We show that the technique can be used for approximate minimization of smooth and strongly convex losses, and specifically for least squares linear regression. For instance, our d-dimensional estimator requires just O(d log(1/δ)) random samples to obtain a constant factor approximation to the optimal least squares loss with probability 1-δ, without requiring the covariates or noise to be bounded or subgaussian. We provide further applications to sparse linear regression and low-rank covariance matrix estimation with similar allowances on the noise and covariate distributions. The core technique is a generalization of the median-of-means estimator to arbitrary metric spaces.
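
The core technique, the median-of-means estimator, is simple enough to sketch directly: split the sample into blocks, average each block, and take the median of the block means. A few wild outliers can corrupt only a few block means, and the median ignores those.

```python
import random
import statistics

def median_of_means(samples, k):
    """Median-of-means: partition the samples into k contiguous
    blocks, average each block, and return the median of the block
    means. Robust to heavy tails, unlike the plain sample mean."""
    m = len(samples) // k
    means = [sum(samples[i * m:(i + 1) * m]) / m for i in range(k)]
    return statistics.median(means)

random.seed(0)
# Heavy-tailed data: 1040 well-behaved points plus 10 huge outliers.
data = [1.0 + random.gauss(0, 0.1) for _ in range(1040)] + [1000.0] * 10
print(round(sum(data) / len(data), 1))      # plain mean, dragged far from 1
print(round(median_of_means(data, 21), 1))  # stays near 1.0
```

With 21 blocks and only 10 outliers, at most 10 block means are contaminated, so the median is always taken from a clean block.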

2017-09-05
Page, Adam, Attaran, Nasrin, Shea, Colin, Homayoun, Houman, Mohsenin, Tinoosh.  2016.  Low-Power Manycore Accelerator for Personalized Biomedical Applications. Proceedings of the 26th Edition on Great Lakes Symposium on VLSI. :63–68.

Wearable personal health monitoring systems can offer a cost-effective solution for human healthcare. These systems must provide highly accurate, secure, and quick processing and delivery of vast amounts of data. In addition, wearable biomedical devices are used in inpatient, outpatient, and at-home e-Patient care, and must constantly monitor the patient's biomedical and physiological signals 24/7. These biomedical applications require sampling and processing multiple streams of physiological signals within a strict power and area footprint. The processing typically consists of feature extraction, data fusion, and classification stages that require a large number of digital signal processing and machine learning kernels. In response to these requirements, in this paper, a low-power, domain-specific many-core accelerator named Power Efficient Nano Clusters (PENC) is proposed to map and execute the kernels of these applications. Experimental results show that the manycore is able to reduce energy consumption by up to 80% and 14% for DSP and machine learning kernels, respectively, when optimally parallelized. The performance of the proposed PENC manycore when acting as a coprocessor to an Intel Atom processor is compared with existing commercial off-the-shelf embedded processing platforms including Intel Atom, Xilinx Artix-7 FPGA, and NVIDIA TK1 ARM-A15 with GPU SoC. The results show that the PENC manycore architecture reduces the energy by as much as 10X while outperforming all off-the-shelf embedded processing platforms across all studied machine learning classifiers.

2017-09-19
Hamid, Yasir, Sugumaran, M., Journaux, Ludovic.  2016.  Machine Learning Techniques for Intrusion Detection: A Comparative Analysis. Proceedings of the International Conference on Informatics and Analytics. :53:1–53:6.

With its growth, the Internet has transformed into a global market where monetary and business activities are carried out online. Being such an important resource, it is a vulnerable target and hence needs to be secured against users with malicious intent. Since the Internet has no central surveillance mechanism, attackers occasionally find a path to bypass a system's security using varied and evolving hacking techniques; one such class of attacks is intrusion. An intrusion is the act of breaking into a system by compromising its security policies, and the technique of examining system data for possible intrusions is known as intrusion detection. For the last two decades, automatic intrusion detection has been an important research topic. Researchers have developed Intrusion Detection Systems (IDS) capable of detecting attacks in several available environments; the latest on the scene are machine learning approaches. Machine learning techniques are a set of evolving algorithms that learn from experience, improve their performance in situations they have already encountered, and enjoy a broad range of applications in speech recognition, pattern detection, outlier analysis, etc. Many machine learning techniques have been developed for different applications, and no universal technique works equally well on all datasets. In this work, we evaluate all the machine learning algorithms provided by Weka against the standard intrusion detection dataset, KDDCup99. The measurements considered are False Positive Rate, precision, ROC, and True Positive Rate.
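
The reported measurements (True Positive Rate, False Positive Rate, precision) all derive from a classifier's confusion matrix; a minimal helper makes the definitions concrete (ROC AUC additionally needs ranking scores, so it is omitted). The counts in the example are invented.

```python
def detection_metrics(tp, fp, tn, fn):
    """Compute the standard IDS evaluation metrics from a confusion
    matrix: TPR (recall), FPR, and precision."""
    tpr = tp / (tp + fn)        # fraction of intrusions detected
    fpr = fp / (fp + tn)        # fraction of benign traffic flagged
    precision = tp / (tp + fp)  # fraction of alarms that are real
    return tpr, fpr, precision

# e.g. a detector evaluated on 1000 connections, 100 of them intrusions:
print(detection_metrics(tp=90, fp=45, tn=855, fn=10))
```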

2017-10-03
Luu, Loi, Chu, Duc-Hiep, Olickel, Hrishi, Saxena, Prateek, Hobor, Aquinas.  2016.  Making Smart Contracts Smarter. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :254–269.

Cryptocurrencies record transactions in a decentralized data structure called a blockchain. Two of the most popular cryptocurrencies, Bitcoin and Ethereum, support the feature to encode rules or scripts for processing transactions. This feature has evolved to give practical shape to the ideas of smart contracts, or full-fledged programs that are run on blockchains. Recently, Ethereum's smart contract system has seen steady adoption, supporting tens of thousands of contracts holding millions of dollars' worth of virtual coins. In this paper, we investigate the security of running smart contracts based on Ethereum in an open distributed network like those of cryptocurrencies. We introduce several new security problems in which an adversary can manipulate smart contract execution to gain profit. These bugs suggest subtle gaps in the understanding of the distributed semantics of the underlying platform. As a refinement, we propose ways to enhance the operational semantics of Ethereum to make contracts less vulnerable. For developers writing contracts for the existing Ethereum system, we build a symbolic execution tool called Oyente to find potential security bugs. Among 19,336 existing Ethereum contracts, Oyente flags 8,833 of them as vulnerable, including the TheDAO bug, which led to a 60 million US dollar loss in June 2016. We also discuss the severity of other attacks for several case studies which have source code available, and confirm the attacks (which target only our accounts) in the main Ethereum network.

2017-09-15
Vemparala, Swapna, Di Troia, Fabio, Visaggio, Corrado Aaron, Austin, Thomas H., Stamp, Mark.  2016.  Malware Detection Using Dynamic Birthmarks. Proceedings of the 2016 ACM on International Workshop on Security And Privacy Analytics. :41–46.

In this paper, we compare the effectiveness of Hidden Markov Models (HMMs) with that of Profile Hidden Markov Models (PHMMs), where both are trained on sequences of API calls. We compare our results to static analysis using HMMs trained on sequences of opcodes, and show that dynamic analysis achieves significantly stronger results in many cases. Furthermore, in comparing our two dynamic analysis approaches, we find that using PHMMs consistently outperforms our technique based on HMMs.
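
A first-order Markov chain over API calls, a simplified stand-in for the HMMs and PHMMs the paper trains, illustrates the dynamic-birthmark idea: traces unlike the training family score a much lower likelihood. The API names and traces below are invented.

```python
import math
from collections import defaultdict

def train_markov(sequences):
    """Learn first-order transition probabilities over API calls from
    dynamic traces of a malware family (a simplified HMM stand-in)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def log_score(model, seq, floor=1e-6):
    """Average log-likelihood of a trace under the model; low scores
    mean 'unlike the training family'. `floor` handles unseen calls."""
    lps = [math.log(model.get(a, {}).get(b, floor))
           for a, b in zip(seq, seq[1:])]
    return sum(lps) / len(lps)

family = [["open", "read", "encrypt", "write"],
          ["open", "read", "encrypt", "send"]]
model = train_markov(family)
print(log_score(model, ["open", "read", "encrypt", "write"]))  # near 0: likely
print(log_score(model, ["alloc", "jump", "alloc", "jump"]))    # very negative
```

Classification then amounts to thresholding the score, as with the HMM and PHMM scores in the paper.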

2017-05-22
Keller, Marcel, Orsini, Emmanuela, Scholl, Peter.  2016.  MASCOT: Faster Malicious Arithmetic Secure Computation with Oblivious Transfer. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :830–842.

We consider the task of secure multi-party computation of arithmetic circuits over a finite field. Unlike Boolean circuits, arithmetic circuits allow natural computations on integers to be expressed easily and efficiently. In the strongest setting of malicious security with a dishonest majority, where any number of parties may deviate arbitrarily from the protocol, most existing protocols require expensive public-key cryptography for each multiplication in the preprocessing stage of the protocol, which leads to a high total cost. We present a new protocol that overcomes this limitation by using oblivious transfer to perform secure multiplications in general finite fields with reduced communication and computation. Our protocol is based on an arithmetic view of oblivious transfer, with careful consistency checks and other techniques to obtain malicious security at a cost of less than 6 times that of semi-honest security. We describe a highly optimized implementation together with experimental results for up to five parties. By making extensive use of parallelism and SSE instructions, we improve upon previous runtimes for MPC over arithmetic circuits by more than 200 times.
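
The online phase that MASCOT's OT-based preprocessing feeds, multiplication of secret-shared values with a Beaver triple, can be sketched for two parties over a small prime field. This sketch covers only correctness, not the consistency checks and authentication that give malicious security.

```python
import random

def share(x, p):
    """Additive secret sharing over F_p: x = x0 + x1 (mod p)."""
    r = random.randrange(p)
    return r, (x - r) % p

def beaver_mul(x_sh, y_sh, triple, p):
    """Two-party Beaver-triple multiplication over F_p. Producing the
    shared triple (a, b, c) with c = a*b is the expensive
    preprocessing that MASCOT accelerates via oblivious transfer; the
    online phase below is the standard textbook protocol."""
    (a0, a1), (b0, b1), (c0, c1) = triple
    x0, x1 = x_sh
    y0, y1 = y_sh
    # Parties open d = x - a and e = y - b; these leak nothing,
    # since a and b are uniformly random masks.
    d = (x0 - a0 + x1 - a1) % p
    e = (y0 - b0 + y1 - b1) % p
    # Each party computes its share of x*y; only one party adds d*e.
    z0 = (c0 + d * b0 + e * a0 + d * e) % p
    z1 = (c1 + d * b1 + e * a1) % p
    return (z0 + z1) % p  # reconstruct the product

p = 101
a, b = random.randrange(p), random.randrange(p)
triple = (share(a, p), share(b, p), share(a * b % p, p))
print(beaver_mul(share(6, p), share(7, p), triple, p))  # 42 for any randomness
```

Correctness follows from c + (x-a)b + (y-b)a + (x-a)(y-b) = xy, which holds for every choice of the random masks.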

2017-07-24
Smart, Nigel P..  2016.  Masking and MPC: When Crypto Theory Meets Crypto Practice. Proceedings of the 2016 ACM Workshop on Theory of Implementation Security. :1–1.

I will explain the linkage between threshold implementation masking schemes and multi-party computation. The basic principles that need to be taken from multi-party computation will be presented, as well as some basic protocols. The different natures of the resources and threat models between the two different applications of secret sharing will also be covered.

2017-10-25
Sonowal, Gunikhan, Kuppusamy, K. S..  2016.  MASPHID: A Model to Assist Screen Reader Users for Detecting Phishing Sites Using Aural and Visual Similarity Measures. Proceedings of the International Conference on Informatics and Analytics. :87:1–87:6.

Phishing is one of the major issues in cyber security. In phishing, attackers steal sensitive information from users by impersonating legitimate websites. The information captured by phishers is used in a variety of scenarios, such as making illegal online purchases or selling the collected user data to illicit sources. To date, various detection techniques have been proposed by different researchers, but phishing detection remains a challenging problem. While phishing is a threat to all users, persons with visual impairments fall into the soft-target category, as they primarily depend on non-visual web access. Persons with visual impairments rely solely on the audio generated by screen readers to identify and comprehend a web page. Attackers can exploit this weak link by creating impersonating sites that produce the same audio output but are visually different. This paper proposes a model titled "MASPHID" (Model for Assisting Screenreader users in Phishing Detection) to assist persons with visual impairments in detecting phishing sites that are aurally similar but visually dissimilar. The proposed technique is designed so that phishing detection can be carried out without burdening users with technical details. The model works against zero-day phishing attacks and achieves high accuracy.

2017-03-07
Wang, Xi, Sun, Zhenfeng, Zhang, Wenqiang, Zhou, Yu, Jiang, Yu-Gang.  2016.  Matching User Photos to Online Products with Robust Deep Features. Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval. :7–14.

This paper focuses on a practically very important problem: matching a real-world product photo to exactly the same item(s) in online shopping sites. The task is extremely challenging because the user photos (i.e., the queries in this scenario) are often captured in uncontrolled environments, while the product images in online shops are mostly taken by professionals with clean backgrounds and perfect lighting conditions. To tackle the problem, we study deep network architectures and training schemes, with the goal of learning a robust deep feature representation that is able to bridge the domain gap between the user photos and the online product images. Our contributions are two-fold. First, we propose an alternative to the popular contrastive loss used in siamese deep networks, namely robust contrastive loss, where we "relax" the penalty on positive pairs to alleviate over-fitting. Second, a multi-task fine-tuning approach is introduced to learn a better feature representation, which not only incorporates knowledge from the provided training photo pairs, but also explores additional information from the large ImageNet dataset to regularize the fine-tuning procedure. Experiments on two challenging real-world datasets demonstrate that both the robust contrastive loss and the multi-task fine-tuning approach are effective, leading to very promising results with a time cost suitable for real-time retrieval.
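
The "relaxed" positive penalty can be sketched against the standard contrastive loss. The `slack` parameter and its value are an assumed reading of the robust contrastive loss described above, not the paper's exact formulation.

```python
def contrastive_loss(d, same, margin=1.0):
    """Standard contrastive loss on a pair at embedding distance d:
    positives are pulled toward distance 0, negatives pushed beyond
    `margin`."""
    if same:
        return d ** 2
    return max(0.0, margin - d) ** 2

def robust_contrastive_loss(d, same, margin=1.0, slack=0.3):
    """Sketch of a 'relaxed' positive penalty: positive pairs already
    within `slack` incur no loss, reducing over-fitting to hard
    positive pairs. `slack` is an assumed hyperparameter."""
    if same:
        return max(0.0, d - slack) ** 2
    return max(0.0, margin - d) ** 2

print(round(contrastive_loss(0.2, True), 4))         # small but nonzero pull
print(round(robust_contrastive_loss(0.2, True), 4))  # close positives: no penalty
```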

2017-08-02
Symeonidis, Panagiotis.  2016.  Matrix and Tensor Decomposition in Recommender Systems. Proceedings of the 10th ACM Conference on Recommender Systems. :429–430.

This tutorial offers a rich blend of theory and practice regarding dimensionality reduction methods to address the information overload problem in recommender systems. This problem affects our everyday experience while searching for knowledge on a topic. Naive Collaborative Filtering cannot deal with challenging issues such as scalability, noise, and sparsity. We can deal with all of the aforementioned challenges by applying matrix and tensor decomposition methods. These methods have been proven to be the most accurate (i.e., the Netflix prize) and efficient for handling big data. For each method (SVD, SVD++, timeSVD++, HOSVD, CUR, etc.) we will provide a detailed theoretical mathematical background and a step-by-step analysis, using an integrated toy example that runs throughout all parts of the tutorial, helping the audience to clearly understand the differences among factorisation methods.
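
The factorization idea behind these methods can be sketched as plain SGD matrix factorization (R ≈ PQᵀ, with missing entries skipped), which is the common core of the SVD-style variants the tutorial covers. The toy matrix and hyperparameters are illustrative.

```python
import random

def factorize(R, k=2, steps=3000, lr=0.01, reg=0.02):
    """Plain SGD matrix factorization: learn P (users x k) and
    Q (items x k) so that R ~= P Q^T, skipping missing entries
    (None). Regularization `reg` keeps the factors small."""
    random.seed(1)
    n, m = len(R), len(R[0])
    P = [[random.uniform(0, 0.1) for _ in range(k)] for _ in range(n)]
    Q = [[random.uniform(0, 0.1) for _ in range(k)] for _ in range(m)]
    for _ in range(steps):
        for i in range(n):
            for j in range(m):
                if R[i][j] is None:
                    continue
                err = R[i][j] - sum(P[i][f] * Q[j][f] for f in range(k))
                for f in range(k):
                    pif = P[i][f]
                    P[i][f] += lr * (err * Q[j][f] - reg * pif)
                    Q[j][f] += lr * (err * pif - reg * Q[j][f])
    return P, Q

# Toy user-item rating matrix; None marks unobserved ratings.
R = [[5, 3, None, 1],
     [4, None, None, 1],
     [1, 1, None, 5],
     [1, None, None, 4]]
P, Q = factorize(R)
# Predict one of the missing ratings from the learned factors:
print(sum(P[1][f] * Q[1][f] for f in range(2)))
```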

2017-04-24
Peres, Bruna Soares, Souza, Otavio Augusto de Oliveira, Santos, Bruno Pereira, Junior, Edson Roteia Araujo, Goussevskaia, Olga, Vieira, Marcos Augusto Menezes, Vieira, Luiz Filipe Menezes, Loureiro, Antonio Alfredo Ferreira.  2016.  Matrix: Multihop Address Allocation and Dynamic Any-to-Any Routing for 6LoWPAN. Proceedings of the 19th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems. :302–309.

Standard routing protocols for IPv6 over Low power Wireless Personal Area Networks (6LoWPAN) are mainly designed for data collection applications and work by establishing a tree-based network topology, which enables packets to be sent upwards, from the leaves to the root, adapting to dynamics of low-power communication links. The routing tables in such unidirectional networks are very simple and small since each node just needs to maintain the address of its parent in the tree, providing the best-quality route at every moment. In this work, we propose Matrix, a platform-independent routing protocol that utilizes the existing tree structure of the network to enable reliable and efficient any-to-any data traffic. Matrix uses hierarchical IPv6 address assignment in order to optimize routing table size, while preserving bidirectional routing. Moreover, it uses a local broadcast mechanism to forward messages to the right subtree when persistent node or link failures occur. We implemented Matrix on TinyOS and evaluated its performance both analytically and through simulations on TOSSIM. Our results show that the proposed protocol is superior to available protocols for 6LoWPAN, when it comes to any-to-any data communication, in terms of reliability, message efficiency, and memory footprint.
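
The hierarchical address assignment that keeps routing tables small in such schemes can be sketched with integer ranges standing in for IPv6 prefixes: each node owns a sub-range of its parent's range, so forwarding is a range check per child plus a default route to the parent. The tree and range arithmetic below are illustrative, not the protocol's actual address layout.

```python
def allocate(tree, node, lo, hi, table):
    """Hierarchical address allocation: `node` takes address `lo` and
    carves the rest of [lo, hi] into contiguous sub-ranges, one per
    child subtree. table[node] = (own_addr, {child: (lo, hi)})."""
    kids = tree.get(node, [])
    child_ranges = {}
    start = lo + 1  # lo itself is this node's own address
    for i, k in enumerate(kids):
        span = (hi - lo) // len(kids)
        k_hi = hi if i == len(kids) - 1 else start + span - 1
        child_ranges[k] = (start, k_hi)
        allocate(tree, k, start, k_hi, table)
        start = k_hi + 1
    table[node] = (lo, child_ranges)

def next_hop(table, parent_of, node, dst):
    """Any-to-any routing with tiny tables: send down to the child
    whose range contains dst, otherwise up toward the root."""
    _, child_ranges = table[node]
    for child, (clo, chi) in child_ranges.items():
        if clo <= dst <= chi:
            return child
    return parent_of[node]

tree = {"R": ["A", "B"], "A": ["C"]}
parent_of = {"A": "R", "B": "R", "C": "A"}
table = {}
allocate(tree, "R", 0, 15, table)
# Route from B to C's address: up to the root, then down into A's subtree.
dst = table["C"][0]
print(next_hop(table, parent_of, "B", dst))
```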

2017-05-17
Mell, Peter, Shook, James, Harang, Richard.  2016.  Measuring and Improving the Effectiveness of Defense-in-Depth Postures. Proceedings of the 2nd Annual Industrial Control System Security Workshop. :15–22.

Defense-in-depth is an important security architecture principle that has significant application to industrial control systems (ICS), cloud services, storehouses of sensitive data, and many other areas. We claim that an ideal defense-in-depth posture is 'deep', containing many layers of security, and 'narrow', minimizing the number of node-independent attack paths. Unfortunately, accurately calculating both depth and width is difficult using standard graph algorithms because of a lack of independence between multiple vulnerability instances (i.e., if an attacker can penetrate a particular vulnerability on one host then they can likely penetrate the same vulnerability on another host). To address this, we represent known weaknesses and vulnerabilities as a type of colored attack graph. We measure depth and width through solving the shortest color path and minimum color cut problems. We prove both of these to be NP-Hard, and thus for our solution we provide a suite of greedy heuristics. We then empirically apply our approach to large randomly generated networks as well as to ICS networks generated from a published ICS attack template. Lastly, we discuss how to use these results to help guide improvements to defense-in-depth postures.
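
The shortest-color-path notion can be made concrete on a toy colored attack graph, where edges sharing a color share a vulnerability: the answer is the fewest distinct vulnerabilities an attacker must exploit to reach the target. The paper proves this NP-hard and uses greedy heuristics; brute force over color subsets suffices for a toy example, which is what this sketch does.

```python
from itertools import combinations

def min_color_path(graph, colors, src, dst):
    """Exhaustive shortest color path: smallest number of distinct
    edge colors (shared vulnerabilities) that suffice to connect src
    to dst. Exponential in the palette size; toy graphs only."""
    palette = sorted(set(colors.values()))
    for k in range(1, len(palette) + 1):
        for allowed in combinations(palette, k):
            # Graph search restricted to edges with an allowed color.
            seen, stack = {src}, [src]
            while stack:
                u = stack.pop()
                if u == dst:
                    return k
                for v in graph.get(u, []):
                    if colors[(u, v)] in allowed and v not in seen:
                        seen.add(v)
                        stack.append(v)
    return None

# Two hosts sharing a CVE share an edge color; exploiting one
# "teaches" the attacker the other for free.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
colors = {("A", "B"): "cve1", ("B", "D"): "cve1",
          ("A", "C"): "cve2", ("C", "D"): "cve3"}
print(min_color_path(graph, colors, "A", "D"))  # path A-B-D reuses cve1
```

A 'deep' posture would force this number up; a hop-count shortest path would miss that A-B-D is effectively one exploit.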

2017-04-24
Jonker, Mattijs, Sperotto, Anna, van Rijswijk-Deij, Roland, Sadre, Ramin, Pras, Aiko.  2016.  Measuring the Adoption of DDoS Protection Services. Proceedings of the 2016 Internet Measurement Conference. :279–285.

Distributed Denial-of-Service (DDoS) attacks have steadily gained in popularity over the last decade, their intensity ranging from mere nuisance to severe. The increased number of attacks, combined with the loss of revenue for the targets, has given rise to a market for DDoS Protection Service (DPS) providers, to whom victims can outsource the cleansing of their traffic by using traffic diversion. In this paper, we investigate the adoption of cloud-based DPSs worldwide. We focus on nine leading providers. Our outlook on adoption is made on the basis of active DNS measurements. We introduce a methodology that allows us, for a given domain name, to determine if traffic diversion to a DPS is in effect. It also allows us to distinguish various methods of traffic diversion and protection. For our analysis we use a long-term, large-scale data set that covers well over 50% of all names in the global domain namespace, in daily snapshots, over a period of 1.5 years. Our results show that DPS adoption has grown by 1.24x in our measurement period, a prominent trend compared to the overall expansion of the namespace. Our study also reveals that adoption is often led by big players such as large Web hosters, which activate or deactivate DDoS protection for millions of domain names at once.
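
The DNS-based detection of traffic diversion can be sketched as a CNAME-chain check: if a domain's resolution chain ends in a known DPS provider's zone, diversion to that provider is likely in effect. The provider suffixes below are illustrative guesses, not the paper's actual fingerprints.

```python
# Illustrative DPS provider zones (assumed, not from the paper).
DPS_SUFFIXES = {
    "cloudflare.net": "Cloudflare",
    "incapdns.net": "Imperva Incapsula",
    "akamaiedge.net": "Akamai",
}

def detect_dps(cname_chain):
    """Given the CNAME chain observed for a domain, return the DPS
    provider it appears to divert traffic to, or None."""
    for name in cname_chain:
        for suffix, provider in DPS_SUFFIXES.items():
            if name.endswith(suffix):
                return provider
    return None

print(detect_dps(["www.example.com", "example.com.cdn.cloudflare.net"]))
print(detect_dps(["www.example.org"]))
```

In the paper's setting this check is run over daily active-DNS snapshots, so activations and deactivations of protection become visible over time.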