Biblio

Filters: Keyword is sampling
2020-08-28
Hasanin, Tawfiq, Khoshgoftaar, Taghi M., Leevy, Joffrey L..  2019.  A Comparison of Performance Metrics with Severely Imbalanced Network Security Big Data. 2019 IEEE 20th International Conference on Information Reuse and Integration for Data Science (IRI). :83–88.

Severe class imbalance between the majority and minority classes in large datasets can prejudice Machine Learning classifiers toward the majority class. Our work uniquely consolidates two case studies, each utilizing three learners implemented within an Apache Spark framework, six sampling methods, and five sampling distribution ratios to analyze the effect of severe class imbalance on big data analytics. We use three performance metrics to evaluate this study: Area Under the Receiver Operating Characteristic Curve, Area Under the Precision-Recall Curve, and Geometric Mean. In the first case study, models were trained on one dataset (POST) and tested on another (SlowlorisBig). In the second case study, the training and testing dataset roles were switched. Our comparison of performance metrics shows that Area Under the Precision-Recall Curve and Geometric Mean are sensitive to changes in the sampling distribution ratio, whereas Area Under the Receiver Operating Characteristic Curve is relatively unaffected. In addition, we demonstrate that when comparing sampling methods, borderline-SMOTE2 outperforms the other methods in the first case study, and Random Undersampling is the top performer in the second case study.
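
The metrics compared in this abstract are easy to reproduce on a toy problem. The following is a minimal sketch, not the authors' Apache Spark pipeline: it uses plain scikit-learn on a synthetic dataset, and the 1% positive rate, the logistic-regression learner, and the 1:1 undersampling ratio are illustrative assumptions rather than values taken from the study.

```python
# Minimal sketch, not the authors' Apache Spark pipeline: compute AUC-ROC, AUPRC,
# and Geometric Mean after random undersampling of a severely imbalanced dataset.
# The 1% positive rate, the learner, and the 1:1 sampling ratio are illustrative
# assumptions, not values taken from the study.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=50_000, weights=[0.99, 0.01], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Random undersampling: keep all minority examples and an equal-sized majority sample.
rng = np.random.default_rng(0)
minority = np.where(y_tr == 1)[0]
majority = rng.choice(np.where(y_tr == 0)[0], size=len(minority), replace=False)
idx = np.concatenate([minority, majority])

model = LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])
scores = model.predict_proba(X_te)[:, 1]
preds = (scores >= 0.5).astype(int)

tpr = recall_score(y_te, preds)               # sensitivity (recall on the minority class)
tnr = recall_score(y_te, preds, pos_label=0)  # specificity
print("AUC-ROC:", roc_auc_score(y_te, scores))
print("AUPRC  :", average_precision_score(y_te, scores))
print("G-Mean :", np.sqrt(tpr * tnr))
```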

2018-11-28
Biswas, Swarnendu, Cao, Man, Zhang, Minjia, Bond, Michael D., Wood, Benjamin P..  2017.  Lightweight Data Race Detection for Production Runs. Proceedings of the 26th International Conference on Compiler Construction. :11–21.

To detect data races that harm production systems, program analysis must target production runs. However, sound and precise data race detection adds too much run-time overhead for use in production systems. Even existing approaches that provide soundness or precision incur significant limitations. This work addresses the need for soundness (no missed races) and precision (no false races) by introducing novel, efficient production-time analyses that address each need separately. (1) Precise data race detection is useful for developers, who want to fix bugs but loathe false positives. We introduce a precise analysis called RaceChaser that provides low, bounded run-time overhead. (2) Sound race detection benefits analyses and tools whose correctness relies on knowledge of all potential data races. We present a sound, efficient approach called Caper that combines static and dynamic analysis to catch all data races in observed runs. RaceChaser and Caper are useful not only on their own; we introduce a framework that combines these analyses, using Caper as a sound filter for precise data race detection by RaceChaser. Our evaluation shows that RaceChaser and Caper are efficient and effective, and compare favorably with existing state-of-the-art approaches. These results suggest that RaceChaser and Caper enable practical data race detection that is precise and sound, respectively, ultimately leading to more reliable software systems.
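
As a reminder of the bug class these analyses target, the short sketch below shows a racy unsynchronized increment and its lock-protected fix. It is an illustration only, written in Python for brevity; it is not the paper's instrumentation or analysis.

```python
# Illustration of the bug class only, not the paper's analyses: a racy
# unsynchronized increment and its lock-protected fix.
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1          # unsynchronized read-modify-write: a data race

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:            # holding the lock removes the race
            counter += 1

threads = [threading.Thread(target=unsafe_increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("unsynchronized total (may fall short of 400000):", counter)
```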

2017-08-02
Agarwal, Pankaj K., Fox, Kyle, Munagala, Kamesh, Nath, Abhinandan.  2016.  Parallel Algorithms for Constructing Range and Nearest-Neighbor Searching Data Structures. Proceedings of the 35th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems. :429–440.

With the massive amounts of data available today, it is common to store and process data using multiple machines. Parallel programming platforms such as MapReduce and its variants are popular frameworks for handling such large data. We present the first provably efficient algorithms to compute, store, and query data structures for range queries and approximate nearest neighbor queries in a popular parallel computing abstraction that captures the salient features of MapReduce and other massively parallel communication (MPC) models. In particular, we describe algorithms for kd-trees, range trees, and BBD-trees that only require O(1) rounds of communication for both preprocessing and querying while staying competitive in terms of running time and workload to their classical counterparts. Our algorithms are randomized, but they can be made deterministic at some increase in their running time and workload while keeping the number of rounds of communication to be constant.
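
The query types these structures answer are easy to illustrate sequentially. The sketch below uses SciPy's kd-tree on random 2-D points; it is a sequential stand-in, not the paper's O(1)-round MPC construction.

```python
# Sequential illustration only, not the paper's O(1)-round MPC construction: build a
# kd-tree over random 2-D points and answer a nearest-neighbor and a range (ball) query.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((100_000, 2))                 # 2-D point set
tree = cKDTree(points)

query = np.array([0.5, 0.5])
dist, idx = tree.query(query, k=1)                # exact nearest-neighbor query
print("nearest neighbor:", points[idx], "at distance", dist)

in_range = tree.query_ball_point(query, r=0.01)   # range (ball) query
print("points within radius 0.01:", len(in_range))
```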

2017-09-26
Rothberg, Valentin, Dietrich, Christian, Ziegler, Andreas, Lohmann, Daniel.  2016.  Towards Scalable Configuration Testing in Variable Software. Proceedings of the 2016 ACM SIGPLAN International Conference on Generative Programming: Concepts and Experiences. :156–167.

Testing a software product line such as Linux implies building the source with different configurations. Manual approaches to generate configurations that enable code of interest are doomed to fail due to the large number of variation points distributed over the feature model, the build system and the source code. Research has proposed various approaches to generate covering configurations, but the algorithms show many drawbacks related to run-time, exhaustiveness and the number of generated configurations. Hence, analyzing an entire Linux source can yield more than 30 thousand configurations and thereby exceed the limited budget and resources for build testing. In this paper, we present an approach to fill the gap between a systematic generation of configurations and the necessity to fully build software in order to test it. By merging previously generated configurations, we reduce the number of necessary builds and enable global variability-aware testing. We reduce the problem of merging configurations to finding maximum cliques in a graph. We evaluate the approach on the Linux kernel, compare the results to common practices in industry, and show that our implementation scales even when facing graphs with millions of edges.
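
The clique reduction can be sketched on toy data. In the example below, configurations are nodes, an edge connects configurations whose option assignments do not conflict, and each maximal clique is merged into one buildable configuration; the option names are hypothetical and networkx stands in for the authors' tooling.

```python
# Toy sketch of the clique reduction (hypothetical option names, networkx instead of
# the authors' tooling): configurations are nodes, an edge connects configurations
# whose option assignments do not conflict, and each maximal clique can be merged
# into a single buildable configuration.
import networkx as nx

configs = {
    "c1": {"CONFIG_A": "y", "CONFIG_B": "n"},
    "c2": {"CONFIG_A": "y", "CONFIG_C": "y"},
    "c3": {"CONFIG_B": "y"},                      # conflicts with c1 on CONFIG_B
    "c4": {"CONFIG_C": "y", "CONFIG_D": "n"},
}

def compatible(a, b):
    shared = set(a) & set(b)
    return all(a[k] == b[k] for k in shared)      # no contradicting assignments

g = nx.Graph()
g.add_nodes_from(configs)
g.add_edges_from((u, v) for u in configs for v in configs
                 if u < v and compatible(configs[u], configs[v]))

for clique in nx.find_cliques(g):                 # enumerate maximal cliques
    merged = {}
    for name in clique:
        merged.update(configs[name])
    print(sorted(clique), "->", merged)
```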

2017-03-07
Chu, Xu, Ilyas, Ihab F., Krishnan, Sanjay, Wang, Jiannan.  2016.  Data Cleaning: Overview and Emerging Challenges. Proceedings of the 2016 International Conference on Management of Data. :2201–2206.

Detecting and repairing dirty data is one of the perennial challenges in data analytics, and failure to do so can result in inaccurate analytics and unreliable decisions. Over the past few years, there has been a surge of interest from both industry and academia on data cleaning problems including new abstractions, interfaces, approaches for scalability, and statistical techniques. To better understand the new advances in the field, we will first present a taxonomy of the data cleaning literature in which we highlight the recent interest in techniques that use constraints, rules, or patterns to detect errors, which we call qualitative data cleaning. We will describe the state-of-the-art techniques and also highlight their limitations with a series of illustrative examples. While traditionally such approaches are distinct from quantitative approaches such as outlier detection, we also discuss recent work that casts such approaches into a statistical estimation framework including: using Machine Learning to improve the efficiency and accuracy of data cleaning and considering the effects of data cleaning on statistical analysis.
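
A toy pandas example can make the distinction concrete: a qualitative, rule-based check of a functional dependency (zip determines city) alongside a quantitative z-score outlier check. The table and the rule are made-up illustrations, not material from the tutorial.

```python
# Made-up table and rule, not from the tutorial: a qualitative, rule-based check
# (functional dependency zip -> city) next to a quantitative z-score outlier check.
import pandas as pd

df = pd.DataFrame({
    "zip":  ["10001", "10001", "94105", "94105", "94105"],
    "city": ["New York", "Newark", "San Francisco", "San Francisco", "San Francisco"],
    "age":  [34, 29, 31, 33, 420],                # 420 looks like an entry error
})

# Qualitative: rows violating the functional dependency zip -> city
cities_per_zip = df.groupby("zip")["city"].nunique()
dirty_zips = cities_per_zip[cities_per_zip > 1].index
print("FD violations:\n", df[df["zip"].isin(dirty_zips)])

# Quantitative: simple z-score outlier detection on a numeric column
z = (df["age"] - df["age"].mean()) / df["age"].std()
print("outliers:\n", df[z.abs() > 1.5])
```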

2016-12-06
Lee, Ju-Sung, Pfeffer, Jurgen..  2015.  Robustness of Network Metrics in the Context of Digital Communication Data. Proceedings of the 2015 48th Hawaii International Conference on System Sciences (HICSS '15).

Social media data and other web-based network data are large and dynamic, rendering the identification of structural changes in such systems a hard problem. Typically, online data is constantly streaming and incomplete, which necessitates an understanding of the robustness of network metrics on partial or sampled network data. In this paper, we examine the effects of sampling on key network centrality metrics using two empirical communication datasets. Correlations between network metrics of original and sampled nodes offer a measure of sampling accuracy. The relationship between sampling and accuracy is convergent and amenable to nonlinear analysis. Naturally, larger edge samples induce sampled graphs that are more representative of the original graph. However, this effect is attenuated when larger sets of nodes are recovered in the samples. Also, we find that the graph structure plays a prominent role in sampling accuracy. Centralized graphs, in which fewer nodes enjoy higher centrality scores, offer more representative samples.
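
The evaluation style described here is easy to mimic on synthetic data. The sketch below samples a fraction of edges, recomputes degree centrality on the sampled graph, and correlates it with the original scores for the recovered nodes; the Barabási-Albert graph and the 30% edge sample are illustrative choices, not the paper's datasets.

```python
# Hedged sketch of the evaluation style described above; the Barabasi-Albert graph
# and the 30% edge sample are illustrative choices, not the paper's datasets.
import random
import networkx as nx
from scipy.stats import spearmanr

g = nx.barabasi_albert_graph(2000, 3, seed=1)     # stand-in for a communication network
orig = nx.degree_centrality(g)

edges = list(g.edges())
sampled = random.Random(1).sample(edges, int(0.3 * len(edges)))
h = nx.Graph(sampled)                             # sampled graph: only recovered nodes

samp = nx.degree_centrality(h)
recovered = list(h.nodes())
rho, _ = spearmanr([orig[n] for n in recovered], [samp[n] for n in recovered])
print(f"recovered {len(recovered)}/{g.number_of_nodes()} nodes; Spearman rho = {rho:.3f}")
```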