Efficient Estimation of Read Density When Caching for Big Data Processing

Title: Efficient Estimation of Read Density When Caching for Big Data Processing
Publication Type: Conference Paper
Year of Publication: 2019
Authors: Talluri, Sacheendra; Iosup, Alexandru
Conference Name: IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)
Keywords: approximate histogram, Arrays, Big Data, big data processing systems, big data workloads, cache policy, cache storage, cached objects, caching policy, cloud computing, cloud workloads, Conferences, data handling, Data models, database management systems, Distributed Systems, Histograms, hit rate, Human Behavior, MapReduce deployment, Mathematical model, Metrics, parallel processing, policy-based technique, pubcrawl, read density, resilience, Resiliency, runtime-space efficient computation, Scalability, Spark deployment, trace-based simulation, traditional database, Web Caching, workload traces representative
Abstract

Big data processing systems are becoming increasingly prevalent in cloud workloads. Consequently, they are starting to incorporate more sophisticated mechanisms from traditional database and distributed systems. In this work, we focus on the use of caching policies, which raise important new challenges for big data. Not only must they respond to new variants of the trade-off between hit rate, response time, and the space consumed by the cache, but they must also do so at possibly higher volume and velocity than web and database workloads. Previous caching policies have not been tested experimentally with big data workloads. We address these challenges in this work. We propose the Read Density family of policies, a principled approach to quantifying the utility of cached objects through a family of utility functions that depend on the frequency of reads of an object. We further design the Approximate Histogram, a policy-based technique based on an array of counters, which promises runtime- and space-efficient computation of the metric required by the cache policy. We evaluate the caching policies from the Read Density family through trace-based simulation and compare them with over ten state-of-the-art alternatives. We use two workload traces representative of big data processing, collected from commercial Spark and MapReduce deployments. While we achieve performance comparable to the state of the art with fewer parameters, meaningful performance improvements for big data workloads remain elusive.
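The abstract describes the Approximate Histogram and the read-density utility only at a high level. The following is a minimal illustrative sketch in Python of that general idea: a fixed array of counters indexed by a hash of the object identifier approximates per-object read counts, and a utility function of read frequency (here assumed to be reads per byte) ranks cached objects for eviction. The class name, the single-array hashing scheme, and the reads-per-byte utility are assumptions made for illustration, not the paper's actual design.

    # Illustrative sketch only: an approximate histogram of per-object read
    # counts kept in a fixed array of counters, plus one assumed member of a
    # read-density utility family (reads per byte). Not the authors' code.
    import hashlib

    class ApproxReadHistogram:
        def __init__(self, num_counters=1024):
            self.counters = [0] * num_counters        # fixed-size counter array

        def _bucket(self, obj_id):
            # hash the object identifier into a counter index
            h = hashlib.blake2b(obj_id.encode(), digest_size=8).digest()
            return int.from_bytes(h, "big") % len(self.counters)

        def record_read(self, obj_id):
            self.counters[self._bucket(obj_id)] += 1  # O(1) time, constant space

        def estimated_reads(self, obj_id):
            # may over-count when distinct objects collide in the same bucket
            return self.counters[self._bucket(obj_id)]

    def read_density(hist, obj_id, size_bytes):
        # assumed utility: estimated read frequency normalized by object size
        return hist.estimated_reads(obj_id) / max(size_bytes, 1)

    # Usage: evict the cached object with the lowest estimated read density.
    hist = ApproxReadHistogram()
    for oid in ["block-1", "block-2", "block-1"]:
        hist.record_read(oid)
    cache = {"block-1": 64 * 1024, "block-2": 256 * 1024}  # object -> size in bytes
    victim = min(cache, key=lambda oid: read_density(hist, oid, cache[oid]))
    print("evict:", victim)

The counter array keeps the per-object bookkeeping constant in size regardless of how many objects pass through the cache, which is the runtime- and space-efficiency property the abstract attributes to the Approximate Histogram; the specific utility function shown is only one plausible instance of the Read Density family.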

DOI: 10.1109/INFCOMW.2019.8845043
Citation Key: talluri_efficient_2019