Biblio

Filters: Keyword is caching policy
2021-02-22
Abdelaal, M., Karadeniz, M., Dürr, F., Rothermel, K.  2020.  liteNDN: QoS-Aware Packet Forwarding and Caching for Named Data Networks. 2020 IEEE 17th Annual Consumer Communications & Networking Conference (CCNC). :1–9.
Recently, named data networking (NDN) has been introduced to connect the world of computing devices by naming data instead of its containers. Through this strategic change, NDN brings several new features to network communication, including in-network caching, multipath forwarding, built-in multicast, and data security. Despite these unique features, NDN leaves ample room for further development, especially in packet forwarding and caching. In this context, we introduce liteNDN, a novel forwarding and caching strategy for NDN networks. liteNDN comprises a cooperative forwarding strategy through which NDN routers share their knowledge, i.e., data names and interfaces, to optimize their packet forwarding decisions. liteNDN then leverages that knowledge to estimate the probability that each downstream path will swiftly retrieve the requested data. Additionally, liteNDN exploits heuristics, such as routing cost and data significance, to decide whether to cache normal as well as segmented packets. The proposed approach has been extensively evaluated in terms of data retrieval latency, network utilization, and cache hit rate. The results show that liteNDN, compared to conventional NDN forwarding and caching strategies, achieves much lower latency while reducing unnecessary traffic and caching activity.
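The forwarding and caching ideas in this abstract can be made concrete with a short sketch. The following Python fragment is an illustration only, not liteNDN's actual algorithm: the function names, the linear scoring of routing cost and data significance, and the 0.5 weight and admission threshold are all assumptions.

```python
def choose_face(face_probs):
    """Pick the downstream face judged most likely to retrieve the data
    quickly. `face_probs` maps face id -> estimated retrieval probability,
    e.g. derived from the names and interfaces shared by neighbor routers.
    Hypothetical sketch; the paper's actual estimator is not shown here."""
    return max(face_probs, key=face_probs.get)


def should_cache(routing_cost, significance, cost_weight=0.5, threshold=0.5):
    """Heuristic cache-admission decision combining routing cost and data
    significance, both normalized to [0, 1]. The weight and threshold are
    assumed values, not taken from the paper."""
    score = cost_weight * routing_cost + (1 - cost_weight) * significance
    return score > threshold


# Usage: prefer the path most likely to succeed, and cache data that is
# expensive to re-fetch or deemed significant.
faces = {"eth0": 0.2, "eth1": 0.7, "eth2": 0.1}
print(choose_face(faces))                                 # -> eth1
print(should_cache(routing_cost=0.8, significance=0.6))   # -> True
```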
2020-02-18
Talluri, S., Iosup, A.  2019.  Efficient Estimation of Read Density When Caching for Big Data Processing. IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :502–507.

Big data processing systems are becoming increasingly common in cloud workloads. Consequently, they are starting to incorporate more sophisticated mechanisms from traditional database and distributed systems. In this work we focus on caching policies, which raise important new challenges for big data. Not only must they respond to new variants of the trade-off between hit rate, response time, and the space consumed by the cache, but they must do so at possibly higher volume and velocity than web and database workloads. Previous caching policies have not been tested experimentally with big data workloads. We address these challenges in this work. We propose the Read Density family of policies, a principled approach that quantifies the utility of cached objects through a family of utility functions depending on the frequency of reads of an object. We further design the Approximate Histogram, a policy-based technique built on an array of counters. It promises runtime- and space-efficient computation of the metric required by the cache policy. Through trace-based simulation, we evaluate the caching policies from the Read Density family and compare them with over ten state-of-the-art alternatives. We use two workload traces representative of big data processing, collected from commercial Spark and MapReduce deployments. While we achieve performance comparable to the state of the art with fewer parameters, meaningful performance improvements for big data workloads remain elusive.
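As a rough illustration of the read-density idea (not the paper's implementation), the sketch below scores cached objects by reads per byte, one plausible member of a read-density family, and approximates per-object read counts with a fixed array of counters indexed by hash, in the spirit of an approximate histogram. The class and method names, the counter-array size, and the eviction loop are all assumptions.

```python
class ReadDensityCache:
    """Toy cache that keeps the objects with the highest read density,
    here read_count / size. Read counts are approximated by hashing each
    key into a fixed array of counters, so colliding keys share a counter."""

    NUM_COUNTERS = 1024  # assumed size of the counter array

    def __init__(self, capacity):
        self.capacity = capacity   # total bytes the cache may hold
        self.used = 0
        self.objects = {}          # key -> object size in bytes
        self.counters = [0] * self.NUM_COUNTERS

    def _reads(self, key):
        return self.counters[hash(key) % self.NUM_COUNTERS]

    def _utility(self, key):
        return self._reads(key) / self.objects[key]

    def get(self, key):
        # Count the read even on a miss: density reflects demand by name.
        self.counters[hash(key) % self.NUM_COUNTERS] += 1
        return key in self.objects

    def put(self, key, size):
        # Evict the lowest-density objects until the new one fits.
        while self.objects and self.used + size > self.capacity:
            victim = min(self.objects, key=self._utility)
            self.used -= self.objects.pop(victim)
        if size <= self.capacity:
            self.objects[key] = size
            self.used += size


cache = ReadDensityCache(capacity=100)
cache.put("hot", 40)
for _ in range(10):
    cache.get("hot")          # "hot" accumulates reads
cache.put("cold", 40)         # never read again
cache.put("new", 40)          # forces eviction: "cold" has the lowest density
print(sorted(cache.objects))  # -> ['hot', 'new']
```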