Biblio
Filters: Keyword is cache hit rate
Adaptive Caching for Beneficial Content Distribution in Information-Centric Networking. 2020 International Conference on Information Networking (ICOIN). :535–540.
2020. Currently, little attention has been paid to the feasibility of in-network caching in Information-Centric Networking (ICN) for the design and real-world deployment of future networks. Along this line, in this paper we propose a beneficial caching scheme for ICN that stores no more than a specified number of replicas of each content item. In particular, to realize an optimal content distribution when deploying caches in ICN, a content item can be cached either partially or as a full object, depending on its request arrival rate and data traffic. We also employ a utility-based replacement policy at each content node to keep the most recent and popular content items in the ICN interconnections. The evaluation results show that the proposal considerably improves the cache hit rate and cache diversity, and acts as a beneficial caching approach for network and service providers in ICN. Specifically, the proposed caching mechanism is easy to deploy, robust, and attractive to content providers, enabling them to offer users high Quality of Service (QoS) while gaining benefits at the same time.
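The utility-based replacement described in this abstract can be sketched as follows. This is a minimal illustration, assuming a hypothetical utility function that weights request popularity by recency decay; the paper's actual utility definition, the `decay` parameter, and the `UtilityCache` name are assumptions, not the authors' implementation.

```python
import time

class UtilityCache:
    """Sketch of utility-based cache replacement: each cached item keeps
    a hit count (popularity) and a last-access timestamp (recency); when
    the cache is full, the item with the lowest utility is evicted.
    The utility function below is a hypothetical choice for illustration."""

    def __init__(self, capacity, decay=0.01):
        self.capacity = capacity
        self.decay = decay
        self.store = {}  # name -> (data, hits, last_access)

    def _utility(self, hits, last_access, now):
        # Popular items score higher; stale items decay toward zero.
        return hits / (1.0 + self.decay * (now - last_access))

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(name)
        if entry is None:
            return None  # cache miss
        data, hits, _ = entry
        self.store[name] = (data, hits + 1, now)
        return data

    def put(self, name, data, now=None):
        now = time.time() if now is None else now
        if name not in self.store and len(self.store) >= self.capacity:
            # Evict the item with the lowest utility at this moment.
            victim = min(self.store,
                         key=lambda n: self._utility(self.store[n][1],
                                                     self.store[n][2], now))
            del self.store[victim]
        self.store[name] = (data, 1, now)
```

With `capacity=2`, repeatedly requesting one item and then inserting a third evicts the item that is both less popular and less recent, keeping "the most recent and popular content items" as the abstract describes.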
liteNDN: QoS-Aware Packet Forwarding and Caching for Named Data Networks. 2020 IEEE 17th Annual Consumer Communications Networking Conference (CCNC). :1–9.
2020. Recently, Named Data Networking (NDN) has been introduced to connect the world of computing devices by naming data instead of its containers. Through this strategic change, NDN brings several new features to network communication, including in-network caching, multipath forwarding, built-in multicast, and data security. Despite these unique features, there remain plenty of opportunities for further development, especially in packet forwarding and caching. In this context, we introduce liteNDN, a novel forwarding and caching strategy for NDN networks. liteNDN comprises a cooperative forwarding strategy through which NDN routers share their knowledge, i.e., data names and interfaces, to optimize their packet-forwarding decisions. liteNDN then leverages that knowledge to estimate the probability that each downstream path will swiftly retrieve the requested data. Additionally, liteNDN exploits heuristics, such as routing costs and data significance, to make proper decisions about caching normal as well as segmented packets. The proposed approach has been extensively evaluated in terms of data retrieval latency, network utilization, and cache hit rate. The results show that liteNDN, compared to conventional NDN forwarding and caching strategies, achieves much lower latency while reducing unnecessary traffic and caching activity.
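The probability-based path selection this abstract mentions can be sketched as below. This is an illustrative sketch only: the per-prefix success/attempt bookkeeping, the Laplace-smoothed estimator, and the `ProbabilisticForwarder` name are assumptions for exposition, not liteNDN's actual estimator.

```python
class ProbabilisticForwarder:
    """Sketch of probability-based Interest forwarding: for each name
    prefix, track per-face success and attempt counts, then forward to
    the face with the highest estimated probability of retrieving the
    requested data. Smoothing and tie-breaking are illustrative choices."""

    def __init__(self, faces):
        self.faces = list(faces)
        self.stats = {}  # prefix -> {face: [successes, attempts]}

    def _prob(self, prefix, face):
        succ, att = self.stats.get(prefix, {}).get(face, [0, 0])
        # Laplace-smoothed estimate so untried faces still get explored.
        return (succ + 1) / (att + 2)

    def choose_face(self, prefix):
        # Forward on the face with the highest estimated success probability.
        return max(self.faces, key=lambda f: self._prob(prefix, f))

    def record(self, prefix, face, success):
        # Update the face's statistics after a Data packet (or timeout).
        entry = self.stats.setdefault(prefix, {}).setdefault(face, [0, 0])
        if success:
            entry[0] += 1
        entry[1] += 1
```

A router using such an estimator gradually concentrates Interests on the downstream face that has historically returned Data fastest, which is the intuition behind the latency reduction the evaluation reports.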