Biblio

Filters: Keyword is LRU
2021-05-18
Hasslinger, Gerhard, Ntougias, Konstantinos, Hasslinger, Frank, Hohlfeld, Oliver.  2020.  General Knapsack Bounds of Web Caching Performance Regarding the Properties of each Cacheable Object. 2020 IFIP Networking Conference (Networking). :821–826.
Caching strategies have been evaluated and compared in many studies, most often via simulation, but also by analytic methods. Knapsack solutions provide a general analytical approach for upper bounds on web caching performance. They assume that objects with maximum value/size ratio are selected as cache content, with flexibility in defining the caching value. Therefore the popularity, cost, size, time-to-live restrictions etc. per object can be included in an overall caching goal, e.g., for reducing delay and/or transport path length in content delivery. The independent request model (IRM) leads to basic knapsack bounds for static optimum cache content. We show that a 2-dimensional (2D-)knapsack solution covers arbitrary request patterns, selecting dynamically changing content that yields maximum caching value for any predefined request sequence. Moreover, Belady's optimum strategy for clairvoyant caching is identified as a special case of our 2D-knapsack solution when all objects have uniform size. We also summarize a comprehensive picture of the demands and efficiency criteria for web caching, including updating speed and overheads. Our evaluations confirm significant performance gaps from LRU to advanced GreedyDual and score-based web caching methods, and from those to the knapsack bounds.
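As a concrete reading of the static knapsack bound under IRM, the following minimal Python sketch ranks objects by caching-value density (popularity × value / size) and fills the cache greedily; admitting a fraction of the last object makes the result an upper bound on any feasible static cache content. All object data here are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (our illustration, not the paper's code): a fractional
# knapsack gives an upper bound on the IRM value hit ratio of any static
# cache content of a given capacity.

def knapsack_bound(objects, capacity):
    """objects: list of (popularity, value, size); capacity: cache size.

    Rank objects by caching-value density popularity*value/size and fill
    the cache greedily; admitting a fraction of the last object makes the
    result an upper bound on every feasible integral cache content.
    """
    ranked = sorted(objects, key=lambda o: o[0] * o[1] / o[2], reverse=True)
    total = sum(p * v for p, v, _ in ranked)
    gained, used = 0.0, 0.0
    for p, v, s in ranked:
        if used + s <= capacity:
            gained, used = gained + p * v, used + s
        else:
            gained += p * v * (capacity - used) / s  # fractional remainder
            break
    return gained / total  # upper bound on the value hit ratio

# Illustrative objects (popularity, value, size); capacity of 3 size units.
print(knapsack_bound([(0.5, 1.0, 2.0), (0.3, 1.0, 1.0), (0.2, 1.0, 4.0)], 3.0))
```

Setting every value to 1 recovers a bound on the object hit ratio, and setting value proportional to size recovers a bound on the byte hit ratio.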
2020-02-18
Hasslinger, Gerhard, Ntougias, Konstantinos, Hasslinger, Frank, Hohlfeld, Oliver.  2019.  Fast and Efficient Web Caching Methods Regarding the Size and Performance Measures per Data Object. 2019 IEEE 24th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD). :1–7.

Caching methods have been developed over 50 years for paging in CPU and database systems, and over 25 years for web caching, as main application areas among others. Pages of uniform size are usual in CPU caches, whereas web caches store data chunks whose sizes vary over a wide range. We study the impact of different object sizes on the performance and the overhead of web caching. This entails different caching goals, ranging from the byte and object hit ratio to a generalized value hit ratio for optimizing the costs and benefits of caching with regard to traffic engineering (TE), reduced delays and other QoS measures. The selection of the cache content turns out to be crucial for web cache efficiency, with awareness of the size and other properties captured in a score for each object. We introduce a new class of rank exchange caching methods and show how their performance compares to other strategies, with extensions needed to include sizes and scores for QoS and TE caching goals. Finally, we derive bounds on the object, byte and value hit ratio for the independent request model (IRM), based on optimum knapsack solutions for the cache content.
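To illustrate size-aware, score-based replacement of the kind these papers compare against, here is a minimal sketch of the classic GreedyDual-Size policy (score = L + cost/size, with L inflated to the score of each victim); it is not the authors' rank exchange method, and the lazy-deletion heap is our implementation choice.

```python
import heapq

class GreedyDualSizeCache:
    """Sketch of GreedyDual-Size: score(p) = L + cost(p)/size(p).

    L is inflated to the score of each evicted victim, so long-resident
    objects age relative to fresh ones. The lazy-deletion heap is our
    implementation choice; the policy itself follows Cao and Irani.
    """

    def __init__(self, capacity):
        self.capacity, self.used, self.L = capacity, 0, 0.0
        self.score, self.size, self.heap = {}, {}, []

    def access(self, obj, size, cost=1.0):
        if obj in self.score:                       # hit: refresh the score
            self.score[obj] = self.L + cost / size
            heapq.heappush(self.heap, (self.score[obj], obj))
            return True
        if size > self.capacity:                    # too large to admit
            return False
        while self.used + size > self.capacity:     # miss: evict by min score
            s, victim = heapq.heappop(self.heap)
            if victim in self.score and self.score[victim] == s:
                self.L = s                          # inflate the aging value
                self.used -= self.size.pop(victim)
                del self.score[victim]
        self.score[obj] = self.L + cost / size
        self.size[obj] = size
        self.used += size
        heapq.heappush(self.heap, (self.score[obj], obj))
        return False

# Tiny demo: "a" is evicted before its second access, so both are misses.
cache = GreedyDualSizeCache(capacity=10)
for obj, size in [("a", 4), ("b", 4), ("c", 4), ("a", 4)]:
    print(obj, cache.access(obj, size))
```

With cost = 1 the score reduces to 1/size, so small objects are preferred, which tends to raise the object hit ratio at the expense of the byte hit ratio.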

2019-01-16
Akhtar, U., Lee, S..  2018.  Adaptive Cache Replacement in Efficiently Querying Semantic Big Data. 2018 IEEE International Conference on Web Services (ICWS). :367–370.
This paper addresses the problem of querying knowledge bases (KBs) that store semantic big data. The most important factor for efficient querying is the cache replacement policy, which determines the overall query response time. Since the cache is limited in size, less frequently accessed data should be removed to make room for hot (frequently accessed) triples. Moreover, the performance bottleneck of triplestores makes real-world applications difficult. To achieve performance closer to that of an RDBMS, we propose an Adaptive Cache Replacement (ACR) policy that predicts hot triples from the query log. Our proposed algorithm replaces cache content effectively and with high accuracy. To implement the replacement policy, we apply exponential smoothing, a forecasting method, to identify the most frequently accessed triples. The evaluation results show that the proposed scheme outperforms existing cache replacement policies, such as LRU (least recently used) and LFU (least frequently used), in terms of higher hit rates and lower time overhead.
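As a hedged sketch of the forecasting idea named in the abstract, the snippet below applies exponential smoothing, S_t = α·x_t + (1 − α)·S_{t−1}, to per-window access counts from a query log; the smoothing constant ALPHA, the windowing, and the triple names are our assumptions, not details from the ACR paper.

```python
ALPHA = 0.3  # smoothing constant: assumed value, not taken from the paper

def smooth_frequencies(windows, alpha=ALPHA):
    """windows: iterable of dicts mapping triple -> access count per window.

    Returns exponentially smoothed access frequencies,
        S_t = alpha * x_t + (1 - alpha) * S_{t-1},
    so triples with the highest smoothed score are forecast as hot and
    are the ones a replacement policy would keep cached.
    """
    smoothed = {}
    for window in windows:
        for triple in set(smoothed) | set(window):
            x = window.get(triple, 0)
            smoothed[triple] = alpha * x + (1 - alpha) * smoothed.get(triple, 0.0)
    return smoothed

# Hypothetical query-log windows: t1 cools down while t2 heats up.
log = [{"t1": 10, "t2": 1}, {"t1": 2, "t2": 8}]
print(sorted(smooth_frequencies(log).items(), key=lambda kv: -kv[1]))
```

Unlike plain LFU, the smoothed score discounts old accesses geometrically, so a triple that stops being queried loses its cache priority within a few windows.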