Biblio

Filters: Keyword is Web Caching
2019-01-16
Aloui, M., Elbiaze, H., Glitho, R., Yangui, S..  2018.  Analytics as a service architecture for cloud-based CDN: Case of video popularity prediction. 2018 15th IEEE Annual Consumer Communications Networking Conference (CCNC). :1–4.
User Generated Videos (UGV) are the dominating content stored in scattered caches to meet end-user Content Delivery Network (CDN) requests with quality of service. End-user behaviour leads to highly variable UGV popularity. This aspect can be exploited to efficiently utilize the limited storage of the caches and improve the hit ratio of UGVs. In this paper, we propose a new architecture for data analytics in cloud-based CDNs to derive UGV popularity online. This architecture uses RESTful web services to gather CDN logs, store them through generic collections in a NoSQL database, and compute the related popular UGVs in real time. It uses dynamic model training and prediction services to provide each CDN with the related popular videos to be cached, based on the latest trained model. The proposed architecture is implemented with a k-means clustering prediction model and the obtained results are 99.8% accurate.
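As a rough illustration of the clustering step, here is a minimal 1-D k-means sketch in pure Python over synthetic view counts. The abstract does not give the paper's features, model details, or service APIs, so every name and number below is invented:

```python
# Minimal 1-D k-means sketch: cluster videos by recent view counts and
# flag the high-popularity cluster as worth caching. Data is synthetic;
# the paper trains its model on real CDN logs stored in NoSQL collections.

def kmeans_1d(values, k=2, iters=20):
    lo, hi = min(values), max(values)
    # Spread the initial centroids evenly over the value range.
    centroids = [lo + i * (hi - lo) / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Keep the old centroid if a cluster goes empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

views = {"vid_a": 12, "vid_b": 9800, "vid_c": 25, "vid_d": 10400, "vid_e": 7}
cents = kmeans_1d(list(views.values()))
hot, cold = max(cents), min(cents)
to_cache = [v for v, n in views.items() if abs(n - hot) < abs(n - cold)]
print(sorted(to_cache))   # videos assigned to the high-popularity cluster
```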
Abdelwahed, N., Letaifa, A. Ben, Asmi, S. El.  2018.  Content Based Algorithm Aiming to Improve the WEB_QoE Over SDN Networks. 2018 32nd International Conference on Advanced Information Networking and Applications Workshops (WAINA). :153–158.
Since the 1990s, the concept of QoE has been increasingly present, and many scientists take it into account in different fields of application. Taking video streaming as an example, QoE has been well studied in that setting, while for the web the study of QoE has been relatively neglected. Quality of Experience (QoE) is the set of objective and subjective characteristics that satisfy, retain, or give confidence to a user through the life cycle of a service. Some studies take the different measurement metrics of QoE as their subject; others pursue new ways to improve QoE in order to satisfy the customer and gain his loyalty. In this paper, we focus on web QoE, which has been neglected by research despite its great importance given the complexity of new web pages and their increasingly critical utility. The richness of new web pages in images, videos, audio, etc. and their growing significance prompt us to write this paper, in which we discuss a new method that aims to improve web QoE in a software-defined network (SDN). Our proposed method consists of automating and making more flexible the management of QoE improvement for web pages, by writing an algorithm that, depending on the case, chooses the necessary treatment to improve the web QoE of the page concerned, and by using both web prefetching and caching to accelerate data transfer when the user asks for it. The first part of the paper discusses the advantages and disadvantages of existing works. In the second part, we propose an automatic algorithm that treats each case with the appropriate solution guaranteeing its best performance. The last part is devoted to performance evaluation.
Uddin, M. Y. S., Venkatasubramanian, N..  2018.  Edge Caching for Enriched Notifications Delivery in Big Active Data. 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS). :696–705.
In this paper, we propose a set of caching strategies for big active data (BAD) systems. BAD is a data management paradigm that allows ingestion of massive amounts of data from heterogeneous sources, such as sensor data, social networks, web and crowdsourced data, into a large data cluster consisting of many computing and storage nodes, and enables a very large number of end users to subscribe to those data items through declarative subscriptions. A set of distributed broker nodes connects these end users to the backend data cluster, manages their subscriptions, and delivers the subscription results to the end users. Unlike most traditional publish-subscribe systems, which match subscriptions against a single stream of publications to generate notifications, BAD can match subscriptions across multiple publications (by leveraging storage in the backend) and can thus enrich notifications with a rich set of diverse contents. As the matched results are delivered to the end users through the brokers, the broker node caches the results for a while so that subscribers can retrieve them with reduced latency. Interesting research questions arise in this context: which result objects should be cached or dropped when the cache becomes full (eviction-based caching), and how should objects be admitted with an explicit expiration time indicating how long they should reside in the cache (TTL-based caching)? To this end, we propose a set of caching strategies for the brokers and show that the schemes achieve varying degrees of efficiency in terms of notification delivery in the BAD system. We evaluate our schemes via a prototype implementation and through detailed simulation studies.
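The two broker caching modes the abstract contrasts, eviction-based and TTL-based, can be sketched in a few lines. This toy broker cache (class and field names are ours, not the paper's) combines an LRU eviction path with per-item expiration:

```python
import time
from collections import OrderedDict

# Toy broker cache: evicts the least recently used entry when full
# (eviction-based caching) and honors optional per-item expirations
# (TTL-based caching).

class BrokerCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # key -> (value, expires_at or None)

    def put(self, key, value, ttl=None):
        expires = time.monotonic() + ttl if ttl is not None else None
        self.store[key] = (value, expires)
        self.store.move_to_end(key)
        while len(self.store) > self.capacity:   # eviction-based path
            self.store.popitem(last=False)       # drop least recently used

    def get(self, key):
        item = self.store.get(key)
        if item is None:
            return None
        value, expires = item
        if expires is not None and time.monotonic() >= expires:  # TTL path
            del self.store[key]
            return None
        self.store.move_to_end(key)
        return value

cache = BrokerCache(capacity=2)
cache.put("n1", "result-1")
cache.put("n2", "result-2", ttl=60)
cache.put("n3", "result-3")              # capacity 2: evicts n1 (LRU)
print(cache.get("n1"), cache.get("n2"))  # None result-2
```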
Hasslinger, G., Ntougias, K., Hasslinger, F., Hohlfeld, O..  2018.  Comparing Web Cache Implementations for Fast O(1) Updates Based on LRU, LFU and Score Gated Strategies. 2018 IEEE 23rd International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD). :1–7.
To be applicable to high user request workloads, web caching strategies benefit from low implementation and update effort. In this regard, the Least Recently Used (LRU) replacement principle is a simple and widely-used method. Despite its popularity, LRU has deficits in the achieved hit rate performance and cannot consider transport and network optimization criteria for selecting content to be cached. As a result, many alternatives have been proposed in the literature, which improve the cache performance at the cost of higher complexity. In this work, we evaluate the implementation complexity and runtime performance of LRU, Least Frequently Used (LFU), and score based strategies in the class of fast O(1) updates with constant effort per request. We implement Window LFU (W-LFU) within this class and show that O(1) update effort can be achieved. We further compare fast update schemes of Score Gated LRU and new Score Gated Polling (SGP). SGP is simpler than LRU and provides full flexibility for arbitrary score assessment per data object as information basis for performance optimization regarding network cost and quality measures.
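A score-gated variant of LRU in the paper's class of fast O(1)-update schemes might look like the following sketch. An OrderedDict gives constant-time recency moves, and a score gate decides admission; the request-count score is our simplification standing in for the arbitrary per-object scores the authors allow:

```python
from collections import OrderedDict

# Score-gated LRU sketch: hits update recency in O(1); on a miss with a
# full cache, the new object is admitted only if its score is at least
# that of the current LRU eviction candidate.

class ScoreGatedLRU:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # key -> value, LRU order (MRU last)
        self.score = {}              # key -> request count (the "score")

    def request(self, key, value):
        self.score[key] = self.score.get(key, 0) + 1
        if key in self.cache:              # hit: O(1) recency update
            self.cache.move_to_end(key)
            return True
        if len(self.cache) < self.capacity:
            self.cache[key] = value
            return False
        lru_key = next(iter(self.cache))
        if self.score[key] >= self.score[lru_key]:   # the score gate
            self.cache.popitem(last=False)
            self.cache[key] = value
        return False

c = ScoreGatedLRU(capacity=2)
for k in ["a", "a", "a", "b", "c", "b"]:
    c.request(k, k.upper())
print(list(c.cache))   # one-time "c" cannot outscore thrice-requested "a"
```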
Akhtar, U., Lee, S..  2018.  Adaptive Cache Replacement in Efficiently Querying Semantic Big Data. 2018 IEEE International Conference on Web Services (ICWS). :367–370.
This paper addresses the problem of querying knowledge bases (KBs) that store semantic big data. For efficient querying, the most important factor is the cache replacement policy, which determines the overall query response time. As the cache is limited in size, less frequently accessed data should be removed to provide more space for hot (frequently accessed) triples. Moreover, the performance bottleneck of triplestores makes real-world applications difficult. To achieve performance closer to that of an RDBMS, we propose an Adaptive Cache Replacement (ACR) policy that predicts hot triples from the query log. Our proposed algorithm effectively replaces cache entries with high accuracy. To implement the cache replacement policy, we apply exponential smoothing, a forecasting method, to identify the most frequently accessed triples. The evaluation results show that the proposed scheme outperforms existing cache replacement policies, such as LRU (least recently used) and LFU (least frequently used), in terms of higher hit rates and lower time overhead.
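The exponential-smoothing idea can be sketched as follows: per time window, a triple's score is updated as alpha * recent_accesses + (1 - alpha) * previous_score, and the highest-scoring triples are treated as hot. The window contents and the alpha value below are invented for illustration:

```python
# Exponential smoothing over a query log, in the spirit of ACR: recent
# access counts dominate, but history is not forgotten, so the ranking
# forecasts which triples will stay hot.

def smooth_scores(windows, alpha=0.5):
    scores = {}
    for window in windows:              # each window: {triple: access count}
        for t in set(scores) | set(window):
            scores[t] = (alpha * window.get(t, 0)
                         + (1 - alpha) * scores.get(t, 0.0))
    return scores

log = [
    {"t1": 10, "t2": 1},
    {"t1": 8, "t3": 6},
    {"t1": 9, "t3": 7, "t2": 0},
]
scores = smooth_scores(log)
hot = sorted(scores, key=scores.get, reverse=True)
print(hot)   # triples ranked by forecast access frequency
```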
Aktaş, Mehmet F., Wang, Chen, Youssef, Alaa, Steinder, Malgorzata Gosia.  2018.  Resource Profile Advisor for Containers in Cognitive Platform. Proceedings of the ACM Symposium on Cloud Computing. :506–506.
Containers have transformed cluster management into an application-oriented endeavor, and are thus widely used as the deployment units (i.e., micro-services) of large-scale cloud services. As opposed to VMs, containers allow for resource provisioning with fine granularity, and their resource usage directly reflects the micro-service behaviors. Container management systems like Kubernetes and Mesos provision resources to containers according to the capacity requested by the developers. Resource usages estimated by developers are grossly inaccurate. They tend to be risk-averse and over-provision resources, as under-provisioning would cause poor runtime performance or failures. Without actually running the workloads, resource provisioning is challenging. However, benchmarking production workloads at scale requires huge manual effort. In this work, we leverage the IBM Monitoring service to profile the resource usage of production IBM Watson services in rolling windows, focusing on both evaluating how developers request resources and characterizing the actual resource usage. Our resource profiling study reveals two important characteristics of the cognitive workloads. 1. Stationarity. According to the Augmented Dickey-Fuller test with 95% confidence, more than 95% of the container instances have stationary CPU usage while more than 85% have stationary memory usage, indicating that resource usage statistics do not change over time. We find for the majority of containers that the stationarity can be detected at the early stage of container execution and holds throughout their lifespans. In addition, containers with non-stationary CPU or memory usage are also observed to exhibit predictable usage trends and patterns (e.g., trend stationarity or seasonality). 2. Predictability by container image. By clustering the containers based on their images, container resource usages within the same cluster are observed to exhibit strong statistical similarity. 
This suggests that the history of resource usage for one instance can be used to predict usage for future instances that run the same container image. Based on profiling results of running containers in rolling windows, we propose a resource usage advisory system to refine the requested resource values of the running and arriving containers, as illustrated in Fig. 1. Our system continuously retrieves the resource usage metrics of running containers from the IBM monitoring service and predicts the resource usage profiles in a container resource usage prediction agent. Upon the arrival of a new pod, the resource profile advisor, proposed as a module in the web-hooked admission controller in Kubernetes, checks whether the resource profile of each container in the pod has been predicted with confidence. If a container's profile has been predicted and cached in the container resource profile database, the default requested values of containers are refined by the predicted ones; otherwise, containers are forwarded to the scheduler without any change. Similarly, a resource profile auto-scaler is proposed to update the requested resource values of containers for running pods as soon as the database is updated. Our study shows that developers request at least 1 core-per-second (cps) CPU and 1 GB memory for ≥ 70% of the containers, while ≥ 80% of the containers actually use less than 1 cps and 1 GB. Additionally, ~20% of the containers are significantly under-provisioned. We use resource usage data from one day to generate container resource profiles and evaluate our approach based on the actual usage on the following day. Without our system, average CPU (memory) usage for >90% of containers lies outside of 50%-100% (70%-100%) of the requested values. 
Our evaluation shows that our system can advise request values appropriately, so that average and 95th-percentile CPU (memory) usage for >90% of the containers are within 50%-100% (70%-100%) of the requested values. Furthermore, average CPU (memory) utilization across all pods is raised from 10% (26%) to 54% (88%).
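A toy version of the refinement step, assuming (as the evaluation numbers suggest) that a high percentile of observed usage plus headroom replaces an over-sized request. The 95th-percentile rule, target utilization, and sample data are our assumptions, not the paper's exact algorithm:

```python
# Refine a developer's resource request from observed usage: take a high
# percentile of the samples, add headroom, and only ever shrink requests
# that exceed it.

def percentile(samples, p):
    s = sorted(samples)
    idx = min(len(s) - 1, int(round(p / 100 * (len(s) - 1))))
    return s[idx]

def refine_request(usage_samples, requested, target_utilization=0.7):
    peak = percentile(usage_samples, 95)
    refined = peak / target_utilization   # leave headroom above the peak
    return min(requested, refined)        # only shrink over-requests

cpu_usage = [0.12, 0.15, 0.11, 0.18, 0.14, 0.16, 0.13, 0.17]  # cores used
print(refine_request(cpu_usage, requested=1.0))  # far below the 1-core ask
```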
Choi, Jongsok, Lian, Ruolong, Li, Zhi, Canis, Andrew, Anderson, Jason.  2018.  Accelerating Memcached on AWS Cloud FPGAs. Proceedings of the 9th International Symposium on Highly-Efficient Accelerators and Reconfigurable Technologies. :2:1–2:8.
In recent years, FPGAs have been deployed in the data centres of major cloud service providers, such as Microsoft [1], Amazon [2], Alibaba [3], Tencent [4], Huawei [5], and Nimbix [6]. This marks the beginning of bringing FPGA computing to the masses, as being in the cloud, one can access an FPGA from anywhere. A wide range of applications run in the cloud, including web servers and databases among many others. Memcached is a high-performance in-memory object caching system, which acts as a caching layer between web servers and databases. It is used by many companies, including Flickr, Wikipedia, Wordpress, and Facebook [7, 8]. In this paper, we present a Memcached accelerator implemented on the AWS FPGA cloud (F1 instance). Compared to AWS ElastiCache, an AWS-managed CPU Memcached service, our Memcached accelerator provides up to 9x better throughput and latency. A live demo of the Memcached accelerator running on F1 can be accessed on our website [9].
Sudar, Samuel, Welsh, Matt, Anderson, Richard.  2018.  Siskin: Leveraging the Browser to Share Web Content in Disconnected Environments. Proceedings of the 1st ACM SIGCAS Conference on Computing and Sustainable Societies. :18:1–18:7.
Schools in the developing world frequently do not have high bandwidth or reliable connections, limiting their access to web content. As a result, schools are increasingly turning to Offline Educational Resources (OERs), employing purpose-built local hardware to serve content. These approaches can be expensive and difficult to maintain in resource-constrained settings. We present Siskin, an alternative approach that leverages the ubiquity of web browsers to provide a distributed content access cache between user devices on the local network. We demonstrate that this system allows access to web pages offline by identifying the browser as a ubiquitous platform. We build and evaluate a prototype, showing that existing web protocols and infrastructure can be leveraged to create a powerful content cache over a local network.
Pan, Cheng, Hu, Xiameng, Zhou, Lan, Luo, Yingwei, Wang, Xiaolin, Wang, Zhenlin.  2018.  PACE: Penalty Aware Cache Modeling with Enhanced AET. Proceedings of the 9th Asia-Pacific Workshop on Systems. :19:1–19:8.
Past cache modeling techniques are typically limited to a cache system with a fixed cache line/block size. This limitation is not a problem for a hardware cache where the cache line size is uniform. However, modern in-memory software caches, such as Memcached and Redis, are able to cache varied-size data objects. A software cache supports update and delete operations in addition to only reads and writes for a hardware cache. Moreover, existing cache models often assume that the penalty for each cache miss is identical, which is not true especially for software cache targeting web services, and past cache management policies that aim to improve cache hit rate are no longer sufficient. We propose a more general cache model that can handle varied cache block sizes, nonuniform miss penalties, and diverse cache operations. In this paper, we first extend a state-of-the-art cache model to accurately predict cache miss ratios for variable cache sizes when object size, updates and deletions are considered. We then apply this model to drive cache management when miss penalty is brought into consideration. Our approach delivers better results than a recent penalty-aware cache management scheme, Hyperbolic Caching, especially when cache budget is tight. Another advantage of our approach is that it provides predictable and controllable cache management on cache space allocation, especially when multiple applications share the cache space.
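To make the penalty-aware idea concrete, one simple eviction rule weighs each object's hit frequency and miss penalty against its size and evicts the lowest-value object per byte. This is a generic illustration of handling nonuniform penalties and varied object sizes, not the paper's AET-based model:

```python
# Penalty-aware eviction sketch for a software cache with varied object
# sizes: value density = hits * miss_penalty / size; evict the lowest.

def eviction_victim(objects):
    # objects: {key: (hits, size_bytes, miss_penalty_ms)}
    def value_density(item):
        hits, size, penalty = item[1]
        return hits * penalty / size
    return min(objects.items(), key=value_density)[0]

cache = {
    "thumb.jpg": (50, 20_000, 5),    # cheap to refetch: low penalty
    "page.html": (40, 30_000, 80),   # expensive backend render
    "user.json": (200, 2_000, 10),   # small and very hot
}
print(eviction_victim(cache))   # the cheap-to-refetch thumbnail goes first
```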
Chen, Muhao, Zhao, Qi, Du, Pengyuan, Zaniolo, Carlo, Gerla, Mario.  2018.  Demand-driven Cache Allocation Based on Context-aware Collaborative Filtering. Proceedings of the Eighteenth ACM International Symposium on Mobile Ad Hoc Networking and Computing. :302–303.
Many recent advances of network caching focus on i) more effectively modeling the preferences of a regional user group to different web contents, and ii) reducing the cost of content delivery by storing the most popular contents in regional caches. However, the context under which the users interact with the network system usually causes tremendous variations in a user group's preferences on the contents. To effectively leverage such contextual information for more efficient network caching, we propose a novel mechanism to incorporate context-aware collaborative filtering into demand-driven caching. By differentiating the characterization of user interests based on a priori contexts, our approach seeks to enhance the cache performance with a more dynamic and fine-grained cache allocation process. In particular, our approach is general and adapts to various types of context information. Our evaluation shows that this new approach significantly outperforms previous non-demand-driven caching strategies by offering much higher cached content rate, especially when utilizing the contextual information.
Nguyen, Hoai Viet, Lo Iacono, Luigi, Federrath, Hannes.  2018.  Systematic Analysis of Web Browser Caches. Proceedings of the 2nd International Conference on Web Studies. :64–71.
The caching of frequently requested web resources has been an integral part of the web since its beginnings. Cacheability is the main pillar of the web's scalability and an important mechanism for optimizing resource consumption and performance. Caches exist in many variations and locations on the path between web client and server, with the browser cache being ubiquitous to date. Web developers need to have a profound understanding of the concepts and policies of web caching even when exploiting these advantages is not relevant. Neglecting web caching may otherwise result in more severe consequences than the simple loss of scalability and efficiency. Recent misuse of web caching systems has been shown to affect an application's behavior as well as privacy and security. In this paper we introduce a tool-based approach to disburden web developers while keeping them informed about caching influences. Our first contribution is a structured test suite containing 397 web caching test cases. In order to make this collection easily adoptable, we introduce an automated testing tool for executing the test cases against web browsers. Based on the developed testing tool, we conduct a systematic analysis of the behavior of web browser caches and their compliance with relevant caching standards. Our findings on desktop and mobile versions of Chrome, Firefox, Safari and Edge show many diversities as well as discrepancies. Appropriate tooling supports web developers in uncovering such adversities. As our baseline of test cases is specified using a specification language that enables extensibility, developers as well as administrators and researchers can systematically add and empirically explore caching properties of interest, even in non-browser scenarios.
2018-03-26
Ma, H., Tao, O., Zhao, C., Li, P., Wang, L..  2017.  Impact of Replacement Policies on Static-Dynamic Query Results Cache in Web Search Engines. 2017 IEEE International Conference on Intelligence and Security Informatics (ISI). :137–139.

Caching query results is an efficient technique for Web search engines. A state-of-the-art approach named Static-Dynamic Cache (SDC) is widely used in practice. The replacement policy is the key factor in the performance of a cache system, and policies such as LIRS, ARC, CLOCK, SKLRU and RANDOM have been widely studied in different research areas. In this paper, we discuss replacement policies for the static-dynamic cache and conduct experiments on real large-scale query logs from two famous commercial Web search engine companies. The experimental results show that the ARC replacement policy works well with the static-dynamic cache, especially for large-scale query results caches.
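The SDC structure itself is simple to sketch: a frozen static part holds historically frequent queries, and a small dynamic part absorbs the rest under a replacement policy (plain LRU below for brevity, where the paper's results favor ARC). All names are illustrative:

```python
from collections import OrderedDict

# Static-dynamic cache sketch: lookups try the read-only static part
# first, then the dynamic part; misses go to the backend and are cached
# in the dynamic part under LRU replacement.

class StaticDynamicCache:
    def __init__(self, static_entries, dynamic_capacity):
        self.static = dict(static_entries)   # frozen at deployment time
        self.dynamic = OrderedDict()
        self.capacity = dynamic_capacity

    def lookup(self, query, fetch):
        if query in self.static:
            return self.static[query], "static-hit"
        if query in self.dynamic:
            self.dynamic.move_to_end(query)
            return self.dynamic[query], "dynamic-hit"
        result = fetch(query)                # go to the backend
        self.dynamic[query] = result
        if len(self.dynamic) > self.capacity:
            self.dynamic.popitem(last=False) # LRU eviction
        return result, "miss"

sdc = StaticDynamicCache({"weather": "res-w"}, dynamic_capacity=1)
backend = lambda q: f"res-{q}"
print(sdc.lookup("weather", backend)[1])   # static-hit
print(sdc.lookup("news", backend)[1])      # miss
print(sdc.lookup("news", backend)[1])      # dynamic-hit
```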

Hasslinger, G., Kunbaz, M., Hasslinger, F., Bauschert, T..  2017.  Web Caching Evaluation from Wikipedia Request Statistics. 2017 15th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt). :1–6.

Wikipedia is one of the most popular information platforms on the Internet. The user access pattern to Wikipedia pages depends on their relevance in the current worldwide social discourse. We use publicly available statistics about the top-1000 most popular pages on each day to estimate the efficiency of caches in support of the platform. While the data volumes are moderate, the main goal of Wikipedia caches is to reduce access times for page views and edits. We study the impact of the most popular pages on the achievable cache hit rate in comparison to Zipf request distributions, and we include daily dynamics in popularity.
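For intuition on the comparison to Zipf request distributions: if requests follow a Zipf law and the cache pins the top-C pages, the ideal hit rate is just the summed probability mass of those pages. The exponent and catalogue size below are illustrative, not fitted to Wikipedia's data:

```python
# Ideal hit rate of a cache holding the top cache_size items under a
# Zipf(s) popularity law: the probability mass of the cached ranks.

def zipf_hit_rate(num_items, cache_size, s=0.8):
    weights = [1 / (r ** s) for r in range(1, num_items + 1)]
    return sum(weights[:cache_size]) / sum(weights)

print(f"{zipf_hit_rate(100_000, 1000):.3f}")  # top-1000 of a 100k catalogue
```

Doubling the cache size improves the hit rate only sublinearly, which is the heavy-tail effect such studies quantify.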

Mihindukulasooriya, Nandana, Rico, Mariano, Santana-Pérez, Idafen, García-Castro, Raúl, Gómez-Pérez, Asunción.  2017.  Repairing Hidden Links in Linked Data: Enhancing the Quality of RDF Knowledge Graphs. Proceedings of the Knowledge Capture Conference. :6:1–6:8.

Knowledge Graphs (KG) are becoming core components of most artificial intelligence applications. Linked Data, as a method of publishing KGs, allows applications to traverse within, and even out of, the graph thanks to global dereferenceable identifiers denoting entities, in the form of IRIs. However, as we show in this work, after analyzing several popular datasets (namely DBpedia, LOD Cache, and Web Data Commons JSON-LD data) many entities are being represented using literal strings where IRIs should be used, diminishing the advantages of using Linked Data. To remedy this, we propose an approach for identifying such strings and replacing them with their corresponding entity IRIs. The proposed approach is based on identifying relations between entities based on both ontological axioms as well as data profiling information and converting strings to entity IRIs based on the types of entities linked by each relation. Our approach showed 98% recall and 76% precision in identifying such strings and 97% precision in converting them to their corresponding IRI in the considered KG. Further, we analyzed how the connectivity of the KG is increased when new relevant links are added to the entities as a result of our method. Our experiments on a subset of the Spanish DBpedia data show that it could add 25% more links to the KG and improve the overall connectivity by 17%.

Kane, Andrew, Tompa, Frank Wm..  2017.  Small-Term Distribution for Disk-Based Search. Proceedings of the 2017 ACM Symposium on Document Engineering. :49–58.

A disk-based search system distributes a large index across multiple disks on one or more machines, where documents are typically assigned to disks at random in order to achieve load balancing. However, random distribution degrades clustering, which is required for efficient index compression. Using the GOV2 dataset, we demonstrate the effect of various ordering techniques on index compression, and then quantify the effect of various document distribution approaches on compression and load balancing. We explore runtime performance by simulating a disk-based search system for a scaled-out 10xGOV2 index over ten disks using two standard approaches, document and term distribution, as well as a hybrid approach: small-term distribution. We find that small-term distribution has the best performance, especially in the presence of list caching, and argue that this rarely discussed distribution approach can improve disk-based search performance for many real-world installations.

Sundarrajan, Aditya, Feng, Mingdong, Kasbekar, Mangesh, Sitaraman, Ramesh K..  2017.  Footprint Descriptors: Theory and Practice of Cache Provisioning in a Global CDN. Proceedings of the 13th International Conference on Emerging Networking EXperiments and Technologies. :55–67.

Modern CDNs cache and deliver a highly-diverse set of traffic classes, including web pages, images, videos and software downloads. It is economically advantageous for a CDN to cache and deliver all traffic classes using a shared distributed cache server infrastructure. However, such sharing of cache resources across multiple traffic classes poses significant cache provisioning challenges that are the focus of this paper. Managing a vast shared caching infrastructure requires careful modeling of user request sequences for each traffic class. Using extensive traces from Akamai's CDN, we show how each traffic class has drastically different object access patterns, object size distributions, and cache resource requirements. We introduce the notion of a footprint descriptor that is a succinct representation of the cache requirements of a request sequence. Leveraging novel connections to Fourier analysis, we develop a footprint descriptor calculus that allows us to predict the cache requirements when different traffic classes are added, subtracted and scaled to within a prediction error of 2.5%. We integrated our footprint calculus in the cache provisioning operations of the production CDN and show how it is used to solve key challenges in cache sizing, traffic mixing, and cache partitioning.
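One classic building block behind such cache-requirement summaries is the LRU stack distance, from which a cache-size-to-hit-ratio profile of a request sequence can be computed. This toy quadratic-time version only hints at what a footprint descriptor condenses; the paper's Fourier-based calculus for composing traffic classes goes far beyond it:

```python
# Compute LRU stack distances for a request trace, then derive the hit
# ratio for a given cache size: a request hits iff fewer than cache_size
# distinct objects were touched since its previous access.

def stack_distances(trace):
    stack, dists = [], []          # LRU order, most recent last
    for obj in trace:
        if obj in stack:
            pos = stack.index(obj)
            dists.append(len(stack) - 1 - pos)  # distinct items since last use
            stack.pop(pos)
        else:
            dists.append(float("inf"))          # cold miss
        stack.append(obj)
    return dists

def hit_ratio(trace, cache_size):
    dists = stack_distances(trace)
    return sum(1 for d in dists if d < cache_size) / len(trace)

trace = ["a", "b", "a", "c", "b", "a", "d", "a"]
print(hit_ratio(trace, 2), hit_ratio(trace, 4))  # 0.25 0.5
```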

Jin, Boram, Kim, Daewoo, Yun, Se-Young, Shin, Jinwoo, Hong, Seongik, Lee, Byoung-Joon B.J., Yi, Yung.  2017.  On the Delay Scaling Laws of Cache Networks. Proceedings of the 12th International Conference on Future Internet Technologies. :3:1–3:6.

The Internet is becoming more and more content-oriented. CDNs (Content Distribution Networks) have been a popular architecture compatible with the current Internet, and revolutionary new paradigms such as ICN (Information Centric Networking) have been studied. One of the main components in both CDN and ICN is the use of in-network caches. Despite a surge of extensive use of caches in current and future Internet architectures, analysis of the performance of general cache networks is still quite limited due to complex inter-plays among various components and the resulting analytical intractability. For mathematical tractability, we consider 'static' cache policies and study the asymptotic delay performance of those policies in cache networks, focusing in particular on the impact of heterogeneous content popularities and nodes' geographical 'importances' in caching policies. Furthermore, our simulation results suggest that they perform quite similarly to popular 'dynamic' policies such as LFU (Least-Frequently-Used) and LRU (Least-Recently-Used). We believe that our theoretical findings provide useful engineering implications, such as when and how various factors have an impact on caching performance.

Nishioka, Chifumi, Scherp, Ansgar.  2017.  Keeping Linked Open Data Caches Up-to-Date by Predicting the Life-Time of RDF Triples. Proceedings of the International Conference on Web Intelligence. :73–80.

Many Linked Open Data applications require fresh copies of RDF data at their local repositories. Since RDF documents constantly change and those changes are not automatically propagated to the LOD applications, it is important to regularly visit the RDF documents to refresh the local copies and keep them up-to-date. For this purpose, crawling strategies determine which RDF documents should be preferentially fetched. Traditional crawling strategies rely only on how an RDF document has been modified in the past. In contrast, we predict on the triple level whether a change will occur in the future. We use the weekly snapshots of the DyLDO dataset as well as the monthly snapshots of the Wikidata dataset. First, we conduct an in-depth analysis of the life span of triples in RDF documents. Through the analysis, we identify which triples are stable and which are ephemeral. We introduce different features based on the triples and apply a simple but effective linear regression model. Second, we propose a novel crawling strategy based on the linear regression model. We conduct two experimental setups where we vary the amount of available bandwidth as well as iteratively observe the quality of the local copies over time. The results demonstrate that the novel crawling strategy outperforms the state of the art in both setups.
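A toy version of the triple-level prediction: fit a least-squares line from one invented feature (how often a triple changed in past snapshots) to whether it changed next, then rank items for recrawling. The feature, the data, and the single-feature model are our simplifications of the paper's approach:

```python
# Ordinary least squares on one feature, used to rank triples by their
# predicted probability of changing before the next crawl.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
    return slope, my - slope * mx

# (past_changes, changed_in_next_snapshot) pairs: synthetic training data
history = [(0, 0), (1, 0), (2, 0), (3, 1), (4, 1), (5, 1), (6, 1)]
slope, intercept = fit_line([h[0] for h in history],
                            [h[1] for h in history])

triples = {"t_stable": 0, "t_ephemeral": 5}   # feature value per triple
ranked = sorted(triples, key=lambda t: slope * triples[t] + intercept,
                reverse=True)
print(ranked)   # crawl the most change-prone triples first
```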

Kim, Taewoo, Thirumaraiselvan, Vidhyasagar, Jia, Jianfeng, Li, Chen.  2017.  Caching Geospatial Objects in Web Browsers. Proceedings of the 25th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. :92:1–92:4.

Map-based services are becoming increasingly important in many applications. These services often need to show geospatial objects (e.g., cities and parks) in Web browsers, and being able to retrieve such objects efficiently is critical to achieving a low response time for user queries. In this demonstration we present a browser-based caching technique to store and load geospatial objects on a map in a Web page. The technique employs a hierarchical structure to store and index polygons, and does intelligent prefetching and cache replacement by utilizing the information about the user's recent browser activities. We demonstrate the usage of the technique in an application called TwitterMap for visualizing more than 1 billion tweets in real time. We show its effectiveness by using different replacement policies. The technique is implemented as a general-purpose Javascript library, making it suitable for other applications as well.

Scully, Ziv, Chlipala, Adam.  2017.  A Program Optimization for Automatic Database Result Caching. Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages. :271–284.

Most popular Web applications rely on persistent databases based on languages like SQL for declarative specification of data models and the operations that read and modify them. As applications scale up in user base, they often face challenges responding quickly enough to the high volume of requests. A common aid is caching of database results in the application's memory space, taking advantage of program-specific knowledge of which caching schemes are sound and useful, embodied in handwritten modifications that make the program less maintainable. These modifications also require nontrivial reasoning about the read-write dependencies across operations. In this paper, we present a compiler optimization that automatically adds sound SQL caching to Web applications coded in the Ur/Web domain-specific functional language, with no modifications required to source code. We use a custom cache implementation that supports concurrent operations without compromising the transactional semantics of the database abstraction. Through experiments with microbenchmarks and production Ur/Web applications, we show that our optimization in many cases enables an easy doubling or more of an application's throughput, requiring nothing more than passing an extra command-line flag to the compiler.
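The core invariant, that cached query results must be invalidated by writes to the tables they read, can be sketched by hand. The paper's compiler derives these read/write dependencies automatically from Ur/Web source; here they are declared explicitly, and all names are hypothetical:

```python
# Miniature query-result cache with write invalidation: each cached read
# records the tables it depends on, and any write to a table flushes the
# results that read from it, so stale results are never served.

class QueryCache:
    def __init__(self):
        self.cache = {}     # (query_id, args) -> result
        self.readers = {}   # table -> set of cached keys reading it

    def read(self, query_id, args, tables, run):
        key = (query_id, args)
        if key not in self.cache:
            self.cache[key] = run(args)          # miss: run the query
            for t in tables:
                self.readers.setdefault(t, set()).add(key)
        return self.cache[key]

    def write(self, table):
        for key in self.readers.pop(table, set()):   # invalidate dependents
            self.cache.pop(key, None)

db = {"users": {1: "alice"}}
qc = QueryCache()
lookup = lambda args: db["users"][args]

print(qc.read("get_user", 1, ["users"], lookup))   # alice (miss, then cached)
db["users"][1] = "bob"
qc.write("users")        # every write to "users" invalidates its readers
print(qc.read("get_user", 1, ["users"], lookup))   # bob (re-fetched)
```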

Raza, Ali, Zaki, Yasir, Pötsch, Thomas, Chen, Jay, Subramanian, Lakshmi.  2017.  xCache: Rethinking Edge Caching for Developing Regions. Proceedings of the Ninth International Conference on Information and Communication Technologies and Development. :5:1–5:11.

End-users in emerging markets experience poor web performance due to a combination of three factors: high server response time, limited edge bandwidth and the complexity of web pages. The absence of cloud infrastructure in developing regions and the limited bandwidth experienced by edge nodes constrain the effectiveness of conventional caching solutions for these contexts. This paper describes the design, implementation and deployment of xCache, a cloud-managed Internet caching architecture that aims to proactively profile popular web pages and maintain the liveness of popular content at software defined edge caches to enhance the cache hit rate with minimal bandwidth overhead. xCache uses a Cloud Controller that continuously analyzes active cloud-managed web pages and derives an object-group representation of web pages based on the objects of a page. Using this object-group representation, xCache computes a bandwidth-aware utility measure to derive the most valuable configuration for each edge cache. Our preliminary real-world deployment across university campuses in three developing regions demonstrates its potential compared to conventional caching by improving cache hit rates by about 15%. Our evaluations of xCache have also shown that it can be applied in conjunction with other web optimizations solutions like Shandian, and can improve page load times by more than 50%.