Biblio

Filters: Keyword is customer experience

Xue, Zijun, Ko, Ting-Yu, Yuchen, Neo, Wu, Ming-Kuang Daniel, Hsieh, Chu-Cheng.  2018.  Isa: Intuit Smart Agent, A Neural-Based Agent-Assist Chatbot. 2018 IEEE International Conference on Data Mining Workshops (ICDMW). :1423–1428.
Hiring seasonal workers in call centers to provide customer service is a common practice in B2C companies. The quality of service delivered by both contract and employee customer service agents depends heavily on the domain knowledge available to them. When observing the internal group messaging channels used by agents, we found that similar questions are often asked repeatedly by different agents, especially by less experienced ones. The goal of our work is to leverage the promising advances in conversational AI to provide a chatbot-like mechanism that assists agents in promptly resolving a customer's issue. In this paper, we develop a neural conversational solution that employs a BiLSTM with an attention mechanism, and we demonstrate how our system boosts the effectiveness of customer support agents. In addition, we discuss the design principles and the necessary considerations for our system. We then demonstrate how our system, named "Isa" (Intuit Smart Agent), can help customer service agents deliver a high-quality customer experience by reducing customer wait time and by applying the knowledge accumulated from past customer interactions to future ones.
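Below is a minimal sketch, in PyTorch, of the kind of BiLSTM-with-attention question classifier the abstract alludes to. It is not the Isa implementation; the vocabulary size, dimensions, intent count, and names are illustrative assumptions.

```python
# Hypothetical sketch: a BiLSTM encoder with additive attention that maps an
# agent's question to one of a fixed set of answer intents. All sizes are made up.
import torch
import torch.nn as nn

class BiLSTMAttentionClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, num_intents=50):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)        # scores each time step
        self.classifier = nn.Linear(2 * hidden_dim, num_intents)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor of word indices
        embedded = self.embedding(token_ids)             # (batch, seq, embed_dim)
        outputs, _ = self.bilstm(embedded)               # (batch, seq, 2*hidden)
        scores = self.attn(outputs).squeeze(-1)          # (batch, seq)
        weights = torch.softmax(scores, dim=1)           # attention over time steps
        context = torch.bmm(weights.unsqueeze(1), outputs).squeeze(1)  # (batch, 2*hidden)
        return self.classifier(context)                  # (batch, num_intents)

model = BiLSTMAttentionClassifier()
logits = model(torch.randint(1, 10000, (4, 20)))         # 4 dummy questions, 20 tokens each
predicted_intent = logits.argmax(dim=1)
```

The attention weights let the model emphasize the informative tokens of a question before the intent (and thus the suggested answer) is predicted.
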
Versluis, L., Neacsu, M., Iosup, A.  2018.  A Trace-Based Performance Study of Autoscaling Workloads of Workflows in Datacenters. 2018 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). :223–232.

To improve customer experience, datacenter operators offer support for simplifying application and resource management. For example, running workloads of workflows on behalf of customers is desirable, but it requires increasingly sophisticated autoscaling policies, that is, policies that dynamically provision resources for the customer. Although selecting and tuning autoscaling policies is a challenging task for datacenter operators, so far relatively few studies have investigated the performance of autoscaling for workloads of workflows. Complementing previous knowledge, in this work we propose the first comprehensive performance study in the field. Using trace-based simulation, we compare state-of-the-art autoscaling policies across multiple application domains, workload arrival patterns (e.g., burstiness), and system utilization levels. We further investigate the interplay between autoscaling and regular allocation policies, and the complexity cost of autoscaling. Our quantitative study focuses not only on traditional performance metrics and on state-of-the-art elasticity metrics, but also on time- and memory-related autoscaling-complexity metrics. Our main results give strong, quantitative evidence of previously unreported operational behavior, for example, that autoscaling policies perform differently across application domains and that allocation and provisioning policies should be co-designed.
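As an illustration of trace-driven evaluation of autoscaling, here is a toy sketch, not the authors' simulator or policies: a simple threshold-based autoscaler replays a demand trace and accumulates under- and over-provisioning, two basic elasticity-style measures. The trace, thresholds, and scaling step are invented.

```python
# Toy trace-driven autoscaling experiment. Each tick, the policy compares
# demand to provisioned capacity and scales out or in across fixed thresholds,
# while the simulation records under- and over-provisioned resource-ticks.

def threshold_autoscaler(demand_trace, scale_out_at=0.8, scale_in_at=0.3, step=1, start=2):
    """Replay a demand trace; return (under-provisioned, over-provisioned) totals."""
    provisioned = start
    under, over = 0, 0
    for demand in demand_trace:
        utilization = demand / provisioned
        if utilization > scale_out_at:
            provisioned += step              # scale out
        elif utilization < scale_in_at and provisioned > 1:
            provisioned -= step              # scale in
        under += max(0, demand - provisioned)
        over += max(0, provisioned - demand)
    return under, over

# Bursty dummy workload: quiet periods punctuated by demand spikes.
trace = [1, 1, 2, 6, 7, 7, 3, 1, 1, 8, 9, 4, 2, 1]
print(threshold_autoscaler(trace))
```

Comparing different policies (or different threshold settings) on the same trace is the essence of the trace-based methodology the abstract describes, though the paper also accounts for allocation policies and the time and memory cost of running the autoscaler itself.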

Rotella, P., Chulani, S.  2017.  Predicting Release Reliability. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :39–46.

Customers need to know how reliable a new release is, and whether the new release has substantially different reliability, either better or worse, than the one currently in production. Customers are demanding quantitative evidence, based on pre-release metrics, to help them decide whether or not to upgrade (and thereby offer new features and capabilities to their own customers). Finding ways to estimate future reliability performance is not easy: we have evaluated many pre-release development and test metrics in search of reliability predictors that are sufficiently accurate and that also apply to a broad range of software products. This paper describes a successful model that has resulted from these efforts, and it also presents both a functional extension and a further conceptual simplification of the extended model that enables us to better communicate key release information to internal stakeholders and customers without sacrificing predictive accuracy or generalizability. Work remains to be done, but the results of the original model, the extended model, and the simplified version are encouraging and are currently being applied across a range of products and releases. To evaluate whether these early predictions are accurate, and also to compare releases that are available to customers, we use a field software reliability assessment mechanism that incorporates two types of customer experience metrics: field bug encounters normalized by usage, and field bug counts, also normalized by usage. Our 'release-over-release' strategy combines a 'maturity assessment' component (i.e., estimating reliability prior to release to the field) and a 'reliability assessment' component (i.e., gauging actual reliability after release to the field). This overall approach enables us both to predict reliability and to compare reliability results across recent releases of a product.
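A minimal sketch of the usage-normalized field metrics the abstract mentions, with invented figures: field bug encounters and distinct field bug counts are each divided by usage so that releases with different adoption levels can be compared release over release. The paper's maturity and reliability assessment components involve considerably more than this normalization step.

```python
# Hypothetical release-over-release comparison using usage-normalized field metrics.
# All numbers below are made up for illustration.

def field_reliability(bug_encounters, distinct_bugs, usage_units):
    """Return (encounters per usage unit, distinct bugs per usage unit)."""
    return bug_encounters / usage_units, distinct_bugs / usage_units

releases = {
    "R1": field_reliability(bug_encounters=420, distinct_bugs=55, usage_units=10_000),
    "R2": field_reliability(bug_encounters=310, distinct_bugs=48, usage_units=12_500),
}
for name, (encounter_rate, bug_rate) in releases.items():
    print(f"{name}: {encounter_rate:.4f} encounters/usage, {bug_rate:.4f} bugs/usage")
```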