Biblio

Filters: Keyword is work factor metrics
2018-12-10
Widder, David Gray, Hilton, Michael, Kästner, Christian, Vasilescu, Bogdan.  2018.  I'm Leaving You, Travis: A Continuous Integration Breakup Story. Proceedings of the 15th International Conference on Mining Software Repositories. :165–169.

Continuous Integration (CI) services, which can automatically build, test, and deploy software projects, are an invaluable asset in distributed teams, increasing productivity and helping to maintain code quality. Prior work has shown that CI pipelines can be sophisticated, and choosing and configuring a CI system involves tradeoffs. As CI technology matures, new CI tool offerings arise to meet the distinct wants and needs of software teams, as they negotiate a path through these tradeoffs, depending on their context. In this paper, we begin to uncover these nuances, and tell the story of open-source projects falling out of love with Travis, the earliest and most popular cloud-based CI system. Using logistic regression, we quantify the effects that open-source community factors and project technical factors have on the rate of Travis abandonment. We find that increased build complexity reduces the chances of abandonment, that larger projects abandon at higher rates, and that a project's dominant language has significant but varying effects. Finally, we find the surprising result that metrics of configuration attempts and knowledge dispersion in the project do not affect the rate of abandonment.
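
The study's core analysis is a logistic regression over project-level predictors of abandonment. Below is a minimal sketch of that style of analysis; the feature names and data are hypothetical stand-ins inferred from the abstract, not the authors' variables or dataset.

    # Sketch: logistic regression of Travis abandonment on project factors.
    # Feature names and values are illustrative only.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    projects = pd.DataFrame({
        "build_complexity": [3, 12, 5, 20, 8],    # e.g., size of the CI configuration
        "num_contributors": [4, 150, 25, 300, 60],
        "dominant_lang_is_ruby": [1, 0, 1, 0, 0],
        "abandoned_travis": [1, 0, 1, 0, 1],      # outcome: did the project leave Travis?
    })

    X = projects.drop(columns="abandoned_travis")
    y = projects["abandoned_travis"]

    model = LogisticRegression().fit(X, y)
    # Signs and magnitudes of the coefficients indicate each factor's association
    # with abandonment, analogous to the effects reported in the paper.
    print(dict(zip(X.columns, model.coef_[0])))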

Edge, Darren, Larson, Jonathan, White, Christopher.  2018.  Bringing AI to BI: Enabling Visual Analytics of Unstructured Data in a Modern Business Intelligence Platform. Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. :CS02:1–CS02:9.

The Business Intelligence (BI) paradigm is challenged by emerging use cases such as news and social media analytics in which the source data are unstructured, the analysis metrics are unspecified, and the appropriate visual representations are unsupported by mainstream tools. This case study documents the work undertaken in Microsoft Research to enable these use cases in the Microsoft Power BI product. Our approach comprises: (a) back-end pipelines that use AI to infer navigable data structures from streams of unstructured text, media and metadata; and (b) front-end representations of these structures grounded in the Visual Analytics literature. Through our creation of multiple end-to-end data applications, we learned that representing the varying quality of inferred data structures was crucial for making the use and limitations of AI transparent to users. We conclude with reflections on BI in the age of AI, big data, and democratized access to data analytics.

Pasricha, Rajiv, McAuley, Julian.  2018.  Translation-based Factorization Machines for Sequential Recommendation. Proceedings of the 12th ACM Conference on Recommender Systems. :63–71.

Sequential recommendation algorithms aim to predict users' future behavior given their historical interactions. A recent line of work has achieved state-of-the-art performance on sequential recommendation tasks by adapting ideas from metric learning and knowledge-graph completion. These algorithms replace inner products with low-dimensional embeddings and distance functions, employing a simple translation dynamic to model user behavior over time. In this paper, we propose TransFM, a model that combines translation and metric-based approaches for sequential recommendation with Factorization Machines (FMs). Doing so allows us to reap the benefits of FMs (in particular, the ability to straightforwardly incorporate content-based features), while enhancing the state-of-the-art performance of translation-based models in sequential settings. Specifically, we learn an embedding and translation space for each feature dimension, replacing the inner product with the squared Euclidean distance to measure the interaction strength between features. As with FMs, we show that the model equation for TransFM can be computed in linear time and optimized using classical techniques. As TransFM operates on arbitrary feature vectors, additional content information can be easily incorporated without significant changes to the model itself. Empirically, the performance of TransFM significantly increases when taking content features into account, outperforming state-of-the-art models on sequential recommendation tasks for a wide variety of datasets.
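
Schematically (our notation, inferred from the abstract rather than copied from the paper), the FM pairwise term ⟨v_i, v_j⟩ is replaced by a squared Euclidean distance between a translated embedding and a second feature's embedding:

    \hat{y}(\mathbf{x}) = w_0 + \sum_{i=1}^{n} w_i x_i
        + \sum_{i=1}^{n} \sum_{j=i+1}^{n} d^2\left(\mathbf{v}_i + \mathbf{v}'_i,\ \mathbf{v}_j\right) x_i x_j,
    \qquad d^2(\mathbf{a}, \mathbf{b}) = \lVert \mathbf{a} - \mathbf{b} \rVert_2^2,

where v_i is the embedding and v'_i the translation vector learned for feature dimension i; content features enter simply as additional dimensions of x.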

Hashemi, Soheil, Tann, Hokchhay, Reda, Sherief.  2018.  BLASYS: Approximate Logic Synthesis Using Boolean Matrix Factorization. Proceedings of the 55th Annual Design Automation Conference. :55:1–55:6.

Approximate computing is an emerging paradigm where design accuracy can be traded off for benefits in design metrics such as design area, power consumption or circuit complexity. In this work, we present a novel paradigm to synthesize approximate circuits using Boolean matrix factorization (BMF). In our methodology the truth table of a subcircuit of the design is approximated using BMF to a controllable approximation degree, and the results of the factorization are used to synthesize a less complex subcircuit. To scale our technique to large circuits, we devise a circuit decomposition method and a subcircuit design-space exploration technique to identify the best order for subcircuit approximations. Our method leads to a smooth trade-off between accuracy and full circuit complexity as measured by design area and power consumption. Using an industrial-strength design flow, we extensively evaluate our methodology on a number of test cases, where we demonstrate that the proposed methodology can achieve up to 63% in power savings, while introducing an average relative error of 5%. We also compare our work to previous works in Boolean circuit synthesis and demonstrate significant improvements in design metrics for the same accuracy targets.
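
As a rough, hedged illustration of the Boolean-matrix-factorization step (not the authors' BLASYS flow, which uses its own BMF formulation and synthesis back end), a truth table can be approximated by a rank-k Boolean product obtained from a thresholded non-negative factorization:

    # Illustrative only: approximate a truth table T (rows = input minterms,
    # columns = outputs) by a rank-k Boolean product; k controls the
    # accuracy/complexity trade-off.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    T = rng.integers(0, 2, size=(16, 8))           # a 4-input, 8-output subcircuit

    k = 3                                          # approximation degree (rank)
    nmf = NMF(n_components=k, init="random", random_state=0, max_iter=500)
    W = nmf.fit_transform(T.astype(float))
    H = nmf.components_

    B = (W > 0.5).astype(int)                      # binarized factors
    C = (H > 0.5).astype(int)
    T_approx = (B @ C > 0).astype(int)             # Boolean (OR-of-ANDs) product

    print("bit error rate:", np.mean(T_approx != T))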

Kala, Srikant Manas, Sathya, Vanlin, Reddy, M. Pavan Kumar, Tamma, Bheemarjuna Reddy.  2018.  iCALM: A Topology Agnostic Socio-inspired Channel Assignment Performance Prediction Metric for Mesh Networks. :702–704.

A multitude of Channel Assignment (CA) schemes have created a paradox of plenty, making CA selection for Wireless Mesh Networks (WMNs) an onerous task. CA performance prediction (CAPP) metrics are novel tools that address the problem of appropriate CA selection. However, most CAPP metrics depend upon a variety of factors such as the WMN topology, the type of CA scheme, and connectedness of the underlying graph. In this work, we propose an improved Channel Assignment Link-Weight Metric (iCALM) that is independent of these constraints. To the best of our knowledge, iCALM is the first universal CAPP metric for WMNs. To evaluate iCALM, we design two WMN topologies that conform to the attributes of real-world mesh network deployments, and run rigorous simulations in ns-3. We compare iCALM to four existing CAPP metrics, and demonstrate that it performs exceedingly well regardless of the CA type and the WMN layout.

Cui, Limeng, Chen, Zhensong, Zhang, Jiawei, He, Lifang, Shi, Yong, Yu, Philip S..  2018.  Multi-view Collective Tensor Decomposition for Cross-modal Hashing. Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval. :73–81.

Multimedia data available in various disciplines are usually heterogeneous, containing representations in multiple views, where cross-modal search techniques become necessary and useful. It is a challenging problem due to the heterogeneity of data with multiple modalities, multiple views in each modality and the diverse data categories. In this paper, we propose a novel multi-view cross-modal hashing method named Multi-view Collective Tensor Decomposition (MCTD) to fuse these data effectively, which can exploit the complementary features extracted from multiple modalities and views while simultaneously discovering multiple separated subspaces by leveraging the data categories as supervision information. Our contributions are summarized as follows: 1) we exploit tensor modeling to get better representation of the complementary features and redefine a latent representation space; 2) a block-diagonal loss is proposed to explicitly pursue a more discriminative latent tensor space by exploring supervision information; 3) we propose a new feature projection method to characterize the data and to generate the latent representation for incoming new queries. An optimization algorithm is proposed to solve the objective function designed for MCTD, which works under an iterative updating procedure. Experimental results prove the state-of-the-art precision of MCTD compared with competing methods.

Ma, L. M., IJtsma, M., Feigh, K. M., Paladugu, A., Pritchett, A. R..  2018.  Modelling and Evaluating Failures in Human-Robot Teaming Using Simulation. 2018 IEEE Aerospace Conference. :1–16.

As robotic capabilities improve and robots become more capable as team members, a better understanding of effective human-robot teaming is needed. In this paper, we investigate failures by robots in various team configurations in space EVA operations. This paper describes the methodology of extending Work Models that Compute (WMC), a computational simulation framework, and applying it to model robot failures, interruptions, and the resolutions they require. Using these models, we investigate how different team configurations respond to a robot's failure to correctly complete the task and overall mission. We also identify key factors that impact the teamwork metrics, for team designers to keep in mind while assembling teams and assigning taskwork to the agents. We highlight the metrics of team performance that these failures impact through the varying components of teaming and interaction that occur. Finally, we discuss the implications of this work and the future work needed to investigate function allocation in human-robot teams.

Versluis, L., Neacsu, M., Iosup, A..  2018.  A Trace-Based Performance Study of Autoscaling Workloads of Workflows in Datacenters. 2018 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). :223–232.

To improve customer experience, datacenter operators offer support for simplifying application and resource management. For example, running workloads of workflows on behalf of customers is desirable, but requires increasingly more sophisticated autoscaling policies, that is, policies that dynamically provision resources for the customer. Although selecting and tuning autoscaling policies is a challenging task for datacenter operators, so far relatively few studies investigate the performance of autoscaling for workloads of workflows. Complementing previous knowledge, in this work we propose the first comprehensive performance study in the field. Using trace-based simulation, we compare state-of-the-art autoscaling policies across multiple application domains, workload arrival patterns (e.g., burstiness), and system utilization levels. We further investigate the interplay between autoscaling and regular allocation policies, and the complexity cost of autoscaling. Our quantitative study focuses not only on traditional performance metrics and on state-of-the-art elasticity metrics, but also on time- and memory-related autoscaling-complexity metrics. Our main results give strong and quantitative evidence about previously unreported operational behavior: for example, that autoscaling policies perform differently across application domains, and that allocation and provisioning policies should be co-designed.
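
For readers new to the term, an autoscaling policy here is any rule that dynamically provisions resources as demand changes. A deliberately simple reactive threshold policy (a generic illustration, not one of the policies evaluated in the paper) looks roughly like this:

    # Generic reactive threshold autoscaler, for illustration only.
    # The policies compared in the paper are workflow-aware and far richer.
    def autoscale(current_vms, pending_tasks, running_tasks,
                  scale_up_util=0.8, scale_down_util=0.3,
                  min_vms=1, max_vms=100):
        """Return the new VM count based on a crude utilization estimate."""
        utilization = (pending_tasks + running_tasks) / max(current_vms, 1)
        if utilization > scale_up_util and current_vms < max_vms:
            return current_vms + 1      # provision one more VM
        if utilization < scale_down_util and current_vms > min_vms:
            return current_vms - 1      # release one VM
        return current_vms              # keep the current allocation

    # 10 VMs with 20 outstanding tasks -> utilization 2.0 -> scale up to 11.
    print(autoscale(10, pending_tasks=12, running_tasks=8))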

Shathanaa, R., Ramasubramanian, N..  2018.  Improving Power & Latency Metrics for Hardware Trojan Detection During High Level Synthesis. 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–7.

The globalization and outsourcing of the semiconductor industry has raised serious concerns about the trustworthiness of the hardware. Importing third-party IP cores into integrated chip designs has opened the gates for new forms of attacks on hardware. Hardware Trojans embedded in third-party IPs have necessitated a secure IC design process. Design-for-Trust techniques aimed at detection of Hardware Trojans come with overhead in terms of area, latency and power consumption. In this work, we present a Cuckoo Search algorithm based Design Space Exploration process for finding low-cost hardware solutions during High Level Synthesis. The exploration is conducted with respect to datapath resource allocation for single and nested loops. The proposed algorithm is compared with existing Hardware Trojan detection mechanisms, and experimental results show that the proposed algorithm is able to achieve a 3x improvement in cost when compared with existing algorithms.

Pulparambil, S., Baghdadi, Y., Al-Hamdani, A., Al-Badawi, M..  2018.  Service Design Metrics to Predict IT-Based Drivers of Service Oriented Architecture Adoption. 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–7.

The key factors for deploying successful services are centered on the service design practices adopted by an enterprise. The design-level information should be validated, and measures are required to quantify the structural attributes. The metrics at this stage will support an early discovery of design flaws and help designers predict the capabilities of service oriented architecture (SOA) adoption. In this work, we take a deeper look at how we can forecast the key SOA capabilities of infrastructure efficiency and service reuse from service designs modeled with the SOA modeling language. The proposed approach defines metrics based on the structural and domain-level similarity of service operations. The proposed metrics are analytically validated with respect to software engineering metrics properties. Moreover, a tool has been developed to automate the proposed approach, and the results indicate that the metrics predict the SOA capabilities at the service design stage. This work can be further extended to predict the business-based capabilities of SOA adoption, such as flexibility and agility.

2018-03-26
Chekuri, Chandra, Madan, Vivek.  2017.  Approximating Multicut and the Demand Graph. Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms. :855–874.

In the minimum Multicut problem, the input is an edge-weighted supply graph G = (V, E) and a demand graph H = (V, F). Either both G and H are directed (Dir-MulC) or both are undirected (Undir-MulC). The goal is to remove a minimum weight set of supply edges E' ⊆ E such that in G - E' there is no path from s to t for any demand edge (s, t) ∈ F. Undir-MulC admits an O(log k)-approximation, where k is the number of edges in H, while the best known approximation for Dir-MulC is min{k, Õ(|V|^{11/23})}. These approximations are obtained by proving corresponding results on the multicommodity flow-cut gap. In this paper we consider the role that the structure of the demand graph plays in determining the approximability of Multicut. We obtain several new positive and negative results. In undirected graphs our main result is a 2-approximation in n^{O(t)} time when the demand graph excludes an induced matching of size t. This gives a constant factor approximation for a specific demand graph that motivated this work, and is based on a reduction to uniform metric labeling and not via the flow-cut gap. In contrast to the positive result for undirected graphs, we prove that in directed graphs such approximation algorithms cannot exist. We prove that, assuming the Unique Games Conjecture (UGC), for a large class of fixed demand graphs Dir-MulC cannot be approximated to a factor better than the worst-case flow-cut gap. As a consequence we prove that for any fixed k, assuming UGC, Dir-MulC with k demand pairs is hard to approximate to within a factor better than k. On the positive side, we obtain a k-approximation when the demand graph excludes certain graphs as an induced subgraph. This generalizes the known 2-approximation for directed Multiway Cut to a larger class of demand graphs.

Eskandanian, Farzad, Mobasher, Bamshad, Burke, Robin.  2017.  A Clustering Approach for Personalizing Diversity in Collaborative Recommender Systems. Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization. :280–284.

Much of the focus of recommender systems research has been on the accurate prediction of users' ratings for unseen items. Recent work has suggested that objectives such as diversity and novelty in recommendations are also important factors in the effectiveness of a recommender system. However, methods that attempt to increase diversity of recommendation lists for all users without considering each user's preference or tolerance for diversity may lead to monotony for some users and to poor recommendations for others. Our goal in this research is to evaluate the hypothesis that users' propensity towards diversity varies greatly and that the diversity of recommendation lists should be consistent with the level of user interest in diverse recommendations. We propose a pre-filtering clustering approach to group users with similar levels of tolerance for diversity. Our contributions are twofold. First, we propose a method for personalizing diversity by performing collaborative filtering independently on different segments of users based on the degree of diversity in their profiles. Secondly, we investigate the accuracy-diversity tradeoffs using the proposed method across different user segments. As part of this evaluation we propose new metrics, adapted from information retrieval, that help us measure the effectiveness of our approach in personalizing diversity. Our experimental evaluation is based on two different datasets: MovieLens movie ratings, and Yelp restaurant reviews.
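
As a hedged sketch of the pre-filtering idea (our own illustration with synthetic data, not the authors' pipeline), each user profile can be scored by its average pairwise item dissimilarity and users clustered on that score before fitting a separate collaborative-filtering model per segment:

    # Group users by the diversity of their rating profiles (synthetic data).
    import numpy as np
    from scipy.spatial.distance import pdist
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(42)
    item_features = rng.random((100, 16))        # 100 items, 16-dim content vectors
    user_profiles = [rng.choice(100, size=rng.integers(5, 20), replace=False)
                     for _ in range(200)]        # items each user has rated

    def profile_diversity(items):
        """Mean pairwise cosine distance among a user's rated items."""
        return pdist(item_features[items], metric="cosine").mean()

    diversity = np.array([profile_diversity(p) for p in user_profiles]).reshape(-1, 1)

    # Low/medium/high-diversity segments; each would get its own CF model.
    segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(diversity)
    print(np.bincount(segments))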

Valiant, Gregory, Valiant, Paul.  2017.  Estimating the Unseen: Improved Estimators for Entropy and Other Properties. J. ACM. 64:37:1–37:41.

We show that a class of statistical properties of distributions, which includes such practically relevant properties as entropy, the number of distinct elements, and distance metrics between pairs of distributions, can be estimated given a sublinear sized sample. Specifically, given a sample consisting of independent draws from any distribution over at most k distinct elements, these properties can be estimated accurately using a sample of size O(k/log k). For these estimation tasks, this performance is optimal, to constant factors. Complementing these theoretical results, we also demonstrate that our estimators perform exceptionally well, in practice, for a variety of estimation tasks, on a variety of natural distributions, for a wide range of parameters. The key step in our approach is to first use the sample to characterize the "unseen" portion of the distribution—effectively reconstructing this portion of the distribution as accurately as if one had a logarithmic factor larger sample. This goes beyond such tools as the Good-Turing frequency estimation scheme, which estimates the total probability mass of the unobserved portion of the distribution: We seek to estimate the shape of the unobserved portion of the distribution. This work can be seen as introducing a robust, general, and theoretically principled framework that, for many practical applications, essentially amplifies the sample size by a logarithmic factor; we expect that it may be fruitfully used as a component within larger machine learning and statistical analysis systems.
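
For context, the Good-Turing scheme mentioned above (standard background, not a result of this paper) estimates only the total probability mass of the unseen elements, roughly as

    \widehat{P}_{\text{unseen}} \approx \frac{N_1}{n},

where N_1 is the number of elements observed exactly once in a sample of size n; the estimators in this paper go further and reconstruct the shape of the unseen portion of the distribution.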

Jo, Changyeon, Cho, Youngsu, Egger, Bernhard.  2017.  A Machine Learning Approach to Live Migration Modeling. Proceedings of the 2017 Symposium on Cloud Computing. :351–364.

Live migration is one of the key technologies to improve data center utilization, power efficiency, and maintenance. Various live migration algorithms have been proposed; each exhibits distinct characteristics in terms of completion time, amount of data transferred, virtual machine (VM) downtime, and VM performance degradation. To make matters worse, not only the migration algorithm but also the applications running inside the migrated VM affect the different performance metrics. With service-level agreements and operational constraints in place, choosing the optimal live migration technique has so far been an open question. In this work, we propose an adaptive machine learning-based model that is able to predict with high accuracy the key characteristics of live migration as a function of the migration algorithm and the workload running inside the VM. We discuss the important input parameters for accurately modeling the target metrics, and describe how to profile them with little overhead. Compared to existing work, we are not only able to model all commonly used migration algorithms but also predict important metrics that have not been considered so far, such as the performance degradation of the VM. In a comparison with the state-of-the-art, we show that the proposed model outperforms existing work by a factor of 2 to 5.
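
A minimal sketch of the modeling style described here (hypothetical features and synthetic data; the paper's profiled inputs, migration algorithms, and model family differ):

    # Illustrative regressor predicting live-migration completion time from
    # profiled VM/workload features.  All names and numbers are made up.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 200
    X = np.column_stack([
        rng.uniform(1, 64, n),      # VM memory size (GB)
        rng.uniform(0, 1, n),       # page dirty rate (fraction of pages/s)
        rng.uniform(0.1, 10, n),    # available bandwidth (Gb/s)
        rng.integers(0, 5, n),      # migration algorithm id (pre-copy, post-copy, ...)
    ])
    # Synthetic target shaped like "more memory and dirtying, less bandwidth
    # -> longer migration"; real targets would come from profiling runs.
    y = X[:, 0] * (1 + 2 * X[:, 1]) / X[:, 2] + rng.normal(0, 1, n)

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    print("cross-validated R^2:", cross_val_score(model, X, y, cv=5).mean())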

Naor, Assaf, Young, Robert.  2017.  The Integrality Gap of the Goemans-Linial SDP Relaxation for Sparsest Cut Is at Least a Constant Multiple of √log n. Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing. :564–575.

We prove that the integrality gap of the Goemans–Linial semidefinite programming relaxation for the Sparsest Cut Problem is Ω(√log n) on inputs with n vertices, thus matching the previously best known upper bound (log n)^{1/2+o(1)} up to lower-order factors. This statement is a consequence of the following new isoperimetric-type inequality. Consider the 8-regular graph whose vertex set is the 5-dimensional integer grid ℤ^5 and where each vertex (a,b,c,d,e) ∈ ℤ^5 is connected to the 8 vertices (a±1,b,c,d,e), (a,b±1,c,d,e), (a,b,c±1,d,e±a), (a,b,c,d±1,e±b). This graph is known as the Cayley graph of the 5-dimensional discrete Heisenberg group. Given Ω ⊂ ℤ^5, denote the size of its edge boundary in this graph (a.k.a. the horizontal perimeter of Ω) by |∂_h Ω|. For t ∈ ℕ, denote by |∂_v^t Ω| the number of (a,b,c,d,e) ∈ ℤ^5 such that exactly one of the two vectors (a,b,c,d,e), (a,b,c,d,e+t) is in Ω. The vertical perimeter of Ω is defined to be |∂_v Ω| = √(∑_{t=1}^∞ |∂_v^t Ω|²/t²). We show that every subset Ω ⊂ ℤ^5 satisfies |∂_v Ω| = O(|∂_h Ω|). This vertical-versus-horizontal isoperimetric inequality yields the above-stated integrality gap for Sparsest Cut and answers several geometric and analytic questions of independent interest. The theorem stated above is the culmination of a program whose aim is to understand the performance of the Goemans–Linial semidefinite program through the embeddability properties of Heisenberg groups. These investigations have mathematical significance even beyond their established relevance to approximation algorithms and combinatorial optimization. In particular they contribute to a range of mathematical disciplines including functional analysis, geometric group theory, harmonic analysis, sub-Riemannian geometry, geometric measure theory, ergodic theory, group representations, and metric differentiation. This article builds on the above cited works, with the "twist" that while those works were equally valid for any finite dimensional Heisenberg group, our result holds for the Heisenberg group of dimension 5 (or higher) but fails for the 3-dimensional Heisenberg group. This insight leads to our core contribution, which is a deduction of an endpoint L1-boundedness of a certain singular integral on ℝ^5 from the (local) L2-boundedness of the corresponding singular integral on ℝ^3. To do this, we devise a corona-type decomposition of subsets of a Heisenberg group, in the spirit of the construction that David and Semmes performed in ℝ^n, but with two main conceptual differences (in addition to more technical differences that arise from the peculiarities of the geometry of Heisenberg group). Firstly, the "atoms" of our decomposition are perturbations of intrinsic Lipschitz graphs in the sense of Franchi, Serapioni, and Serra Cassano (plus the requisite "wild" regions that satisfy a Carleson packing condition). Secondly, we control the local overlap of our corona decomposition by using quantitative monotonicity rather than Jones-type β-numbers.

Chalkley, Joe D., Ranji, Thomas T., Westling, Carina E. I., Chockalingam, Nachiappan, Witchel, Harry J..  2017.  Wearable Sensor Metric for Fidgeting: Screen Engagement Rather Than Interest Causes NIMI of Wrists and Ankles. Proceedings of the European Conference on Cognitive Ergonomics 2017. :158–161.

Measuring fidgeting is an important goal for the psychology of mind-wandering and for human computer interaction (HCI). Previous work measuring the movement of the head, torso and thigh during HCI has shown that engaging screen content leads to non-instrumental movement inhibition (NIMI). Camera-based methods for measuring wrist movements are limited by occlusions. Here we used a high pass filtered magnitude of wearable tri-axial accelerometer recordings during 2-minute passive HCI stimuli as a surrogate for movement of the wrists and ankles. With 24 seated, healthy volunteers experiencing HCI, this metric showed that wrists moved significantly more than ankles. We found that NIMI could be detected in the wrists and ankles; it distinguished extremes of interest and boredom via restlessness. We conclude that both free-willed and forced screen engagement can elicit NIMI of the wrists and ankles.
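
A hedged sketch of the signal-processing step described above (the sampling rate, cutoff frequency, and summary statistic are our assumptions, not the authors' exact parameters):

    # Fidgeting proxy: high-pass-filtered magnitude of a tri-axial accelerometer.
    # Sampling rate and cutoff below are illustrative choices.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 50.0                        # sampling rate (Hz), assumed
    cutoff = 0.5                     # high-pass cutoff (Hz), assumed
    t = np.arange(0, 120, 1 / fs)    # a 2-minute recording

    # Fake tri-axial data: gravity offset + slow posture drift + small fidgets.
    rng = np.random.default_rng(0)
    acc = np.column_stack([
        9.81 + 0.05 * np.sin(0.05 * t) + 0.02 * rng.standard_normal(t.size),
        0.10 * np.sin(0.03 * t) + 0.02 * rng.standard_normal(t.size),
        0.05 * rng.standard_normal(t.size),
    ])

    magnitude = np.linalg.norm(acc, axis=1)               # combine the three axes
    b, a = butter(4, cutoff / (fs / 2), btype="highpass")
    fidget = filtfilt(b, a, magnitude)                    # drop gravity/posture drift

    print("mean high-pass magnitude:", np.mean(np.abs(fidget)))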

Afshar, Ardavan, Ho, Joyce C., Dilkina, Bistra, Perros, Ioakeim, Khalil, Elias B., Xiong, Li, Sunderam, Vaidy.  2017.  CP-ORTHO: An Orthogonal Tensor Factorization Framework for Spatio-Temporal Data. Proceedings of the 25th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. :67:1–67:4.

Extracting patterns and deriving insights from spatio-temporal data finds many target applications in various domains, such as in urban planning and computational sustainability. Due to their inherent capability of simultaneously modeling the spatial and temporal aspects of multiple instances, tensors have been successfully used to analyze such spatio-temporal data. However, standard tensor factorization approaches often result in components that are highly overlapping, which hinders the practitioner's ability to interpret them without advanced domain knowledge. In this work, we tackle this challenge by proposing a tensor factorization framework, called CP-ORTHO, to discover distinct and easily-interpretable patterns from multi-modal, spatio-temporal data. We evaluate our approach on real data reflecting taxi drop-off activity. CP-ORTHO provides more distinct and interpretable patterns than prior art, as measured via relevant quantitative metrics, without compromising the solution's accuracy. We observe that CP-ORTHO is fast, in that it achieves this result in 5x less time than the most accurate competing approach.

Xin, Doris, Mayoraz, Nicolas, Pham, Hubert, Lakshmanan, Karthik, Anderson, John R..  2017.  Folding: Why Good Models Sometimes Make Spurious Recommendations. Proceedings of the Eleventh ACM Conference on Recommender Systems. :201–209.

In recommender systems based on low-rank factorization of a partially observed user-item matrix, a common phenomenon that plagues many otherwise effective models is the interleaving of good and spurious recommendations in the top-K results. A single spurious recommendation can dramatically impact the perceived quality of a recommender system. Spurious recommendations do not result in serendipitous discoveries but rather cognitive dissonance. In this work, we investigate folding, a major contributing factor to spurious recommendations. Folding refers to the unintentional overlap of disparate groups of users and items in the low-rank embedding vector space, induced by improper handling of missing data. We formally define a metric that quantifies the severity of folding in a trained system, to assist in diagnosing its potential to make inappropriate recommendations. The folding metric complements existing information retrieval metrics that focus on the number of good recommendations and their ranks but ignore the impact of undesired recommendations. We motivate the folding metric definition on synthetic data and evaluate its effectiveness on both synthetic and real world datasets. In studying the relationship between the folding metric and other characteristics of recommender systems, we observe that optimizing for goodness metrics can lead to high folding and thus more spurious recommendations.

Mäenpää, Hanna, Munezero, Myriam, Fagerholm, Fabian, Mikkonen, Tommi.  2017.  The Many Hats and the Broken Binoculars: State of the Practice in Developer Community Management. Proceedings of the 13th International Symposium on Open Collaboration. :18:1–18:9.

Open Source Software developer communities are susceptible to challenges related to volatility, distributed coordination and the interplay between commercial and ideological interests. Here, community managers play a vital role in growing, shepherding, and coordinating the developers' work. This study investigates the varied tasks that community managers perform to ensure the health and vitality of their communities. We describe the challenges managers face while directing the community and seeking support for their work from the analysis tools provided by state-of-the-art software platforms. Our results describe seven roles that community managers may play, highlighting the versatile and people-centric nature of the community manager's work. Managers experience hardship of connecting their goals, questions and metrics that define a community's health and effects of their actions. Our results voice common concerns among community managers, and can be used to help them structure the management activity and to find a theoretical frame for further research on how health of developer communities could be understood.

2017-05-18
Ross, Caitlin, Carothers, Christopher D., Mubarak, Misbah, Carns, Philip, Ross, Robert, Li, Jianping Kelvin, Ma, Kwan-Liu.  2016.  Visual Data-analytics of Large-scale Parallel Discrete-event Simulations. Proceedings of the 7th International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computing Systems. :87–97.

Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.

Li, Shi.  2016.  Approximating Capacitated k-Median with (1 + ε)k Open Facilities. Proceedings of the Twenty-seventh Annual ACM-SIAM Symposium on Discrete Algorithms. :786–796.

In the capacitated k-median (CKM) problem, we are given a set F of facilities, each facility i ∈ F with a capacity ui, a set C of clients, a metric d over F ∪ C and an integer k. The goal is to open k facilities in F and connect the clients C to the open facilities such that each facility i is connected by at most ui clients, so as to minimize the total connection cost. In this paper, we give the first constant approximation for CKM, that only violates the cardinality constraint by a factor of 1 + ε. This generalizes the result of [Li15], which only works for the uniform capacitated case. Moreover, the approximation ratio we obtain is O([EQUATION] log [EQUATION]), which is an exponential improvement over the ratio of exp (O([EQUATION])) in [Li15]. The natural LP relaxation for the problem, which almost all previous algorithms for CKM are based on, has unbounded integrality gap even if (2 – ε)k facilities can be opened. We introduce a novel configuration LP for the problem, that overcomes this integrality gap. On the downside, each facility may be opened twice by our algorithm.
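
For reference, the natural LP relaxation referred to in the abstract is the standard one below (the paper's configuration LP that closes the integrality gap is considerably more involved):

    minimize    \sum_{i \in F} \sum_{j \in C} d(i, j) x_{ij}
    subject to  \sum_{i \in F} x_{ij} = 1            for all j \in C
                x_{ij} \le y_i                       for all i \in F, j \in C
                \sum_{j \in C} x_{ij} \le u_i y_i    for all i \in F
                \sum_{i \in F} y_i \le k
                x, y \ge 0

Here y_i fractionally opens facility i and x_{ij} assigns client j to facility i.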

Solomonik, Edgar, Carson, Erin, Knight, Nicholas, Demmel, James.  2017.  Trade-Offs Between Synchronization, Communication, and Computation in Parallel Linear Algebra Computations. ACM Trans. Parallel Comput. 3:3:1–3:47.

This article derives trade-offs between three basic costs of a parallel algorithm: synchronization, data movement, and computational cost. These trade-offs are lower bounds on the execution time of the algorithm that are independent of the number of processors but dependent on the problem size. Therefore, they provide lower bounds on the execution time of any parallel schedule of an algorithm computed by a system composed of any number of homogeneous processors, each with associated computational, communication, and synchronization costs. We employ a theoretical model that measures the amount of work and data movement as a maximum over that incurred along any execution path during the parallel computation. By considering this metric rather than the total communication volume over the whole machine, we obtain new insights into the characteristics of parallel schedules for algorithms with nontrivial dependency structures. We also present reductions from BSP and LogGP algorithms to our execution model, extending our lower bounds to these two models of parallel computation. We first develop our results for general dependency graphs and hypergraphs based on their expansion properties, and then we apply the theorem to a number of specific algorithms in numerical linear algebra, namely triangular substitution, Cholesky factorization, and stencil computations. We represent some of these algorithms as families of dependency graphs. We derive their communication lower bounds by studying the communication requirements of the hypergraph structures shared by these dependency graphs. In addition to these lower bounds, we introduce a new communication-efficient parallelization for stencil computation algorithms, which is motivated by results of our lower bound analysis and the properties of previously existing parallelizations of the algorithms.

Tan, Li, Chen, Zizhong, Song, Shuaiwen Leon.  2015.  Scalable Energy Efficiency with Resilience for High Performance Computing Systems: A Quantitative Methodology. ACM Trans. Archit. Code Optim. 12:35:1–35:27.

The ever-growing performance of supercomputers brings demanding requirements for energy efficiency and resilience, due to the rapidly expanding size and duration in use of large-scale computing systems. Many application/architecture-dependent parameters that determine energy efficiency and resilience individually have causal effects on each other, which directly affect the trade-offs among performance, energy efficiency and resilience at scale. To enable high-efficiency management of large-scale High-Performance Computing (HPC) systems, quantitatively understanding the entangled effects among performance, energy efficiency, and resilience is thus required. While previous work focuses on exploring energy-saving and resilience-enhancing opportunities separately, little has been done to theoretically and empirically investigate the interplay between energy efficiency and resilience at scale. In this article, by extending Amdahl's Law and the Karp-Flatt Metric to take resilience into consideration, we quantitatively model the integrated energy efficiency in terms of performance per Watt and showcase the trade-offs among typical HPC parameters, such as number of cores, frequency/voltage, and failure rates. Experimental results for a wide spectrum of HPC benchmarks on two HPC systems show that the proposed models are accurate in extrapolating resilience-aware performance and energy efficiency, and capable of capturing the interplay among various energy-saving and resilience factors. Moreover, the models can help find the optimal HPC configuration for the highest integrated energy efficiency, in the presence of failures and applied resilience techniques.
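
For reference, the two classical results the article extends are, in their standard forms (without the paper's added resilience terms), Amdahl's Law for the speedup of a program with parallel fraction p on N cores, and the Karp-Flatt experimentally determined serial fraction e:

    S(N) = \frac{1}{(1 - p) + p/N},
    \qquad
    e = \frac{1/S(N) - 1/N}{1 - 1/N}.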

Ahsan, Muhammad, Meter, Rodney Van, Kim, Jungsang.  2015.  Designing a Million-Qubit Quantum Computer Using a Resource Performance Simulator. J. Emerg. Technol. Comput. Syst. 12:39:1–39:25.

The optimal design of a fault-tolerant quantum computer involves finding an appropriate balance between the burden of large-scale integration of noisy components and the load of improving the reliability of hardware technology. This balance can be evaluated by quantitatively modeling the execution of quantum logic operations on a realistic quantum hardware containing limited computational resources. In this work, we report a complete performance simulation software tool capable of (1) searching the hardware design space by varying resource architecture and technology parameters, (2) synthesizing and scheduling a fault-tolerant quantum algorithm within the hardware constraints, (3) quantifying the performance metrics such as the execution time and the failure probability of the algorithm, and (4) analyzing the breakdown of these metrics to highlight the performance bottlenecks and visualizing resource utilization to evaluate the adequacy of the chosen design. Using this tool, we investigate a vast design space for implementing key building blocks of Shor’s algorithm to factor a 1,024-bit number with a baseline budget of 1.5 million qubits. We show that a trapped-ion quantum computer designed with twice as many qubits and one-tenth of the baseline infidelity of the communication channel can factor a 2,048-bit integer in less than 5 months.

Hsu, Daniel, Sabato, Sivan.  2016.  Loss Minimization and Parameter Estimation with Heavy Tails. J. Mach. Learn. Res. 17:543–582.

This work studies applications and generalizations of a simple estimation technique that provides exponential concentration under heavy-tailed distributions, assuming only bounded low-order moments. We show that the technique can be used for approximate minimization of smooth and strongly convex losses, and specifically for least squares linear regression. For instance, our d-dimensional estimator requires just O(d log(1/δ)) random samples to obtain a constant factor approximation to the optimal least squares loss with probability 1-δ, without requiring the covariates or noise to be bounded or subgaussian. We provide further applications to sparse linear regression and low-rank covariance matrix estimation with similar allowances on the noise and covariate distributions. The core technique is a generalization of the median-of-means estimator to arbitrary metric spaces.
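
A minimal sketch of the classical scalar median-of-means estimator that the paper generalizes to arbitrary metric spaces (our illustration, not the authors' code):

    # Median-of-means: split the sample into groups, average each group,
    # and return the median of the group means.  Robust under heavy tails.
    import numpy as np

    def median_of_means(samples, num_groups, seed=0):
        shuffled = np.random.default_rng(seed).permutation(np.asarray(samples))
        groups = np.array_split(shuffled, num_groups)
        return np.median([g.mean() for g in groups])

    # Heavy-tailed example: Pareto(alpha=1.5) has infinite variance, mean = 3.
    rng = np.random.default_rng(1)
    data = rng.pareto(1.5, size=10_000) + 1.0
    print("empirical mean: ", data.mean())
    print("median of means:", median_of_means(data, num_groups=30))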