Biblio

Found 4176 results

Filters: First Letter Of Last Name is M
2017-07-24
Chen, Jing, McCauley, Samuel, Singh, Shikha.  2016.  Rational Proofs with Multiple Provers. Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science. :237–248.

Interactive proofs model a world where a verifier delegates computation to an untrustworthy prover, verifying the prover's claims before accepting them. These proofs have applications to delegation of computation, probabilistically checkable proofs, crowdsourcing, and more. In some of these applications, the verifier may pay the prover based on the quality of his work. Rational proofs, introduced by Azar and Micali (2012), are an interactive proof model in which the prover is rational rather than untrustworthy: he may lie, but only to increase his payment. This allows the verifier to leverage the greed of the prover to obtain better protocols: while rational proofs are no more powerful than interactive proofs, the protocols are simpler and more efficient. Azar and Micali posed as an open problem whether multiple provers are more powerful than one for rational proofs. We provide a model that extends rational proofs to allow multiple provers. In this model, a verifier can cross-check the answers received by asking several provers. The verifier can pay the provers according to the quality of their work, incentivizing them to provide correct information. We analyze rational proofs with multiple provers from a complexity-theoretic point of view. We fully characterize this model by giving tight upper and lower bounds on its power. On the way, we resolve Azar and Micali's open problem in the affirmative, showing that multiple rational provers are strictly more powerful than one (under standard complexity-theoretic assumptions). We further show that the full power of rational proofs with multiple provers can be achieved using only two provers and five rounds of interaction. Finally, we consider more demanding models where the verifier wants the provers' payment to decrease significantly when they are lying, and fully characterize the power of the model when the payment gap must be noticeable (i.e., at least 1/p where p is a polynomial).

2017-05-19
Pan, Weike, Yang, Qiang, Duan, Yuchao, Ming, Zhong.  2016.  Transfer Learning for Semisupervised Collaborative Recommendation. ACM Trans. Interact. Intell. Syst.. 6:10:1–10:21.

Users’ online behaviors such as ratings and examination of items are recognized as one of the most valuable sources of information for learning users’ preferences in order to make personalized recommendations. But most previous works focus on modeling only one type of users’ behaviors such as numerical ratings or browsing records, which are referred to as explicit feedback and implicit feedback, respectively. In this article, we study a Semisupervised Collaborative Recommendation (SSCR) problem with labeled feedback (for explicit feedback) and unlabeled feedback (for implicit feedback), in analogy to the well-known Semisupervised Learning (SSL) setting with labeled instances and unlabeled instances. SSCR is associated with two fundamental challenges, that is, heterogeneity of two types of users’ feedback and uncertainty of the unlabeled feedback. As a response, we design a novel Self-Transfer Learning (sTL) algorithm to iteratively identify and integrate likely positive unlabeled feedback, which is inspired by the general forward/backward process in machine learning. The merit of sTL is its ability to learn users’ preferences from heterogeneous behaviors in a joint and selective manner. We conduct extensive empirical studies of sTL and several very competitive baselines on three large datasets. The experimental results show that our sTL is significantly better than the state-of-the-art methods.
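
The iterative identify-and-integrate step can be pictured as a generic self-training loop. A minimal sketch, assuming a hypothetical model interface with fit() and score(); this illustrates the general idea, not the paper's sTL algorithm:

def self_transfer(model, labeled, unlabeled, rounds=5, threshold=0.9):
    # Iteratively move likely-positive unlabeled feedback into the
    # training set; model.fit(data) trains, and model.score(x) returns
    # a confidence in [0, 1] that x is positive feedback (both assumed).
    train = list(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        model.fit(train)
        confident = [x for x in pool if model.score(x) >= threshold]
        if not confident:
            break  # nothing left that looks like positive feedback
        train.extend(confident)
        pool = [x for x in pool if x not in confident]
    return model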

2018-05-17
Gallagher, J. C., Sam, M., Boddhu, S., Matson, E. T., Greenwood, G..  2016.  Drag force fault extension to evolutionary model consistency checking for a flapping-wing micro air vehicle. 2016 IEEE Congress on Evolutionary Computation (CEC). :3961–3968.

Previously, we introduced Evolutionary Model Consistency Checking (EMCC) as an adjunct to Evolvable and Adaptive Hardware (EAH) methods. The core idea was to dual-purpose objective function evaluations, enabling EA search of hardware configurations while simultaneously enabling model-based inference of the nature of the damage that necessitated the hardware adaptation. We demonstrated the efficacy of this method by modifying a pair of EAH oscillators inside a simulated Flapping-Wing Micro Air Vehicle (FW-MAV). In that work, we showed that one could, while online in normal service, evolve wing gait patterns that corrected altitude control errors caused by mechanical wing damage while simultaneously determining, with high precision, the wing lift force deficits that necessitated the adaptation. In this work, we extend the method to also determine wing drag force deficits. Further, we infer the now extended set of four unknown damage estimates without substantially increasing the number of objective function evaluations required. In this paper, we provide the outline of a formal derivation of the new inference method plus experimental validation of its efficacy. The paper concludes with commentary on several practical issues, including better containment of estimation error by introducing more in-flight learning trials, and why one might argue that these techniques could eventually be used on a true free-flying flapping-wing vehicle.

2017-08-18
Clark, Ruaridh, Punzo, Giuliano, Baumanis, Kristaps, Macdonald, Malcolm.  2016.  Consensus Speed Maximisation in Engineered Swarms with Autocratic Leaders. Proceedings of the International Conference on Artificial Intelligence and Robotics and the International Conference on Automation, Control and Robotics Engineering. :8:1–8:5.

Control of a large engineered swarm can be achieved by influencing key agents within the swarm. The swarm can rely on its communication network to spread the external perturbation and transition to a new state when all agents reach a consensus. Maximising this consensus speed is a vital design objective when fast response is desirable. The systems analysed consist of N interacting agents that have the same number of outward, observing connections, which follow k-nearest neighbour rules and are represented by a directed graph Laplacian. The spectral properties of this graph are exploited to identify leaders with a newly presented semi-analytical approach referred to as the Leaders of Influence (LoI) method. This method is demonstrated on k-NNR graphs for a set number of leaders. These methods are compared with a genetic algorithm and are shown to be efficient and effective at leader identification. A focus of this work is the effect of leadership style on consensus speed, where an autocratic approach (leaders that are not influenced by other nodes in the graph) is shown to always produce faster consensus than a democratic leadership model.
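
The spectral quantity being maximised can be computed directly: for a directed graph Laplacian, consensus speed is governed by the smallest non-zero real part of the eigenvalues. A minimal sketch, using a toy 4-agent directed ring rather than the paper's k-NNR graphs:

import numpy as np

def consensus_speed(adj):
    # Out-degree Laplacian of a directed graph; a larger smallest
    # non-zero real eigenvalue part means faster consensus.
    adj = np.asarray(adj, dtype=float)
    laplacian = np.diag(adj.sum(axis=1)) - adj
    eigenvalues = np.linalg.eigvals(laplacian)
    return min(e.real for e in eigenvalues if abs(e) > 1e-9)

# Directed ring: each agent observes exactly one neighbour
ring = [[0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1],
        [1, 0, 0, 0]]
print(consensus_speed(ring))  # ~1.0 for this ring

In this representation an autocratic leader is a node whose adjacency row is all zeros: it observes no one, so its state is never pulled toward the rest of the swarm.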

Fernández, Silvino, Valledor, Pablo, Diaz, Diego, Malatsetxebarria, Eneko, Iglesias, Miguel.  2016.  Criticality of Response Time in the Usage of Metaheuristics in Industry. Proceedings of the 2016 on Genetic and Evolutionary Computation Conference Companion. :937–940.

Metaheuristics include a wide range of optimization algorithms. Some of them are very well known and of proven value, as they successfully solve many examples of combinatorial NP-hard problems. Some examples of metaheuristics are Genetic Algorithms (GA), Simulated Annealing (SA) or Ant Colony Optimization (ACO). Our company is devoted to making steel and is the biggest steelmaker in the world. Combining several industrial processes to produce 84.6 million tonnes (public official data of 2015) involves huge effort. Metaheuristics are applied to different scenarios inside our operations to optimize different areas: logistics, production scheduling or resource assignment, saving costs and helping to reach operational excellence, critical for our survival in a globalized world. Rather than obtaining the global optimal solution, the main interest of an industrial company is to have "good solutions", close to the optimal, but within a very short response time, and this latter requirement is the main difference with respect to the traditional research approach of the academic world. Production is continuous and cannot be stopped or made to wait for calculations; moreover, reducing production speed decreases productivity and makes the facilities less competitive. Disruptions are common events, making rescheduling imperative while foremen wait for new instructions to operate. This position paper explains the problem of response time in our industrial environment, the solutions we have investigated and some results already achieved.
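
The response-time requirement can be enforced directly in the optimiser. A minimal sketch of a wall-clock-budgeted simulated annealing loop with a stand-in objective; this is a generic illustration, not the company's production code:

import math, random, time

def sa_with_budget(cost, neighbor, x0, budget_s=1.0, t0=1.0, alpha=0.999):
    # Simulated annealing that always returns within budget_s seconds,
    # reporting the best solution found so far.
    best = cur = x0
    best_c = cur_c = cost(x0)
    t = t0
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        cand = neighbor(cur)
        cand_c = cost(cand)
        # Accept improvements always, worse moves with Boltzmann probability
        if cand_c < cur_c or random.random() < math.exp((cur_c - cand_c) / t):
            cur, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = cur, cur_c
        t = max(t * alpha, 1e-12)  # cool down, avoiding division by zero
    return best, best_c

# Toy stand-in problem: minimise (x - 3)^2 over the integers in 0.1 s
x, c = sa_with_budget(lambda x: (x - 3) ** 2,
                      lambda x: x + random.choice((-1, 1)),
                      x0=100, budget_s=0.1)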

Zapotecas-Martinez, Saul, Moraglio, Alberto, Aguirre, Hernan E., Tanaka, Kiyoshi.  2016.  Geometric Particle Swarm Optimization for Multi-objective Optimization Using Decomposition. Proceedings of the Genetic and Evolutionary Computation Conference 2016. :69–76.

Multi-objective evolutionary algorithms (MOEAs) based on decomposition are aggregation-based algorithms which transform a multi-objective optimization problem (MOP) into several single-objective subproblems. Being effective, efficient, and easy to implement, Particle Swarm Optimization (PSO) has become one of the most popular single-objective optimizers for continuous problems, and recently it has been successfully extended to the multi-objective domain. However, no investigation exists on the application of PSO within a multi-objective decomposition framework in the context of combinatorial optimization. This is precisely the focus of this paper. More specifically, we study the incorporation of Geometric Particle Swarm Optimization (GPSO), a discrete generalization of PSO that has proven successful on a number of single-objective combinatorial problems, into a decomposition approach. We conduct experiments on many-objective 0/1 knapsack problems, i.e., problems with more than three objective functions, substantially harder than multi-objective problems with fewer objectives. The results indicate that the proposed multi-objective GPSO based on decomposition is able to outperform two versions of the well-known MOEA based on decomposition (MOEA/D) and the most recent version of the non-dominated sorting genetic algorithm (NSGA-III), which are state-of-the-art multi-objective evolutionary approaches based on decomposition.
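
The decomposition step the paper builds on can be made concrete with the standard Tchebycheff scalarisation used by MOEA/D-style frameworks, which turns one multi-objective problem into many single-objective subproblems. A minimal sketch with illustrative numbers, not the authors' GPSO code:

def tchebycheff(objectives, weights, ideal):
    # One subproblem per weight vector; for a maximisation problem such
    # as knapsack, pass negated objective values.
    return max(w * abs(f - z) for f, w, z in zip(objectives, weights, ideal))

f = [4.0, 7.0]    # objective values of one candidate solution
z = [10.0, 10.0]  # ideal (best-known) point
print(tchebycheff(f, [0.8, 0.2], z))  # subproblem emphasising objective 1: 4.8
print(tchebycheff(f, [0.2, 0.8], z))  # subproblem emphasising objective 2: 2.4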

2017-12-04
Idayanti, N., Dedi, Nanang, T. K., Sudrajat, Septiani, A., Mulyadi, D., Irasari, P..  2016.  The implementation of hybrid bonded permanent magnet on permanent magnet generator for renewable energy power plants. 2016 International Seminar on Intelligent Technology and Its Applications (ISITIA). :557–560.

This paper describes the application of permanent magnets in a permanent magnet generator (PMG) for renewable energy power plants. The permanent magnets used were bonded hybrid magnets: a mixture of 50 wt% barium ferrite magnetic powder and 50 wt% NdFeB magnetic powder, with 15 wt% adhesive polymer as a binder. The bonded hybrid magnets were prepared by hot pressing at a pressure of 2 tons and a temperature of 200°C for 15 minutes. The magnetic properties obtained were remanence induction (Br) = 1.54 kG, coercivity (Hc) = 1.290 kOe, maximum energy product (BHmax) = 0.28 MGOe, and surface remanence induction (Br) = 1200 gauss

2017-03-07
Almeida, Ricardo, Maio, Paulo, Oliveira, Paulo, Barroso, João.  2016.  Ontology Based Rewriting Data Cleaning Operations. Proceedings of the Ninth International C* Conference on Computer Science & Software Engineering. :85–88.

Ever-increasing amounts of data create the need to deal with redundant, inconsistent and/or complementary repositories which may differ in their data models and/or schemas. Current data cleaning techniques developed to tackle data quality problems are only suitable for scenarios where all repositories share the same model and schema. Recently, an ontology-based methodology was proposed to overcome this limitation. In this paper, this methodology is briefly described and applied to a real scenario in the health domain with data quality problems.

2017-05-18
Musto, Cataldo, Lops, Pasquale, Basile, Pierpaolo, de Gemmis, Marco, Semeraro, Giovanni.  2016.  Semantics-aware Graph-based Recommender Systems Exploiting Linked Open Data. Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization. :229–237.

The ever-increasing interest in semantic technologies and the availability of several open knowledge sources have fueled recent progress in the field of recommender systems. In this paper we feed recommender systems with features coming from the Linked Open Data (LOD) cloud - a huge amount of machine-readable knowledge encoded as RDF statements - with the aim of improving their effectiveness. In order to exploit the natural graph-based structure of RDF data, we study the impact of the knowledge coming from the LOD cloud on the overall performance of a graph-based recommendation algorithm. In more detail, we investigate whether the integration of LOD-based features improves the effectiveness of the algorithm and to what extent the choice of different feature selection techniques influences its performance in terms of accuracy and diversity. The experimental evaluation on two state-of-the-art datasets shows a clear correlation between the feature selection technique and the ability of the algorithm to maximize a specific evaluation metric. Moreover, the graph-based algorithm leveraging LOD-based features is able to outperform several state-of-the-art baselines, such as collaborative filtering and matrix factorization, thus confirming the effectiveness of the proposed approach.
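
One common way to realise such a graph-based recommender is personalized PageRank over a user-item graph enriched with feature nodes. A minimal sketch with networkx; the graph and the LOD-style property node are invented for illustration:

import networkx as nx

# User-item edges plus one LOD-style feature node connecting the items
g = nx.Graph()
g.add_edges_from([("u1", "Inception"), ("u1", "Interstellar"),
                  ("u2", "Interstellar"), ("u2", "The Prestige")])
g.add_edges_from([("Inception", "director:Nolan"),
                  ("Interstellar", "director:Nolan"),
                  ("The Prestige", "director:Nolan")])

# Personalized PageRank seeded on the target user
scores = nx.pagerank(g, personalization={"u1": 1.0})

# Rank the items u1 has not interacted with yet
items = ["Inception", "Interstellar", "The Prestige"]
unseen = [i for i in items if not g.has_edge("u1", i)]
print(max(unseen, key=scores.get))  # "The Prestige", reached via the LOD link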

2023-03-31
Navuluri, Karthik, Mukkamala, Ravi, Ahmad, Aftab.  2016.  Privacy-Aware Big Data Warehouse Architecture. 2016 IEEE International Congress on Big Data (BigData Congress). :341–344.

Along with the ever-increasing growth in data collection and its mining, there is an increasing fear of compromising individual and population privacy. Several techniques have been proposed in the literature to preserve the privacy of collected data during storage and processing. In this paper, we propose a privacy-aware architecture for storing and processing data in a Big Data warehouse. In particular, we propose a flexible, extendable, and adaptable architecture that enforces user-specified privacy requirements in the form of Embedded Privacy Agreements. The paper discusses the architecture along with some implementation details.

2017-03-07
Santoro, Donatello, Arocena, Patricia C., Glavic, Boris, Mecca, Giansalvatore, Miller, Renée J., Papotti, Paolo.  2016.  BART in Action: Error Generation and Empirical Evaluations of Data-Cleaning Systems. Proceedings of the 2016 International Conference on Management of Data. :2161–2164.

Repairing erroneous or conflicting data that violate a set of constraints is an important problem in data management. Many automatic or semi-automatic data-repairing algorithms have been proposed in the last few years, each with its own strengths and weaknesses. Bart is an open-source error-generation system conceived to support thorough experimental evaluations of these data-repairing systems. The demo is centered around three main lessons. To start, we discuss how generating errors in data is a complex problem, with several facets. We introduce the important notions of the detectability and repairability of an error, which stand at the core of Bart. Then, we show how, by changing the features of errors, it is possible to influence quite significantly the performance of the tools. Finally, we concretely put five data-repairing algorithms to work on dirty data of various kinds generated using Bart, and discuss their performance.

2017-09-19
Hu, Xuan, Li, Banghuai, Zhang, Yang, Zhou, Changling, Ma, Hao.  2016.  Detecting Compromised Email Accounts from the Perspective of Graph Topology. Proceedings of the 11th International Conference on Future Internet Technologies. :76–82.

While email plays a growingly important role on the Internet, we are faced with more severe challenges brought by compromised email accounts, especially for the administrators of institutional email service providers. Inspired by previous experience on spam filtering and compromised-account detection, in this paper we propose several criteria, such as Success Outdegree Proportion, Reverse PageRank, Recipient Clustering Coefficient and Legitimate Recipient Proportion, for detecting compromised email accounts from the perspective of graph topology. Specifically, several widely used social network analysis metrics are used and adapted according to the characteristics of mail log analysis. We evaluate our methods on a dataset constructed by mining the one-month (30 days) mail log from a university with 118,617 local users and 11,460,399 mail log entries. The experimental results demonstrate that our methods achieve very positive performance, and we also show that these methods can be efficiently applied to even larger datasets.
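
Of the listed criteria, Reverse PageRank is the most self-contained: it is ordinary PageRank computed on the mail graph with every edge reversed, so accounts that reach many recipients (typical of a compromised account sending spam) score highly. A minimal sketch with networkx and an invented toy graph:

import networkx as nx

# Directed mail graph: an edge u -> v means u sent mail to v
mail = nx.DiGraph([("alice", "bob"), ("alice", "carol"),
                   ("bot", "alice"), ("bot", "bob"),
                   ("bot", "carol"), ("bot", "dave")])

# Reverse PageRank: PageRank on the reversed graph
reverse_pr = nx.pagerank(mail.reverse())
ranked = sorted(reverse_pr, key=reverse_pr.get, reverse=True)
print(ranked[0])  # "bot": it sends to many accounts, so it collects
                  # rank from all of them in the reversed graph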

2017-03-07
Chung, Yeounoh, Mortensen, Michael Lind, Binnig, Carsten, Kraska, Tim.  2016.  Estimating the Impact of Unknown Unknowns on Aggregate Query Results. Proceedings of the 2016 International Conference on Management of Data. :861–876.

It is common practice for data scientists to acquire and integrate disparate data sources to achieve higher quality results. But even with a perfectly cleaned and merged data set, two fundamental questions remain: (1) is the integrated data set complete and (2) what is the impact of any unknown (i.e., unobserved) data on query results? In this work, we develop and analyze techniques to estimate the impact of the unknown data (a.k.a. unknown unknowns) on simple aggregate queries. The key idea is that the overlap between different data sources enables us to estimate the number and values of the missing data items. Our main techniques are parameter-free and do not assume prior knowledge about the distribution. Through a series of experiments, we show that estimating the impact of unknown unknowns is invaluable to better assess the results of aggregate queries over integrated data sources.
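
The overlap idea is the same one behind classical capture-recapture estimation. As a simplified illustration (the paper's estimators are more elaborate and parameter-free), the Lincoln-Petersen estimate of total population size from two overlapping sources:

def lincoln_petersen(source_a, source_b):
    # N ~ |A| * |B| / |A intersect B|; the unknown unknowns are the
    # estimated N minus the records actually seen, |A union B|.
    a, b = set(source_a), set(source_b)
    overlap = len(a & b)
    if overlap == 0:
        raise ValueError("no overlap: the sources say nothing about size")
    n_hat = len(a) * len(b) / overlap
    return n_hat, n_hat - len(a | b)  # (estimated total, estimated missing)

a = ["r1", "r2", "r3", "r4", "r5", "r6"]
b = ["r4", "r5", "r6", "r7", "r8", "r9"]
print(lincoln_petersen(a, b))  # (12.0, 3.0): ~3 records seen by neither source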

He, Jian, Veltri, Enzo, Santoro, Donatello, Li, Guoliang, Mecca, Giansalvatore, Papotti, Paolo, Tang, Nan.  2016.  Interactive and Deterministic Data Cleaning. Proceedings of the 2016 International Conference on Management of Data. :893–907.

We present Falcon, an interactive, deterministic, and declarative data cleaning system, which uses SQL update queries as the language to repair data. Falcon does not rely on the existence of a set of pre-defined data quality rules. On the contrary, it encourages users to explore the data, identify possible problems, and make updates to fix them. Bootstrapped by one user update, Falcon guesses a set of possible SQL update queries that can be used to repair the data. The main technical challenge addressed in this paper is finding a set of SQL update queries that is minimal in size and at the same time fixes the largest number of errors in the data. We formalize this problem as a search in a lattice-shaped space. To guarantee that the chosen updates are semantically correct, Falcon navigates the lattice by interacting with users to gradually validate the set of SQL update queries. Besides using traditional one-hop traversal algorithms (e.g., BFS or DFS), we describe novel multi-hop search algorithms such that Falcon can dive over the lattice and conduct the search efficiently. Our novel search strategy is coupled with a number of optimization techniques to further prune the search space and efficiently maintain the lattice. We have conducted extensive experiments using both real-world and synthetic datasets to show that Falcon can effectively communicate with users in data repairing.

2017-12-28
Ouffoué, G., Ortiz, A. M., Cavalli, A. R., Mallouli, W., Domingo-Ferrer, J., Sánchez, D., Zaidi, F..  2016.  Intrusion Detection and Attack Tolerance for Cloud Environments: The CLARUS Approach. 2016 IEEE 36th International Conference on Distributed Computing Systems Workshops (ICDCSW). :61–66.

The cloud has become an established and widespread paradigm. This success is due to the flexibility and savings provided by this technology. However, the main obstacle to full cloud adoption is security. The cloud, like many other systems taking advantage of the Internet, also faces threats that compromise data confidentiality and availability. In addition, new cloud-specific attacks have emerged and current intrusion detection and prevention mechanisms are not enough to protect the complex infrastructure of the cloud from these vulnerabilities. Furthermore, one of the promises of the cloud is Quality of Service (QoS) through continuous delivery, which must be ensured even in the case of intrusion. This work presents an overview of the main cloud vulnerabilities, along with the solutions proposed in the context of the H2020 CLARUS project in terms of monitoring techniques for intrusion detection and prevention, including attack-tolerance mechanisms.

2017-05-19
Zhou, Mengyu, Sui, Kaixin, Ma, Minghua, Zhao, Youjian, Pei, Dan, Moscibroda, Thomas.  2016.  MobiCamp: A Campus-wide Testbed for Studying Mobile Physical Activities. Proceedings of the 3rd International on Workshop on Physical Analytics. :1–6.

Ubiquitous WiFi infrastructure and smart phones offer a great opportunity to study physical activities. In this paper, we present MobiCamp, a large-scale testbed for studying mobility-related activities of residents on a campus. MobiCamp consists of ~2,700 APs, ~95,000 smart phones, and an App with ~2,300 opt-in volunteer users. More specifically, we capture how mobile users interact with different types of buildings, with other users, and with classroom courses, etc. To achieve this goal, we first obtain relatively complete coverage of the users' mobility traces by utilizing four types of information from SNMP and by relaxing the location granularity to roughly room level. The popular App then provides user attributes (grade, gender, etc.) and fine-grained behavior information (phone usage, course timetables, etc.) of the sampled population. This detailed mobile data is then correlated with the mobility traces from SNMP to estimate the physical activities of the entire campus population. We use two applications to show the power of MobiCamp.

2017-03-07
Yashiro, Hisashi, Terai, Masaaki, Yoshida, Ryuji, Iga, Shin-ichi, Minami, Kazuo, Tomita, Hirofumi.  2016.  Performance Analysis and Optimization of Nonhydrostatic ICosahedral Atmospheric Model (NICAM) on the K Computer and TSUBAME2.5. Proceedings of the Platform for Advanced Scientific Computing Conference. :3:1–3:8.

We summarize the optimization and performance evaluation of the Nonhydrostatic ICosahedral Atmospheric Model (NICAM) on two different types of supercomputers: the K computer and TSUBAME2.5. First, we evaluated and improved several kernels extracted from the model code on the K computer. We did not need to significantly change the loop and data ordering, thanks to features of the K computer such as the hardware-aided thread barrier mechanism and the relatively high memory bandwidth, i.e., a 0.5 Byte/FLOP ratio. Loop optimizations and code cleaning for a reduction in memory transfer contributed to a speed-up of the model execution time. The sustained performance ratio of the main loop of NICAM reached 0.87 PFLOPS with 81,920 nodes on the K computer. For GPU-based calculations, we applied OpenACC to the dynamical core of NICAM. The performance and scalability were evaluated using the TSUBAME2.5 supercomputer. We achieved good performance results, which showed efficient use of the memory throughput of the GPU as well as good weak scalability. A dry dynamical core experiment was carried out using 2,560 GPUs, achieving 60 TFLOPS of sustained performance.

2017-05-22
Hay, Michael, Machanavajjhala, Ashwin, Miklau, Gerome, Chen, Yan, Zhang, Dan.  2016.  Principled Evaluation of Differentially Private Algorithms Using DPBench. Proceedings of the 2016 International Conference on Management of Data. :139–154.

Differential privacy has become the dominant standard in the research community for strong privacy protection. There has been a flood of research into query answering algorithms that meet this standard. Algorithms are becoming increasingly complex, and in particular, the performance of many emerging algorithms is data dependent, meaning the distribution of the noise added to query answers may change depending on the input data. Theoretical analysis typically only considers the worst case, making empirical study of average case performance increasingly important. In this paper we propose a set of evaluation principles which we argue are essential for sound evaluation. Based on these principles we propose DPBench, a novel evaluation framework for standardized evaluation of privacy algorithms. We then apply our benchmark to evaluate algorithms for answering 1- and 2-dimensional range queries. The result is a thorough empirical study of 15 published algorithms on a total of 27 datasets that offers new insights into algorithm behavior, in particular the influence of dataset scale and shape, and a more complete characterization of the state of the art. Our methodology is able to resolve inconsistencies in prior empirical studies and place algorithm performance in context through comparison to simple baselines. Finally, we pose open research questions which we hope will guide future algorithm design.
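
As context for the algorithms under evaluation, the simplest baseline in this space is the Laplace mechanism applied independently to each range query. A minimal sketch; the data and epsilon are illustrative:

import numpy as np

def noisy_range_count(data, lo, hi, epsilon, rng=None):
    # Epsilon-differentially private answer to a 1-D range count: a
    # count has sensitivity 1 (one record changes it by at most 1),
    # so Laplace noise with scale 1/epsilon suffices.
    rng = np.random.default_rng() if rng is None else rng
    true_count = np.sum((data >= lo) & (data < hi))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

ages = np.array([23, 31, 35, 44, 52, 58, 61, 67])
print(noisy_range_count(ages, 30, 50, epsilon=0.5))  # noisy answer near 3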

2017-03-07
Bortoli, Stefano, Bouquet, Paolo, Pompermaier, Flavio, Molinari, Andrea.  2016.  Semantic Big Data for Tax Assessment. Proceedings of the International Workshop on Semantic Big Data. :5:1–5:6.

Semantic Big Data is about the creation of new applications exploiting the richness and flexibility of declarative semantics combined with scalable and highly distributed data management systems. In this work, we present an application scenario in which a domain ontology, Open Refine and the Okkam Entity Name System enable a frictionless and scalable data integration process leading to a knowledge base for tax assessment. Further, we introduce the concept of Entiton as a flexible and efficient data model suitable for large scale data inference and analytic tasks. We successfully tested our data processing pipeline on a real world dataset, supporting ACI Informatica in the investigation for Vehicle Excise Duty (VED) evasion in Aosta Valley region (Italy). Besides useful business intelligence indicators, we implemented a distributed temporal inference engine to unveil VED evasion and circulation ban violations. The results of the integration are presented to the tax agents in a powerful Siren Solution KiBi dashboard, enabling seamless data exploration and business intelligence.

Jain, Shrainik, Moritz, Dominik, Halperin, Daniel, Howe, Bill, Lazowska, Ed.  2016.  SQLShare: Results from a Multi-Year SQL-as-a-Service Experiment. Proceedings of the 2016 International Conference on Management of Data. :281–293.

We analyze the workload from a multi-year deployment of a database-as-a-service platform targeting scientists and data scientists with minimal database experience. Our hypothesis was that relatively minor changes to the way databases are delivered can increase their use in ad hoc analysis environments. The web-based SQLShare system emphasizes easy dataset-at-a-time ingest, relaxed schemas and schema inference, easy view creation and sharing, and full SQL support. We find that these features have helped attract workloads typically associated with scripts and files rather than relational databases: complex analytics, routine processing pipelines, data publishing, and collaborative analysis. Quantitatively, these workloads are characterized by shorter dataset "lifetimes", higher query complexity, and higher data complexity. We report on usage scenarios that suggest SQL is being used in place of scripts for one-off data analysis and ad hoc data sharing. The workload suggests that a new class of relational systems emphasizing short-term, ad hoc analytics over engineered schemas may improve uptake of database technology in data science contexts. Our contributions include a system design for delivering databases into these contexts, a description of a public research query workload dataset released to advance research in analytic data systems, and an initial analysis of the workload that provides evidence of new use cases under-supported in existing systems.

2017-10-10
Mishra, Dharmendra Kumar, Vankar, Pranav, Tahiliani, Mohit P..  2016.  TCP Evaluation Suite for ns-3. Proceedings of the Workshop on ns-3. :25–32.

Congestion Control (CC) algorithms are essential to quickly restore network performance to a stable state whenever congestion occurs. A majority of the existing CC algorithms are implemented at the transport layer, mostly coupled with TCP. Over the past three decades, CC algorithms have incrementally evolved, resulting in many extensions of TCP. A thorough evaluation of a new TCP extension is a huge task. Hence, the Internet Congestion Control Research Group (ICCRG) has proposed a common TCP evaluation suite that helps researchers gain an initial insight into the working of their proposed TCP extension. This paper presents an implementation of the TCP evaluation suite in ns-3 that automates simulation setup, topology creation, traffic generation, execution, and results collection. We also describe the internals of our implementation and demonstrate its usage for evaluating the performance of five TCP extensions available in ns-3, by automatically setting up the following simulation scenarios: (i) single and multiple bottleneck topologies, (ii) varying bottleneck bandwidth, (iii) varying bottleneck RTT and (iv) varying the number of long flows.