Biblio

Found 934 results

Filters: Keyword is Servers
2015-05-05
Manning, F.J., Mitropoulos, F.J..  2014.  Utilizing Attack Graphs to Measure the Efficacy of Security Frameworks across Multiple Applications. System Sciences (HICSS), 2014 47th Hawaii International Conference on. :4915-4920.

One of the primary challenges when developing or implementing a security framework for any particular environment is determining the efficacy of the implementation. Does the implementation address all of the potential vulnerabilities in the environment, or are there still unaddressed issues? Further, if there is a choice between two frameworks, what objective measure can be used to compare the frameworks? To address these questions, we propose utilizing a technique of attack graph analysis to map the attack surface of the environment and identify the most likely avenues of attack. We show that with this technique we can quantify the baseline state of an application and compare that to the attack surface after implementation of a security framework, while simultaneously allowing for comparison between frameworks in the same environment or a single framework across multiple applications.
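As a rough illustration of the idea (not the authors' exact metric), the sketch below counts distinct attack paths from an external entry point to a target asset before and after a framework hypothetically removes one exploit edge; the graph, node names, and removed edge are all made up.

```python
def count_attack_paths(graph, entry, target):
    """Count distinct simple paths from an entry node to the target asset in a
    directed attack graph given as a dict: node -> list of reachable nodes."""
    count = 0
    stack = [(entry, {entry})]
    while stack:
        node, visited = stack.pop()
        if node == target:
            count += 1
            continue
        for nxt in graph.get(node, []):
            if nxt not in visited:
                stack.append((nxt, visited | {nxt}))
    return count

# Hypothetical attack graph: nodes are privilege states, edges are exploits.
baseline = {
    "internet": ["webapp", "mailserver"],
    "webapp": ["db", "appserver"],
    "mailserver": ["appserver"],
    "appserver": ["db"],
}
# The same application after a security framework (hypothetically) removes the
# direct webapp -> db exploit.
hardened = {k: [n for n in v if not (k == "webapp" and n == "db")]
            for k, v in baseline.items()}

before = count_attack_paths(baseline, "internet", "db")
after = count_attack_paths(hardened, "internet", "db")
print(f"attack paths before: {before}, after: {after}")
```

Path counting is only one simple surrogate for an attack-surface measure; the same before/after comparison works with likelihood-weighted or shortest-path metrics.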

Veugen, T., de Haan, R., Cramer, R., Muller, F..  2015.  A Framework for Secure Computations With Two Non-Colluding Servers and Multiple Clients, Applied to Recommendations. Information Forensics and Security, IEEE Transactions on. 10:445-457.

We provide a generic framework that, with the help of a preprocessing phase that is independent of the inputs of the users, allows an arbitrary number of users to securely outsource a computation to two non-colluding external servers. Our approach is shown to be provably secure in an adversarial model where one of the servers may arbitrarily deviate from the protocol specification, as well as employ an arbitrary number of dummy users. We use these techniques to implement a secure recommender system based on collaborative filtering that becomes more secure, and significantly more efficient than previously known implementations of such systems, when the preprocessing efforts are excluded. We suggest different alternatives for preprocessing, and discuss their merits and demerits.

Xinyi Huang, Yang Xiang, Bertino, E., Jianying Zhou, Li Xu.  2014.  Robust Multi-Factor Authentication for Fragile Communications. Dependable and Secure Computing, IEEE Transactions on. 11:568-581.

In large-scale systems, user authentication usually requires the assistance of a remote central authentication server via networks. The authentication service, however, could be slow or unavailable due to natural disasters or various cyber attacks on communication channels. This has raised serious concerns in systems which need robust authentication in emergency situations. The contribution of this paper is two-fold. In a slow connection situation, we present a secure generic multi-factor authentication protocol to speed up the whole authentication process. Compared with another generic protocol in the literature, the new proposal provides the same function with significant improvements in computation and communication. Another authentication mechanism, which we name stand-alone authentication, can authenticate users when the connection to the central server is down. We investigate several issues in stand-alone authentication and show how to add it to multi-factor authentication protocols in an efficient and generic way.

Fink, G.A., Griswold, R.L., Beech, Z.W..  2014.  Quantifying cyber-resilience against resource-exhaustion attacks. Resilient Control Systems (ISRCS), 2014 7th International Symposium on. :1-8.

Resilience in the information sciences is notoriously difficult to define, much less to measure. But in mechanical engineering, the resilience of a substance is mathematically well-defined as an area under the stress-strain curve. We combined inspiration from mechanics of materials and axioms from queuing theory in an attempt to define resilience precisely for information systems. We first examine the meaning of resilience in linguistic and engineering terms and then translate these definitions to information sciences. As a general assessment of our approach's fitness, we quantify how resilience may be measured in a simple queuing system. By using a very simple model we allow clear application of established theory while being flexible enough to apply to many other engineering contexts in information science and cyber security. We tested our definitions of resilience via simulation and analysis of networked queuing systems. We conclude with a discussion of the results and make recommendations for future work.
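By way of a worked example of the area-under-the-curve analogy (a minimal sketch, not the authors' model), the code below integrates a normalized goodput curve of a queueing system over a resource-exhaustion episode; the time series and values are invented.

```python
def trapezoid_area(xs, ys):
    """Numerically integrate y(x) with the trapezoid rule."""
    return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2.0
               for i in range(len(xs) - 1))

# Hypothetical measurements: time (s) and normalized goodput of a queueing system
# (1.0 = nominal service rate) during a resource-exhaustion episode from t=10 to t=40.
time_s  = [0, 10, 20, 30, 40, 50, 60]
goodput = [1.0, 1.0, 0.6, 0.4, 0.5, 0.9, 1.0]

resilience = trapezoid_area(time_s, goodput)
ideal = trapezoid_area(time_s, [1.0] * len(time_s))
print(f"resilience score: {resilience:.1f} of {ideal:.1f} "
      f"({100 * resilience / ideal:.0f}% of an undisturbed system)")
```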
 

Pal, S.K., Sardana, P., Sardana, A..  2014.  Efficient search on encrypted data using bloom filter. Computing for Sustainable Global Development (INDIACom), 2014 International Conference on. :412-416.

Efficient and secure search on encrypted data is an important problem in computer science. Users with large amounts of data or information spread across multiple documents face problems with its storage and security. Cloud services have also become popular due to the reduced cost of storage and flexibility of use, but there is a risk of data loss, misuse and theft. Reliability and security of data stored in the cloud is a matter of concern, specifically for critical applications and those for which security and privacy of the data are important. Cryptographic techniques provide solutions for preserving the confidentiality of data but make the data unusable for many applications. In this paper we report a novel approach to securely store data at a remote location and perform search in constant time without the need to decrypt documents. We use Bloom filters to perform simple as well as advanced search operations such as case-sensitive search, sentence search and approximate search.
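To make the Bloom filter idea concrete, here is a minimal sketch (a standard salted-hash Bloom filter, not the authors' implementation): one filter per encrypted document indexes its keywords, so a server can answer membership queries in constant time without decrypting anything. The sizes and keywords below are illustrative.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k salted SHA-256 hashes map a token to bit positions."""
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, token):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{token}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, token):
        for p in self._positions(token):
            self.bits[p // 8] |= 1 << (p % 8)

    def maybe_contains(self, token):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(token))

# One filter per encrypted document; only hashed keywords ever reach the server,
# so lookups run in constant time without decrypting the document.
doc_index = BloomFilter()
for word in ["cloud", "storage", "encryption"]:
    doc_index.add(word)

print(doc_index.maybe_contains("encryption"))  # True
print(doc_index.maybe_contains("malware"))     # False (up to the false-positive rate)
```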
 

Quan Jia, Huangxin Wang, Fleck, D., Fei Li, Stavrou, A., Powell, W..  2014.  Catch Me If You Can: A Cloud-Enabled DDoS Defense. Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on. :264-275.

We introduce a cloud-enabled defense mechanism for Internet services against network and computational Distributed Denial-of-Service (DDoS) attacks. Our approach performs selective server replication and intelligent client re-assignment, turning victim servers into moving targets for attack isolation. We introduce a novel system architecture that leverages a "shuffling" mechanism to compute the optimal re-assignment strategy for clients on attacked servers, effectively separating benign clients from even sophisticated adversaries that persistently follow the moving targets. We introduce a family of algorithms to optimize the runtime client-to-server re-assignment plans and minimize the number of shuffles to achieve attack mitigation. The proposed shuffling-based moving target mechanism enables effective attack containment using fewer resources than attack dilution strategies using pure server expansion. Our simulations and proof-of-concept prototype using Amazon EC2 [1] demonstrate that we can successfully mitigate large-scale DDoS attacks in a small number of shuffles, each of which incurs a few seconds of user-perceived latency.
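A toy simulation of the shuffling idea (not the paper's optimized re-assignment algorithms): clients on attacked replicas are re-assigned to fresh replicas each round, and because insiders keep revealing whichever replica they land on, benign clients are progressively separated from them. All population sizes below are arbitrary.

```python
import random

def shuffle_round(assignment, attackers, num_replicas):
    """One shuffle: clients on attacked replicas are re-assigned uniformly at
    random to freshly instantiated replicas; clients on clean replicas stay put."""
    attacked = {assignment[c] for c in attackers}
    next_id = max(assignment.values()) + 1
    fresh = list(range(next_id, next_id + num_replicas))
    for client, server in assignment.items():
        if server in attacked:
            assignment[client] = random.choice(fresh)
    return assignment

clients = list(range(200))
attackers = set(random.sample(clients, 5))            # insiders who expose their replica
assignment = {c: random.randrange(10) for c in clients}

benign = [c for c in clients if c not in attackers]
rounds = 0
while {assignment[c] for c in attackers} & {assignment[c] for c in benign}:
    assignment = shuffle_round(assignment, attackers, num_replicas=10)
    rounds += 1
print(f"benign clients separated from insiders after {rounds} shuffles")
```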
 

Thompson, M., Evans, N., Kisekka, V..  2014.  Multiple OS rotational environment an implemented Moving Target Defense. Resilient Control Systems (ISRCS), 2014 7th International Symposium on. :1-6.

Cyber-attacks continue to pose a major threat to existing critical infrastructure. Although suggestions for defensive strategies abound, Moving Target Defense (MTD) has only recently gained attention as a possible solution for mitigating cyber-attacks. The current work proposes an MTD technique that provides enhanced security through a rotation of multiple operating systems. The MTD solution developed in this research utilizes existing technology to provide a feasible dynamic defense solution that can be deployed easily in a real networking environment. In addition, the system we developed was tested extensively for effectiveness using CORE Impact Pro (CORE), Nmap, and manual penetration tests. The test results showed that platform diversity and rotation offer improved security. In addition, the likelihood of a successful attack decreased proportionally with time between rotations.
 

Kampanakis, P., Perros, H., Beyene, T..  2014.  SDN-based solutions for Moving Target Defense network protection. A World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2014 IEEE 15th International Symposium on. :1-6.

Software-Defined Networking (SDN) allows network capabilities and services to be managed through a central control point. Moving Target Defense (MTD) on the other hand, introduces a constantly adapting environment in order to delay or prevent attacks on a system. MTD is a use case where SDN can be leveraged in order to provide attack surface obfuscation. In this paper, we investigate how SDN can be used in some network-based MTD techniques. We first describe the advantages and disadvantages of these techniques, the potential countermeasures attackers could take to circumvent them, and the overhead of implementing MTD using SDN. Subsequently, we study the performance of the SDN-based MTD methods using Cisco's One Platform Kit and we show that they significantly increase the attacker's overheads.

Morrell, C., Ransbottom, J.S., Marchany, R., Tront, J.G..  2014.  Scaling IPv6 address bindings in support of a moving target defense. Internet Technology and Secured Transactions (ICITST), 2014 9th International Conference for. :440-445.

Moving target defense is an area of network security research in which machines are moved logically around a network in order to avoid detection. This is done by leveraging the immense size of the IPv6 address space and the statistical improbability of two machines selecting the same IPv6 address. This defensive technique forces a malicious actor to focus on the reconnaissance phase of their attack rather than focusing only on finding holes in a machine's static defenses. We have a current implementation of an IPv6 moving target defense entitled MT6D, which works well although it is limited to peer-to-peer scenarios. As we push our research forward into client-server networks, we must discover what the limits are in reference to the client-server ratio. In our current implementation of a simple UDP echo server that binds large numbers of IPv6 addresses to the Ethernet interface, we discover limits in both the number of addresses that we can successfully bind to an interface and the speed at which UDP requests can be successfully handled across a large number of bound interfaces.
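A minimal sketch of the kind of server being measured, assuming the IPv6 addresses have already been added to the interface (e.g. with `ip -6 addr add`); the documentation-prefix addresses and port below are placeholders, not the authors' test configuration.

```python
import selectors
import socket

# Hypothetical IPv6 addresses assumed to be configured on the interface already,
# e.g. via: ip -6 addr add 2001:db8::10/64 dev eth0
ADDRESSES = ["2001:db8::10", "2001:db8::11", "2001:db8::12"]
PORT = 9999

sel = selectors.DefaultSelector()
for addr in ADDRESSES:
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    sock.bind((addr, PORT))          # fails if the address is not bound to an interface
    sock.setblocking(False)
    sel.register(sock, selectors.EVENT_READ)

# Echo loop: one socket per bound address lets us observe how request handling
# scales as the number of bound addresses grows.
while True:
    for key, _ in sel.select():
        data, peer = key.fileobj.recvfrom(2048)
        key.fileobj.sendto(data, peer)
```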
 

Yue-Bin Luo, Bao-Sheng Wang, Gui-Lin Cai.  2014.  Effectiveness of Port Hopping as a Moving Target Defense. Security Technology (SecTech), 2014 7th International Conference on. :7-10.

Port hopping is a typical moving target defense, which constantly changes the service port number to thwart reconnaissance attacks. It is effective in hiding service identities and confusing potential attackers, but it is still unknown how effective port hopping is and under what circumstances it is a viable proactive defense, because existing works are limited and usually discuss only a few parameters and give some empirical studies. This paper introduces an urn model and quantifies the likelihood of attacker success in terms of the port pool size, number of probes, number of vulnerable services, and hopping frequency. Theoretical analysis shows that port hopping is an effective and promising proactive defense technology in thwarting network attacks.
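As a hedged illustration of the kind of quantity an urn model yields (the formulas below are standard sampling results, not necessarily the exact expressions derived in the paper): the probability that an attacker probing k ports out of a pool of N hits one of v vulnerable services, with and without hopping between probes.

```python
from math import comb

def p_hit_static(N, v, k):
    """Attacker probes k distinct ports out of N; the v vulnerable service ports
    never move (sampling without replacement)."""
    return 1 - comb(N - v, k) / comb(N, k)

def p_hit_hopping(N, v, k):
    """The service re-draws its port between probes, so each of the k probes
    independently hits a vulnerable port with probability v/N."""
    return 1 - (1 - v / N) ** k

N, v = 65536, 1          # port pool size, number of vulnerable services
for k in (100, 1000, 10000):
    print(f"k={k:5d}  static={p_hit_static(N, v, k):.4f}  "
          f"hopping={p_hit_hopping(N, v, k):.4f}")
```

For the same probing budget, the hopping probability stays below the static one, which is the effect the paper's analysis quantifies across the listed parameters.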
 

Shar, L., Briand, L., Tan, H..  2014.  Web Application Vulnerability Prediction using Hybrid Program Analysis and Machine Learning. Dependable and Secure Computing, IEEE Transactions on. PP:1-1.

Due to limited time and resources, web software engineers need support in identifying vulnerable code. A practical approach to predicting vulnerable code would enable them to prioritize security auditing efforts. In this paper, we propose using a set of hybrid (static+dynamic) code attributes that characterize input validation and input sanitization code patterns and are expected to be significant indicators of web application vulnerabilities. Because static and dynamic program analyses complement each other, both techniques are used to extract the proposed attributes in an accurate and scalable way. Current vulnerability prediction techniques rely on the availability of data labeled with vulnerability information for training. For many real world applications, past vulnerability data is often not available or at least not complete. Hence, to address both situations where labeled past data is fully available or not, we apply both supervised and semi-supervised learning when building vulnerability predictors based on hybrid code attributes. Given that semi-supervised learning is entirely unexplored in this domain, we describe how to use this learning scheme effectively for vulnerability prediction. We performed empirical case studies on seven open source projects where we built and evaluated supervised and semi-supervised models. When cross validated with fully available labeled data, the supervised models achieve an average of 77 percent recall and 5 percent probability of false alarm for predicting SQL injection, cross site scripting, remote code execution and file inclusion vulnerabilities. With a low amount of labeled data, when compared to the supervised model, the semi-supervised model showed an average improvement of 24 percent higher recall and 3 percent lower probability of false alarm, thus suggesting semi-supervised learning may be a preferable solution for many real world applications where vulnerability data is missing.
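A small sketch of the semi-supervised setting using scikit-learn's self-training wrapper (an assumption of convenience; the paper does not prescribe this library), with synthetic stand-ins for the hybrid code attributes and only a fraction of code fragments labeled.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
# Hypothetical attributes per code fragment, e.g. counts of untrusted inputs,
# sanitization calls, and sink statements extracted by static/dynamic analysis.
X = rng.integers(0, 10, size=(300, 3)).astype(float)
y_true = (X[:, 0] > X[:, 1] + 1).astype(int)      # toy rule: inputs outnumber sanitizers

# Only about 20% of fragments carry vulnerability labels; the rest get -1.
y = y_true.copy()
unlabeled = rng.random(len(y)) > 0.2
y[unlabeled] = -1

model = SelfTrainingClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(X, y)                                    # -1 marks unlabeled samples
pred = model.predict(X[unlabeled])
recall = (pred[y_true[unlabeled] == 1] == 1).mean()
print(f"recall on unlabeled fragments: {recall:.2f}")
```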
 

Blankstein, A., Freedman, M.J..  2014.  Automating Isolation and Least Privilege in Web Services. Security and Privacy (SP), 2014 IEEE Symposium on. :133-148.

In many client-facing applications, a vulnerability in any part can compromise the entire application. This paper describes the design and implementation of Passe, a system that protects a data store from unintended data leaks and unauthorized writes even in the face of application compromise. Passe automatically splits (previously shared-memory-space) applications into sandboxed processes. Passe limits communication between those components and the types of accesses each component can make to shared storage, such as a backend database. In order to limit components to their least privilege, Passe uses dynamic analysis on developer-supplied end-to-end test cases to learn data and control-flow relationships between database queries and previous query results, and it then strongly enforces those relationships. Our prototype of Passe acts as a drop-in replacement for the Django web framework. By running eleven unmodified, off-the-shelf applications in Passe, we demonstrate its ability to provide strong security guarantees (Passe correctly enforced 96% of the applications' policies) with little additional overhead. Additionally, in the web-specific setting of the prototype, we also mitigate the cross-component effects of cross-site scripting (XSS) attacks by combining browser HTML5 sandboxing techniques with our automatic component separation.

Sayed, B., Traore, I..  2014.  Protection against Web 2.0 Client-Side Web Attacks Using Information Flow Control. Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on. :261-268.

The dynamic nature of Web 2.0 and the heavy obfuscation of web-based attacks complicate the job of traditional protection systems such as firewalls, anti-virus solutions, and IDS systems. It has been witnessed that using ready-made toolkits, cyber-criminals can launch sophisticated attacks such as cross-site scripting (XSS), cross-site request forgery (CSRF) and botnets, to name a few. In recent years, cyber-criminals have targeted legitimate websites and social networks to inject malicious scripts that compromise the security of the visitors of such websites. This involves performing actions using the victim's browser without their permission. This highlights the need to develop effective mechanisms for protecting against Web 2.0 attacks that mainly target the end-user. In this paper, we address the above challenges from an information flow control perspective by developing a framework that restricts the flow of information on the client-side to legitimate channels. The proposed model tracks sensitive information flow and prevents information leakage from happening. The proposed model, when applied to the context of client-side web-based attacks, is expected to provide a more secure browsing environment for the end-user.

Gupta, M.K., Govil, M.C., Singh, G..  2014.  A context-sensitive approach for precise detection of cross-site scripting vulnerabilities. Innovations in Information Technology (INNOVATIONS), 2014 10th International Conference on. :7-12.

Currently, dependence on web applications is increasing rapidly for social communication, health services, financial transactions and many other purposes. Unfortunately, the presence of cross-site scripting vulnerabilities in these applications allows a malicious user to steal sensitive information, install malware, and perform various malicious operations. Researchers have proposed various approaches and developed tools to detect XSS vulnerabilities in the source code of web applications. However, existing approaches and tools are not free from false positive and false negative results. In this paper, we propose an HTML context-sensitive approach based on taint analysis and defensive programming for precise detection of XSS vulnerabilities in the source code of PHP web applications. It also provides automatic suggestions to improve the vulnerable source code. Preliminary experiments and results on test subjects show that the proposed approach is more efficient than existing ones.
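To ground the source-to-sink intuition only (this crude line-level pattern match is far from the paper's context-sensitive taint analysis), the sketch below flags echo/print statements whose arguments contain an untrusted superglobal and no obvious sanitizer; the PHP snippet is a made-up example.

```python
import re

# Idea being illustrated: an XSS warning is raised when an untrusted source
# ($_GET/$_POST/...) flows into an output sink (echo/print) without passing
# through a sanitizer such as htmlspecialchars.
SOURCE = r"\$_(GET|POST|COOKIE|REQUEST)\b"
SINK = re.compile(r"\b(echo|print)\b(?P<args>.*);")
SANITIZERS = ("htmlspecialchars", "htmlentities")

def scan_php(source_code):
    findings = []
    for lineno, line in enumerate(source_code.splitlines(), 1):
        sink = SINK.search(line)
        if sink and re.search(SOURCE, sink.group("args")):
            if not any(s in sink.group("args") for s in SANITIZERS):
                findings.append((lineno, line.strip()))
    return findings

php = '''
<?php
echo "<p>Hello " . htmlspecialchars($_GET["name"]) . "</p>";
echo "<p>Query: " . $_GET["q"] . "</p>";
?>
'''
for lineno, line in scan_php(php):
    print(f"possible XSS at line {lineno}: {line}")
```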

Gupta, M.K., Govil, M.C., Singh, G..  2014.  Static analysis approaches to detect SQL injection and cross site scripting vulnerabilities in web applications: A survey. Recent Advances and Innovations in Engineering (ICRAIE), 2014. :1-5.

Dependence on web applications has been increasing very rapidly in recent times for social communication, health services, financial transactions and many other purposes. Unfortunately, the presence of security weaknesses in web applications allows malicious users to exploit various security vulnerabilities and becomes the reason for their failure. Currently, SQL Injection (SQLI) and Cross-Site Scripting (XSS) vulnerabilities are the most dangerous security vulnerabilities exploited in various popular web applications, e.g. eBay, Google, Facebook, Twitter etc. Research on defensive programming, vulnerability detection and attack prevention techniques has been quite intensive in the past decade. Defensive programming is a set of coding guidelines to develop secure applications, but most developers do not follow security guidelines and repeat the same types of programming mistakes in their code. Attack prevention techniques protect applications from attack during their execution in the actual environment. Accurate detection of SQLI and XSS vulnerabilities in the coding phase of the software development life cycle remains difficult. This paper proposes a classification of software security approaches used to develop secure software in the various phases of the software development life cycle. It also presents a survey of static analysis based approaches for detecting SQL injection and cross-site scripting vulnerabilities in the source code of web applications. The aim of these approaches is to identify weaknesses in source code before their exploitation in the actual environment. This paper would help researchers to identify future directions for securing legacy web applications in the early phases of the software development life cycle.

Crisan, D., Birke, R., Barabash, K., Cohen, R., Gusat, M..  2014.  Datacenter Applications in Virtualized Networks: A Cross-Layer Performance Study. Selected Areas in Communications, IEEE Journal on. 32:77-87.

Datacenter-based Cloud computing has induced new disruptive trends in networking, key among which is network virtualization. Software-Defined Networking overlays aim to improve the efficiency of the next generation multitenant datacenters. While early overlay prototypes are already available, they focus mainly on core functionality, with little being known yet about their impact on the system level performance. Using query completion time as our primary performance metric, we evaluate the overlay network impact on two representative datacenter workloads, Partition/Aggregate and 3-Tier. We measure how much performance is traded for overlay's benefits in manageability, security and policing. Finally, we aim to assist the datacenter architects by providing a detailed evaluation of the key overlay choices, all made possible by our accurate cross-layer hybrid/mesoscale simulation platform.
 

Riggio, R., De Pellegrini, F., Siracusa, D..  2014.  The price of virtualization: Performance isolation in multi-tenants networks. Network Operations and Management Symposium (NOMS), 2014 IEEE. :1-7.

Network virtualization sits firmly on the Internet evolutionary path, allowing researchers to experiment with novel clean-slate designs over the production network and practitioners to manage multi-tenant infrastructures in a flexible and scalable manner. In such scenarios, isolation between virtual networks is often intended as purely logical: this is the case of address space isolation or flow space isolation. This approach neglects the effect that network virtualization has on resource allocation network-wide. In this work we investigate the price paid by a purely logical approach in terms of performance degradation. This performance loss is paid by the actual users of a multi-tenant datacenter network. We propose a solution to this problem leveraging a new network virtualization primitive, namely an online link utilization feedback mechanism. It provides each tenant with the necessary information to make efficient use of network resources. We evaluate our solution through a real implementation exploiting the OpenFlow protocol. Empirical results confirm that the proposed scheme is able to support tenants in exploiting virtualized network resources effectively.
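A minimal sketch of the arithmetic behind such a feedback primitive, assuming a controller periodically polls byte counters per virtual link; the link names, capacity and counter samples below are invented.

```python
LINK_CAPACITY_BPS = 10e9  # assumed 10 Gb/s links

def utilization(prev_bytes, curr_bytes, interval_s, capacity_bps=LINK_CAPACITY_BPS):
    """Fraction of link capacity used between two byte-counter samples, as a
    controller could compute after polling port statistics."""
    bits = (curr_bytes - prev_bytes) * 8
    return min(bits / (interval_s * capacity_bps), 1.0)

# Hypothetical counter samples per virtual link, taken 5 seconds apart.
samples = {
    "tenantA-vlink1": (1_200_000_000, 4_300_000_000),
    "tenantB-vlink1": (9_000_000_000, 9_050_000_000),
}
for link, (prev, curr) in samples.items():
    u = utilization(prev, curr, interval_s=5.0)
    print(f"{link}: {u:.0%} utilized")   # feedback a tenant could use to steer traffic
```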
 

Bronzino, F., Chao Han, Yang Chen, Nagaraja, K., Xiaowei Yang, Seskar, I., Raychaudhuri, D..  2014.  In-Network Compute Extensions for Rate-Adaptive Content Delivery in Mobile Networks. Network Protocols (ICNP), 2014 IEEE 22nd International Conference on. :511-517.

Traffic from mobile wireless networks has been growing at a fast pace in recent years and is expected to surpass wired traffic very soon. Service providers face significant challenges at such scales, including providing seamless mobility, efficient data delivery, security, and provisioning capacity at the wireless edge. In the Mobility First project, we have been exploring clean slate enhancements to the network protocols that can inherently provide support for at-scale mobility and trustworthiness in the Internet. An extensible data plane using pluggable compute-layer services is a key component of this architecture. We believe these extensions can be used to implement in-network services to enhance mobile end-user experience by either off-loading work and/or traffic from mobile devices, or by enabling en-route service adaptation through context-awareness (e.g., knowing contemporary access bandwidth). In this work we present details of the architectural support for in-network services within Mobility First, and propose protocol and service-API extensions to flexibly address these pluggable services from end-points. As a demonstrative example, we implement an in-network service that performs rate adaptation when delivering video streams to mobile devices that experience variable connection quality. We present details of our deployment and evaluation of the non-IP protocols along with compute-layer extensions on the GENI test bed, where we used a set of programmable nodes across 7 distributed sites to configure a Mobility First network with hosts, routers, and in-network compute services.
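As a toy illustration of en-route rate adaptation (not the Mobility First service itself), a compute-layer node could pick the highest encoding that fits the access bandwidth reported for a client, with some headroom; the bitrate ladder and bandwidth samples below are hypothetical.

```python
LADDER_KBPS = [250, 500, 1000, 2500, 5000]    # hypothetical set of encoded bitrates

def select_bitrate(access_kbps, headroom=0.8):
    """Pick the highest encoding that fits within the reported access bandwidth,
    keeping some headroom for short-term variation."""
    fitting = [r for r in LADDER_KBPS if r <= access_kbps * headroom]
    return fitting[-1] if fitting else LADDER_KBPS[0]

for bw in (400, 1800, 6000):                  # bandwidth hints from the network
    print(f"reported {bw} kb/s -> stream at {select_bitrate(bw)} kb/s")
```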

Sourlas, V., Tassiulas, L..  2014.  Replication management and cache-aware routing in information-centric networks. Network Operations and Management Symposium (NOMS), 2014 IEEE. :1-7.

Content distribution in the Internet places content providers in a dominant position, with delivery happening directly between two end-points, that is, from content providers to consumers. Information-Centrism has been proposed as a paradigm shift from the host-to-host Internet to a host-to-content one, or in other words from an end-to-end communication system to a native distribution network. This trend has attracted the attention of the research community, which has argued that content, instead of end-points, must be at the center stage of attention. Given this emergence of information-centric solutions, the relevant management needs in terms of performance have not been adequately addressed, yet they are absolutely essential for relevant network operations and crucial for the information-centric approaches to succeed. Performance management and traffic engineering approaches are also required to control routing, to configure the logic for replacement policies in caches and to control decisions where to cache, for instance. Therefore, there is an urgent need to manage information-centric resources and in fact to constitute their missing management and control plane which is essential for their success as clean-slate technologies. In this thesis we aim to provide solutions to crucial problems that remain, such as the management of information-centric approaches which has not yet been addressed, focusing on the key aspect of route and cache management.
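One concrete example of the cache logic such a management plane would configure is a per-router content store with an LRU replacement policy; a minimal, illustrative sketch follows (content names and sizes are made up).

```python
from collections import OrderedDict

class LRUContentStore:
    """Toy in-network content store with an LRU replacement policy, the kind of
    per-router logic a management plane could configure in an ICN deployment."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()          # content name -> object

    def get(self, name):
        if name in self.store:
            self.store.move_to_end(name)    # mark as recently used
            return self.store[name]
        return None                         # cache miss: forward toward a replica

    def put(self, name, obj):
        self.store[name] = obj
        self.store.move_to_end(name)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used item

cs = LRUContentStore(capacity=2)
cs.put("/videos/a", b"...")
cs.put("/videos/b", b"...")
cs.get("/videos/a")
cs.put("/videos/c", b"...")                 # evicts /videos/b
print(list(cs.store))                       # ['/videos/a', '/videos/c']
```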
 

Rashad Al-Dhaqm, A.M., Othman, S.H., Abd Razak, S., Ngadi, A..  2014.  Towards adapting metamodelling technique for database forensics investigation domain. Biometrics and Security Technologies (ISBAST), 2014 International Symposium on. :322-327.

Threats which come from database insiders or database outsiders pose a major challenge to the protection of integrity and confidentiality in many database systems. To overcome this situation a new domain called Database Forensics (DBF) has been introduced to specifically investigate these dynamic threats, which have posed many problems in the Database Management Systems (DBMS) of many organizations. DBF is a process to identify, collect, preserve, analyse, reconstruct and document all digital evidence caused by this challenge. However, to date this domain still lacks a standard and generic knowledge base for its forensic investigation methods and tools, due to many issues and challenges in its complex processes. Therefore, this paper presents an approach adapted from the software engineering domain called metamodelling, which unifies these complex DBF knowledge processes into an artifact, a metamodel (DBF Metamodel). In future, the DBF Metamodel could benefit many DBF investigation users such as database investigators, stakeholders, and other forensic teams by offering various possible solutions for their problem domain.
 

Silva Ferraz, F., Guimaraes Ferraz, C.A..  2014.  Smart City Security Issues: Depicting Information Security Issues in the Role of an Urban Environment. Utility and Cloud Computing (UCC), 2014 IEEE/ACM 7th International Conference on. :842-847.

For the first time in the history of humanity, more than half of the population is now living in big cities. This scenario has raised concerns about the systems that provide basic services to citizens. Moreover, those systems now have the responsibility to empower citizens with information and values that may aid people in daily decisions, such as those related to education, transport, health and others. This environment creates a set of services that, interconnected, can develop a brand new range of solutions referred to as a System of Systems. In this context, focusing on a smart city, new challenges related to information security arise; these concerns may go beyond privacy issues, covering situations where the entire environment could be affected by issues other than merely breaking the confidentiality of data. This paper intends to discuss and propose nine security issues that can be part of a smart city environment, exploring more than just violations of citizens' privacy.
 

Chandrasekaran, S., Nandita, S., Nikhil Arvind, R..  2014.  Social network security management model using Unified Communications as a Service. Computer Applications and Information Systems (WCCAIS), 2014 World Congress on. :1-5.

The objective of the paper is to propose a social network security management model for a multi-tenancy SaaS application using a Unified Communications as a Service (UCaaS) approach. Earlier security management models do not cover the issues that arise when data is inadvertently exposed to other users due to poor implementation of the access management processes. When a single virtual machine moves or dissolves in the network, many separate machines may bypass the security conditions that had been implemented for its neighbors, which leads to vulnerability of the hosted services. When the services are multi-tenant, the issue becomes very critical due to the lack of asynchronous asymmetric communications between virtual machines as more applications and users are added to the network, creating big data and identity issues. The TRAIN model for security management using the PC-FAST algorithm is proposed in order to detect and identify communication errors between the hosted services.
 

Kan Yang, Xiaohua Jia, Kui Ren, Ruitao Xie, Liusheng Huang.  2014.  Enabling efficient access control with dynamic policy updating for big data in the cloud. INFOCOM, 2014 Proceedings IEEE. :2013-2021.

Due to the high volume and velocity of big data, it is an effective option to store big data in the cloud, because the cloud has capabilities of storing big data and processing high volumes of user access requests. Attribute-Based Encryption (ABE) is a promising technique to ensure the end-to-end security of big data in the cloud. However, policy updating has always been a challenging issue when ABE is used to construct access control schemes. A trivial implementation is to let data owners retrieve the data and re-encrypt it under the new access policy, and then send it back to the cloud. This method incurs a high communication overhead and a heavy computation burden on data owners. In this paper, we propose a novel scheme that enables efficient access control with dynamic policy updating for big data in the cloud. We focus on developing an outsourced policy updating method for ABE systems. Our method can avoid the transmission of encrypted data and minimize the computation work of data owners, by making use of the previously encrypted data with old access policies. Moreover, we also design policy updating algorithms for different types of access policies. The analysis shows that our scheme is correct, complete, secure and efficient.
 

Peng Li, Song Guo.  2014.  Load balancing for privacy-preserving access to big data in cloud. Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on. :524-528.

In the era of big data, many users and companies have started to move their data to cloud storage to simplify data management and reduce data maintenance costs. However, security and privacy issues become major concerns because third-party cloud service providers are not always trustworthy. Although data contents can be protected by encryption, the access patterns that contain important information are still exposed to clouds or malicious attackers. In this paper, we apply the ORAM algorithm to enable privacy-preserving access to big data that are deployed in distributed file systems built upon hundreds or thousands of servers in a single or multiple geo-distributed cloud sites. Since the ORAM algorithm would lead to serious access load imbalance among storage servers, we study a data placement problem to achieve a load balanced storage system with improved availability and responsiveness. Due to the NP-hardness of this problem, we propose a low-complexity algorithm that can deal with large-scale problem sizes with respect to big data. Extensive simulations are conducted to show that our proposed algorithm finds results close to the optimal solution, and significantly outperforms a random data placement algorithm.
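To illustrate the flavour of a low-complexity placement heuristic (a generic longest-processing-time greedy, not the authors' algorithm), the sketch below spreads blocks with known access rates across servers so as to keep the maximum load small; the rates and server count are invented.

```python
import heapq

def greedy_placement(block_loads, num_servers):
    """Longest-processing-time heuristic: place each data block on the currently
    least-loaded server, heaviest blocks first."""
    servers = [(0.0, s, []) for s in range(num_servers)]   # (load, id, blocks)
    heapq.heapify(servers)
    for block, load in sorted(block_loads.items(), key=lambda kv: -kv[1]):
        total, sid, blocks = heapq.heappop(servers)
        blocks.append(block)
        heapq.heappush(servers, (total + load, sid, blocks))
    return sorted(servers, key=lambda t: t[1])

# Hypothetical per-block access rates (requests/s) after ORAM amplification.
loads = {"b0": 120, "b1": 95, "b2": 300, "b3": 40, "b4": 210, "b5": 60}
for load, sid, blocks in greedy_placement(loads, num_servers=3):
    print(f"server {sid}: load={load} blocks={blocks}")
```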
 

Yanfei Guo, Lama, P., Changjun Jiang, Xiaobo Zhou.  2014.  Automated and Agile Server Parameter Tuning by Coordinated Learning and Control. Parallel and Distributed Systems, IEEE Transactions on. 25:876-886.

Automated server parameter tuning is crucial to performance and availability of Internet applications hosted in cloud environments. It is challenging due to high dynamics and burstiness of workloads, multi-tier service architecture, and virtualized server infrastructure. In this paper, we investigate automated and agile server parameter tuning for maximizing effective throughput of multi-tier Internet applications. A recent study proposed a reinforcement learning based server parameter tuning approach for minimizing average response time of multi-tier applications. Reinforcement learning is a decision making process determining the parameter tuning direction based on trial-and-error, instead of quantitative values for agile parameter tuning. It relies on a predefined adjustment value for each tuning action. However it is nontrivial or even infeasible to find an optimal value under highly dynamic and bursty workloads. We design a neural fuzzy control based approach that combines the strengths of fast online learning and self-adaptiveness of neural networks and fuzzy control. Due to the model independence, it is robust to highly dynamic and bursty workloads. It is agile in server parameter tuning due to its quantitative control outputs. We implemented the new approach on a testbed of virtualized data center hosting RUBiS and WikiBench benchmark applications. Experimental results demonstrate that the new approach significantly outperforms the reinforcement learning based approach for both improving effective system throughput and minimizing average response time.
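The contrast between fixed-step trial-and-error and quantitative control can be illustrated with a deliberately simplified stand-in (plain proportional scaling, not the paper's neural fuzzy controller); the parameter name, gain and throughput figures are made up.

```python
def tune_step(current_value, measured_throughput, target_throughput,
              gain=0.5, lower=16, upper=1024):
    """Return the next value of a server parameter (e.g. a worker-pool size),
    scaled by the relative throughput error rather than a fixed +/- step."""
    error = (target_throughput - measured_throughput) / target_throughput
    next_value = current_value * (1 + gain * error)     # quantitative output
    return int(min(max(next_value, lower), upper))

max_clients = 128                                       # hypothetical starting point
for throughput in (620, 700, 880, 950):                 # hypothetical req/s samples
    max_clients = tune_step(max_clients, throughput, target_throughput=1000)
    print(f"observed {throughput} req/s -> parameter set to {max_clients}")
```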