Biblio

Filters: Keyword is computer centres
2021-02-16
Mujib, M., Sari, R. F..  2020.  Performance Evaluation of Data Center Network with Network Micro-segmentation. 2020 12th International Conference on Information Technology and Electrical Engineering (ICITEE). :27—32.

Research on the design of data center infrastructure is increasing, in both academia and industry, due to the rapid development of cloud-based applications such as search engines, social networks, and large-scale computing. On a large scale, data centers can consist of hundreds to thousands of servers that require systems with high performance and low downtime. To meet the needs of a dynamic data center, the infrastructure of applications and services keeps growing, and the network topology must be designed so that it can guarantee availability and security. One way to achieve this is by implementing the zero trust security model based on micro-segmentation. Zero trust is a security idea based on the principle of "never trust, always verify", in which there is no implicit notion of trusted versus untrusted network traffic: all traffic is treated as untrusted. Micro-segmentation is a way to achieve zero trust by dividing a network into smaller logical segments to restrict traffic. In this research, the performance of a data center network based on software-defined networking with a zero trust security model using micro-segmentation was evaluated on a testbed simulation of Cisco Application Centric Infrastructure by measuring round trip time, jitter, and packet loss during experiments. The performance evaluation shows that micro-segmentation adds an average round trip time of 4 μs and jitter of 11 μs without packet loss, so security can be improved without significantly affecting network performance in the data center.
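As a rough illustration of how the reported metrics can be derived from raw probe measurements (the paper's testbed tooling is not described here, so the sample values and the definition of jitter as the mean absolute difference between consecutive round trips are assumptions), a minimal sketch:

```java
import java.util.List;

/** Toy computation of average RTT and jitter from probe samples (values are made up). */
public class RttJitter {
    public static void main(String[] args) {
        // Hypothetical round-trip times in microseconds for the same path,
        // measured without and with micro-segmentation enabled.
        List<Double> baseline = List.of(118.0, 121.0, 117.0, 125.0, 119.0);
        List<Double> microseg = List.of(122.0, 126.0, 120.0, 130.0, 123.0);

        System.out.printf("baseline: avg=%.1f us, jitter=%.1f us%n", mean(baseline), jitter(baseline));
        System.out.printf("microseg: avg=%.1f us, jitter=%.1f us%n", mean(microseg), jitter(microseg));
        System.out.printf("added RTT per probe: %.1f us%n", mean(microseg) - mean(baseline));
    }

    static double mean(List<Double> xs) {
        return xs.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
    }

    /** Jitter taken as the mean absolute difference between consecutive RTT samples. */
    static double jitter(List<Double> xs) {
        double sum = 0.0;
        for (int i = 1; i < xs.size(); i++) sum += Math.abs(xs.get(i) - xs.get(i - 1));
        return xs.size() > 1 ? sum / (xs.size() - 1) : 0.0;
    }
}
```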

2020-12-11
Sabek, I., Chandramouli, B., Minhas, U. F..  2019.  CRA: Enabling Data-Intensive Applications in Containerized Environments. 2019 IEEE 35th International Conference on Data Engineering (ICDE). :1762—1765.
Today, a modern data center hosts a wide variety of applications comprising batch, interactive, machine learning, and streaming applications. In this paper, we factor out the commonalities in a large majority of these applications into a generic dataflow layer called Common Runtime for Applications (CRA). In parallel, containerization technologies (e.g., Docker) have taken serious hold in cloud-scale data centers, with direct implications for building the next generation of data center applications. Container orchestrators (e.g., Kubernetes) have made deployment much easier, and they solve many infrastructure-level problems, e.g., service discovery, auto-restart, and replication. For best-in-class performance, there is a need to marry the next generation of applications with containerization technologies. To that end, CRA leverages and builds upon the containerization and resource orchestration capabilities of Kubernetes/Docker, and makes it easy to build a wide range of cloud-edge applications on top. To the best of our knowledge, we are the first to present a cloud-native runtime for building data center applications. We show the efficiency of CRA through various micro-benchmarking experiments.
2020-12-02
Islam, S., Welzl, M., Gjessing, S..  2019.  How to Control a TCP: Minimally-Invasive Congestion Management for Datacenters. 2019 International Conference on Computing, Networking and Communications (ICNC). :121—125.

In multi-tenant datacenters, the hardware may be homogeneous but the traffic often is not. For instance, customers who pay an equal amount of money can get an unequal share of the bottleneck capacity when they do not open the same number of TCP connections. To address this problem, several recent proposals try to manipulate the traffic that TCP sends from the VMs. VCC and AC/DC are two new mechanisms that let the hypervisor control traffic by influencing the TCP receiver window (rwnd). This avoids changing the guest OS, but has limitations (it is not possible to make TCP increase its rate faster than it normally would). Seawall, on the other hand, completely rewrites TCP's congestion control, achieving fairness but requiring significant changes to both the hypervisor and the guest OS. There seems to be a need for a middle ground: a method to control TCP's sending rate without requiring a complete redesign of its congestion control. We introduce a minimally-invasive solution that is flexible enough to cater for needs ranging from weighted fairness in multi-tenant datacenters to potentially offering Internet-wide benefits from reduced interflow competition.
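The rwnd knob that VCC and AC/DC rely on reduces to simple arithmetic: a TCP sender's rate is roughly bounded by window / RTT, so a hypervisor that rewrites the advertised receive window can cap, but never raise, a tenant's rate. This is not code from any of the cited systems, just a back-of-the-envelope sketch with assumed numbers:

```java
/** Sketch of the receive-window knob: to cap a flow at targetRate, advertise rwnd ~= targetRate * RTT. */
public class RwndClamp {
    public static void main(String[] args) {
        double targetRateMbps = 500.0;   // hypothetical per-tenant share of the bottleneck
        double rttSeconds = 0.0002;      // 200 us datacenter RTT (assumed)

        double targetRateBytesPerSec = targetRateMbps * 1e6 / 8.0;
        long rwndBytes = (long) Math.ceil(targetRateBytesPerSec * rttSeconds);

        System.out.printf("advertise rwnd of ~%d bytes to cap the flow near %.0f Mbps%n",
                rwndBytes, targetRateMbps);
        // Limitation noted in the abstract: rwnd can only slow a sender down;
        // it cannot make TCP grow its rate faster than its own congestion control allows.
    }
}
```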

2020-12-01
Li, W., Guo, D., Li, K., Qi, H., Zhang, J..  2018.  iDaaS: Inter-Datacenter Network as a Service. IEEE Transactions on Parallel and Distributed Systems. 29:1515—1529.

An increasing number of Internet-scale applications, such as video streaming, incur huge amounts of wide-area traffic. Such traffic over the unreliable Internet, without any bandwidth guarantee, suffers unpredictable network performance, which is unappealing to application providers. Fortunately, Internet giants like Google and Microsoft are increasingly deploying their private wide area networks (WANs) to connect their global datacenters. Such high-speed private WANs are reliable, and can provide predictable network performance. In this paper, we propose a new type of service, inter-datacenter network as a service (iDaaS), where traditional application providers can reserve bandwidth from those Internet giants to guarantee their wide-area traffic. Specifically, we design a bandwidth trading market among multiple iDaaS providers and application providers, and concentrate on the essential bandwidth pricing problem. The key challenge is that the bandwidth price of each iDaaS provider is influenced not only by other iDaaS providers but also by the application providers. To address this issue, we characterize the interaction between iDaaS providers and application providers using a Stackelberg game model, and analyze the existence and uniqueness of the equilibrium. We further present an efficient bandwidth pricing algorithm by blending the advantages of a geometrical Nash bargaining solution and the demand segmentation method. For comparison, we present two bandwidth reservation algorithms, in which each iDaaS provider's bandwidth is reserved in a weighted fair manner and a max-min fair manner, respectively. Finally, we conduct comprehensive trace-driven experiments. The evaluation results show that our proposed algorithms not only ensure the revenue of iDaaS providers, but also provide bandwidth guarantees for application providers at a lower bandwidth price per unit.
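The abstract does not reproduce the game formulation; as a hedged sketch of the leader-follower structure it describes (all symbols are our own notation, and the revenue and utility forms are assumptions, not the paper's model):

\[
\begin{aligned}
\text{iDaaS provider } i \text{ (leader):}\quad & \max_{p_i \ge 0}\;\; p_i \sum_{j} d_{ij}^{*}(p_1,\dots,p_N),\\
\text{application provider } j \text{ (follower):}\quad & \{d_{ij}^{*}\}_{i} \;=\; \arg\max_{d_{ij} \ge 0}\;\; U_j\!\Bigl(\sum_{i} d_{ij}\Bigr) \;-\; \sum_{i} p_i\, d_{ij},
\end{aligned}
\]

where \(p_i\) is provider \(i\)'s bandwidth price, \(d_{ij}\) the bandwidth application provider \(j\) reserves from provider \(i\), and \(U_j\) a concave utility. A Stackelberg equilibrium is a price vector from which no leader can profitably deviate, given the followers' best responses.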

Zhang, Y., Deng, L., Chen, M., Wang, P..  2018.  Joint Bidding and Geographical Load Balancing for Datacenters: Is Uncertainty a Blessing or a Curse? IEEE/ACM Transactions on Networking. 26:1049—1062.

We consider the scenario where a cloud service provider (CSP) operates multiple geo-distributed datacenters to provide Internet-scale service. Our objective is to minimize the total electricity and bandwidth cost by jointly optimizing electricity procurement from wholesale markets and geographical load balancing (GLB), i.e., dynamically routing workloads to locations with cheaper electricity. Under the ideal setting where exact values of market prices and workloads are given, this problem reduces to a simple linear program and is easy to solve. However, under the realistic setting where only distributions of these variables are available, the problem unfolds into a non-convex infinite-dimensional one and is challenging to solve. One of our main contributions is to develop an algorithm that is proven to solve this challenging problem optimally, by exploring the full design space of strategic bidding. Trace-driven evaluations corroborate our theoretical results, demonstrate fast convergence of our algorithm, and show that it can reduce the cost for the CSP by up to 20% compared with baseline alternatives. This paper highlights the intriguing role of uncertainty in workloads and market prices, measured by their variances. While uncertainty in workloads deteriorates the cost-saving performance of joint electricity procurement and GLB, counter-intuitively, uncertainty in market prices can be exploited to achieve a cost reduction even larger than in the setting without price uncertainty.
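In the ideal setting mentioned above (exact market prices and workloads given), the joint procurement and GLB problem can be written as a linear program along the following lines; the notation is ours, not the paper's:

\[
\min_{e_i,\; x_{ij} \,\ge\, 0}\;\; \sum_i p_i\, e_i \;+\; \sum_{i,j} b_{ij}\, x_{ij}
\quad\text{s.t.}\quad
\sum_i x_{ij} = \lambda_j \;\;\forall j,\qquad
\sum_j x_{ij} \le C_i \;\;\forall i,\qquad
e_i \ge \alpha_i \sum_j x_{ij} \;\;\forall i,
\]

where \(p_i\) is the electricity price at datacenter \(i\), \(e_i\) the electricity procured there, \(x_{ij}\) the workload from region \(j\) routed to datacenter \(i\), \(b_{ij}\) the per-unit bandwidth cost, \(\lambda_j\) the workload originating in region \(j\), \(C_i\) the capacity, and \(\alpha_i\) the energy needed per unit of workload. Once prices and workloads are only known through distributions, these parameters become random and the joint bidding problem loses this simple form.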

2020-11-30
Cheng, D., Zhou, X., Ding, Z., Wang, Y., Ji, M..  2019.  Heterogeneity Aware Workload Management in Distributed Sustainable Datacenters. IEEE Transactions on Parallel and Distributed Systems. 30:375–387.
The tremendous growth of cloud computing and large-scale data analytics highlights the importance of reducing datacenter power consumption and the environmental impact of brown energy. While many Internet service operators have at least partially powered their datacenters by green energy, it is challenging to utilize green energy effectively due to the intermittency of renewable sources such as solar or wind. We find that the workloads of geographically distributed Internet-scale services can be carefully scheduled to improve the efficiency of applying green energy in datacenters. In this paper, we propose a holistic heterogeneity-aware cloud workload management approach, sCloud, that aims to maximize the system goodput in distributed self-sustainable datacenters. sCloud adaptively places the transactional workload to distributed datacenters, allocates the available resources to heterogeneous workloads in each datacenter, and migrates batch jobs across datacenters, while taking into account green power availability and QoS requirements. We formulate the transactional workload placement as a constrained optimization problem that can be solved by nonlinear programming. Then, we propose a batch job migration algorithm to further improve the system goodput when the green power supply varies widely at different locations. Finally, we extend sCloud by integrating a flexible batch job manager to dynamically control the job execution progress without violating the deadlines. We have implemented sCloud in a university cloud testbed with real-world weather conditions and workload traces. Experimental results demonstrate that sCloud can achieve near-to-optimal system performance while being resilient to dynamic power availability. sCloud with the flexible batch job management approach outperforms a heterogeneity-oblivious approach by 37 percent in improving system goodput and 33 percent in reducing QoS violations.
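The abstract states that transactional workload placement is formulated as a constrained optimization problem solvable by nonlinear programming; schematically (our notation and constraint set, not the paper's), such a formulation has the shape:

\[
\max_{x_i \ge 0}\;\; \sum_i G_i(x_i)
\quad\text{s.t.}\quad
\sum_i x_i = \Lambda,\qquad
P_i(x_i) \le P_i^{\text{green}} + P_i^{\text{cap}} \;\;\forall i,\qquad
D_i(x_i) \le D^{\text{QoS}} \;\;\forall i,
\]

where \(x_i\) is the transactional load placed on datacenter \(i\), \(G_i\) its goodput, \(\Lambda\) the total arriving load, \(P_i\) the resulting power draw bounded by the available green power plus any allowed cap, and \(D_i\) the response time bounded by the QoS target.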
2020-07-27
Xu, Shuiling, Ji, Xinsheng, Liu, Wenyan.  2019.  Enhancing the Reliability of NFV with Heterogeneous Backup. 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). :923–927.
Network Function Virtualization (NFV) provides tenants with flexible and scalable end-to-end service chaining in cloud computing and data center environments. However, compared with traditional hardware network devices, the uncertainty introduced by software and virtualization expands the attack surface, making network nodes vulnerable to certain types of attacks. Existing approaches to reliability can reduce the impact of physical device failures, but pay little attention to the attack scenario, which can be persistent and covert. In this paper, a heterogeneous backup strategy is proposed, enhancing the intrusion tolerance of the NFV service function chain (SFC) by dynamically switching the VNF executor. The validity of the method is verified by simulation and game-theoretic analysis.
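The abstract does not give the switching algorithm; purely to illustrate the idea of rotating a chain function across heterogeneous implementations so that a compromise of any single executor does not persist, here is a toy sketch in which the class names and the simple round-robin rotation policy are assumptions:

```java
import java.util.List;

/** Toy scheduler that periodically switches the active VNF executor among heterogeneous backups. */
public class HeterogeneousBackup {
    interface VnfExecutor { String process(String packet); }

    static class VariantA implements VnfExecutor {
        public String process(String p) { return "impl-A handled " + p; }
    }
    static class VariantB implements VnfExecutor {
        public String process(String p) { return "impl-B handled " + p; }
    }

    private final List<VnfExecutor> pool;
    private int active = 0;

    HeterogeneousBackup(List<VnfExecutor> pool) { this.pool = pool; }

    /** Rotate to a different implementation, bounding how long any one executor stays exposed. */
    void rotate() { active = (active + 1) % pool.size(); }

    String handle(String packet) { return pool.get(active).process(packet); }

    public static void main(String[] args) {
        HeterogeneousBackup chain = new HeterogeneousBackup(List.of(new VariantA(), new VariantB()));
        for (int epoch = 0; epoch < 4; epoch++) {
            System.out.println("epoch " + epoch + ": " + chain.handle("pkt" + epoch));
            chain.rotate(); // in practice triggered by a timer or an anomaly signal
        }
    }
}
```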
Babay, Amy, Tantillo, Thomas, Aron, Trevor, Platania, Marco, Amir, Yair.  2018.  Network-Attack-Resilient Intrusion-Tolerant SCADA for the Power Grid. 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :255–266.
As key components of the power grid infrastructure, Supervisory Control and Data Acquisition (SCADA) systems are likely to be targeted by nation-state-level attackers willing to invest considerable resources to disrupt the power grid. We present Spire, the first intrusion-tolerant SCADA system that is resilient to both system-level compromises and sophisticated network-level attacks and compromises. We develop a novel architecture that distributes the SCADA system management across three or more active sites to ensure continuous availability in the presence of simultaneous intrusions and network attacks. A wide-area deployment of Spire, using two control centers and two data centers spanning 250 miles, delivered nearly 99.999% of all SCADA updates initiated over a 30-hour period within 100ms. This demonstrates that Spire can meet the latency requirements of SCADA for the power grid.
2020-05-15
Khorsandroo, Sajad, Tosun, Ali Saman.  2018.  Time Inference Attacks on Software Defined Networks: Challenges and Countermeasures. 2018 IEEE 11th International Conference on Cloud Computing (CLOUD). :342—349.

Through time inference attacks, adversaries fingerprint SDN controllers, estimate switch flow-table size, and perform flow-state reconnaissance. In fact, timing an SDN and analyzing the results can expose information that later empowers SDN resource-consumption or saturation attacks. In the real world, however, launching such attacks is not easy, due to challenges attackers may encounter while attacking an actual SDN deployment. These challenges, which are not addressed adequately in the related literature, are investigated in this paper, and practical solutions to mitigate such attacks are proposed. The discussed challenges are clarified by means of extensive experiments on an actual cloud data center testbed. Moreover, the mitigation schemes have been implemented and examined in detail. Experimental results show that the proposed countermeasures effectively block time inference attacks.
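The timing side channel at play is that the first packet of a new flow typically incurs a controller round trip on a flow-table miss, while later packets of the same flow match the already installed rule. A hedged probe sketch that compares TCP connection setup time against a subsequent request on the same connection is shown below; the target host, port, and echo behaviour are assumptions, and a real measurement would need many trials and careful statistics:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

/** Compares "new flow" latency (likely flow-table miss) with "established flow" latency (likely hit). */
public class FlowTimingProbe {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "10.0.0.2"; // hypothetical echo service inside the SDN
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 7;

        try (Socket s = new Socket()) {
            long t0 = System.nanoTime();
            s.connect(new InetSocketAddress(host, port), 2000); // SYN of a new flow: miss path
            long connectUs = (System.nanoTime() - t0) / 1_000;

            OutputStream out = s.getOutputStream();
            InputStream in = s.getInputStream();
            byte[] buf = new byte[16];

            long t1 = System.nanoTime();
            out.write("probe\n".getBytes());
            out.flush();
            in.read(buf);                                       // same flow: rule already installed
            long echoUs = (System.nanoTime() - t1) / 1_000;

            System.out.printf("connect (new flow): %d us, echo (established flow): %d us%n",
                    connectUs, echoUs);
            System.out.println("A large gap hints at reactive rule installation by the controller.");
        }
    }
}
```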

2020-04-17
Khorsandroo, Sajad, Tosun, Ali Saman.  2019.  White Box Analysis at the Service of Low Rate Saturation Attacks on Virtual SDN Data Plane. 2019 IEEE 44th LCN Symposium on Emerging Topics in Networking (LCN Symposium). :100—107.

Today's virtual switches not only support legacy network protocols and standard network management interfaces, but have also adopted OpenFlow as a prevailing communication protocol. This makes them a core networking component of today's virtualized infrastructures, able to handle sophisticated networking scenarios in a flexible and software-defined manner. At the same time, these virtual SDN data planes become high-value targets because a compromised switch is hard to detect while it affects all components of a virtualized/SDN-based environment. Most of the well-known programmable virtual switches on the market are open source, which makes them cost-effective and highly configurable options in any network infrastructure deployment. However, this comes at a cost which needs to be addressed. Accordingly, this paper raises an alarm on how attackers may leverage white-box analysis of software switch functionality to launch effective low-profile attacks against it. In particular, we show in practice how attackers can systematically take advantage of static and dynamic code analysis techniques to launch a low-rate saturation attack on the virtual SDN data plane in a cloud data center.

2019-12-16
Pal, Manjish, Sahu, Prashant, Jaiswal, Shailesh.  2018.  LevelTree: A New Scalable Data Center Networks Topology. 2018 International Conference on Advances in Computing, Communication Control and Networking (ICACCCN). :482-486.

In recent times it has become crucial for data center networks (DCNs) to broaden their system limits to meet the increasing needs of cloud-based applications. A good DCN topology must possess numerous properties, such as low diameter, high bisection bandwidth, and ease of organization. In addition, a DCN topology should demonstrate aptness in failure resiliency, scalability, construction, and routing. In this paper, we introduce a new data center network topology termed LevelTree, built from several modules that grow as a tree topology, where each module is constructed from a complete graph. LevelTree demonstrates good topological properties and beats important topologies like Jellyfish, VolvoxDC, and Fattree by providing a superior, worthwhile design with greater capacity.
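The construction details of LevelTree are not given in the abstract; purely to illustrate the stated building blocks (complete-graph modules arranged as a tree) and how one checks a property like diameter, here is a toy generator in which the module size, fan-out, number of levels, and the choice of which nodes bridge parent and child modules are all assumptions:

```java
import java.util.*;

/** Toy "tree of complete-graph modules" generator with a brute-force diameter check. */
public class ToyLevelTree {
    public static void main(String[] args) {
        int moduleSize = 4, fanout = 2, levels = 3;
        List<List<Integer>> adj = new ArrayList<>();

        // Build modules level by level; each module is a complete graph on moduleSize nodes.
        List<Integer> frontier = new ArrayList<>(List.of(addModule(adj, moduleSize)));
        for (int l = 1; l < levels; l++) {
            List<Integer> next = new ArrayList<>();
            for (int parentStart : frontier) {
                for (int c = 0; c < fanout; c++) {
                    int childStart = addModule(adj, moduleSize);
                    // Bridge one node of the parent module to the child module (arbitrary choice).
                    addEdge(adj, parentStart + (c % moduleSize), childStart);
                    next.add(childStart);
                }
            }
            frontier = next;
        }
        System.out.println("nodes=" + adj.size() + " diameter=" + diameter(adj));
    }

    static int addModule(List<List<Integer>> adj, int k) {
        int start = adj.size();
        for (int i = 0; i < k; i++) adj.add(new ArrayList<>());
        for (int i = 0; i < k; i++)
            for (int j = i + 1; j < k; j++) addEdge(adj, start + i, start + j);
        return start;
    }

    static void addEdge(List<List<Integer>> adj, int u, int v) {
        adj.get(u).add(v);
        adj.get(v).add(u);
    }

    /** All-pairs BFS; fine for toy sizes only. */
    static int diameter(List<List<Integer>> adj) {
        int d = 0;
        for (int s = 0; s < adj.size(); s++) {
            int[] dist = new int[adj.size()];
            Arrays.fill(dist, -1);
            Deque<Integer> q = new ArrayDeque<>(List.of(s));
            dist[s] = 0;
            while (!q.isEmpty()) {
                int u = q.poll();
                d = Math.max(d, dist[u]);
                for (int v : adj.get(u)) if (dist[v] < 0) { dist[v] = dist[u] + 1; q.add(v); }
            }
        }
        return d;
    }
}
```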

2019-08-05
He, X., Zhang, Q., Han, Z..  2018.  The Hamiltonian of Data Center Network BCCC. 2018 IEEE 4th International Conference on Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High Performance and Smart Computing, (HPSC) and IEEE International Conference on Intelligent Data and Security (IDS). :147–150.

With the development of cloud computing, the topological properties of data center networks have become important to the computing resources they host. Recently, a data center network structure called BCCC was proposed; it is a recursively built structure with many good properties, including expandability. Hamiltonicity and expandability of a data center network structure play an extremely important role in network communication. This paper describes the Hamiltonicity and expandability of the expandable data center network BCCC, and their important role in network traffic.

2019-03-22
Liu, Y., Li, X., Xiao, L..  2018.  Service Oriented Resilience Strategy for Cloud Data Center. 2018 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :269-274.

As an information hub for various trades and professions in the era of big data, the cloud data center bears the responsibility of providing uninterrupted service. To cope with the impact of failures and interruptions on the Quality of Service (QoS) during operation, it is important to guarantee the resilience of the cloud data center. Thus, different resilience actions are conducted over its life cycle, forming a resilience strategy. In order to measure the effect of the resilience strategy on system resilience, this paper proposes a new approach to model and evaluate the resilience strategy for a cloud data center, focusing on its core service-providing part, the IT architecture. A comprehensive resilience metric based on resilience loss is put forward, considering the characteristics of the cloud data center. Furthermore, a mapping model between system resilience and the resilience strategy is built. Then, based on a hierarchical colored generalized stochastic Petri net (HCGSPN) model depicting the procedure of the system processing service requests, simulation is conducted to evaluate the resilience strategy through the metric calculation. With a case study of a company's cloud data center, the applicability and correctness of the approach are demonstrated.
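The abstract mentions a resilience metric based on resilience loss without stating its form; one common way to express such a metric, given here only as an illustration in our own notation, is the normalized performance loss accumulated between disruption and recovery:

\[
R \;=\; 1 \;-\; \frac{\displaystyle\int_{t_0}^{t_r} \bigl( Q_{\text{target}} - Q(t) \bigr)\, dt}{Q_{\text{target}}\,\bigl(t_r - t_0\bigr)},
\]

where \(Q(t)\) is the delivered service quality over time, \(Q_{\text{target}}\) the nominal level, \(t_0\) the disruption time, and \(t_r\) the recovery time; \(R = 1\) corresponds to no resilience loss. Different resilience strategies change the shape of \(Q(t)\) and hence the metric.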

Guntupally, K., Devarakonda, R., Kehoe, K..  2018.  Spring Boot Based REST API to Improve Data Quality Report Generation for Big Scientific Data: ARM Data Center Example. 2018 IEEE International Conference on Big Data (Big Data). :5328-5329.

Web application technologies are growing rapidly with continuous innovation and improvements. This paper focuses on the popular Spring Boot [1] Java-based framework for building web and enterprise applications, and on how it provides flexibility for service-oriented architecture (SOA). One challenge with any Spring-based application is its level of configuration complexity. Spring Boot makes it easy to create and deploy stand-alone, production-grade Spring applications with very little Spring configuration. For example, with the Spring Model-View-Controller (MVC) framework [2], we need to configure the dispatcher servlet, web jars, a view resolver, and component scanning, among other things. To solve this, Spring Boot provides several auto-configuration options to set up the application with any needed dependencies. Another challenge is identifying the framework dependencies and associated library versions required to develop a web application. Spring Boot offers simpler dependency management by bundling a comprehensive but flexible framework and the associated libraries into a single dependency, which provides all the Spring-related technology needed for starter projects as compared to CRUD web applications. The framework also provides a range of additional features that are common across many projects, such as an embedded server, security, metrics, health checks, and externalized configuration. Web applications are generally packaged as a war and deployed to a web server, but a Spring Boot application can be packaged as either a war or a jar file, which allows it to run without the need to install and/or configure an application server. In this paper, we discuss how the Atmospheric Radiation Measurement (ARM) Data Center (ADC) at Oak Ridge National Laboratory is using Spring Boot to create an SOA-based REST [4] service API that bridges the gap between frontend user interfaces and the backend database. Using this REST service API, ARM scientists are now able to submit reports via a user form or a command line interface, which captures data quality and other important information about ARM data.
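To make the Spring Boot pattern described above concrete, here is a minimal sketch of a report-submission REST service; it assumes the spring-boot-starter-web dependency is on the classpath, and the package, routes, and payload fields are hypothetical, not the actual ARM/ADC API:

```java
package example.adc;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.*;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

/** Minimal Spring Boot app: auto-configuration starts an embedded server, packaged as a runnable jar. */
@SpringBootApplication
public class DqrApplication {
    public static void main(String[] args) {
        SpringApplication.run(DqrApplication.class, args);
    }
}

/** Illustrative payload; field names are assumptions, not the ARM/ADC schema. */
record DataQualityReport(String datastream, String description) {}

@RestController
@RequestMapping("/api/dqr")
class DqrController {
    private final Map<Long, DataQualityReport> store = new ConcurrentHashMap<>();
    private final AtomicLong ids = new AtomicLong();

    /** Submit a report, e.g. from a web form or a command-line `curl -X POST`. */
    @PostMapping
    public Map<String, Object> submit(@RequestBody DataQualityReport report) {
        long id = ids.incrementAndGet();
        store.put(id, report);
        return Map.of("id", id, "status", "received");
    }

    /** Fetch a previously submitted report. */
    @GetMapping("/{id}")
    public DataQualityReport get(@PathVariable long id) {
        return store.get(id);
    }
}
```

In a real deployment the in-memory map would be replaced by the backend database the paper mentions; the point of the sketch is the small amount of configuration needed for a working endpoint.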

2018-12-10
Versluis, L., Neacsu, M., Iosup, A..  2018.  A Trace-Based Performance Study of Autoscaling Workloads of Workflows in Datacenters. 2018 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). :223–232.

To improve customer experience, datacenter operators offer support for simplifying application and resource management. For example, running workloads of workflows on behalf of customers is desirable, but requires increasingly more sophisticated autoscaling policies, that is, policies that dynamically provision resources for the customer. Although selecting and tuning autoscaling policies is a challenging task for datacenter operators, so far relatively few studies investigate the performance of autoscaling for workloads of workflows. Complementing previous knowledge, in this work we propose the first comprehensive performance study in the field. Using trace-based simulation, we compare state-of-the-art autoscaling policies across multiple application domains, workload arrival patterns (e.g., burstiness), and system utilization levels. We further investigate the interplay between autoscaling and regular allocation policies, and the complexity cost of autoscaling. Our quantitative study focuses not only on traditional performance metrics and state-of-the-art elasticity metrics, but also on time- and memory-related autoscaling-complexity metrics. Our main results give strong and quantitative evidence about previously unreported operational behavior, for example, that autoscaling policies perform differently across application domains, and that allocation and provisioning policies should be co-designed.

2018-10-26
Arya, D., Dave, M..  2017.  Security-based service broker policy for FOG computing environment. 2017 8th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–6.

With the evolution of computing from personal computers to online Internet of Things (IoT) services and applications, security risks have also evolved into a major concern. The use of Fog computing enhances the reliability and availability of online services due to enhanced heterogeneity and an increased number of computing servers. However, security remains an open challenge. Various trust models have been proposed to measure the security strength of available service providers. We utilize the quantized security of datacenters and propose a new security-based service broker policy (SbSBP) for the Fog computing environment to allocate the optimal datacenter(s) to serve users' requests based on users' requirements of cost, time, and security. Further, considering the dynamic nature of Fog computing, the concept of dynamic reconfiguration has been added. Comparative analysis of simulation results shows the effectiveness of the proposed policy in incorporating users' requirements into the decision-making process.
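The abstract describes selecting datacenters by users' cost, time, and security requirements without giving the scoring function; a toy weighted-score selector, in which the weights, normalization constants, and sample datacenter values are all assumptions, looks like this:

```java
import java.util.Comparator;
import java.util.List;

/** Toy security-aware broker: rank datacenters by a weighted cost/time/security score. */
public class ServiceBroker {
    record Datacenter(String name, double costPerHour, double latencyMs, double securityLevel) {}

    public static void main(String[] args) {
        List<Datacenter> dcs = List.of(
                new Datacenter("dc-east", 0.12, 35, 0.90),
                new Datacenter("dc-west", 0.08, 80, 0.60),
                new Datacenter("dc-eu",   0.15, 55, 0.95));

        // Hypothetical user weights: security matters most, then time, then cost.
        double wCost = 0.2, wTime = 0.3, wSec = 0.5;

        Datacenter best = dcs.stream()
                .max(Comparator.comparingDouble(d -> score(d, wCost, wTime, wSec)))
                .orElseThrow();
        System.out.println("selected datacenter: " + best.name());
    }

    /** Higher is better; cost and latency act as penalties, security as a reward (normalization assumed). */
    static double score(Datacenter d, double wCost, double wTime, double wSec) {
        return wSec * d.securityLevel()
                - wCost * (d.costPerHour() / 0.20)
                - wTime * (d.latencyMs() / 100.0);
    }
}
```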

2018-07-18
Thakre, P. P., Sahare, V. N..  2017.  VM live migration time reduction using NAS based algorithm during VM live migration. 2017 Third International Conference on Sensing, Signal Processing and Security (ICSSS). :242–246.

Live migration is the process used in datacenter virtualization environments to obtain the benefit of zero downtime during system maintenance. But when live virtual machines are migrated along with system files and storage data, network traffic increases across the available bandwidth and migration time is delayed. The migration time needs to be reduced in order to maintain system performance, by analyzing and optimizing the storage overhead that mainly arises from unnecessary duplicated data transferred during live migration. This calls for a storage device that keeps the duplicated data accessible to both the source and the target physical host, i.e., NAS. The proposed hash-map-based algorithm maps all I/O operations in order to track duplicated data by assigning hash values to both NAS and RAM data. Only the unique data is then sent to the target host, without affecting the service level agreement (SLA), VM migration time, application downtime, SLA violations, or pre- and post-migration overheads of the virtual machines.
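The hash-map idea (hash every block, and skip blocks whose hash already exists at the target via the shared NAS) can be sketched roughly as follows; the block contents, digest choice, and the in-memory set standing in for the NAS-side index are assumptions:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import java.util.HashSet;
import java.util.Set;

/** Toy dedup filter: only blocks whose hash is unseen are "transferred" during migration. */
public class MigrationDedup {
    public static void main(String[] args) throws Exception {
        // Stand-in for the NAS-side index of block hashes already present at the target host.
        Set<String> knownAtTarget = new HashSet<>();

        String[] memoryBlocks = {"kernel-page", "app-page-A", "kernel-page", "app-page-B", "app-page-A"};
        int sent = 0, skipped = 0;

        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        for (String block : memoryBlocks) {
            String digest = Base64.getEncoder()
                    .encodeToString(sha.digest(block.getBytes(StandardCharsets.UTF_8)));
            if (knownAtTarget.add(digest)) {
                sent++;        // unique block: transfer it and record its hash
            } else {
                skipped++;     // duplicate: the target can read it from the shared NAS instead
            }
        }
        System.out.printf("blocks sent: %d, duplicates skipped: %d%n", sent, skipped);
    }
}
```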

2018-03-19
Mehta, N. P., Sahai, A. K..  2017.  Internet of Things: Raging Devices and Standardization in Low-Powered Protocols. 2017 Second International Conference on Electrical, Computer and Communication Technologies (ICECCT). :1–5.

This paper addresses the need for standard communication protocols for IoT devices with limited power and computational capabilities. The world is rapidly changing with the proliferation and deployment of IoT devices, which will bring new communication challenges as these devices are connected to the Internet and need to communicate with each other in real time. The paper provides an overview of IoT system architecture and the forthcoming challenges it will bring. There is an urgent need to establish standards for communication in the IoT world. With the recent development of new protocols like CoAP, 6LoWPAN, IEEE 802.15.4, and Thread in different layers of the OSI model, additional challenges also present themselves. Performance and data management are becoming more critical than ever before due to the complexity of connecting a rapidly growing number of IoT devices. The performance of systems dealing with IoT devices will require appropriate capacity planning and the associated development of data centers. Finally, the paper also presents some reasonable approaches to address the above issues in the IoT world.

2018-02-21
Ibdah, D., Kanani, M., Lachtar, N., Allan, N., Al-Duwairi, B..  2017.  On the security of SDN-enabled smartgrid systems. 2017 International Conference on Electrical and Computing Technologies and Applications (ICECTA). :1–5.

Software-Defined Networking (SDN) is a new networking paradigm that has gained a lot of attention in recent years, especially for implementing data center networks and providing efficient security solutions. The popularity of SDN and its attractive security features suggest that it can be used in the context of smart grid systems to address many of the vulnerabilities and security problems facing such critical infrastructure systems. This paper studies the impact of different cyber attacks that can target a smart grid communication network implemented as a software-defined network on the operation of the smart grid system in general. In particular, we perform different attack scenarios, including DDoS attacks, location hijacking, and link overloading, against SDN networks with different controller types, including POX, Floodlight, and Ryu. Our experiments were carried out using the Mininet simulator. The experiments show that SDN-enabled smart grid systems are vulnerable to different types of attacks.

Lu, Y., Chen, G., Luo, L., Tan, K., Xiong, Y., Wang, X., Chen, E..  2017.  One more queue is enough: Minimizing flow completion time with explicit priority notification. IEEE INFOCOM 2017 - IEEE Conference on Computer Communications. :1–9.

Ideally, minimizing the flow completion time (FCT) requires millions of priorities supported by the underlying network so that each flow has its unique priority. However, in production datacenters, the available switch priority queues for flow scheduling are very limited (merely 2 or 3). This practical constraint seriously degrades the performance of previous approaches. In this paper, we introduce Explicit Priority Notification (EPN), a novel scheduling mechanism which emulates fine-grained priorities (i.e., desired priorities or DP) using only two switch priority queues. EPN can support various flow scheduling disciplines with or without flow size information. We have implemented EPN on commodity switches and evaluated its performance with both testbed experiments and extensive simulations. Our results show that, with flow size information, EPN achieves comparable FCT as pFabric that requires clean-slate switch hardware. And EPN also outperforms TCP by up to 60.5% if it bins the traffic into two priority queues according to flow size. In information-agnostic setting, EPN outperforms PIAS with two priority queues by up to 37.7%. To the best of our knowledge, EPN is the first system that provides millions of priorities for flow scheduling with commodity switches.
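The two-queue, size-based binning baseline mentioned above (packets from flows that have sent less than a threshold go to the high-priority queue, the rest to the low-priority queue) can be sketched as follows; the threshold and packet sizes are assumptions, and this shows the generic size-based binning idea rather than EPN's full mechanism:

```java
import java.util.HashMap;
import java.util.Map;

/** Toy size-based binning into two priority queues: short flows high priority, long flows low. */
public class TwoQueueBinning {
    static final long THRESHOLD_BYTES = 100 * 1024;  // assumed cutoff between "short" and "long" flows
    static final Map<String, Long> bytesSent = new HashMap<>();

    /** Returns 0 for the high-priority queue, 1 for the low-priority queue. */
    static int queueFor(String flowId, int packetBytes) {
        long sent = bytesSent.merge(flowId, (long) packetBytes, Long::sum);
        return sent <= THRESHOLD_BYTES ? 0 : 1;
    }

    public static void main(String[] args) {
        // A short query flow stays in queue 0; a bulk transfer is demoted to queue 1 once it grows.
        for (int i = 0; i < 3; i++)
            System.out.println("query packet -> queue " + queueFor("query-flow", 1_500));
        for (int i = 0; i < 100; i++) {
            int q = queueFor("bulk-flow", 9_000);
            if (i % 25 == 0) System.out.println("bulk packet " + i + " -> queue " + q);
        }
    }
}
```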

2018-02-02
Ghosh, U., Chatterjee, P., Tosh, D., Shetty, S., Xiong, K., Kamhoua, C..  2017.  An SDN Based Framework for Guaranteeing Security and Performance in Information-Centric Cloud Networks. 2017 IEEE 10th International Conference on Cloud Computing (CLOUD). :749–752.

Cloud data centers are critical infrastructures for delivering cloud services. Although the security and performance of cloud data centers have been well studied in the past, their networking aspects are overlooked. Current network infrastructures in cloud data centers limit the ability of cloud providers to offer guaranteed network resources to users. In order to ensure the security and performance requirements defined in the service level agreement (SLA) between cloud user and provider, cloud providers need the ability to provision network resources dynamically and on the fly. The main challenge for a cloud provider in utilizing network resources can be addressed by provisioning virtual networks that support information-centric services by separating the control plane from the cloud infrastructure. In this paper, we propose an SDN-based information-centric cloud framework to provision network resources in order to support the elastic demands of cloud applications depending on SLA requirements. The framework decouples the control plane and data plane, wherein the conceptually centralized control plane controls and manages the fully distributed data plane. It computes the path to ensure the security and performance of the network. We report initial experiments on the average round-trip delay between consumers and producers.

2018-01-16
Ba-Hutair, M. N., Kamel, I..  2016.  A New Scheme for Protecting the Privacy and Integrity of Spatial Data on the Cloud. 2016 IEEE Second International Conference on Multimedia Big Data (BigMM). :394–397.

As the amount of spatial data gets bigger, organizations have realized that it is cheaper and more flexible to keep their data on the Cloud rather than to establish and maintain huge in-house data centers. Though this saves a lot in IT costs, organizations are still concerned about the privacy and security of their data. Encrypting the whole database before uploading it to the Cloud solves the security issue, but querying the database then requires downloading and decrypting the data set, which is impractical. In this paper, we propose a new scheme for protecting the privacy and integrity of spatial data stored in the Cloud while being able to execute range queries efficiently. The proposed technique suggests a new index structure to support answering range queries over an encrypted data set. The proposed indexing scheme is based on the Z-curve. The paper describes a distributed algorithm for answering range queries over spatial data stored on the Cloud. We carried out many simulation experiments to measure the performance of the proposed scheme. The experimental results show that the proposed scheme outperforms the most recent schemes by Kim et al. in terms of data redundancy.
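The index is based on the Z-curve; the standard way to compute a Z-order (Morton) value is to interleave the bits of the two coordinates, as in this small sketch. The 16-bit coordinate width is an assumption, and the paper's encryption layer and distributed query algorithm are not shown here:

```java
/** Z-order (Morton) encoding: interleave the bits of x and y so nearby points get nearby keys. */
public class ZCurve {
    /** Interleaves the low 16 bits of x (even bit positions) and y (odd bit positions). */
    static long morton(int x, int y) {
        long code = 0;
        for (int i = 0; i < 16; i++) {
            code |= ((long) (x >> i) & 1L) << (2 * i);
            code |= ((long) (y >> i) & 1L) << (2 * i + 1);
        }
        return code;
    }

    public static void main(String[] args) {
        // Neighbouring points map to close Z-values, which is what makes range queries over
        // the one-dimensional index practical.
        System.out.println(morton(3, 5));   // 39
        System.out.println(morton(4, 5));   // 50
        System.out.println(morton(100, 200));
    }
}
```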

Ghutugade, K. B., Patil, G. A..  2016.  Privacy preserving auditing for shared data in cloud. 2016 International Conference on Computing, Analytics and Security Trends (CAST). :300–305.

Cloud computing, often referred to as simply "the cloud," is the delivery of on-demand computing resources, everything from applications to data centers, over the Internet. The cloud is used not only for storing data; the stored data can also be shared by multiple users. Because of this, the integrity of cloud data is subject to doubt. It is not always feasible for a user to download all the data to verify its integrity, so the proposed system contains a Third Party Auditor (TPA) to verify the integrity of shared data. During auditing, the shared data is kept private from public verifiers, who are able to verify the integrity of shared data without downloading or retrieving the entire data file. A group signature is used to preserve the identity privacy of group members from the third party auditor. Privacy preservation ensures that the TPA cannot derive users' data content from the information collected during the auditing process.

2018-01-10
Wrona, K., Amanowicz, M., Szwaczyk, S., Gierłowski, K..  2017.  SDN testbed for validation of cross-layer data-centric security policies. 2017 International Conference on Military Communications and Information Systems (ICMCIS). :1–6.

Software-defined networks offer a promising framework for the implementation of cross-layer data-centric security policies in military systems. An important aspect of the design process for such advanced security solutions is the thorough experimental assessment and validation of proposed technical concepts prior to their deployment in operational military systems. In this paper, we describe an OpenFlow-based testbed, which was developed with a specific focus on the validation of SDN security mechanisms, including both the mechanisms for protecting the software-defined network layer and the cross-layer enforcement of higher-level policies, such as data-centric security policies. We also present initial experimentation results obtained using the testbed, which confirm its ability to validate simulation and analytic predictions. Our objective is to provide a sufficiently detailed description of the configuration used in our testbed so that it can be easily replicated and reused by other security researchers in their experiments.

2017-12-28
Mondal, S. K., Sabyasachi, A. S., Muppala, J. K..  2017.  On Dependability, Cost and Security Trade-Off in Cloud Data Centers. 2017 IEEE 22nd Pacific Rim International Symposium on Dependable Computing (PRDC). :11–19.

The performance, dependability, and security of cloud service systems are vital for their ongoing operation, control, and support. Thus, controlled improvement in service requires a comprehensive analysis and systematic identification of the fundamental underlying constituents of the cloud using a rigorous discipline. In this paper, we introduce a framework which helps identify areas for potential cloud service enhancements. A cloud service cannot be completed if there is a failure in any of its underlying resources. In addition, resources are kept offline for scheduled maintenance. We use redundant resources to mitigate the impact of failures and maintenance to ensure performance and dependability, which helps enhance security as well. For example, at least 4 replicas are required to defend against the intrusion of a single instance or a single malicious attack/fault, as defined by Byzantine Fault Tolerance (BFT). Data centers with high performance, dependability, and security are outsourced to the cloud computing environment with greater flexibility in the cost of owning the computing infrastructure. In this paper, we analyze the effectiveness of redundant resource usage in terms of a dependability metric and the cost of service deployment based on the priority of service requests. The trade-offs among dependability, cost, and security under different redundancy schemes are characterized through comprehensive analytical models.
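The replica count quoted above follows from the standard Byzantine fault tolerance bound:

\[
n \;\ge\; 3f + 1 \qquad\Longrightarrow\qquad f = 1 \;\Rightarrow\; n \ge 4,
\]

so tolerating a single compromised or arbitrarily faulty instance requires at least four replicas, which is why the redundancy schemes analyzed in the paper start from that baseline.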