Bibliography
This paper presents the development and configuration of a virtually air-gapped cloud environment in AWS to secure production software workloads and patient data (ePHI) and to achieve HIPAA compliance.
Diagnosing performance issues in cloud environments is challenging due to the multiple levels of virtualization and the diversity of applications interacting on the same physical host. Moreover, because of privacy, security, ease of deployment, and execution overhead, an agent-less method that limits data collection to the physical host level is often the only acceptable solution. In this paper, a precise host-based method to recover the wait states of processes inside a given Virtual Machine (VM) is proposed. The virtual Process State Detection (vPSD) algorithm computes the state of processes through host kernel tracing. The state of a virtual process (vProcess) is displayed in an interactive trace viewer (Trace Compass) for further inspection. Our proposed VM trace analysis algorithm has been open-sourced for further enhancements and for the benefit of other developers. Experimental evaluations were conducted using a mix of workload types (CPU, disk, and network) with different applications such as Hadoop, MySQL, and Apache. Because it is based on host hypervisor tracing, vPSD incurs a lower overhead (around 0.03%) than other approaches.
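Illustration (not the actual vPSD implementation): a simplified sketch of how per-vCPU time from a host-side trace could be attributed to running, preempted, and wait states. The event names and fields are hypothetical simplifications of the sched/KVM events a host kernel trace would contain.

```python
# Simplified, hypothetical illustration of host-based state recovery for a vCPU.
from collections import defaultdict

def vcpu_states(events):
    """events: time-ordered list of (timestamp_ns, event_name, vcpu_tid)."""
    current = {}                                    # vcpu_tid -> (state, since_ts)
    totals = defaultdict(lambda: defaultdict(int))  # vcpu_tid -> state -> ns

    def transition(tid, new_state, ts):
        if tid in current:
            old_state, since = current[tid]
            totals[tid][old_state] += ts - since
        current[tid] = (new_state, ts)

    for ts, name, tid in events:
        if name == "sched_switch_in":      # vCPU thread gets a physical CPU
            transition(tid, "RUNNING", ts)
        elif name == "sched_switch_out":   # vCPU thread preempted on the host
            transition(tid, "PREEMPTED", ts)
        elif name == "kvm_exit_hlt":       # guest went idle / blocked
            transition(tid, "WAIT", ts)
        elif name == "kvm_entry":          # execution returns to guest code
            transition(tid, "RUNNING", ts)
    return totals

trace = [(0, "sched_switch_in", 4242), (100, "kvm_exit_hlt", 4242),
         (400, "kvm_entry", 4242), (600, "sched_switch_out", 4242),
         (900, "sched_switch_in", 4242)]
print(dict(vcpu_states(trace)[4242]))  # {'RUNNING': 300, 'WAIT': 300, 'PREEMPTED': 300}
```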
Companies want to offer chat bots to their customers and employees which can answer questions, enable self-service, and showcase their products and services. Implementing and maintaining chat bots by hand costs time and money. Companies typically have web APIs for their services, which are often documented with an API specification. This paper presents a compiler that takes a web API specification written in Swagger and automatically generates a chat bot that helps the user make API calls. The generated bot is self-documenting, using descriptions from the API specification to answer help requests. Unfortunately, Swagger specifications are not always good enough to generate high-quality chat bots. This paper addresses this problem via a novel in-dialogue curation approach: the power user can improve the generated chat bot by interacting with it. The result is then saved back as an API specification. This paper reports on the design and implementation of the chat bot compiler, the in-dialogue curation, and working case studies.
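Illustration of the spec-to-bot idea: read a minimal, hypothetical Swagger/OpenAPI fragment and answer a "help" request from its descriptions. The paper's compiler additionally handles slot filling, the actual API calls, and in-dialogue curation, none of which is shown here.

```python
# Toy sketch: generate help text for a chat bot from an API spec fragment.
spec = {
    "paths": {
        "/orders": {
            "get": {"summary": "List all orders",
                    "parameters": [{"name": "status",
                                    "description": "Filter by order status"}]},
            "post": {"summary": "Create a new order", "parameters": []},
        }
    }
}

def help_text(spec):
    lines = []
    for path, operations in spec["paths"].items():
        for verb, op in operations.items():
            params = ", ".join(p["name"] for p in op.get("parameters", [])) or "none"
            lines.append(f"{verb.upper()} {path}: {op.get('summary', '(undocumented)')} "
                         f"(parameters: {params})")
    return "\n".join(lines)

print(help_text(spec))
```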
The evolution of cloud computing imposes many challenges on performance testing and requires not only a different approach and methodology of performance evaluation and analysis, but also specialized tools and frameworks to support such work. In traditional performance testing, typically a single workload was run against a static test configuration. The main metrics derived from such experiments included throughput, response times, and system utilization at steady state. While this may have been sufficient in the past, where in many cases a single application was run on dedicated hardware, this approach is no longer suitable for cloud-based deployments. Whether private or public cloud, such environments typically host a variety of applications on distributed shared hardware resources, simultaneously accessed by a large number of tenants running heterogeneous workloads. The number of tenants as well as their activity and resource needs dynamically change over time, and the cloud infrastructure reacts to this by reallocating existing or provisioning new resources. Besides metrics such as the number of tenants and overall resource utilization, performance testing in the cloud must be able to answer many more questions: How is the quality of service of a tenant impacted by the constantly changing activity of other tenants? How long does it take the cloud infrastructure to react to changes in demand, and what is the effect on tenants while it does so? How well are service level agreements met? What is the resource consumption of individual tenants? How can global performance metrics at the application and system level in a distributed system be correlated to an individual tenant's perceived performance? In this paper, we present CloudPerf, a performance test framework specifically designed for distributed and dynamic multi-tenant environments, capable of answering all of the above questions, and more. CloudPerf consists of a distributed harness, a protocol-independent load generator and workload modeling framework, an extensible statistics framework with live-monitoring and post-analysis tools, interfaces for cloud deployment operations, and a rich set of both low-level as well as high-level workloads from different domains.
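A minimal sketch of one of the questions above, correlating an individual tenant's perceived performance with global measurements by tagging every sample with its tenant; the tenant names, latencies, and the p95 choice are illustrative, not CloudPerf's actual statistics framework.

```python
# Per-tenant percentile aggregation over load-generator samples (illustrative).
samples = [  # (tenant, response_time_ms) as a load generator would record them
    ("tenant-a", 12), ("tenant-b", 95), ("tenant-a", 15),
    ("tenant-b", 110), ("tenant-a", 14), ("tenant-b", 87),
]

def p95(values):
    ordered = sorted(values)
    return ordered[max(0, int(round(0.95 * len(ordered))) - 1)]

def per_tenant_p95(samples):
    by_tenant = {}
    for tenant, rt in samples:
        by_tenant.setdefault(tenant, []).append(rt)
    return {tenant: p95(rts) for tenant, rts in by_tenant.items()}

print(per_tenant_p95(samples))   # {'tenant-a': 15, 'tenant-b': 110}
```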
Security at the virtualization level has always been a major issue in cloud computing environments. The large number of virtual machines hosted on a single server for various customers/clients may face serious security threats due to internal or external network attacks. In this work, we have examined and evaluated these threats and their impact on an OpenStack private cloud. We have also discussed the most popular DoS (Denial-of-Service) attack on the DHCP server of this private cloud platform and evaluated the vulnerabilities in the OpenStack networking component, Neutron, through which this attack can be performed via a rogue DHCP server. Finally, a solution, a game-theory-based cloud architecture that helps detect and prevent DoS attacks in OpenStack, is proposed.
The majority of business activity in our integrated and connected world takes place in networks based on cloud computing infrastructure that cross national, geographic, and jurisdictional boundaries. Such efficient entity interconnection is made possible through an emerging networking paradigm, Software Defined Networking (SDN), which intends to vastly simplify policy enforcement and network reconfiguration in a dynamic manner. However, despite the obvious advantages this novel networking paradigm introduces, its increased attack surface compared to traditional networking deployments has proved to be a thorny issue that creates skepticism when safety-critical applications are considered. Especially when SDN is used to support Internet-of-Things (IoT)-related networking elements, additional security concerns arise, due to the elevated vulnerability of such deployments to specific types of attacks and the necessity of the inter-cloud communication any IoT application would require. The overall number of connected nodes makes the efficient monitoring of all entities a real challenge that must be tackled to prevent system degradation and service outage. This position paper provides an overview of common security issues of SDN when linked to IoT clouds, describes the design principles of the recently introduced Blockchain paradigm, and advocates the reasons that render Blockchain a significant security factor for solutions where SDN and IoT are involved.
This paper considers the security problem of outsourcing storage from user devices to the cloud. A secure searchable encryption scheme is presented to enable searching over encrypted user data in the cloud. The scheme simultaneously supports fuzzy keyword searching and ranking of matched results, which are two important factors in facilitating practical searchable encryption. A chaotic fuzzy transformation method is proposed to support secure fuzzy keyword indexing, storage, and querying. A secure posting list is also created to rank the matched results while maintaining the privacy and confidentiality of the user data and saving the resources of the user's mobile devices. Comprehensive tests have been performed, and the experimental results show that the proposed scheme is efficient and suitable for a secure searchable cloud storage system.
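The abstract does not detail the chaotic fuzzy transformation, so the sketch below shows only the generic wildcard-based fuzzy keyword indexing idea commonly used in searchable encryption, with an HMAC standing in for a secure trapdoor function; the key, keyword, and document identifier are made up.

```python
# Generic fuzzy keyword index sketch (NOT the paper's chaotic transformation).
import hmac, hashlib

KEY = b"server-side-secret"   # hypothetical index key

def fuzzy_set(word):
    """Edit-distance-<=1 wildcard forms of a keyword, e.g. 'clou*' for 'cloud'."""
    variants = {word}
    for i in range(len(word) + 1):
        variants.add(word[:i] + "*" + word[i:])        # insertion slot
        if i < len(word):
            variants.add(word[:i] + "*" + word[i + 1:])  # substitution/deletion slot
    return variants

def trapdoor(token):
    return hmac.new(KEY, token.encode(), hashlib.sha256).hexdigest()

# Index build: store trapdoors of the fuzzy set for each document keyword.
index = {trapdoor(v): "doc-17" for v in fuzzy_set("cloud")}

# Query: a misspelled keyword still matches through a shared wildcard variant.
hits = {index[t] for v in fuzzy_set("clouds") for t in [trapdoor(v)] if t in index}
print(hits)   # {'doc-17'}
```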
Cloud computing presents unlimited prospects for the Information Technology (IT) industry and business enterprises alike. Rapid advancement, however, brings a dark underbelly of new vulnerabilities and challenges unfolding with alarming regularity. Although cloud technology provides a ubiquitous environment that lets business enterprises conduct business across disparate locations, the security effectiveness of a platform interspersed with threats that can bring everything subscribing to the cloud to a halt raises questions. Nevertheless, the advantages of cloud platforms far outweigh the drawbacks, and studying new challenges helps overcome the drawbacks of this technology. One such emerging security threat is a ransomware attack on the cloud, which threatens to hold systems and data on the cloud network to ransom, with widespread damaging implications. This provides huge scope for IT security specialists to sharpen their skill set to overcome this new challenge. This paper covers the broad cloud architecture, current inherent cloud threat mechanisms, the ransomware vulnerabilities posed, and suggested methods to mitigate them.
In this paper, we review big data characteristics and security challenges in the cloud and visit different cloud domains and security regulations. We propose using integrated auditing for secure data storage and transaction logs, real-time compliance and security monitoring, regulatory compliance, data environment, identity and access management, infrastructure auditing, availability, privacy, legality, cyber threats, and granular auditing to achieve big data security. We apply a stochastic process model to conduct security analyses in availability and mean time to security failure. Potential future works are also discussed.
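The stochastic analysis is only named in the abstract; as a hedged illustration of the kind of quantity involved, the mean time to security failure of a simple three-state attack model can be obtained by first-step analysis. The states, rates, and structure below are assumptions for illustration, not the paper's model.

```latex
% Illustration only: a simple chain Good -> Vulnerable -> Compromised with
% attack rates \lambda_1, \lambda_2, repair rate \mu (Vulnerable back to Good),
% and Compromised absorbing. First-step analysis gives
T_G = \frac{1}{\lambda_1} + T_V, \qquad
T_V = \frac{1}{\lambda_2 + \mu} + \frac{\mu}{\lambda_2 + \mu}\, T_G,
% so the mean time to security failure is
\mathrm{MTTSF} = T_G = \frac{\lambda_1 + \lambda_2 + \mu}{\lambda_1 \lambda_2}.
```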
The main issue with big data in the cloud is that the data must always be processed or used by a third party. It is very important for data owners or clients to trust the cloud and to have a guarantee of privacy for the information stored there or analyzed as big data. Privacy models studied in previous research showed that privacy infringement for big data happens because of limitations, the privacy guarantee rate, or the dissemination of accurate data obtainable in the data set. In addition, there are various privacy models, and determining the best and most appropriate model to apply in the future, one that also guarantees big data privacy, requires further research and study. In what follows, we survey some of the privacy models in order to determine the advantages and disadvantages of each in assuring privacy for big data in the cloud. The present study also proposes a combined Diff-Anonym algorithm (k-anonymity and differential privacy models) to provide data anonymity while keeping a balance between the ambiguity of private data and the clarity of general data.
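A minimal sketch of the two ingredients such a combined approach draws on: a k-anonymity check over quasi-identifiers and the Laplace mechanism for a differentially private count. Column names, k, and epsilon are illustrative assumptions; the paper's actual Diff-Anonym algorithm is not reproduced here.

```python
# Sketch: k-anonymity check + Laplace-noised count (illustrative values).
import random
from collections import Counter

records = [
    {"zip": "481**", "age": "30-40", "diagnosis": "flu"},
    {"zip": "481**", "age": "30-40", "diagnosis": "cold"},
    {"zip": "481**", "age": "30-40", "diagnosis": "flu"},
]

def is_k_anonymous(records, quasi_ids, k):
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(size >= k for size in groups.values())

def laplace_count(true_count, epsilon):
    # Difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon);
    # a counting query has sensitivity 1, so this gives epsilon-DP.
    return true_count + random.expovariate(epsilon) - random.expovariate(epsilon)

if is_k_anonymous(records, ["zip", "age"], k=3):
    flu_cases = sum(r["diagnosis"] == "flu" for r in records)
    print(round(laplace_count(flu_cases, epsilon=0.5), 2))
```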
Delegated authorization protocols have become widespread in implementing Web applications and services, where popular providers managing people's identity information and personal data allow their users to delegate third-party Web services to access their data. In this paper, we analyze the risks related to untrusted providers not behaving correctly, and we solve this problem by proposing the first verifiable delegated authorization protocol that allows third-party services to verify the correctness of the users' data returned by the provider. The contribution of the paper is twofold: we show how delegated authorization can be cryptographically enforced through authenticated data structure protocols, and we extend the standard OAuth2 protocol to support efficient and verifiable delegated authorization, including database updates and privilege revocation.
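The verification rests on authenticated data structures; as background only, the sketch below shows the classic such structure, a Merkle tree with inclusion proofs, which lets a client check that data returned by a provider is consistent with a committed digest. This is not the paper's OAuth2 extension, and the data values are made up.

```python
# Merkle tree with inclusion proofs (authenticated data structure sketch).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _next_level(level):
    if len(level) % 2:               # duplicate last node on odd-sized levels
        level = level + [level[-1]]
    return [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)], level

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level, _ = _next_level(level)
    return level[0]

def merkle_proof(leaves, index):
    level, proof = [h(leaf) for leaf in leaves], []
    while len(level) > 1:
        nxt, padded = _next_level(level)
        sibling = index ^ 1
        proof.append((padded[sibling], sibling < index))  # (hash, sibling_is_left)
        level, index = nxt, index // 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

leaves = [b"alice:email", b"alice:phone", b"alice:address"]
print(verify(b"alice:phone", merkle_proof(leaves, 1), merkle_root(leaves)))  # True
```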
As the amount of spatial data gets bigger, organizations have realized that it is cheaper and more flexible to keep their data in the Cloud rather than to establish and maintain huge in-house data centers. Though this saves a lot in IT costs, organizations are still concerned about the privacy and security of their data. Encrypting the whole database before uploading it to the Cloud solves the security issue, but querying the database then requires downloading and decrypting the data set, which is impractical. In this paper, we propose a new scheme for protecting the privacy and integrity of spatial data stored in the Cloud while being able to execute range queries efficiently. The proposed technique suggests a new index structure to support answering range queries over the encrypted data set. The proposed indexing scheme is based on the Z-curve. The paper describes a distributed algorithm for answering range queries over spatial data stored in the Cloud. We carried out many simulation experiments to measure the performance of the proposed scheme. The experimental results show that the proposed scheme outperforms the most recent schemes by Kim et al. in terms of data redundancy.
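Since the index is built on the Z-curve, here is a minimal sketch of 2-D Morton encoding, which maps grid cells to one-dimensional keys so that a spatial range query can be rewritten as a few key-range scans over the (encrypted) index; the bit width and sample points are illustrative.

```python
# 2-D Morton (Z-order) encoding: interleave the bits of x and y.
def interleave_bits(x, y, bits=16):
    """z = ... y2 x2 y1 x1 y0 x0"""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

points = [(3, 5), (3, 6), (10, 2)]
for x, y in points:
    print((x, y), "-> z =", interleave_bits(x, y))
```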
In order to support a large volume of transactions and number of users, as estimated by load demand modeling, a system needs to scale in order to continue to satisfy its required quality attributes. In particular, for systems exposed to the Internet, scaling up may increase the attack surface susceptible to malicious intrusions. A new proactive approach based on the concept of Moving Target Defense (MTD) should be considered as a complement to current cybersecurity protection. In this paper, we analyze the scalability of the Self Cleansing Intrusion Tolerance (SCIT) MTD approach using Cloud infrastructure services. By applying the MTD model with continuous rotation and diversity to a multi-node or multi-instance system, we argue that the effectiveness of the approach depends on the share-nothing architecture pattern of the large system. Furthermore, adding more resources to the MTD mechanism can compensate so as to achieve the desired level of secure availability.
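A minimal sketch of the SCIT-style rotation idea on cloud infrastructure: nodes serve for a bounded exposure window, are then taken offline, replaced by freshly provisioned ("clean") instances, and destroyed. The provision_clean_instance and decommission functions are hypothetical placeholders for provider API calls, and the pool size and window are arbitrary.

```python
# SCIT-style continuous rotation sketch with placeholder cloud API calls.
import time
from collections import deque

def provision_clean_instance(generation):
    return f"node-gen{generation}"           # stand-in for a provider API call

def decommission(node):
    print(f"destroying {node}")              # stand-in for a provider API call

def rotate(pool_size=3, exposure_seconds=60, rounds=5):
    generation = 0
    pool = deque()
    for _ in range(pool_size):
        generation += 1
        pool.append(provision_clean_instance(generation))
    for _ in range(rounds):
        time.sleep(exposure_seconds)          # node serves one exposure window
        dirty = pool.popleft()                # oldest node leaves the online pool
        generation += 1
        pool.append(provision_clean_instance(generation))
        decommission(dirty)                   # rebuilt from a pristine image next time

rotate(exposure_seconds=0.01)                 # short window just for the demo
```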
The cloud has become an established and widespread paradigm. This success is due to the gain of flexibility and savings provided by this technology. However, the main obstacle to full cloud adoption is security. The cloud, as many other systems taking advantage of the Internet, is also facing threats that compromise data confidentiality and availability. In addition, new cloud-specific attacks have emerged and current intrusion detection and prevention mechanisms are not enough to protect the complex infrastructure of the cloud from these vulnerabilities. Furthermore, one of the promises of the cloud is the Quality of Service (QoS) by continuous delivery, which must be ensured even in case of intrusion. This work presents an overview of the main cloud vulnerabilities, along with the solutions proposed in the context of the H2020 CLARUS project in terms of monitoring techniques for intrusion detection and prevention, including attack-tolerance mechanisms.
Software discovery is a key management function to ensure that systems are free of vulnerabilities, comply with licensing requirements, and support advanced search for systems containing given software. Today, software is predominantly discovered by querying package management tools or using rules that check file metadata or contents. These approaches are inadequate, as not all software is installed through package managers, and agile development practices lead to frequent deployment of software. Other approaches to software discovery use machine learning methods that require a training phase, or require maintaining knowledge bases. Columbus uses the knowledge of software packaging practices that have evolved over time, and uses the information embedded in the file system impression created by a software package to discover it. Columbus is able to discover software in 92% of all official Docker images. Further, Columbus can be used in problem diagnosis and drift detection situations to compare two different systems, or to determine the evolution of a system over time.
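As a much-simplified stand-in (Columbus itself avoids hand-maintained rule bases by exploiting packaging conventions directly), the sketch below matches a system's file set against characteristic paths per package; the packages, paths, and threshold are made up.

```python
# Toy discovery from file-system "impressions" (hypothetical knowledge base).
IMPRESSIONS = {
    "nginx":   {"/usr/sbin/nginx", "/etc/nginx/nginx.conf"},
    "redis":   {"/usr/bin/redis-server", "/etc/redis/redis.conf"},
    "openjdk": {"/usr/bin/java", "/usr/lib/jvm"},
}

def discover(installed_files, threshold=0.5):
    found = {}
    for package, paths in IMPRESSIONS.items():
        coverage = len(paths & installed_files) / len(paths)
        if coverage >= threshold:
            found[package] = round(coverage, 2)
    return found

files_on_image = {"/usr/sbin/nginx", "/etc/nginx/nginx.conf", "/usr/bin/java"}
print(discover(files_on_image))   # {'nginx': 1.0, 'openjdk': 0.5}
```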
Aerial photography is fast becoming essential in scientific research that requires multi-agent systems from several perspectives, and we propose a secured system using a well-known public-key cryptosystem, NTRU, which is somewhat homomorphic in nature. Here we process aerial photography images captured by multiple agents. The agents encrypt the images and upload them to an untrusted cloud server. Cloud computing is a buzzword of the modern era, and the public cloud is used everywhere for its shared, on-demand nature, yet the cloud environment faces many security and privacy issues that need to be solved. This paper focuses on how to use the cloud so that there remains no possibility of data or computation breaches by the cloud server itself, as it is prone to attack in different ways. The cloud server computes on the encrypted data without knowing the contents of the images. After concatenation, the encrypted result is delivered to the concerned authority, where it is decrypted while retaining its originality. We set up our experiment on the Amazon EC2 cloud, where several instances were the agents and one instance acted as the server. We varied several parameters so that we could minimize encryption time. After experimentation, we produced the desired result within feasible time while sustaining the image quality. This work ensures data security in the public cloud, which was our main concern.
Big data is the next frontier for innovation, competition, and productivity. It is the foundation of major trends such as social networks, mobile devices, healthcare, and stock markets. Big data is efficiently stored in the cloud because of its high-volume, high-velocity, and high-variety data resources. Unauthorized user access is the gravest threat to big data in the cloud environment because of remote file storage. Attribute-Based Encryption (ABE) is an efficient access control procedure to guarantee end-to-end security for big data in the cloud. Most existing ABE schemes are based on bilinear pairing. In this paper, we construct a novel ABE scheme for big data in the cloud. Our proposed scheme is based on quadratic residues and attribute union, building on the fundamental theorem of arithmetic.
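The construction itself cannot be reproduced from the abstract; as background on its number-theoretic building block only, Euler's criterion decides quadratic residuosity modulo an odd prime.

```python
# Background only (not the proposed ABE scheme): Euler's criterion states that
# for an odd prime p, a is a quadratic residue mod p iff a^((p-1)/2) == 1 (mod p).
def is_quadratic_residue(a, p):
    return pow(a, (p - 1) // 2, p) == 1

p = 23
print([a for a in range(1, p) if is_quadratic_residue(a, p)])
# [1, 2, 3, 4, 6, 8, 9, 12, 13, 16, 18]  -- exactly (p-1)/2 residues
```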
The popularity of cloud hosting services also brings new security challenges: it has been reported that these services are increasingly utilized by miscreants for their malicious online activities. Mitigating this emerging threat, posed by such "bad repositories" (simply Bars), is challenging due to their different hosting strategy compared to traditional hosting services, the lack of direct observation of the repositories by those outside the cloud, the reluctance of the cloud provider to scan its customers' repositories without their consent, and the unique evasion strategies employed by the adversary. In this paper, we take the first step toward understanding and detecting this emerging threat. Using a small set of "seeds" (i.e., confirmed Bars), we identified a set of collective features from the websites they serve (e.g., attempts to hide Bars) which uniquely characterize the Bars. These features were utilized to build a scanner that detected over 600 Bars on leading cloud platforms like Amazon and Google, and 150K sites, including popular ones like groupon.com, using them. Highlights of our study include the pivotal roles played by these repositories in malicious infrastructures, along with other important discoveries, such as how the adversary exploited legitimate cloud repositories and why the adversary uses Bars in the first place, which have never been reported before. These findings bring such malicious services to the spotlight and contribute to a better understanding of, and ultimately eliminating, this new threat.
Cloud service providers typically adopt the multi-tenancy model to optimize resources usage and achieve the promised cost-effectiveness. Sharing resources between different tenants and the underlying complex technology increase the necessity of transparency and accountability. In this regard, auditing security compliance of the provider's infrastructure against standards, regulations and customers' policies takes on an increasing importance in the cloud to boost the trust between the stakeholders. However, virtualization and scalability make compliance verification challenging. In this work, we propose an automated framework that allows auditing the cloud infrastructure from the structural point of view while focusing on virtualization-related security properties and consistency between multiple control layers. Furthermore, to show the feasibility of our approach, we integrate our auditing system into OpenStack, one of the most used cloud infrastructure management systems. To show the scalability and validity of our framework, we present our experimental results on assessing several properties related to auditing inter-layer consistency, virtual machines co-residence, and virtual resources isolation.
Cloud has gained wide acceptance across the globe. Despite this wide acceptance and adoption of cloud computing, certain apprehensions and diffidence related to the safety and security of data still exist. The service provider needs to convince and demonstrate to the client the confidentiality of data on the cloud. This can be broadly translated to issues related to the process of identifying, developing, maintaining, and optimizing trust with clients regarding the services provided. Continuous demonstration, maintenance, and optimization of trust in the agreed-upon services affect the relationship with a client. The paper proposes a framework for the integration of trust at the IaaS level in the cloud. It proposes a novel method of generating a trust index factor using fuzzy logic, considering the performance and the agility of the feedback received.
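A hedged sketch of how a fuzzy-logic trust index could combine the two inputs the abstract names, measured performance and feedback agility; the membership functions, rule base, and output weights below are illustrative assumptions, not the paper's model.

```python
# Illustrative fuzzy trust index from two normalized (0..1) inputs.
def tri(x, a, b, c):
    """Triangular membership function peaking at b over (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trust_index(performance, feedback_agility):
    perf_high = tri(performance, 0.4, 1.0, 1.6)       # degree of "high" performance
    agil_high = tri(feedback_agility, 0.4, 1.0, 1.6)  # degree of "high" agility
    perf_low, agil_low = 1.0 - perf_high, 1.0 - agil_high
    # Rule strengths paired with output levels, combined by a Sugeno-style
    # weighted average as a simple defuzzification step.
    rules = [
        (min(perf_high, agil_high), 0.9),   # both high -> high trust
        (min(perf_high, agil_low), 0.6),
        (min(perf_low, agil_high), 0.5),
        (min(perf_low, agil_low), 0.2),     # both low  -> low trust
    ]
    strength = sum(w for w, _ in rules)
    return sum(w * level for w, level in rules) / strength if strength else 0.0

print(round(trust_index(performance=0.8, feedback_agility=0.6), 3))
```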