Biblio

Filters: Keyword is Kubernetes
2022-08-26
Ganguli, Mrittika, Ranganath, Sunku, Ravisundar, Subhiksha, Layek, Abhirupa, Ilangovan, Dakshina, Verplanke, Edwin.  2021.  Challenges and Opportunities in Performance Benchmarking of Service Mesh for the Edge. 2021 IEEE International Conference on Edge Computing (EDGE). :78–85.
As Edge deployments move closer to end devices, low-latency communication among Edge-aware applications is one of the key tenets of Edge service offerings. To simplify application development, service mesh architectures have emerged as the evolutionary architectural paradigm for handling the bulk of application communication logic, such as health checks, circuit breaking, secure communication, and resiliency, thereby decoupling application logic from the communication infrastructure. The latency-to-throughput ratio needs to be measurable for high-performance deployments at the Edge. Providing benchmark data for various edge deployments in bare-metal and virtual-machine-based scenarios, this paper examines the architectural complexities of deploying a service mesh in an edge environment and the performance impact on north-south and east-west communications into and out of the service mesh, using the popular open-source service mesh Istio/Envoy on a simple on-prem Kubernetes cluster. The performance results shared indicate the impact of the Kubernetes network stack combined with the Envoy data plane. Microarchitecture analyses reveal bottlenecks in Linux-based stacks from a CPU microarchitecture perspective and quantify the high cost of Linux iptables rule matching at scale. We conclude with the challenges in profiling and benchmarking and a call to action for deploying a service mesh in latency-sensitive Edge environments.
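The paper's benchmark harness is not reproduced in this excerpt; as a rough, hypothetical sketch of the kind of latency-to-throughput measurement it describes, one could probe the same service through and outside the Istio/Envoy sidecar path and compare percentiles and throughput. The endpoint URLs, request counts, and concurrency below are invented placeholders, not the authors' setup.

```python
# Minimal latency/throughput probe sketch (not the paper's benchmark harness).
# Assumes two hypothetical endpoints: one reached through the Istio/Envoy
# sidecar and one reached directly via a plain Kubernetes service.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

ENDPOINTS = {
    "with-mesh": "http://demo-app.mesh.svc.cluster.local:8080/ping",  # hypothetical
    "no-mesh": "http://demo-app.plain.svc.cluster.local:8080/ping",   # hypothetical
}

def timed_get(url):
    """Issue one GET and return its latency in milliseconds."""
    start = time.perf_counter()
    requests.get(url, timeout=5)
    return (time.perf_counter() - start) * 1000.0

def benchmark(url, requests_total=200, concurrency=8):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed_get, [url] * requests_total))
    elapsed = time.perf_counter() - start
    return {
        "p50_ms": latencies[len(latencies) // 2],
        "p99_ms": latencies[int(len(latencies) * 0.99) - 1],
        "mean_ms": statistics.mean(latencies),
        "throughput_rps": requests_total / elapsed,
    }

if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        print(name, benchmark(url))
```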
2022-01-31
Patel, Jatin, Halabi, Talal.  2021.  Optimizing the Performance of Web Applications in Mobile Cloud Computing. 2021 IEEE 6th International Conference on Smart Cloud (SmartCloud). :33–37.
Cloud computing adoption is on the rise. Many organizations have decided to shift their workloads to the cloud to benefit from its scalability, resilience, and cost-reduction characteristics. Mobile Cloud Computing (MCC) is an emerging computing paradigm that also provides many advantages to mobile users. Mobile devices rely on wireless internet connectivity, which entails limited bandwidth and network congestion. Hence, the primary focus of Web applications in MCC is on improving performance by quickly fulfilling customers' requests to improve service satisfaction. This paper investigates a new approach to caching data in these applications using Redis, an in-memory data store, to enhance Quality of Service. We highlight the two implementation approaches of fetching an application's data either directly from the database or from the cache. Our experimental analysis shows that, based on performance metrics such as response time, throughput, latency, and number of hits, the caching approach achieves better performance, speeding up data retrieval by up to four times. This improvement is significant for mobile devices given their limited network bandwidth and wireless connectivity.
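A minimal sketch of the cache-aside pattern the abstract contrasts with direct database access, using the redis-py client, is shown below; the key format, TTL, and the fetch_from_database helper are hypothetical stand-ins rather than the paper's implementation.

```python
# Cache-aside sketch (illustrative only): try Redis first, fall back to the
# database on a miss, then populate the cache with a TTL.
import json

import redis

cache = redis.Redis(host="localhost", port=6379, db=0)

def fetch_from_database(user_id):
    """Hypothetical stand-in for the slow database query."""
    return {"id": user_id, "name": "example"}

def get_user(user_id, ttl_seconds=300):
    key = "user:{}".format(user_id)
    cached = cache.get(key)
    if cached is not None:                 # cache hit: no database round trip
        return json.loads(cached)
    record = fetch_from_database(user_id)  # cache miss: go to the database
    cache.setex(key, ttl_seconds, json.dumps(record))
    return record
```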
2021-12-21
Rodigari, Simone, O'Shea, Donna, McCarthy, Pat, McCarry, Martin, McSweeney, Sean.  2021.  Performance Analysis of Zero-Trust Multi-Cloud. 2021 IEEE 14th International Conference on Cloud Computing (CLOUD). :730–732.
The Zero Trust security model makes it possible to secure cloud-native applications by encrypting all network communication and authenticating and authorizing every request. A service mesh can enable Zero Trust using a sidecar proxy without changes to the application code. To the best of our knowledge, no previous work has provided a performance analysis of Zero Trust in a multi-cloud environment. This paper proposes a multi-cloud framework and a testing workflow to analyse the performance of the data plane under load, and the impact on the control plane, when Zero Trust is enabled. The results of preliminary tests show that Istio reduces latency variability when responding to sequential HTTP requests. Results also reveal that overall CPU and memory usage can increase depending on the service mesh configuration and the cloud environment.
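The testing workflow itself is not detailed in the abstract; a minimal sketch of measuring the latency variability of sequential HTTP requests, of the kind the authors report, could look like the following, with the target URL and sample count as placeholders.

```python
# Sequential-request latency variability sketch (not the paper's workflow).
import statistics
import time

import requests

TARGET = "https://gateway.example.com/api/health"  # hypothetical ingress URL

def sequential_latencies(url, n=100):
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        requests.get(url, timeout=5)
        samples.append((time.perf_counter() - start) * 1000.0)
    return samples

if __name__ == "__main__":
    lat = sequential_latencies(TARGET)
    mean = statistics.mean(lat)
    stdev = statistics.stdev(lat)
    # The coefficient of variation captures "latency variability" when
    # comparing configurations (e.g. Zero Trust enabled vs. disabled).
    print("mean={:.1f} ms stdev={:.1f} ms cv={:.2f}".format(mean, stdev, stdev / mean))
```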
2021-12-20
Mahboob, Jamal, Coffman, Joel.  2021.  A Kubernetes CI/CD Pipeline with Asylo as a Trusted Execution Environment Abstraction Framework. 2021 IEEE 11th Annual Computing and Communication Workshop and Conference (CCWC). :0529–0535.
Modern commercial software development organizations frequently subscribe to a development and deployment pattern for releases known as continuous integration / continuous deployment (CI/CD). Kubernetes, a cluster-based distributed application platform, is often used to implement this pattern. While the abstract concept is fairly well understood, CI/CD implementations vary widely. Resources are scattered across on-premise and cloud-based services, and systems may not be fully automated. Additionally, while a development pipeline may aim to ensure the security of the finished artifact, said artifact may not be protected from outside observers or cloud providers during execution. This paper describes a complete CI/CD pipeline running on Kubernetes that addresses four gaps in existing implementations. First, the pipeline supports strong separation of duties, partitioning development, security, and operations (i.e., DevSecOps) roles. Second, automation reduces the need for a human interface. Third, resources are scoped to a Kubernetes cluster for portability across environments (e.g., public cloud providers). Fourth, deployment artifacts are secured with Asylo, a development framework for trusted execution environments (TEEs).
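The separation-of-duties idea can be pictured with a small, purely illustrative model of pipeline stages and the single role allowed to approve each; the stage names and roles below are invented, and the paper's actual pipeline is built from Kubernetes and Asylo components rather than from this Python sketch.

```python
# Separation-of-duties sketch (illustrative only, not the paper's pipeline):
# each stage is owned by exactly one DevSecOps role, so no single role can
# push an artifact from source to production on its own.
from dataclasses import dataclass

@dataclass(frozen=True)
class Stage:
    name: str
    owner_role: str  # the only role allowed to configure/approve this stage

PIPELINE = [
    Stage("build", "development"),
    Stage("scan-and-sign", "security"),
    Stage("package-enclave", "security"),   # Asylo-protected artifact (hypothetical stage name)
    Stage("deploy", "operations"),
]

def can_approve(role, stage):
    return role == stage.owner_role

# No single role may approve every stage of the pipeline.
for role in ("development", "security", "operations"):
    assert not all(can_approve(role, s) for s in PIPELINE)
```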
2021-05-25
Taha, Mohammad Bany, Chowdhury, Rasel.  2020.  GALB: Load Balancing Algorithm for CP-ABE Encryption Tasks in E-Health Environment. 2020 Fifth International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN). :165–170.
Security of personal data in e-healthcare has always been a challenging issue. The embedded and wearable devices used to collect personal and critical data from patients and users are sensitive in nature. Attribute-Based Encryption is believed to provide access control along with data security for data distributed among multiple parties. These resource-limited devices are capable of securing the data before sending it to the cloud, but doing so increases the overhead and latency of running the encryption algorithm; on top of that, if confidentiality is required, the latency grows further. In order to reduce latency and overhead, we propose a new load-balancing algorithm that distributes the data to nearby devices with available resources, which encrypt the data and send it to the cloud. In this article, we propose a load-balancing algorithm for e-health systems called GALB. Our algorithm is based on a Genetic Algorithm (GA). GALB distributes the tasks received by the main gateway among the devices in the e-health environment. The distribution strategy is based on the available resources on the devices, the distance between the gateway and those devices, the complexity (size) of the task, and the CP-ABE encryption policy length. In order to evaluate the algorithm's performance, we compare the near-optimal solution produced by GALB with the optimal solution produced by LP.
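GALB itself is only characterized by its inputs in the abstract (device resources, gateway-to-device distance, task size, and CP-ABE policy length), so the following is a generic genetic-algorithm sketch over those inputs with an invented cost function and made-up device/task data, not the authors' algorithm.

```python
# Generic GA sketch for assigning encryption tasks to nearby devices
# (illustrative only; the cost weights and data are invented, not GALB's).
import random

DEVICES = [  # (available_resources, distance_from_gateway)
    (4.0, 1.0), (2.0, 0.5), (8.0, 2.5), (1.0, 0.2),
]
TASKS = [  # (task_size, cp_abe_policy_length)
    (3.0, 5), (1.0, 2), (6.0, 8), (2.0, 3), (4.0, 6),
]

def cost(assignment):
    """Lower is better: work scales with size and policy length, is divided
    by the chosen device's resources, and is penalised by distance."""
    total = 0.0
    for task_idx, dev_idx in enumerate(assignment):
        size, policy_len = TASKS[task_idx]
        resources, distance = DEVICES[dev_idx]
        total += (size * policy_len) / resources + distance
    return total

def evolve(pop_size=40, generations=200, mutation_rate=0.1):
    population = [[random.randrange(len(DEVICES)) for _ in TASKS]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=cost)
        parents = population[: pop_size // 2]            # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TASKS))
            child = a[:cut] + b[cut:]                    # one-point crossover
            if random.random() < mutation_rate:          # mutation
                child[random.randrange(len(TASKS))] = random.randrange(len(DEVICES))
            children.append(child)
        population = parents + children
    best = min(population, key=cost)
    return best, cost(best)

if __name__ == "__main__":
    assignment, best_cost = evolve()
    print("task -> device:", assignment, "cost:", round(best_cost, 2))
```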
2020-12-11
Sabek, I., Chandramouli, B., Minhas, U. F..  2019.  CRA: Enabling Data-Intensive Applications in Containerized Environments. 2019 IEEE 35th International Conference on Data Engineering (ICDE). :1762–1765.
Today, a modern data center hosts a wide variety of applications comprising batch, interactive, machine learning, and streaming workloads. In this paper, we factor out the commonalities in a large majority of these applications into a generic dataflow layer called Common Runtime for Applications (CRA). In parallel, containerization technologies (e.g., Docker) have taken serious hold of cloud-scale data centers, with direct implications for building the next generation of data center applications. Container orchestrators (e.g., Kubernetes) have made deployment much easier, and they solve many infrastructure-level problems, e.g., service discovery, auto-restart, and replication. For best-in-class performance, there is a need to marry next-generation applications with containerization technologies. To that end, CRA leverages and builds upon the containerization and resource orchestration capabilities of Kubernetes/Docker and makes it easy to build a wide range of cloud-edge applications on top. To the best of our knowledge, we are the first to present a cloud-native runtime for building data center applications. We show the efficiency of CRA through various micro-benchmarking experiments.
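CRA's programming interface is not shown in the abstract; as a loose, hypothetical analogue, a generic dataflow layer can be pictured as named vertices connected by channels that are independent of where the vertices are scheduled. The class and function names below are invented for illustration and are not CRA's API.

```python
# Generic dataflow-layer sketch (invented API, not CRA's): named vertices
# exchange messages over named channels, independent of where the vertices
# run (e.g. as containers scheduled by Kubernetes).
from collections import defaultdict
from queue import Queue

class DataflowGraph:
    def __init__(self):
        self.vertices = {}                  # name -> callable(message) -> list of outputs
        self.edges = defaultdict(list)      # name -> downstream vertex names
        self.inboxes = defaultdict(Queue)   # name -> pending messages

    def add_vertex(self, name, fn):
        self.vertices[name] = fn

    def connect(self, src, dst):
        self.edges[src].append(dst)

    def send(self, name, message):
        self.inboxes[name].put(message)

    def step(self):
        """Drain each vertex's inbox once and forward outputs downstream."""
        for name, fn in self.vertices.items():
            while not self.inboxes[name].empty():
                for out in fn(self.inboxes[name].get()):
                    for dst in self.edges[name]:
                        self.send(dst, out)

# Usage: a two-stage pipeline (tokenise -> count each word once).
def tokenise(line):
    return line.split()

def count(word):
    print(word, 1)
    return []

g = DataflowGraph()
g.add_vertex("tokenise", tokenise)
g.add_vertex("count", count)
g.connect("tokenise", "count")
g.send("tokenise", "hello cloud native world")
g.step()
```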
Liu, F., Li, J., Wang, Y., Li, L..  2019.  Kubestorage: A Cloud Native Storage Engine for Massive Small Files. 2019 6th International Conference on Behavioral, Economic and Socio-Cultural Computing (BESC). :1–4.
Cloud Native, the emerging computing infrastructure, has become a new trend in cloud computing, especially after the development of containerization technologies such as Docker and LXD and of orchestration systems for them like Kubernetes and Swarm. With the growing popularity of Cloud Native, the following problems have been raised: (i) most Cloud Native applications are designed to make full use of the cloud platform, but their file storage has not been fully optimized to take advantage of it; (ii) the traditional file system is designed as a utility for storing and retrieving files and is usually built into the operating system kernel, but when placed in a large-scale setting, such as a network storage server shared by thousands of computing instances and storing millions of files, it becomes slow and even unstable; (iii) most storage solutions use metadata for faster tracking of files, but the metadata itself takes up a lot of space and its capacity is usually limited, and if the file system stores metadata directly on disk without caching, tracking massive numbers of small files becomes much slower; (iv) traditional object storage solutions do not provide enough features, such as caching and automatic replication, to be practical in the cloud. This paper proposes a new storage engine based on the well-known Haystack storage engine, optimized in terms of service discovery and automated fault tolerance to make it more suitable for Cloud Native infrastructure, deployment, and applications. We use the object storage model to meet large-scale, high-frequency file storage needs, offering a simple and unified set of APIs for applications to access. We also take advantage of Kubernetes' sophisticated and automated toolchains to make cloud storage easier to deploy, more flexible to scale, and more stable to run.
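The engine's internals are not given in the abstract; the Haystack idea it builds on, packing many small files into one large volume file and keeping a compact in-memory index of offsets so reads avoid per-file metadata lookups, can be illustrated with a toy sketch like the one below (file names and layout are hypothetical, not Kubestorage's format).

```python
# Toy Haystack-style store (illustrative only): many small objects are
# appended to a single volume file, and an in-memory index maps each key
# to (offset, size) so a read is one seek instead of a directory walk.
import os

class TinyHaystack:
    def __init__(self, volume_path="volume.dat"):
        self.volume_path = volume_path
        self.index = {}                      # key -> (offset, size)
        open(volume_path, "ab").close()      # ensure the volume file exists

    def put(self, key, data):
        offset = os.path.getsize(self.volume_path)
        with open(self.volume_path, "ab") as vol:
            vol.write(data)                  # appended at the end of the volume
        self.index[key] = (offset, len(data))

    def get(self, key):
        offset, size = self.index[key]
        with open(self.volume_path, "rb") as vol:
            vol.seek(offset)
            return vol.read(size)

# Usage
store = TinyHaystack()
store.put("photo-1", b"\x89PNG...")
assert store.get("photo-1") == b"\x89PNG..."
```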
2020-08-28
Chen, Chien-An.  2019.  With Great Abstraction Comes Great Responsibility: Sealing the Microservices Attack Surface. 2019 IEEE Cybersecurity Development (SecDev). :144–144.

While the IT industry is embracing cloud-native technologies, migrating from a monolithic architecture to a service-oriented architecture is not a trivial process. It involves a great deal of dissection and abstraction. The layers of abstraction designed to simplify development quickly become barriers to visibility and sources of misconfiguration. This complexity may give microservices a larger attack surface than monolithic applications. This talk presents a microservices threat model that uncovers the attack vectors hidden in each abstraction layer. Scenarios of security breaches in microservices platforms are discussed, followed by countermeasures to close these attack vectors. Finally, a decision-making process for architecting secure microservices is presented.

2020-08-14
Hussain, Fatima, Li, Weiyue, Noye, Brett, Sharieh, Salah, Ferworn, Alexander.  2019.  Intelligent Service Mesh Framework for API Security and Management. 2019 IEEE 10th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON). :0735–0742.
With the advancements in enterprise-level business development, the demand for new applications and services is overwhelming. For the development and delivery of such applications and services, enterprise businesses rely on Application Programming Interfaces (APIs). API management and classification is a cumbersome task considering the rapid increase in the number of APIs and API-to-API calls. API mashups, domain APIs, and the API service mesh are a few recommended techniques for easing API creation, management, and monitoring. The API service mesh is one such technique, in which the service plane and the control plane are separated to improve efficiency as well as security. In this paper, we propose and implement a security framework for the creation of a secure API service mesh using Istio and Kubernetes. We then propose a smart association model for automatically associating new APIs with already existing categories in the service mesh. To the best of our knowledge, this smart association model is the first of its kind.
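The smart association model itself is not described in this excerpt; as a crude stand-in, the sketch below associates a new API with an existing category by token overlap between descriptions. The categories, descriptions, and scoring are invented for illustration and are not the authors' model.

```python
# Naive token-overlap association sketch (invented stand-in, not the paper's
# smart association model): pick the existing category whose description
# shares the most words with the new API's description.
def tokens(text):
    return set(text.lower().split())

CATEGORIES = {  # hypothetical existing service-mesh categories
    "payments": "payment card transaction billing invoice",
    "identity": "login token authentication user profile",
    "notifications": "email sms push message alert",
}

def associate(api_description):
    desc = tokens(api_description)
    def overlap(category):
        cat = tokens(CATEGORIES[category])
        return len(desc & cat) / len(desc | cat)   # Jaccard similarity
    return max(CATEGORIES, key=overlap)

print(associate("issue a refund for a card transaction"))   # -> payments
```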
2019-12-16
Mikkilineni, Rao, Morana, Giovanni.  2019.  Post-Turing Computing, Hierarchical Named Networks and a New Class of Edge Computing. 2019 IEEE 28th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE). :82–87.

Advances in our understanding of the nature of cognition in its myriad forms (Embodied, Embedded, Extended, and Enactive) displayed in all living beings (cellular organisms, animals, plants, and humans), together with new theories of information, info-computation, and knowledge, are throwing light on how we should build software systems in the digital universe that mimic and interact with intelligent, sentient, and resilient beings in the physical universe. Recent attempts to infuse cognition into computing systems to push the boundaries of the Church-Turing thesis have led to new computing models that mimic biological systems in encoding knowledge structures, using both algorithms executed on stored-program control machines and neural networks. This paper presents a new model and implements an application as a hierarchical named network composed of microservices, creating a managed process workflow by enabling dynamic configuration and reconfiguration of the microservice network. We demonstrate the resiliency, efficiency, and scaling of the named microservice network using a novel edge cloud platform from Platina Systems. The platform eliminates the need for a virtual machine overlay and provides high performance and low latency with an L3-based 100 GbE network and SSD support with RDMA and NVMeoE. The hierarchical named microservice network, using the Kubernetes provisioning stack, provides all the cloud features such as elasticity, autoscaling, self-repair, and live migration without reboot. The model is derived from a recent theoretical framework for the unification of different models of computation using "Structural Machines," which are shown to simulate Turing machines and inductive Turing machines and are proved to be more efficient than Turing machines. The structural machine framework, with a hierarchy of controllers managing the named service connections, provides dynamic reconfiguration of the service network, from browsers to databases, to address rapid fluctuations in the demand for or the availability of resources without having to reconfigure IP-address-based networks.
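As a very loose illustration of one idea in the abstract, a controller reconfiguring named service connections without touching the addresses callers use, consider the sketch below; the registry, controller, and endpoints are invented for illustration and are not the Platina or structural-machine implementation.

```python
# Toy named-service network sketch (invented, not the paper's implementation):
# services address each other by name, and a controller can re-point a name
# to a different endpoint without the callers changing anything.
class NameRegistry:
    def __init__(self):
        self.table = {}                    # service name -> current endpoint

    def register(self, name, endpoint):
        self.table[name] = endpoint

    def resolve(self, name):
        return self.table[name]

class Controller:
    """One level of a hierarchy of controllers managing named connections."""
    def __init__(self, registry):
        self.registry = registry

    def reconfigure(self, name, new_endpoint):
        # e.g. shift load or recover from failure by re-binding the name
        self.registry.register(name, new_endpoint)

registry = NameRegistry()
registry.register("frontend", "10.0.0.1:80")
registry.register("database", "10.0.1.5:5432")

controller = Controller(registry)
controller.reconfigure("database", "10.0.2.7:5432")   # live re-binding
print(registry.resolve("database"))                   # callers are unchanged
```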