Biblio

Found 138 results

Filters: Keyword is storage management
2023-03-31
Alzarog, Jellalah, Almhishi, Abdalwart, Alsunousi, Abubaker, Abulifa, Tareg Abubaker, Eltarjaman, Wisam, Sati, Salem Omar.  2022.  POX Controller Evaluation Based On Tree Topology For Data Centers. 2022 International Conference on Data Analytics for Business and Industry (ICDABI). :67–71.
Software Defined Networking (SDN) is a solution for Data Center Networks (DCN). It offers centralized control that simplifies management and mitigates the big-data issues of storage management and data analysis. This paper investigates the performance of deploying an SDN controller in a DCN. Using the Mininet emulator, the paper considers a tree topology with different numbers of hosts and evaluates DCN performance under Python-based SDN controllers. The evaluation compares the POX and RYU controllers as DCN solutions in terms of throughput, delay, overhead, and convergence time. The results show that POX outperforms the RYU controller and is the better choice for DCN.
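A minimal sketch of such an emulation setup (not the authors' scripts): it builds a Mininet tree topology and points it at an external SDN controller such as POX or RYU, assuming Mininet is installed and a controller is already listening on 127.0.0.1:6633; the depth, fanout, and the evaluate() helper are illustrative choices.

```python
# Minimal Mininet sketch: tree topology attached to a remote SDN controller.
# Assumes Mininet is installed and a POX/RYU controller listens on 127.0.0.1:6633.
from mininet.net import Mininet
from mininet.topolib import TreeTopo
from mininet.node import RemoteController
from mininet.log import setLogLevel


def evaluate(depth=2, fanout=4, ctrl_ip='127.0.0.1', ctrl_port=6633):
    topo = TreeTopo(depth=depth, fanout=fanout)            # fanout**depth hosts
    net = Mininet(topo=topo,
                  controller=lambda name: RemoteController(name, ip=ctrl_ip,
                                                           port=ctrl_port))
    net.start()
    loss = net.pingAll()                                   # connectivity / loss figure
    h_first, h_last = net.hosts[0], net.hosts[-1]
    net.iperf((h_first, h_last))                           # throughput between edge hosts
    net.stop()
    return loss


if __name__ == '__main__':
    setLogLevel('info')
    evaluate()
```

Here pingAll() gives a quick loss figure and iperf() a throughput sample between two edge hosts; repeating the run against POX and RYU in turn yields directly comparable numbers.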
2022-05-09
Zhou, Rui, He, Mingxing, Chen, Zhimin.  2021.  Certificateless Public Auditing Scheme with Data Privacy Preserving for Cloud Storage. 2021 IEEE 6th International Conference on Cloud Computing and Big Data Analytics (ICCCBDA). :675–682.
With the rapid development of cloud storage services, users can offload heavy storage and computational costs to the cloud to reduce local resource and energy consumption. While people enjoy the desirable benefits of the cloud storage service, serious security concerns about data outsourcing have been raised. In the cloud storage service, the data owner loses physical control of the data, which is fully controlled by the cloud server. As such, the integrity of outsourced data is put at risk in practice. Remote data integrity checking (RDIC) is an effective solution for checking the integrity of uploaded data. However, most RDIC schemes rely on traditional public key infrastructure (PKI), which incurs communication and storage overhead due to certificate management. Identity-based RDIC schemes avoid this storage management burden but suffer from key escrow. To solve these problems, we propose a practical certificateless RDIC scheme. Moreover, many public auditing schemes authorize a third-party auditor (TPA) to check the integrity of remote data, and the TPA is not fully trusted; we therefore take data privacy into account. The proposed scheme not only overcomes the above deficiencies but also preserves data privacy against the TPA. Our theoretical analyses prove that the mechanism is correct and secure, and that it can audit the integrity of cloud data efficiently.
2022-02-10
Rahman Mahdi, Md Safiur, Sadat, Md Nazmus, Mohammed, Noman, Jiang, Xiaoqian.  2020.  Secure Count Query on Encrypted Heterogeneous Data. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :548–555.
Cost-effective and efficient sequencing technologies have resulted in massive genomic data availability. To compute on a large-scale genomic dataset, it is often necessary to outsource the dataset to the cloud. To protect data confidentiality, data owners encrypt sensitive data before outsourcing, and outsourcing allows them to eliminate the storage management problem. Since genome data is large in volume, secure execution of researchers' queries is challenging. In this paper, we propose a method to securely perform count queries on datasets containing genotype, phenotype, and numeric data. Our method modifies the prefix tree proposed by Hasan et al. [1] to incorporate numerical data. The proposed method guarantees data privacy, output privacy, and query privacy. We preserve security through encryption and garbled circuits. For a query over a 100 single-nucleotide polymorphism (SNP) sequence, query execution takes approximately 3.5 minutes on a database of 1,500 records. To the best of our knowledge, this is the first proposed secure framework that addresses heterogeneous biomedical data including numeric attributes.
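The prefix tree at the heart of the count query can be illustrated with a small plaintext sketch; the paper's scheme evaluates this traversal under encryption with garbled circuits, which is not reproduced here, and the record encoding below is purely illustrative.

```python
# Plaintext sketch of a prefix-tree count query (illustration only; the paper's
# scheme runs the equivalent traversal over encrypted data with garbled circuits).
class Node:
    def __init__(self):
        self.children = {}
        self.count = 0            # how many records pass through this node


def build_prefix_tree(records):
    root = Node()
    for record in records:
        node = root
        node.count += 1
        for symbol in record:
            node = node.children.setdefault(symbol, Node())
            node.count += 1
    return root


def count_query(root, prefix):
    node = root
    for symbol in prefix:
        if symbol not in node.children:
            return 0
        node = node.children[symbol]
    return node.count


# Toy genotype records: each character is one SNP call (0, 1 or 2).
records = ["012", "010", "012", "112"]
tree = build_prefix_tree(records)
print(count_query(tree, "01"))    # 3 records start with SNP calls 0, 1
```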
2021-03-29
Khan, S., Jadhav, A., Bharadwaj, I., Rooj, M., Shiravale, S..  2020.  Blockchain and the Identity based Encryption Scheme for High Data Security. 2020 Fourth International Conference on Computing Methodologies and Communication (ICCMC). :1005—1008.

Using blockchain technology to store the private documents of individuals helps make data more reliable and secure, preventing data loss and unauthorized access. The consensus algorithm, together with hash algorithms, maintains the integrity of the data while providing authentication and authorization. The paper combines blockchain with the Identity-Based Encryption management concept. The identity-based management system allows encryption of the user's data as well as their identity, protecting them from identity theft and fraud. These two technologies combined result in a more secure way of storing data and protecting the privacy of the user.

Maklachkova, V. V., Dokuchaev, V. A., Statev, V. Y..  2020.  Risks Identification in the Exploitation of a Geographically Distributed Cloud Infrastructure for Storing Personal Data. 2020 International Conference on Engineering Management of Communication and Technology (EMCTECH). :1—6.

Throughout the life cycle of any technical project, the enterprise needs to assess the risks associated with its development, commissioning, operation and decommissioning. This article defines the task of researching risks related to the operation of a data storage subsystem in the cloud infrastructure of a geographically distributed company, and the tools that are required for this. Analysts point out that, compared to 2018, in 2019 there were 3.5 times more cases of confidential information leaks from storages on unprotected (freely accessible due to incorrect configuration) servers in cloud services. The total number of compromised personal data and payment information records increased 5.4 times compared to 2018 and amounted to more than 8.35 billion records. Moreover, the share of leaks of payment information has decreased, but the percentage of leaks of personal data has grown and accounts for almost 90% of all leaks from cloud storage. On average, each unsecured service identified resulted in 33.7 million personal data records being leaked. Leaks are mainly related to misconfiguration of services and stored resources, as well as human factors. These impacts can be minimized by improving the skills of cloud storage administrators and regularly auditing storage. Despite its seeming insecurity, the cloud is a reliable way of storing data; at the same time, leaks still occur. According to Kaspersky Lab, every tenth (11%) data leak from the cloud was made possible by the actions of the provider, while a third of all cyber incidents in the cloud (31% in Russia and 33% worldwide) were due to gullible company employees falling for social engineering techniques. Minimizing the risks associated with the storage of personal data is one of the main tasks when operating a company's cloud infrastructure.

2021-03-22
Fan, X., Zhang, F., Turamat, E., Tong, C., Wu, J. H., Wang, K..  2020.  Provenance-based Classification Policy based on Encrypted Search. 2020 2nd International Conference on Industrial Artificial Intelligence (IAI). :1–6.
As an important type of cloud data, digital provenance is attracting increasing attention for improving system performance. Currently, provenance has been employed to provide cues for access control and to estimate data quality. However, provenance itself might also be sensitive information; therefore, it may be encrypted and stored in the cloud. In this paper, we provide a mechanism to classify cloud documents by searching for specific keywords in their encrypted provenance, and we prove that our scheme achieves semantic security. In terms of application of the proposed techniques, since files are classified and stored separately in the cloud to facilitate regulation and security protection, classification policies can use provenance as a condition for determining the category of a document. For example, a simple policy might state that documents that have been reviewed twice can be classified as “publicly accessible”, meaning they can be accessed by the public.
Wang, X., Chi, Y., Zhang, Y..  2020.  Traceable Ciphertext Policy Attribute-based Encryption Scheme with User Revocation for Cloud Storage. 2020 International Conference on Computer Engineering and Application (ICCEA). :91–95.
Ciphertext-policy attribute-based encryption (CP-ABE) plays an increasingly important role in fine-grained access control for cloud storage. However, existing solutions cannot balance user identity tracing and user revocation. In this paper, we propose a CP-ABE scheme that supports associated revocation and traceability. The scheme uses identity directory technology to realize both single-user revocation and associated-user revocation, and ciphertext re-encryption guarantees the forward security of revocation without updating private keys. In addition, the identity of a user can be accurately traced from the decryption private key, effectively solving the problem of key abuse. The scheme is proved secure and traceable under the standard model, and it effectively controls computational and storage costs while maintaining its functional advantages. It is suitable for practical scenarios of tracking audits and user revocation.
Singh, P., Saroj, S. K..  2020.  A Secure Data Dynamics and Public Auditing Scheme for Cloud Storage. 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS). :695–700.
Cloud computing is an evolving technology that provides data storage and very fast computing services at low cost. All data stored in the cloud is handled by cloud service providers, the caretakers of the cloud. Data owners are therefore concerned about the authenticity and reliability of the data stored in the cloud, since data can be misappropriated or altered by an unauthorized user. This paper proposes a secure public auditing scheme that employs a third-party auditor to verify the privacy, reliability, and integrity of data stored in the cloud. The proposed auditing scheme combines the AES-256 algorithm for encryption, SHA-512 for integrity checks, and RSA-15360 for public-key encryption, and it supports data dynamics operations, chiefly insertion, deletion, and modification.
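To make the role of each primitive concrete, the hedged sketch below encrypts a file block with AES-256-GCM, records a SHA-512 digest for the auditor's integrity check, and wraps the AES key with RSA-OAEP. It is not the paper's protocol; it assumes the Python cryptography package, and a 2048-bit RSA key stands in for RSA-15360, which would be impractically slow to generate here.

```python
# Hedged sketch of the building blocks named in the abstract (not the paper's
# exact auditing protocol). Requires the 'cryptography' package.
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes


def protect_block(block: bytes, rsa_public_key):
    data_key = AESGCM.generate_key(bit_length=256)        # AES-256 data key
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, block, None)
    digest = hashlib.sha512(ciphertext).hexdigest()        # kept by the third-party auditor
    wrapped_key = rsa_public_key.encrypt(                  # RSA protects the AES key
        data_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    return nonce, ciphertext, digest, wrapped_key


# 2048-bit key used here only because RSA-15360 generation is impractical in a demo.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
nonce, ct, tag, wrapped = protect_block(b"cloud file block", private_key.public_key())
print("SHA-512 integrity tag:", tag[:32], "...")
```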
2021-03-15
Bresch, C., Lysecky, R., Hély, D..  2020.  BackFlow: Backward Edge Control Flow Enforcement for Low End ARM Microcontrollers. 2020 Design, Automation Test in Europe Conference Exhibition (DATE). :1606–1609.
This paper presents BackFlow, a compiler-based toolchain that enforces indirect backward edge control flow integrity for low-end ARM Cortex-M microprocessors. BackFlow is implemented within the Clang/LLVM compiler and supports the ARM instruction set and its subset Thumb. The control flow integrity generated by the compiler relies on a bitmap, where each set bit indicates a valid pointer destination. The efficiency of the framework is benchmarked using an STM32 NUCLEO F446RE microcontroller. The obtained results show that the control flow integrity solution incurs an execution time overhead ranging from 1.5 to 4.5%.
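The bitmap check itself is simple enough to model in a few lines: one bit per word-aligned code address, set at instrumentation time for every legitimate return site and tested before each return. The sketch below is only a conceptual Python model of that lookup, not BackFlow's ARM/Thumb instrumentation; the 4-byte alignment and sizes are assumptions.

```python
# Conceptual model of a backward-edge CFI bitmap (illustration only; BackFlow
# emits the equivalent check as inlined ARM/Thumb code from an LLVM pass).
class ReturnSiteBitmap:
    WORD = 4                                   # assumed 4-byte instruction alignment

    def __init__(self, code_size_bytes):
        words = code_size_bytes // self.WORD
        self.bits = bytearray((words + 7) // 8)

    def mark_valid(self, addr):                # done at build time by the compiler pass
        w = addr // self.WORD
        self.bits[w // 8] |= 1 << (w % 8)

    def is_valid(self, addr):                  # checked before every function return
        w = addr // self.WORD
        return bool(self.bits[w // 8] & (1 << (w % 8)))


bitmap = ReturnSiteBitmap(code_size_bytes=4096)
bitmap.mark_valid(0x120)                       # legitimate call-site return address
assert bitmap.is_valid(0x120)
assert not bitmap.is_valid(0x124)              # a corrupted return address is rejected
```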
2021-02-23
Patil, A., Jha, A., Mulla, M. M., Narayan, D. G., Kengond, S..  2020.  Data Provenance Assurance for Cloud Storage Using Blockchain. 2020 International Conference on Advances in Computing, Communication Materials (ICACCM). :443—448.

Cloud forensics investigates crimes committed over cloud infrastructures, such as SLA violations and storage privacy breaches. Cloud storage forensics is the process of recording the history of the creation of, and operations performed on, a cloud data object and investigating it. Secure data provenance in the cloud is crucial for data accountability, forensics, and privacy. Towards this, we present a cloud-based data provenance framework using blockchain, which traces data record operations and generates provenance data. Initially, we design a Dropbox-like application using AWS S3 storage. The application provides cloud storage for the students and faculty of the university, thereby making the storage and sharing of work and resources efficient. Later, we design a data provenance mechanism for users' confidential files using the Ethereum blockchain. We also evaluate the proposed system using performance parameters such as query and transaction latency while varying the load and the number of nodes in the blockchain network.

2021-02-15
Rabieh, K., Mercan, S., Akkaya, K., Baboolal, V., Aygun, R. S..  2020.  Privacy-Preserving and Efficient Sharing of Drone Videos in Public Safety Scenarios using Proxy Re-encryption. 2020 IEEE 21st International Conference on Information Reuse and Integration for Data Science (IRI). :45–52.
Unmanned Aerial Vehicles (UAVs), also known as drones, are used in many applications where they record or stream videos. One interesting case is Intelligent Transportation Systems (ITS) and public safety applications, where drones record videos and send them to a control center for further analysis. These videos are shared with various clients such as law enforcement or emergency personnel. In such cases, the recordings might include faces of civilians or other sensitive information that poses privacy concerns. While the video can be encrypted before it is stored in the cloud, it can still be accessed once the keys are exposed to third parties, which is insecure. To prevent this, we propose a proxy re-encryption based sharing scheme that enables third parties to access only limited videos without holding the original encryption key. The costly pairing operations of proxy re-encryption are avoided to allow rapid access and delivery of the surveillance videos to third parties. Key management is handled by a trusted control center, which acts as the proxy to re-encrypt the data. We implemented and tested the approach in a realistic simulation environment using different resolutions under ns-3. The implementation results and comparisons indicate an acceptable overhead while still preserving the privacy of drivers and passengers.
Chen, Z., Chen, J., Meng, W..  2020.  A New Dynamic Conditional Proxy Broadcast Re-Encryption Scheme for Cloud Storage and Sharing. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :569–576.
The security of cloud storage and sharing has been a concern for years, since a semi-trusted party, the Cloud Service Provider (CSP), has access to user data on the cloud server and may leak users' private data without constraint. Intuitively, an efficient way to protect cloud data is to encrypt it before uploading it to the cloud server. However, a further requirement, data sharing, makes it difficult to manage secret keys among data owners and target users. Conditional proxy broadcast re-encryption (CPBRE) has therefore been proposed in recent years to provide data encryption and sharing for the cloud environment. It enables a data owner to upload encrypted data to the cloud server, where a third-party proxy can re-encrypt the cloud data under a certain condition into a new ciphertext so that target users can decrypt it with their own private keys. But few CPBRE schemes are applicable to a dynamic cloud environment. In this paper, we propose a new dynamic conditional proxy broadcast re-encryption scheme that is dynamic in both the system user set and the target user group. The initialization phase does not require a fixed set of system users, so users can join or leave the system at any time, and the data owner can dynamically change the group of users he wants to share data with. We also provide a security analysis proving our scheme secure against the CSP, and a performance analysis showing that our scheme surpasses other schemes in terms of functionality and resource cost.
Zhang, Z., Wang, Z., Li, S..  2020.  Research and Implementation on an Efficient Public Key Encryption Algorithm with Keyword Search Scheme. 2020 IEEE 5th International Conference on Cloud Computing and Big Data Analytics (ICCCBDA). :314–319.
With the rapid development of network storage services, many companies and individuals store data on third-party servers. Encryption is an effective means of protecting the confidentiality and privacy of data, but retrieval over encrypted data is a very difficult task; searchable encryption has therefore become a hot topic in recent years. The paper first introduces existing searchable encryption algorithms, then studies the new PEKS scheme (NPEKS) and analyzes its performance and efficiency. Finally, based on NPEKS and incorporating attribute-based encryption, a scheme suitable for corporate cloud storage environments is designed. This scheme not only has the advantages of simplicity and efficiency but also enables confidential retrieval of third-party data. Experiments show that, compared with existing PEKS schemes and other improved schemes, this scheme is simple and highly efficient, while its security is the same as that of existing PEKS schemes.
2021-01-25
ManJiang, D., Kai, C., ZengXi, W., LiPeng, Z..  2020.  Design of a Cloud Storage Security Encryption Algorithm for Power Bidding System. 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). 1:1875–1879.
To solve the problems of poor security and performance caused by traditional encryption algorithms in the cloud data storage of a power bidding system, we propose a hybrid encryption method based on symmetric and asymmetric encryption. In this method, the plaintext upload file is first divided into several blocks according to a set proportion; the large file block is then encrypted with the symmetric algorithm AES to ensure encryption performance, and the small file block and the AES key are encrypted with the asymmetric algorithm ECC to ensure encryption strength and the security of key transmission. Finally, the ciphertext file is generated and stored in the cloud storage environment to prevent sensitive file pieces from being stolen or destroyed. The experimental results show that the hybrid encryption method improves the attack resistance of cloud storage files, ensures the security of file storage, and maintains high efficiency in file upload and download.
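A hedged sketch of the split-and-wrap idea: the bulk block goes under AES-256-GCM, while the small block and the AES key are protected with an ECIES-style wrap (ephemeral ECDH plus a KDF), which is how encryption with ECC is commonly realized. The 90/10 split, the SECP256R1 curve, and the function names are illustrative assumptions, not the paper's parameters; the Python cryptography package is assumed.

```python
# Sketch of the hybrid split-and-wrap idea (not the paper's exact construction).
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def hybrid_encrypt(plaintext: bytes, receiver_public_key, split_ratio=0.9):
    cut = int(len(plaintext) * split_ratio)
    big_block, small_block = plaintext[:cut], plaintext[cut:]

    aes_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    big_ct = AESGCM(aes_key).encrypt(nonce, big_block, None)      # fast bulk encryption

    # ECC part: ephemeral ECDH -> HKDF -> AES-GCM over (small block || AES key)
    ephemeral = ec.generate_private_key(ec.SECP256R1())
    shared = ephemeral.exchange(ec.ECDH(), receiver_public_key)
    wrap_key = HKDF(algorithm=hashes.SHA256(), length=32,
                    salt=None, info=b"hybrid-wrap").derive(shared)
    wrap_nonce = os.urandom(12)
    small_ct = AESGCM(wrap_key).encrypt(wrap_nonce, small_block + aes_key, None)

    return nonce, big_ct, wrap_nonce, small_ct, ephemeral.public_key()


receiver = ec.generate_private_key(ec.SECP256R1())
parts = hybrid_encrypt(b"power bidding document " * 50, receiver.public_key())
```

Only the recipient holding the matching EC private key can redo the ECDH exchange, recover the wrap key, and thus obtain the AES key needed for the bulk block.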
2021-01-11
Wang, W.-C., Ho, C.-C., Chang, Y.-M., Chang, Y.-H..  2020.  Challenges and Designs for Secure Deletion in Storage Systems. 2020 Indo – Taiwan 2nd International Conference on Computing, Analytics and Networks (Indo-Taiwan ICAN). :181–189.
Data security has risen to be one of the most critical concerns of computer professionals. Tighter legal requirements now exist for protecting user data from unauthorized use and for both preserving and erasing/sanitizing data records to meet compliance requirements. To meet these data security requirements, many secure (data) deletion techniques have been proposed to address security concerns at different system layers. This paper surveys the state-of-the-art secure deletion techniques designed to pursue higher efficiency, verifiability, and portability for emerging types of hard disk drives and flash-based solid-state drives. Meanwhile, the pros and cons of implementing secure deletion at different system layers are also discussed, so as to assist in pursuing better secure deletion designs for future storage systems.
2020-12-28
Riaz, S., Khan, A. H., Haroon, M., Latif, S., Bhatti, S..  2020.  Big Data Security and Privacy: Current Challenges and Future Research perspective in Cloud Environment. 2020 International Conference on Information Management and Technology (ICIMTech). :977—982.

Cloud computing is an Internet-based technology that has emerged rapidly in the last few years due to the popular and in-demand services required by various institutions, organizations, and individuals. Structured, unstructured, and semi-structured data are being transferred to cloud servers at a record pace. These institutions, businesses, and organizations are shifting ever larger workloads to cloud servers; given the high cost, space, and maintenance burdens of big data, cloud computing has become a natural choice for data storage. In a cloud environment, data is clearly not yet completely secure from inside and outside attacks and intrusions, because cloud servers are under the control of a third party. Data security therefore becomes an important aspect when sensitive data is stored in a cloud environment. In this paper, we give an overview of the characteristics and state of the art of big data; discuss the top data security and privacy threats, open issues, and current challenges and their impact on business from a future research perspective; and review and analyze previous and recent frameworks and architectures for data security that are continuously being developed against threats, to improve how data is kept and stored in the cloud environment.

2020-12-15
Kleckler, M., Mohajer, S..  2020.  Secure Determinant Codes: Type-II Security. 2020 IEEE International Symposium on Information Theory (ISIT). :652—657.

The secure exact-repair regenerating codes are studied for distributed storage systems with parameters (n

Prajapati, S. A., Deb, S., Gupta, M. K..  2020.  On Some Universally Good Fractional Repetition Codes. 2020 International Conference on COMmunication Systems NETworkS (COMSNETS). :404—411.
Data storage in Distributed Storage Systems (DSS) is a multidimensional optimization problem. Using network coding, one wants to provide reliability, scalability, security, reduced storage overhead, reduced repair bandwidth, and minimal disk I/O in such systems. Advances in the construction of optimal Fractional Repetition (FR) codes, a smart replication of encoded packets on n nodes that also provides optimized disk I/O and in which a node failure can be repaired by contacting a specific set of nodes in the system, are in high demand. This work addresses the construction of universally good FR codes using three different approaches. In this paper, we show that the code constructed using a partial regular graph for heterogeneous DSS, where the number of packets on each node differs, is universally good. Further, we also enumerate the parameters for which the ring construction and the T-construction result in universally good codes. In addition, we evaluate the FR code constructions meeting the minimum distance bound.
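As a toy illustration of the flavour of FR codes, the sketch below uses a ring placement with repetition degree 2: node i stores packets i and i+1 (mod n), so a failed node is repaired exactly by contacting its two ring neighbours. It is only a minimal example of the replication idea, not one of the universally good constructions analysed in the paper.

```python
# Toy Fractional Repetition code with repetition degree 2 via a ring placement
# (illustration only, not the paper's constructions).
def ring_fr_placement(n):
    """Node i stores packets {i, (i+1) mod n}; every packet lives on two nodes."""
    return {i: {i, (i + 1) % n} for i in range(n)}


def repair(placement, failed_node):
    """For each packet lost on the failed node, name a surviving node holding it."""
    lost = placement[failed_node]
    return {p: next(node for node, stored in placement.items()
                    if node != failed_node and p in stored)
            for p in lost}


placement = ring_fr_placement(5)
print(placement)             # {0: {0, 1}, 1: {1, 2}, 2: {2, 3}, 3: {3, 4}, 4: {0, 4}}
print(repair(placement, 2))  # packet 2 from node 1, packet 3 from node 3
```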
2020-12-11
Liu, F., Li, J., Wang, Y., Li, L..  2019.  Kubestorage: A Cloud Native Storage Engine for Massive Small Files. 2019 6th International Conference on Behavioral, Economic and Socio-Cultural Computing (BESC). :1—4.
Cloud Native, the emerging computing infrastructure, has become a new trend in cloud computing, especially after the development of containerization technologies such as Docker and LXD and of orchestration systems for them such as Kubernetes and Swarm. With the growing popularity of Cloud Native, the following problems have been raised: (i) most Cloud Native applications were designed to make full use of the cloud platform, but their file storage has not been fully optimized for it; (ii) the traditional file system is designed as a utility for storing and retrieving files, usually built into the operating system kernel, but when placed in a large-scale setting, such as a network storage server shared by thousands of computing instances and storing millions of files, it becomes slow and even unstable; (iii) most storage solutions use metadata for faster tracking of files, but the metadata itself takes up a lot of space and its capacity is usually limited, and if the file system stores metadata directly on disk without caching, tracking massive numbers of small files becomes much slower; (iv) traditional object storage solutions do not provide enough features, such as caching and automatic replication, to be practical in the cloud. This paper proposes a new storage engine based on the well-known Haystack storage engine, optimized in terms of service discovery and automated fault tolerance, making it more suitable for Cloud Native infrastructure, deployment, and applications. We use the object storage model to meet large-scale, high-frequency file storage needs, offering a simple and unified set of APIs for applications to access. We also take advantage of Kubernetes' sophisticated and automated toolchains to make cloud storage easier to deploy, more flexible to scale, and more stable to run.
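The Haystack idea the engine builds on, packing many small files into one large append-only volume and keeping only a tiny in-memory index entry per file, can be sketched as a toy single-node model; Kubestorage's service discovery, replication, caching, and Kubernetes integration are not represented, and all names below are illustrative.

```python
# Toy model of a Haystack-style volume: many small files packed into one
# append-only file, with an in-memory index mapping id -> (offset, size).
import os
import struct


class HaystackVolume:
    HEADER = struct.Struct("<QI")              # (file id, payload size)

    def __init__(self, path):
        self.path = path
        self.index = {}                        # file id -> (offset, size)
        open(path, "ab").close()               # create the volume if missing

    def put(self, file_id: int, payload: bytes):
        offset = os.path.getsize(self.path)
        with open(self.path, "ab") as vol:
            vol.write(self.HEADER.pack(file_id, len(payload)))
            vol.write(payload)
        self.index[file_id] = (offset + self.HEADER.size, len(payload))

    def get(self, file_id: int) -> bytes:
        offset, size = self.index[file_id]     # one seek instead of a directory walk
        with open(self.path, "rb") as vol:
            vol.seek(offset)
            return vol.read(size)


vol = HaystackVolume("volume-0001.dat")
vol.put(42, b"thumbnail bytes")
assert vol.get(42) == b"thumbnail bytes"
```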
2020-12-01
Garbo, A., Quer, S..  2018.  A Fast MPEG’s CDVS Implementation for GPU Featured in Mobile Devices. IEEE Access. 6:52027—52046.
The Moving Picture Experts Group's Compact Descriptors for Visual Search (MPEG's CDVS) intends to standardize technologies in order to enable an interoperable, efficient, and cross-platform solution for internet-scale visual search applications and services. Among the key technologies within CDVS we recall the format of visual descriptors, the descriptor extraction process, and the algorithms for indexing and matching. Unfortunately, these steps require precision and computational accuracy. Moreover, they are very time-consuming, requiring running times on the order of seconds when implemented on the central processing unit (CPU) of modern mobile devices. In this paper, to reduce computation times while maintaining precision and accuracy, we re-design, for many-core embedded graphics processing units (GPUs), all the main local descriptor extraction pipeline phases of the MPEG CDVS standard. To reach this goal, we introduce new techniques to adapt the standard algorithm to parallel processing. Furthermore, to reduce memory accesses and efficiently distribute the kernel workload, we use new approaches to store and retrieve CDVS information in proper GPU data structures. We present a complete experimental analysis on a large, standard test set. Our experiments show that our GPU-based approach is remarkably faster than the CPU-based reference implementation of the standard while maintaining comparable precision in terms of true and false positive rates.
2020-11-23
Sreekumari, P..  2018.  Privacy-Preserving Keyword Search Schemes over Encrypted Cloud Data: An Extensive Analysis. 2018 IEEE 4th International Conference on Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High Performance and Smart Computing, (HPSC) and IEEE International Conference on Intelligent Data and Security (IDS). :114–120.
Big data has rapidly developed into a hot research topic in many areas and attracts attention from academia and industry around the world. Many organizations demand efficient solutions to store, process, analyze, and search huge amounts of information. With the rapid development of cloud computing, organizations prefer cloud storage services to reduce the overhead of storing data locally. However, the security and privacy of big data in cloud computing is a major source of concern. One positive way of protecting data is encrypting it before outsourcing it to remote servers, but large amounts of encrypted cloud data make it difficult for the remote servers to perform any keyword search functions without leaking information. Various privacy-preserving keyword search (PPKS) schemes have been proposed to mitigate the privacy issues of big data encrypted in cloud storage. This paper presents an extensive analysis of the existing PPKS techniques in terms of verifiability, efficiency, and data privacy. Through this analysis, we present some valuable directions for future work.
2020-11-16
Zhang, C., Xu, C., Xu, J., Tang, Y., Choi, B..  2019.  GEM^2-Tree: A Gas-Efficient Structure for Authenticated Range Queries in Blockchain. 2019 IEEE 35th International Conference on Data Engineering (ICDE). :842–853.
Blockchain technology has attracted much attention due to the great success of cryptocurrencies. Owing to its immutability property and consensus protocol, blockchain offers a new solution for trusted storage and computation services. To scale up the services, prior research has suggested a hybrid storage architecture, where only small metadata are stored on-chain and the raw data are outsourced to off-chain storage. To protect data integrity, a cryptographic proof can be constructed online for queries over the data stored in the system. However, previous schemes only support simple key-value queries. In this paper, we take the first step toward studying authenticated range queries in the hybrid-storage blockchain. The key challenge lies in how to design an authenticated data structure (ADS) that can be efficiently maintained by the blockchain, in which a unique gas cost model is employed. By analyzing the performance of the existing techniques, we propose a novel ADS, called GEM^2-tree, which is not only gas-efficient but also effective in supporting authenticated queries. To further reduce the ADS maintenance cost without sacrificing much query performance, we also propose an optimized structure, GEM^2*-tree, built on a two-level index structure. Theoretical analysis and empirical evaluation validate the performance of the proposed ADSs.
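For background, the sketch below shows the baseline ADS idea with a plain Merkle tree: only the root needs to live on-chain, while an off-chain server returns query results together with sibling-hash proofs that the client verifies against that root. GEM^2-tree itself is a more elaborate, gas-optimised structure that this sketch does not reproduce; all names here are illustrative.

```python
# Baseline authenticated-data-structure illustration: a plain Merkle tree with
# membership proofs (not the GEM^2-tree construction).
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def build_levels(leaves):
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:                          # duplicate last node on odd levels
            prev = prev + [prev[-1]]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels


def prove(levels, index):
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((level[index ^ 1], index % 2))   # (sibling hash, am I the right child?)
        index //= 2
    return proof


def verify(root, leaf, proof):
    node = h(leaf)
    for sibling, is_right in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root


leaves = [b"rec-1", b"rec-2", b"rec-3", b"rec-4"]
levels = build_levels(leaves)
root = levels[-1][0]                               # this digest is what goes on-chain
assert verify(root, b"rec-3", prove(levels, 2))
```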
Geeta, C. M., Rashmi, B. N., Raju, R. G. Shreyas, Raghavendra, S., Buyya, R., Venugopal, K. R., Iyengar, S. S., Patnaik, L. M..  2019.  EAODBT: Efficient Auditing for Outsourced Database with Token Enforced Cloud Storage. 2019 IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE). :1–4.
Database outsourcing is one of the important utilities of cloud computing, in which the Information Proprietor (IP) transfers database administration to the Cloud Service Provider (CSP) in order to minimize the administration cost and preservation expenses of the database. In spite of its immense benefits, it suffers from a few security issues, such as the privacy of the deployed database and the provability of search results. In the recent past, a few studies have been carried out on the provability of search results for Outsourced Databases (ODB), which affords correctness and completeness of search results. However, in existing schemes, frequent data flow between the Information Proprietor and the clients imposes a huge communication cost on the Information Proprietor side. To address this challenge, in this paper we propose Efficient Auditing for Outsourced Databases with Token Enforced Cloud Storage (EAODBT). The proposed scheme reduces the large communication cost at the Information Proprietor side and achieves correctness and completeness of search results even if a mischievous CSP knowingly sends a null set. Experimental analysis shows that the proposed scheme greatly reduces the communication cost between the Information Proprietor and clients while simultaneously achieving correctness and completeness of search results.
2020-11-04
Torkura, K. A., Sukmana, M. I. H., Strauss, T., Graupner, H., Cheng, F., Meinel, C..  2018.  CSBAuditor: Proactive Security Risk Analysis for Cloud Storage Broker Systems. 2018 IEEE 17th International Symposium on Network Computing and Applications (NCA). :1—10.

Cloud Storage Brokers (CSB) provide seamless and concurrent access to multiple Cloud Storage Services (CSS) while abstracting cloud complexities from end-users. However, this multi-cloud strategy faces several security challenges, including enlarged attack surfaces, malicious insider threats, security complexities due to the integration of disparate components, and API interoperability issues. Novel security approaches are imperative to tackle these issues. Therefore, this paper proposes CSBAuditor, a novel cloud security system that continuously audits CSB resources to detect malicious activities and unauthorized changes, e.g. bucket policy misconfigurations, and remediates these anomalies. The cloud state is maintained via a continuous snapshotting mechanism, thereby ensuring fault tolerance. We adopt the principles of chaos engineering by integrating BrokerMonkey, a component that continuously injects failures into our reference CSB system, CloudRAID. Hence, CSBAuditor is continuously tested for efficiency, i.e. its ability to detect the changes injected by BrokerMonkey. CSBAuditor employs security metrics for risk analysis by computing severity scores for detected vulnerabilities using the Common Configuration Scoring System, thereby overcoming the limitation of insufficient security metrics in existing cloud auditing schemes. CSBAuditor has been tested using various strategies, including chaos engineering failure injection. Our experimental evaluation validates the efficiency of our approach against the aforementioned security issues, with a detection and recovery rate of over 96%.
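A minimal flavour of the snapshot-based audit step (a generic sketch, not CSBAuditor's implementation): a reference snapshot of expected bucket configurations is diffed against the observed state, and any drift, such as a bucket-policy misconfiguration, is flagged for remediation. Bucket names and attributes are illustrative.

```python
# Generic sketch of a snapshot-based audit step: diff the expected cloud-state
# snapshot against the observed state and flag drift for remediation.
def snapshot_diff(expected, observed):
    drift = {}
    for bucket, policy in expected.items():
        current = observed.get(bucket)
        if current is None:
            drift[bucket] = "bucket missing"
        elif current != policy:
            drift[bucket] = {"expected": policy, "observed": current}
    for bucket in observed.keys() - expected.keys():
        drift[bucket] = "unknown bucket (possible unauthorized change)"
    return drift


expected_state = {"reports": {"public_read": False, "versioning": True}}
observed_state = {"reports": {"public_read": True, "versioning": True},
                  "tmp-bucket": {"public_read": True, "versioning": False}}
print(snapshot_diff(expected_state, observed_state))
```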

2020-11-02
Kralevska, Katina, Gligoroski, Danilo, Jensen, Rune E., Øverby, Harald.  2018.  HashTag Erasure Codes: From Theory to Practice. IEEE Transactions on Big Data. 4:516—529.
Minimum-Storage Regenerating (MSR) codes have emerged as a viable alternative to Reed-Solomon (RS) codes, as they minimize the repair bandwidth while remaining optimal in terms of reliability and storage overhead. Although several MSR constructions exist, so far they have not been practically implemented, mainly due to the large number of I/O operations. In this paper, we analyze high-rate MDS codes that are simultaneously optimized in terms of storage, reliability, I/O operations, and repair bandwidth for single and multiple failures of the systematic nodes. The codes were recently introduced in [1] without any specific name. Due to the resemblance between the hashtag sign # and the procedure of the code construction, we call them HashTag Erasure Codes (HTECs). HTECs provide the lowest data-read and data-transfer, and thus the lowest repair time, for an arbitrary sub-packetization level α, where α ≤ r⌈k/r⌉, among all existing MDS codes for distributed storage, including MSR codes. The repair process is linear and highly parallel. Additionally, we show that HTECs are the first high-rate MDS codes that reduce the repair bandwidth for more than one failure. Practical implementations of HTECs in Hadoop release 3.0.0-alpha2 demonstrate their great potential.