Bibliography
The growing popularity of Android and the increasing amount of sensitive data stored on mobile devices have led to the dissemination of Android ransomware. Ransomware is a class of malware that makes data inaccessible by blocking access to the device or, more frequently, by encrypting the data; to recover the data, the user has to pay a ransom to the attacker. One solution to this problem is to back up the data. Although backup tools are available for Android, these tools may be compromised or blocked by the ransomware itself. This paper presents the design and implementation of RANSOMSAFEDROID, a TrustZone-based backup service for mobile devices. RANSOMSAFEDROID is protected from malware by leveraging the ARM TrustZone extension and running in the secure world. It periodically backs up files to a secure local persistent partition and pushes these backups to external storage to protect them from ransomware. Initially, RANSOMSAFEDROID performs a full backup of the device filesystem; thereafter, it performs incremental backups that save only the changes since the last backup. As a proof of concept, we implemented a RANSOMSAFEDROID prototype and provide a performance evaluation using an i.MX53 development board.
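As a concrete illustration of the incremental-backup step, the following minimal Python sketch detects changed files by comparing SHA-256 digests against a manifest recorded by the previous round. It is a generic illustration under assumed paths and file layout, not the authors' TrustZone implementation.

```python
# Minimal sketch of hash-based incremental backup: copy only files whose
# SHA-256 digest differs from the manifest of the previous backup round.
# Paths are hypothetical; this is not the RANSOMSAFEDROID code.
import hashlib
import json
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def incremental_backup(src: Path, dst: Path, manifest_file: Path) -> None:
    # Digests recorded by the previous backup, if any.
    old = json.loads(manifest_file.read_text()) if manifest_file.exists() else {}
    new = {}
    for path in src.rglob("*"):
        if not path.is_file():
            continue
        rel = str(path.relative_to(src))
        new[rel] = sha256_of(path)
        if old.get(rel) != new[rel]:              # new or changed since last backup
            target = dst / rel
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, target)
    manifest_file.write_text(json.dumps(new))     # baseline for the next round

incremental_backup(Path("/data"), Path("/backup"), Path("/backup/manifest.json"))
```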
Maintaining the confidentiality of data stored and processed in a remote database or cloud is a pressing challenge. Homomorphic encryption can solve this problem, because it makes it possible to compute certain functions over encrypted data without first decrypting it. Fully homomorphic encryption schemes have a number of limitations, such as the accumulation of noise and growing ciphertext expansion as operations are performed, and the range of supported operations is limited. Many homomorphic encryption schemes and their modifications have been investigated to date; more than 25 reports on homomorphic encryption schemes were published on the Cryptology ePrint Archive in 2016 alone. We propose an overview of current fully homomorphic encryption schemes and analyze the specific database operations that homomorphic cryptosystems can support. We also investigate the possibility of sorting over encrypted data and present our approach to comparing data encrypted with a multi-bit FHE scheme.
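To make the core idea of computing over encrypted data concrete, here is a toy Paillier sketch showing additive homomorphism: the product of two ciphertexts decrypts to the sum of the plaintexts. The primes are deliberately tiny and insecure, chosen only for readability; this illustrates partially homomorphic encryption, not the FHE schemes surveyed in the paper.

```python
# Toy Paillier cryptosystem: E(m1) * E(m2) mod n^2 decrypts to m1 + m2.
# Demo-sized primes only; real deployments use >= 2048-bit moduli.
import math
import secrets

p, q = 293, 433                     # insecure demo primes
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)        # Carmichael function of n
g = n + 1                           # standard generator choice
mu = pow(lam, -1, n)                # L(g^lam mod n^2) = lam when g = n + 1

def encrypt(m: int) -> int:
    r = secrets.randbelow(n - 1) + 1        # assumes gcd(r, n) == 1 (overwhelmingly likely)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n          # L(u) = (u - 1) / n
    return (L * mu) % n

c1, c2 = encrypt(42), encrypt(58)
assert decrypt((c1 * c2) % n2) == 100       # homomorphic addition: 42 + 58
```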
Virtualization-based memory isolation has been widely used as a security primitive in many security systems. This paper provides the first in-depth analysis in the literature of its effectiveness in the multicore setting. Our study reveals that memory isolation by itself is inadequate for security. Due to fundamental design choices in hardware, it faces several challenging issues, including page table maintenance, address mapping validation, and thread identification. As demonstrated by our attacks implemented on XMHF and BitVisor, these issues undermine the security of memory isolation. Next, we propose a new isolation approach that is immune to the aforementioned problems. In our design, the hypervisor constructs a fully isolated micro computing environment (FIMCE) that exposes a minimal attack surface to an untrusted OS on a multicore platform. By virtue of its architectural niche, FIMCE offers stronger assurance and greater versatility than memory isolation. We have built a prototype of FIMCE and measured its performance. To show the benefits of using FIMCE as a building block, we have also implemented several practical applications which cannot be securely realized using memory isolation alone.
Securing their critical documents on the cloud from data threats is a major challenge faced by organizations today. Controlling and limiting access to such documents requires a robust and trustworthy access control mechanism. In this paper, we propose a semantically rich access control system that employs an access broker module to evaluate access decisions based on rules generated from the organization's confidentiality policies. The proposed system analyzes the multi-valued attributes of the user making the request and of the requested document stored on a cloud service platform before making an access decision. Furthermore, our system guarantees an end-to-end oblivious data transaction between the organization and the cloud service provider using oblivious storage techniques. Thus, an organization can use our system to secure its documents as well as obscure its access pattern details from an untrusted cloud service provider.
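The following sketch illustrates the general shape of such an attribute-based access broker. The rule format, attribute names, and first-match semantics are our illustrative assumptions, not the paper's actual design.

```python
# Hypothetical access broker: policy rules match multi-valued attributes of
# the requesting user and the requested document, with default-deny.
from dataclasses import dataclass, field

@dataclass
class Rule:
    user_attrs: dict = field(default_factory=dict)  # required user attributes
    doc_attrs: dict = field(default_factory=dict)   # required document attributes
    effect: str = "deny"

def matches(required: dict, actual: dict) -> bool:
    # Every required value must appear among the subject's multi-valued attributes.
    return all(v in actual.get(k, set()) for k, v in required.items())

def decide(rules: list, user: dict, doc: dict) -> str:
    for rule in rules:                               # first matching rule wins
        if matches(rule.user_attrs, user) and matches(rule.doc_attrs, doc):
            return rule.effect
    return "deny"                                    # default-deny fallback

rules = [Rule({"role": "auditor"}, {"label": "confidential"}, "allow")]
user = {"role": {"auditor", "employee"}, "dept": {"finance"}}
doc = {"label": {"confidential"}}
print(decide(rules, user, doc))                      # -> allow
```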
The objective of this paper is to outline the design specification, implementation, and evaluation of a proposed accelerated encryption framework that deploys both homomorphic and symmetric-key encryption to support privacy-preserving processing; in particular, as a sub-system within the Privacy Preserving Speech Processing framework architecture that forms part of the PPSP-in-Cloud Platform. Following a preliminary study of GPU efficiency gains benchmarked for an AES implementation, we have addressed and resolved the big-integer processing challenges in a parallel implementation of bilinear pairing, thus enabling the creation of partially homomorphic encryption schemes that facilitate applications such as speech processing in the encrypted domain on the cloud. This novel implementation has been validated in laboratory tests using a standard speech corpus and can be used in other application domains to support secure computation and privacy-preserving big data storage and processing in the cloud.
In big data analysis and processing, a key concern blocking users from storing and processing their data in the cloud is their misgivings about the security and performance of cloud services. There is an urgent need for an approach that helps each cloud service provider (CSP) demonstrate that its infrastructure and service behavior can meet users' expectations. However, most prior work has focused on validating the process compliance of cloud services without an accurate description of the underlying service behaviors, and could not measure security capability. In this paper, we propose CloudSec, a novel approach to verifying cloud service security conformance that reduces the description gap between cloud provider and customer by modeling cloud service behaviors (the CloudBeh model) and security SLAs (the SecSLA model). These models enable a systematic integration of security constraints and service behavior into the cloud, using UPPAAL to check conformance: the approach not only checks conformance with CloudBeh performance metrics but also verifies whether the security constraints meet the SecSLA. The proposed approach is validated through a case study and experiments with an OpenStack-based cloud storage service, which illustrate the effectiveness of CloudSec and show that it can be applied in real cloud scenarios.
In this paper, we review big data characteristics and security challenges in the cloud, and survey different cloud domains and security regulations. We propose using integrated auditing for secure data storage and transaction logs, real-time compliance and security monitoring, regulatory compliance, the data environment, identity and access management, infrastructure auditing, availability, privacy, legality, cyber threats, and granular auditing to achieve big data security. We apply a stochastic process model to analyze availability and mean time to security failure. Potential future work is also discussed.
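As a worked illustration of how a mean time to security failure (MTTSF) can be derived from a stochastic process model, the sketch below solves a small continuous-time Markov chain with an absorbing failure state. The states and rates are assumed values for illustration, not the paper's model.

```python
# Illustrative MTTSF computation: states Healthy -> Degraded -> Failed, with
# Failed absorbing. Expected times to absorption t over the transient states
# satisfy Q_T @ t = -1, where Q_T is the generator restricted to those states.
import numpy as np

# Assumed rates per hour: Healthy->Degraded 0.01, Degraded->Healthy (repair)
# 0.5, Degraded->Failed 0.1 (so the Degraded row sums to -0.6 on the diagonal).
Q_T = np.array([
    [-0.01,  0.01],   # Healthy row
    [ 0.50, -0.60],   # Degraded row (0.1 leaks to the absorbing Failed state)
])
t = np.linalg.solve(Q_T, -np.ones(2))
print(f"MTTSF from Healthy state: {t[0]:.0f} hours")   # -> 610 hours
```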
NoSQL databases have become popular with enterprises due to their scalable and flexible storage management of big data. Nevertheless, their popularity also raises security concerns. Most NoSQL databases lack secure data encryption, relying on developers to implement cryptographic methods at the application level or in a middleware layer that wraps the database. While this approach protects the integrity of data, it increases the difficulty of executing queries. We were motivated to design a system that not only provides NoSQL databases with the necessary data security but also supports queries over encrypted data. Furthermore, exploiting the distributed nature of NoSQL databases to deliver high performance and scalability under massive client access is another important challenge. In this research, we introduce Crypt-NoSQL, the first prototype to support queries over encrypted data on NoSQL databases with high performance. We propose three different models of Crypt-NoSQL and evaluate their performance with the Yahoo! Cloud Serving Benchmark (YCSB) under an enormous number of clients. Our experimental results show that Crypt-NoSQL can process queries over encrypted data with high performance and scalability. Guidance on establishing service-level agreements (SLAs) for Crypt-NoSQL as a cloud service is also proposed.
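One common technique for equality queries over encrypted records, sketched below, pairs a deterministic HMAC lookup token with randomized AES-GCM value encryption: the store can locate records by token without ever seeing plaintext keys or values. This is a generic pattern, not necessarily Crypt-NoSQL's exact design; the in-memory dict stands in for the NoSQL database.

```python
# Equality queries over encrypted key-value data: deterministic index token
# (HMAC-SHA256) plus randomized value encryption (AES-256-GCM).
import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

index_key = os.urandom(32)           # HMAC key for deterministic lookup tokens
value_key = AESGCM(os.urandom(32))   # AES-256-GCM for value confidentiality

def token(k: str) -> bytes:
    return hmac.new(index_key, k.encode(), hashlib.sha256).digest()

def seal(v: str) -> bytes:
    nonce = os.urandom(12)
    return nonce + value_key.encrypt(nonce, v.encode(), None)

def open_(blob: bytes) -> str:
    return value_key.decrypt(blob[:12], blob[12:], None).decode()

store = {}                                  # stands in for the NoSQL database
store[token("user:42")] = seal("alice")
print(open_(store[token("user:42")]))       # equality lookup, no plaintext keys
```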
HDFS has been widely used for storing massive-scale data, which is vulnerable to site disasters. Filesystem backup is an important strategy for data retention. In this paper, we present an efficient, easy-to-use backup and disaster recovery system for HDFS. The system comprises an HDFS-based client extended with a remote-backup feature, and a remote server with an HDFS cluster that keeps the backup data. It supports full backups and regular incremental backups to the server with very low cost and high throughput. In our experiments, the average speed of backup and recovery reaches 95 MB/s, approaching the theoretical maximum speed of gigabit Ethernet.
Open Science Big Data is emerging as an important area of research and software development. Although there are several high-quality frameworks for big data, additional capabilities are needed for Open Science Big Data. These include data provenance, citable reusable data, data sources providing links to research literature, relationships to other data and theories, transparent analysis and reproducibility, data privacy, new optimizations and advanced algorithms, data curation, and data storage and transfer. An important part of science is the explanation of results, ideally leading to theory formation. In this paper, we examine means for supporting the use of theory in big data analytics, as well as using big data to assist in theory formation. One approach is to fit data in a way that is compatible with some theory, existing or new. Functional data analysis allows precise fitting of data, with penalties for lack of smoothness or even for departure from theoretical expectations. This paper discusses principal differential analysis and related techniques for fitting data where, for example, a time-based process is governed by an ordinary differential equation. Automation in theory formation is also considered. Case studies in the fields of computational economics and finance are presented.
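To make the idea of fitting data to an assumed governing equation tangible, the sketch below estimates the coefficient of a hypothesized ODE, dx/dt = -θx, from noisy observations via least squares on finite-difference derivatives. This is a bare-bones caricature of the principal-differential-analysis idea; the model and data are illustrative assumptions.

```python
# Estimate theta in the assumed governing ODE dx/dt = -theta * x directly
# from noisy data: least squares over finite-difference derivative estimates.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 200)
x = np.exp(-0.8 * t) + rng.normal(0.0, 0.01, t.size)   # data from theta = 0.8

dxdt = np.gradient(x, t)                 # rough derivative estimate
# Fit dx/dt = -theta * x  =>  theta = -<x, dx/dt> / <x, x>
theta_hat = -np.dot(x, dxdt) / np.dot(x, x)
print(f"estimated theta: {theta_hat:.3f}")   # close to the true 0.8
```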
Compute-intensive simulations typically place substantial workloads on an online simulation platform backed by limited computing clusters and storage resources. Some (or most) of the simulations initiated by users may carry input parameters/files that have already been provided by other (or the same) users in the past. Unfortunately, these duplicate simulations can degrade the performance of the platform by drastically consuming the limited resources shared by many users. To minimize or avoid repeated simulations, we present a novel system called SUPERMAN (SimUlation ProvEnance Recycling MANager) that records simulation provenances and recycles the results of past simulations. The system presents a great opportunity not only to reutilize existing results but also to perform various analytics helpful to those unfamiliar with the platform. It also offers interoperability with other systems by collecting provenances in a standardized format. In our simulated experiments, we found that over half of past computing jobs could be answered by our system without actual execution.
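The recycling idea can be illustrated with a short memoization sketch: hash the input parameters and file contents into a provenance key and return a cached result whenever an identical job was run before. This is our simplification, not SUPERMAN's actual code.

```python
# Result recycling via a provenance key: identical (parameters, input files)
# pairs map to the same key, so duplicate jobs skip execution.
import hashlib
import json

_cache: dict = {}                       # provenance key -> stored result

def provenance_key(params: dict, input_files: dict) -> str:
    h = hashlib.sha256(json.dumps(params, sort_keys=True).encode())
    for name in sorted(input_files):    # file contents, not paths, define identity
        h.update(name.encode())
        h.update(hashlib.sha256(input_files[name]).digest())
    return h.hexdigest()

def run_simulation(params, input_files, simulate):
    key = provenance_key(params, input_files)
    if key in _cache:                   # duplicate job: recycle the past result
        return _cache[key]
    result = simulate(params, input_files)
    _cache[key] = result
    return result

# A second call with identical inputs is answered without re-execution.
out = run_simulation({"dt": 0.01}, {"mesh.dat": b"..."}, lambda p, f: 42)
```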
Industrial networking faces many issues that vary with the type of industry, including data storage, data centers, and cloud computing. Green data storage improves the scientific, commercial, and industrial profile of such networking. Future industries are looking for cybersecurity solutions built on low-cost resources, and energy provisioning is the main problem in industrial networking. To address these problems, green data storage should be the priority, because data centers and cloud computing both revolve around data storage. In this analysis, we adopt a solar energy source together with light-based techniques, namely a prism and Li-Fi. In this approach, light rays are sent through a prism, which allows data to be transmitted at different frequencies. The approach provides green energy and strong protection within the data center. We illustrate that cloud services within a green data center in industrial networking can achieve better protection with low-cost energy. Finally, we conclude that Li-Fi enhances both the use of green energy and protection, benefits that apply to current and future industrial networking.
Cloud computing today is the power station that runs many businesses, and it is accumulating more and more users every day. Database-as-a-service is a cloud service model for storing, managing, and processing data on a cloud platform, with key characteristics such as availability, scalability, and elasticity. The customer does not have to worry about database installation and management; instead, the cloud database service provider takes responsibility for installing and maintaining the database. The real problem occurs when confidential or private information is stored in the cloud database: we cannot rely on the cloud database vendor, since a curious vendor may capture and leak the secret information. Protected Database-as-a-service is a novel solution to this problem that provides provable and practical privacy in the face of a compromised cloud database service provider. Protected Database-as-a-service defines various encryption schemes from which the encryption algorithm and encryption key used to encrypt and decrypt data are chosen. It also provides each user with a master key, so that the metadata storage table can be decrypted only with that user's master key. As a result, the cloud service vendor never gains access to decrypted data; even if all servers are compromised, the vendor remains unable to decrypt it. The proposed Protected Database-as-a-service system allows multiple geographically distributed clients to execute concurrent and independent operations on encrypted data, preserves data confidentiality and consistency at the cloud level, and eliminates any intermediate server between the client and the cloud database.
Cloud computing is now one of the largest and most essential environments for data storage, and preserving the privacy of that data is critical. Cloud data security is a major concern for clients using services offered by different providers, and significant security concerns and conflicts can arise between client and provider. To resolve these issues, a third-party auditor is employed to provide assurance over the data in the environment. Cloud storage systems still face fundamental challenges, among which storage space and security are generally the top concerns. To address these security issues, we propose a third-party authentication system that supports not only simplified data storage but also secure data acquisition in the cloud environment. Finally, we perform security and performance analyses; the results show that the proposed scheme significantly increases the efficiency of maintaining highly secure data storage and acquisition. The proposed method also helps minimize cost and increases communication efficiency in the cloud environment.
Due to the rapid evolution of virtualization environments, virtual machines (VMs) have become a prime target for attackers seeking privileged access to the virtual infrastructure. Advanced persistent threats (APTs) such as malware, rootkits, and spyware are potent enough to bypass the existing defense mechanisms designed for VMs. To address this issue, virtual machine introspection (VMI) has emerged as a promising approach that monitors the runtime state of a VM externally, from the hypervisor. However, VMI is limited by the semantic gap. The open-source tool LibVMI addresses the semantic gap, and memory forensic analysis (MFA) tools such as Volatility can also be used to address it, but they require a captured memory dump (RAM) as input. Dump acquisition and analysis times are critical when an intrusion detection system (IDS) depends on the data supplied by the MFA or VMI tool. In this work, we measure LibVMI's acquisition time for live virtual machine RAM dumps. In addition, we measure the time Volatility takes to analyze the captured dump and compare it with another memory analyzer, Rekall. Experimental results show that Rekall takes more execution time than Volatility for most plugins. Further comparing Volatility and Rekall with LibVMI, we observe that examining volatile data through LibVMI is faster, since it eliminates the dump acquisition time.
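At its core, such a comparison amounts to timing the same plugin in each tool against the same dump, as the hedged harness below illustrates. The dump path, profile name, and exact command invocations are assumptions for illustration, not the paper's measurement setup.

```python
# Hypothetical timing harness: run the same analysis plugin through Volatility
# and Rekall against one captured dump and record wall-clock time.
import subprocess
import time

DUMP = "vm_memdump.raw"                  # illustrative dump path
COMMANDS = {
    "volatility": ["vol.py", "-f", DUMP, "--profile=Win7SP1x64", "pslist"],
    "rekall":     ["rekall", "-f", DUMP, "pslist"],
}

for tool, cmd in COMMANDS.items():
    start = time.perf_counter()
    subprocess.run(cmd, stdout=subprocess.DEVNULL, check=True)
    print(f"{tool}: pslist took {time.perf_counter() - start:.2f} s")
```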
In distributed wireless storage systems, the failed recovery probability depends not only on wireless channel conditions but also on the storage size of each distributed storage node. For efficient utilization of limited storage capacity, we asymptotically analyze the failed recovery probability of a distributed wireless storage system under a sum storage capacity constraint as the signal-to-noise ratio goes to infinity, and we find the storage allocation across distributed nodes that is optimal in terms of the asymptotic failed recovery probability. We also show that, when the number of storage nodes is sufficiently large, the storage size required at each node need not be large to achieve a high exponential order of the failed recovery probability.
Modern storage systems stripe redundant data across multiple nodes to provide availability guarantees against node failures. One form of data redundancy is based on XOR-based erasure codes, which use only XOR operations for encoding and decoding. In addition to tolerating failures, a storage system must also provide fast failure recovery to reduce the window of vulnerability. This work addresses the problem of speeding up the recovery of a single-node failure for general XOR-based erasure codes. We propose a replace recovery algorithm, which uses a hill-climbing technique to search for a fast recovery solution, such that the solution search can be completed within a short time period. We further extend the algorithm to adapt to the scenario where nodes have heterogeneous capabilities (e.g., processing power and transmission bandwidth). We implement our replace recovery algorithm atop a parallelized architecture to demonstrate its feasibility. We conduct experiments on a networked storage system testbed, and show that our replace recovery algorithm uses less recovery time than the conventional recovery approach.
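The XOR primitive underlying such codes is easy to demonstrate in the single-parity base case, sketched below: the parity block is the XOR of all data blocks, so any one lost block is rebuilt by XOR-ing the survivors. General XOR-based codes and the paper's replace recovery search operate over many such parity equations; this sketch is ours, not the paper's algorithm.

```python
# Single-parity XOR erasure coding: parity = XOR of all data blocks, and any
# one missing block equals the XOR of the surviving blocks plus parity.
def xor_blocks(blocks: list) -> bytes:
    out = bytearray(len(blocks[0]))          # all blocks assumed equal-sized
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"node0data", b"node1data", b"node2data"]   # blocks on three nodes
parity = xor_blocks(data)                           # stored on a fourth node

# Node 1 fails: rebuild its block from the survivors plus the parity block.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```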
In modern parallel storage systems (e.g., cloud storage and data centers), it is important to provide data availability guarantees against disk (or storage node) failures via redundancy coding schemes. One coding scheme is X-code, which is double-fault tolerant while achieving the optimal update complexity. When a disk/node fails, recovery must be carried out to reduce the possibility of data unavailability. We propose an X-code-based optimal recovery scheme called minimum-disk-read-recovery (MDRR), which minimizes the number of disk reads for single-disk failure recovery. We make several contributions. First, we show that MDRR provides optimal single-disk failure recovery and reduces disk reads by about 25 percent compared to the conventional recovery approach. Second, we prove that no optimal recovery scheme for X-code can balance disk reads among different disks within a single stripe in general cases. Third, we propose an efficient logical encoding scheme that issues balanced disk reads across a group of stripes for any recovery algorithm (including MDRR). Finally, we implement our proposed recovery schemes and conduct extensive testbed experiments in a networked storage system prototype. Experiments indicate that MDRR reduces the recovery time of the conventional approach by around 20 percent, showing that our theoretical findings are applicable in practice.
Checking remote data possession is of crucial importance in public cloud storage. It enables users to check whether their outsourced data have been kept intact without downloading the original data. The existing remote data possession checking (RDPC) protocols have been designed in the PKI (public key infrastructure) setting. The cloud server has to validate users' certificates before storing the data they upload in order to prevent spam, which incurs considerable cost since numerous users may frequently upload data to the cloud server. This study addresses the problem with a new model of identity-based RDPC (ID-RDPC) protocols. The authors present the first ID-RDPC protocol proven secure under the hardness of the standard computational Diffie-Hellman problem. In addition to the structural advantage of eliminating certificate management and verification, the authors' ID-RDPC protocol also outperforms the existing PKI-based RDPC protocols in terms of computation and communication.
Remote data integrity checking is of crucial importance in cloud storage. It enables clients to verify whether their outsourced data is kept intact without downloading the whole data. In some application scenarios, clients have to store their data on multicloud servers. At the same time, the integrity checking protocol must be efficient in order to save the verifier's cost. Motivated by these two points, we propose a novel remote data integrity checking model: ID-DPDP (identity-based distributed provable data possession) in multicloud storage. The formal system model and security model are given. Based on bilinear pairings, a concrete ID-DPDP protocol is designed. The proposed ID-DPDP protocol is provably secure under the hardness assumption of the standard CDH (computational Diffie-Hellman) problem. In addition to the structural advantage of eliminating certificate management, our ID-DPDP protocol is also efficient and flexible. Based on the client's authorization, the proposed ID-DPDP protocol can realize private verification, delegated verification, and public verification.
The term cloud computing did not appear overnight; it arguably dates back to when computer systems first accessed applications and services remotely. Cloud computing is a ubiquitous technology receiving enormous attention in the scientific and industrial communities. It is a next-generation information technology architecture that offers on-demand access to the network: a dynamic, virtualized, scalable, pay-per-use model over the Internet. In a cloud computing environment, a cloud service provider offers a "house of resources" including applications, data, runtime, middleware, operating systems, virtualization, servers, data storage and sharing, and networking, and tries to take on most of the client's overhead. Cloud computing offers many benefits, but the journey to the cloud is not easy; it has several pitfalls along the road, because most services are outsourced to third parties, which adds a considerable level of risk. Cloud computing suffers from several issues, among the most significant being security, privacy, service availability, confidentiality, integrity, authentication, and compliance. Security is a shared responsibility of both client and service provider, and we believe security must be information-centric, adaptive, proactive, and built in. Cloud computing and its security are an emerging study area. In this paper, we discuss data security in the cloud at the service provider end and propose a network storage architecture for data that ensures availability, reliability, scalability, and security.
The cloud computing paradigm is widely used because of its low up-front cost. In recent years, even mobile phone users have begun storing their data in the cloud. Customer information stored in the cloud needs to be protected against potential intruders as well as against the cloud service provider itself. Data in transit and data at rest in the cloud are threatened by various possible attacks, and organizations transferring important information to the cloud have growing concerns over its security. Cryptography is the common approach to protecting sensitive information in the cloud, and it involves managing encryption and decryption keys. In this paper, we compare key management methods, apply them to various cloud environments, and analyze symmetric-key cryptography algorithms.
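A standard key-management pattern in this space is envelope encryption, sketched below: each object is encrypted under a fresh data key, and only the master-key-wrapped data key is stored alongside the ciphertext, so rotating or revoking access touches only wrapped keys. This is a generic illustration, not a scheme from the paper; in practice the master key would be held in a KMS or HSM rather than in process memory.

```python
# Envelope encryption with AES-256-GCM: wrap a per-object data key under a
# master key and store the wrapped key next to the object ciphertext.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

master_key = AESGCM(os.urandom(32))     # illustrative; normally in a KMS/HSM

def encrypt_object(plaintext: bytes) -> dict:
    data_key = os.urandom(32)           # fresh key per object
    n1, n2 = os.urandom(12), os.urandom(12)
    return {
        "wrapped_key": n1 + master_key.encrypt(n1, data_key, None),
        "ciphertext": n2 + AESGCM(data_key).encrypt(n2, plaintext, None),
    }

def decrypt_object(obj: dict) -> bytes:
    wk, ct = obj["wrapped_key"], obj["ciphertext"]
    data_key = master_key.decrypt(wk[:12], wk[12:], None)   # unwrap data key
    return AESGCM(data_key).decrypt(ct[:12], ct[12:], None)

blob = encrypt_object(b"customer record")
assert decrypt_object(blob) == b"customer record"
```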