Bibliography
Cloud computing denotes an IT infrastructure in which data and software are stored and processed remotely in a cloud provider's data center and accessed via the Internet. This paradigm is increasingly attracting the attention of companies and has revolutionized today's marketplace owing to several factors, in particular its cost-effective architectures covering data transmission, storage, and intensive computing. However, like any new technology, cloud computing brings new security problems, which remain the main obstacle to adopting the paradigm: users are reluctant to move to the cloud out of concern for the security and protection of private data and a lack of trust in cloud service providers. This paper familiarizes readers with the field of cloud computing security and presents our contribution in this context: a security scheme that allows a remote user to migrate all of their data anywhere in the cloud in a completely secure manner using DNA cryptography. Our experiments show that the proposed solution outperforms its competitors in terms of data integrity and confidentiality.
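As a concrete illustration of the DNA-cryptography idea (the abstract does not specify the exact cipher, so this is a minimal sketch under assumed details): binary data is mapped two bits at a time onto the nucleotides A, C, G, and T, here combined with a simple XOR keystream that stands in for the paper's actual key handling.

```python
# Minimal sketch of the DNA-encoding step commonly used in DNA
# cryptography; the XOR keystream is an illustrative stand-in for the
# paper's actual key handling, not its published scheme.
from itertools import cycle

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {b: k for k, b in BITS_TO_BASE.items()}

def to_dna(data: bytes) -> str:
    """Map each pair of bits to a nucleotide (A, C, G, T)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(strand: str) -> bytes:
    """Inverse mapping: nucleotides back to bytes."""
    bits = "".join(BASE_TO_BITS[b] for b in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

def encrypt(plaintext: bytes, key: bytes) -> str:
    """XOR with a repeating key, then encode the result as a DNA strand."""
    masked = bytes(p ^ k for p, k in zip(plaintext, cycle(key)))
    return to_dna(masked)

def decrypt(strand: str, key: bytes) -> bytes:
    masked = from_dna(strand)
    return bytes(c ^ k for c, k in zip(masked, cycle(key)))

strand = encrypt(b"migrate me", b"secret")
assert decrypt(strand, b"secret") == b"migrate me"
```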
Today's emerging Industrial Internet of Things (IIoT) scenarios are characterized by the exchange of data between services across enterprises. Traditional access and usage control mechanisms can only determine whether data may be used by a subject, but lack an understanding of how it may be used. The ability to control how data is processed is, however, crucial for enterprises that must guarantee (and provide evidence of) compliant processing of critical data, as well as for users who need to control whether their private data may be analyzed or linked with additional information, a major concern in IoT applications that process personal information. In this paper, we introduce LUCON, a data-centric security policy framework for distributed systems that considers data flows by controlling how messages may be routed across services and how they are combined and processed. LUCON policies prevent information leaks, bind data usage to obligations, and enforce data flows across services. Policy enforcement is based on dynamic taint analysis at runtime and an upfront static verification of message routes against policies. We discuss the semantics of these two complementary enforcement models and illustrate how LUCON policies are compiled from a simple policy language into a first-order logic representation. We demonstrate the practical application of LUCON in a real-world IoT middleware and discuss its integration into Apache Camel. Finally, we evaluate the runtime impact of LUCON and discuss performance and scalability aspects.
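A toy rendering of the data-flow idea may help: messages carry taint labels, services declare labels they must never receive or that they remove, and a route is verified label by label. The actual LUCON framework compiles policies into first-order logic and enforces them in Apache Camel; the `Service` class and label names below are hypothetical.

```python
# Toy model of LUCON-style data-flow control: a message's taint labels
# are propagated along a candidate route and checked against each
# service's declared policy. Simplified and illustrative only.
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    forbidden: set = field(default_factory=set)  # labels this service may never receive
    removes: set = field(default_factory=set)    # labels cleared by this service
    adds: set = field(default_factory=set)       # labels attached to its output

def route_allowed(labels: set, route: list) -> bool:
    """Statically verify a message route against the services' policies."""
    taint = set(labels)
    for svc in route:
        if taint & svc.forbidden:
            print(f"blocked at {svc.name}: carries {taint & svc.forbidden}")
            return False
        taint = (taint - svc.removes) | svc.adds
    return True

anonymizer = Service("anonymizer", removes={"personal"})
analytics = Service("cloud-analytics", forbidden={"personal"})

# Direct route would leak personal data; routing via the anonymizer is fine.
assert not route_allowed({"personal"}, [analytics])
assert route_allowed({"personal"}, [anonymizer, analytics])
```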
ARM devices (mobile phones, IoT devices) are becoming more popular in our daily lives due to their low power consumption and cost. These devices carry a huge amount of private user information, which attracts attackers' attention and increases the security risk. Operating systems (e.g., Android, Linux) implement many memory protection strategies for users' private data. However, a monolithic OS may contain security vulnerabilities that attackers exploit to gain root or even kernel privilege; once kernel privilege is obtained, all data protection strategies are defeated and private user information can be stolen. In this paper, we propose a hardened memory data protection framework called H-Securebox to defeat kernel-level memory data theft attacks. H-Securebox leverages the ARM hardware virtualization technique to protect in-memory data at hypervisor privilege, and we design three types of H-Securebox for developers to use. Even an attacker with kernel privilege cannot touch private data inside an H-Securebox, since hypervisor privilege is higher than kernel privilege. We implement the H-Securebox system, assisted by a tiny hypervisor, on a Raspberry Pi 2 development board, measure its performance overhead, and carry out security evaluations. The results show that the overhead is negligible and that a malicious application with root or kernel privilege cannot access the private data protected by our system.
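Python cannot demonstrate ARM hardware virtualization, but the privilege hierarchy the abstract relies on can be modeled in a few lines: reads of a protected region succeed only at hypervisor privilege, so even a compromised kernel is refused. Everything below (the `SecureBox` class, the privilege levels) is an illustrative model, not the H-Securebox API.

```python
# Toy model of the H-Securebox privilege hierarchy: even kernel-level
# code cannot read the protected region, because only hypervisor-level
# accesses are honored. Real H-Securebox enforces this in hardware via
# a tiny hypervisor; these names are purely illustrative.
from enum import IntEnum

class Privilege(IntEnum):
    USER = 0
    KERNEL = 1
    HYPERVISOR = 2

class SecureBox:
    """Memory region readable only at hypervisor privilege."""
    def __init__(self, secret: bytes):
        self._secret = secret

    def read(self, caller: Privilege) -> bytes:
        if caller < Privilege.HYPERVISOR:
            raise PermissionError(f"{caller.name} may not access the box")
        return self._secret

box = SecureBox(b"user credentials")
try:
    box.read(Privilege.KERNEL)         # a rooted/compromised kernel still fails
except PermissionError as e:
    print(e)
print(box.read(Privilege.HYPERVISOR))  # only the tiny hypervisor succeeds
```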
Because the Internet makes human lives easier, many devices are connected to the Internet daily. The private data of individuals and large companies, including health-related data, user bank accounts, and military and manufacturing data, are increasingly accessible via the Internet. Because almost all data is now accessible through the Internet, protecting these valuable assets has become a major concern. The goal of cyber security is to protect such assets from unauthorized use. Attackers use automated tools and manual techniques to penetrate systems by exploiting existing vulnerabilities and software bugs. To provide good enough security, attack methodologies, vulnerability concepts, and defence strategies should be thoroughly investigated. The main purpose of this study is to show that the patches released for existing vulnerabilities at the operating system (OS) level and in software programs do not completely prevent cyber-attacks; instead, producing specific patches for each company and fixing software bugs with awareness of the software running on each specific system can provide better results. This study also demonstrates that firewalls, antivirus software, Windows Defender, and other prevention techniques are not sufficient to prevent attacks. Instead, it examines different aspects of penetration testing to identify vulnerable applications and hosts using the Nmap and Metasploit frameworks. As a test case, a virtualized system that includes different versions of the Windows and Linux OSs is used.
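As a sketch of the discovery step such a penetration test starts from, the snippet below shells out to Nmap's service/version detection and parses its XML output to list what is actually running on a host. It assumes the nmap binary is installed; the target address is a placeholder for a lab VM you are authorized to scan.

```python
# Enumerate open ports and detected services on a lab host with Nmap.
# The target below is a hypothetical lab VM (e.g. a Metasploitable
# guest); only scan machines you have permission to test.
import subprocess
import xml.etree.ElementTree as ET

TARGET = "192.168.56.101"  # placeholder target

result = subprocess.run(
    ["nmap", "-sV", "-p", "1-1024", "-oX", "-", TARGET],
    capture_output=True, text=True, check=True,
)

# Nmap's XML output ("-oX -") lists each port with the detected service.
root = ET.fromstring(result.stdout)
for port in root.iter("port"):
    if port.find("state").get("state") != "open":
        continue
    svc = port.find("service")
    name = svc.get("name") if svc is not None else "?"
    product = (svc.get("product") or "") if svc is not None else ""
    print(f"{port.get('portid')}/{port.get('protocol')}: {name} {product}".strip())
```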
We propose a privacy-preserving framework for learning visual classifiers by leveraging distributed private image data. The framework is designed to aggregate multiple classifiers updated locally using private data and to ensure that no private information about the data is exposed during or after the learning procedure. We utilize a homomorphic cryptosystem that can aggregate the local classifiers while they remain encrypted and thus secret. To overcome the high computational cost of homomorphic encryption of high-dimensional classifiers, we (1) impose sparsity constraints on local classifier updates and (2) propose a novel efficient encryption scheme named doubly permuted homomorphic encryption (DPHE), which is tailored to sparse high-dimensional data. DPHE (i) decomposes sparse data into its constituent non-zero values and their corresponding support indices, (ii) applies homomorphic encryption only to the non-zero values, and (iii) employs double permutations on the support indices to keep them secret. Our experimental evaluation on several public datasets shows that the proposed approach achieves performance comparable to state-of-the-art visual recognition methods while preserving privacy, and significantly outperforms other privacy-preserving methods.
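A rough sketch of the three DPHE steps as described in the abstract, using the python-paillier (`phe`) package as a stand-in additively homomorphic cryptosystem; the paper's actual construction may differ, and the permutation step is simplified here.

```python
# Sketch of the DPHE pipeline: (i) split a sparse vector into non-zero
# values and support indices, (ii) homomorphically encrypt only the
# values, (iii) hide the support with a secret permutation (the paper
# uses double permutations; this is a simplification).
import random
from phe import paillier  # pip install phe

def sparse_decompose(vec):
    """(i) Non-zero values and their support indices."""
    support = [i for i, v in enumerate(vec) if v != 0]
    return support, [vec[i] for i in support]

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

vec = [0.0, 0.7, 0.0, 0.0, -0.2, 0.0, 1.5, 0.0]
support, values = sparse_decompose(vec)

# (ii) Encrypting only the non-zero entries is far cheaper than
# encrypting the full high-dimensional classifier.
enc_values = [public_key.encrypt(v) for v in values]

# (iii) A secret permutation over the index space hides which
# dimensions are occupied.
pi = list(range(len(vec)))
random.shuffle(pi)
hidden_support = [pi[i] for i in support]

# The aggregator can still sum encrypted values without seeing them.
total = enc_values[0] + enc_values[1] + enc_values[2]
print(private_key.decrypt(total))  # -> 2.0 (0.7 - 0.2 + 1.5)
```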
The main issue with big data in the cloud is that its processing and use are always carried out by third parties. It is therefore very important for data owners and clients to be able to trust the cloud and to have privacy guarantees for the information stored there or analyzed as big data. The privacy models studied in previous research show that privacy infringement on big data happens because of model limitations, low privacy-guarantee rates, or the dissemination of accurate data obtainable from the data set. Since various privacy models exist, research is needed to determine the best and most appropriate model to apply in the future, one that also guarantees big data privacy. In what follows, we survey several privacy models in order to determine the advantages and disadvantages of each model for privacy assurance of big data in the cloud. The present study also proposes a combined Diff-Anonym algorithm (k-anonymity plus differential privacy) to provide data anonymity while guaranteeing a balance between the ambiguity of private data and the clarity of general data.
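The k-anonymity half of such a combined check is easy to state in code: every combination of quasi-identifier values must occur at least k times. The sketch below only illustrates that test (the differential-privacy component of Diff-Anonym is omitted), and the column names are hypothetical.

```python
# Minimal k-anonymity test: a release is k-anonymous over a set of
# quasi-identifiers iff each quasi-identifier combination is shared by
# at least k records. Column names here are hypothetical.
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True iff each quasi-identifier combination occurs >= k times."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

records = [
    {"age": "30-39", "zip": "130**", "disease": "flu"},
    {"age": "30-39", "zip": "130**", "disease": "cold"},
    {"age": "20-29", "zip": "148**", "disease": "flu"},
]

print(is_k_anonymous(records, ["age", "zip"], k=2))  # False: one lone group
```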