Bibliography
Reliability and robustness of Internet of Things (IoT)-to-cloud communication is an important issue for the prospective development of the IoT concept. In this regard, a robust and unique client-to-cloud communication physical layer is required. A Physical Unclonable Function (PUF) is regarded as suitable physics-based random identification hardware, but it suffers from reliability problems. In this paper, we propose novel hardware concepts, together with an analysis method in CMOS technology, to improve the hardware-based robustness of the generated PUF word from its first point of generation to the last cloud-interfacing point in a client. Moreover, we present a spectral analysis for an inexpensive, high-yield implementation in a 65 nm technology generation. We also offer robust monitoring concepts for the PUF-interfacing communication physical layer hardware.
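As a concrete illustration of one widely used PUF reliability measure, the sketch below implements temporal majority voting over repeated readouts; it is a generic textbook technique, not the hardware concepts proposed in the paper, and the readout values are invented.

```python
# Illustrative reliability technique (temporal majority voting), a common way
# to stabilize noisy PUF responses; the paper's hardware-level measures differ.
from collections import Counter

def majority_vote(readouts):
    """Take repeated readouts of the same PUF response and keep each bit's majority."""
    n_bits = len(readouts[0])
    return "".join(
        Counter(r[i] for r in readouts).most_common(1)[0][0]
        for i in range(n_bits)
    )

# Three noisy readouts of a nominally identical 8-bit response word.
readouts = ["10110010", "10100010", "10110011"]
print(majority_vote(readouts))   # "10110010": the flipped bits are voted out
```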
The paper presents an example Sensor-Cloud architecture that integrates security as a native ingredient. It is based on a multi-layer client-server model with separation of the physical and virtual instances of sensors, gateways, application servers, and data storage. It proposes the application of virtualised sensor nodes as a prerequisite for increasing security, privacy, reliability, and data protection. All the main concerns in Sensor-Cloud security are addressed: from secure association, authentication, and authorization to privacy, data integrity, and data protection. The main concept is that securing the virtual instances is easier to implement, manage, and audit, the only bottleneck being the physical interaction between a real sensor and its virtual reflection.
Emerging computing relies heavily on secure back-end storage for the massive volumes of big data originating from Internet of Things (IoT) smart devices and Cloud-hosted web applications. The Structured Query Language (SQL) Injection Attack (SQLIA) remains an intruder's exploit of choice for pilfering confidential data from back-end databases, with damaging ramifications. Existing approaches all predate this emerging computing context of Internet-scale big data mining and therefore lack the ability to cope with new signatures concealed in a large volume of web requests over time. Moreover, these approaches are string-lookup techniques aimed at the on-premise application domain boundary, and are not applicable to roaming Cloud-hosted services spanning Software-Defined Network (SDN) edges to application endpoints with large web request volumes. A Machine Learning (ML) approach provides scalable big data mining for SQLIA detection and prevention. Unfortunately, the absence of a corpus with which to train a classifier is a well-known issue in SQLIA research applying Artificial Intelligence (AI) techniques. This paper presents an application-context, pattern-driven corpus for training a supervised learning model. The model is trained with the ML algorithms Two-Class Support Vector Machine (TC SVM) and Two-Class Logistic Regression (TC LR), implemented in Microsoft Azure Machine Learning (MAML) Studio, to mitigate SQLIA. The scheme presented here then forms the subject of an empirical evaluation using Receiver Operating Characteristic (ROC) curves.
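To make the classification step concrete, here is a minimal local sketch of training the two classifier types named above; it uses scikit-learn rather than the paper's MAML Studio pipeline, and the six-request corpus is invented for illustration.

```python
# Minimal sketch: training two classifiers to flag SQL-injection-like requests.
# The toy corpus below is invented; the paper's corpus is application-context
# pattern-driven and far larger.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

requests = [
    "id=42", "name=alice", "page=3",                      # benign
    "id=1 OR 1=1", "name=' UNION SELECT password--",      # SQLIA-like
    "page=1; DROP TABLE users",
]
labels = [0, 0, 0, 1, 1, 1]

# Character n-grams capture injection tokens regardless of exact keywords.
for clf in (LogisticRegression(), LinearSVC()):
    model = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(2, 4)), clf)
    model.fit(requests, labels)
    print(type(clf).__name__, model.predict(["id=7", "id=7 OR 1=1"]))
```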
With the steady increase in offered cloud storage services, they have become a popular alternative to local storage systems. Besides the several benefits that cloud storage services offer, their usage also has downsides, such as potential vendor lock-in or unavailability. Different pricing models, storage technologies, and changing storage requirements further complicate the selection of the best-fitting storage solution. In this work, we present a heuristic optimization approach that optimizes the placement of data on cloud-based storage services in a redundant, cost- and latency-efficient way while considering user-defined Quality of Service requirements. The presented approach uses monitored data access patterns to find the best-fitting storage solution. Through extensive evaluations, we show that our approach saves up to 30% of the storage cost and reduces the upload and download times by up to 48% and 69%, respectively, in comparison to a baseline that follows a state-of-the-art approach.
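The following sketch illustrates the flavour of a cost- and latency-aware placement heuristic driven by monitored access patterns; it is not the paper's algorithm, and the provider names, prices, and latencies are invented.

```python
# Illustrative greedy heuristic (not the paper's algorithm): pick k providers
# that minimize a weighted sum of storage cost and expected access latency,
# using a monitored access frequency. All figures below are made up.
providers = {            # name: (cost per GB-month in USD, latency in ms)
    "store-a": (0.023, 120),
    "store-b": (0.010, 340),
    "store-c": (0.026, 80),
    "store-d": (0.015, 200),
}

def place(size_gb, monthly_accesses, k=2, latency_weight=0.001):
    """Return the k providers with the lowest combined cost/latency score."""
    def score(entry):
        cost, latency = entry[1]
        # Frequently accessed objects weigh latency more heavily.
        return size_gb * cost + latency_weight * monthly_accesses * latency
    ranked = sorted(providers.items(), key=score)
    return [name for name, _ in ranked[:k]]  # k-redundant placement

print(place(size_gb=50, monthly_accesses=1000))  # hot data -> low-latency picks
print(place(size_gb=50, monthly_accesses=1))     # cold data -> low-cost picks
```

With frequent access the latency term dominates and low-latency providers win; for cold data the cost term dominates, which is the trade-off the monitored access patterns are meant to resolve.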
Recently, Jung et al. [1] proposed a data access privilege scheme and claimed that their scheme addresses data and identity privacy as well as multi-authority settings, and provides data access privilege for attribute-based encryption. In this paper, we show that this scheme, as well as its earlier and latest versions (i.e., [2] and [3], respectively), suffers from a number of weaknesses in terms of fine-grained access control, user and authority collusion attacks, user authorization, and user anonymity protection. We then propose a new scheme that overcomes these shortcomings. We also prove the security of our scheme against user collusion attacks, authority collusion attacks, and chosen-plaintext attacks. Lastly, we show that the efficiency of our scheme is comparable with that of existing related schemes.
New-generation communication technologies (e.g., 5G) enhance interactions between devices in mobile and wireless communication networks by supporting large-scale data sharing. The vehicle is one such device that benefits from these technologies, and vehicles have accordingly become a significant component of vehicular networks. As a classic application of the Internet of Things (IoT), the vehicular network can thus provide more information services for its human users, which makes it increasingly socialized. A new concept has formed as a result, namely "Vehicular Social Networks (VSNs)", which brings both the benefits of data sharing and new security challenges. A traditional public key infrastructure (PKI) can guarantee user identity authentication in the network; however, PKI cannot distinguish untrustworthy information originating from authorized users. For this reason, a trust evaluation mechanism is required to guarantee the trustworthiness of information by distinguishing malicious users in the network. Hence, this paper explores a trust evaluation algorithm for VSNs and proposes a cloud-based VSN architecture to implement it. Experiments are conducted to investigate the performance of the trust algorithm in a vehicular network environment by building a three-layer VSN model. Simulation results reveal that the trust algorithm can be efficiently implemented by the proposed three-layer model.
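As a hint of what a trust evaluation algorithm computes, the sketch below aggregates a node's direct experience with recommendations weighted by how much each recommender is itself trusted; this is a generic formulation, not the paper's algorithm, and the alpha weight and sample values are assumptions.

```python
# Illustrative trust aggregation (not the paper's algorithm): a vehicle's trust
# in a message source combines its own direct experience with neighbours'
# recommendations, each weighted by the recommender's own trustworthiness.
def evaluate_trust(direct_trust, recommendations, alpha=0.6):
    """direct_trust: float in [0, 1] from the evaluator's own interactions.
    recommendations: list of (recommender_trust, reported_trust) pairs.
    alpha: weight given to direct experience (assumed value)."""
    if recommendations:
        total_weight = sum(w for w, _ in recommendations)
        indirect = sum(w * t for w, t in recommendations) / total_weight
    else:
        indirect = direct_trust  # no recommenders: fall back to own view
    return alpha * direct_trust + (1 - alpha) * indirect

# A vehicle trusted 0.9 and one trusted 0.2 report on the same source.
print(evaluate_trust(0.7, [(0.9, 0.8), (0.2, 0.1)]))  # ~0.69
```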
Blockchain technology has emerged as an attractive solution to address performance and security issues in distributed systems. Blockchain's public and distributed peer-to-peer ledger capability benefits cloud computing services that require functions such as assured data provenance, auditing, management of digital assets, and distributed consensus. Blockchain's underlying consensus mechanism allows building a tamper-proof environment, where transactions on any digital assets are verified by a set of authentic participants, or miners. With the use of strong cryptographic methods, blocks of transactions are chained together to make the records immutable. However, achieving consensus demands computational power from the miners in exchange for a handsome reward. Therefore, greedy miners always try to exploit the system by augmenting their mining power. In this paper, we first discuss blockchain's capability to provide assured data provenance in the cloud and present vulnerabilities in the blockchain cloud. We then model the block withholding (BWH) attack in a blockchain cloud, considering distinct pool reward mechanisms. The BWH attack provides a rogue miner ample resources in the blockchain cloud for disrupting honest miners' mining efforts, which we verify through simulations.
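The profitability of block withholding can be illustrated with a small expected-revenue model in the spirit of Eyal's miner's-dilemma analysis; this is not necessarily the pool reward mechanism modelled in the paper, and all power fractions below are invented.

```python
# Toy expected-revenue model of a block withholding (BWH) attack. Powers are
# fractions of total network hash power; the pool pays its members per share.
def revenues(pool_honest, attacker, infiltrating):
    """The attacker splits its power: `infiltrating` joins the victim pool and
    submits only partial solutions (shares), withholding full blocks; the
    remainder mines honestly solo."""
    solo = attacker - infiltrating
    effective = 1.0 - infiltrating           # withheld power finds no blocks
    pool_blocks = pool_honest / effective    # pool's share of all blocks found
    # Pay-per-share rewards the infiltrator for work whose blocks it withheld.
    attacker_cut = infiltrating / (pool_honest + infiltrating)
    return solo / effective + attacker_cut * pool_blocks

attacker = 0.3
print("honest mining:", revenues(0.2, attacker, 0.0))
for x in (0.05, 0.1, 0.2):
    print(f"infiltrating {x:.2f}:", revenues(0.2, attacker, x))
```

Running this shows a small infiltration rate beating honest mining (about 0.305 vs 0.300 here), while over-infiltrating hurts the attacker, which is why the choice of pool reward mechanism matters.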
The cloud computing paradigm enables enterprises to realise significant cost savings whilst boosting their agility and productivity. However, security and privacy concerns generally deter enterprises from migrating their critical data to the cloud. One way to alleviate these concerns, hence bolster the adoption of cloud computing, is to devise adequate security policies that control the manner in which these data are stored and accessed in the cloud. Nevertheless, for enterprises to entrust these policies, a framework capable of providing assurances about their correctness is required. This work proposes such a framework. In particular, it proposes an approach that enables enterprises to define their own view of what constitutes a correct policy through the formulation of an appropriate set of well-formedness constraints. These constraints are expressed ontologically, thus enabling, by virtue of semantic inferencing, automated reasoning about their satisfaction by the policies.
Some IoT data are time-sensitive and cannot be processed in clouds, which are too far away from IoT devices. Fog computing, located as close as possible to the data sources at the edge of IoT systems, deals with this problem. Some IoT data are also sensitive and require privacy controls. The proposed Policy Enforcement Fog Module (PEFM), running within a single fog, operates close to the data sources connected to their fog and enforces privacy policies for all sensitive IoT data generated by these data sources. PEFM distinguishes two kinds of fog data processing. First, fog nodes process data for local IoT applications running within the local fog; all real-time data processing must be local to satisfy real-time constraints. Second, fog nodes disseminate data to nodes beyond the local fog (including remote fogs and clouds) for remote (and non-real-time) IoT applications. PEFM has a component for each of these two kinds of fog data processing. The first, the Local Policy Enforcement Module (LPEM), performs direct privacy policy enforcement for sensitive data accessed by local IoT applications. The second, the Remote Policy Enforcement Module (RPEM), sets up a mechanism for indirectly enforcing privacy policies for sensitive data sent to remote IoT applications. RPEM is based on creating and disseminating Active Data Bundles: software constructs inseparably bundling sensitive data, their privacy policies, and an execution engine able to enforce those policies. To prove the effectiveness and efficiency of the solution, we developed a proof-of-concept scenario for a smart home IoT application. We investigate privacy threats to sensitive IoT data and show a framework for using PEFM to overcome these threats.
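A minimal sketch of the Active Data Bundle idea follows: data, policy, and an enforcement routine travel as one object, and the data are released only through the policy check. The class, policy fields, and requester names are hypothetical; real ADBs also protect the bundle itself cryptographically, which this illustration does not model.

```python
# Minimal sketch of an Active Data Bundle: sensitive data sealed together with
# its privacy policy and an enforcement routine (the "execution engine").
class ActiveDataBundle:
    def __init__(self, data, policy):
        self._data = data       # sensitive payload (kept private)
        self._policy = policy   # e.g. {"allowed_purposes": {...}, "blocked": {...}}

    def access(self, requester, purpose):
        """Execution engine: release data only if the policy permits."""
        if purpose not in self._policy["allowed_purposes"]:
            raise PermissionError(f"{requester}: purpose '{purpose}' denied")
        if requester in self._policy.get("blocked", set()):
            raise PermissionError(f"{requester} is blocked by policy")
        return self._data

bundle = ActiveDataBundle(
    data={"thermostat": 21.5},
    policy={"allowed_purposes": {"local-control"}, "blocked": {"remote-ads"}},
)
print(bundle.access("home-controller", "local-control"))  # permitted
```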
Cloud technology is a computing paradigm that is attracting ever more attention from computer users, government agencies, and businesses. It brings many advantages, particularly ubiquitous services that anyone can access over the Internet. With cloud computing, a company no longer needs its own physical servers or hardware to support its computer systems, networks, and Internet services. One of the core services offered by cloud technology is storing data in remote storage space. In recent years, data storage has been recognized as an important problem in information technology. Cloud data storage raises a set of significant policy issues, including privacy, anonymity, security, government surveillance, telecommunication capacity, liability, and reliability, among others. Although cloud technology provides many benefits, security remains the most significant issue between the customer and the cloud. Cloud computing serves diverse customers, such as academia, enterprises, and ordinary users, who have various incentives to move to the cloud. For academic clients, security affects computing performance, so the cloud provider needs to find a way to combine performance and security; for other clients, performance may be less critical than it is for academia. In this paper, the central issue is security, viewed from these diverse perspectives. We design an efficient, secure, and verifiable outsourcing protocol for data. We develop an extended quadratic programming (QP) protocol for storing and outsourcing data securely. To verify the correctness of the outsourced computation, we validate the result returned by the cloud using the Karush-Kuhn-Tucker (KKT) conditions, which are necessary and sufficient for the optimal solution.
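The verification step can be made concrete: for a convex QP of the assumed form minimize 0.5 x^T Q x + c^T x subject to Ax <= b (with Q positive semidefinite, so the KKT conditions are sufficient as well as necessary), the client checks the cloud's returned primal-dual pair against the four KKT conditions. The sketch below is a minimal NumPy illustration of that check, not the paper's full protocol.

```python
# Hedged sketch: the client checks a QP solution returned by the cloud against
# the Karush-Kuhn-Tucker (KKT) conditions.
import numpy as np

def kkt_verify(Q, c, A, b, x, lam, tol=1e-8):
    stationarity = np.allclose(Q @ x + c + A.T @ lam, 0, atol=tol)
    primal_feasible = np.all(A @ x <= b + tol)
    dual_feasible = np.all(lam >= -tol)
    complementary = np.allclose(lam * (A @ x - b), 0, atol=tol)
    return stationarity and primal_feasible and dual_feasible and complementary

# Tiny example: minimize 0.5*x^2 - x subject to x <= 0.5.
Q = np.array([[1.0]]); c = np.array([-1.0])
A = np.array([[1.0]]); b = np.array([0.5])
x_cloud = np.array([0.5]); lam_cloud = np.array([0.5])  # claimed optimum
print(kkt_verify(Q, c, A, b, x_cloud, lam_cloud))                 # True
print(kkt_verify(Q, c, A, b, np.array([0.3]), np.array([0.0])))   # False
```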
In cloud storage systems, users can upload their data along with associated tags (authentication information) to cloud storage servers. To ensure the availability and integrity of the outsourced data, provable data possession (PDP) schemes convince verifiers (users or third parties) that the outsourced data stored on the cloud storage server are correct and unchanged. Recently, several PDP schemes with a designated verifier (DV-PDP) were proposed to provide the flexibility of an arbitrary designated verifier. A designated verifier (private verifier) is trusted and designated by a user to check the integrity of the outsourced data. However, these DV-PDP schemes are either inefficient or insecure under certain circumstances. In this paper, we propose the first non-repudiable PDP scheme with a designated verifier (DV-NRPDP) to address the non-repudiation issue and resolve possible disputes between users and cloud storage servers. We define the system model, framework, and adversary model of DV-NRPDP schemes. Afterward, a concrete DV-NRPDP scheme is presented. Based on the hardness of computing discrete logarithms, we formally prove that the proposed DV-NRPDP scheme is secure against several forgery attacks in the random oracle model. Comparisons with previously proposed schemes demonstrate the advantages of our scheme.
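To show the shape of a PDP interaction, here is a drastically simplified hash-based challenge-response sketch; the paper's DV-NRPDP construction instead uses discrete-logarithm-based tags and provides designated-verifier and non-repudiation properties that this toy does not model. In a real PDP scheme, homomorphic tags also let the verifier check proofs without the server shipping whole blocks back.

```python
# Toy PDP-style spot-check (illustration only): the user tags blocks before
# outsourcing; the verifier keeps only the small tags and challenges a random
# sample of blocks, catching silent corruption with high probability.
import hashlib, hmac, os, random

key = os.urandom(32)  # user's secret; in DV-PDP, shared with the designated verifier

def tag(index, block):
    return hmac.new(key, index.to_bytes(4, "big") + block, hashlib.sha256).digest()

blocks = [f"data block {i}".encode() for i in range(100)]
cloud = {i: blk for i, blk in enumerate(blocks)}                  # outsourced data
verifier_tags = {i: tag(i, blk) for i, blk in enumerate(blocks)}  # small metadata

def audit(sample_size=10):
    challenge = random.sample(sorted(cloud), sample_size)
    response = {i: cloud[i] for i in challenge}   # server returns the blocks
    return all(hmac.compare_digest(tag(i, blk), verifier_tags[i])
               for i, blk in response.items())

print(audit())           # True: the cloud still holds the data intact
cloud[3] = b"corrupted"  # silent corruption...
print(all(audit() for _ in range(50)))  # ...is very likely caught (False)
```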
With the aim of provisioning fast, reliable, and low-cost services to its users, cloud-computing technology has progressed by leaps and bounds. But adjacent to its development is the ever-increasing ability of malicious users to compromise its security from outside as well as inside. Network Intrusion Detection System (NIDS) techniques have gone a long way in the detection of known and unknown attacks. The methods of intrusion detection and of NIDS deployment in a cloud environment depend on the type of services being rendered by the cloud. It is also important that the cloud administrator be able to determine the malicious intentions of attackers and their various methods of attack. In this paper, we integrate an NIDS module and Honeypot Networks in a Cloud environment with the objective of mitigating known and unknown attacks. We also propose a method to generate and update signatures from information derived from the proposed integrated model. Using a sandboxing environment, we perform dynamic malware analysis of binaries to derive conclusive evidence of malicious attacks.
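As a rough illustration of signature generation from honeypot captures, the sketch below records content hashes and short byte patterns of payloads seen by a honeypot and matches them against later traffic; the abstract does not specify the paper's method at this level, so this is a generic stand-in.

```python
# Illustrative sketch (not the paper's method): derive simple signatures from
# payloads captured by a honeypot, then match them in monitored traffic.
import hashlib

signature_db = set()

def learn_from_honeypot(payload: bytes):
    """Anything reaching a honeypot is unsolicited, so treat it as hostile and
    record a content hash plus short byte patterns as signatures."""
    signature_db.add(hashlib.sha256(payload).hexdigest())
    for i in range(0, max(1, len(payload) - 8), 8):
        signature_db.add(payload[i:i + 8])      # naive fixed-length patterns

def matches_signature(packet: bytes) -> bool:
    if hashlib.sha256(packet).hexdigest() in signature_db:
        return True
    return any(isinstance(s, bytes) and s in packet for s in signature_db)

learn_from_honeypot(b"\x90\x90\x90\x90/bin/sh\x00 exploit-blob")
print(matches_signature(b"GET /index.html"))              # False: benign
print(matches_signature(b"padding /bin/sh\x00 exploit"))  # True: shared pattern
```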
Applications for the analysis of biomedical data are complex programs, often consisting of multiple components. Re-use of existing solutions from external code repositories or program libraries is common in algorithm development. To ease reproducibility, as well as the transfer of algorithms and required components into distributed infrastructures, Linux containers are increasingly used in environments that are at least partly connected to the Internet. However, concerns about untrusted applications remain and are of high interest when medical data is processed. Additionally, the portability of the containers must be ensured by using only security technologies that do not require additional kernel modules. In this paper, we describe measures and a solution to secure the execution of an example biomedical application for the normalization of multidimensional biosignal recordings. The application, the required runtime environment, and the security mechanisms are installed in a Docker-based container. A fine-grained restricted environment (sandbox) for executing the application and preventing unwanted behaviour is created inside the container. The sandbox is based on the filtering of system calls, which are required to interact with the operating system to access potentially restricted resources, e.g., the filesystem or the network. Due to the low-level character of system calls, creating an adequate rule set for the sandbox is challenging. The presented solution therefore includes a monitoring component that collects the data required to define the rules for the application sandbox. A performance evaluation of the application execution shows no significant impact of the resulting sandbox, while detailed monitoring may increase runtime by over 420%.
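The monitoring component's role can be sketched as follows: run the application under a syscall tracer and collect the set of system calls it actually uses, yielding a candidate allowlist from which seccomp rules (e.g., a Docker seccomp profile) can be written. This assumes strace is available and is a generic illustration, not the paper's exact tooling.

```python
# Hedged sketch of the monitoring idea: trace a training run of an application
# and emit the set of observed system calls as a candidate seccomp allowlist.
import re
import subprocess
import sys

def collect_syscalls(cmd):
    """Run `cmd` under strace and return the set of syscall names observed."""
    trace = subprocess.run(
        ["strace", "-f", "-qq"] + cmd,   # -f follows forks; output on stderr
        capture_output=True, text=True,
    )
    # Lines look like: openat(AT_FDCWD, "/etc/ld.so.cache", ...) = 3
    # With -f they may carry a "[pid NNN]" prefix, which the regex skips.
    return set(re.findall(r"(?m)^(?:\[pid\s+\d+\]\s+)?(\w+)\(", trace.stderr))

if __name__ == "__main__":
    syscalls = collect_syscalls(sys.argv[1:] or ["/bin/true"])
    print(f"observed {len(syscalls)} distinct syscalls:")
    print("\n".join(sorted(syscalls)))   # candidate allowlist for the sandbox
```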
Fog computing provides a new architecture for the implementation of the Internet of Things (IoT), connecting sensor nodes to the cloud via the edge of the network. This structure improves latency and energy consumption compared to a cloud-only deployment. In this heterogeneous and distributed environment, resource allocation is very important; scheduling is therefore a challenge for increasing productivity and allocating resources to tasks appropriately. Programs that run in this environment should also be protected from intruders. We consider three parameters, authentication, integrity, and confidentiality, to maintain security on fog devices; these parameters incur time and computational overhead. In the proposed approach, we schedule the modules to run on fog devices using heuristic algorithms based on a data mining technique. The objective function includes CPU utilization, bandwidth, and security overhead. We compare the proposed algorithm with several heuristic algorithms. The results show that our proposed algorithm improves average energy consumption by 63.27% and cost by 44.71% relative to the PSO, ACO, and SA algorithms.
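A minimal sketch of such a security-aware scheduling objective follows; the weights and per-level overheads are invented, and the paper's exact formulation may differ.

```python
# Illustrative objective for security-aware fog scheduling. Each module demands
# authentication, integrity, and confidentiality levels, and each level adds
# computational overhead on the chosen device. All figures are assumptions.
SECURITY_OVERHEAD = {             # fraction of extra CPU time per service level
    "authentication":  [0.00, 0.02, 0.05],
    "integrity":       [0.00, 0.03, 0.07],
    "confidentiality": [0.00, 0.04, 0.10],
}

def objective(cpu_util, bandwidth_util, security_levels,
              w_cpu=0.5, w_bw=0.3, w_sec=0.2):
    """Lower is better: weighted CPU load, bandwidth load, security overhead."""
    sec = sum(SECURITY_OVERHEAD[svc][lvl] for svc, lvl in security_levels.items())
    return w_cpu * cpu_util + w_bw * bandwidth_util + w_sec * sec

# Compare two candidate placements of the same module on different fog devices.
placement_a = objective(0.60, 0.40, {"authentication": 2, "integrity": 1, "confidentiality": 2})
placement_b = objective(0.75, 0.20, {"authentication": 1, "integrity": 1, "confidentiality": 1})
print("choose", "A" if placement_a < placement_b else "B", placement_a, placement_b)
```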
The continuous advance in recent cloud-based computer networks has generated a number of security challenges associated with intrusions into network systems. With the exponential increase in the volume of network traffic data, involving humans in such detection systems is time-consuming and non-trivial. Secondly, network traffic data tend to be highly dimensional, comprising numerous features and attributes, which makes classification challenging and susceptible to the curse of dimensionality. Given such scenarios, the need arises for dimensionality reduction and feature selection combined with machine-learning techniques in the classification of such data. As a contribution, this paper employs data mining techniques in a cloud-based environment by selecting the attributes and features with the least importance, in terms of weight, for classification. The standard practice is to select features with better weights while ignoring those with the least weights; in this study, we seek to find out whether we can make predictions using the least-weighted features. The motivation is that adversaries use stealth to hide their activities from the obvious, so the question is whether any stealthy activity of an adversary can be predicted using the least-observed attributes. We employ information gain to select the attributes with the lowest weights and then apply machine learning to classify whether a combination of source and destination ports is attacked or not. Our preliminary results show that even when the source and destination port attributes are used in combination with the least-weighted features, it is possible to classify such network traffic data and predict whether an attack will occur.
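A small sketch of the selection step follows, using scikit-learn's mutual information estimator as an information-gain proxy on synthetic data; the dataset, feature count, and classifier are assumptions, not the paper's setup.

```python
# Sketch of the "least weights" experiment: rank features by an information-
# gain proxy, keep the *lowest*-ranked ones, and train a classifier on them.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 20))                            # 20 synthetic traffic features
y = (X[:, 3] + 0.1 * X[:, 17] > 0.55).astype(int)    # label leaks faintly into f17

gain = mutual_info_classif(X, y, random_state=0)     # information-gain proxy
lowest = np.argsort(gain)[:5]                        # 5 least-informative features
print("selected low-weight features:", lowest)

X_tr, X_te, y_tr, y_te = train_test_split(X[:, lowest], y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy on least-weighted features:", clf.score(X_te, y_te))
```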
Cloud storage can provide outsourced data services for both organizations and individuals. However, cloud storage still faces many challenges, e.g., public integrity auditing, support for dynamic data, and low computational audit cost. To solve these problems, a number of techniques have been proposed. Recently, Tian et al. proposed a novel public auditing scheme for secure cloud storage based on a new data structure, the dynamic hash table (DHT). The authors claimed that their scheme was proven secure. Unfortunately, through our security analysis, we find that the scheme suffers from one attack and one security weakness. The attack is that an adversary can forge the data to destroy the correctness of files without being detected. The weakness is that the updating operations for data blocks are vulnerable and easily modified. Finally, we give countermeasures to remedy these security problems.
Cloud computing is an extension of parallel and distributed computing. Cloud computing technology is becoming ever more widely used, and one of the fundamental issues in the cloud environment is task scheduling. Scheduling in cloud environments is a difficult problem, however, since it is essentially NP-complete. Thus, many variants based on approximation techniques, especially those inspired by Swarm Intelligence (SI), have been proposed. This paper proposes a machine learning algorithm that guides the cloud in choosing a scheduling technique, using multi-criteria decision making to optimize performance. The main contribution of our work is to minimize the makespan of a given task set. The new strategy is simulated using the CloudSim toolkit, where the impact of the algorithm is checked with the number of VMs varying from 2 to 50 and task sizes between 30 and 2700 bytes. Experimental results show that the proposed algorithm reduces the execution time and makespan by between 7% and 75% and improves the performance of load-balancing scheduling.
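As a toy illustration of why the choice of scheduling technique matters for makespan, the sketch below compares round-robin against a longest-job-first greedy heuristic on an invented task set; the paper's ML-guided, multi-criteria selection would replace this exhaustive comparison.

```python
# Toy makespan comparison of two classic scheduling heuristics (not the
# paper's algorithm): the technique that yields the lower makespan wins.
def makespan(assignment, n_vms):
    loads = [0.0] * n_vms
    for vm, cost in assignment:
        loads[vm] += cost
    return max(loads)

def round_robin(tasks, n_vms):
    return [(i % n_vms, t) for i, t in enumerate(tasks)]

def longest_job_first(tasks, n_vms):
    loads = [0.0] * n_vms
    plan = []
    for t in sorted(tasks, reverse=True):   # greedy: longest task goes to the
        vm = loads.index(min(loads))        # currently least-loaded VM
        plan.append((vm, t))
        loads[vm] += t
    return plan

tasks = [30, 2700, 410, 900, 75, 1200, 640, 2700]   # task sizes in bytes
for name, heuristic in [("round-robin", round_robin), ("LJF", longest_job_first)]:
    print(name, makespan(heuristic(tasks, n_vms=3), 3))  # 5475 vs 3110
```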
By embracing the cloud computing paradigm, enterprises are able to boost their agility and productivity whilst realising significant cost savings. However, many enterprises are reluctant to adopt cloud services for supporting their critical operations due to security and privacy concerns. One way to alleviate these concerns is to devise policies that infuse suitable security controls into cloud services. This work proposes a class of ontologically-expressed rules, namely the so-called axiomatic rules, that aim at ensuring the correctness of these policies by harnessing the various knowledge artefacts that they embody. It also articulates an adequate framework for the expression of policies, one which provides ontological templates for modelling the knowledge artefacts encoded in the policies and which form the basis for the proposed axiomatic rules.
In the past, the security of building automation depended solely on the security of the devices inside, or tightly connected to, the building. In recent years, more devices have evolved to use some kind of cloud service as a back-end, or providers supply some kind of device to the user. The number of building automation systems connected to the Internet for management, control, and data storage also increases every year. These developments give rise to new threats to building automation. As the Internet of Things (IoT) and building automation intertwine more and more, these threats also apply to IoT installations. The paper presents new attack vectors and new threats using the threat model of Meyer et al. [1].