Biblio
Edge and Fog Computing will be increasingly pervasive in the years to come due to the benefits they bring in many specific use-case scenarios over traditional Cloud Computing. Nevertheless, the security concerns that Fog and Edge Computing bring have not been fully considered and addressed so far, especially when considering the underlying technologies (e.g. virtualization) instrumental to reaping the benefits of adopting the Edge paradigm. In particular, these virtualization technologies (i.e. Containers, Real Time Operating Systems, and Unikernels) are far from being adequately resilient and secure. Aiming to shed some light on current technology limitations and to provide hints on future security research and technology development, in this paper we introduce the main technologies supporting the Edge paradigm, survey existing issues, introduce relevant scenarios, and discuss the benefits and caveats of the different existing solutions in the scenarios introduced above. Finally, we provide a discussion of the current security issues in the introduced context and strive to outline future research directions in both security and technology development in a number of Edge/Fog scenarios.
Fog computing extends cloud computing technology to the edge of the infrastructure to support dynamic computation for IoT applications. Reduced latency and location awareness in objects' data access are attained by displacing workloads from the central cloud to edge devices. In doing so, it reduces raw data transfers from target objects to the central cloud, thus overcoming communication bottlenecks. This is a key step towards the pervasive uptake of next-generation IoT-based services. In this work we study efficient orchestration of applications in fog computing, where a fog application is the cascade of a cloud module and a fog module. The problem results in a mixed-integer non-linear optimisation. It involves multiple constraints due to the computation and communication demands of fog applications and the available infrastructure resources, and it also accounts for the location of target IoT objects. We show that it is possible to reduce the complexity of the original problem with a related placement formulation, which is further solved using a greedy algorithm. This algorithm is the core placement logic of FogAtlas, a fog computing platform based on existing virtualization technologies. Extensive numerical results validate the model and the scalability of the proposed algorithm, showing performance close to the optimal solution with respect to the number of served applications.
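To illustrate the kind of greedy placement logic described in this abstract, the following is a minimal sketch that places cloud/fog module pairs on nodes with limited capacity. The resource model, scoring rule and node names are illustrative assumptions, not the FogAtlas algorithm itself.

```python
# Toy greedy placement of fog applications, each split into a cloud module
# and a fog module (illustrative only, not the FogAtlas placement logic).

def greedy_placement(apps, fog_nodes, cloud_capacity):
    """apps: list of dicts with 'fog_cpu', 'cloud_cpu', 'gain' (e.g. served objects).
    fog_nodes: dict node_id -> free CPU. cloud_capacity: free CPU in the cloud."""
    placement = {}
    # Serve the most "valuable" applications first.
    for i, app in sorted(enumerate(apps), key=lambda x: -x[1]["gain"]):
        if app["cloud_cpu"] > cloud_capacity:
            continue  # cloud module does not fit
        # Best-fit: pick the fog node with the least residual capacity that still
        # fits, keeping larger nodes free for more demanding applications.
        candidates = [n for n, free in fog_nodes.items() if free >= app["fog_cpu"]]
        if not candidates:
            continue
        node = min(candidates, key=lambda n: fog_nodes[n])
        fog_nodes[node] -= app["fog_cpu"]
        cloud_capacity -= app["cloud_cpu"]
        placement[i] = node
    return placement

if __name__ == "__main__":
    apps = [{"fog_cpu": 2, "cloud_cpu": 4, "gain": 10},
            {"fog_cpu": 1, "cloud_cpu": 2, "gain": 7},
            {"fog_cpu": 3, "cloud_cpu": 1, "gain": 3}]
    print(greedy_placement(apps, {"edge-1": 3, "edge-2": 2}, cloud_capacity=8))
```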
Cyber-physical systems (CPS) and their Internet of Things (IoT) components are repeatedly subject to various attacks targeting weaknesses in their firmware. For that reason, an imminent demand emerges for secure update mechanisms that cover not only specific systems but all parts of the critical infrastructure. In this paper we introduce a theoretical concept for a secure CPS device update and verification mechanism and provide information on handling hardware-based security incorporating trusted platform modules (TPMs) on those CPS devices. We describe secure communication channels based on state-of-the-art technology, as well as integrity measurement mechanisms to ensure the system is in a known state. In addition, a multi-level fail-over concept is presented, ensuring continuous patching to minimize the necessity of restarting those systems.
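Integrity measurement of this kind typically relies on extending TPM Platform Configuration Registers (PCRs) with hashes of the measured components, so a verifier can check that the device is in a known state before an update proceeds. Below is a minimal sketch of that hash-chain idea in pure Python; no real TPM is involved, and the component names and chain are assumptions, not the mechanism from the paper.

```python
import hashlib

def pcr_extend(pcr, measurement):
    """TPM-style extend: new PCR = H(old PCR || H(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measure_boot_chain(components):
    pcr = b"\x00" * 32  # PCR starts zeroed at power-on
    for blob in components:
        pcr = pcr_extend(pcr, blob)
    return pcr

if __name__ == "__main__":
    # Hypothetical firmware stages measured before an update is accepted.
    good = measure_boot_chain([b"bootloader-v1", b"kernel-v4.19", b"app-firmware-v2"])
    bad  = measure_boot_chain([b"bootloader-v1", b"kernel-evil",  b"app-firmware-v2"])
    print("system in known state:", good == bad)  # False -> the update is refused
```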
The article considers an approach to identifying potentially unsafe data in the program code of embedded systems, which can lead to errors and failures in the functioning of equipment. The sources of invalid data are revealed, and the process of changing the status of this data during static code analysis is shown. A mechanism for annotating functions that operate on unsafe data is described, which makes it possible to control the entire process of using such data and thus improve the quality of the output code.
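As a rough illustration of how annotated functions can track the status of potentially unsafe data, here is a minimal taint-style sketch; the source, sanitizer and sink names are hypothetical and are not taken from the article.

```python
# Minimal taint-tracking sketch: values read from external sources are marked
# "unsafe" until they pass through an annotated sanitizer.

class Tainted:
    def __init__(self, value, safe=False):
        self.value, self.safe = value, safe

def from_sensor(raw):                  # source: data enters the program as unsafe
    return Tainted(raw, safe=False)

def validate_range(t, lo, hi):         # annotated sanitizer: clears the flag
    if lo <= t.value <= hi:
        return Tainted(t.value, safe=True)
    raise ValueError("out-of-range input rejected")

def write_to_actuator(t):              # annotated sink: refuses unsafe data
    assert t.safe, "unsafe data reached an actuator sink"
    print("actuator set to", t.value)

if __name__ == "__main__":
    reading = from_sensor(42)
    write_to_actuator(validate_range(reading, 0, 100))  # ok
    # write_to_actuator(reading)  # would trip the sink check
```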
The need to process the variety, volume and velocity of data generated by today's Internet of Things (IoT) devices has pushed both academia and the industry to investigate new architectural alternatives to support the new challenges. As a result, Edge Computing (EC) has emerged to address these issues, by placing part of the cloud resources (e.g., computation, storage, logic) closer to the edge of the network, which allows faster and context-dependent data analysis and storage. However, as EC infrastructures grow, different providers who do not necessarily trust each other need to collaborate in order to serve different IoT devices. In this context, EC infrastructures, IoT devices and the data transiting the network all need to be subject to identity and provenance checks, in order to increase trust and accountability. Each device/data item in the network needs to be identified, and the provenance of its actions needs to be tracked. In this paper, we propose a blockchain- and container-based architecture that implements the W3C-PROV Data Model to track identities and provenance of all orchestration decisions of a business network. This architecture provides new forms of interaction between the different stakeholders, which supports trustworthy transactions and leads to a new decentralized interaction model for IoT-based applications.
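A simplified sketch of what a PROV-style record of an orchestration decision could look like is shown below. The entity, activity and agent names are hypothetical, the serialization is only loosely modelled on PROV-JSON, and the ledger append is stubbed out where a real deployment would submit a blockchain transaction.

```python
import json, time

def prov_record(entity, activity, agent):
    """Build a simplified PROV-style record (entity, activity, agent and relations)."""
    return {
        "entity":   {entity: {}},
        "activity": {activity: {"prov:startTime":
                                time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())}},
        "agent":    {agent: {}},
        "wasGeneratedBy":    {"_:g1": {"prov:entity": entity, "prov:activity": activity}},
        "wasAssociatedWith": {"_:a1": {"prov:activity": activity, "prov:agent": agent}},
    }

def append_to_ledger(record):
    # Stub: stand-in for submitting the record as a blockchain transaction.
    print(json.dumps(record, indent=2))

if __name__ == "__main__":
    append_to_ledger(prov_record(
        entity="container:sensor-gateway-v3",
        activity="orchestration:deploy-to-edge-node-7",
        agent="provider:edge-operator-A"))
```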
SDN networks rely mainly on a set of software-defined modules, running on generic hardware platforms, and managed by a central SDN controller. The tight coupling and lack of isolation between the controller and the underlying host limit the controller's resilience against host-based attacks and failures. That controller is a single point of failure and a target for attackers. "Linux containers" is a successful thin virtualization technique that enables encapsulated, host-isolated execution environments for running applications. In this paper we present PAFR, a controller sandboxing mechanism based on Linux containers. PAFR enables controller/host isolation, plug-and-play operation, failure-and-attack-resilient execution, and fast recovery. PAFR employs and manages live remote checkpointing and migration between different hosts to evade failures and attacks. Experiments and simulations show that the frequent employment of PAFR's live migration minimizes the chance of a successful attack/failure with limited to no impact on network performance.
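The claim that frequent live migration reduces the chance of a successful attack can be illustrated with a toy model: if an attack needs some uninterrupted dwell time on a host and every migration resets the attacker's progress, success becomes less likely as the migration period shrinks. The Monte Carlo sketch below uses arbitrary dwell-time and period values, not PAFR measurements.

```python
import random

def attack_success_rate(migration_period, attack_dwell_time, trials=10000):
    """Attacker succeeds only if it gets `attack_dwell_time` of uninterrupted
    time before the next migration resets the controller container."""
    wins = 0
    for _ in range(trials):
        arrival = random.uniform(0, migration_period)   # arrival within a cycle
        if migration_period - arrival >= attack_dwell_time:
            wins += 1
    return wins / trials

if __name__ == "__main__":
    for period in (120, 60, 30, 10):    # seconds between migrations (hypothetical)
        print(period, attack_success_rate(period, attack_dwell_time=20))
```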
Applications for the analysis of biomedical data are complex programs and often consist of multiple components. Reuse of existing solutions from external code repositories or program libraries is common in algorithm development. To ease reproducibility as well as the transfer of algorithms and required components into distributed infrastructures, Linux containers are increasingly used in environments that are at least partly connected to the internet. However, concerns about untrusted applications remain and are of high importance when medical data is processed. Additionally, the portability of the containers needs to be ensured by using only security technologies that do not require additional kernel modules. In this paper we describe measures and a solution to secure the execution of an example biomedical application for the normalization of multidimensional biosignal recordings. This application, the required runtime environment and the security mechanisms are installed in a Docker-based container. A fine-grained restricted environment (sandbox) for the execution of the application and the prevention of unwanted behaviour is created inside the container. The sandbox is based on the filtering of system calls, as they are required to interact with the operating system to access potentially restricted resources, e.g. the filesystem or network. Due to the low-level character of system calls, the creation of an adequate rule set for the sandbox is challenging. Therefore, the presented solution includes a monitoring component to collect the data required for defining the rules of the application sandbox. Performance evaluation shows no significant impact of the resulting sandbox on application execution, while detailed monitoring may increase runtime by over 420%.
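The monitor-then-restrict workflow described above can be sketched as follows: collect the system calls the application actually issues during a profiling run, then emit an allowlist profile in Docker's seccomp JSON format. This is a minimal sketch rather than the paper's tooling; it assumes plain `strace -f` output as the trace format, and any real rule set would need review of rarely exercised code paths.

```python
import json, re, sys

SYSCALL_LINE = re.compile(r"^(?:\[pid\s+\d+\]\s+)?(\w+)\(")   # e.g. 'openat(AT_FDCWD, ...'

def syscalls_from_strace(trace_path):
    """Collect the names of system calls observed in an `strace -f` log."""
    names = set()
    with open(trace_path) as f:
        for line in f:
            m = SYSCALL_LINE.match(line)
            if m:
                names.add(m.group(1))
    return sorted(names)

def docker_seccomp_profile(allowed):
    """Emit an allowlist profile in Docker's seccomp JSON format."""
    return {
        "defaultAction": "SCMP_ACT_ERRNO",   # deny everything that was not observed
        "syscalls": [{"names": allowed, "action": "SCMP_ACT_ALLOW"}],
    }

if __name__ == "__main__":
    profile = docker_seccomp_profile(syscalls_from_strace(sys.argv[1]))
    print(json.dumps(profile, indent=2))
    # Apply with:  docker run --security-opt seccomp=profile.json <image> ...
```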
Named Data Networking (NDN) is one of the future internet architectures and a clean-slate approach. NDN provides intelligent data retrieval using the principles of name-based symmetrical forwarding of Interest/Data packets and in-network caching. The continually increasing demand for rapid dissemination of large-scale scientific data is driving the use of NDN in data-intensive science experiments. In this paper, we establish an intercontinental NDN testbed. On the testbed, an NDN-based application targeting climate science as an example data-intensive science application is designed and implemented, with differentiated features compared to those of previous studies. We verify the justification for using NDN for climate science in the intercontinental network through performance comparisons between classical delivery techniques and NDN-based climate data delivery.
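To make the Interest/Data exchange and in-network caching concrete, here is a toy single-node forwarder with a Content Store, Pending Interest Table and FIB keyed on name prefixes. It is a didactic sketch, not the NDN software used in the testbed, and the climate-data names are hypothetical.

```python
# Toy NDN-style node: serve from the Content Store on a cache hit, otherwise
# record the Interest in the PIT and forward along the longest matching FIB prefix.

class NdnNode:
    def __init__(self, fib):
        self.cs, self.pit, self.fib = {}, {}, fib   # name->data, name->faces, prefix->face

    def on_interest(self, name, face):
        if name in self.cs:                          # in-network cache hit
            return ("data", name, self.cs[name], face)
        self.pit.setdefault(name, set()).add(face)   # remember who asked
        prefix = max((p for p in self.fib if name.startswith(p)), key=len, default=None)
        return ("forward", name, self.fib[prefix]) if prefix else ("drop", name)

    def on_data(self, name, data):
        self.cs[name] = data                         # cache for later requests
        return [("data", name, data, f) for f in self.pit.pop(name, set())]

if __name__ == "__main__":
    node = NdnNode(fib={"/climate/cmip6": "face-upstream"})
    print(node.on_interest("/climate/cmip6/tas/2020", "face-A"))   # forwarded upstream
    print(node.on_data("/climate/cmip6/tas/2020", b"chunk-0"))     # satisfies face-A
    print(node.on_interest("/climate/cmip6/tas/2020", "face-B"))   # served from cache
```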
Software-defined networks offer a promising framework for the implementation of cross-layer data-centric security policies in military systems. An important aspect of the design process for such advanced security solutions is the thorough experimental assessment and validation of proposed technical concepts prior to their deployment in operational military systems. In this paper, we describe an OpenFlow-based testbed, which was developed with a specific focus on the validation of SDN security mechanisms - including both the mechanisms for protecting the software-defined network layer and the cross-layer enforcement of higher-level policies, such as data-centric security policies. We also present initial experimentation results obtained using the testbed, which confirm its ability to validate simulation and analytic predictions. Our objective is to provide a sufficiently detailed description of the configuration used in our testbed so that it can be easily replicated and reused by other security researchers in their experiments.
Science is conducted collaboratively, often requiring knowledge sharing about computational experiments. When experiments include only datasets, they can be shared using Uniform Resource Identifiers (URIs) or Digital Object Identifiers (DOIs). An experiment, however, seldom includes only datasets, but more often includes software, its past execution, provenance, and associated documentation. The Research Object has recently emerged as a comprehensive and systematic method for aggregation and identification of diverse elements of computational experiments. While a necessary method, mere aggregation is not sufficient for the sharing of computational experiments. Other users must be able to easily recompute on these shared research objects. In this paper, we present the sciunit, a reusable research object in which aggregated content is recomputable. We describe a Git-like client that efficiently creates, stores, and repeats sciunits. We show through analysis that sciunits repeat computational experiments with minimal storage and processing overhead. Finally, we provide an overview of sharing and reproducible cyberinfrastructure based on sciunits gaining adoption in the domain of geosciences.
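One way to see where the "minimal storage overhead" of a Git-like client can come from is content-addressed storage: files shared across packaged experiments are stored exactly once. The sketch below is illustrative only and is not the sciunit implementation.

```python
import hashlib

# Git-like content-addressed object store: identical files referenced by several
# packaged experiments collapse to a single stored object.

class ObjectStore:
    def __init__(self):
        self.objects = {}                        # sha256 hex digest -> bytes

    def put(self, data):
        key = hashlib.sha256(data).hexdigest()
        self.objects.setdefault(key, data)       # duplicates are stored only once
        return key

    def package(self, files):
        """files: dict path -> bytes. Returns a manifest mapping path -> object hash."""
        return {path: self.put(data) for path, data in files.items()}

if __name__ == "__main__":
    store = ObjectStore()
    run1 = store.package({"input.csv": b"a,b\n1,2\n", "analysis.py": b"print('v1')"})
    run2 = store.package({"input.csv": b"a,b\n1,2\n", "analysis.py": b"print('v2')"})
    print(len(store.objects), "unique objects for", len(run1) + len(run2), "manifest entries")
```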
Cloud computing is revolutionizing many IT ecosystems by offering scalable computing resources that are easy to configure, use and inter-connect. However, this model has always been viewed with some suspicion, as it raises a wide range of security and privacy issues that need to be negotiated. This research focuses on the construction of a trust layer in cloud computing to build a trust relationship between cloud service providers and cloud users. In particular, we address the fact that container-based virtualisation provides weaker isolation than traditional VMs because of the shared use of the OS kernel and system components. Therefore, we build a trust layer to address the issues of weaker isolation whilst maintaining the performance and scalability of the approach. This paper has two objectives. Firstly, we propose a security system to protect containers from other guests through the addition of a Role-based Access Control (RBAC) model and the provision of strict data protection and security. Secondly, we provide a stress test using isolation benchmarking tools to evaluate the isolation in containers in terms of performance.
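A minimal sketch of the kind of RBAC check that could gate container operations is shown below; the roles, permissions and container names are illustrative assumptions rather than the model proposed in the paper.

```python
# Minimal RBAC sketch for container operations: roles grant permissions, and every
# request is checked before it can touch another tenant's container.

ROLE_PERMISSIONS = {
    "tenant-admin": {"start", "stop", "read-logs"},
    "auditor":      {"read-logs"},
}

ASSIGNMENTS = {("alice", "container-17"): "tenant-admin",
               ("bob",   "container-17"): "auditor"}

def authorize(user, container, action):
    role = ASSIGNMENTS.get((user, container))
    return role is not None and action in ROLE_PERMISSIONS[role]

if __name__ == "__main__":
    print(authorize("alice", "container-17", "stop"))        # True
    print(authorize("bob",   "container-17", "stop"))        # False: role lacks permission
    print(authorize("bob",   "container-42", "read-logs"))   # False: no role on that container
```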
The MP4 file has become the most widely used video media format available, and will most likely remain at the top for some time to come. This makes MP4 files an interesting candidate for steganography. With its size and structure, the format offers a challenge to steganography developers. While some attempts have been made to create a truly covert file, few are as successful as Martin Fiedler's TCSteg. TCSteg allows users to hide a TrueCrypt hidden volume in an MP4 file. The structure of the file makes it difficult to identify that a volume exists. In our analysis of TCSteg, we show how Fiedler's code works and how we may be able to detect the existence of steganography. We then implement these methods in the hope that other steganography analysts can use them to determine whether an MP4 file is a carrier file. Finally, we address the future of MP4 steganography.
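One simple way to start analysing an MP4 carrier candidate is to walk its top-level boxes (each headed by a 4-byte big-endian size and a 4-byte type) and report any bytes not accounted for by a declared box. The sketch below shows that basic heuristic; it is not the detection method developed in the paper, and the abstract notes that TCSteg's file structure is specifically designed to be hard to spot with naive checks of this kind.

```python
import struct, sys

# Walk the top-level MP4 boxes and report bytes not covered by any declared box --
# one simple heuristic for spotting appended or hidden payloads.

def walk_boxes(path):
    boxes, offset = [], 0
    with open(path, "rb") as f:
        file_size = f.seek(0, 2)
        f.seek(0)
        while offset + 8 <= file_size:
            size, box_type = struct.unpack(">I4s", f.read(8))
            if size == 1:                                   # 64-bit "largesize" follows
                size = struct.unpack(">Q", f.read(8))[0]
            elif size == 0:                                 # box runs to end of file
                size = file_size - offset
            boxes.append((box_type.decode("latin-1"), offset, size))
            offset += size
            f.seek(offset)
    return boxes, file_size, offset

if __name__ == "__main__":
    boxes, file_size, covered = walk_boxes(sys.argv[1])
    for box_type, off, size in boxes:
        print(f"{box_type:4s} offset={off} size={size}")
    if covered != file_size:
        print(f"warning: {file_size - covered} bytes not covered by any box")
```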
Security in cloud environments is always considered an issue, due to the lack of control over leased resources. In this paper, we present a solution that offers security-as-a-service by relying on Security Service Level Agreements (Security SLAs) as a means to represent the security features to be granted. In particular, we focus on a security mechanism that is automatically configured and activated in an as-a-service fashion in order to protect cloud resources against DoS attacks. The activities reported in this paper are part of a wider work carried out in the FP7-ICT programme project SPECS, which aims at building a framework offering Security-as-a-Service using an SLA-based approach. The proposed approach is founded on the adoption of SPECS Services to negotiate, enforce and monitor suitable security metrics, chosen by cloud customers, negotiated with the provider and included in a signed Security SLA.
There are relatively few studies on security-check waiting lines for screening cargo containers using queueing models. In this paper, we address two important measures of a security-check system, concerning security screening effectiveness and efficiency. The goal of this paper is to provide a modelling framework to understand the economic trade-offs embedded in container-inspection decisions. In order to analyze the policy initiatives, we develop a stylized queueing model with novel features pertaining to security checkpoints.
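As a generic illustration of the effectiveness/efficiency trade-off at a screening station (not the stylized model developed in the paper), consider a single M/M/1 queue: deeper inspection lowers the service rate, and the expected time in the system grows sharply as utilisation approaches one. The arrival and service rates below are arbitrary.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 queue: utilisation, mean number in system, mean time in system."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        return rho, float("inf"), float("inf")
    L = rho / (1 - rho)                       # mean number in the system
    W = 1 / (service_rate - arrival_rate)     # mean time in the system
    return rho, L, W

if __name__ == "__main__":
    arrival = 8.0                             # containers per hour (hypothetical)
    for service in (12.0, 10.0, 9.0):         # slower service = deeper inspection
        rho, L, W = mm1_metrics(arrival, service)
        print(f"mu={service:4.1f}  rho={rho:.2f}  L={L:5.2f}  W={W * 60:5.1f} min")
```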
Drinking water availability is a crucial problem that must be addressed in order to improve the quality of life of individuals living in developing nations. Improving water supply availability is important for public health, as it is the third highest risk factor for poor health in developing nations with high mortality rates. This project researched drinking water filtration for areas of Sub-Saharan Africa near existing bodies of water, where the populations are completely reliant on collecting from surface water sources: the most contaminated water source type. Water filtration methods that can be created entirely by the consumer would alleviate dependence on aid organizations in developing nations, put consumers in control, and improve public health. Filtration processes pass water through a medium that catches contaminants through physical entrapment or absorption and thus yields a cleaner effluent. When exploring different materials for filtration, removal of contaminants and hydraulic conductivity are the two most important considerations. Not only does the method have to treat the water, but it also has to do so quickly enough to produce potable water at a rate that keeps up with everyday needs. Cement is easily accessible in Sub-Saharan regions. Most concrete mixtures are not meant to be pervious, as concrete is a construction material used for its compressive strength; however, reduced water content in a cement mixture gives it higher permeability. Several concrete samples of varying thicknesses and water concentrations were created. Bacterial count tests were performed on both pre-filtered and filtered water samples. Concrete filtration does remove bacteria from drinking water; however, the method can still be improved upon.
Hadoop has become increasingly popular as it rapidly processes data in parallel. Cloud computing gives reliability, flexibility, scalability, elasticity and cost savings to cloud users. Deploying Hadoop in the cloud can benefit Hadoop users. Our evaluation exhibits that various internal cloud attacks can bypass current Hadoop security mechanisms, and compromised Hadoop components can be used to threaten the overall Hadoop system. It is urgent to improve compromise resilience so that Hadoop can maintain a relatively high security level when parts of it are compromised. Hadoop has two vulnerabilities that can dramatically impact its compromise resilience: the overloaded authentication key, and the lack of fine-grained access control at the data access level. We developed a security enhancement for public cloud-based Hadoop, named SEHadoop, to improve compromise resilience by enhancing isolation among Hadoop components and enforcing least access privilege for Hadoop processes. We have implemented the SEHadoop model, and demonstrated that SEHadoop fixes the above vulnerabilities with minimal or no run-time overhead, and effectively resists related attacks.
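To illustrate the direction of the fix for an overloaded authentication key, the sketch below derives per-component, per-block, per-operation tokens from a master secret with an HMAC, so a compromised process only holds narrowly scoped credentials. This is a generic least-privilege illustration; the token format, names and scopes are assumptions, not SEHadoop's actual design.

```python
import hashlib, hmac

# Scope-limited tokens: each process gets a token bound to a specific block and
# operation, derived from (never equal to) the master key, so a stolen token
# cannot be replayed for other data or other operations.

MASTER_KEY = b"cluster-master-secret"          # hypothetical secret, never handed out

def issue_token(component, block_id, operation):
    scope = f"{component}|{block_id}|{operation}".encode()
    return hmac.new(MASTER_KEY, scope, hashlib.sha256).hexdigest()

def verify(token, component, block_id, operation):
    return hmac.compare_digest(token, issue_token(component, block_id, operation))

if __name__ == "__main__":
    t = issue_token("node-manager-3", "blk_1073741825", "READ")
    print(verify(t, "node-manager-3", "blk_1073741825", "READ"))    # True
    print(verify(t, "node-manager-3", "blk_1073741825", "WRITE"))   # False: wrong scope
```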