Bibliography
Distributed Denial of Service (DDoS) is a sophisticated cyber-attack owing to the variety of its types and techniques. The traditional way to mitigate this attack is to deploy dedicated security appliances such as firewalls and load balancers. However, because of the limited capacity of such hardware and the potentially high volume of DDoS traffic, these appliances may not be able to defend against every attack. Cloud-based DDoS protection services were therefore introduced, allowing organizations to redirect their traffic to scrubbing centers in the cloud for filtering. This solution has drawbacks of its own, such as privacy violations and added latency. More recently, Network Functions Virtualization (NFV) and edge computing have been proposed as new networking service models. In this paper, we design a framework that leverages NFV and edge computing for DDoS mitigation through a two-stage process.
Outsourcing services to third-party providers comes with a high security cost: the providers must be fully trusted. Using trusted hardware can help, but current trusted execution environments do not adequately support services that process very large datasets. We present LASTGT, a system that bridges this gap by supporting the execution of self-contained services over a large state, with a small and generic trusted computing base (TCB). LASTGT uses widely deployed trusted hardware to guarantee integrity and verifiability of the execution on a remote platform, and it securely supplies data to the service through simple techniques based on virtual memory. As a result, LASTGT is general and applicable to many scenarios, such as computational genomics and databases, as we show in our experimental evaluation based on an implementation of LASTGT on a secure hypervisor. We also describe a possible implementation on Intel SGX.
Over the past few years we have articulated a theory of 'encrypted computing', in which data remains in encrypted form while being worked on inside a processor, by virtue of a modified arithmetic. The last two years have seen research and development on a standards-compliant processor showing that near-conventional speeds are attainable via this approach. Benchmark performance with the US flagship AES-128 encryption and a 1 GHz clock is now equivalent to a 433 MHz classic Pentium, and most block encryptions fit in AES's place. This summary article details how user data is protected by a system based on the processor from being read or interfered with by the computer operator, for those computing paradigms that entail trusting data-oriented computation in remote locations where it may be accessible to powerful and dishonest insiders. We combine (i) the processor that runs encrypted; (ii) a slightly modified conventional machine-code instruction set architecture with which security is achievable; and (iii) an 'obfuscating' compiler that takes advantage of its possibilities, forming a three-point system that provably provides cryptographic 'semantic security' for user data against the operator and system insiders.
With Software Defined Networking (SDN), the control plane logic of forwarding devices, switches and routers, is extracted and moved to an entity called the SDN controller, which acts as a broker between network applications and the physical network infrastructure. Failures of the SDN controller inhibit the network's ability to respond to new application requests and to react to events coming from the physical network. Despite the huge impact that a controller has on network performance as a whole, a comprehensive study of its failure dynamics is still missing in the literature. The goal of this paper is to analyse, model and evaluate the impact that different controller failure modes have on its availability. A model in the formalism of Stochastic Activity Networks (SAN) is proposed and applied to a case study of a hypothetical controller based on commercial controller implementations. In the case study we show how the proposed model can be used to estimate the controller's steady-state availability, quantify the impact of different failure modes on controller outages, as well as the effects of software ageing and the impact of software reliability growth on the transient behaviour.
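As background for the availability figure such a model produces (a standard definition, not a result of the paper), the steady-state availability of a repairable component is the long-run fraction of time it is operational:

A_{\mathrm{ss}} = \frac{\mathrm{MTTF}}{\mathrm{MTTF} + \mathrm{MTTR}}

The SAN model refines this single-number view by distinguishing individual failure modes, software ageing, and repair processes.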
The performance, dependability, and security of cloud service systems are vital for ongoing operation, control, and support. Thus, controlled improvement of a service requires a comprehensive analysis and systematic identification of its fundamental underlying cloud constituents using a rigorous discipline. In this paper, we introduce a framework which helps identify areas for potential cloud service enhancements. A cloud service cannot be completed if there is a failure in any of its underlying resources. In addition, resources are kept offline for scheduled maintenance. We use redundant resources to mitigate the impact of failures/maintenance and ensure performance and dependability, which helps enhance security as well. For example, at least four replicas are required to defend against the intrusion of a single instance or a single malicious attack/fault, as defined by Byzantine Fault Tolerance (BFT). Data centers with high performance, dependability, and security are outsourced to the cloud computing environment with greater flexibility in the cost of owning the computing infrastructure. In this paper, we analyze the effectiveness of redundant resource usage in terms of a dependability metric and the cost of service deployment based on the priority of service requests. The trade-offs among dependability, cost, and security under different redundancy schemes are characterized through comprehensive analytical models.
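For reference, the replica count quoted above follows from the standard Byzantine fault tolerance bound, stated here as background rather than as a contribution of the paper:

n \geq 3f + 1, \qquad \text{so tolerating } f = 1 \text{ Byzantine fault requires } n \geq 4 \text{ replicas.}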
The world is becoming an immense critical information infrastructure (CII), with the fast and increasing entanglement of utilities, telecommunications, the Internet, the cloud, and the emerging IoT tissue. This may create enormous opportunities, but it also brings about similarly extreme security and dependability risks. We predict an increase in very sophisticated targeted attacks, or advanced persistent threats (APTs), and claim that this calls for expanding the frontier of the security and dependability methods and techniques used in our current CII. Extreme threats require extreme defenses: we propose resilience as a unifying paradigm to endow systems with the capability of dynamically and automatically handling extreme adversary power and sustaining perpetual and unattended operation. In this position paper, we present this vision and describe our methodology, as well as the assurance arguments we make for the ultra-resilient components and protocols they enable, illustrated with case studies in progress.
Sophisticated cyber attacks by state-sponsored and criminal actors continue to plague government and industrial infrastructure. Intuitively, partitioning cyber systems into survivable, intrusion-tolerant compartments is a good idea: it prevents witting and unwitting insiders from moving laterally and reaching back to their command and control (C2) servers. However, there is a lack of artifacts that can predict the effectiveness of this approach in a realistic setting. We extend earlier work by relaxing simplifying assumptions and providing a new attacker-facing metric. In this article, we propose new closed-form mathematical models and a discrete-time simulation to predict three critical statistics: the probability of compromise, the probability of external host compromise, and the probability of reachback. The results of our new artifacts agree with one another and with previous work, which suggests they are internally valid and a viable method for evaluating the effectiveness of cyber zone defense.
In recent years, more and more multimedia data are generated and transmitted in various fields, so many encryption methods for multimedia content have been put forward to satisfy various applications. However, there are still open issues: each encryption method has its advantages and drawbacks. Our main goal is to provide a solution for multimedia encryption that satisfies the target application's constraints and the performance metrics of the encryption algorithm. The Advanced Encryption Standard (AES) is the most popular algorithm used in symmetric-key cryptography. Furthermore, chaotic encryption is a new research direction in cryptography characterized by high initial-value sensitivity and good randomness. In this paper we propose a hybrid video cryptosystem which combines two encryption techniques. The proposed cryptosystem realizes video encryption through chaos and AES in CTR mode. Experimental results and security analysis demonstrate that this cryptosystem is a highly efficient and robust system for video encryption.
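A minimal sketch of the general chaos-plus-AES-CTR construction outlined above, assuming pycryptodome and a logistic map as the chaotic source; the key-derivation details and parameters are illustrative assumptions, not the authors' exact design:

# Illustrative only: derive an AES key/nonce from a logistic-map keystream,
# then encrypt a video block with standard AES in CTR mode (pycryptodome).
from Crypto.Cipher import AES

def logistic_bytes(seed: float, r: float, n: int) -> bytes:
    """Derive n pseudo-random bytes from logistic-map iterations x <- r*x*(1-x)."""
    x, out = seed, bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)            # chaotic iteration, x stays in (0, 1)
        out.append(int(x * 256) & 0xFF)  # quantize each state to one byte
    return bytes(out)

def encrypt_frame(frame: bytes, seed: float = 0.3141592, r: float = 3.99):
    key = logistic_bytes(seed, r, 16)        # 128-bit AES key from the chaotic stream
    nonce = logistic_bytes(seed / 2, r, 8)   # 64-bit CTR nonce, also chaos-derived
    cipher = AES.new(key, AES.MODE_CTR, nonce=nonce)
    return nonce, cipher.encrypt(frame)

nonce, ciphertext = encrypt_frame(b"\x00" * 1024)  # e.g. one raw video block

In such a scheme the chaos parameters (seed, r) act as part of the key material, and the per-frame nonce must never repeat under the same key.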
We propose to use a genetic algorithm to evolve novel reconfigurable hardware that implements elliptic curve cryptographic combinational logic circuits. Elliptic curve cryptography offers a high security level with a short key length, making it one of the most popular public-key cryptosystems. Furthermore, there are no known sub-exponential algorithms for solving the elliptic curve discrete logarithm problem. These advantages render elliptic curve cryptography attractive for incorporation in many future cryptographic applications and protocols. However, elliptic curve cryptography has proven to be vulnerable to non-invasive side-channel analysis attacks such as timing, power, visible-light, electromagnetic, and acoustic analysis attacks. In this paper, we use a genetic algorithm to address this vulnerability by evolving combinational logic circuits that correctly implement elliptic curve cryptographic hardware and are also resistant to simple timing and power analysis attacks. Using a fitness function composed of multiple objectives (maximizing correctness, minimizing propagation delay, and minimizing circuit size), we can generate correct combinational logic circuits resistant to non-invasive side-channel attacks. To the best of our knowledge, this is the first work to evolve a cryptography circuit using a genetic algorithm. We implement the evolved circuits in hardware on a Xilinx Kintex-7 FPGA. Results reveal that the evolutionary algorithm can successfully generate correct, side-channel-resistant combinational circuits with negligible propagation delay.
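A sketch of the kind of multi-objective fitness function the abstract outlines; the weights, the circuit summary, and the delay/size estimates are illustrative assumptions rather than the authors' actual encoding:

# Illustrative fitness: reward functional correctness, penalize delay and gate count.
from dataclasses import dataclass

@dataclass
class CircuitStats:
    vectors_correct: int      # test vectors the evolved circuit computes correctly
    vectors_total: int
    critical_path_ns: float   # estimated propagation delay
    gate_count: int           # circuit size

def fitness(c: CircuitStats, w_delay: float = 0.1, w_size: float = 0.01) -> float:
    correctness = c.vectors_correct / c.vectors_total
    # Correctness dominates; delay and area act as penalties, which also discourages
    # unbalanced, data-dependent paths that leak timing/power information.
    return correctness - w_delay * c.critical_path_ns - w_size * c.gate_count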
GENI (Global Environment for Network Innovations) is a National Science Foundation (NSF) funded program which provides a virtual laboratory for networking and distributed systems research and education. It is well suited for exploring networks at scale, thereby promoting innovations in network science, security, services and applications. GENI allows researchers to obtain compute resources from locations around the United States, connect them using the 100G Internet2 L2 service, install custom software or even custom operating systems on these compute resources, control how the network switches in their experiment handle traffic flows, and run their own L3 and above protocols. The GENI architecture incorporates cloud federation. With this federation, cloud resources can be federated and/or a community of clouds can be formed. The heart of federation is user identity and the ability to “advertise” cloud resources into the community, including compute, storage, and networking. GENI administrators can carve out which resources are available to the community, and hence a portion of GENI resources is reserved for internal consumption. The GENI architecture also provides “stitching” of the compute and storage resources researchers request. This provides an L2 network domain over Internet2's 100G network, and researchers can run their Software Defined Networking (SDN) controllers on the provisioned L2 network domain for complete control of network traffic. This capability is useful for large science data transfers (bypassing security devices for high throughput). The Renaissance Computing Institute (RENCI), a research institute in the state of North Carolina, has developed ORCA (Open Resource Control Architecture), a GENI control framework. ORCA is a distributed resource orchestration system that serves science experiments. ORCA provides compute resources as virtual machines as well as bare metal. The ORCA-based GENI rack was designed to serve both High Throughput Computing (HTC) and High Performance Computing (HPC) types of computation. Although GENI is primarily used in various universities and research entities today, the GENI architecture can be leveraged in commercial, aerospace and government settings. This paper goes over the architecture of GENI and discusses the GENI architecture for scientific computing experiments.
Energy efficient High-Performance Computing (HPC) is becoming increasingly important. Recent ventures into this space have introduced an unlikely candidate to achieve exascale scientific computing hardware with a small energy footprint. ARM processors and embedded GPU accelerators originally developed for energy efficiency in mobile devices, where battery life is critical, are being repurposed and deployed in the next generation of supercomputers. Unfortunately, the performance of executing scientific workloads on many of these devices is largely unknown, yet the bulk of computation required in high-performance supercomputers is scientific. We present an analysis of one such scientific code, in the form of Gaussian Elimination, and evaluate both execution time and energy used on a range of embedded accelerator SoCs. These include three ARM CPUs and two mobile GPUs. Understanding how these low power devices perform on scientific workloads will be critical in the selection of appropriate hardware for these supercomputers, for how can we estimate the performance of tens of thousands of these chips if the performance of one is largely unknown?
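A sketch of the benchmarked kernel: plain Gaussian elimination with partial pivoting, plus a wall-clock timing harness of the kind used to compare devices (NumPy assumed; the matrix size and timing method are illustrative choices, not the paper's exact benchmark):

import time
import numpy as np

def gaussian_eliminate(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Solve Ax = b by forward elimination with partial pivoting, then back substitution."""
    A, b = A.astype(float), b.astype(float)               # work on copies
    n = len(b)
    for k in range(n):
        p = k + int(np.argmax(np.abs(A[k:, k])))          # partial pivoting
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]       # swap rows k and p
        for i in range(k + 1, n):
            f = A[i, k] / A[k, k]
            A[i, k:] -= f * A[k, k:]
            b[i] -= f * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                        # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

n = 512
A, b = np.random.rand(n, n) + n * np.eye(n), np.random.rand(n)
t0 = time.perf_counter()
x = gaussian_eliminate(A, b)
print(f"{n}x{n} solve: {time.perf_counter() - t0:.3f} s, residual {np.linalg.norm(A @ x - b):.2e}")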
In this paper, a dual-field elliptic curve cryptographic processor is proposed to support arbitrary curves of up to 576 bits in either field. In addition, two heterogeneous function units are coupled with the processor for parallel operations in the finite field, based on an analysis of the characteristics of elliptic curve cryptographic algorithms. To simplify the hardware complexity, a clustering technique is adopted in the processor. Finally, a fast Montgomery modular division algorithm and its implementation are proposed based on Kaliski's Montgomery modular inversion. Using UMC 90-nm CMOS 1P9M technology, the proposed processor occupies 0.86 mm² and can perform a scalar multiplication in 0.34 ms over a 160-bit GF(p) and in 0.22 ms over GF(2^160). Compared to other elliptic curve cryptographic processors, our design is advantageous in hardware efficiency while maintaining moderate speed.
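For context, a software sketch of Kaliski's two-phase Montgomery modular inverse, the classical algorithm the proposed division unit builds on (written for clarity, not as the paper's hardware datapath):

def almost_montgomery_inverse(a: int, p: int):
    """Kaliski phase 1: returns (a^-1 * 2^k mod p, k) using only shifts, adds, subtracts."""
    u, v, r, s, k = p, a, 0, 1, 0
    while v > 0:
        if u % 2 == 0:
            u, s = u // 2, 2 * s
        elif v % 2 == 0:
            v, r = v // 2, 2 * r
        elif u > v:
            u, r, s = (u - v) // 2, r + s, 2 * s
        else:
            v, s, r = (v - u) // 2, s + r, 2 * r
        k += 1
    if r >= p:
        r -= p
    return p - r, k

def mod_inverse(a: int, p: int) -> int:
    """Phase 2: strip the 2^k factor by halving modulo p, k times."""
    r, k = almost_montgomery_inverse(a, p)
    for _ in range(k):
        r = r // 2 if r % 2 == 0 else (r + p) // 2
    return r

assert (mod_inverse(3, 7) * 3) % 7 == 1  # 3^-1 mod 7 = 5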
In the presence of known and unknown vulnerabilities in program code and control flow, virtual-machine-like isolation and sandboxing, which confine the maliciousness of a process by monitoring and controlling the behaviour of the untrusted application, are an effective strategy. A confined malicious application cannot affect system resources or other applications running on the same operating system. However, present sandboxing techniques have drawbacks ranging from scope to methodology. Some proposed techniques restrict only specific aspects of execution, e.g. system calls and file-system access. Likewise, techniques that truly isolate the application by providing a separate execution environment require either kernel modifications or a full-blown operating system; moreover, these do not provide isolation from top to bottom but only virtualize operating system services. In this paper, we propose a design that confines a native Linux process with virtual-machine-equivalent isolation by using hardware virtualization extensions, with nominal initialization and acceptable execution overheads. We implemented a prototype, called Process Virtual Machine, that transitions a native process into a virtual machine, provides the minimal possible execution environment, and intercepts and virtualizes system calls to execute them on the host kernel. Experimental results show the effectiveness of the proposed technique.
Devices in the Internet of Things (IoT) are frequently (i) resource-constrained, and (ii) deployed in unmonitored, physically unsecured environments. Securing these devices requires tractable cryptographic protocols as well as cost-effective tamper-resistance solutions. We propose and evaluate cryptographic protocols that leverage physical unclonable functions (PUFs): circuits whose input-to-output mapping depends on the unique characteristics of the physical hardware on which they are executed. PUF-based protocols have the benefit of minimizing private-key exposure, as well as providing cost-effective tamper resistance. We present and experimentally evaluate an elliptic-curve-based variant of a theoretical PUF-based authentication protocol proposed previously in the literature. Our work improves over an existing proof-of-concept implementation, which relied on the discrete logarithm problem as proposed in the original work. In contrast, our construction uses elliptic curve cryptography, which substantially reduces the computational and storage burden on the device. We describe PUF-based algorithms for device enrollment, authentication, decryption, and digital signature generation. The performance of each construction is experimentally evaluated on a resource-constrained device to demonstrate tractability in the IoT domain. We demonstrate that our implementation achieves practical performance results while also providing realistic security. Our work demonstrates that PUF-based protocols may be practically and securely deployed on low-cost, resource-constrained IoT devices.
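A hedged illustration of the general idea rather than the paper's exact protocol: the device regenerates an ECC private key from a PUF response on demand, answers a nonce challenge with an ECDSA signature, and never stores a long-term secret. puf_response() is a hypothetical placeholder for the hardware readout; the Python cryptography package stands in for the device's ECC implementation:

import os, hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def puf_response(challenge: bytes) -> bytes:
    # Placeholder for the physically unclonable hardware mapping on the device.
    return hashlib.sha256(b"device-unique-silicon" + challenge).digest()

def puf_private_key(challenge: bytes) -> ec.EllipticCurvePrivateKey:
    # Regenerate the key on demand from the PUF response; nothing secret is stored.
    secret = int.from_bytes(puf_response(challenge), "big") >> 1  # keep below curve order
    return ec.derive_private_key(secret, ec.SECP256R1())

# Enrollment: the verifier stores only (challenge, public key).
challenge = b"c-0001"
public_key = puf_private_key(challenge).public_key()

# Authentication: the device signs a fresh nonce; the verifier checks it (raises if invalid).
nonce = os.urandom(32)
signature = puf_private_key(challenge).sign(nonce, ec.ECDSA(hashes.SHA256()))
public_key.verify(signature, nonce, ec.ECDSA(hashes.SHA256()))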
This work presents a highly reliable and tamper-resistant design of a Physical Unclonable Function (PUF) exploiting Resistive Random Access Memory (RRAM). The RRAM PUF properties such as uniqueness and reliability are experimentally measured on 1 kb HfO2-based RRAM arrays. Firstly, our experimental results show that the selection of the split reference and the offset of the split sense amplifier (S/A) significantly affect the uniqueness. More dummy cells are able to generate a more accurate split reference, and relaxing the transistor sizes of the split S/A can reduce the offset, thus achieving better uniqueness. The average inter-Hamming distance (HD) of 40 RRAM PUF instances is 42%. Secondly, we propose using the sum of the read-out currents of multiple RRAM cells to generate one response bit, which statistically minimizes the risk of early retention failure of a single cell. The measurement results show that with 8 cells per bit, 0% intra-HD can be maintained for more than 50 hours at 150 °C, or equivalently 10 years at 69 °C by 1/kT extrapolation. Finally, we propose a layout obfuscation scheme where all the S/As are randomly embedded into the RRAM array to improve the RRAM PUF's resistance against invasive tampering. The RRAM cells are uniformly placed between M4 and M5 across the array. If an adversary attempts to invasively probe the output of an S/A, he has to remove the top-level interconnect and destroy the RRAM cells between the interconnect layers. Therefore, the RRAM PUF has a “self-destructive” feature. The hardware overhead of the proposed design strategies is benchmarked in a 64 × 128 RRAM PUF array at 65 nm; while these optimization strategies increase latency, energy and area over a naive implementation, they significantly improve the reliability and security.
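A toy sketch of the multi-cell response-bit idea described above: sum the read-out currents of several RRAM cells and compare the sum against the split reference, so a single drifting cell rarely flips the bit. The current values and reference are illustrative numbers, not measured data:

def response_bit(cell_currents_uA, split_reference_uA):
    # One response bit from the summed current of the cells assigned to it.
    return 1 if sum(cell_currents_uA) > split_reference_uA else 0

# 8 cells per bit, as in the reported measurement setup (numbers are made up).
bit = response_bit([3.2, 2.9, 3.4, 3.1, 2.8, 3.3, 3.0, 3.2], split_reference_uA=24.0)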
Lightweight block ciphers, which are required for IoT devices, have attracted attention. Simeck, one of the most popular lightweight block ciphers, can be implemented on IoT devices with the smallest area. Regarding hardware security, the threat of electromagnetic analysis has been reported; however, an electromagnetic analysis of Simeck has not. Therefore, this study proposes a dedicated electromagnetic analysis for the lightweight block cipher Simeck to help ensure the safety of IoT devices in the future. To our knowledge, this is the first electromagnetic analysis of Simeck. Experiments using an FPGA demonstrate the validity of the proposed method.
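For context, a software sketch of the Simeck round function, the operation such a side-channel analysis would typically target; the word size and inputs are illustrative (Simeck32/64 uses 16-bit words), and the attack itself is not shown:

MASK = 0xFFFF  # 16-bit words for Simeck32/64

def rol(x: int, r: int) -> int:
    return ((x << r) | (x >> (16 - r))) & MASK

def simeck_round(x: int, y: int, k: int):
    """One Feistel round: f(x) = (x AND (x <<< 5)) XOR (x <<< 1)."""
    fx = (x & rol(x, 5)) ^ rol(x, 1)
    return (y ^ fx ^ k) & MASK, x

# An electromagnetic analysis would correlate measured traces with a leakage model
# (e.g. the Hamming distance of the state register) over guesses of the round key k.
state = simeck_round(0x6565, 0x6877, 0x1918)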
Software Defined Networking (SDN) promises network operators a dramatic simplification of network management. It provides flexibility and innovation through network programmability. With SDN, network management moves from codifying functionality in terms of low-level device configuration to building software that facilitates network management and debugging [1]. SDN provides new techniques to solve long-standing problems in networking, such as routing, by separating the complexity of state distribution from network specification. Despite all the hype surrounding SDN, exploiting its full potential is demanding. Security is still the major issue and a striking challenge that reduces the growth of SDN. Moreover, the introduction of various architectural components and novel SDN entities poses new security issues and threats. SDN is considered a major target for digital threats and cyber-attacks [2], with potentially more devastating effects than in conventional networks. The initial SDN design did not consider security, so security must be raised on the agenda. This article discusses the security solutions proposed to secure SDN. We categorize the security solutions by presenting a thematic taxonomy based on SDN architectural layers/interfaces [3], security measures and goals, and simulation frameworks. Moreover, the literature also points out the possible attacks [2] targeting different layers/interfaces of SDN. For securing SDN, the potential requirements and their key enablers are also identified and presented. The article also sketches the design of secure and dependable SDN. Finally, we discuss open issues and challenges of SDN security that may appropriately be handled by professionals and researchers in the future.
Multimedia authentication is an integral part of multimedia signal processing in many real-time and security-sensitive applications, such as video surveillance. In such applications, a full-fledged video digital rights management (DRM) mechanism is not applicable due to the real-time requirement and the difficulties in incorporating complicated license/key management strategies. This paper investigates the potential of multimedia authentication from a brand new angle by employing hardware-based security primitives, such as physical unclonable functions (PUFs). We show that the hardware security approach is not only capable of accomplishing authentication for both the hardware device and the multimedia stream but, more importantly, introduces minimal performance, resource, and power overhead. We justify our approach using a prototype PUF implementation on Xilinx FPGA boards. Our experimental results on real hardware demonstrate the high security and low overhead in multimedia authentication obtained by using hardware security approaches.
Cyber-physical system integrity requires both hardware and software security. Many cyber attacks succeed because they are designed to selectively target a specific hardware or software component in an embedded system and trigger its failure. Existing security measures also use attack-vector models and isolate the malicious component as a counter-measure. Isolated security primitives do not provide the overall trust required in an embedded system. We propose trust enhancements to a hardware security platform, where the trust specifications are implemented in both software and hardware. This distribution of trust makes it difficult for a hardware-only or software-only attack to cripple the system. The proposed approach is applied to a smart grid application consisting of third-party soft IP cores, where an attack on such a module could result in a blackout. System integrity is preserved in the event of an attack, and the anomalous behavior of the IP core is recorded by a supervisory module. The IP core also provides a snapshot of its trust metric, which is logged for further diagnostics.