Bibliography
Cloud computing is an important part of modern technology. The usefulness of the Cloud is increasing day by day, and more and more security problems are arising along with it. Two of the major threats to the Cloud are improper authentication and multi-tenancy. According to specialists, multi-tenancy brings both benefits and risks. Security protocols are available, but it is difficult to claim that these protocols are perfect and ensure complete protection. The purpose of this paper is to propose an integrated model that improves Cloud security for both authentication and multi-tenancy. Multi-tenancy means sharing resources and virtualization among clients. Since multi-tenancy allows multiple users to access the same resources simultaneously, there is a high probability of confidential data being accessed without proper privileges. Our model includes the Kerberos authentication protocol to strengthen authentication. During our research on Kerberos we found flaws in its encryption method that have been noted in several IEEE conference papers; to address this weakness we adopt Elliptic Curve Cryptography. To mitigate the risks that arise from multi-tenancy, we propose a Resource Allocation Manager Unit, a Control Database, and a Resource Allocation Map. This part of the model manages resource allocation for the users.
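The abstract proposes substituting Elliptic Curve Cryptography for Kerberos's weaker encryption step but gives no construction. Below is a minimal sketch of the underlying primitive, an elliptic-curve Diffie-Hellman key agreement over a deliberately tiny textbook curve; the curve parameters, private keys, and their use as a session-key seed are illustrative assumptions, not the paper's scheme, and a real deployment would use a standardized curve and a vetted library.

```python
# Toy elliptic-curve Diffie-Hellman (ECDH) sketch. Curve: y^2 = x^3 + a*x + b
# over GF(p). Parameters are tiny textbook values and are NOT secure.
p, a, b = 17, 2, 2          # small prime field and curve coefficients (illustrative)
G = (5, 1)                  # generator point on this toy curve

def inv(x):                 # modular inverse via Fermat's little theorem
    return pow(x, p - 2, p)

def add(P, Q):              # elliptic-curve point addition (None = point at infinity)
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        m = (3 * x1 * x1 + a) * inv(2 * y1) % p
    else:
        m = (y2 - y1) * inv(x2 - x1) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def mul(k, P):              # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

assert (G[1] ** 2 - (G[0] ** 3 + a * G[0] + b)) % p == 0   # G lies on the curve

ka, kb = 7, 11               # private keys of client and authentication server (hypothetical)
A, B = mul(ka, G), mul(kb, G)    # public keys exchanged during authentication
shared_client = mul(ka, B)       # both sides derive the same point, which could
shared_server = mul(kb, A)       # seed the session key used in the ticket exchange
assert shared_client == shared_server
```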
To improve customer experience, datacenter operators offer support for simplifying application and resource management. For example, running workloads of workflows on behalf of customers is desirable, but requires increasingly sophisticated autoscaling policies, that is, policies that dynamically provision resources for the customer. Although selecting and tuning autoscaling policies is a challenging task for datacenter operators, so far relatively few studies have investigated the performance of autoscaling for workloads of workflows. Complementing previous knowledge, in this work we present the first comprehensive performance study in the field. Using trace-based simulation, we compare state-of-the-art autoscaling policies across multiple application domains, workload arrival patterns (e.g., burstiness), and system utilization levels. We further investigate the interplay between autoscaling and regular allocation policies, and the complexity cost of autoscaling. Our quantitative study focuses not only on traditional performance metrics and on state-of-the-art elasticity metrics, but also on time- and memory-related autoscaling-complexity metrics. Our main results give strong and quantitative evidence about previously unreported operational behavior, for example, that autoscaling policies perform differently across application domains and that allocation and provisioning policies should be co-designed.
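The policies compared in the study are not reproduced in the abstract; as a point of reference, the sketch below shows the general shape of a simple threshold-based autoscaling rule. All thresholds, limits, and names are illustrative assumptions, not the workflow-aware policies the paper evaluates.

```python
# Minimal threshold-based autoscaling sketch (illustrative only).
def autoscale(pending_tasks: int, busy_vms: int, total_vms: int,
              scale_out_util: float = 0.8, scale_in_util: float = 0.3,
              min_vms: int = 1, max_vms: int = 50) -> int:
    """Return the VM count to provision for the next interval."""
    utilization = busy_vms / total_vms if total_vms else 1.0
    if (utilization > scale_out_util or pending_tasks > total_vms) and total_vms < max_vms:
        return min(max_vms, total_vms + max(1, pending_tasks // 2))   # scale out
    if utilization < scale_in_util and total_vms > min_vms:
        return max(min_vms, total_vms - 1)                            # scale in conservatively
    return total_vms                                                   # hold steady

# Example: 12 queued workflow tasks and 9 of 10 VMs busy -> provision more VMs.
print(autoscale(pending_tasks=12, busy_vms=9, total_vms=10))
```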
The globalization and outsourcing of the semiconductor industry have raised serious concerns about the trustworthiness of hardware. Importing third-party IP cores into integrated circuit designs has opened the door to new forms of attacks on hardware, and Hardware Trojans embedded in third-party IPs have made a secure IC design process a necessity. Design-for-Trust techniques aimed at detecting Hardware Trojans come with overhead in terms of area, latency, and power consumption. In this work, we present a Cuckoo Search algorithm based design space exploration process for finding low-cost hardware solutions during High Level Synthesis. The exploration is conducted with respect to datapath resource allocation for single and nested loops. The proposed algorithm is compared with existing Hardware Trojan detection mechanisms, and experimental results show that it achieves a 3x improvement in cost compared with existing algorithms.
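The abstract names a Cuckoo Search based exploration but gives no pseudocode. The skeleton below shows a generic cuckoo search over an abstract cost function; the step distribution, parameters, bounds, and the toy cost are placeholders and do not represent the paper's Trojan-detection-aware cost model for datapath allocation.

```python
import random

# Generic Cuckoo Search skeleton (illustrative; the paper applies the idea to
# datapath resource allocation with a Design-for-Trust cost model).
def levy_step(scale=0.01):
    # Crude heavy-tailed step; enough to illustrate the exploration behaviour.
    u, v = random.gauss(0, 1), random.gauss(0, 1)
    return scale * u / (abs(v) ** 0.5 + 1e-9)

def cuckoo_search(cost, dim, bounds, n_nests=15, pa=0.25, iters=200):
    lo, hi = bounds
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=cost)
    for _ in range(iters):
        # New candidate via a Levy-flight perturbation toward the current best.
        i = random.randrange(n_nests)
        candidate = [min(hi, max(lo, x + levy_step() * (x - b)))
                     for x, b in zip(nests[i], best)]
        j = random.randrange(n_nests)
        if cost(candidate) < cost(nests[j]):
            nests[j] = candidate
        # Abandon a fraction pa of the worst nests and rebuild them randomly.
        nests.sort(key=cost)
        for k in range(int(pa * n_nests)):
            nests[-(k + 1)] = [random.uniform(lo, hi) for _ in range(dim)]
        best = min(nests + [best], key=cost)
    return best, cost(best)

# Toy cost: pretend each dimension is a resource count and penalise deviation
# from a hypothetical low-cost allocation of 3 units per resource type.
toy_cost = lambda x: sum((v - 3) ** 2 for v in x)
print(cuckoo_search(toy_cost, dim=4, bounds=(0, 10)))
```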
Armed forces have recently sought to bring Internet of Things technology to the battlefield to improve the effectiveness of military operations, giving rise to the Internet of Battlefield Things (IoBT). Because the “combat cloud” network for IoBT suffers from high processing latency and low reliability in the battlefield environment, this paper proposes a novel “combat cloud-fog” network architecture for IoBT. The architecture adds a fog computing layer, consisting of edge network equipment close to the users, to the “combat cloud” network in order to reduce latency and enhance reliability. Since the computing capability of fog equipment is limited, distributed computing must be implemented in the “combat cloud-fog” architecture; we therefore study the distributed load balancing problem of the fog computing layer. Moreover, a distributed generalized diffusion strategy is proposed to decrease latency and enhance the stability and survivability of the “combat cloud-fog” network. Simulation results indicate that the load balancing strategy based on the generalized diffusion algorithm decreases task response latency and effectively supports the processing of battlefield information, making it suitable for the “combat cloud-fog” network architecture.
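The generalized diffusion strategy itself is not specified in the abstract. The sketch below shows the classic local diffusion update on which such strategies build: each fog node repeatedly shifts a fraction of its load imbalance to each neighbour, so loads converge toward the network-wide average. The topology, loads, and diffusion coefficient are illustrative assumptions.

```python
# Local diffusion load balancing among fog nodes (illustrative sketch).
def diffuse(load, neighbors, alpha=0.2, rounds=50):
    """Each edge transfers alpha * (load difference) per round."""
    load = dict(load)
    for _ in range(rounds):
        new_load = dict(load)
        for u, neighs in neighbors.items():
            for v in neighs:
                delta = alpha * (load[u] - load[v])   # positive: u sends work to v
                new_load[u] -= delta
                new_load[v] += delta
        load = new_load
    return load

# Hypothetical 4-node fog layer in a line topology with unbalanced task counts;
# each undirected edge is listed once.
neighbors = {"f1": ["f2"], "f2": ["f3"], "f3": ["f4"], "f4": []}
print(diffuse({"f1": 40, "f2": 10, "f3": 5, "f4": 1}, neighbors))
```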
As computing has evolved from personal computers to online Internet of Things (IoT) services and applications, security risks have also evolved into a major concern. Fog computing enhances the reliability and availability of online services through greater heterogeneity and an increased number of computing servers; however, security remains an open challenge. Various trust models have been proposed to measure the security strength of available service providers. We utilize the quantized security of datacenters and propose a new security-based service broker policy (SbSBP) for the Fog computing environment that allocates the optimal datacenter(s) to serve users' requests based on their requirements of cost, time, and security. Further, considering the dynamic nature of Fog computing, the concept of dynamic reconfiguration has been added. Comparative analysis of simulation results shows the effectiveness of the proposed policy in incorporating users' requirements into the decision-making process.
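A minimal sketch of the broker idea follows: rank datacenters by a weighted combination of cost, response time, and quantized security level, subject to a user's minimum security requirement. The weights, fields, and scoring form are assumptions for illustration; the SbSBP decision logic in the paper is more involved and includes dynamic reconfiguration.

```python
# Security-based service broker sketch (illustrative fields and weights).
datacenters = [
    {"name": "dc1", "cost": 0.9, "latency_ms": 40, "security_level": 3},
    {"name": "dc2", "cost": 0.5, "latency_ms": 90, "security_level": 5},
    {"name": "dc3", "cost": 0.7, "latency_ms": 60, "security_level": 4},
]

def broker(dcs, min_security, w_cost=0.4, w_time=0.3, w_sec=0.3):
    eligible = [d for d in dcs if d["security_level"] >= min_security]
    if not eligible:
        return None                      # nothing suitable: reconfigure instead
    def score(d):
        # Lower cost/latency and higher security give a better (lower) score.
        return (w_cost * d["cost"]
                + w_time * d["latency_ms"] / 100
                - w_sec * d["security_level"] / 5)
    return min(eligible, key=score)

print(broker(datacenters, min_security=4)["name"])
```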
We present a formal method for computing the best security provisioning for Internet of Things (IoT) scenarios characterized by a high degree of mobility. The security infrastructure is intended as a security resource allocation plan, computed as the solution of an optimization problem that minimizes the risk of having IoT devices not monitored by any resource. We employ the shortfall as a risk measure, a concept mostly used in economics, and adapt it to our scenario. We show how to compute and evaluate an allocation plan, and how such security solutions address the continuous topology changes that affect an IoT environment.
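The shortfall risk measure can be made concrete with a small numerical sketch: given a distribution of "unmonitored device" counts produced by an allocation plan under mobility, the expected shortfall at level beta is the mean of the worst (1 - beta) fraction of outcomes. The scenario data below are invented for illustration and are not from the paper.

```python
# Expected shortfall (conditional value-at-risk style) of an allocation plan,
# measured on the number of unmonitored IoT devices per simulated scenario.
def expected_shortfall(losses, beta=0.9):
    """Mean of the worst (1 - beta) fraction of loss scenarios."""
    losses = sorted(losses, reverse=True)
    k = max(1, int(round((1 - beta) * len(losses))))
    return sum(losses[:k]) / k

# Hypothetical outcomes for two plans: plan_a has a lower average but a heavy
# tail, so its shortfall (tail risk) is much worse than plan_b's.
plan_a = [0, 1, 1, 2, 2, 2, 3, 3, 8, 12]
plan_b = [2, 2, 3, 3, 3, 3, 4, 4, 4, 5]
print(expected_shortfall(plan_a), expected_shortfall(plan_b))   # 12.0 vs 5.0
```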
Fog computing provides a new architecture for implementing the Internet of Things (IoT), in which sensor nodes connect to the cloud through the edge of the network. This structure improves latency and energy consumption compared with using the cloud alone. In this heterogeneous and distributed environment, resource allocation is very important, so scheduling becomes a challenge for increasing productivity and allocating resources appropriately to tasks. Programs that run in this environment should be protected from intruders. We consider three security parameters, authentication, integrity, and confidentiality, to maintain security on fog devices; these parameters incur time and computational overhead. In the proposed approach, we schedule the modules to run on fog devices using heuristic algorithms based on a data mining technique. The objective function includes CPU utilization, bandwidth, and security overhead. We compare the proposed algorithm with several heuristic algorithms. The results show that our proposed algorithm improves average energy consumption by 63.27% and cost by 44.71% relative to the PSO, ACO, and SA algorithms.
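To make the objective function concrete, the sketch below shows a simple greedy placement of modules onto fog devices in which the security services inflate a module's compute demand and the score combines CPU and bandwidth utilization. The overhead figures, weights, and device/module fields are illustrative placeholders, not the paper's data-mining-based scheduler.

```python
# Greedy module placement onto fog devices with security overhead (illustrative).
SECURITY_OVERHEAD = {"authentication": 0.05, "integrity": 0.08, "confidentiality": 0.12}

def place(modules, devices, w_cpu=0.6, w_bw=0.4):
    plan = {}
    for m in modules:
        # Requested security services inflate the module's compute demand.
        overhead = sum(SECURITY_OVERHEAD[s] for s in m["security"])
        demand = m["mips"] * (1 + overhead)
        def score(d):
            cpu = (d["load"] + demand) / d["mips_cap"]   # projected CPU utilization
            bw = m["bw"] / d["bw_cap"]                   # bandwidth utilization
            return w_cpu * cpu + w_bw * bw
        best = min((d for d in devices if d["load"] + demand <= d["mips_cap"]), key=score)
        best["load"] += demand
        plan[m["name"]] = best["name"]
    return plan

devices = [{"name": "fog1", "mips_cap": 1000, "bw_cap": 100, "load": 0},
           {"name": "fog2", "mips_cap": 500, "bw_cap": 200, "load": 0}]
modules = [{"name": "filter", "mips": 300, "bw": 20, "security": ["integrity"]},
           {"name": "detect", "mips": 400, "bw": 50,
            "security": ["authentication", "confidentiality"]}]
print(place(modules, devices))
```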
Cloud computing is an extension of parallel and distributed computing. Cloud technology is becoming ever more widely used, and one of the fundamental issues in the cloud environment is task scheduling. Scheduling in cloud environments is a difficult problem because it is essentially NP-complete; thus, many variants based on approximation techniques, especially those inspired by Swarm Intelligence (SI), have been proposed. This paper proposes a machine learning algorithm to guide the cloud in choosing a scheduling technique, using multi-criteria decision making to optimize performance. The main contribution of our work is to minimize the makespan of a given task set. The new strategy is simulated using the CloudSim toolkit, where the impact of the algorithm is evaluated with the number of VMs varying from 2 to 50 and task sizes between 30 bytes and 2700 bytes. Experimental results show that the proposed algorithm reduces the execution time and makespan by between 7% and 75%, and improves the performance of load balancing scheduling.
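The makespan objective can be illustrated with a short sketch: assign each task to the VM that would finish it earliest and report the latest VM finish time as the makespan. The task sizes and VM speeds below are illustrative, and this simple earliest-finish-time heuristic stands in for, rather than reproduces, the paper's ML-guided selection among SI scheduling techniques.

```python
# Makespan of an earliest-finish-time assignment of tasks to VMs (illustrative).
def schedule(task_lengths, vm_speeds):
    finish = [0.0] * len(vm_speeds)               # current finish time per VM
    assignment = []
    for length in sorted(task_lengths, reverse=True):      # largest tasks first
        # Pick the VM that would complete this task soonest.
        vm = min(range(len(vm_speeds)),
                 key=lambda i: finish[i] + length / vm_speeds[i])
        finish[vm] += length / vm_speeds[vm]
        assignment.append(vm)
    return assignment, max(finish)                 # makespan = latest finish time

tasks = [2700, 1800, 900, 450, 300, 120, 60, 30]   # task sizes (hypothetical)
vms = [1.0, 1.5, 2.0]                              # relative VM speeds (hypothetical)
assignment, makespan = schedule(tasks, vms)
print(assignment, round(makespan, 1))
```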
Mobility and multihoming have become the norm in Internet access, e.g. smartphones with Wi-Fi and LTE, and connected vehicles with LTE and DSRC links that change rapidly. Mobility creates challenges for active session continuity when provider-aggregatable locators are used, while multihoming brings opportunities for improving resiliency and allocative efficiency. This paper proposes a novel migration protocol, in the context of the eXpressive Internet Architecture (XIA), the XIA Migration Protocol. We compare it with Mobile IPv6, with respect to handoff latency and overhead, flow migration support, and defense against spoofing and replay of protocol messages. Handoff latencies of the XIA Migration Protocol and Mobile IPv6 Enhanced Route Optimization are comparable and neither protocol opens up avenues for spoofing or replay attacks. However, XIA requires no mobility anchor point to support client mobility while Mobile IPv6 always depends on a home agent. We show that XIA has significant advantage over IPv6 for multihomed hosts and networks in terms of resiliency, scalability, load balancing and allocative efficiency. IPv6 multihoming solutions either forgo scalability (BGP-based) or sacrifice resiliency (NAT-based), while XIA's fallback-based multihoming provides fault tolerance without a heavy-weight protocol. XIA also allows fine-grained incoming load-balancing and QoS-matching by supporting flow migration. Flow migration is not possible using Mobile IPv6 when a single IPv6 address is associated with multiple flows. From a protocol design and architectural perspective, the key enablers of these benefits are flow-level migration, XIA's DAG-based locators and self-certifying identifiers.
Side channel attacks have been used to extract critical data such as encryption keys and confidential user data in a variety of adversarial settings. In practice, this threat is addressed by adhering to a constant-time programming discipline, which imposes strict constraints on the way in which programs are written. This introduces an additional hurdle for programmers faced with the already difficult task of writing secure code, highlighting the need for solutions that give the same source-level guarantees while supporting more natural programming models. We propose a novel type system for verifying that programs correctly implement constant-resource behavior. Our type system extends recent work on automatic amortized resource analysis (AARA), a set of techniques that automatically derive provable upper bounds on the resource consumption of programs. We devise new techniques that build on the potential method to achieve compositionality, precision, and automation. A strict global requirement that a program always maintains constant resource usage is too restrictive for most practical applications. It is sufficient to require that the program's resource behavior remain constant with respect to an attacker who is only allowed to observe part of the program's state and behavior. To account for this, our type system incorporates information flow tracking into its resource analysis. This allows our system to certify programs that need to violate the constant-time requirement in certain cases, as long as doing so does not leak confidential information to attackers. We formalize this guarantee by defining a new notion of resource-aware noninterference, and prove that our system enforces it. Finally, we show how our type inference algorithm can be used to synthesize a constant-time implementation from one that cannot be verified as secure, effectively repairing insecure programs automatically. We also show how a second novel AARA system that computes lower bounds on resource usage can be used to derive quantitative bounds on the amount of information that a program leaks through its resource use. We implemented each of these systems in Resource Aware ML, and show that it can be applied to verify constant-time behavior in a number of applications including encryption and decryption routines, database queries, and other resource-aware functionality.
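The constant-time discipline that motivates this type system can be illustrated with a standard example: an early-exit comparison leaks, through its running time, how many leading bytes of a secret match, while a constant-resource variant touches every byte regardless of the contents. This is a generic illustration, not code from the paper's Resource Aware ML system.

```python
# Timing side channel illustration (generic; not from the paper).
def insecure_compare(secret: bytes, guess: bytes) -> bool:
    # Early exit: running time depends on how many leading bytes match,
    # which an attacker can measure to recover the secret byte by byte.
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def constant_time_compare(secret: bytes, guess: bytes) -> bool:
    # Accumulate differences over the full length; resource use is constant
    # with respect to the secret's contents (the length is assumed public).
    if len(secret) != len(guess):
        return False
    diff = 0
    for s, g in zip(secret, guess):
        diff |= s ^ g
    return diff == 0

print(constant_time_compare(b"hunter2", b"hunter2"),
      constant_time_compare(b"hunter2", b"Hunter2"))
```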
The A5/1 algorithm is a stream cipher used to encrypt voice data in GSM, and it must be realized with high performance because of real-time requirements. Traditional implementations on FPGA or ASIC cannot achieve a good trade-off among performance, cost, and flexibility. To this end, this paper introduces CGRCA to implement A5/1 and, to optimize performance and resource consumption, proposes a resource-based path seeking (RPS) algorithm to develop an advanced implementation. Experimental results show that the optimal throughput of A5/1 implemented on CGRCA is 162.87 Mbps at a frequency of 162.87 MHz, and the set-up time is merely 87 cycles, which is optimal among similar works.
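For reference, the A5/1 keystream generator itself is small: three LFSRs of lengths 19, 22, and 23 bits are clocked by a majority rule, and the keystream is the XOR of their most significant bits. The sketch below shows only this core clocking loop; key and frame-number loading, the warm-up clocks, and the paper's CGRCA mapping are omitted, and the initial register states are arbitrary placeholders.

```python
# Core of the A5/1 keystream generator: three LFSRs with majority clocking.
SIZES  = (19, 22, 23)
TAPS   = ((13, 16, 17, 18), (20, 21), (7, 20, 21, 22))   # feedback tap positions
CLOCKS = (8, 10, 10)                                       # majority clocking bits

def step(regs):
    """Clock the registers once under the majority rule; return one keystream bit."""
    clock_bits = [(regs[i] >> CLOCKS[i]) & 1 for i in range(3)]
    majority = 1 if sum(clock_bits) >= 2 else 0
    new = list(regs)
    for i in range(3):
        if clock_bits[i] == majority:              # only agreeing registers advance
            fb = 0
            for t in TAPS[i]:
                fb ^= (regs[i] >> t) & 1
            new[i] = ((regs[i] << 1) | fb) & ((1 << SIZES[i]) - 1)
    out = 0
    for i in range(3):
        out ^= (new[i] >> (SIZES[i] - 1)) & 1      # XOR of the three MSBs
    return new, out

def keystream(regs, n):
    bits = []
    for _ in range(n):
        regs, bit = step(regs)
        bits.append(bit)
    return regs, bits

# Arbitrary non-zero register states stand in for a properly loaded key/frame.
_, bits = keystream([0x5A5A5, 0x2AAAAA, 0x3C3C3C], 16)
print(bits)
```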
Verifying that hardware design implementations adhere to specifications is a time-intensive and sometimes intractable problem due to the massive size of the system's state space. Formal methods can be used to prove certain tractable specification properties; however, they are expensive and often require subject matter experts to develop and solve. Nonetheless, hardware verification is a critical process for ensuring that security and safety properties are met, and it encapsulates problems associated with trust and reliability. For complex designs where coverage of the entire state space is unattainable, prioritizing the regions most vulnerable to security or reliability threats allows efficient allocation of valuable verification resources. Stackelberg security games model interactions between a defender, whose goal is to assign resources to protect a set of targets, and an attacker, who aims to inflict maximum damage on the targets after first observing the defender's strategy. In equilibrium, the defender has an optimal security deployment strategy, given the attacker's best response. We apply this Stackelberg security framework to synthesized hardware implementations, using the design's network structure and logic to inform defender valuations and verification costs. The defender's strategy in equilibrium is thus interpreted as a prioritization of the allocation of verification resources in the presence of an adversary. We demonstrate this technique on several open-source synthesized hardware designs.
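A small sketch of the underlying Stackelberg computation follows: the defender commits to coverage probabilities over targets (here found by brute-force search over a discretized simplex), the attacker then best-responds to the observed coverage, and the defender keeps the coverage that maximizes its payoff under that best response. The target names, payoff values, and resource budget are invented for illustration; the paper derives valuations and costs from the synthesized design's structure, and practical solvers use LP/MILP formulations rather than enumeration.

```python
from itertools import product

# Brute-force Stackelberg security game sketch (illustrative payoffs).
# Each target: (defender payoff if covered, if uncovered,
#               attacker payoff if covered, if uncovered).
targets = {
    "alu":     (0, -10, -2, 10),
    "decoder": (0, -6,  -1, 6),
    "uart":    (0, -3,  -1, 3),
}
RESOURCES = 1.0          # total verification effort to distribute
STEP = 0.05              # discretization of coverage probabilities

def defender_value(coverage):
    # Attacker best-responds to the observed coverage (ties broken arbitrarily).
    attacked = max(targets, key=lambda t: coverage[t] * targets[t][2]
                                          + (1 - coverage[t]) * targets[t][3])
    d_cov, d_unc, _, _ = targets[attacked]
    return coverage[attacked] * d_cov + (1 - coverage[attacked]) * d_unc

grid = [i * STEP for i in range(int(RESOURCES / STEP) + 1)]
best_cov, best_val = None, float("-inf")
for combo in product(grid, repeat=len(targets)):
    if sum(combo) <= RESOURCES + 1e-9:
        cov = dict(zip(targets, combo))
        val = defender_value(cov)
        if val > best_val:
            best_cov, best_val = cov, val

print(best_cov, round(best_val, 2))
```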
A mobile ad hoc network (MANET) is one of the most important and distinctive types of wireless network, offering high mobility and scalability, and it is well suited to environments that need on-the-fly setup. Implementing these networks brings many challenges; the most sensitive is making the network energy efficient while also handling security issues. In this paper we discuss the routing scheme with the greatest energy savings, Load Balanced Energy Enhanced Clustered Bee Ad Hoc Routing (LBEE), along with a secured PKI scheme. LBEE, which is inspired by swarm intelligence and follows the bee colony paradigm, has been found to be the most energy-efficient method for MANETs. In this paper, alongside energy efficiency, care has been taken with the security of all nodes in the network. The four-key security scheme has been chosen as the security best suited to the protocol.
Life-cycle management of stateful VNF services is a complicated task, especially when automated resiliency and scaling must be handled in a secure manner without service degradation. We present FlowSNAC, a resilient and scalable VNF service for user authentication and service deployment. FlowSNAC consists of both stateful and stateless components, some of which are SDN-based and others of which are NFVs. We describe how it adapts to changing conditions by automatically updating resource allocations through a series of intermediate steps of traffic steering, resource allocation, and secure state transfer. We conclude by highlighting some of the lessons learned during implementation and their wider consequences for the architecture of SDN/NFV management and orchestration systems.
A Distributed Denial of Service (DDoS) attack is a congestion-based attack that makes both network and host-based resources unavailable to legitimate users by sending flooding attack packets to the victim's resources. The absence of predefined rules for correctly identifying genuine network flows makes DDoS attack detection very difficult. In this paper, a combination of unsupervised data mining techniques is introduced as an intrusion detection system. The entropy concept, applied over windows of incoming packets, is combined with Clustering Using REpresentatives (CURE) as the cluster analysis to detect DDoS attacks in network flows. The data are mainly collected from the DARPA2000, CAIDA2007, and CAIDA2008 datasets. The proposed approach has been evaluated and compared with several existing approaches in terms of accuracy, false alarm rate, detection rate, F-measure, and Phi coefficient. Results indicate the superiority of the proposed approach, with four out of five attack phases detected, more than 99% accuracy, a 96.29% detection rate, a false alarm rate of around 0%, a 97.98% F-measure, and a 97.98% Phi coefficient.
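The entropy-over-windows step can be illustrated briefly: compute the Shannon entropy of packet source addresses within each fixed-size window, since a flood that concentrates traffic on a few sources depresses entropy while heavy address spoofing inflates it, and either shift signals an anomaly. The window size, threshold, and trace below are illustrative, and the CURE clustering stage is not reproduced.

```python
import math
from collections import Counter

# Shannon entropy of source IPs per packet window (illustrative sketch).
def window_entropy(src_ips):
    counts = Counter(src_ips)
    total = len(src_ips)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def flag_windows(packets, window=8, threshold=1.0):
    """Yield (entropy, suspicious?) for each window of source addresses."""
    for i in range(0, len(packets) - window + 1, window):
        h = window_entropy(packets[i:i + window])
        yield round(h, 2), h < threshold   # abnormally low entropy -> possible flood

# First window mixes many sources; the second is dominated by a single source.
trace = ["10.0.0.%d" % (i % 7) for i in range(8)] + ["10.0.0.9"] * 7 + ["10.0.0.3"]
print(list(flag_windows(trace)))
```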
The increasing number of cyber attacks makes the availability of services a major security concern. One common type of cyber threat is the distributed denial of service (DDoS) attack, which aims to prevent legitimate users from accessing services. It is easier for an insider with legitimate access to the system to deceive security controls, resulting in an insider attack. This paper proposes an Early Detection and Isolation Policy (EDIP) to mitigate insider-assisted DDoS attacks. EDIP detects insiders among all legitimate clients present in the system at the proxy level and isolates them from innocent clients by migrating them to an attack proxy. Further, an effective algorithm for detecting and isolating insiders is developed with the aim of maximizing attack isolation while minimizing disruption to benign clients. In addition, the concept of load balancing is used to prevent proxies from becoming overloaded.
Cloud computing, often referred to simply as “the cloud,” is the delivery of on-demand computing resources, everything from applications to data centers, over the Internet. The cloud is used not only for storing data; the stored data can also be shared by multiple users, which puts the integrity of cloud data in doubt. It is not always possible for a user to download all data and verify its integrity, so the proposed system includes a Third Party Auditor (TPA) to verify the integrity of shared data. During auditing, the shared data is kept private from public verifiers, who are able to verify shared data integrity without downloading or retrieving the entire data file. A group signature is used to preserve the identity privacy of group members from the third party auditor. Privacy preservation ensures that the TPA cannot derive the users' data content from the information collected during the auditing process.
The theory of robust control models the controller-disturbance interaction as a game in which the disturbance is nonstrategic. The possibility of a deliberately malicious (strategic) attacker should be considered to increase the robustness of infrastructure systems. This has become especially important since many IT systems supporting critical functionalities are vulnerable to exploits by attackers. While the usefulness of game-theoretic methods for modeling cyber-security is well established in the literature, new game-theoretic models of cyber-physical security are needed to derive useful insights on "optimal" attack plans and defender responses, both in terms of the allocation of resources and the operational strategies of these players. This whitepaper presents some progress and challenges in using game-theoretic models for the security of infrastructure networks. The main insights from the following models are presented: (i) a network security game on flow networks under strategic edge disruptions; (ii) an interdiction problem on distribution networks under node disruptions; (iii) an inspection game to monitor commercial non-technical losses (e.g., energy diversion); and (iv) an interdependent security game of networked control systems under communication failures. These models can be used to analyze attacker-defender interactions in a class of cyber-physical security scenarios.