Bibliography
One of the recent focuses in Cloud Computing networks is Software Defined Clouds (SDC), where Software-Defined Networking (SDN) technology is combined with the traditional Cloud network. SDC aims to create an effective Cloud environment by extending the virtualization concept to all resources. In SDC, the control plane is decoupled from the data plane of a network device and handled by a centralized controller using the OpenFlow Protocol (OFP). Because the centralized controller performs all control functions in the network, it requires strong security, and Cloud Computing already faces many security challenges. Among the most damaging attacks in SDC are Denial-of-Service (DoS) and Distributed DoS (DDoS) attacks. To counter DoS attacks, we propose a distributed Firewall with an Intrusion Prevention System (IPS) for SDC. The proposed distributed security mechanism is investigated for two DoS attacks, ICMP and SYN flooding, under different network scenarios. The simulation results and discussion show that the distributed Firewall with IPS detects and prevents DoS attacks effectively.
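As a rough illustration of the kind of check such a firewall/IPS module might run, the Python sketch below counts TCP SYN packets per source over a sliding window and blocks a source once a threshold is exceeded. The window length, threshold, and `inspect` interface are illustrative assumptions, not the mechanism evaluated in the paper.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 5            # assumed sliding-window length
SYN_THRESHOLD = 200           # assumed max TCP SYNs per source within the window
blocked = set()
recent = defaultdict(deque)   # src_ip -> timestamps of recent SYNs

def inspect(src_ip, is_tcp_syn, now=None):
    """Return 'drop' if src_ip should be blocked, else 'forward'."""
    now = time.time() if now is None else now
    if src_ip in blocked:
        return "drop"
    if is_tcp_syn:
        q = recent[src_ip]
        q.append(now)
        while q and now - q[0] > WINDOW_SECONDS:   # slide the window
            q.popleft()
        if len(q) > SYN_THRESHOLD:
            blocked.add(src_ip)                    # an SDN firewall would push a drop flow rule here
            return "drop"
    return "forward"

# Example: a burst of SYNs from one host trips the rule.
for i in range(250):
    verdict = inspect("10.0.0.9", True, now=100.0 + i * 0.01)
print(verdict)  # 'drop'
```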
Location-Based Services (LBS) have become increasingly important in our daily life. However, localization information sent over the air is vulnerable to various attacks, which raises serious privacy concerns. To overcome this problem, we formulate a multi-objective optimization problem that considers both the query probability and the practical dummy location region. A low-complexity dummy location selection scheme is proposed. We first find several candidate dummy locations with similar query probabilities. Among these candidates, a cloaking-area-based algorithm is then applied to find K - 1 dummy locations that achieve K-anonymity. The overlapping area between two dummy locations is also derived to help determine the total cloaking area. Security analysis verifies the effectiveness of our scheme against passive and active adversaries. Simulation results show that, compared with other methods, the proposed dummy location scheme improves the privacy level and enlarges the cloaking area simultaneously.
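A minimal sketch of the selection idea described above: keep candidates whose query probability is close to the real cell's, then greedily add dummies that spread out the cloaking region (approximated here by maximizing the minimum pairwise distance). The grid, probabilities, and epsilon below are invented for illustration, and the greedy criterion is a simplification of the paper's cloaking-area computation.

```python
import math, random

def select_dummies(real, candidates, query_prob, k, eps=0.02):
    p_real = query_prob[real]
    # candidates with query probability similar to the real location
    pool = [c for c in candidates if c != real and abs(query_prob[c] - p_real) <= eps]
    chosen = [real]
    while len(chosen) < k and pool:
        # greedily pick the candidate farthest (min-distance sense) from those already chosen
        best = max(pool, key=lambda c: min(math.dist(c, s) for s in chosen))
        chosen.append(best)
        pool.remove(best)
    return chosen[1:]                      # the K-1 dummy locations

random.seed(0)
grid = [(x, y) for x in range(10) for y in range(10)]
probs = {cell: random.random() / 50 for cell in grid}
print(select_dummies((4, 4), grid, probs, k=5))
```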
A survey of related work in the specialized field of information security (IS) assurance for the Internet of Things (IoT) allowed us to work out a taxonomy of typical attacks against IoT elements, with special attention to IoT device protection. The key directions for countering these attacks were defined on this basis. Given the modern demand for processing large volumes of IoT IS-related data, the application of the Security Intelligence approach is proposed. The main direction of future research, namely IoT operational resilience, is indicated.
Robust hardware Trojans inserted into outsourced products result in security vulnerabilities. Post-silicon testing is mandatory to detect such malicious inclusions. Logic testing becomes ineffective for larger circuits with sequential Trojans; for such cases, side channel analysis is an effective approach. The major challenge with side channel analysis is the reduction in hardware Trojan detection sensitivity due to process variation, which can lead to false positives and false negatives and is unavoidable during manufacturing. In this paper, a self-referencing method is proposed that measures the leakage power of the circuit in four different time windows, which drives the Trojan toward triggering and also helps identify and eliminate false positives and false negatives caused by process variation.
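The self-referencing intuition can be illustrated as follows: because all four windows are measured on the same die, chip-wide process variation scales them by a similar factor, so the ratios between windows stay roughly constant, whereas Trojan switching activity confined to one window distorts those ratios. The numbers and the tolerance below are invented for illustration only.

```python
def window_ratios(leakage):
    base = leakage[0]
    return [w / base for w in leakage]

def flag_trojan(measured, golden, tolerance=0.05):
    m, g = window_ratios(measured), window_ratios(golden)
    return any(abs(mi - gi) / gi > tolerance for mi, gi in zip(m, g))

golden_chip = [1.00, 1.10, 0.95, 1.05]   # reference window leakages (e.g., from simulation)
clean_chip  = [1.20, 1.33, 1.13, 1.27]   # ~20% global shift: process variation only
trojan_chip = [1.20, 1.33, 1.45, 1.27]   # window 3 inflated by Trojan switching

print(flag_trojan(clean_chip, golden_chip))   # False: ratios preserved
print(flag_trojan(trojan_chip, golden_chip))  # True: one window breaks the pattern
```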
Analytics on big data is maturing and moving towards mass adoption. The emergence of analytics increases the need for innovative tools and methodologies to protect data against privacy violations. Many data anonymization methods have been proposed to provide some degree of privacy protection by applying data suppression and other distortion techniques. However, currently available methods suffer from poor scalability and performance and from a lack of framework standardization, and they are unable to cope with the massive scale of data being processed. Some of these methods were proposed specifically for the MapReduce framework to operate on big data, yet they still rely on conventional data management approaches, so there were no remarkable gains in performance. We introduce a framework that operates in a MapReduce environment to benefit from its advantages, as well as from those of the Hadoop ecosystem. Our framework provides granular user access that can be tuned to different authorization levels. The proposed solution provides fine-grained alteration of data based on the user's authorization level for access to the MapReduce domain for analytics. Using well-developed role-based access control approaches, the framework assigns roles to users and maps them to the relevant data attributes.
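The map-side, role-aware masking described above can be sketched as a per-record function that applies a keep/generalize/suppress action per attribute according to the caller's role. The roles, attributes, and policy table below are illustrative assumptions, not the framework's actual policy model.

```python
import hashlib

POLICY = {   # hypothetical authorization levels -> per-attribute actions
    "analyst": {"name": "suppress", "zip": "generalize", "diagnosis": "keep", "salary": "suppress"},
    "auditor": {"name": "pseudonym", "zip": "keep", "diagnosis": "keep", "salary": "keep"},
}

def mask(value, action):
    if action == "keep":
        return value
    if action == "suppress":
        return "*"
    if action == "generalize":                 # e.g., 5-digit ZIP -> 3-digit prefix
        return value[:3] + "**"
    if action == "pseudonym":
        return hashlib.sha256(value.encode()).hexdigest()[:8]
    raise ValueError(action)

def map_record(record, role):
    """Mapper-style function: emit the record altered to the caller's authorization level."""
    policy = POLICY[role]
    return {attr: mask(str(val), policy.get(attr, "suppress")) for attr, val in record.items()}

row = {"name": "Alice", "zip": "90210", "diagnosis": "flu", "salary": "70000"}
print(map_record(row, "analyst"))
print(map_record(row, "auditor"))
```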
While most organizations continue to invest in traditional network defences, a formidable security challenge has been brewing within their own boundaries. Malicious insiders with privileged access, acting in the guise of a trusted source, have carried out many attacks causing far-reaching damage to financial stability, national security and brand reputation for both public and private sector organizations. The growing exposure and impact of the whistleblower community, together with concerns about job security amid changing organizational dynamics, have further aggravated this situation. The unpredictability of malicious attackers, as well as the complexity of malicious actions, necessitates careful analysis of the network, system and user parameters correlated with the insider threat problem. This creates a high-dimensional, heterogeneous data analysis problem in isolating suspicious users. This research work proposes an insider threat detection framework which utilizes attributed graph clustering techniques and an outlier ranking mechanism for enterprise users. Empirical results confirm the effectiveness of the method, achieving a best area-under-curve value of 0.7648 for the receiver operating characteristic curve.
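A minimal sketch of the ranking stage described above: given a clustering of the attributed user graph, each user is scored by how far its attribute vector deviates from its own cluster's statistics (a simple per-dimension z-score sum stands in here for the paper's outlier ranking mechanism). The feature vectors and cluster assignments are invented for illustration.

```python
import math
from collections import defaultdict

def rank_outliers(features, clusters):
    by_cluster = defaultdict(list)
    for user, cid in clusters.items():
        by_cluster[cid].append(features[user])

    stats = {}
    for cid, rows in by_cluster.items():
        dims = len(rows[0])
        mean = [sum(r[d] for r in rows) / len(rows) for d in range(dims)]
        std = [max(1e-9, math.sqrt(sum((r[d] - mean[d]) ** 2 for r in rows) / len(rows)))
               for d in range(dims)]
        stats[cid] = (mean, std)

    scores = {}
    for user, cid in clusters.items():
        mean, std = stats[cid]
        scores[user] = sum(abs(f - m) / s for f, m, s in zip(features[user], mean, std))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

feats = {"u1": [2, 1], "u2": [3, 1], "u3": [2, 2], "u4": [30, 9]}   # e.g., logons/day, USB events/day
membership = {"u1": 0, "u2": 0, "u3": 0, "u4": 0}
print(rank_outliers(feats, membership))   # u4 ranked first
```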
The Sensor Web is evolving into a complex information space, where large volumes of sensor observation data are often consumed by complex applications. Provenance has become an important issue in the Sensor Web, since it allows applications to answer “what”, “when”, “where”, “who”, “why”, and “how” queries related to observations and consumption processes, which helps determine the usability and reliability of data products. This paper investigates characteristics and requirements of provenance in the Sensor Web and proposes an interoperable approach to building a provenance model for the Sensor Web. Our provenance model extends the W3C PROV Data Model with Sensor Web domain vocabularies. It is developed using Semantic Web technologies and thus allows provenance information of sensor observations to be exposed in the Web of Data using the Linked Data approach. A use case illustrates the applicability of the approach.
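As a minimal sketch of what exposing such provenance as Linked Data can look like, the snippet below uses Python's rdflib to record a sensor observation as a PROV-O entity generated by a sensing activity associated with a sensor agent. The example namespace, URIs, and the choice of rdflib are illustrative assumptions rather than the paper's implementation or its full domain vocabulary.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/sensorweb/")     # hypothetical namespace

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

obs = EX["observation/42"]
act = EX["activity/temperature-sensing-42"]
sensor = EX["sensor/thermometer-7"]

g.add((obs, RDF.type, PROV.Entity))                 # the observation is a provenance entity
g.add((act, RDF.type, PROV.Activity))               # the sensing process that produced it
g.add((sensor, RDF.type, PROV.Agent))               # the sensor responsible for the activity
g.add((obs, PROV.wasGeneratedBy, act))
g.add((act, PROV.wasAssociatedWith, sensor))
g.add((obs, PROV.generatedAtTime,
       Literal("2017-06-01T12:00:00Z", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```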
The use of self-organized wireless networks known as Mobile Ad Hoc Networks (MANETs) has increased, and because these wireless devices can be deployed anywhere without infrastructural support or a base station, securing these networks and preventing intrusions is necessary. This paper describes a method for securing MANETs using a hybrid cryptographic technique that combines the RSA and AES algorithms with SHA-256 hashing. This hybrid cryptographic technique also provides authentication of the data. To check whether any malicious node is present, an Intrusion Detection System (IDS) technique called Enhanced Adaptive Acknowledgement (EAACK) is used, which inspects acknowledgement packets to detect malicious nodes in the system. Packet routing is performed with two protocols, AODV and ZRP, and the results are compared; ZRP provides better routing performance than AODV.
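A sketch of the hybrid construction named above (RSA key transport, AES payload encryption, and a SHA-256 digest) is shown below using the Python `cryptography` package. AES-GCM and OAEP are used here as concrete, modern choices; the paper's exact modes, key sizes, and message format are not specified in the abstract, so treat this as an illustration only.

```python
import os, hashlib
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def hybrid_encrypt(plaintext: bytes, receiver_public):
    session_key = AESGCM.generate_key(bit_length=256)        # fresh AES session key
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)
    wrapped_key = receiver_public.encrypt(                    # RSA-OAEP key transport
        session_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None))
    digest = hashlib.sha256(plaintext).hexdigest()            # SHA-256 integrity value
    return wrapped_key, nonce, ciphertext, digest

def hybrid_decrypt(wrapped_key, nonce, ciphertext, digest, receiver_private):
    session_key = receiver_private.decrypt(
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None))
    plaintext = AESGCM(session_key).decrypt(nonce, ciphertext, None)
    assert hashlib.sha256(plaintext).hexdigest() == digest    # verify the hash
    return plaintext

blob = hybrid_encrypt(b"route update", receiver_key.public_key())
print(hybrid_decrypt(*blob, receiver_key))
```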
With the rapid development of DC transmission technology and High Voltage Direct Current (HVDC) programs, the reliability of AC/DC hybrid power grids is drawing more and more attention. This paper takes both the static and dynamic characteristics of the system into account and proposes a novel AC/DC hybrid system reliability evaluation method that considers transient security constraints, based on the Monte Carlo method and transient stability analysis. The interaction of the AC and DC systems after a fault is considered in the evaluation process. Transient stability analysis is performed first when a fault occurs in the system, with BPA software applied to improve computational accuracy and speed. The new system state is then generated according to the transient analysis results, a minimum load shedding model of the AC/DC hybrid system with HVDC is proposed, and adequacy analysis is applied to the new state. The proposed method evaluates the reliability of AC/DC hybrid grids more comprehensively and reduces the complexity of the problem, as tested on the IEEE-RTS 96 system and an actual large-scale system.
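The overall sampling loop can be sketched as a non-sequential Monte Carlo estimator: sample component availabilities, apply a transient-security screen (the paper delegates this step to BPA), evaluate load shedding, and accumulate reliability indices. The components, outage rates, security rule, and shed amount below are placeholders invented for illustration.

```python
import random

COMPONENTS = {"line_AC_1": 0.02, "line_AC_2": 0.02, "hvdc_pole_1": 0.05, "gen_1": 0.04}  # forced outage rates

def sample_state(rng):
    return {c: (rng.random() >= q) for c, q in COMPONENTS.items()}   # True = in service

def transient_secure(state):
    # Placeholder for the transient stability check (done with BPA in the paper):
    # assume the system is secure if the HVDC pole or both AC lines remain in service.
    return state["hvdc_pole_1"] or (state["line_AC_1"] and state["line_AC_2"])

def load_shed(state):
    # Placeholder for the minimum load shedding model (MW curtailed).
    return 0.0 if transient_secure(state) else 120.0

def estimate_indices(n_samples=100_000, seed=1):
    rng = random.Random(seed)
    failures, shed_total = 0, 0.0
    for _ in range(n_samples):
        shed = load_shed(sample_state(rng))
        if shed > 0:
            failures += 1
            shed_total += shed
    return failures / n_samples, shed_total / n_samples    # loss-of-load probability, mean shed per state

print(estimate_indices())
```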
Nowadays, security is a challenging task in different types of networks, such as mobile networks, Wireless Sensor Networks (WSN) and Radio Frequency Identification Systems (RFIS); to overcome these challenges, signcryption can be used. Signcryption is a public key cryptographic primitive that performs the functions of digital signature and encryption in a single logical step. The main appeal of signcryption schemes is that they are well suited to resource-constrained environments. Several signcryption schemes are based on RSA, Elliptic Curves (EC) and Hyper Elliptic Curves (HEC). This paper contains a critical review of signcryption schemes based on hyper elliptic curves, since they reduce communication and computational costs for constrained devices. It also explores the advantages and disadvantages of different signcryption schemes based on HEC.
Data provenance provides a way for scientists to observe how experimental data originates, conveys process history, and explains influential factors such as experimental rationale and associated environmental factors from system metrics measured at runtime. The US Department of Energy Office of Science Integrated end-to-end Performance Prediction and Diagnosis for Extreme Scientific Workflows (IPPD) project has developed a provenance harvester that is capable of collecting observations from file-based evidence typically produced by distributed applications. To achieve this, file-based evidence is extracted and transformed into an intermediate data format inspired in part by the W3C CSV on the Web recommendations, called the Harvester Provenance Application Interface (HAPI) syntax. This syntax provides a general means to pre-stage provenance into messages that are both human readable and capable of being written to a provenance store, the Provenance Environment (ProvEn). HAPI is being applied to harvest provenance from climate ensemble runs for the Accelerated Climate Modeling for Energy (ACME) project funded under the U.S. Department of Energy's Office of Biological and Environmental Research (BER) Earth System Modeling (ESM) program. ACME informally provides provenance in a native form through configuration files, directory structures, and log files that contain success/failure indicators, code traces, and performance measurements. Because of its generic format, HAPI is also being applied to harvest tabular job management provenance from Belle II DIRAC scheduler relational database tables as well as from other scientific applications that log provenance-related information.
The ubiquity of the Internet and email has provided a mostly insecure communication medium for the consumer. During the last few decades, we have seen the development of several ways to secure email messages. However, these solutions are inflexible and difficult to use for encrypting email messages to protect security and privacy while communicating or collaborating via email. Under the current paradigm, the arduous process of setting up email encryption is non-intuitive for the average user. The complexity of current practices has also led to incorrect interpretations of the architecture by developers, which has resulted in interoperability issues. As a result, the lack of simple and easy-to-use infrastructure means that consumers still send plain-text emails over insecure networks. In this paper, we introduce and describe a novel, holistic model with new techniques for protecting email messages. The architecture of our model is simpler and easier to use than those currently employed. We use a simplified trust model, which relieves users from having to perform many complex steps to achieve email security. Utilizing the new techniques presented in this paper can safeguard users' email from unauthorized access and protect their privacy. In addition, a simplified infrastructure enables developers to understand the architecture more readily, eliminating interoperability issues.
Smart IoT applications require connecting multiple IoT devices and networks with multiple services running in fog and cloud computing platforms. One approach to connecting IoT devices with cloud and fog services is to create a federated virtual network. The main benefit of this approach is that IoT devices can then interact with multiple remote services over an application-specific federated network that carries no traffic from other applications. This federated network spans multiple cloud platforms and IoT networks but can be managed as a single entity. From a security point of view, federated virtual networks can be managed centrally and secured with a coherent global network security policy. This does not mean that the same security policy applies everywhere, but that the different security policies are specified within a single coherent security policy. In this paper, we propose to extend a federated cloud networking security architecture so that it can secure IoT devices and networks. The federated network is extended to the edge of IoT networks by integrating a federation agent in an IoT gateway or network controller (CAN bus, 6LoWPAN, LoRa, ...). This allows communication between the federated cloud network and the IoT network. The security architecture is based on the concepts of network function virtualisation (NFV) and service function chaining (SFC) for composing security services. The IoT network and devices can then be protected by security virtual network functions (VNFs) running at the edge of the IoT network.
In the Internet of Things (IoT), smart devices are connected using various communication protocols, such as Wi-Fi and ZigBee, and some IoT devices have multiple built-in communication modules. If an IoT device equipped with multiple communication protocols is compromised by an attacker via one protocol (e.g., Wi-Fi), it can be exploited as an entry point to the IoT network: another protocol (e.g., ZigBee) of this device can then be used to exploit vulnerabilities of other IoT devices that use the same communication protocol. In order to find potential attacks enabled by such cross-protocol devices, we group IoT devices based on their communication protocols and construct a graphical security model for each group of devices using the same communication protocol. We combine the security models via the cross-protocol devices and compute hidden attack paths traversing different groups of devices. We use two use cases in a smart home scenario to demonstrate our approach and discuss some feasible countermeasures.
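The model-combination step can be sketched with a small graph example: one attack graph per protocol, composed on the shared multi-radio node, after which cross-group attack paths are enumerated. The device names and edges below are invented, and networkx stands in for the paper's graphical security model formalism.

```python
import networkx as nx

wifi = nx.DiGraph()                                  # Wi-Fi group model
wifi.add_edge("internet_attacker", "wifi_router")
wifi.add_edge("wifi_router", "smart_hub")            # smart_hub has Wi-Fi *and* ZigBee

zigbee = nx.DiGraph()                                # ZigBee group model
zigbee.add_edge("smart_hub", "zigbee_lightbulb")
zigbee.add_edge("smart_hub", "zigbee_lock")
zigbee.add_edge("zigbee_lightbulb", "zigbee_lock")

combined = nx.compose(wifi, zigbee)                  # merged via the cross-protocol device

for path in nx.all_simple_paths(combined, "internet_attacker", "zigbee_lock"):
    print(" -> ".join(path))
# internet_attacker -> wifi_router -> smart_hub -> zigbee_lock
# internet_attacker -> wifi_router -> smart_hub -> zigbee_lightbulb -> zigbee_lock
```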
Cloud computing has become a part of people's lives. However, there are many unresolved problems with the security of this technology. According to the assessment of international experts in the field of security, there are risks of cloud collusion arising under uncertain conditions. To mitigate this type of uncertainty, and to minimize the data redundancy of encryption together with the harm caused by cloud collusion, modified threshold Asmuth-Bloom and weighted Mignotte secret sharing schemes are used. We show that if the attackers do have the secret parts but do not know the secret key, they cannot recover the secret. If the attackers do not have the required number of secret parts but do know the secret key, the probability that they obtain the secret is less than 1/2^(l-1), where l is the machine word size in bits. We demonstrate that the proposed scheme ensures security under several types of attacks. We propose four approaches to selecting weights for the secret sharing schemes to optimize system behavior: pessimistic, balanced, and optimistic approaches based on data access speed, and an approach based on the speed-to-price ratio. We use an approximation method to improve the detection, localization and error correction accuracy under uncertainty of the cloud parameters.
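For reference, the CRT-based sharing that Asmuth-Bloom builds on can be illustrated with a toy (2, 3) threshold scheme; the small moduli below are demonstration values only, and the paper's modified threshold and weighted variants add further machinery on top of this basic construction.

```python
import random
from functools import reduce

M0, MODULI = 11, [13, 17, 19]      # pairwise coprime; 13*17 > 11*19 satisfies the (2, 3) bound

def share(secret, k=2):
    assert 0 <= secret < M0
    limit = reduce(lambda a, b: a * b, MODULI[:k])           # product of the k smallest moduli
    y = secret + M0 * random.randrange(0, (limit - secret) // M0)
    return [(m, y % m) for m in MODULI]                      # one (modulus, residue) share per party

def reconstruct(shares):                                     # any k shares suffice
    M = reduce(lambda a, b: a * b, (m for m, _ in shares))
    y = 0
    for m, r in shares:
        Mi = M // m
        y += r * Mi * pow(Mi, -1, m)                         # Chinese Remainder combination
    return (y % M) % M0                                      # strip the random multiple of M0

parts = share(9)
print(reconstruct(parts[:2]), reconstruct(parts[1:]))        # 9 9
```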
With the development of cloud computing and its economic benefits, more and more companies and individuals outsource their data and computation to clouds. Meanwhile, this way of outsourcing resources takes the data out of its owner's control and results in many security issues. Existing secure keyword search methods assume that cloud servers are curious-but-honest or partially honest, which makes them powerless against the deliberately falsified or fabricated results of insider attacks. In this paper, we propose a verifiable single-keyword top-k search scheme against insider attacks that can verify the integrity of search results. Data owners generate verification codes (VCs) for the corresponding files, which embed the ordered sequence information of the relevance scores between files and keywords; the files and corresponding VCs are then outsourced to cloud servers. When a data user performs a keyword search, the cloud servers determine the qualified result files according to the relevance scores between the files and the queried keyword and return them to the data user together with a VC. The data user verifies the integrity of the result files by reconstructing a new VC over the received files and comparing it with the received one. A performance evaluation has been conducted to demonstrate the efficiency and result redundancy of the proposed scheme.
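The integrity-check idea can be sketched as follows: the owner hashes the file identifiers and relevance scores in ranked order for a keyword, and the user recomputes the hash over the returned top-k result to detect dropped, reordered, or substituted files. The real VC construction embeds more structure; this is only the intuition, with invented file identifiers and scores.

```python
import hashlib

def build_vc(keyword, scored_files, k):
    ranked = sorted(scored_files, key=lambda fs: fs[1], reverse=True)[:k]   # order by relevance score
    material = keyword + "|" + "|".join(f"{fid}:{score}" for fid, score in ranked)
    return hashlib.sha256(material.encode()).hexdigest()

owner_index = [("f1", 0.92), ("f2", 0.85), ("f3", 0.40), ("f4", 0.10)]
vc_stored = build_vc("invoice", owner_index, k=2)            # outsourced alongside the files

returned = [("f1", 0.92), ("f2", 0.85)]                      # honest cloud returns the true top-2
print(build_vc("invoice", returned, k=2) == vc_stored)       # True
tampered = [("f1", 0.92), ("f3", 0.40)]                      # insider substitutes a result file
print(build_vc("invoice", tampered, k=2) == vc_stored)       # False
```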
State-of-the-art Android malware often encrypts or encodes malicious code snippets to evade malware detection. In this paper, such undetectable codes are called Mysterious Codes. To make such codes detectable, we design a system called Droidrevealer to automatically identify Mysterious Codes and then decode or decrypt them. A prototype of Droidrevealer is implemented and evaluated with 5,600 malware samples. The results show that 257 samples contain Mysterious Codes and that 11,367 items are exposed. Furthermore, several sensitive behaviors hidden in the Mysterious Codes are disclosed by Droidrevealer.
With the growth of cyber threats on the Internet, the number of malware samples, especially unknown malware, is also increasing dramatically. Since analysts cannot analyze all malware, it is very important to identify the new malware that they should analyze. To cope with this issue, existing approaches have focused on malware classification using the static or dynamic analysis results of malware. However, static and dynamic analyses are themselves costly, and it is not easy to build isolated, secure and Internet-like analysis environments such as sandboxes. In this paper, we propose a lightweight malware classification method based on the detection results of anti-virus software. Since the proposed method can reduce the volume of malware that must be analyzed by analysts, it can be used as a preprocessing step for in-depth malware analysis. The experimental results showed that the proposed method classified 1,000 malware samples into 187 unique groups, which means that 81% of the original malware samples do not need to be analyzed by analysts.
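A minimal sketch of this kind of label-based grouping: normalize each anti-virus detection label (here by stripping a variant suffix) and bucket samples by the remaining family string, so that only one representative per bucket needs manual analysis. The label format, regex, and sample identifiers below are invented; real AV labels differ by vendor and typically need vendor-specific normalization.

```python
import re
from collections import defaultdict

def family_key(label):
    label = label.lower()
    return re.sub(r"\.(gen|[a-z]{1,3}\d*)$", "", label)      # drop trailing variant token, e.g. ".abc", ".gen"

def group_by_label(detections):                              # {sample_id: av_label}
    groups = defaultdict(list)
    for sample, label in detections.items():
        groups[family_key(label)].append(sample)
    return groups

scans = {
    "sample-001": "Trojan.Win32.Agent.abc",
    "sample-002": "Trojan.Win32.Agent.xyz",
    "sample-003": "Worm.Win32.AutoRun.gen",
    "sample-004": "Trojan.Win32.Agent.abc",
}
for family, samples in group_by_label(scans).items():
    print(family, "->", len(samples), "samples, analyze 1 representative")
```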
This paper proposes the implementation of a chaotic crypto-system with a new key generator that uses chaos in a digital filter for data encryption and decryption. The chaos in the second-order digital filter is produced by coefficients that are initialized in the key generator, which then produces new coefficients. The private key system uses the initial coefficient values together with a dynamic input, a 16-character password, to generate the coefficients for the crypto-system. In addition, we specifically aim to provide a solution for data security in lightweight cryptography based on external and internal keys, with appropriate key sensitivity and high performance. The chaos in the digital filter is the main component of the system. The experimental results illustrate that the proposed data encryption scheme with the new key generator is highly key-sensitive, with a key-test accuracy of 99%, and makes data more secure with high performance.
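A sketch of keystream generation from a chaotic second-order digital filter with a two's-complement-style overflow nonlinearity (a classic source of chaos in digital filters) is shown below. The coefficient derivation from the password, the recursion, and the byte extraction are illustrative assumptions, not the paper's exact key generator, and this toy stream cipher is not claimed to be secure.

```python
import hashlib

def overflow(v):                        # wrap into [-1, 1), mimicking two's-complement overflow
    return ((v + 1.0) % 2.0) - 1.0

def coefficients_from_password(password: str):
    h = hashlib.sha256(password.encode()).digest()
    a = 1.0 + h[0] / 255.0              # a in [1, 2): outside the filter's stable region
    x0, x1 = h[1] / 255.0 - 0.5, h[2] / 255.0 - 0.5
    return a, x0, x1

def keystream(password: str, n_bytes: int):
    a, x_prev, x = coefficients_from_password(password)
    out = bytearray()
    for _ in range(n_bytes):
        x_prev, x = x, overflow(a * x - x_prev)     # second-order recursion with overflow nonlinearity
        out.append(int((x + 1.0) * 127.999))        # quantize the state to one byte
    return bytes(out)

data = b"sensor reading 42"
ks = keystream("16-char-password", len(data))
cipher = bytes(d ^ k for d, k in zip(data, ks))
plain = bytes(c ^ k for c, k in zip(cipher, keystream("16-char-password", len(cipher))))
print(plain)                                        # b'sensor reading 42'
```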
In this paper, a random number generation method based on piecewise linear one-dimensional (PL1D) discrete-time chaotic maps is proposed for applications in cryptography and steganography. Appropriate parameters are determined by examining the distribution of the underlying chaotic signal, and the random number generator (RNG) is numerically verified with the four fundamental statistical tests of FIPS 140-2. The proposed design is practically realized on field programmable analog and digital arrays (FPAA-FPGA). Finally, it is experimentally verified that the presented RNG passes the NIST 800-22 randomness tests without post-processing.
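A PL1D map of the kind named above can be illustrated with a software-only skew tent map that emits one bit per iteration by thresholding the state. The parameter, seed, burn-in, and bit-extraction rule are assumptions for illustration; the paper's FPAA-FPGA realization and its FIPS 140-2 / NIST 800-22 validation are not reproduced by this sketch.

```python
def skew_tent(x, p=0.4999):
    # piecewise linear 1-D chaotic map on [0, 1]; p slightly below 0.5 avoids the
    # floating-point degeneracy of the symmetric tent map
    return x / p if x < p else (1.0 - x) / (1.0 - p)

def random_bits(seed, n, p=0.4999, burn_in=1000):
    x = seed
    for _ in range(burn_in):                 # discard the transient
        x = skew_tent(x, p)
    bits = []
    for _ in range(n):
        x = skew_tent(x, p)
        bits.append(1 if x >= 0.5 else 0)    # one output bit per iteration
    return bits

stream = random_bits(seed=0.123456789, n=20000)
print(sum(stream) / len(stream))             # rough monobit sanity check, should be near 0.5
```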