Biblio
This paper presents an overview of the research project “High-Performance Hybrid Simulation/Measurement-Based Tools for Proactive Operator Decision-Support”, performed under the auspices of the U.S. Department of Energy grant DE-OE0000628. The objective of this project is to develop software tools that provide enhanced real-time situational awareness to support the decision making and system control actions of transmission operators. The integrated tool will combine high-performance dynamic simulation with synchrophasor measurement data to assess system dynamic performance and operational security risk in real time. The project includes: (i) the development of high-performance dynamic simulation software; (ii) the development of new computationally efficient measurement-based tools to estimate the operating margins of a power system in real time using measurement data from synchrophasors and SCADA; (iii) the development of a hybrid framework integrating measurement-based and simulation-based approaches; and (iv) the use of cutting-edge visualization technology to display various system quantities and to visually process the results of the hybrid measurement-based/simulation-based security-assessment tool. Parallelization and high-performance computing are utilized to enable ultrafast transient stability analysis that can be used in a real-time environment to quickly perform “what-if” simulations involving system dynamics phenomena. EPRI's Extended Transient Midterm Simulation Program (ETMSP) is modified and enhanced for this work. Contingency analysis is scaled to large numbers of contingencies using MPI-based parallelization. Simulations of thousands of contingencies on a high-performance computing machine are performed, and results show that parallelization over contingencies with MPI provides good scalability and computational gains. Different ways to reduce the I/O bottleneck have also been explored. Thread-parallelization of the sparse linear solver is also explored through the use of the SuperLU_MT library. Performance profiling of the implicit method shows that the majority of CPU time is spent on the integration steps. Hence, in order to further improve ETMSP performance, a variable time-step control scheme for the original trapezoidal integration method has been developed and implemented, and the Adams-Bashforth-Moulton predictor-corrector method was introduced and adapted for ETMSP. Test results show superior performance with this method.
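The abstract gives no implementation details for the Adams-Bashforth-Moulton scheme; the following is a minimal Python sketch of a two-step predictor-corrector of that family, applied to a simple test equation rather than to actual power-system dynamics.

```python
import numpy as np

def abm2(f, t0, y0, h, n_steps):
    """Two-step Adams-Bashforth predictor with an Adams-Moulton
    (trapezoidal) corrector for dy/dt = f(t, y)."""
    t = t0
    y = np.asarray(y0, dtype=float)
    f_prev = f(t, y)
    y = y + h * f_prev          # bootstrap the multistep scheme with one Euler step
    t = t + h
    for _ in range(n_steps - 1):
        f_curr = f(t, y)
        # Predictor: explicit two-step Adams-Bashforth.
        y_pred = y + h / 2.0 * (3.0 * f_curr - f_prev)
        # Corrector: implicit trapezoidal rule evaluated at the predicted state.
        y = y + h / 2.0 * (f_curr + f(t + h, y_pred))
        t = t + h
        f_prev = f_curr
    return t, y

# Example: dy/dt = -2y, y(0) = 1; exact solution is exp(-2t).
t_end, y_end = abm2(lambda t, y: -2.0 * y, 0.0, [1.0], 0.01, 100)
print(t_end, y_end)   # -> 1.0, ~exp(-2) = 0.1353
```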
Cyber intrusions into the substations of a power grid are a source of vulnerability, since most substations are unmanned and have limited physical security protection. In the worst case, simultaneous intrusions into multiple substations can lead to severe cascading events, causing catastrophic power outages. In this paper, an integrated Anomaly Detection System (ADS) is proposed which combines host- and network-based anomaly detection for individual substations with simultaneous anomaly detection across multiple substations. Potential scenarios of simultaneous intrusions into substations have been simulated using a substation automation testbed. The host-based anomaly detection considers temporal anomalies in the substation facilities, e.g., user interfaces, Intelligent Electronic Devices (IEDs), and circuit breakers. Malicious behaviors of substation automation based on multicast messages, e.g., Generic Object Oriented Substation Event (GOOSE) and Sampled Measured Value (SMV), are incorporated in the proposed network-based anomaly detection. The proposed simultaneous intrusion detection method is able to identify the same type of attack at multiple substations and determine its locations. The result is a new integrated tool for the detection and mitigation of cyber intrusions at a single substation or multiple substations of a power grid.
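The abstract does not specify the detection rules themselves; as a hedged illustration only, the sketch below flags a sustained per-second burst of GOOSE frames from a single device, one simple kind of temporal anomaly a network-based detector might monitor. The threshold and the (timestamp, device) event format are assumptions.

```python
from collections import defaultdict

# Hypothetical rule: GOOSE retransmission rates are normally bounded, so a
# sustained per-second burst from one IED is flagged as anomalous.
RATE_LIMIT_PER_SEC = 50

def detect_goose_bursts(events):
    """events: iterable of (timestamp_sec, device_id) for observed GOOSE frames."""
    counts = defaultdict(int)
    alarms = []
    for ts, dev in events:
        bucket = (dev, int(ts))                        # per-device, per-second counter
        counts[bucket] += 1
        if counts[bucket] == RATE_LIMIT_PER_SEC + 1:   # fire once per bucket
            alarms.append(bucket)
    return alarms

# 200 frames within one second from IED-3 trips the detector.
print(detect_goose_bursts((0.001 * i, "IED-3") for i in range(200)))
```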
We propose that, to address the growing problems with complexity and data volume in HPC security, we need to refactor how we look at data by creating tools that not only select data, but also analyze and represent it in a manner well suited to intuitive analysis. We propose a set of rules describing what this means, and provide a number of production-quality tools that represent our current best effort at implementing these ideas.
Video surveillance, widely installed in public areas, poses a significant threat to privacy. This paper proposes a new privacy-preserving method based on the Generalized Random-Grid based Visual Cryptography Scheme (GRG-based VCS). We first separate the foreground from the background for each video frame; the foreground pixels contain the most important information that needs to be protected. Each foreground area is encrypted into two shares using GRG-based VCS. One share is kept as the foreground, and the other is embedded into another, randomly selected frame. The content of the foreground can only be recovered when the two shares are brought together. Performance evaluation on several surveillance scenarios demonstrates that the proposed method can effectively protect sensitive privacy information in surveillance videos.
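The paper's generalized scheme is not spelled out in the abstract; the sketch below shows only the basic Kafri-Keren random-grid construction that random-grid VCS builds on: one share is pure noise, and the second equals or complements it depending on whether the secret pixel is white or black, so stacking the shares (bitwise OR) reveals the secret.

```python
import numpy as np

def rg_vcs_encrypt(secret):
    """secret: 2-D array of 0 (white) / 1 (black) pixels.
    Returns two random-grid shares, each individually pure noise."""
    rng = np.random.default_rng()
    share1 = rng.integers(0, 2, size=secret.shape)
    # White pixel -> identical bits; black pixel -> complementary bits.
    share2 = np.where(secret == 0, share1, 1 - share1)
    return share1, share2

def rg_vcs_stack(share1, share2):
    return share1 | share2   # black wins when stacking transparencies

secret = np.array([[0, 1],
                   [1, 0]])
s1, s2 = rg_vcs_encrypt(secret)
recovered = rg_vcs_stack(s1, s2)
# Black secret pixels come out fully black; white ones are 50% noise,
# which is what makes the stacked result visually recognizable.
print(recovered)
```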
Visual cryptography is a way to encrypt a secret image into several meaningless share images; note that no information can be obtained unless all of the shares are collected. By stacking the share images, the secret image can be retrieved. Because the share images are meaningless to their owner, they are difficult to manage. Tagged visual cryptography is a technique for printing a pattern onto the meaningless share images, after which users can easily manage their own shares according to the printed pattern. In addition, access control is another popular topic, allowing a user or a group to see only their own authorizations. In this paper, a self-authentication mechanism with lossless construction ability for an image secret sharing scheme is proposed. The experiments provide positive data showing the feasibility of the proposed scheme.
Address shuffling is a type of moving target defense that prevents an attacker from reliably contacting a system by periodically remapping network addresses. Although limited testing has demonstrated it to be effective, little research has been conducted to examine the theoretical limits of address shuffling. As a result, it is difficult to understand how effective shuffling is and under what circumstances it is a viable moving target defense. This paper introduces probabilistic models that provide insight into the performance of address shuffling. These models quantify the probability of attacker success in terms of network size, quantity of addresses scanned, quantity of vulnerable systems, and the frequency of shuffling. Theoretical analysis shows that shuffling is an acceptable defense if there is a small population of vulnerable systems within a large network address space; however, shuffling has a cost for legitimate users. These results are also shown empirically using simulation and actual traffic traces.
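The abstract's parameters suggest a hypergeometric-style formulation; the following is one plausible model, not necessarily the paper's: an attacker scanning k of N addresses per shuffle interval misses all v vulnerable systems with probability C(N-v,k)/C(N,k), and if each shuffle re-randomizes the mapping, the success probability over m intervals is 1 - (C(N-v,k)/C(N,k))^m.

```python
from math import comb

def p_attacker_success(N, v, k, m):
    """Probability that an attacker scanning k of N addresses per shuffle
    interval hits at least one of v vulnerable systems within m intervals,
    assuming each shuffle independently re-randomizes the address mapping."""
    p_miss_one_interval = comb(N - v, k) / comb(N, k)
    return 1.0 - p_miss_one_interval ** m

# Example: /16 address space, 10 vulnerable hosts, 1000 probes per interval,
# 10 shuffle intervals -> roughly a 79% chance of at least one hit.
print(p_attacker_success(N=2**16, v=10, k=1000, m=10))
```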
While many theoretical and simulation works have highlighted the potential gains of cognitive radio, several technical issues still need to be evaluated from an experimental point of view. Deploying complex heterogeneous system scenarios is tedious, time-consuming, and hardly reproducible. To address this problem, we have developed a new experimental facility, called CorteXlab, that allows complex multi-node cognitive radio scenarios to be easily deployed and tested by anyone in the world. Our objective is not to design new software defined radio (SDR) nodes, but rather to provide comprehensive access to a large set of high-performance SDR nodes. The CorteXlab facility offers a 167 m² electromagnetically (EM) shielded room and integrates a set of 24 universal software radio peripherals (USRPs) from National Instruments, 18 PicoSDR nodes from Nutaq, and 42 IoT-Lab wireless sensor nodes from Hikob. CorteXlab is built upon the foundations of the SensLAB testbed and is based on the free and open-source toolkit GNU Radio. Automation of scenario deployment, experiment start and stop, and result collection is performed by an experiment controller called Minus. CorteXlab is in its final stages of development and is already capable of running test scenarios. In this contribution, we show that CorteXlab is able to easily cope with the usual issues faced by other testbeds, providing a reproducible experiment environment for CR experimentation.
Monitoring is an important issue in cloud environments because it assures that acquired cloud slices meet the user's expectations. However, these environments are multi-tenant and dynamic, requiring automation techniques to offload cloud administrators. In a previous work, we proposed FlexACMS: a framework to automate monitoring configuration related to cloud slices using multiple monitoring solutions. In this work, we enhanced FlexACMS to allow dynamic and automatic attribution of monitoring configuration tasks to servers without administrator intervention, which was not available in the previous version. FlexACMS also considers the monitoring server load when attributing configuration tasks, which allows load balancing between monitoring servers. The evaluation showed that the enhancements reduced FlexACMS response time by up to 60% in comparison to the previous version. The scalability evaluation of the enhanced version demonstrated the feasibility of our approach in large-scale cloud environments.
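The abstract does not detail the load-balancing policy; as a minimal sketch under that caveat, a greedy least-loaded assignment captures the idea of attributing configuration tasks while considering monitoring server load. The server and task fields ("load", "cost") are assumptions for illustration, not FlexACMS's actual schema.

```python
def assign_task(task, servers):
    """Attribute a monitoring configuration task to the least-loaded server."""
    target = min(servers, key=lambda s: s["load"])
    target["load"] += task["cost"]       # account for the new task's weight
    target["tasks"].append(task)
    return target["name"]

servers = [{"name": "mon1", "load": 3.0, "tasks": []},
           {"name": "mon2", "load": 1.0, "tasks": []}]
print(assign_task({"slice": "tenant-42", "cost": 2.0}, servers))  # -> mon2
```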
The emergence of new technologies, along with the popularization of mobile devices and wireless communication systems, imposes a variety of requirements that the current Internet is not able to meet adequately. In this scenario, the innovative information-centric Entity Title Architecture (ETArch), a Future Internet (FI) clean-slate approach, was designed to efficiently cope with the increasing demand for beyond-IP networking services. Nevertheless, despite all its capabilities, ETArch was not designed with reliable networking functions, which limits its operability in mobile multimedia networking and will seriously restrict its scope in Future Internet scenarios. Therefore, our work extends ETArch mobility control with advanced quality-oriented mobility functions, deploying mobility prediction, Point of Attachment (PoA) decision, and handover setup that meet both the session quality requirements of active session flows and the current wireless quality conditions of neighbouring PoA candidates. The effectiveness of the proposed additions was confirmed through a preliminary evaluation carried out in MATLAB, in which we considered distinct application scenarios and showed that our additions outperform the most relevant alternative solutions in terms of performance and quality of service.
For the efficient deployment of sensor nodes required in many logistic applications, it is necessary to build security mechanisms for secure wireless communication. End-to-end security plays a crucial role for communication in these networks, providing confidentiality, authentication, and, above all, prevention of many high-level attacks. In this paper, we propose a lightweight key exchange protocol, WSKE (Wireless Sensor Key Exchange), for IP-based wireless sensor networks. This protocol proposes techniques that adapt the IKEv2 (Internet Key Exchange version 2) mechanisms of IPsec to 6LoWPAN networks. In order to check its security properties, we have used a formal verification tool called AVISPA.
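WSKE's message exchange is not given in the abstract; the sketch below illustrates only the Diffie-Hellman shared-secret and session-key derivation step that IKEv2-style protocols rely on, using the Python cryptography package. The curve choice and the HKDF info label are assumptions.

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each party generates an ephemeral key pair (sensor node and gateway here).
sensor_priv = ec.generate_private_key(ec.SECP256R1())
gateway_priv = ec.generate_private_key(ec.SECP256R1())

# ECDH: each side combines its private key with the peer's public key;
# both arrive at the same shared secret.
shared = sensor_priv.exchange(ec.ECDH(), gateway_priv.public_key())
assert shared == gateway_priv.exchange(ec.ECDH(), sensor_priv.public_key())

# Derive a symmetric session key from the raw shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"wske-demo").derive(shared)
```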
The emergence of new network applications, such as network intrusion detection systems and packet-level accounting, requires packet classification to report all matched rules instead of only the best matched rule. Although several schemes have been proposed recently to address the multimatch packet classification problem, most of them require either huge memory or expensive ternary content addressable memory (TCAM) to store the intermediate data structure, or they suffer from steep performance degradation under certain types of classifiers. In this paper, we decompose the operation of multimatch packet classification from a complicated multidimensional search into several single-dimensional searches, and present an asynchronous pipeline architecture based on a signature tree structure to combine the intermediate results returned from the single-dimensional searches. By spreading edges of the signature tree across multiple hash tables at different stages, the pipeline can achieve high throughput via interstage parallel access to hash tables. To further exploit intrastage parallelism, two edge-grouping algorithms are designed to evenly divide the edges associated with each stage into multiple work-conserving hash tables. To avoid collisions in hash table lookup, a hybrid perfect hash table construction scheme is proposed. Extensive simulation using realistic classifiers and traffic traces shows that the proposed pipeline architecture outperforms the HyperCuts and B2PC schemes in classification speed by at least one order of magnitude, while having a similar storage requirement. In particular, with different types of classifiers of 4K rules, the proposed pipeline architecture is able to achieve a throughput between 26.8 and 93.1 Gb/s using perfect hash tables.
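As a toy illustration of the decomposition idea only (not the paper's signature-tree pipeline), each single-dimensional search below returns the set of rule IDs whose range covers the packet's field value, and the multimatch result is the intersection across dimensions. The rules and fields are invented for the example.

```python
# Hypothetical 2-field classifier: each rule constrains a source-address
# range and a destination-port range.
rules = {
    1: {"src": range(0, 128),   "dport": range(80, 81)},
    2: {"src": range(0, 256),   "dport": range(0, 1024)},
    3: {"src": range(128, 256), "dport": range(443, 444)},
}

def dimension_search(field, value):
    """Single-dimensional search: rule IDs whose range covers the value."""
    return {rid for rid, r in rules.items() if value in r[field]}

def multimatch(packet):
    """Combine per-dimension partial results; report ALL matched rules."""
    matched = dimension_search("src", packet["src"])
    matched &= dimension_search("dport", packet["dport"])
    return sorted(matched)

print(multimatch({"src": 100, "dport": 80}))   # -> [1, 2]
```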
Due to deep automation, the configuration of many Cloud infrastructures is static and homogeneous, which, while easing administration, significantly decreases a potential attacker's uncertainty on a deployed Cloud-based service and hence increases the chance of the service being compromised. Moving-target defense (MTD) is a promising solution to the configuration staticity and homogeneity problem. This paper presents our findings on whether and to what extent MTD is effective in protecting a Cloud-based service with heterogeneous and dynamic attack surfaces - these attributes, which match the reality of current Cloud infrastructures, have not been investigated together in previous works on MTD in general network settings. We 1) formulate a Cloud-based service security model that incorporates Cloud-specific features such as VM migration/snapshotting and the diversity/compatibility of migration, 2) consider the accumulative effect of the attacker's intelligence on the target service's attack surface, 3) model the heterogeneity and dynamics of the service's attack surfaces, as defined by the (dynamic) probability of the service being compromised, as an S-shaped generalized logistic function, and 4) propose a probabilistic MTD service deployment strategy that exploits the dynamics and heterogeneity of attack surfaces for protecting the service against attackers. Through simulation, we identify the conditions and extent of the proposed MTD strategy's effectiveness in protecting Cloud-based services. Namely, 1) MTD is more effective when the service deployment is dense in the replacement pool and/or when the attack is strong, and 2) attack-surface heterogeneity-and-dynamics awareness helps in improving MTD's effectiveness.
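The paper's calibration of the S-shaped attack-surface function is not given in the abstract; the sketch below only shows the standard generalized logistic (Richards) form such a compromise probability can take, with accumulated attacker intelligence as the independent variable. All parameter values are illustrative.

```python
import math

def compromise_probability(i, A=0.0, K=1.0, B=1.0, Q=1.0, M=5.0, nu=1.0):
    """S-shaped generalized logistic function mapping accumulated attacker
    intelligence i to the probability that the service is compromised.
    A/K are the lower/upper asymptotes, B the growth rate, M the
    inflection point; names follow the standard Richards curve."""
    return A + (K - A) / (1.0 + Q * math.exp(-B * (i - M))) ** (1.0 / nu)

# Little accumulated intelligence -> low probability; past the inflection
# point M the probability rises steeply toward K.
print(compromise_probability(1.0))   # ~0.018
print(compromise_probability(9.0))   # ~0.982
```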
In this paper, we propose the IBE-RAOLSR and ECDSA-RAOLSR protocols for WMNs (Wireless Mesh Networks), which contribute to the security of routing protocols. We implement the IBE (Identity Based Encryption) and ECDSA (Elliptic Curve Digital Signature Algorithm) methods to secure messages in RAOLSR (Radio Aware Optimized Link State Routing), namely TC (Topology Control) and Hello messages, and then compare the ECDSA-based RAOLSR with the IBE-based RAOLSR protocol. This study shows the great benefit of the IBE technique in securing the RAOLSR protocol for WMNs. Through extensive ns-3 (Network Simulator-3) simulations, results show that IBE-RAOLSR outperforms ECDSA-RAOLSR in terms of overhead and delay, and that the use of the IBE-based RAOLSR provides a greater level of security with light overhead.
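As a hedged sketch of the ECDSA side only (IBE requires a pairing library and is omitted here), the snippet below signs and verifies a mock TC message with the Python cryptography package; the message format is invented for illustration.

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# The originating router signs its routing control message.
priv = ec.generate_private_key(ec.SECP256R1())
tc_message = b"RAOLSR TC: originator=10.0.0.1 ansn=42"
signature = priv.sign(tc_message, ec.ECDSA(hashes.SHA256()))

# A receiving router verifies before updating its topology set.
try:
    priv.public_key().verify(signature, tc_message,
                             ec.ECDSA(hashes.SHA256()))
    print("TC message accepted")
except InvalidSignature:
    print("TC message rejected")
```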
The future Internet has been a hot topic during the past decade, and many approaches towards this future Internet, ranging from incremental evolution to complete clean-slate designs, have been proposed. One of these propositions, LISP, advocates the separation of the identifier and locator roles of IP addresses to reduce BGP churn and BGP table size. Up to now, however, most studies concerning LISP have been theoretical and, in fact, little is known about the performance of the actual LISP deployment. In this paper, we fill this gap through measurement campaigns carried out on the LISP Beta Network. More precisely, we evaluate the performance of the two key components of the infrastructure: the control plane (i.e., the mapping system) and the interworking mechanism (i.e., communication between LISP and non-LISP sites). Our measurements highlight that the performance offered by the LISP interworking infrastructure is strongly dependent on BGP routing policies. If we exclude misconfigured nodes, the mapping system typically provides reliable performance and relatively low median mapping resolution delays. Although the bias is not very pronounced, control plane performance favors US sites, as a result of their larger LISP user base but also because the European infrastructure appears to be less reliable.
We propose a distributed continuous-time algorithm to solve a network optimization problem where the global cost function is a strictly convex function composed of the sum of the local cost functions of the agents. We establish that our algorithm, when implemented over strongly connected and weight-balanced directed graph topologies, converges exponentially fast when the local cost functions are strongly convex and their gradients are globally Lipschitz. We also characterize the privacy preservation properties of our algorithm and extend the convergence guarantees to the case of time-varying, strongly connected, weight-balanced digraphs. When the network topology is a connected undirected graph, we show that exponential convergence is still preserved if the gradients of the strongly convex local cost functions are locally Lipschitz, while it is asymptotic if the local cost functions are convex. We also study discrete-time communication implementations. Specifically, we provide an upper bound on the stepsize of a synchronous periodic communication scheme that guarantees convergence over connected undirected graph topologies and, building on this result, design a centralized event-triggered implementation that is free of Zeno behavior. Simulations illustrate our results.
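A minimal Euler discretization in the spirit of the described continuous-time dynamics (the gains, step size, and quadratic local costs are illustrative, not the authors' exact design): each agent descends its local gradient while a consensus term and an integral correction drive the network to the global minimizer.

```python
import numpy as np

# Dynamics sketched here (simultaneous Euler updates):
#   xdot = -grad_f(x) - beta * L @ x - v,    vdot = beta * L @ x
# Local costs f_i(x) = 0.5 * (x - c_i)^2, so the global minimizer is mean(c).
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)   # weight-balanced directed cycle
L = np.diag(A.sum(axis=1)) - A           # graph Laplacian
c = np.array([1.0, 4.0, 10.0])
x, v = np.zeros(3), np.zeros(3)
h, beta = 0.01, 1.0                      # step size and consensus gain

for _ in range(20000):
    lap = L @ x
    x = x + h * (-(x - c) - beta * lap - v)
    v = v + h * beta * lap               # integral term removes steady-state error
print(x)   # each agent's estimate approaches mean(c) = 5.0
```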
This paper proposes a security architecture for an IoT transparent middleware. Focused on bringing real-life objects to the virtual realm, the proposed architecture is deployable and comprises protection measures based on existing security technologies such as AES, TLS, and OAuth. In this way, privacy, authenticity, integrity, and confidentiality of data-exchange services are integrated to provide security for the generated smart objects and for the users and services involved, in a reliable and deployable manner.
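As an illustration of the AES-based data-protection layer only (key distribution, TLS, and the OAuth flow are out of scope here), the sketch below authenticates and encrypts a sensor reading with AES-GCM via the Python cryptography package; the payload and associated data are invented.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)                 # 96-bit nonce; never reuse with a key
reading = b'{"sensor": "temp-01", "value": 22.5}'

# AES-GCM provides confidentiality plus an integrity/authenticity tag;
# the associated data is authenticated but not encrypted.
ct = aesgcm.encrypt(nonce, reading, b"device:temp-01")
pt = aesgcm.decrypt(nonce, ct, b"device:temp-01")   # raises on tampering
assert pt == reading
```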
Documents such as the Geneva Conventions (1949) and the Hague Conventions (1899 and 1907), which clearly outlined the rules of engagement for warfare, find themselves challenged by the presence of a new arena: cyber. Considering the potential nature of these offenses, operations taking place in the cyber realm cannot simply be generalized as “cyber-warfare,” as they may also be acts of cyber-espionage, cyber-terrorism, cyber-sabotage, etc. Cyber-attacks, such as those on Estonia in 2007, have begun to test the limits of NATO's Article 5 and the UN Charter's Article 2(4) against the use of force. What defines “force” as it relates to cyber, and what kind of response is merited in the case of uncertainty regarding attribution? In 2009, NATO's Cooperative Cyber Defence Centre of Excellence commissioned a group of experts to publish a study on the application of international law to cyber-warfare. This document, the Tallinn Manual, was published in 2013 as a non-binding exercise to stimulate discussion on the codification of international law on the subject. After analysis, this paper concludes that the Tallinn Manual classifies the 2010 Stuxnet attack on Iran's nuclear program as an illegal act of force. The purpose of this paper is the following: (1) to analyze the historical and technical background of cyber-warfare, (2) to evaluate the Tallinn Manual as it relates to the justification of cyber-warfare, and (3) to examine the applicability of the Tallinn Manual in a case study of a historical example of a cyber-attack.
The trend towards Cloud computing infrastructure has increased the need for new methods that allow data owners to share their data with others securely, taking into account the needs of multiple stakeholders. The data owner should be able to share confidential data while delegating much of the burden of access control management to the Cloud and trusted enterprises. The lack of such methods to enhance privacy and security may hinder the growth of cloud computing. In particular, there is a growing need to better manage the security keys of data shared in the Cloud. BYOD provides a first step towards enabling secure and efficient key management; however, the data owner cannot guarantee that the data consumer's device itself is secure. Furthermore, in current methods the data owner cannot efficiently revoke a particular data consumer or group. In this paper, we address these issues by incorporating a hardware-based Trusted Platform Module (TPM) mechanism called the Trusted Extension Device (TED), together with our security model and protocol, to allow stronger privacy of data compared to software-based security protocols. We demonstrate the concept of using TED for stronger protection and management of cryptographic keys and show how our secure data sharing protocol allows a data owner (e.g., an author) to securely store data via untrusted Cloud services. Our work prevents keys from being stolen by outsiders and/or dishonest authorised consumers, making it particularly attractive for implementation in real-world scenarios.
The reliability theory used in the design of complex systems, including electric grids, assumes random component failures and is thus unsuited to analyzing security risks due to attackers that intentionally damage several components of the system. In this paper, a security risk analysis methodology is proposed consisting of vulnerability analysis and impact analysis. Vulnerability analysis is a method developed by security engineers to identify the attacks that are relevant for the system under study; in this paper, the analysis is applied to the communications network topology of the electric grid automation system. Impact analysis is then performed through co-simulation of the automation and the electric grid to assess the potential damage from the attacks. This paper provides an extensive review of vulnerability and impact analysis methods and relevant system modeling techniques from the fields of security and industrial automation engineering, with a focus on smart grid automation, and then applies and combines these approaches to obtain a security risk analysis methodology. The methodology is demonstrated with a case study of fault location, isolation, and supply restoration smart grid automation.
Social networking sites (SNSs), with their large numbers of users and large information base, seem to be the perfect breeding ground for exploiting the vulnerabilities of people, who are considered the weakest link in security. Deceiving, persuading, or influencing people to provide information or to perform an action that will benefit the attacker is known as "social engineering." Fraudulent and deceptive people use social engineering traps and tactics through SNSs to trick users into obeying them, accepting threats, and falling victim to various crimes such as phishing, sexual abuse, financial abuse, identity theft, and physical crime. Although organizations, researchers, and practitioners recognize the serious risks of social engineering, there is a severe lack of understanding and control of such threats. This may be partly due to the complexity of human behaviors in approaching, accepting, and failing to recognize social engineering tricks. This research aims to investigate the impact of source characteristics on users' susceptibility to social engineering victimization in SNSs, particularly Facebook. Using the grounded theory method, we develop a model that explains what source characteristics influence Facebook users to judge the attacker as credible, and how.
One of the criticisms of traditional security approaches is that they present a static target for attackers. Critics state, with good justification, that by allowing the attacker to reconnoiter a system at leisure to plan an attack, defenders are immediately disadvantaged. To address this, the concept of moving-target defense (MTD) has recently emerged as a new paradigm for protecting computer networks and systems.
Due to the outsourcing of design and fabrication to foundries, the problem of malicious modifications to integrated circuits, known as hardware Trojans, has attracted attention in academia as well as industry. To reduce the risks associated with Trojans, researchers have proposed different approaches to detect them. Among these approaches, test-time detection approaches have drawn the greatest attention, and most assume the existence of a “golden model”. Prior works suggest using reverse-engineering to identify such Trojan-free ICs for the golden model, but they do not state how to do this efficiently. In this paper, we propose an innovative and robust reverse-engineering approach to identify Trojan-free ICs. We adapt a well-studied machine learning method, the one-class support vector machine, to solve our problem. Simulation results using state-of-the-art tools on several publicly available circuits show that our approach can detect hardware Trojans with a high accuracy rate across different modeling and algorithm parameters.
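The feature extraction from reverse-engineered ICs is the paper's contribution and is not reproduced here; the sketch below only shows the one-class SVM step with placeholder features, using scikit-learn. All data is synthetic.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Placeholder feature vectors standing in for measurements extracted from
# reverse-engineered ICs: 200 assumed Trojan-free samples for training.
golden = rng.normal(0.0, 1.0, size=(200, 5))
# Suspect population: 8 clean-looking ICs plus 2 clear outliers.
suspect = np.vstack([rng.normal(0.0, 1.0, size=(8, 5)),
                     rng.normal(4.0, 1.0, size=(2, 5))])

# One-class SVM learns the boundary of the "golden" distribution; nu bounds
# the fraction of training points treated as outliers.
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(golden)
print(clf.predict(suspect))   # +1 = consistent with golden, -1 = flagged
```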