Bibliography
Wide-area monitoring, protection, and control for power network systems are among the fundamental components of the smart grid concept. Synchronized measurement technologies such as Phasor Measurement Units (PMUs) will play a major role in implementing these components, and they have the potential to provide reliable and secure full system observability. The problem of Optimal Placement of PMUs (OPP) consists of locating a minimal set of power buses where PMUs must be placed in order to provide full system observability. In this paper a novel solution to the OPP problem using a Memetic Algorithm (MA) is proposed. The implemented MA combines the global optimization power of genetic algorithms with local solution tuning using the hill-climbing method. The performance of the proposed approach was demonstrated on IEEE benchmark power networks as well as on a segment of the Idaho region power network. It was shown that the proposed MA-based solution features a significantly faster convergence rate toward the optimum solution.
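As a rough illustration of the memetic idea described above (genetic search plus hill-climbing refinement), the following Python sketch solves a toy OPP instance under the simplified rule that a bus is observable if it or a neighbour hosts a PMU. The network topology, penalty weight, and GA parameters are illustrative assumptions, not the paper's benchmark systems or settings.

```python
# Minimal sketch of a memetic algorithm for PMU placement (OPP); hypothetical
# 5-bus network and parameters, simplified observability rule.
import random

ADJ = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
N = len(ADJ)

def observed(placement):
    """Set of buses observed by a 0/1 placement vector."""
    obs = set()
    for bus, has_pmu in enumerate(placement):
        if has_pmu:
            obs.add(bus)
            obs.update(ADJ[bus])
    return obs

def fitness(placement):
    """Lower is better: PMU count plus a heavy penalty per unobserved bus."""
    return sum(placement) + 10 * (N - len(observed(placement)))

def hill_climb(placement):
    """Local search: greedily drop PMUs that do not break full observability."""
    p = list(placement)
    improved = True
    while improved:
        improved = False
        for i in range(N):
            if p[i]:
                p[i] = 0
                if len(observed(p)) == N:
                    improved = True      # removal kept full observability
                else:
                    p[i] = 1             # undo the removal
    return p

def memetic(pop_size=20, generations=50):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N)
            child = a[:cut] + b[cut:]          # one-point crossover
            child[random.randrange(N)] ^= 1    # bit-flip mutation
            children.append(hill_climb(child)) # local refinement step
        pop = survivors + children
    return min(pop, key=fitness)

print(memetic())   # e.g. a two-PMU placement such as [0, 1, 0, 0, 1]
```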
This paper reports the results and findings of a historical analysis of open source intelligence (OSINT) information (namely Twitter data) surrounding the events of the September 11, 2012 attack on the US diplomatic mission in Benghazi, Libya. In addition to this historical analysis, two prototype capabilities were combined in a tabletop exercise to explore the effectiveness of using OSINT together with a context-aware handheld situational awareness framework and application to better inform potential responders as the events unfolded. Our experience shows that the ability to model sentiment and trends and to monitor keywords in streaming social media, coupled with the ability to share that information with edge operators, can increase their ability to respond effectively to contingency operations as they unfold.
The United States, including the Department of Defense, relies heavily on information systems and networking technologies to efficiently conduct a wide variety of missions across the globe. With the ever-increasing rate of cyber attacks, this dependency places the nation at risk of losing the confidentiality, integrity, and availability of its critical information resources, degrading its ability to complete the mission. In this paper, we introduce operational data classes for establishing situational awareness in cyberspace. A system effectively using our key information components will be able to provide the nation's leadership with timely and accurate information to gain an understanding of the operational cyber environment and to enable strategic, operational, and tactical decision-making. We present, define, and provide examples of our key classes of operational data for cyber situational awareness and present a hypothetical case study demonstrating how they must be consolidated to provide a clear and relevant picture to a commander. In addition, current organizational and technical challenges are discussed, and areas for future research are addressed.
Cover time measures the time (or number of steps) required for a mobile agent to visit each node in a network (graph) at least once. A short cover time is important for search or foraging applications that require mobile agents to quickly inspect or monitor nodes in a network, such as providing situational awareness or security. Speed can be achieved if details about the graph are known or if the agent maintains a history of visited nodes; however, these requirements may not be feasible for agents with limited resources, they are difficult to satisfy in dynamic graph topologies, and they do not scale easily to large networks. This paper introduces a set-based form of heading (directional bias) that allows an agent to explore any connected graph, static or dynamic, more efficiently. When deciding the next node to visit, agents are discouraged from visiting nodes that neighbor both their previous and current locations. Modifying a traditional movement method, e.g., a random walk, with this concept encourages an agent to move toward nodes that are less likely to have been previously visited, reducing cover time. Simulation results with grid, scale-free, and minimum-distance graphs demonstrate that heading can consistently reduce cover time compared to non-heading movement techniques.
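The heading rule lends itself to a compact simulation. The Python sketch below, a simplified reading of the abstract's rule (candidates neighbouring both the previous and current node, plus the previous node itself, are avoided when possible), compares cover time with and without the bias on an illustrative grid graph; graph size and trial counts are assumptions.

```python
# Minimal sketch: heading-biased random walk vs. plain random walk on a grid.
import random

def grid_graph(n):
    """Adjacency dict for an n x n grid graph."""
    adj = {}
    for r in range(n):
        for c in range(n):
            nbrs = []
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < n and 0 <= cc < n:
                    nbrs.append((rr, cc))
            adj[(r, c)] = nbrs
    return adj

def cover_time(adj, start, use_heading):
    visited, prev, cur, steps = {start}, None, start, 0
    while len(visited) < len(adj):
        candidates = adj[cur]
        if use_heading and prev is not None:
            back_set = set(adj[prev]) | {prev}
            preferred = [v for v in candidates if v not in back_set]
            if preferred:                 # move "forward" whenever possible
                candidates = preferred
        prev, cur = cur, random.choice(candidates)
        visited.add(cur)
        steps += 1
    return steps

adj = grid_graph(10)
trials = 200
rw = sum(cover_time(adj, (0, 0), False) for _ in range(trials)) / trials
hd = sum(cover_time(adj, (0, 0), True) for _ in range(trials)) / trials
print(f"plain random walk: {rw:.0f} steps, heading-biased: {hd:.0f} steps")
```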
The University of Illinois at Urbana-Champaign (Illinois), Pacific Northwest National Laboratory (PNNL), and University of Southern California Information Sciences Institute (USC-ISI) consortium is working toward providing tools and expertise to enable collaborative research to improve the security and resiliency of cyber-physical systems. In this extended abstract we discuss the challenges and the solution space. We demonstrate the feasibility of some of the proposed components through a wide-area situational awareness experiment for the power grid across the three sites.
A system implementing real-time situational awareness through discovery, prevention, detection, response, audit, and management capabilities is seen as central to facilitating the protection of critical infrastructure systems. The effectiveness of providing such awareness technologies for electrical distribution companies is being evaluated in a series of field trials: (i) a Substation Intrusion Detection / Prevention System (IDPS) and (ii) a Security Information and Event Management (SIEM) System. These trials will help create a realistic case study on the effectiveness of such technologies, with a view to forming a framework for the critical infrastructure cyber security defense systems of the future.
Complexity is ever increasing within our information environment and organisations, as interdependent dynamic relationships within sociotechnical systems result in high variety and uncertainty from a lack of information or control. A net-centric approach is a strategy to improve information value, enabling stakeholders to extend their reach to additional data sources, share Situational Awareness (SA), synchronise effort and optimise resource use to deliver maximum (or proportionate) effect in support of goals. This paper takes a systems perspective to understand the dynamics within a net-centric information system. It presents the first stages of the Soft Systems Methodology (SSM): developing a conceptual model of the human activity system and a system dynamics model to represent system behaviour, which will inform future research into a net-centric approach to information security. Our model supports the net-centric hypothesis that participation within an information-sharing community extends information reach and improves organisational SA, allowing proactive action to mitigate vulnerabilities and reduce overall risk within the community. The system dynamics model provides organisations with tools to better understand the value of a net-centric approach, a framework to determine their own maturity, and a means to evaluate strategic relationships with collaborative communities.
Very high resolution satellite imagery used to be a rare commodity, with infrequent satellite pass-over times over a specific area of interest precluding many useful applications. Today, more and more such satellite systems are available, while visual analysis and interpretation of imagery remain important for deriving relevant features and changes from satellite data. In order to allow efficient, robust and routine image analysis for humanitarian purposes, semi-automated feature extraction is of increasing importance for operational emergency mapping tasks. Within the framework of the European Earth observation programme COPERNICUS and related research activities under the European Union's Seventh Framework Programme, substantial scientific developments and mapping services are dedicated to satellite-based humanitarian mapping and monitoring. In this paper, recent results in methodological research and in the development of routine services in satellite mapping for humanitarian situational awareness are reviewed and discussed. Ethical aspects of the sensitivity and security of humanitarian mapping are deliberated. Furthermore, methods for monitoring and analysis of refugee and internally displaced persons camps in humanitarian settings are assessed. Advantages and limitations of object-based image analysis, sample-supervised segmentation and feature extraction are presented and discussed.
This paper presents a novel electrocardiogram (ECG) compression method for e-health applications that adapts an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: in the first stage, AFD performs efficient lossy compression with high fidelity; in the second stage, SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from the MIT-BIH arrhythmia benchmark database, the proposed method achieves an average compression ratio (CR) of 17.6-44.5 and a percentage root-mean-square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing compression performance into a previously unexploited region. As such, this paper provides an attractive candidate ECG compression method for pervasive e-health applications.
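For readers unfamiliar with the two figures of merit quoted above, the short Python sketch below applies their standard definitions (CR as the ratio of original to compressed size, PRD as the normalized root-mean-square reconstruction error). The toy signal, the distorted "reconstruction", and the byte counts are purely illustrative; this is not the AFD/SS pipeline itself.

```python
# CR and PRD as commonly defined for ECG compression; toy data only.
import math

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

def prd(original, reconstructed):
    """PRD (%) = 100 * sqrt( sum (x - x_hat)^2 / sum x^2 )."""
    num = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
    den = sum(x ** 2 for x in original)
    return 100.0 * math.sqrt(num / den)

# Synthetic "ECG-like" segment and a slightly distorted copy of it.
x = [math.sin(2 * math.pi * t / 50) for t in range(500)]
x_hat = [v + 0.01 * math.sin(2 * math.pi * t / 7) for t, v in enumerate(x)]

print(f"CR  = {compression_ratio(500 * 11, 500 * 11 / 20):.1f}")  # hypothetical bit budgets
print(f"PRD = {prd(x, x_hat):.2f} %")
```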
One of the primary challenges when developing or implementing a security framework for any particular environment is determining the efficacy of the implementation. Does the implementation address all of the potential vulnerabilities in the environment, or are there still unaddressed issues? Further, if there is a choice between two frameworks, what objective measure can be used to compare them? To address these questions, we propose using attack graph analysis to map the attack surface of the environment and identify the most likely avenues of attack. We show that with this technique we can quantify the baseline state of an application and compare it to the attack surface after implementation of a security framework, while simultaneously allowing comparison between frameworks in the same environment or of a single framework across multiple applications.
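One way to make the "most likely avenue of attack" notion concrete, sketched below in Python under assumptions that are not taken from the paper, is to weight attack-graph edges with exploit success probabilities and find the path that maximizes their product (equivalently, a shortest path on negative log weights). Comparing the result before and after hardening gives a simple numeric baseline-vs-framework comparison; all nodes, edges, and probabilities here are hypothetical.

```python
# Toy attack-graph scoring: most likely attack path via Dijkstra on -log(p).
import heapq
import math

def most_likely_path(edges, source, target):
    """edges: {node: [(next_node, success_probability), ...]}"""
    dist = {source: 0.0}        # accumulated -log probability
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, p in edges.get(u, []):
            nd = d - math.log(p)
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if target not in dist:
        return None, 0.0
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-dist[target])

baseline = {                      # hypothetical environment before hardening
    "internet": [("web_server", 0.6)],
    "web_server": [("app_server", 0.5), ("database", 0.1)],
    "app_server": [("database", 0.7)],
}
hardened = {                      # same environment after a security framework
    "internet": [("web_server", 0.2)],
    "web_server": [("app_server", 0.2), ("database", 0.05)],
    "app_server": [("database", 0.7)],
}

for name, g in (("baseline", baseline), ("hardened", hardened)):
    path, prob = most_likely_path(g, "internet", "database")
    print(f"{name}: most likely path {path}, probability {prob:.3f}")
```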
We provide a generic framework that, with the help of a preprocessing phase that is independent of the inputs of the users, allows an arbitrary number of users to securely outsource a computation to two non-colluding external servers. Our approach is shown to be provably secure in an adversarial model where one of the servers may arbitrarily deviate from the protocol specification, as well as employ an arbitrary number of dummy users. We use these techniques to implement a secure recommender system based on collaborative filtering that is both more secure and, when the preprocessing effort is excluded, significantly more efficient than previously known implementations of such systems. We suggest different alternatives for preprocessing and discuss their merits and demerits.
The longstanding debate on a fundamental science of security has led to advances in systems, software, and network security. However, existing efforts have done little to inform how an environment should react to emerging and ongoing threats and compromises. The authors explore the goals and structures of a new science of cyber decision-making in the Cyber-Security Collaborative Research Alliance, which seeks to develop a fundamental theory for reasoning, under uncertainty, about the best possible action in a given cyber environment. They also explore the needs and limitations of detection mechanisms; agile systems; and the users, adversaries, and defenders that use and exploit them, and conclude by considering how environmental security can be cast as a continuous optimization problem.
Distributed and parallel applications are critical information technology systems in multiple industries, including academia, the military, government, finance, medicine, and transportation. These applications present target-rich environments for malicious attackers seeking to disrupt the confidentiality, integrity, and availability of these systems. Applying the military concept of defensive cyber maneuver to these systems can provide protection and defense mechanisms that allow survivability and operational continuity. Understanding the tradeoffs between information systems security and operational performance when applying maneuver principles is of interest to administrators, users, and researchers. To this end, we present a model of a defensive cyber maneuver platform using Stochastic Petri Nets. This model enables the understanding and evaluation of the costs and benefits of maneuverability in a distributed application environment, focusing specifically on moving target defense and deceptive defense strategies.
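The cost/benefit trade-off of maneuverability can also be illustrated with a much simpler stand-in than a Stochastic Petri Net. The Python sketch below assumes an attacker who needs a fixed, uninterrupted dwell time to succeed, while each maneuver resets that progress but takes the node offline briefly; all rates, times, and the model itself are illustrative assumptions, not the paper's SPN.

```python
# Simplified maneuver trade-off simulation (not the paper's SPN model).
import random

def simulate(maneuver_rate, attack_time=3.0, downtime=0.05,
             horizon=100.0, trials=2000):
    """Return (compromise probability, expected total downtime) per mission."""
    compromises, downtime_total = 0, 0.0
    for _ in range(trials):
        t = 0.0
        while t < horizon:
            gap = (random.expovariate(maneuver_rate)
                   if maneuver_rate > 0 else float("inf"))
            if gap > attack_time:          # attacker finishes before the next move
                if t + attack_time < horizon:
                    compromises += 1
                break
            if gap >= horizon - t:
                break                      # mission ends before the next maneuver
            t += gap
            downtime_total += downtime     # operational cost of one maneuver
    return compromises / trials, downtime_total / trials

for rate in (0.0, 0.5, 2.0, 5.0):
    p, cost = simulate(rate)
    print(f"maneuver rate {rate:>3}: P(compromise) ~ {p:.2f}, "
          f"avg downtime ~ {cost:.1f} time units")
```

Higher maneuver rates drive the compromise probability down while the accumulated downtime grows, which is exactly the security-versus-performance tension the abstract describes.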
Many common cyberdefenses (like firewalls and intrusion-detection systems) are static, giving attackers the freedom to probe them at will. Moving-target defense (MTD) adds dynamism, putting the systems to be defended in motion, potentially at great cost to the defender. An alternative approach is a mobile resilient defense that removes attackers' ability to rely on prior experience without requiring motion in the protected infrastructure. The defensive technology absorbs most of the cost of motion, is resilient to attack, and is unpredictable to attackers. The authors' mobile resilient defense, Ant-Based Cyber Defense (ABCD), is a set of roaming, bio-inspired, digital-ant agents working with stationary agents in a hierarchy headed by a human supervisor. ABCD provides a resilient, extensible, and flexible defense that can scale to large, multi-enterprise infrastructures such as the smart electric grid.
In this paper, we propose techniques for combating source selective jamming attacks in tactical cognitive MANETs. Secure, reliable and seamless communications are important for facilitating tactical operations. Selective jamming attacks pose a serious security threat to the operations of wireless tactical MANETs, since selective strategies have the potential to completely isolate a portion of the network from other nodes without giving a clear indication of a problem. Our proposed mitigation techniques use the concept of address manipulation; they differ from other techniques presented in the open literature in that they employ a decentralized architecture rather than a centralized framework and do not require any extra overhead. Experimental results show that the proposed techniques enable communications in the presence of source selective jamming attacks. When the presence of a source selective jammer blocks transmissions completely, implementing the proposed flipped address mechanism increases the expected number of required transmission attempts by only one. The probability that our second approach, random address assignment, fails to resolve the correct source MAC address can be as small as 10^-7 with appropriate parameter selection.
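To make the address-manipulation idea tangible, the Python sketch below assumes that a "flipped address" means bitwise inversion of the source MAC (so a jammer keyed to the real address no longer matches) and that random assignment draws a fresh locally administered MAC; these details and the collision estimate are illustrative assumptions, not the paper's exact protocol or parameters.

```python
# Address manipulation sketch: flipped MAC and random locally administered MAC.
import random

def flip_mac(mac: str) -> str:
    """Bitwise-invert every octet of a MAC address string."""
    octets = [int(x, 16) for x in mac.split(":")]
    return ":".join(f"{(~o) & 0xFF:02x}" for o in octets)

def random_mac() -> str:
    """Draw a random locally administered, unicast MAC address."""
    octets = [random.randrange(256) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & 0xFE   # set local bit, clear multicast bit
    return ":".join(f"{o:02x}" for o in octets)

real = "00:1a:2b:3c:4d:5e"
print("real MAC   :", real)
print("flipped MAC:", flip_mac(real))       # a jammer matching `real` is bypassed
print("random MAC :", random_mac())

# Rough collision estimate for random assignment: chance that a fresh random
# address collides with one of n addresses already in use (46 free bits here).
n = 50
print("collision probability ~", n / 2**46)
```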
Moving target defense is an area of network security research in which machines are moved logically around a network in order to avoid detection. This is done by leveraging the immense size of the IPv6 address space and the statistical improbability of two machines selecting the same IPv6 address. This defensive technique forces a malicious actor to focus on the reconnaissance phase of the attack rather than focusing only on finding holes in a machine's static defenses. We have a current implementation of an IPv6 moving target defense entitled MT6D, which works well but is limited to peer-to-peer scenarios. As we push our research forward into client-server networks, we must determine the limits on the client-to-server ratio. In our current implementation of a simple UDP echo server that binds large numbers of IPv6 addresses to the Ethernet interface, we discover limits both in the number of addresses that we can successfully bind to an interface and in the speed at which UDP requests can be successfully handled across a large number of bound addresses.
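A minimal version of that experiment can be sketched as follows, assuming a Linux host where extra IPv6 addresses are added with `ip -6 addr add` (root required) and a UDP echo server opens one socket per bound address. The interface name, the documentation prefix, the port, and the address count are placeholders rather than the paper's setup.

```python
# Sketch: bind several IPv6 addresses and echo UDP datagrams on each of them.
import selectors
import socket
import subprocess

IFACE = "eth0"                     # hypothetical interface
PREFIX = "2001:db8:0:1::"          # documentation prefix used as a stand-in
COUNT = 8                          # number of addresses to bind
PORT = 7777

def add_addresses():
    addrs = [f"{PREFIX}{i:x}" for i in range(1, COUNT + 1)]
    for a in addrs:
        # `ip` exits non-zero if the address already exists; ignore that case.
        subprocess.run(["ip", "-6", "addr", "add", f"{a}/64", "dev", IFACE],
                       check=False)
    return addrs

def serve(addrs):
    sel = selectors.DefaultSelector()
    for a in addrs:
        s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        s.bind((a, PORT))
        s.setblocking(False)
        sel.register(s, selectors.EVENT_READ)
    print(f"echoing UDP on {len(addrs)} IPv6 addresses, port {PORT}")
    while True:
        for key, _ in sel.select():
            sock = key.fileobj
            data, peer = sock.recvfrom(2048)
            sock.sendto(data, peer)    # echo the datagram back to the sender

if __name__ == "__main__":
    serve(add_addresses())
```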
One of the major threats against web applications is Cross-Site Scripting (XSS). The final target of XSS attacks is the client running a particular web browser. Over the last decade, several competing web browsers (IE, Netscape, Chrome, Firefox) have evolved to support new features. In this paper, we explore whether the evolution of web browsers is accompanied by systematic security regression testing. Beginning with an analysis of their current degree of exposure to XSS, we extend the empirical study to a decade of the most popular web browser versions. We use XSS attack vectors as unit test cases and propose a new method, supported by a tool, to address this XSS vector testing issue. The analysis of a decade of releases of the most popular web browsers, including mobile ones, shows an urgent need for XSS regression testing. We advocate the use of a shared security testing benchmark as good practice and propose a first set of publicly available XSS vectors as a basis to ensure that security is not sacrificed when a new version is delivered.
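The "XSS vectors as unit test cases" idea can be illustrated with a tiny regression suite. In the Python sketch below, a placeholder `sanitize()` filter stands in for the component under test (the paper drives real browser versions instead); the vectors are classic, publicly known examples and the pass criterion is a deliberately crude oracle.

```python
# Sketch: XSS attack vectors treated as unit test cases for a filter under test.
import html
import unittest

XSS_VECTORS = [
    "<script>alert(1)</script>",
    "<img src=x onerror=alert(1)>",
    "<svg/onload=alert(1)>",
    "javascript:alert(1)",
]

def sanitize(payload: str) -> str:
    """Placeholder filter under test: naive HTML entity encoding."""
    return html.escape(payload, quote=True)

class XssRegressionSuite(unittest.TestCase):
    def test_vectors_are_neutralised(self):
        for vector in XSS_VECTORS:
            with self.subTest(vector=vector):
                out = sanitize(vector)
                # Crude oracle: no raw markup characters survive the filter.
                self.assertNotIn("<", out)
                self.assertNotIn(">", out)

if __name__ == "__main__":
    unittest.main()
```

Swapping in each new browser or filter release and rerunning the same vector suite is the regression-testing discipline the paper argues for.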
Due to the frequent use of online web applications for various day-to-day activities, web applications are becoming a most attractive target for attackers. Cross-Site Scripting (XSS) is one of the most prominent web-based attacks and can lead to compromise of the whole browser rather than just the web application from which the attack originated. Securing web applications using server-side solutions alone is not sufficient, as developers are not necessarily security aware. Therefore, browser vendors have tried to evolve client-side filters to defend against these attacks. This paper shows that even the most prevalent XSS filters deployed by the latest versions of the most widely used web browsers do not provide adequate defense. We evaluate three browsers - Internet Explorer 11, Google Chrome 32, and Mozilla Firefox 27 - for reflected XSS attacks against different types of vulnerabilities. We find that none of them is able to defend against all possible types of reflected XSS vulnerabilities. Further, we evaluate Firefox after installing an add-on named XSS-Me, which is widely used for testing reflected XSS vulnerabilities. Experimental results show that this client-side solution can shield against a greater percentage of vulnerabilities than the other browsers. It would be even more promising if this add-on were integrated inside the browser instead of being enforced as an extension.
Cross-Site Scripting (XSS) is a common attack technique that lets attackers insert code into the output of a web application; the output is rendered by the visitor's web browser, where the inserted code executes automatically and steals sensitive information. To protect users from XSS attacks, many client-side solutions have been implemented; most of those in use are filters that sanitize malicious input. However, many of these filters do not prevent newly designed sophisticated attacks such as multiple points of injection, injection into script contexts, etc. This paper proposes and implements an approach based on encoding unfiltered reflections for detecting vulnerable web applications that can be exploited using the above-mentioned sophisticated attacks. Results show that the proposed approach provides an accurate and higher detection rate for exploits. In addition, an implementation that blocks the execution of malicious scripts has been contributed to XSS-Me, an open-source Mozilla Firefox security extension that detects reflected XSS vulnerabilities; it could be considered an effective solution if integrated inside the browser rather than enforced as an extension.
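The core reflection-encoding check can be illustrated independently of the extension. The Python sketch below injects a unique marker wrapped in special characters into a URL parameter and flags the page if those characters are reflected without encoding; the target URL and parameter are placeholders, and this is a conceptual illustration rather than the XSS-Me implementation. Only probe applications you are authorised to test.

```python
# Sketch: detect an unencoded reflection of special characters in a response.
import urllib.parse
import urllib.request
import uuid

def probe_reflection(base_url: str, param: str) -> bool:
    marker = uuid.uuid4().hex[:8]
    payload = f'"><{marker}>'                    # special chars around a unique token
    qs = urllib.parse.urlencode({param: payload})
    with urllib.request.urlopen(f"{base_url}?{qs}", timeout=10) as resp:
        body = resp.read().decode(errors="replace")
    raw = f"><{marker}>" in body                 # reflected without encoding
    encoded = f"&gt;&lt;{marker}&gt;" in body    # reflected after HTML encoding
    return raw and not encoded

if __name__ == "__main__":
    url, param = "http://localhost:8000/search", "q"   # hypothetical test target
    print("potentially vulnerable reflection:", probe_reflection(url, param))
```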
Cloud-based communication systems are now widely used in many application fields such as medicine, security, and environmental protection. Their use is being extended to the most demanding services, such as multimedia delivery. However, there are many constraints when cloud-based sensor networks use the standard IEEE 802.15.3 or IEEE 802.15.4 technologies. This paper proposes a channel characterization scheme combined with a cross-layer admission control in dynamic cloud-based multimedia sensor networks to share the network resources between any two nodes. The analysis shows the behavior of two nodes using different network access technologies and the channel effects for each technology. Moreover, the existence of optimal node arrival rates that improve the usage of dynamic admission control when network resources are in use is also shown. An extensive simulation study was performed to evaluate and validate the efficiency of the proposed dynamic admission control for cloud-based multimedia sensor networks.
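As a stand-in illustration of how an admission controller can relate node arrival rate to resource usage (not the paper's cross-layer scheme), the Python sketch below uses the classical Erlang B blocking formula; the channel count, service rate, and blocking target are illustrative assumptions.

```python
# Admission-control stand-in: Erlang B blocking vs. offered arrival rate.
def erlang_b(channels: int, offered_load: float) -> float:
    """Blocking probability for `offered_load` Erlangs on `channels` servers."""
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

CHANNELS, SERVICE_RATE, TARGET = 16, 0.5, 0.02   # sessions/s per channel, 2% blocking target
for arrival_rate in (1.0, 3.0, 5.0, 7.0):
    load = arrival_rate / SERVICE_RATE            # offered load in Erlangs
    blocking = erlang_b(CHANNELS, load)
    decision = "admit new flows" if blocking <= TARGET else "throttle admissions"
    print(f"arrival rate {arrival_rate:.1f}/s -> blocking {blocking:.3f}: {decision}")
```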
An application of two Cyber-Physical System (CPS) security countermeasures - the Intelligent Checker (IC) and the cross-correlator - for enhancing CPS safety and achieving the required CPS safety integrity level is presented. ICs are smart sensors aimed at detecting attacks in a CPS and alerting the human operators. The cross-correlator is an anomaly detection technique for detecting deception attacks. We show how ICs could be implemented at three different CPS safety protection layers to maintain the CPS in a safe state. In addition, we combine ICs with the cross-correlator technique to assure a high probability of failure detection. Performance simulations show that a combination of these two security countermeasures is effective in detecting and mitigating CPS failures, including catastrophic failures.
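A cross-correlator of this kind can be sketched in a few lines: two redundant sensor signals that normally track each other are compared over a sliding window, and a deception attack that freezes or replays one of them pulls the windowed correlation below a threshold. The signals, window size, and threshold below are illustrative assumptions, not the paper's process model.

```python
# Sketch: windowed correlation between redundant sensors as a deception detector.
import math
import random

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# Two redundant measurements of the same process variable with small noise.
truth = [math.sin(0.1 * k) for k in range(400)]
sensor_a = [v + random.gauss(0, 0.02) for v in truth]
sensor_b = [v + random.gauss(0, 0.02) for v in truth]
# Deception attack: from sample 250 onward, sensor_b is frozen (replayed value).
for k in range(250, 400):
    sensor_b[k] = sensor_b[249]

WINDOW, THRESHOLD = 50, 0.8
for start in range(0, 400 - WINDOW + 1, WINDOW):
    r = pearson(sensor_a[start:start + WINDOW], sensor_b[start:start + WINDOW])
    status = "ALARM" if r < THRESHOLD else "ok"
    print(f"samples {start:3d}-{start + WINDOW - 1:3d}: correlation {r:+.2f} {status}")
```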
Commercial Wireless Sensor Networks (WSNs) can be accessed through sensor web portals. However, the associated security implications and threats to 1) users/subscribers, 2) investors, and 3) third-party operators of sensor web portals have not been considered in their entirety; rather, contemporary work handles them in parts. In this paper, we discuss the different kinds of security attacks and vulnerabilities at different layers affecting users, investors including Wireless Sensor Network Service Providers (WSNSPs), and the WSN itself, in relation to two well-known documents from the Department of Homeland Security (DHS) and the Department of Defense (DOD), as these are the standard security documents to date. Further, we propose a comprehensive cross-layer security solution, in light of the guidelines given in the aforementioned documents, that is minimalist in implementation and achieves the stated security goals.
Security is a major challenge preventing wide deployment of smart grid technology. Typically, the classical power grid is protected with a set of isolated security tools applied to individual grid components and layers, ignoring their cross-layer interaction. Such an approach does not address the smart grid security requirements, because intricate attacks are usually cross-layer, exploiting multiple vulnerabilities at various grid layers and domains. We advance a conceptual layering model of the smart grid and a high-level overview of a security framework, termed CyNetPhy, towards enabling cross-layer security of the smart grid. CyNetPhy tightly integrates and coordinates three interrelated and highly cooperative real-time security systems that span various layers of the grid's cyber and physical domains to simultaneously address the grid's operational and security requirements. In this article, we present in detail the physical security layer (PSL) of CyNetPhy. We describe an attack scenario raising the emerging hardware Trojan threat in process control systems (PCSes) and its novel PSL resolution leveraging model predictive control principles. Initial simulation results illustrate the feasibility and effectiveness of the PSL.
With the growing demand for increased spectral efficiency, there has been renewed interest in enabling full-duplex communications. However, existing approaches to enabling full-duplex require a clean-slate approach to address its key challenge, namely self-interference suppression. This serves as a big deterrent to enabling full-duplex in existing cellular networks. Towards our vision of enabling full-duplex in legacy cellular, specifically LTE, networks with no modifications to existing hardware at the BS and client or to technology-specific industry standards, we present the design of our experimental system FD-LTE, which incorporates a combination of passive SI cancellation schemes with legacy LTE half-duplex BS and client devices. We build a prototype of FD-LTE, integrate it with LTE's evolved packet core, and conduct over-the-air experiments to explore the feasibility and potential of full-duplex with legacy LTE networks. We report promising experimental results from FD-LTE, which currently applies to scenarios with the limited ranges typical of small cells.
Conventional cellular systems are dimensioned according to a worst-case scenario, and they are designed to ensure ubiquitous coverage with an always-present wireless channel, irrespective of the spatial and temporal demand for service. A more energy-conscious approach requires an adaptive system with a minimum amount of overhead that is available at all locations and at all times but becomes functional only when needed. This approach suggests a new clean-slate system architecture with a logical separation between the ability to establish availability of the network and the ability to provide functionality or service. Focusing on the physical layer frame of such an architecture, this paper discusses and formulates the overhead reduction that can be achieved in next-generation cellular systems compared with the Long Term Evolution (LTE) standard. Considering channel estimation as a performance metric while conforming to the time and frequency constraints on pilot spacing, we show that the overhead gain does not come at the expense of performance degradation.
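A back-of-the-envelope version of the pilot-overhead argument can be written down directly from the LTE-like numerology of 12 subcarriers by 14 OFDM symbols per resource block per subframe. In the Python sketch below, the LTE-like figure uses the cell-specific reference signal pattern for one antenna port, while the "lean" spacing is a hypothetical comparison point chosen only to stay within coherence-time/bandwidth limits, not the paper's actual design.

```python
# Pilot overhead per resource block for an LTE-like grid vs. a leaner grid.
SUBCARRIERS, SYMBOLS = 12, 14         # resource elements per RB per 1 ms subframe

def pilot_overhead(freq_spacing: int, pilot_symbols: int) -> float:
    """Fraction of resource elements used as pilots for one antenna port."""
    pilots = (SUBCARRIERS // freq_spacing) * pilot_symbols
    return pilots / (SUBCARRIERS * SYMBOLS)

# LTE CRS, one antenna port: a pilot every 6 subcarriers on 4 symbols per subframe.
lte = pilot_overhead(freq_spacing=6, pilot_symbols=4)
# Hypothetical leaner grid for a low-mobility, low-delay-spread scenario.
lean = pilot_overhead(freq_spacing=12, pilot_symbols=2)

print(f"LTE-like pilot overhead : {lte:.1%}")    # ~4.8%
print(f"lean pilot overhead     : {lean:.1%}")   # ~1.2%
print(f"overhead reduction      : {(1 - lean / lte):.0%}")
```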