Biblio
The electric power grid is a distributed network that delivers electricity from power plants to consumers. Grid states are measured and estimated from sensor readings and control system signals, so most conventional attacks, such as denial-of-service and random attacks, can be detected with a Kalman filter. False data injection attacks, however, are crafted specifically to defeat state estimation models. Distributed Kalman filtering has proven effective for detection and estimation problems in sensor networks, and since meters are distributed throughout smart power grids, distributed estimation models apply naturally. In this paper, we therefore propose a diffusion Kalman filter for the power grid that achieves good estimation performance and effectively detects false data injection attacks.
A novel method, consisting of fault detection, rough set generation, element isolation, and parameter estimation, is presented for multiple-fault diagnosis of analog circuits with tolerance. First, a linear-programming formulation transforms fault detection in circuits with limited accessible terminals into checking whether a feasible solution exists under the tolerance constraints. Second, a fault characteristic equation is derived to generate a fault rough set; it is proved that the node voltages of the nominal circuit can be used in the fault characteristic equation under fault tolerance. Finally, fault detection with a revised deviation restriction for the suspected fault elements is performed to locate the faulty elements and estimate their parameters. Simulation results verify the diagnosis accuracy and parameter identification precision of the method.
This research-in-progress paper describes the cyber security measures undertaken in an ICT system for integrating electric storage technologies into the grid. It defines security requirements for a communications gateway; gives detailed information and hands-on configuration advice on node and communication line security, data storage, and coping with back-end M2M communications protocols; and examines privacy issues. The presented research paves the way for developing secure smart energy communications devices that help enhance energy efficiency. The described measures are implemented in an actual gateway device within the HORIZON 2020 project STORY, which aims at developing new ways to use storage and demonstrating them at six different demonstration sites.
A distributed secure data management system for the Internet of Things (IoT) is characterized by authentication and privacy policies that preserve data integrity. Multi-phase security and privacy policies ensure confidentiality and trust between users and service providers. In this regard, we present a novel Two-phase Incentive-based Secure Key (TISK) system for distributed data management in IoT. The proposed system classifies IoT user nodes and assigns low-level or high-level security keys for data transactions. Low-level secure keys are generic lightweight keys used by data collector and data aggregator nodes for trusted transactions; in TISK phase I, the Generic Service Manager (GSM-C) module verifies IoT devices based on their self-trust and server-trust incentive levels. High-level secure keys are dedicated special-purpose keys used by data manager and data expert nodes for authorized transactions; in TISK phase II, the Dedicated Service Manager (DSM-C) module verifies the certificates issued by the GSM-C module and then issues high-level secure keys to data manager and data expert nodes for specific-purpose transactions. Simulation results indicate that the proposed TISK system reduces key complexity and key cost, ensuring distributed secure data management in the IoT network.
Air gaps often form inside composite insulators and can lead to serious accidents. To detect these internal defects in composite insulators operating on transmission lines, a new non-destructive technique is proposed. A mathematical model of heat diffusion around the inner defects of composite insulators is built; it describes the propagation of heat loss and helps judge the structure and defects beneath the surface. Compared with traditional detection methods and other non-destructive techniques, the proposed technique has many advantages. Samples with artificially made air defects were first tested by flash thermography, which performed well in revealing sub-surface structure and defects. Comparing flash excitation with hot-air (hair dryer) excitation, the artificial samples showed a clearer response after flash heating, so flash excitation is preferred. Tests with different surface pollution show that pollution has little influence on revealing sub-surface structure or defects, affecting only the heat diffusion. Defective composite insulators taken from work sites were then examined, and the resulting defect images are clear. The new active thermography system performs detection quickly, efficiently, and accurately, largely unaffected by surface pollution and other environmental restrictions, so it holds broad promise for revealing defects and structure in composite insulators and even other types of insulators.
In this paper, we propose a scheme to protect the Software-Defined Networking (SDN) controller from Distributed Denial-of-Service (DDoS) attacks. We first periodically predict the number of new requests for each OpenFlow switch using a Taylor series; when the predicted value exceeds a threshold, the requests are redirected to a security gateway. Requests that cause a dramatic decrease in entropy are filtered out, and our algorithm creates corresponding rules in the security gateway, which are then sent to the controller. The controller distributes these rules to each switch so that flows matching them are directed to a honeypot. Simulations show that the average false-positive and false-negative rates are both below 2%.
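The entropy signal the scheme relies on can be sketched as follows (a minimal illustration with a hypothetical threshold fraction; the paper derives its own filtering rules): a flood concentrated on one destination collapses the Shannon entropy of the destination distribution, while benign traffic spread over many destinations keeps it high.

```python
import math
from collections import Counter

def shannon_entropy(items):
    """Shannon entropy (bits) of the empirical distribution of items."""
    counts = Counter(items)
    total = len(items)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def entropy_dropped(window, baseline, threshold=0.5):
    """Flag a traffic window whose destination entropy falls well below
    the baseline; threshold is an assumed fraction, not the paper's rule."""
    return shannon_entropy(window) < threshold * baseline
```

In use, `baseline` would be learned from normal request windows, and flagged windows would be the ones whose requests get filtered at the gateway.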
Software systems nowadays communicate via a number of complex languages, which is often the cause of security vulnerabilities such as arbitrary code execution or injections. Injections such as cross-site scripting are widely known from textual languages such as HTML and JSON, which continue to gain popularity. These systems use parsers to read input and unparsers to write output, and this is where the vulnerabilities arise; correct parsing and unparsing of messages is therefore of the utmost importance when developing secure and reliable systems. Part of the challenge developers face is to correctly encode data during unparsing and decode it during parsing. This paper presents McHammerCoder, an (un)parser and encoding generator supporting textual and binary languages. The generated (un)parsers automatically apply the generated encoding, which is derived from the language's grammar, so manually defining and applying an encoding is not required to effectively prevent injections when using McHammerCoder. By specifying the communication language as a grammar, McHammerCoder provides developers with correct input and output handling code for their custom language.
Currently, the networking of everyday objects such as vehicles and home automation environments, the so-called Internet of Things (IoT), is progressing rapidly. Formerly deployed as domain-specific solutions, development is continuing to link different domains together into one large heterogeneous IoT ecosystem. This development raises challenges in fields such as the scalability of billions of devices, interoperability across different IoT domains, and the need for mobility support. The Information-Centric Networking (ICN) paradigm is a promising candidate for a unified platform that connects different IoT domains, including infrastructure, wireless, and ad-hoc environments. This paper describes a vision of a harmonized architectural design providing dynamic access to data and services based on ICN. Within the context of connected vehicles, the paper introduces requirements and challenges of this vision and contributes open research directions in Information-Centric Networking.
Radio-Frequency Identification (RFID) tags have been widely used as a low-cost wireless method for detecting counterfeit product injection in supply chains. To adequately perform authentication, current RFID monitoring schemes need either a persistent online connection between supply chain partners and the back-end database or a local database at each partner site. A persistent online connection is not guaranteed, and local databases at each partner site impose extra cost and security issues. We solve this problem by introducing a new scheme in which a small Non-Volatile Memory (NVM) embedded in the RFID tag functions as a tiny "encoded local database". In addition, our scheme resists "tag tracing" so that each partner's operations remain private. Our scheme can be implemented in fewer than 1200 gates, satisfying current RFID technology requirements.
Ideally, minimizing the flow completion time (FCT) requires millions of priorities supported by the underlying network so that each flow has its own unique priority. However, in production datacenters, the switch priority queues available for flow scheduling are very limited (merely 2 or 3), and this practical constraint seriously degrades the performance of previous approaches. In this paper, we introduce Explicit Priority Notification (EPN), a novel scheduling mechanism that emulates fine-grained priorities (i.e., desired priorities, or DP) using only two switch priority queues. EPN can support various flow scheduling disciplines with or without flow size information. We have implemented EPN on commodity switches and evaluated its performance with both testbed experiments and extensive simulations. Our results show that, with flow size information, EPN achieves FCT comparable to pFabric, which requires clean-slate switch hardware. EPN also outperforms TCP by up to 60.5% when it bins traffic into two priority queues according to flow size. In the information-agnostic setting, EPN outperforms PIAS with two priority queues by up to 37.7%. To the best of our knowledge, EPN is the first system that provides millions of priorities for flow scheduling on commodity switches.
The Internet of Things (IoT) is revolutionizing the management and control of automated systems, leading to a paradigm shift in areas such as smart homes, smart cities, health care, and transportation. IoT technology is also envisioned to play an important role in improving the effectiveness of military operations on the battlefield. The interconnection of combat equipment and other battlefield resources for coordinated automated decisions is referred to as the Internet of Battlefield Things (IoBT). IoBT networks differ significantly from traditional IoT networks due to battlefield-specific challenges such as the absence of communication infrastructure and the susceptibility of devices to cyber and physical attacks. Combat efficiency and coordinated decision-making in war scenarios depend heavily on real-time data collection, which in turn relies on the connectivity of the network and on information dissemination in the presence of adversaries. This work aims to build the theoretical foundations for designing secure and reconfigurable IoBT networks. Leveraging the theories of stochastic geometry and mathematical epidemiology, we develop an integrated framework to study the communication of mission-critical data among different types of network devices and, consequently, to design the network in a cost-effective manner.
Code signing is at present the only methodology for trusting code that is distributed to others, and it relies heavily on the security of the software provider's private key. Attackers mount targeted attacks on the code signing infrastructure to steal signing keys, which are later used to distribute malware disguised as genuine software. Differentiating malware from benign software becomes extremely difficult once it has been signed with a trusted software provider's private key, as operating systems implicitly trust signed code. In this paper, we analyze the growing menace of signed malware by examining several real-world incidents and present a threat model for the current code signing infrastructure. We also propose a novel solution that prevents malicious code signing by requiring additional verification of the executable, and we discuss the serious threat signed malware poses and its consequences. To our knowledge, this is the first time this specific issue of malicious code signing has been thoroughly studied and an implementable solution proposed.
Complex systems are prevalent in many fields such as finance, security, and industry. A fundamental problem in system management is to perform diagnosis in case of system failure so that the causal anomalies, i.e., root causes, can be identified for system debugging and repair. Recently, the invariant network has proven to be a powerful tool for characterizing complex system behaviors. In an invariant network, a node represents a system component and an edge indicates a stable interaction between two components. Recent approaches have shown that by modeling fault propagation in the invariant network, causal anomalies can be effectively discovered. Despite their success, existing methods share a major limitation: they typically assume a single, global fault propagation in the entire network. In real-world large-scale complex systems, however, it is more common for multiple fault propagations to grow simultaneously and locally within different node clusters and to jointly define the system failure status. Inspired by this key observation, we propose a two-phase framework to identify and rank causal anomalies. In the first phase, probabilistic clustering is performed to uncover impaired node clusters in the invariant network. In the second phase, a low-rank network diffusion model is designed to backtrack causal anomalies in the different impaired clusters. Extensive experimental results on real-life datasets demonstrate the effectiveness of our method.
In the Energy Internet mode, a large volume of alarm information is generated when equipment exceptions and multiple faults occur in a large power grid, which seriously hampers information collection and fault analysis and delays accident treatment by the monitors. To address this, this paper proposes a method for real-time fault monitoring and diagnosis in power grid monitoring: it constructs an equipment fault logical model based on five-section alarm information, builds a standard fault information set, and uses a matching algorithm to realize fault information optimization, fault equipment location, fault type diagnosis, and the analysis of false-report and missing-report messages. The validity and practicality of the proposed method are verified with an actual case; the method can shorten the time needed to obtain and analyze fault information, accelerate accident treatment, and ensure the safe and stable operation of the power grid.
As technology develops, security risk increases as well. This affects most organizations, irrespective of size, as they depend on increasingly pervasive technology to perform their daily tasks. This dependency on technology has introduced diverse security vulnerabilities into organizations, which require reliable preparedness for the probable forensic investigation of unauthorized incidents. Keystroke dynamics is one of the cost-effective methods for collecting potential digital evidence. This paper presents a keystroke pattern analysis technique suitable for collecting complementary potential digital evidence for forensic readiness. The proposed technique relies on extracting a reliable behavioral signature from user activity. Experimental validation using a multi-scheme classifier demonstrates the effectiveness of the proposition. The overall goal is forensically sound and admissible keystroke evidence that can be presented during a forensic investigation to minimize its cost and duration.
The paper considers the issue of protecting data from unauthorized access through user authentication based on keystroke dynamics. It proposes using keyboard pressure parameters in combination with the time characteristics of keystrokes to identify a user. The authors designed a keyboard with special sensors that record these complementary parameters. The paper estimates the information value of the new characteristics and the error probabilities of user identification based on perceptron algorithms, Bayes' rule, and quadratic-form networks. The best result is as follows: 20 users are identified with an error rate of 0.6%.
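The timing characteristics these two keystroke-dynamics entries build on are conventionally the dwell and flight times; a minimal sketch of extracting them from raw key events (the event format is assumed for illustration, and the papers additionally use pressure or classifier-specific features):

```python
def keystroke_features(events):
    """Compute dwell and flight times from (key, down_ms, up_ms) tuples.

    Dwell = how long a key is held; flight = gap between releasing one
    key and pressing the next. These standard timing features form the
    behavioral signature that the classifiers are trained on.
    """
    dwell = [up - down for _, down, up in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, flight
```

A user's template is then typically the per-key statistics of these values, against which a fresh typing sample is scored.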
The growing popularity of Android and the increasing amount of sensitive data stored on mobile devices have led to the dissemination of Android ransomware. Ransomware is a class of malware that makes data inaccessible by blocking access to the device or, more frequently, by encrypting the data; to recover the data, the user has to pay a ransom to the attacker. One solution to this problem is to back up the data. Although backup tools are available for Android, they may be compromised or blocked by the ransomware itself. This paper presents the design and implementation of RANSOMSAFEDROID, a TrustZone-based backup service for mobile devices. RANSOMSAFEDROID is protected from malware by leveraging the ARM TrustZone extension and running in the secure world. It periodically backs up files to a secure local persistent partition and pushes these backups to external storage to protect them from ransomware. Initially, RANSOMSAFEDROID performs a full backup of the device filesystem; it then performs incremental backups that save the changes since the last backup. As a proof of concept, we implemented a RANSOMSAFEDROID prototype and provide a performance evaluation using an i.MX53 development board.
This work investigates the anonymous tag cardinality estimation problem in radio frequency identification systems with a frame slotted ALOHA-based protocol. For privacy and security, each tag, instead of sending its identity upon receiving the reader's request, randomly responds with only one bit in one of the time slots of the frame. As a result, each slot with no response is observed as empty, while the others are non-empty, and this information can be used for tag cardinality estimation. Nevertheless, under the effects of fading and noise, time slots with tag responses might be observed as empty, while those with no response might be detected as non-empty; this is known as the false detection phenomenon. The performance of conventional estimation methods is thus degraded by the inaccurate observations. To cope with this issue, we propose a new estimation algorithm based on the expectation-maximization method, in which the tag cardinality and the probability of false detection are iteratively estimated to maximize a likelihood function. Computer simulations show the merit of the proposed method.
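The core inversion behind such estimators can be sketched under simplifying assumptions (a known, symmetric false-detection probability; each tag replying in one uniformly chosen slot). The paper's EM algorithm instead estimates the false-detection probability jointly with the cardinality; this is only the closed-form step for a known value:

```python
import math

def estimate_tags(observed_empty_frac, frame_size, p_false):
    """Estimate tag count from the fraction of slots observed as empty.

    Assumes every slot flips its observed state (empty <-> non-empty)
    with known probability p_false -- an illustrative simplification.
    """
    # Undo the false-detection channel: q = (1-p)e + p(1-e)  =>  solve for e.
    e = (observed_empty_frac - p_false) / (1 - 2 * p_false)
    # A slot is truly empty with probability e = (1 - 1/L)^n; solve for n.
    return math.log(e) / math.log(1 - 1 / frame_size)
```

Replacing the assumed `p_false` with an iteratively re-estimated value, and weighting slots by their posterior state probabilities, is what turns this moment-style inversion into an EM procedure.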
The ever-increasing sophistication of malware has made malicious binary collection and analysis an absolute necessity for proactive defenses. Meanwhile, malware authors seek to harden their binaries against analysis by incorporating environment detection techniques that identify whether the binary is executing within a virtual environment or in the presence of monitoring tools. For security researchers, it remains an open question how to remove the artifacts of virtual machines to effectively build deceptive "honeypots" for malware collection and analysis. In this paper, we explore a completely different and yet promising approach based on Linux containers. Linux containers, in theory, have minimal virtualization artifacts and are easily deployable on low-power devices. Our work performs the first controlled experiments to compare Linux containers with bare metal and five major types of virtual machines. We measure the deception capabilities offered by Linux containers against mainstream virtual environment detection techniques. In addition, we empirically explore the potential weaknesses of Linux containers to help defenders make more informed design decisions.
The proliferation of Internet-of-Things (IoT) devices within homes raises many security and privacy concerns. Recent headlines highlight the lack of effective security mechanisms in IoT devices. Security threats in IoT arise not only from vulnerabilities in individual devices but also from the composition of devices in unanticipated ways and the ability of devices to interact through both cyber and physical channels. Existing approaches provide methods for monitoring cyber interactions between devices but fail to consider possible physical interactions. To overcome this challenge, it is essential that security assessments of IoT networks take a holistic view of the network and treat it as a "system of systems", in which security is defined, not solely by the individual systems, but also by the interactions and trust dependencies between systems. In this paper, we propose a way of modeling cyber and physical interactions between IoT devices of a given network. By verifying the cyber and physical interactions against user-defined policies, our model can identify unexpected chains of events that may be harmful. It can also be applied to determine the impact of the addition (or removal) of a device into an existing network with respect to dangerous device interactions. We demonstrate the viability of our approach by instantiating our model using Alloy, a language and tool for relational models. In our evaluation, we considered three realistic IoT use cases and demonstrate that our model is capable of identifying potentially dangerous device interactions. We also measure the performance of our approach with respect to the CPU runtime and memory consumption of the Alloy model finder, and show that it is acceptable for smart-home IoT networks.
The Internet of Things (IoT) is an integral part of application domains such as smart homes and digital healthcare. Various standard public key cryptography techniques (e.g., key exchange, public key encryption, signatures) are available to provide fundamental security services for the IoT. However, despite their pervasiveness and well-proven security, they have also been shown to be highly energy-costly for embedded devices. Hence, it is a critical task to improve the energy efficiency of standard cryptographic services while simultaneously preserving their desirable properties. In this paper, we exploit synergies among various cryptographic primitives, with algorithmic optimizations, to substantially reduce the energy consumption of standard cryptographic techniques on embedded devices. Our contributions are: (i) We harness special precomputation techniques, which have not previously been considered for some important cryptographic standards, to boost the performance of key exchange, integrated encryption, and hybrid constructions. (ii) We provide self-certification for these techniques to push their performance to the edge. (iii) We implemented our techniques and their counterparts on an 8-bit AVR ATmega 2560 and evaluated their performance. We used the microECC library and implemented our schemes on the NIST-recommended secp192 curve, due to its standardization. Our experiments confirmed significant improvements in battery life (up to 7x) while preserving the desirable properties of the standard techniques. Moreover, to the best of our knowledge, we provide the first open-source framework including such a set of optimizations for low-end devices.
There is a long-standing need for improved cybersecurity through the automation of attack signature detection, classification, and response. In this paper, we present experimental test bed results from an implementation of autonomic control plane feedback based on the Observe, Orient, Decide, Act (OODA) framework. The test bed modeled the building blocks of a proposed zero trust cloud data center network. We present results of trials in which identity management with automated threat response and packet-based authentication was combined with dynamic management of eight distinct network trust levels. The log parsing and orchestration software we created works alongside open-source log management tools to coordinate and integrate threat response from firewalls, authentication gateways, and other network devices. Threat response times are measured and shown to be a significant improvement over conventional methods.
Recently, Jung et al. [1] proposed a data access privilege scheme and claimed that it addresses data and identity privacy as well as multi-authority settings, and provides data access privilege for attribute-based encryption. In this paper, we show that this scheme, as well as its earlier and latest versions ([2] and [3], respectively), suffers from a number of weaknesses in terms of fine-grained access control, user and authority collusion attacks, user authorization, and user anonymity protection. We then propose a new scheme that overcomes these shortcomings, and we prove its security against user collusion attacks, authority collusion attacks, and chosen plaintext attacks. Lastly, we show that the efficiency of our scheme is comparable with that of existing related schemes.
Multi-agent simulations are useful for exploring collective patterns of individual behavior in social, biological, economic, network, and physical systems. However, there is no provenance support for multi-agent models (MAMs) in a distributed setting. To this end, we introduce ProvMASS, a novel approach to capture provenance of MAMs in a distributed memory by combining inter-process identification, lightweight coordination of in-memory provenance storage, and adaptive provenance capture. ProvMASS is built on top of the Multi-Agent Spatial Simulation (MASS) library, a framework that combines multi-agent systems with large-scale fine-grained agent-based models, or MAMs. Unlike other environments supporting MAMs, MASS parallelizes simulations with distributed memory, where agents and spatial data are shared application resources. We evaluate our approach with provenance queries to support three use cases and performance measures. Initial results indicate that our approach can support various provenance queries for MAMs at reasonable performance overhead.