Biblio
Though the deep penetration of cyber systems across smart grid sub-domains enriches the operation of wide-area protection, control, and other smart grid applications, the stochastic nature of adversarial cyber-attacks degrades their performance and the system's operation. Various hardware-in-the-loop (HIL) cyber-physical system (CPS) testbeds have attempted to evaluate cyberattack dynamics and power system perturbations for robust wide-area protection algorithms. However, physical resource constraints and modular integration designs have been significant barriers to modeling large-scale grid models (scalability) and have limited many CPS testbeds to either small-scale HIL environments or purely simulated environments. This paper proposes a meticulous design and efficient modeling of IEC-61850 logical nodes in physical relays to simulate large-scale grid models in a HIL real-time digital simulator environment integrated with industry-grade hardware and software systems for wide-area power system applications. The proposed design includes multi-breaker emulation in the physical relays, which extends the capacity of a physical relay to accommodate more CPS interfaces in the HIL CPS security testbed environment. We used our existing HIL CPS security testbed to demonstrate scalability through the real-time performance of ten simultaneous IEEE 39-bus CPS grid models. The experiments achieved 100% real-time performance with zero overruns and low latency while receiving and executing control signals from physical SEL relays, via the IEC-61850 and DNP-3 protocols, to the real-time digital simulator, substation remote terminal unit (RTU) software, and supervisory control and data acquisition (SCADA) software at the control center.
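To make the multi-breaker emulation idea concrete, the sketch below models one physical relay hosting several emulated breakers, each loosely patterned on the IEC-61850 XCBR logical node. All class and signal names are hypothetical illustrations, not the paper's implementation; a real deployment would exchange these commands over an IEC-61850 (GOOSE/MMS) or DNP-3 protocol stack rather than in-process method calls.

```python
class EmulatedBreaker:
    """Minimal stand-in for an IEC-61850 XCBR logical node."""
    def __init__(self, name: str):
        self.name = name
        self.closed = True          # position status: True = closed, False = open

    def operate(self, close: bool) -> None:
        self.closed = close


class MultiBreakerRelay:
    """One physical relay emulating N breakers, so a single hardware device
    can serve N breaker interfaces of a large simulated grid model."""
    def __init__(self, relay_id: str, n_breakers: int):
        self.relay_id = relay_id
        self.breakers = {f"XCBR{i}": EmulatedBreaker(f"XCBR{i}")
                         for i in range(1, n_breakers + 1)}

    def trip(self, breaker: str) -> None:
        self.breakers[breaker].operate(close=False)

    def close(self, breaker: str) -> None:
        self.breakers[breaker].operate(close=True)


relay = MultiBreakerRelay("SEL-1", n_breakers=4)   # 4 CPS interfaces on one relay
relay.trip("XCBR2")
print({name: br.closed for name, br in relay.breakers.items()})
```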
We consider the problem of protecting cloud services from simultaneous white-box and black-box attacks. Recent research in cryptographic program obfuscation considers the problem of protecting the confidentiality of programs and any secrets in them. In this model, a provable program obfuscation solution makes white-box attacks on the program no more useful than black-box attacks. Motivated by very recent results showing successful black-box attacks on machine learning programs run by cloud servers, we propose and study the approach of augmenting the program obfuscation solution model so as to achieve, in at least some class of application scenarios, program confidentiality in the presence of both white-box and black-box attacks. We propose and formally define encrypted-input program obfuscation, where a key is shared between the entity obfuscating the program and the entity encrypting the program's inputs. We believe this model might be of interest in practical scenarios where cloud programs operate over encrypted data received by associated sensors (e.g., Internet of Things, Smart Grid). Under standard intractability assumptions, we show various results that are not known in the traditional cryptographic program obfuscation model; most notably: Yao's garbled circuit technique implies encrypted-input program obfuscation hiding all gates of an arbitrary polynomial-size circuit; and very efficient encrypted-input program obfuscation for range membership programs and a class of machine learning programs (i.e., decision trees). The performance of the latter solutions has only a small constant overhead over the equivalent unobfuscated program.
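The following toy sketch illustrates the encrypted-input model on a range membership program over small integers. It is not the paper's construction: here the obfuscated program is simply the set of PRF values of every in-range point (so it is only practical for small ranges), with the PRF key shared between the obfuscator and the input-encrypting sensors.

```python
# Toy encrypted-input obfuscation of "is x in [a, b]?" for small integer ranges.
import hmac, hashlib, os

def prf(key: bytes, x: int) -> bytes:
    """HMAC-SHA256 as a pseudorandom function on 64-bit integers."""
    return hmac.new(key, x.to_bytes(8, "big"), hashlib.sha256).digest()

def obfuscate_range(key: bytes, a: int, b: int) -> frozenset:
    """Obfuscated program: reveals only PRF images of the in-range points."""
    return frozenset(prf(key, x) for x in range(a, b + 1))

def encrypt_input(key: bytes, x: int) -> bytes:
    """Sensor-side encryption of the input under the shared key."""
    return prf(key, x)

key = os.urandom(32)                    # shared between obfuscator and sensors
obf = obfuscate_range(key, 100, 200)    # a white-box adversary sees only 'obf'
print(encrypt_input(key, 150) in obf)   # True
print(encrypt_input(key, 250) in obf)   # False
```

Note the trade-off this toy makes visible: deterministic input encryption lets anyone holding the obfuscated set detect repeated queries, which is one reason efficient constructions like the paper's require more care.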
This study reviews the development of shared (community) solar and community choice aggregation in the U.S. states of California and New York. Both states are leaders in energy-transition policy in the U.S., but they have different trajectories for the two forms of energy decentralization. Shared solar is more advanced in New York, but community choice is more advanced in California. Using a field theory framework, the comparative review of the trajectories of energy decentralization shows how differences in restructuring and regulatory rules affect outcomes. Differences in the rules for retail competition and authority for utilities to own distributed generation assets, plus the role of civil society and the attention from elected officials, shape the intensity of conflict and outcomes. They also contribute to the development of different types of community choice in the two states. In addition to showing how institutional conditions associated with different types of restructured markets shape the opportunities for decentralized energy, the study also examines how the efforts of actors to gain support for and to legitimate their policy preferences involve reference to broad social values.
Artificial intelligence systems have enabled significant benefits for users and society, but while the data that feed them keep growing, so does the exposure to privacy and security leaks. Severe vulnerabilities of the right to privacy have obliged governments to enact specific regulations to ensure privacy preservation in any kind of transaction involving sensitive information. In the case of digital and/or physical documents containing sensitive information, the right to privacy can be preserved by data obfuscation procedures. The capability of recognizing sensitive information for obfuscation is typically entrusted to the experience of human experts, who are overwhelmed by the ever-increasing number of documents to process. Artificial intelligence could substantially mitigate the effort of human officers and speed up processes. However, until enough knowledge is available in a machine-readable format, effective automatic systems cannot be developed. In this work we propose a methodology for transferring and leveraging general knowledge across domain-specific tasks. We built, from scratch, domain-specific knowledge data sets for training artificial intelligence models that support human experts in privacy-preserving tasks. We exploited a mixture of natural language processing techniques applied to unlabeled domain-specific document corpora to automatically obtain labeled documents in which sensitive information is recognized and tagged. We performed preliminary tests on just over 10,000 documents from the healthcare and justice domains, with human experts supporting the validation. The results, estimated in terms of precision, recall, and F1-score across these two domains, were promising and encouraged further investigation.
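A minimal sketch of the bootstrapping idea follows: simple patterns auto-label candidate sensitive spans in unlabeled text, producing tagged documents that can seed a trainable tagger. The patterns and labels here are illustrative assumptions; the paper combines several NLP techniques and domain-specific resources rather than plain regular expressions.

```python
import re

# Illustrative patterns for a few common classes of sensitive information.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\+?\d[\d ()-]{7,}\d\b"),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def auto_label(text: str):
    """Return (start, end, label) spans of candidate sensitive information."""
    spans = []
    for label, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            spans.append((m.start(), m.end(), label))
    return sorted(spans)

doc = "Patient seen on 12/03/2021, contact j.doe@example.com or +39 055 1234567."
print(auto_label(doc))
```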
Code randomization is considered the basis of mitigations against code reuse attacks, fundamentally supporting recent proposals such as execute-only memory (XOM) that target dynamic return-oriented programming (ROP) attacks. However, existing code randomization methods struggle to achieve a good balance between high randomization entropy and semantic consistency. In particular, they often ignore code semantic consistency, incurring performance loss and incompatibility with current security schemes, e.g., control flow integrity (CFI). In this paper, we present an enhanced code randomization method termed HCRESC, which improves randomization entropy significantly while ensuring semantic consistency between variants and the original code. HCRESC reschedules instructions within the scope of functions rather than basic blocks, thus producing more variants of the original code while preserving its semantics. We implement HCRESC on the Linux platform for the x86-64 architecture and demonstrate that it can increase the randomization entropy of x86-64 code by more than 120% compared with existing methods while leaving the code's control flow and size unaltered.
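The core of semantics-preserving instruction rescheduling can be sketched as follows: instructions carry read/write sets, and any order that respects read-after-write, write-after-read, and write-after-write dependences is a valid variant. This is a simplified model under assumed data structures, not HCRESC itself, which operates on real machine code.

```python
import random

def depends(earlier, later) -> bool:
    """True if 'later' must stay after 'earlier' (RAW, WAR, or WAW)."""
    return bool(earlier["writes"] & (later["reads"] | later["writes"])
                or earlier["reads"] & later["writes"])

def random_valid_schedule(instrs, seed=None):
    """Pick a random instruction order that preserves all dependences."""
    rng = random.Random(seed)
    remaining = list(range(len(instrs)))
    order = []
    while remaining:
        # ready: instructions with no unscheduled predecessor they depend on
        ready = [i for i in remaining
                 if not any(depends(instrs[j], instrs[i])
                            for j in remaining if j < i)]
        pick = rng.choice(ready)
        order.append(pick)
        remaining.remove(pick)
    return [instrs[i] for i in order]

code = [
    {"text": "mov rax, [x]", "reads": {"x"},   "writes": {"rax"}},
    {"text": "add rax, 1",   "reads": {"rax"}, "writes": {"rax"}},
    {"text": "mov rbx, [y]", "reads": {"y"},   "writes": {"rbx"}},
    {"text": "mov [z], rbx", "reads": {"rbx"}, "writes": {"z"}},
]
print([i["text"] for i in random_valid_schedule(code, seed=7)])
```

Rescheduling across a whole function rather than within single basic blocks enlarges the set of valid orders, which is the source of the higher randomization entropy claimed above.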
Experimentation focused on assessing the value of complex visualisation approaches when compared with alternative methods for data analysis is challenging. The interaction between participants' prior knowledge and experience, a diverse range of experimental or real-world data sets, and dynamic interaction with the display system presents challenges when seeking timely, affordable, and statistically relevant experimentation results. This paper outlines a hybrid approach proposed for experimentation with complex interactive data analysis tools, specifically for computer network traffic analysis. The approach involves a structured survey completed after free engagement with the software platform by expert participants. The survey captures objective and subjective data points relating to the experience, with the goal of making an assessment of software performance that is supported by statistically significant experimental results. This work is particularly applicable to the field of network analysis for cyber security, as well as to military cyber operations and intelligence data analysis.
Experts often design security and privacy technology with specific use cases and threat models in mind. In practice, however, end users are not aware of these threats and potential countermeasures. Furthermore, misconceptions about the benefits and limitations of security and privacy technology inhibit large-scale adoption by end users. In this paper, we address this challenge and contribute a qualitative study on end users' and security experts' perceptions of threat models and potential countermeasures. We follow an inductive research approach to explore the perceptions and mental models of both security experts and end users. We conducted semi-structured interviews with 8 security experts and 13 end users. Our results suggest that, in contrast to security experts, end users neglect acquaintances and friends as attackers in their threat models. Our findings highlight that experts value technical countermeasures, whereas end users try to implement trust-based defensive methods.
Recent advances in machine learning enable wider applications of prediction models in cyber-physical systems. Smart grids increasingly use distributed sensor settings for distributed sensor fusion and information processing. Load forecasting systems use these sensors to predict future loads, which feed into dynamic pricing of power and grid maintenance. However, these inference predictors are highly complex and thus vulnerable to adversarial attacks: synthetic, norm-bounded modifications to a limited number of sensors can greatly affect the accuracy of the overall predictor. It can be much cheaper and more effective to incorporate elements of security and resilience at the earliest stages of design. In this paper, we demonstrate how to analyze the security and resilience of learning-based prediction models in power distribution networks by utilizing a domain-specific deep-learning and testing framework. This framework is developed using DeepForge and enables rapid design and analysis of attack scenarios against distributed smart meters in a power distribution network; attack simulations run in the cloud backend. In addition to the predictor model, we have integrated an anomaly detector to detect adversarial attacks targeting the predictor. We formulate stealthy adversarial attacks as an optimization problem that maximizes prediction loss while minimizing the required perturbations. Under the worst-case setting, where the attacker has full knowledge of both the predictor and the detector, an iterative attack method is developed to solve for the adversarial perturbation. We demonstrate the framework's capabilities using a GridLAB-D based power distribution network model and show how stealthy adversarial attacks can affect smart grid prediction systems even with only partial control of the network.
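The attack formulation can be illustrated with projected gradient ascent on a linear load predictor: maximize the prediction shift subject to an L-infinity bound on perturbations of a chosen sensor subset. The paper's predictor and detector are deep models; a linear predictor is assumed here only to keep the gradient analytic and the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors = 20
w = rng.normal(size=n_sensors)            # stand-in for a trained predictor
x = rng.normal(size=n_sensors)            # clean sensor readings

attacked = np.zeros(n_sensors)
attacked[:5] = 1.0                        # attacker controls only 5 sensors
eps, step, iters = 0.1, 0.02, 50          # per-sensor L-infinity budget

delta = np.zeros(n_sensors)
for _ in range(iters):
    # loss = (w.(x+delta) - w.x)^2, so grad w.r.t. delta is 2 * err * w
    err = w @ delta
    grad = 2.0 * err * w
    if err == 0.0:                        # kick off ascent from the origin
        grad = w.copy()
    delta += step * np.sign(grad) * attacked
    delta = np.clip(delta, -eps, eps)     # project onto the L-infinity ball

print("prediction shift:", w @ delta)
```

In the worst-case setting described above, the same loop would additionally penalize the anomaly detector's score so the perturbation stays stealthy.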
Many self-adaptive systems benefit from human involvement and oversight, where a human operator can provide expertise not available to the system and can detect problems that the system is unaware of. One way of achieving this is by placing the human operator on the loop – i.e., providing supervisory oversight and intervening in the case of questionable adaptation decisions. To make such interaction effective, explanation is sometimes helpful to allow the human to understand why the system is making certain decisions and calibrate confidence from the human perspective. However, explanations come with costs in terms of delayed actions and the possibility that a human may make a bad judgement. Hence, it is not always obvious whether explanations will improve overall utility and, if so, what kinds of explanation to provide to the operator. In this work, we define a formal framework for reasoning about explanations of adaptive system behaviors and the conditions under which they are warranted. Specifically, we characterize explanations in terms of explanation content, effect, and cost. We then present a dynamic adaptation approach that leverages a probabilistic reasoning technique to determine when the explanation should be used in order to improve overall system utility.
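The decision rule implied by this abstract can be sketched as an expected-utility comparison: explain only when the improvement in operator judgement outweighs the cost of the delayed action. The numbers and function names below are illustrative assumptions; the paper's framework uses a formal probabilistic reasoning technique rather than this hand-set comparison.

```python
def expected_utility(p_good_judgement: float, u_good: float, u_bad: float,
                     delay_cost: float = 0.0) -> float:
    """Expected utility of an operator decision, minus any delay cost."""
    return p_good_judgement * u_good + (1 - p_good_judgement) * u_bad - delay_cost

# Without an explanation the operator decides faster but less reliably.
u_no_expl = expected_utility(p_good_judgement=0.6, u_good=10, u_bad=-5)

# With an explanation, judgement improves but the action is delayed.
u_expl = expected_utility(p_good_judgement=0.9, u_good=10, u_bad=-5,
                          delay_cost=2.0)

print("explain" if u_expl > u_no_expl else "act without explanation")
```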
The paper considers an expert system that assesses the state of information security in government authorities and in organizations of various forms of ownership. The proposed expert system makes it possible to evaluate compliance with the requirements of both organizational and technical information protection measures, as well as the level of compliance of the information protection system as a whole. The expert assessment method is used as the basic method for assessing the state of information protection. The developed expert system significantly reduces routine operations during an information security audit. The results of the assessment are presented clearly and enable the leadership of authorities and organizations to make informed decisions on further improving the information protection system.
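A plausible scoring core for such an expert system is a weighted aggregation of per-requirement compliance, as sketched below. The requirement names, weights, and scores are hypothetical; the paper's system is built on expert assessment rather than this fixed table.

```python
requirements = [
    # (requirement, expert-assigned weight, auditor's compliance score in [0, 1])
    ("Access control policy documented",     0.3, 1.0),
    ("Cryptographic protection of channels", 0.4, 0.5),
    ("Personnel security training",          0.3, 0.0),
]

total_weight = sum(w for _, w, _ in requirements)
score = sum(w * s for _, w, s in requirements) / total_weight
print(f"overall compliance: {score:.0%}")   # 50%
```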
Vehicular ad hoc networks (VANETs) are popular because they can reduce traffic congestion and road accidents. These networks must satisfy several security requirements, such as anonymity, data authentication, confidentiality, traceability and revocation of offending users, unlinkability, integrity, non-repudiation, and access control. Authentication of the data and of the sender is among the most important security requirements in these networks, and many authentication schemes have been proposed to date. One well-known technique for providing user authentication in these networks is authentication based on a smartcard (ASC). In this paper, we propose an ASC scheme that not only provides necessary security requirements such as anonymity, traceability, and unlinkability in VANETs but is also more efficient than the other schemes in the literature.
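A toy challenge-response exchange conveys the flavor of smartcard-based authentication with a per-session pseudonym for anonymity. This is only an illustration of the ASC idea under assumed roles (smartcard, roadside unit); the paper's scheme additionally provides traceability, unlinkability guarantees, and revocation.

```python
import hmac, hashlib, os

def mac(key: bytes, *parts: bytes) -> bytes:
    """HMAC-SHA256 over the concatenated message parts."""
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

card_key = os.urandom(32)     # stored in the smartcard, shared with the verifier

# Roadside unit issues a fresh challenge.
challenge = os.urandom(16)

# Vehicle responds with a pseudonym and a MAC binding pseudonym and challenge.
pseudonym = os.urandom(8)     # fresh per session, so sessions are unlinkable
response = mac(card_key, pseudonym, challenge)

# Verifier recomputes the MAC with the shared key.
assert hmac.compare_digest(response, mac(card_key, pseudonym, challenge))
print("vehicle authenticated under pseudonym", pseudonym.hex())
```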
To the best of our knowledge, the p-sensitive k-anonymity model is a sophisticated model for resisting linking attacks and homogeneity attacks in data publishing. However, if the distribution of sensitive values is skewed, the model has difficulty defending against skew attacks and may even face sensitive attacks. In practice, the privacy requirements of different sensitive values are not always identical, and a "one size fits all" uniform privacy protection level may cause unnecessary information loss. To address these problems, this paper quantifies privacy requirements with the concept of IDF and focuses on sensitive groups. Two enhanced anonymity models with personalized protection, namely the (p, α_isg)-sensitive k-anonymity model and the (p_i, α_isg)-sensitive k-anonymity model, are then proposed to resist skew attacks and sensitive attacks. Furthermore, two clustering algorithms with global search and local search are designed to implement our models. Experimental results show that the two enhanced models achieve better privacy at the expense of only a small loss in data utility.
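The baseline model the paper builds on is easy to check: a published table is p-sensitive k-anonymous if every group of records sharing quasi-identifier values has at least k records and at least p distinct sensitive values. The sketch below verifies this; the table is illustrative, and the paper's enhanced (p, α_isg) variants additionally constrain sensitive groups per class.

```python
from collections import defaultdict

def is_p_sensitive_k_anonymous(records, qi_keys, sensitive_key, k, p):
    """Group records by quasi-identifiers; check size >= k and >= p
    distinct sensitive values in every equivalence class."""
    classes = defaultdict(list)
    for r in records:
        classes[tuple(r[q] for q in qi_keys)].append(r[sensitive_key])
    return all(len(vals) >= k and len(set(vals)) >= p
               for vals in classes.values())

table = [
    {"zip": "130**", "age": "30-39", "disease": "flu"},
    {"zip": "130**", "age": "30-39", "disease": "cancer"},
    {"zip": "130**", "age": "30-39", "disease": "flu"},
    {"zip": "148**", "age": "20-29", "disease": "hepatitis"},
    {"zip": "148**", "age": "20-29", "disease": "cancer"},
    {"zip": "148**", "age": "20-29", "disease": "flu"},
]
print(is_p_sensitive_k_anonymous(table, ["zip", "age"], "disease", k=3, p=2))
```

Note how a class with three "flu" records would pass plain 3-anonymity but fail the p=2 sensitivity requirement, which is exactly the homogeneity gap the model closes.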
This paper describes a novel distributed mobility management (DMM) scheme for the "named-object" information-centric network (ICN) architecture, in which routers forward data based on unique identifiers that are dynamically mapped to the current network addresses of a device. The work proposes and evaluates two specific handover schemes, namely hard handoff with rebinding and soft handoff with multihoming, intended to provide seamless data transfer with improved throughput during handovers. The proposed handover schemes are evaluated using system simulation along with a proof-of-concept implementation in the ORBIT testbed. The proposed handoff and scheduling throughput gains are 12.5% and 44%, respectively, over multiple interfaces when compared to a traditional IP network with an equal-share split scheme. The handover performance with respect to RTT and throughput demonstrates the benefits of a clean-slate network architecture for beyond-5G networks.
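The name-to-address rebinding at the heart of both handover schemes can be sketched as follows: routers forward on a persistent identifier, and mobility only updates the identifier's current address set. With soft handoff (multihoming) both addresses coexist briefly; with hard handoff the old one is replaced. The structure and names below are illustrative assumptions, not the actual protocol.

```python
name_resolution = {}   # persistent identifier -> set of current network addresses

def hard_handoff(guid: str, new_addr: str) -> None:
    """Rebind: the old address is dropped atomically."""
    name_resolution[guid] = {new_addr}

def soft_handoff_attach(guid: str, new_addr: str) -> None:
    """Multihoming: the device is briefly reachable on both networks."""
    name_resolution.setdefault(guid, set()).add(new_addr)

def soft_handoff_detach(guid: str, old_addr: str) -> None:
    name_resolution[guid].discard(old_addr)

name_resolution["device-42"] = {"net-A:10.0.0.7"}
soft_handoff_attach("device-42", "net-B:10.1.0.3")   # traffic can split here
soft_handoff_detach("device-42", "net-A:10.0.0.7")   # old link torn down
print(name_resolution["device-42"])
```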
Analyzing multi-dimensional geospatial data is difficult, and immersive analytics systems are used to visualize geospatial data and models. There is little previous work evaluating when immersive and non-immersive visualizations are most suitable for data analysis, and more research is needed.