Bibliography
Modern cyber-physical systems are increasingly complex and vulnerable to attacks, such as false data injection, aimed at destabilizing and confusing the systems. We develop and evaluate an attack-detection framework that learns a dynamic invariant network: data-driven temporal causal relationships between components of cyber-physical systems. In this paper, we introduce the Granger Causality based Kalman Filter with Adaptive Robust Thresholding (G-KART) as a framework for anomaly detection based on data-driven functional relationships between components in cyber-physical systems, and we evaluate its attack-detection performance against traditional anomaly detection approaches. In particular, we select power systems as a critical infrastructure with complex cyber-physical systems whose protection is an essential facet of national security. The system is capable of learning, with or without network topology, the task of detecting false data injection attacks in power systems. Kalman filters are used to learn and update the dynamic state of each component in the power system and, in turn, monitor the component for malicious activity. The ego network of each node in the invariant graph is treated as an ensemble of Kalman filters, each of which captures a subset of the node's interactions with other parts of the network. Finally, we introduce an alerting mechanism to surface alerts about compromised nodes.
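To make the per-node monitoring concrete, here is a minimal sketch, not the authors' G-KART implementation: a scalar random-walk Kalman filter scores each measurement by its normalized innovation, and an adaptive robust threshold (median plus a multiple of the MAD over a trailing window, an illustrative choice) flags anomalies.

```python
import numpy as np

def innovation_scores(z, q=1e-4, r=1e-2):
    """Track a scalar signal with a random-walk Kalman filter and
    return each measurement's normalized innovation."""
    x, p = z[0], 1.0              # initial state estimate and variance
    scores = []
    for zk in z[1:]:
        p += q                    # predict step (random-walk model)
        s = p + r                 # innovation variance
        nu = zk - x               # innovation (measurement residual)
        k = p / s                 # Kalman gain
        x += k * nu               # update state estimate
        p *= (1 - k)
        scores.append(abs(nu) / np.sqrt(s))
    return np.array(scores)

def adaptive_flags(scores, window=50, k=4.0):
    """Flag scores exceeding a robust threshold computed over a
    trailing window (median + k * median absolute deviation)."""
    flags = []
    for i, s in enumerate(scores):
        hist = scores[max(0, i - window):i + 1]
        med = np.median(hist)
        mad = np.median(np.abs(hist - med)) + 1e-9
        flags.append(s > med + k * mad)
    return np.array(flags)
```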
Intellectual property (IP) and integrated circuit (IC) piracy are of increasing concern to IP/IC providers because of the globalization of IC design flows and supply chains. Such globalization is driven by the costs associated with the design, fabrication, and testing of integrated circuits, and it opens avenues for piracy. To protect designs against IC piracy, we propose a fingerprinting scheme based on side-channel power analysis and machine learning methods. The proposed method distinguishes ICs that realize a modified netlist with the same functionality, and it incurs no hardware overhead. We specifically focus on the ability to detect minimal design variations, as quantified by the number of logic gates changed. The accuracy of the proposed scheme is greater than 96 percent, and typically 99 percent, in detecting one or more gate-level netlist changes. Additionally, the effect of temperature is investigated as part of this work. Results show 95.4 percent accuracy in detecting the exact number of gate changes when the data and the classifier use the same temperature, whereas training at a different temperature drops accuracy to 33.6 percent. This shows the effectiveness of building temperature-dependent classifiers from simulations at known operating temperatures.
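As a hedged illustration of the classification step only (the power traces below are random stand-ins, not simulated netlists), a per-trace feature matrix can be fed to an off-the-shelf classifier to predict the number of changed gates:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 256))     # stand-in for measured power traces
y = rng.integers(0, 4, size=600)    # 0 = golden netlist, 1..3 = gates changed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))   # ~chance on random data
```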
Cloud computing plays a major role in the development of commercial systems: it enables companies such as Microsoft, Amazon, IBM, and Google to deliver their services on a large scale to their users. A cloud service provider (CSP) manages cloud-computing-based services and applications, operates within an organization, and therefore suffers from the vulnerabilities associated with that organization, including internal and external attacks. The challenge for the organization is to secure the CSP while maintaining quality of service. Attribute-based encryption can be used to provide data security via key-policy attribute-based encryption (KP-ABE) or ciphertext-policy attribute-based encryption (CP-ABE), but these schemes lack scalability and flexibility. A hierarchical CP-ABE scheme is proposed here to provide fine-grained access control. Data security is achieved using encryption, authentication, and authorization mechanisms, and attribute key generation is proposed to implement user authorization. The proposed system is also protected against SQL injection attacks.
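The access-structure semantics of CP-ABE can be sketched in a few lines. This toy policy evaluator only illustrates when decryption is authorized; a real CP-ABE scheme enforces the policy cryptographically with pairing-based constructions, and the attributes here are invented examples:

```python
# A ciphertext carries a policy tree; a user's key embeds a set of
# attributes. Decryption succeeds only if the attributes satisfy the policy.
def satisfies(policy, attrs):
    op, children = policy
    if op == "ATTR":
        return children in attrs
    hits = sum(satisfies(c, attrs) for c in children)
    return hits == len(children) if op == "AND" else hits >= 1

policy = ("AND", [("ATTR", "doctor"),
                  ("OR",  [("ATTR", "cardiology"), ("ATTR", "oncology")])])
print(satisfies(policy, {"doctor", "cardiology"}))  # True
print(satisfies(policy, {"nurse", "cardiology"}))   # False
```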
This paper proposes a distributed fixed-time secondary controller for DC microgrids (MGs) to overcome the drawbacks of conventional droop control. The controller, based on a distributed fixed-time control approach, removes the DC voltage deviation and provides proportional current sharing simultaneously within a fixed time. In contrast to a conventional centralized secondary controller, the proposed controller uses dynamic consensus, so each converter communicates only with its neighbors on a communication graph, which increases convergence speed and improves performance. The proposed control strategy is simulated in PLECS to test controller performance, link-failure resiliency, plug-and-play capability, and feasibility under different time delays.
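A minimal sketch of the consensus ingredient only, under simplifying assumptions (four converters on a ring, a lazy doubly stochastic averaging matrix, no fixed-time dynamics): each node agrees on a common voltage correction using neighbor-only exchanges.

```python
import numpy as np

# Lazy averaging over a 4-node ring; each converter sees only 2 neighbors.
W = np.array([[.5, .25, 0, .25],
              [.25, .5, .25, 0],
              [0, .25, .5, .25],
              [.25, 0, .25, .5]])
v_ref = 48.0
v = np.array([47.2, 47.5, 47.8, 47.4])   # voltages left by primary droop

e = v_ref - v                  # each node's local deviation estimate
for _ in range(100):
    e = W @ e                  # neighbor-only exchange (dynamic consensus)

print(np.round(e, 4))                # all nodes agree on the average deviation
print(np.round(np.mean(v + e), 3))   # common shift restores the average to 48.0
```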
In this study, inter-packet delays were sampled using different window sizes to detect data transmitted over covert timing channels in computer networks. Feature vectors were extracted from these delays, and several classification algorithms detected the hidden data with a high performance rate.
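A hedged sketch of the windowing idea (the delay distributions below are synthetic stand-ins; real covert channels and feature sets vary): statistics computed per window of inter-packet delays become feature vectors for a standard classifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(delays, w):
    """Compute one feature vector per window of w inter-packet delays."""
    feats = []
    for i in range(0, len(delays) - w + 1, w):
        win = np.asarray(delays[i:i + w])
        hist, _ = np.histogram(win, bins=8)
        p = hist[hist > 0] / hist.sum()
        feats.append([win.mean(), win.std(), win.min(), win.max(),
                      -np.sum(p * np.log2(p))])      # delay entropy
    return np.array(feats)

rng = np.random.default_rng(1)
legit = rng.exponential(0.05, 4000)                   # normal traffic
covert = rng.choice([0.03, 0.09], 4000)               # delays encoding bits
covert += rng.normal(0, 0.002, covert.shape)          # channel jitter
X = np.vstack([window_features(legit, 50), window_features(covert, 50)])
y = np.array([0] * 80 + [1] * 80)
print(cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean())
```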
Cyber-Physical Systems (CPS) now play important roles in critical infrastructure. A prominent family of CPSs is networked control systems, in which control and feedback signals are carried over computer networks such as the Internet. Communication over insecure networks makes such systems vulnerable to cyber attacks. In this article, we design an intrusion detection and compensation framework based on system/plant identification to fight covert attacks. We collect output-estimation error statistics during the learning phase of system operation and then monitor the system behavior to see whether it deviates significantly from the expected outputs. A compensating controller is further designed to intervene and replace the classic controller once an attack is detected. The proposed model is tested on a DC motor as the plant and is put against a deception signal amplification attack over the forward link. Simulation results show that the detection algorithm reliably detects the intrusion and that the compensator successfully alleviates the attack's effects.
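The detection logic can be sketched as follows, with a first-order toy plant standing in for the identified DC motor model (all coefficients are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def plant(u, x):               # toy first-order stand-in for the DC motor
    return 0.9 * x + 0.1 * u

# Learning phase: run the identified model alongside the plant and collect
# output-estimation error statistics under attack-free operation.
x = xhat = 0.0
errors = []
for k in range(500):
    u = np.sin(0.05 * k)                      # excitation input
    x = plant(u, x) + rng.normal(0, 0.01)     # real (noisy) plant output
    xhat = plant(u, xhat)                     # model-predicted output
    errors.append(x - xhat)
mu, sigma = np.mean(errors), np.std(errors)

# Monitoring phase: alarm when the estimation error deviates significantly
# from the learned statistics (a 4-sigma test, an illustrative choice).
def attacked(y_meas, y_pred):
    return abs((y_meas - y_pred) - mu) > 4 * sigma
```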
The smart grid is a complex cyber-physical system (CPS) that poses challenges related to scale, integration, interoperability, processes, governance, and human elements. The US National Institute of Standards and Technology (NIST) and its government, university, and industry collaborators developed an approach, called the CPS Framework, for reasoning about CPS across multiple levels of concern and competency, including trustworthiness, privacy, reliability, and regulatory compliance. The approach uses ontology and reasoning techniques to achieve a greater understanding of the interdependencies among the elements of the CPS Framework model applied to use cases. This paper demonstrates that the approach extends naturally to automated and manual decision-making for smart grids: we apply it to smart grid use cases and illustrate how it can be used to analyze grid topologies and address concerns about the smart grid. Smart grid stakeholders whose decision making may be assisted by this approach include planners, designers, and operators.
Development of an attack-resilient smart grid depends heavily on the availability of a representative environment, such as a Cyber Physical Security (CPS) testbed, to accelerate the transition of state-of-the-art research to industry deployment through experimental testing and validation. There is an ongoing initiative to develop an interconnected federated testbed to build advanced computing systems and integrated data-sharing networks. In this paper, we present a distributed simulation of a power system using a federated testbed in the context of Wide Area Monitoring System (WAMS) cyber-physical security. In particular, we apply the transmission line modeling (TLM) technique to split a first-order two-bus system into two subsystems, source and load, which run in geographically dispersed simulators while exchanging system variables over the Internet. We leverage the resources available at Iowa State University's PowerCyber Laboratory (ISU PCL) and the US Army Research Laboratory (US ARL) to perform the distributed simulation, emulate substation and control center networks, and implement a data integrity attack and physical disturbances targeting the WAMS application. Our experimental results report the computed wide-area network latency and model validation errors. We also discuss the high-level conceptual architecture, inspired by NASPInet, necessary for developing the CPS testbed federation.
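To illustrate the decoupling that TLM provides, here is a minimal Bergeron-style sketch of the split two-bus system (toy parameter values; in the real setup these variables are exchanged between geographically dispersed simulators): each side solves its own node equation using only the other side's voltage and current delayed by the line's travel time.

```python
import numpy as np

Vs, Rs, RL, Zc = 1.0, 1.0, 1.0, 1.0    # source, resistances, line impedance
steps, tau = 60, 1                      # line travel time = 1 exchange step
vk = np.zeros(steps); ik = np.zeros(steps)   # source-side bus
vm = np.zeros(steps); im = np.zeros(steps)   # load-side bus

for t in range(1, steps):
    # Each subsystem solves its own node equation using only the OTHER
    # side's voltage/current from t - tau (the variables exchanged online).
    hk = -vm[t - tau] / Zc - im[t - tau]
    vk[t] = (Vs / Rs - hk) / (1 / Rs + 1 / Zc)       # source subsystem
    ik[t] = vk[t] / Zc + hk
    hm = -vk[t - tau] / Zc - ik[t - tau]
    vm[t] = -hm / (1 / RL + 1 / Zc)                  # load subsystem
    im[t] = vm[t] / Zc + hm

print(vk[-1], vm[-1])   # both buses settle at 0.5 V, the analytic solution
```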
In the last few years, cryptocurrency mining has accounted for a growing share of Internet activity and is now having a noticeable impact on the global economy. This has motivated the emergence of a new malicious activity called cryptojacking, which consists of compromising other machines connected to the Internet and leveraging their resources to mine cryptocurrencies. In this context, it is of particular interest for network administrators to detect cryptocurrency miners using network resources without permission. Currently, miners can be detected using IP address lists from known mining pools, by processing information from DNS traffic, or by performing Deep Packet Inspection (DPI) over all traffic. However, these methods are ineffective against miners that use unknown mining servers, or are too expensive to deploy in real-world networks with large traffic volumes. In this paper, we present a machine learning-based method able to detect cryptocurrency miners using NetFlow/IPFIX network measurements. Our method does not require inspecting packet payloads; as a result, it achieves cost-efficient miner detection with accuracy similar to DPI-based techniques.
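A hedged sketch of the approach with synthetic flows (real NetFlow features and distributions will differ; one known regularity is that Stratum-based mining flows tend to be long-lived with small, regular packets):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Features per flow: duration (s), packet count, mean packet size (bytes).
benign = np.column_stack([rng.exponential(30, n), rng.poisson(50, n),
                          rng.normal(800, 200, n)])
miner = np.column_stack([rng.exponential(600, n), rng.poisson(400, n),
                         rng.normal(120, 30, n)])   # long flows, small packets
X = np.vstack([benign, miner])
y = np.array([0] * n + [1] * n)
print(cross_val_score(GradientBoostingClassifier(random_state=0), X, y, cv=5).mean())
```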
Cryptojacking is the permissionless use of a target device to covertly mine cryptocurrencies. In cryptojacking, attackers use malicious JavaScript code to force web browsers into solving proof-of-work puzzles, making money by exploiting the resources of website visitors. To understand and counter such attacks, we systematically analyze the static, dynamic, and economic aspects of in-browser cryptojacking. For static analysis, we perform content-, currency-, and code-based categorization of cryptojacking samples to 1) measure their distribution across websites, 2) highlight their platform affinities, and 3) study their code complexity. We apply unsupervised learning to distinguish cryptojacking scripts from benign and other malicious JavaScript samples with 96.4% accuracy. For dynamic analysis, we analyze the effect of cryptojacking on critical system resources, such as CPU and battery usage. Additionally, we perform web browser fingerprinting to analyze the information exchange between the victim node and the dropzone cryptojacking server. We also build an analytical model to empirically evaluate the feasibility of cryptojacking as an alternative to online advertisement. Our results show a large negative profit-and-loss gap, indicating that the model is economically impractical. Finally, by leveraging insights from our analyses, we build countermeasures for in-browser cryptojacking that improve upon existing remedies.
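The unsupervised step might look like the following sketch (features and distributions are assumptions for illustration; the paper's actual feature set is richer):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Toy per-script features: size (bytes), WebAssembly/worker keyword rate,
# and mean identifier entropy as a crude obfuscation proxy.
benign = np.column_stack([rng.normal(20e3, 5e3, 300),
                          np.clip(rng.normal(.001, .0005, 300), 0, 1),
                          rng.normal(3.5, .3, 300)])
mining = np.column_stack([rng.normal(80e3, 10e3, 60),
                          np.clip(rng.normal(.02, .005, 60), 0, 1),
                          rng.normal(4.8, .2, 60)])
X = StandardScaler().fit_transform(np.vstack([benign, mining]))
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))    # scripts split into two clusters
```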
With the rapid development of the Internet, the dark web has also come into wide use [1]. Because of its anonymity, many criminals commit crimes there, and it is difficult for law enforcement officials to track their identities using traditional network investigation techniques based on IP addresses [2]. Threat information comes mainly from dark web forums and dark web markets. In this paper, we introduce TOR, the current mainstream dark web communication system, and develop a visual dark web forum post association analysis system that graphically displays the relationships among forum messages and posters, helping law enforcement officers explore deep-level clues for analyzing crimes on the dark web.
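The core association structure can be sketched as a bipartite poster-thread graph whose projection links posters who participate in the same threads (toy data below; the real system ingests scraped TOR forum content):

```python
import networkx as nx

posts = [("alice", "thread1"), ("bob", "thread1"),
         ("bob", "thread2"), ("carol", "thread2")]
G = nx.Graph()
for user, thread in posts:
    G.add_node(user, kind="poster")
    G.add_node(thread, kind="thread")
    G.add_edge(user, thread)

# Posters who co-participate in a thread become linked via projection.
posters = [n for n, d in G.nodes(data=True) if d["kind"] == "poster"]
P = nx.bipartite.projected_graph(G, posters)
print(sorted(tuple(sorted(e)) for e in P.edges()))
# -> [('alice', 'bob'), ('bob', 'carol')]
```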
Cybercrime and cyber criminals make wide use of the dark web and its illegal functionality. More than half of criminal and terror activities are conducted through the dark web, including cryptocurrency transactions, the sale of human organs, red rooms, child pornography, arms deals, drug deals, assassins and hackers for hire, and hacking software and malware programs. Law enforcement agencies such as the FBI, NSA, Interpol, Mossad, and FSB continuously conduct surveillance programs through the dark web to trace criminals and terrorists and stop their crimes and terror activities. This paper is about dark web marketing and surveillance programs. It discusses how to access the dark web securely and how law enforcement agencies track down users exhibiting terror-related behaviors and activities. Moreover, the paper discusses dark web sites through which users can access jihadist services and anonymous markets, including safety precautions.
This paper studies the deletion propagation problem in terms of minimizing view side-effects. It is a problem fundamental to data lineage and quality management and can be a key step in analyzing view propagation and repairing data. The investigated problem is a variant of the standard deletion propagation problem: given a source database D, a set of key-preserving conjunctive queries Q, and the set of views V obtained by the queries in Q, we try to identify a set T of tuples from D whose elimination removes all the tuples in a given set of view deletions ΔV while preserving all other results. The complexity of this problem has been well studied for the case of a single query, with dichotomies, and even trichotomies, developed for different settings. However, no results were known for multiple queries, which is the more realistic case. We study the complexity and approximability of optimizing the side-effect on the views, i.e., finding T to minimize the additional damage to V after removing all the tuples of ΔV. We focus on the class of key-preserving conjunctive queries, for which a dichotomy exists in the single-query case. Surprisingly, beyond the single-query case, this problem is NP-hard to approximate within any constant factor in terms of view side-effect, even for a non-trivial set of multiple project-free conjunctive queries. The proposed algorithm shows that the problem can be approximated within a bound depending on the number of tuples in both V and ΔV. We identify a class of polynomially tractable inputs and provide a dynamic programming algorithm to solve them. Besides data lineage, the study of this problem also provides foundations for computational issues in data repairing. Furthermore, we introduce related applications of this problem, especially query-feedback-based data cleaning.
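A brute-force toy instance makes the objective concrete (exponential search, for illustration only, not the paper's algorithm): remove source tuples so that ΔV disappears from the view while counting collateral damage to the rest of the view.

```python
from itertools import combinations

R = {("a1", "b1"), ("a2", "b1"), ("a3", "b2")}
S = {("b1", "c1"), ("b2", "c2")}

def view(R, S):
    """Project R join S onto its first and last attributes."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

V = view(R, S)
dV = {("a1", "c1")}                     # view tuple the user wants deleted

D = [("R", t) for t in R] + [("S", t) for t in S]
best = None
for r in range(1, len(D) + 1):
    for T in combinations(D, r):
        keep = set(D) - set(T)
        R2 = {t for (rel, t) in keep if rel == "R"}
        S2 = {t for (rel, t) in keep if rel == "S"}
        V2 = view(R2, S2)
        if dV & V2:
            continue                    # deletion not fully propagated
        side = len((V - dV) - V2)       # other view tuples lost
        if best is None or (side, r) < best[:2]:
            best = (side, r, T)

print(best)  # deleting R('a1','b1') achieves side-effect 0 with |T| = 1
```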
The Scala programming language combines object-oriented and functional programming in one concise, high-level language, and it supports static types that help avoid bugs in complex programs. This paper proposes a dynamic taint analyzer called ScalaTaint for Scala applications. The analyzer traces the propagation of malicious inputs from untrusted sources to sensitive sink methods in programs that could be exploited by adversaries. In this work, we evaluated the accuracy of ScalaTaint on a security benchmark suite comprising 7 Scala projects. Our analyzer reported 49 vulnerabilities within 753,372 lines of code. Moreover, our performance measurement on ScalaBench shows a 67% runtime overhead, demonstrating the usefulness and efficiency of our technique in comparison with similar tools.
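The essence of dynamic taint tracking can be sketched in a few lines of Python (ScalaTaint itself instruments Scala programs; this stand-in only shows source-to-sink propagation):

```python
class Tainted(str):
    """Marks data from untrusted sources; the mark survives concatenation."""
    def __add__(self, other):
        return Tainted(str.__add__(self, other))
    def __radd__(self, other):
        return Tainted(str.__add__(other, self))

def source():                        # untrusted source, e.g. a request param
    return Tainted("'; DROP TABLE users;--")

def sink(query):                     # sensitive sink, e.g. SQL execution
    if isinstance(query, Tainted):
        raise RuntimeError("tainted data reached sink: " + str(query))

query = "SELECT * FROM t WHERE id = " + source()   # taint propagates
sink(query)                          # raises: vulnerability reported
```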
In order to protect individuals' privacy, data have to be "well-sanitized" before they are shared, i.e., any personal information has to be removed. However, it is not always clear when data can be deemed well-sanitized. In this paper, we argue that the evaluation of sanitized data should be based on whether the data allow the inference of sensitive information specific to an individual, rather than being centered on the concept of re-identification. We propose a framework to evaluate the effectiveness of different sanitization techniques on a given dataset by measuring how much an individual's record in the sanitized dataset influences the inference of his/her own sensitive attribute. Our intent is not to accurately predict any sensitive attribute, but rather to measure the impact of a single record on the inference of sensitive information. We demonstrate our approach by sanitizing two real datasets under different privacy models and evaluating/comparing each sanitized dataset in our framework.
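The influence measure can be sketched as a leave-one-out confidence gap, with a synthetic dataset standing in for sanitized data and logistic regression standing in for the attacker's inference model (both are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def record_influence(X, s, i):
    """Train an attacker model to predict the sensitive attribute s with
    and without record i; return the change in confidence on record i."""
    with_i = LogisticRegression(max_iter=1000).fit(X, s)
    mask = np.arange(len(X)) != i
    without_i = LogisticRegression(max_iter=1000).fit(X[mask], s[mask])
    p_with = with_i.predict_proba(X[i:i + 1])[0, s[i]]
    p_without = without_i.predict_proba(X[i:i + 1])[0, s[i]]
    return p_with - p_without   # large gap => record leaks its own attribute

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))       # stand-in for a sanitized dataset
s = (X[:, 0] > 0).astype(int)       # stand-in sensitive attribute
print(record_influence(X, s, 7))
```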
Sparse and low-rank matrix decomposition is a recently developed method for estimating the different components of hyperspectral data. The low-rank component preserves global data structures, while the sparse component selects discriminative information by preserving details. To take advantage of both, we present a novel decision fusion based on joint low-rank and sparse components (DFJLRS) method for hyperspectral imagery. First, we analyze the effects of the different components on classification results. Our method then adopts a decision fusion strategy that combines an SVM classifier with the information provided by the joint sparse and low-rank components. By combining their advantages, the proposed method is both representative and discriminative. The proposed algorithm is evaluated on several hyperspectral images and compared with traditional counterparts.
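A simplified decomposition sketch (naive alternating thresholding, not the exact algorithm used in the paper) separates a low-rank term from sparse outliers:

```python
import numpy as np

def low_rank_sparse(M, lam=0.1, n_iter=50):
    """Split M into L (low-rank) + S (sparse) by alternating singular-value
    and entrywise soft thresholding (a simplified RPCA-style scheme)."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # low-rank step: singular value thresholding on M - S
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(sig - lam, 0)) @ Vt
        # sparse step: entrywise soft thresholding on M - L
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0)
    return L, S

rng = np.random.default_rng(0)
M = rng.normal(size=(60, 3)) @ rng.normal(size=(3, 40))   # rank-3 base
M[rng.random(M.shape) < 0.05] += 5.0                      # sparse outliers
L, S = low_rank_sparse(M)
print(np.linalg.matrix_rank(L, tol=1e-6), (np.abs(S) > 1e-6).mean())
```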
Deep packet inspection via regular expression (RE) matching is a crucial task of network intrusion detection systems (IDSes), which secure Internet connections against attacks and suspicious network traffic. Monitoring high-speed computer networks (100 Gbps and faster) in a single-box solution demands that the RE matching, traditionally based on finite automata (FAs), be accelerated in hardware. In this paper, we describe a novel FPGA architecture for RE matching that is able to process network traffic beyond 100 Gbps. The key idea is to reduce the required FPGA resources by leveraging approximate nondeterministic FAs (NFAs). The NFAs are compiled into a multi-stage architecture, starting with the least precise stage with the highest throughput and ending with the most precise stage with the lowest throughput. To obtain the reduced NFAs, we propose new approximate reduction techniques that take the profile of the network traffic into account. Our experiments show that this approach enables matching large sets of REs from SNORT, a popular IDS, at unprecedented network speeds.
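The staging principle can be imitated in software: a cheap over-approximate prefilter that never produces false negatives discards most traffic, and the exact REs run only on what remains. A minimal sketch (rules and literals chosen for illustration):

```python
import re

# Stage 1: substring checks on literal fragments of the rules act as a
# fast over-approximation. Stage 2: exact REs run on the small residue.
rules = [r"GET /admin/.*\.php", r"User-Agent: (sqlmap|nikto)"]
compiled = [re.compile(r) for r in rules]
prefilter = ["/admin/", "sqlmap", "nikto"]   # literals pulled from the rules

def match(payload: str) -> bool:
    if not any(lit in payload for lit in prefilter):   # fast, approximate
        return False                                   # definite non-match
    return any(r.search(payload) for r in compiled)    # slow, exact

print(match("GET /admin/index.php HTTP/1.1"))   # True
print(match("GET /home HTTP/1.1"))              # False (rejected by stage 1)
```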
Virtual platforms provide a full hardware/software platform for studying device limitations at early stages of the design flow and for developing software without requiring a physical implementation. This paper describes the development process of a virtual platform for Deep Packet Inspection (DPI) hardware accelerators using Transaction Level Modeling (TLM). We propose two DPI architectures oriented to System-on-Chip FPGAs. The first, a CPU-DMA based architecture, is a hybrid CPU/FPGA design in which packets are filtered in the software domain. The second, a Hardware-IP based architecture, is implemented mainly in the hardware domain. We create virtual platforms for both and perform simulation, debugging, and analysis of the hardware/software features in order to compare the two architectures.
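As a language-neutral sketch of transaction-level timing (the paper itself uses SystemC/TLM; all costs below are invented for illustration), a transaction accumulates modeled delay per processing step instead of simulating bus signals cycle by cycle:

```python
class Transaction:
    """A TLM-style transaction: payload plus accumulated modeled delay."""
    def __init__(self, packet: bytes):
        self.packet, self.delay_ns = packet, 0

def cpu_dma_path(tx):        # CPU-DMA architecture: copy, then filter in SW
    tx.delay_ns += 40 + len(tx.packet) // 4     # illustrative DMA cost model
    return b"\xde\xad" in tx.packet             # software inspection rule

def hw_ip_path(tx):          # Hardware-IP architecture: filter in the fabric
    tx.delay_ns += 12                           # fixed pipeline latency
    return b"\xde\xad" in tx.packet             # same rule, hardware-side

for path in (cpu_dma_path, hw_ip_path):
    tx = Transaction(b"...payload...\xde\xad...")
    print(path.__name__, path(tx), tx.delay_ns, "ns modeled")
```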
In the past few years, visual information collection and transmission has increased significantly for various applications. Smart vehicles, service robotic platforms, and surveillance cameras for smart city applications collect large amounts of visual data. Preserving the privacy of the people who appear in these data is an important factor in the storage, processing, sharing, and transmission of visual data across the Internet of Robotic Things (IoRT). In this paper, a novel anonymisation method for information security and privacy preservation of visual data in the sharing layer of the Web of Robotic Things (WoRT) is proposed. The proposed framework uses deep neural network based semantic segmentation to preserve privacy in video data based on the access level of applications and users. The data is anonymised for applications with lower-level access, while applications with a higher legal access level can analyze and annotate the complete data. The experimental results show that the proposed method, while giving authorities the access required for legal smart city surveillance, is capable of preserving the privacy of the people appearing in the data.
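A minimal sketch of the access-conditioned anonymisation step (OpenCV for blurring; the segmentation mask here is a synthetic box, whereas the paper obtains it from a DNN segmentation model):

```python
import numpy as np
import cv2

def anonymise(frame, person_mask, access_level, required_level=2):
    """Blur pixels labeled as 'person' unless the requesting application's
    access level clears the legal threshold."""
    if access_level >= required_level:
        return frame                             # full data for authorized use
    blurred = cv2.GaussianBlur(frame, (51, 51), 0)
    out = frame.copy()
    out[person_mask] = blurred[person_mask]      # replace only person pixels
    return out

# Stand-in inputs: a random frame and a box-shaped "person" mask.
frame = np.random.randint(0, 255, (240, 320, 3), np.uint8)
mask = np.zeros((240, 320), bool)
mask[60:200, 100:180] = True
safe = anonymise(frame, mask, access_level=1)    # low access -> blurred people
```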