Biblio
Much research on the security of cyber physical systems focuses on threat models in which an attacker spoofs sensor readings by compromising the communication channel. Little attention is given to attacks on physical components. In this paper a method to detect potential attacks on physical components in a Cyber Physical System (CPS) is proposed. Physical attacks are detected through a comparison of the noise pattern in sensor measurements against a reference noise pattern. If an adversary has physically modified or replaced a sensor, the proposed method issues an alert indicating that the sensor is probably compromised or defective. A reference noise pattern is established from the sensor data using a deterministic model. This pattern is referred to as a fingerprint of the corresponding sensor. The fingerprint so derived is used as a reference to identify measured data during the operation of a CPS. Extensive experimentation with ultrasonic level sensors in a realistic water treatment testbed points to the effectiveness of the proposed fingerprinting method in detecting physical attacks.
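A minimal sketch of the fingerprinting idea follows, assuming the fingerprint is a small vector of noise statistics compared against a reference by a distance threshold; the feature set, window size, and threshold are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def noise_features(readings):
    """Extract simple noise statistics from a window of sensor readings.

    Noise is estimated as the residual after removing a moving-average
    trend; these particular features are illustrative choices."""
    trend = np.convolve(readings, np.ones(5) / 5, mode="same")
    noise = readings - trend
    return np.array([noise.std(), np.abs(noise).mean(),
                     ((noise[:-1] * noise[1:]) < 0).mean()])  # zero-crossing rate

def is_compromised(reference, readings, threshold=0.5):
    """Flag the sensor if its noise features drift too far from the
    fingerprint (the threshold is illustrative)."""
    return np.linalg.norm(noise_features(readings) - reference) > threshold

# Build the reference fingerprint from trusted data, then monitor live windows.
trusted = np.random.normal(50.0, 0.2, 1000)   # stand-in for clean sensor data
reference = noise_features(trusted)
live = np.random.normal(50.0, 0.8, 1000)      # e.g. a replaced sensor
print(is_compromised(reference, live))
```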
In this paper, we propose a theoretical framework to investigate eavesdropping behavior in underwater acoustic sensor networks. In particular, we quantify eavesdropping activity by the eavesdropping probability. Our derived results show that the eavesdropping probability heavily depends on the acoustic signal frequency, underwater acoustic channel characteristics (such as the spreading factor and wind speed) and the type of hydrophone (such as isotropic hydrophones and array hydrophones). Simulation results further validate the effectiveness and accuracy of our proposed model.
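The paper's probability expression is not reproduced here, but as a rough illustration of how signal frequency and the spreading factor enter such a model, the sketch below combines Thorp's empirical absorption formula with the standard transmission-loss model; ambient noise (where wind speed enters) and hydrophone directivity are omitted.

```python
import math

def thorp_absorption_db_per_km(f_khz):
    """Thorp's empirical absorption coefficient (dB/km), f in kHz."""
    f2 = f_khz ** 2
    return (0.11 * f2 / (1 + f2) + 44 * f2 / (4100 + f2)
            + 2.75e-4 * f2 + 0.003)

def transmission_loss_db(d_m, f_khz, k=1.5):
    """TL = k*10*log10(d) + alpha(f)*d/1000, with spreading factor k
    (1 = cylindrical, 2 = spherical, 1.5 = practical)."""
    return k * 10 * math.log10(d_m) + thorp_absorption_db_per_km(f_khz) * d_m / 1e3

# Received level at an eavesdropping hydrophone: higher frequency or a
# larger spreading factor shrinks the range at which the signal remains
# detectable (the source level is an illustrative value).
source_level_db = 150.0
for d in (100, 1_000, 10_000):
    print(d, source_level_db - transmission_loss_db(d, f_khz=20))
```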
The underlying element that supports device communication in a MANET is the wireless connection capability. Each node can communicate with other nodes via the creation of routing paths. However, because nodes in a MANET are autonomous and routing paths are created based only on the current condition of the network, some paths are extremely unstable. In light of these shortcomings, many research works emphasize the improvement of routing path algorithms. Regardless of the application it supports, a MANET possesses unique characteristics that enable mobile nodes to form dynamic communication irrespective of the availability of a fixed network. However, this inherent nature leaves nodes in a MANET vulnerable to denial of service. A typical Denial of Service (DoS) attack in a MANET is the Black Hole attack, caused by a malicious node, or a set of nodes, advertising false routing updates. Typically, malicious nodes are difficult to detect. Each node is equipped with a particular type of routing protocol and voluntarily participates in relaying packets. However, some nodes may not be genuine and may have been tampered with to behave maliciously, causing the Black Hole attack. Several on-demand routing protocols, e.g. Ad hoc On Demand Distance Vector (AODV) and Dynamic Source Routing (DSR), are susceptible to such attacks. In principle, the attack exploits the Route Request (RREQ) discovery operation and falsifies the sequence number and shortest-path information. Malicious nodes are able to exploit this loophole in the RREQ discovery process because there is no validation step. As a result, genuine RREQ packets are exploited and erroneously relayed to a false node(s). This paper highlights the effect of Black Hole nodes on network performance and thereby substantiates previous work [1]. Several simulation experiments are iterated using NS-2, employing various scenarios and traffic loads. The simulation results show that the presence of Black Hole nodes in a network can degrade the packet delivery ratio and throughput by as much as 100%.
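As a toy illustration of the loophole (not the paper's simulation setup), the snippet below shows why an unvalidated destination sequence number lets a Black Hole node win route selection in AODV-style discovery.

```python
# Toy AODV route selection: the requester picks the route reply with the
# highest destination sequence number (freshest route), breaking ties on
# hop count. A Black Hole node simply advertises an inflated sequence
# number and a one-hop path, so it wins without holding any real route.

def best_route(rreps):
    return max(rreps, key=lambda r: (r["dest_seq"], -r["hops"]))

honest = {"from": "B", "dest_seq": 42, "hops": 3}
blackhole = {"from": "M", "dest_seq": 2**31, "hops": 1}  # forged freshness

print(best_route([honest, blackhole])["from"])  # -> "M": traffic is captured
```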
An important topic in cybersecurity is validating Active Indicators (AI): stimuli that can be implemented in systems to trigger responses from individuals who might or might not be Insider Threats (ITs). The way a person responds to an AI is being validated as a means of distinguishing a potential threat from a non-threat. To execute this validation process, it is important to create a paradigm that allows manipulation of AIs while measuring responses. The scenarios are posed in a manner that requires participants to be situationally aware that they are being monitored and to act deceptively. In particular, manipulations in the environment should produce no differences between conditions with respect to immersion and ease of use; the narrative should be the driving force behind non-deceptive and IT responses. The success of the narrative and the simulation environment in inducing such behaviors is determined by immersion, usability, and stress-response questionnaires, as well as performance. Initial results on the feasibility of using a narrative reliant upon situation awareness of monitoring and evasion are discussed.
This paper describes a unified framework for the simulation and analysis of cyber physical systems (CPSs). The framework relies on the FreeBSD-based IMUNES network simulator. Components of the CPS are modeled as nodes within the IMUNES network simulator; these nodes communicate using real TCP/IP traffic. Furthermore, the simulated system can be exposed to other networks and the Internet to make it look like a real SCADA system. The framework has been used to simulate a TRIGA nuclear reactor. This is accomplished by creating nodes within the IMUNES network capable of running system modules that simulate different CPS components. Nodes communicate using MODBUS/TCP, a widely used process control protocol. A goal of this work is to eventually integrate the simulator with a honeynet. This allows researchers not only to simulate a digital control system using real TCP/IP traffic to test control strategies and network topologies, but also to explore possible cyber attacks and mitigation strategies.
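For context, a MODBUS/TCP request is simple enough to build by hand; the sketch below constructs a standard "Read Holding Registers" frame with Python's standard library. The node address is hypothetical, and the framework's actual module code is not described at this level of detail.

```python
import socket
import struct

def read_holding_registers(host, start, count, unit=1, tid=1, port=502):
    """Send a raw MODBUS/TCP 'Read Holding Registers' (function 0x03) request.

    MBAP header: transaction id, protocol id (0), remaining length, unit id;
    PDU: function code, start address, register count."""
    pdu = struct.pack(">BHH", 0x03, start, count)
    mbap = struct.pack(">HHHB", tid, 0x0000, len(pdu) + 1, unit)
    with socket.create_connection((host, port), timeout=2) as s:
        s.sendall(mbap + pdu)
        resp = s.recv(260)
    # Register data starts after the 7-byte MBAP header, function code
    # and byte-count field of the response.
    return struct.unpack(f">{count}H", resp[9:9 + 2 * count])

# e.g. poll a simulated reactor node inside the IMUNES topology
# (the address is hypothetical):
# print(read_holding_registers("10.0.0.2", start=0, count=4))
```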
Small Unmanned Aircraft Systems (sUAS) are already revolutionizing agricultural and environmental monitoring through the acquisition of high-resolution multi-spectral imagery on-demand. However, in order to accurately understand various complex environmental and agricultural processes, it is often necessary to collect physical samples of pests, pathogens, and insects from the field for ex-situ analysis. In this paper, we describe a sUAS for autonomous deployment and recovery of a novel environmental sensor probe. We present the UAS software and hardware stack, and a probe design that can be adapted to collect a variety of environmental samples and can be transported autonomously for off-site analysis. Our team participated in an NSF-sponsored student unmanned aerial vehicle (UAV) challenge, where we used our sUAS to deploy and recover a scale-model mosquito trap outdoors. Results from indoor and field trials are presented, and the challenges experienced in detecting and docking with the probe in outdoor conditions are discussed.
Verifying attacks against cyber physical systems can be a costly and time-consuming process. By using a simulated environment, attacks can be verified quickly and accurately. By combining the simulation of a cyber physical system with a hybrid attack graph, the effects of a series of exploits can be accurately analyzed. Furthermore, the use of a simulated environment to verify attacks may uncover new information about the nature of the attacks.
Wikipedia is one of the most popular information platforms on the Internet. The user access pattern to Wikipedia pages depends on their relevance in the current worldwide social discourse. We use publicly available statistics about the top-1000 most popular pages on each day to estimate the efficiency of caches supporting the platform. While the data volumes are moderate, the main goal of Wikipedia caches is to reduce access times for page views and edits. We study the impact of the most popular pages on the achievable cache hit rate in comparison to Zipf request distributions, and we include daily dynamics in popularity.
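As a baseline for such a study, the hit rate of an idealized cache that pins the most popular pages of a Zipf request distribution is straightforward to compute; the exponent below is an illustrative assumption, not a value fitted by the paper.

```python
def zipf_hit_rate(cache_size, catalog_size, alpha=0.8):
    """Ideal hit rate when the cache pins the most popular pages of a
    Zipf(alpha) popularity distribution: the mass of the top ranks."""
    weights = [r ** -alpha for r in range(1, catalog_size + 1)]
    return sum(weights[:cache_size]) / sum(weights)

# With daily popularity shifts, a real cache (e.g. LRU) trails this
# bound; that gap is what per-day top-1000 statistics let one quantify.
for c in (100, 1_000, 10_000):
    print(c, round(zipf_hit_rate(c, 1_000_000), 3))
```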
The increasing demand for secure interactions between network domains brings new challenges to access control technologies. In this paper we design an access control framework that provides a multilevel mapping method between hierarchical access control structures for achieving multilevel security protection in cross-domain networks. Hierarchical access control structures ensure rigorous multilevel security within individual domains, and a mapping method based on subject attributes is proposed to determine a subject's security level in its target domain. Experimental results obtained from simulations are also reported to verify the effectiveness of the proposed access control model.
The heterogeneous SIS model for virus spread in any finite-size graph characterizes the influence of the SIS model's parameters and can be analyzed with the extended N-Intertwined model introduced in [1]. In this paper we focus specifically on heterogeneous virus spread in the star network. The epidemic threshold and the average meta-stable-state fraction of infected nodes are derived for virus spread in the star network. Our results illustrate the effect of the SIS model's parameters on the steady-state infection.
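For intuition, here is a sketch of the homogeneous special case: the N-Intertwined (NIMFA) steady-state equations on a star, solved by fixed-point iteration. The star's spectral radius is sqrt(N), so the homogeneous epidemic threshold is 1/sqrt(N); the heterogeneous per-node rates analyzed in the paper are not modeled here.

```python
import math

def nimfa_star(n_leaves, tau, iters=2000):
    """Fixed-point iteration of the homogeneous N-Intertwined (NIMFA)
    steady state on a star: hub infection v_h, leaf infection v_l.
    tau = beta/delta is the effective infection rate."""
    v_h, v_l = 0.5, 0.5
    for _ in range(iters):
        v_h = tau * n_leaves * v_l / (1 + tau * n_leaves * v_l)
        v_l = tau * v_h / (1 + tau * v_h)
    return v_h, v_l

n = 100
tau_c = 1 / math.sqrt(n)  # epidemic threshold: 1 / spectral radius
for tau in (0.5 * tau_c, 2 * tau_c):
    v_h, v_l = nimfa_star(n, tau)
    frac = (v_h + n * v_l) / (n + 1)  # average fraction of infected nodes
    print(round(tau, 3), round(frac, 4))  # below threshold -> dies out
```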
Existing data management and search systems for the Internet of Things use centralized databases. Because these systems depend on a server, they exhibit security vulnerabilities such as IP spoofing, a single point of failure, and Sybil attacks. This paper proposes a data management system based on blockchain, which ensures security by using ECDSA digital signatures and the SHA-256 hash function. The location of the data owner, recorded as an IP address, and the data name are transcribed into a block that is included in the blockchain. Furthermore, we devise a data management and searching method based on analyzing block hash values. By using security properties of blockchain such as authentication, non-repudiation and data integrity, this system has a security advantage over previous data management and searching systems that use centralized databases or P2P networks.
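A minimal sketch of the two security primitives follows, using Python's hashlib for the SHA-256 block hash and the third-party ecdsa package for signatures; the block field layout is an illustrative assumption, not the paper's exact schema.

```python
import hashlib
import json
from ecdsa import SigningKey, NIST256p  # pip install ecdsa

def block_hash(block):
    """SHA-256 over the canonical JSON encoding of the block body."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# The data owner signs the record; the block stores the owner's IP-based
# location, the data name, and the previous block's hash (field names
# here are illustrative).
sk = SigningKey.generate(curve=NIST256p)
record = {"owner_ip": "203.0.113.7", "data_name": "sensor-42/temp",
          "prev_hash": "00" * 32}
payload = json.dumps(record, sort_keys=True).encode()
signature = sk.sign(payload)

# Anyone can verify integrity (hash chain) and authenticity (ECDSA).
assert sk.verifying_key.verify(signature, payload)
print(block_hash(record))
```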
Sophisticated cyber attacks by state-sponsored and criminal actors continue to plague government and industrial infrastructure. Intuitively, partitioning cyber systems into survivable, intrusion tolerant compartments is a good idea. This prevents witting and unwitting insiders from moving laterally and reaching back to their command and control (C2) servers. However, there is a lack of artifacts that can predict the effectiveness of this approach in a realistic setting. We extend earlier work by relaxing simplifying assumptions and providing a new attacker-facing metric. In this article, we propose new closed-form mathematical models and a discrete time simulation to predict three critical statistics: probability of compromise, probability of external host compromise and probability of reachback. The results of our new artifacts agree with one another and with previous work, which suggests they are internally valid and a viable method to evaluate the effectiveness of cyber zone defense.
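The closed-form models themselves are not given in the abstract; as a loose illustration of the discrete-time simulation idea, the toy Monte Carlo below estimates how compartmentalization lowers the chance that an intruder both traverses all zones and reaches back to C2. Every parameter here is hypothetical.

```python
import random

def p_reachback(n_zones, p_lateral, p_c2, steps=100, trials=20_000):
    """Monte Carlo estimate of the probability that an intruder starting
    in zone 0 both crosses every zone boundary (lateral movement) and
    opens a C2 channel. Per-step probabilities are hypothetical."""
    hits = 0
    for _ in range(trials):
        zone, c2 = 0, False
        for _ in range(steps):
            if zone < n_zones - 1 and random.random() < p_lateral:
                zone += 1
            if random.random() < p_c2:
                c2 = True
        if zone == n_zones - 1 and c2:
            hits += 1
    return hits / trials

# More compartments -> lower probability of full lateral traversal.
for zones in (2, 4, 8):
    print(zones, p_reachback(zones, p_lateral=0.02, p_c2=0.01))
```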
Compute-intensive simulations typically place substantial workloads on an online simulation platform backed by limited computing clusters and storage resources. Some (or most) of the simulations initiated by users may carry input parameters/files that have already been provided by other (or the same) users in the past. Unfortunately, these duplicate simulations can degrade the performance of the platform through drastic consumption of the limited resources shared by the platform's many users. To minimize or avoid repeated simulations, we present a novel system called SUPERMAN (SimUlation ProvEnance Recycling MANager) that records simulation provenances and recycles the results of past simulations. This system presents a great opportunity not only to reuse existing results but also to perform various analytics helpful for those who are not familiar with the platform. The system also offers interoperability with other systems by collecting provenances in a standardized format. In our simulated experiments we found that over half of past computing jobs could be answered by our system without actual execution.
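A sketch of the recycling idea, assuming results are keyed by a digest of input parameters and files; SUPERMAN's actual provenance schema and storage are not described at this level of detail.

```python
import hashlib
import json

class SimulationRecycler:
    """Cache simulation results keyed by a digest of their inputs, so a
    repeated job is answered from provenance instead of re-executed
    (a sketch of the idea, not SUPERMAN's implementation)."""

    def __init__(self, run_simulation):
        self.run_simulation = run_simulation
        self.store = {}  # digest -> result; stands in for a provenance DB

    def submit(self, params, input_files=()):
        h = hashlib.sha256(json.dumps(params, sort_keys=True).encode())
        for path in sorted(input_files):
            with open(path, "rb") as f:
                h.update(hashlib.sha256(f.read()).digest())
        key = h.hexdigest()
        if key not in self.store:              # only novel jobs execute
            self.store[key] = self.run_simulation(params, input_files)
        return self.store[key]

recycler = SimulationRecycler(lambda p, f: sum(p.values()))  # toy "simulation"
print(recycler.submit({"dt": 0.1, "steps": 10}))  # runs
print(recycler.submit({"steps": 10, "dt": 0.1}))  # recycled: same digest
```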
Given the proliferation of digital assistants in everyday mobile technology, it appears inevitable that next generation vehicles will be embodied by similar agents, offering engaging, natural language interactions. However, speech can be cognitively captivating. It is therefore important to understand the demand that such interfaces may place on drivers. Twenty-five participants undertook four drives (counterbalanced) in a medium-fidelity driving simulator: 1. Interacting with a state-of-the-art digital driving assistant ('DDA') (presented using Wizard-of-Oz); 2. Engaged in a hands-free mobile phone conversation; 3. Undertaking the delayed-digit recall ('2-back') task; and 4. With no secondary task (baseline). Physiological arousal, subjective workload assessment, tactile detection task (TDT) and driving performance measures consistently revealed the '2-back' drive as the most cognitively demanding (highest workload, poorest TDT performance). Mobile phone and DDA conditions were largely equivalent, attracting low/medium cognitive workload. Findings are discussed in the context of designing in-vehicle natural language interfaces to mitigate cognitive demand.
We develop a novel model-based hardware-in-the-loop (HIL) framework for optimising energy consumption of embedded software controllers. Controller and plant models are specified as networks of parameterised timed input/output automata and translated into executable code. The controller is encoded into the target embedded hardware, which is connected to a power monitor and interacts with the simulation of the plant model. The framework then generates a power consumption model that maps controller transitions to distributions over power measurements, and is used to optimise the timing parameters of the controller, without compromising a given safety requirement. The novelty of our approach is that we measure the real power consumption of the controller and use the data thus obtained for energy optimisation. We employ timed Petri nets as an intermediate representation of the executable specification, which facilitates efficient code generation and fast simulations. Our framework uniquely combines the advantages of rigorous specifications with accurate power measurements and methods for online model estimation, thus enabling automated design of correct and energy-efficient controllers.
In recent decades, there have been many more public health crises in the world, such as the H1N1, H7N9 and Ebola outbreaks. At the same time, the number of public crisis events has been growing fast. Crisis response to these public emergencies often involves a complex system consisting of cyber, physical and social domains (the CPS Model). In order to collect and analyze these events with higher efficiency, we need to design and adopt new tools and models. In this paper, we used a CPS Model based Online Opinion Governance system, built on a cellphone app for data collection and on back-end decision making. Based on the online opinion data we collected, we also propose a graded risk classification. Using this risk classification method, we have built an efficient CPS Model based emergency response and handling system. It has proved useful in several real incidents in China in recent years.
Enormous amounts of educational data have been accumulated through Massive Open Online Courses (MOOCs), as well as commercial and non-commercial learning platforms. This is in addition to the educational data released by the US government since 2012 to facilitate disruption in education by making data freely available. The high volume, variety and velocity of the collected data necessitate the use of big data tools and storage systems such as distributed databases for storage and Apache Spark for analysis. This tutorial will introduce researchers and faculty to real-world applications involving data mining and predictive analytics in the learning sciences. In addition, the tutorial will introduce the statistics required to validate and accurately report results. Topics will cover how big data is being used to transform education. Specifically, we will demonstrate how exploratory data analysis, data mining, predictive analytics, machine learning, and visualization techniques are being applied to educational big data to improve learning and scale insights derived from millions of students' records. The tutorial will be held over a half day and will be hands-on with pre-posted material. Due to the interdisciplinary nature of the work, the tutorial appeals to researchers from a wide range of backgrounds including big data, predictive analytics, learning sciences, educational data mining, and, in general, those interested in how big data analytics can transform learning. As a prerequisite, attendees are required to have familiarity with at least one programming language.
We investigate the problem of constructing exponentially converging estimates of the state of a continuous-time system from state measurements transmitted via a limited-data-rate communication channel, so that only quantized and sampled measurements of continuous signals are available to the estimator. Following prior work on topological entropy of dynamical systems, we introduce a notion of estimation entropy which captures this data rate in terms of the number of system trajectories that approximate all other trajectories with desired accuracy. We also propose a novel alternative definition of estimation entropy which uses approximating functions that are not necessarily trajectories of the system. We show that the two entropy notions are actually equivalent. We establish an upper bound for the estimation entropy in terms of the sum of the system's Lipschitz constant and the desired convergence rate, multiplied by the system dimension. We propose an iterative procedure that uses quantized and sampled state measurements to generate state estimates that converge to the true state at the desired exponential rate. The average bit rate utilized by this procedure matches the derived upper bound on the estimation entropy. We also show that no other estimator (based on iterative quantized measurements) can perform the same estimation task with bit rates lower than the estimation entropy. Finally, we develop an application of the estimation procedure in determining, from the quantized state measurements, which of two competing models of a dynamical system is the true model. We show that under a mild assumption of exponential separation of the candidate models, detection is always possible in finite time. Our numerical experiments with randomly generated affine dynamical systems suggest that in practice the algorithm always works.
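In symbols, the upper bound stated in this abstract reads roughly as follows, for an n-dimensional system with Lipschitz constant L and desired exponential convergence rate alpha; the log base-2 factor fixes the units as bits and is our reading of the prose, not a quotation from the paper.

```latex
% Estimation entropy upper bound (in bits): system dimension n times the
% sum of the Lipschitz constant L and the convergence rate \alpha.
h_{\mathrm{est}}(\alpha) \;\le\; n\,(L + \alpha)\,\log_2 e
```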
We study the trade-off between the benefits obtained by communication, vs. the risks due to exposure of the location of the transmitter. To study this problem, we introduce a game between two teams of mobile agents, the P-bots team and the E-bots team. The E-bots attempt to eavesdrop and collect information, while evading the P-bots; the P-bots attempt to prevent this by performing patrol and pursuit. The game models a typical use-case of micro-robots, i.e., their use for (industrial) espionage. We evaluate strategies for both teams, using analysis and simulations.
We consider the problem of translating a deterministic simulation model (like MATLAB Simulink, Modelica or Ptolemy models) into a verification model expressed as a network of hybrid automata. The goal is to verify safety using reachability analysis on the verification model. Simulation models typically use transitions with urgent semantics, which must be taken as soon as possible. Urgent transitions also make it possible to decompose systems that would otherwise need to be modeled with a monolithic hybrid automaton. In this paper, we include urgent transitions in our verification models and propose a suitable adaptation of our reachability algorithm. However, the simulation model, due to its imperfections, may be unsafe even though the corresponding hybrid automata are safe. Conversely, set-based reachability may not be able to show safety of an ideal formal model, since complex dynamics necessarily entail overapproximations. Taken as a whole, the formal modeling and verification process can both falsely claim safety and fail to show safety of the concrete system. We address this inconsistency by relaxing the model as follows. The standard semantics of hybrid automata is a mathematical idealization, where reactions are considered to be instantaneous and physical measurements infinitely precise. We propose semantics that relax these assumptions, where guard conditions are sampled in discrete time and admit measurement errors. The relaxed semantics can be translated to an equivalent relaxed model in standard semantics. The relaxed model is realistic in the sense that it can be implemented on hardware fast and precise enough, and in a way that safety is preserved. Finally, we show that overapproximative reachability analysis can show safety of relaxed models, which is not the case in general.
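One illustrative way to formalize the relaxed guard semantics described above (not necessarily the paper's exact definition) is to enable a transition with guard x in G only at sampling instants and under bounded measurement error:

```latex
% Relaxed enabling condition, for sampling period \delta and measurement
% error bound \epsilon (both symbols are illustrative assumptions):
\exists k \in \mathbb{N}:\; t = k\delta
\quad\wedge\quad
\exists e,\ \|e\| \le \epsilon:\; x(k\delta) + e \in G
```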
Cyber-Physical Systems (CPSs) are often tested at different test levels following "X-in-the-Loop" configurations: Model-, Software- and Hardware-in-the-loop (MiL, SiL and HiL). While the MiL and SiL test levels aim at testing functional requirements at the system level, the HiL test level tests functional as well as non-functional requirements by performing a real-time simulation. Testing CPS product line configurations is costly: there are many variants to test, test cases are long, the physical layer has to be simulated, and co-simulation is often necessary. It is therefore extremely important to select the test cases that cover the objectives of each level in an allowable amount of time. We propose an efficient test case selection approach adapted to the "X-in-the-Loop" test levels. Search algorithms are employed to reduce the time required to test configurations of CPS product lines while achieving the test objectives of each level. We empirically evaluate three commonly used search algorithms, i.e., Genetic Algorithm (GA), Alternating Variable Method (AVM) and Greedy (with Random Search (RS) as a baseline), on two case studies, with the aim of integrating the best algorithm into our approach. Results suggest that, compared with RS, our approach can reduce the cost of testing CPS product line configurations by approximately 80% while improving overall test quality.
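As a flavor of the selection problem, here is a sketch of a Greedy strategy like the one the paper compares: repeatedly pick the test with the best uncovered-objectives-per-second ratio until the level's time budget is spent. The test data and budget are invented for illustration.

```python
def greedy_select(test_cases, budget):
    """Greedy test selection for one test level (MiL, SiL or HiL).
    `test_cases` maps name -> (duration, set of covered objectives)."""
    selected, covered, spent = [], set(), 0.0
    remaining = dict(test_cases)
    while remaining:
        # Best ratio of newly covered objectives per unit execution time.
        name, (dur, objs) = max(
            remaining.items(),
            key=lambda kv: len(kv[1][1] - covered) / kv[1][0])
        del remaining[name]
        if spent + dur > budget or not (objs - covered):
            continue  # over budget or adds no new coverage
        selected.append(name)
        covered |= objs
        spent += dur
    return selected, covered

tests = {"t1": (5.0, {"req1", "req2"}), "t2": (2.0, {"req2"}),
         "t3": (4.0, {"req3", "req4"}), "t4": (9.0, {"req1", "req3"})}
print(greedy_select(tests, budget=12.0))  # -> (['t2', 't3', 't1'], all four objectives)
```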
We present a technique for performing secure location verification of position claims by measuring the time-difference of arrival (TDoA) between a fixed receiver node and a mobile one. The mobile node moves randomly in order to substantially increase the difficulty for an attacker to make false messages appear genuine. We explore the performance and requirements of such a system in the context of verifying aircraft position claims made over the Automatic Dependent Surveillance - Broadcast (ADS-B) system through the use of simulation, and find that it correctly detects false claims with a peak accuracy of over 97% for the most complex attack modelled, requiring only 75 m of deviation between the reported position and the actual position in order for a false claim to be detected. We then report on our design for a mobile receiver and our construction of a prototype using low-cost COTS equipment. We discuss some additional benefits of incorporating a mobile node, examine the difficulties to be overcome and explore the applicability of the approach in other location verification use-cases.
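A sketch of the underlying consistency check follows, under the simplifying assumptions of line-of-sight propagation and synchronized receivers; the tolerance and geometry are illustrative, not the paper's tuned values.

```python
import math

C = 299_792_458.0  # propagation speed of the RF signal, m/s

def expected_tdoa(claim, fixed_rx, mobile_rx):
    """TDoA the claimed position implies between the two receivers."""
    return (math.dist(claim, fixed_rx) - math.dist(claim, mobile_rx)) / C

def claim_is_false(claim, fixed_rx, mobile_rx, measured_tdoa, tol_s=250e-9):
    """Reject the position claim if the measured time-difference of
    arrival deviates from the expected one by more than a tolerance
    (250 ns is an illustrative value)."""
    return abs(expected_tdoa(claim, fixed_rx, mobile_rx) - measured_tdoa) > tol_s

# A mobile receiver at an unpredictable position makes it hard for an
# attacker to precompute signal timing that passes this check.
fixed_rx, mobile_rx = (0.0, 0.0, 0.0), (30_000.0, 0.0, 0.0)
true_pos = (10_000.0, 0.0, 10_000.0)
spoofed = (10_075.0, 0.0, 10_000.0)   # ~75 m off the true position
measured = expected_tdoa(true_pos, fixed_rx, mobile_rx)
print(claim_is_false(spoofed, fixed_rx, mobile_rx, measured))  # -> True
```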