
Found 609 results

Filters: Keyword is Cyber-physical systems
2018-08-23
Ziegler, A., Luisier, M.  2017.  Phonon confinement effects in diffusive quantum transport simulations with the effective mass approximation and k·p method. 2017 International Conference on Simulation of Semiconductor Processes and Devices (SISPAD). :25–28.

Despite the continuous shrinking of the transistor dimensions, advanced modeling tools going beyond the ballistic limit of transport are still critically needed to ensure accurate device investigations. For that purpose we present here a straight-forward approach to include phonon confinement effects into dissipative quantum transport calculations based on the effective mass approximation (EMA) and the k·p method. The idea is to scale the magnitude of the deformation potentials describing the electron-phonon coupling to obtain the same low-field mobility as with full-band simulations and confined phonons. This technique is validated by demonstrating that after adjusting the mobility value of n- and p-type silicon nanowire transistors, the resulting EMA and k·p I-V characteristics agree well with those derived from full-band studies.

Ji, X., Yao, X., Tadayon, M. A., Mohanty, A., Hendon, C. P., Lipson, M.  2017.  High confinement and low loss Si3N4 waveguides for miniaturizing optical coherence tomography. 2017 Conference on Lasers and Electro-Optics (CLEO). :1–2.

We show high-confinement, thermally tunable, low-loss (0.27 ± 0.04 dB/cm) Si3N4 waveguides that are 42 cm long. We show that this platform can enable the miniaturization of traditionally bulky active OCT components.

Bader, S., Gerlach, P., Michalzik, R.  2017.  Optically controlled current confinement in parallel-driven VCSELs. 2017 Conference on Lasers and Electro-Optics Europe European Quantum Electronics Conference (CLEO/Europe-EQEC). :1–1.

We have presented a unique PT-VCSEL arrangement which experimentally demonstrates the process of optically controlled current confinement. Lessons learned will be transferred to future generations of solitary devices, which will be optimized with respect to the degree of confinement (depending on the parameters of the PT, in particular the current gain), threshold current and electro-optic efficiency.

Keeler, G. A., Campione, S., Wood, M. G., Serkland, D. K., Parameswaran, S., Ihlefeld, J., Luk, T. S., Wendt, J. R., Geib, K. M.  2017.  Reducing optical confinement losses for fast, efficient nanophotonic modulators. 2017 IEEE Photonics Society Summer Topical Meeting Series (SUM). :201–202.

We demonstrate high-speed operation of ultracompact electroabsorption modulators based on epsilon-near-zero confinement in indium oxide (In2O3) on silicon using field-effect carrier density tuning. Additionally, we discuss strategies to enhance modulator performance and reduce confinement-related losses by introducing high-mobility conducting oxides such as cadmium oxide (CdO).

Ning, F., Wen, Y., Shi, G., Meng, D.  2017.  Efficient tamper-evident logging of distributed systems via concurrent authenticated tree. 2017 IEEE 36th International Performance Computing and Communications Conference (IPCCC). :1–9.
Secure logging as an indispensable part of any secure system in practice is well understood by both academia and industry. However, providing security for audit logs on an untrusted machine in a large distributed system is still a challenging task. The emergence and wide availability of log management tools prompted plenty of work in the security community that allows clients or auditors to verify the integrity of the log data. Most recent solutions to this problem focus on the space-efficiency or public verifiability of forward security. Unfortunately, existing secure audit logging schemes have significant performance limitations that make them impractical for real-time, large-scale distributed applications: existing cryptographic hashing is computationally expensive for logging in task-intensive or resource-constrained systems, especially for proving individual log events, while the Merkle-tree approach has fundamental limitations when faced with highly concurrent, large-scale log streams due to its serial appending. The verification step of the Merkle-tree-based approach, which requires a logarithmic number of hash computations, becomes a bottleneck for overall performance. There is a huge gap between the flux of collected log streams and the computational efficiency of integrity verification in large-scale distributed systems. In this work, we develop a novel scheme whose performance compares favorably with existing solutions. The performance guarantees that we achieve stem from a novel data structure called the concurrent authenticated tree, which allows log events to be appended concurrently and removes the need to wait for append operations to complete sequentially. We implement a prototype using chameleon hashing based on the discrete log assumption and a Merkle history tree. A comprehensive experimental evaluation of the proposed and existing approaches is used to validate the analytical models and verify our claims. The results demonstrate that our proposed scheme, verifying in a concurrent way, is significantly more efficient than the previous tree-based approach.
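The logarithmic verification cost this abstract identifies as a bottleneck is easy to see in a plain Merkle tree. The Python sketch below is our own illustration of that baseline, not the paper's concurrent authenticated tree (all names are ours): verifying one log event costs one hash per tree level.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_levels(leaves):
    """Build the tree bottom-up; return every level, leaves first."""
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]          # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def audit_path(levels, index):
    """Sibling hashes needed to verify one leaf: one per level, O(log n)."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2))  # (sibling, am-I-the-right-child)
        index //= 2
    return path

def verify(leaf, path, root):
    node = h(leaf)
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

events = [b"event-%d" % i for i in range(1000)]
levels = merkle_levels(events)
proof = audit_path(levels, 123)                   # 10 sibling hashes for 1000 leaves
assert verify(events[123], proof, levels[-1][0])
```

The paper attacks the complementary cost: appending to such a tree is inherently sequential, which its chameleon-hash-based structure avoids.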
Haq, M. S., Anwar, Z., Ahsan, A., Afzal, H.  2017.  Design pattern for secure object oriented information systems development. 2017 14th International Bhurban Conference on Applied Sciences and Technology (IBCAST). :456–460.
There are many object-oriented design patterns and frameworks intended to make information systems robust, scalable and extensible. Object-oriented patterns are classified into the categories of creational, structural, behavioral, security, concurrency, user interface, relational, social and distributed. None of these categories, however, provides a pathway and standards for making an information system fulfill the requirements of confidentiality, integrity and availability. This research work explores that gap and suggests possible object-oriented design patterns focusing on the information security perspectives of the information system. At the application level, such an object-oriented design pattern/framework shall try to ensure the confidentiality, integrity and availability of the information system intuitively. The main objective of this research work is to create a theoretical background for object-oriented frameworks and design patterns that ensure confidentiality, integrity and availability of systems developed through the object-oriented paradigm.
Vassena, M., Breitner, J., Russo, A.  2017.  Securing Concurrent Lazy Programs Against Information Leakage. 2017 IEEE 30th Computer Security Foundations Symposium (CSF). :37–52.
Many state-of-the-art information-flow control (IFC) tools are implemented as Haskell libraries. A distinctive feature of this language is lazy evaluation. In his influential paper on why functional programming matters, John Hughes proclaims: "Lazy evaluation is perhaps the most powerful tool for modularization in the functional programmer's repertoire." Unfortunately, lazy evaluation makes IFC libraries vulnerable to leaks via the internal timing covert channel. The problem arises due to sharing, the distinguishing feature of lazy evaluation, which ensures that results of evaluated terms are stored for subsequent re-utilization. In this sense, the evaluation of a term in a high context represents a side-effect that eludes the security mechanisms of the libraries. A naïve approach to prevent that consists in forcing the evaluation of terms before entering a high context. However, this is not always possible in lazy languages, where terms often denote infinite data structures. Instead, we propose a new language primitive, lazyDup, which duplicates terms lazily. By using lazyDup to duplicate terms manipulated in high contexts, we make the security library MAC robust against internal timing leaks via lazy evaluation. We show that well-typed programs satisfy progress-sensitive non-interference in our lazy calculus with non-strict references. Our security guarantees are supported by mechanized proofs in the Agda proof assistant.
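The internal timing channel described here comes from sharing: once a term is forced in a secret-dependent ("high") context, any later evaluation is instantaneous, and that timing difference is observable. The hypothetical Python sketch below uses memoisation as a stand-in for lazy-evaluation sharing (all names are ours, not the paper's Haskell machinery):

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)          # memoisation stands in for lazy-evaluation sharing
def shared_thunk(n):
    time.sleep(0.2)               # an expensive term that is evaluated at most once
    return n * n

def high_context(secret: bool):
    if secret:                    # secret-dependent evaluation in the "high" context
        shared_thunk(42)

def low_observer() -> bool:
    t0 = time.perf_counter()
    shared_thunk(42)              # fast if and only if it was already forced
    return time.perf_counter() - t0 < 0.1

high_context(secret=True)
print("leaked secret bit:", low_observer())   # True: the timing reveals the secret
```

The paper's lazyDup corresponds, in this analogy, to giving the high context its own lazy copy of the term (a separate cache), so forcing it leaves the low observer's timing unchanged.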
Zave, Pamela, Ferreira, Ronaldo A., Zou, Xuan Kelvin, Morimoto, Masaharu, Rexford, Jennifer.  2017.  Dynamic Service Chaining with Dysco. Proceedings of the Conference of the ACM Special Interest Group on Data Communication. :57–70.
Middleboxes are crucial for improving network security and performance, but only if the right traffic goes through the right middleboxes at the right time. Existing traffic-steering techniques rely on a central controller to install fine-grained forwarding rules in network elements—at the expense of a large number of rules, a central point of failure, challenges in ensuring all packets of a session traverse the same middleboxes, and difficulties with middleboxes that modify the "five tuple." We argue that a session-level protocol is a fundamentally better approach to traffic steering, while naturally supporting host mobility and multihoming in an integrated fashion. In addition, a session-level protocol can enable new capabilities like dynamic service chaining, where the sequence of middleboxes can change during the life of a session, e.g., to remove a load-balancer that is no longer needed, replace a middlebox undergoing maintenance, or add a packet scrubber when traffic looks suspicious. Our Dysco protocol steers the packets of a TCP session through a service chain, and can dynamically reconfigure the chain for an ongoing session. Dysco requires no changes to end-host and middlebox applications, host TCP stacks, or IP routing. Dysco's distributed reconfiguration protocol handles the removal of proxies that terminate TCP connections, middleboxes that change the size of a byte stream, and concurrent requests to reconfigure different parts of a chain. Through formal verification using Spin and experiments with our Linux-based prototype, we show that Dysco is provably correct, highly scalable, and able to reconfigure service chains across a range of middleboxes.
Giotsas, Vasileios, Richter, Philipp, Smaragdakis, Georgios, Feldmann, Anja, Dietzel, Christoph, Berger, Arthur.  2017.  Inferring BGP Blackholing Activity in the Internet. Proceedings of the 2017 Internet Measurement Conference. :1–14.
The Border Gateway Protocol (BGP) has been used for decades as the de facto protocol to exchange reachability information among networks in the Internet. However, little is known about how this protocol is used to restrict reachability to selected destinations, e.g., that are under attack. While such a feature, BGP blackholing, has been available for some time, we lack a systematic study of its Internet-wide adoption, practices, and network efficacy, as well as the profile of blackholed destinations. In this paper, we develop and evaluate a methodology to automatically detect BGP blackholing activity in the wild. We apply our method to both public and private BGP datasets. We find that hundreds of networks, including large transit providers, as well as about 50 Internet exchange points (IXPs) offer blackholing service to their customers, peers, and members. Between 2014 and 2017, the number of blackholed prefixes increased by a factor of 6, peaking at 5K concurrently blackholed prefixes by up to 400 Autonomous Systems. We assess the effect of blackholing on the data plane using both targeted active measurements as well as passive datasets, finding that blackholing is indeed highly effective in dropping traffic before it reaches its destination, though it also discards legitimate traffic. We augment our findings with an analysis of the target IP addresses of blackholing. Our tools and insights are relevant for operators considering offering or using BGP blackholing services as well as for researchers studying DDoS mitigation in the Internet.
Ayoob, Mustafa, Adi, Wael, Prevelakis, Vassilis.  2017.  Using Ciphers for Failure-Recovery in ITS Systems. Proceedings of the 12th International Conference on Availability, Reliability and Security. :98:1–98:7.
Combining error-correction coding (ECC) and cryptography has been proposed in the recent decade, making use of bit-quality parameters to improve the error correction capability. Most such techniques combine authentication crypto-functions jointly with ECC codes to improve system reliability, while fewer proposals involve ciphering functions with ECC to improve reliability. In this work, we propose practical and pragmatic low-cost approaches for making use of existing ciphering functions for reliability improvement. The presented techniques show that ciphering functions (as deterministic, non-linear bijective functions) can serve to achieve error correction enhancement and hence allow error recovery and scalable security trade-offs with or without additional ECC components. We demonstrate two best-effort error-correcting strategies. It is further shown that the targeted reliability improvement is scalable to attain practical usability. The first proposed technique is a pure-cipher-based error correction procedure deploying hard-decision, best-effort operations to improve system survivability without changing the system configuration. The second strategy makes use of ECC in combination with the ciphering function to enhance system survivability. The correction procedures are based on a simple experimental search that modifies the corrupted ciphertext until predefined criteria become valid. This procedure may, however, turn out to be equivalent to a successful integrity/authenticity attack, which may reduce the system security level, though in a scalable, predictable and non-significant fashion.
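A minimal sketch of the first, pure-cipher strategy as we read it: flip ciphertext bits one at a time and accept the first candidate whose decryption satisfies a predefined plausibility criterion. The toy 4-round Feistel cipher and the `plausible` predicate below are our own stand-ins, not the authors' construction.

```python
import hashlib

BLOCK = 16  # bytes; a toy 4-round Feistel cipher (deterministic, bijective)

def _f(key: bytes, half: bytes, rnd: int) -> bytes:
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:BLOCK // 2]

def encrypt(key: bytes, block: bytes) -> bytes:
    L, R = block[:BLOCK // 2], block[BLOCK // 2:]
    for rnd in range(4):
        L, R = R, bytes(a ^ b for a, b in zip(L, _f(key, R, rnd)))
    return L + R

def decrypt(key: bytes, block: bytes) -> bytes:
    L, R = block[:BLOCK // 2], block[BLOCK // 2:]
    for rnd in reversed(range(4)):
        L, R = bytes(a ^ b for a, b in zip(R, _f(key, L, rnd))), L
    return L + R

def correct(key: bytes, corrupted: bytes, plausible):
    """Best-effort recovery of a single-bit ciphertext error: flip each bit in
    turn and accept the first candidate whose decryption meets the criterion."""
    if plausible(decrypt(key, corrupted)):
        return corrupted                          # nothing to fix
    for i in range(len(corrupted) * 8):
        trial = bytearray(corrupted)
        trial[i // 8] ^= 1 << (i % 8)
        if plausible(decrypt(key, bytes(trial))):
            return bytes(trial)
    return None                                   # more than one bit in error

key = b"demo-key"
ct = encrypt(key, b"MSG:0000000001\x00\x00")
noisy = bytearray(ct); noisy[3] ^= 0x10           # one-bit channel error
assert correct(key, bytes(noisy), lambda p: p.startswith(b"MSG:")) == ct
```

As the abstract warns, the same search is exactly what a forger would run, which is why the security loss must stay predictable and small.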
Malavolta, Giulio, Moreno-Sanchez, Pedro, Kate, Aniket, Maffei, Matteo, Ravi, Srivatsan.  2017.  Concurrency and Privacy with Payment-Channel Networks. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :455–471.
Permissionless blockchain protocols such as Bitcoin are inherently limited in transaction throughput and latency. Current efforts to address this key issue focus on off-chain payment channels that can be combined in a Payment-Channel Network (PCN) to enable an unlimited number of payments without requiring access to the blockchain other than to register the initial and final capacity of each channel. While this approach paves the way for low latency and high throughput of payments, its deployment in practice raises several privacy concerns as well as technical challenges related to the inherently concurrent nature of payments that have not been sufficiently studied so far. In this work, we lay the foundations for privacy and concurrency in PCNs, presenting a formal definition in the Universal Composability framework as well as practical and provably secure solutions. In particular, we present Fulgor and Rayo. Fulgor is the first payment protocol for PCNs that provides provable privacy guarantees and is fully compatible with the Bitcoin scripting system. However, Fulgor is a blocking protocol and therefore prone to deadlocks of concurrent payments as in currently available PCNs. Instead, Rayo is the first protocol for PCNs that enforces non-blocking progress (i.e., at least one of the concurrent payments terminates). We show through a new impossibility result that non-blocking progress necessarily comes at the cost of weaker privacy. At the core of Fulgor and Rayo is Multi-Hop HTLC, a new smart contract, compatible with the Bitcoin scripting system, that provides conditional payments while reducing running time and communication overhead with respect to previous approaches. Our performance evaluation of Fulgor and Rayo shows that a payment with 10 intermediate users takes as few as 5 seconds, thereby demonstrating their feasibility to be deployed in practice.
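For orientation, here is a toy sketch of the classic hash time-locked contract (HTLC) that Multi-Hop HTLC refines: in a classic payment chain every hop is locked under the same hash condition, which is exactly the linkability the paper's construction removes. Class and method names are ours, not the paper's.

```python
import hashlib, time

class HTLC:
    """Toy hash time-locked contract: the payee is paid if it reveals the
    preimage of `digest` before `deadline`; otherwise the payer is refunded."""

    def __init__(self, amount: int, digest: bytes, deadline: float):
        self.amount, self.digest, self.deadline = amount, digest, deadline
        self.settled = False

    def claim(self, preimage: bytes, now: float) -> bool:
        ok = (not self.settled and now < self.deadline
              and hashlib.sha256(preimage).digest() == self.digest)
        self.settled = self.settled or ok          # funds released to the payee
        return ok

    def refund(self, now: float) -> bool:
        ok = not self.settled and now >= self.deadline
        self.settled = self.settled or ok          # funds returned to the payer
        return ok

# A 3-hop payment locked under one condition: revealing x settles every hop,
# with earlier hops given longer deadlines so each intermediary can react.
x = b"secret-preimage"
d = hashlib.sha256(x).digest()
hops = [HTLC(10, d, time.time() + 3600 * (3 - i)) for i in range(3)]
assert all(h.claim(x, time.time()) for h in hops)
```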
Li, BaoHong, Xu, Guoqing, Zhao, Yinliang.  2017.  Attribute-based Concurrent Signatures. Proceedings of the 6th International Conference on Information Engineering. :15:1–15:7.

This paper introduces the notion of attribute-based concurrent signatures. This primitive can be considered as an interesting extension of concurrent signatures in the attribute-based setting. It allows two parties to fairly exchange their signatures only if each of them has convinced the opposite party that it possesses certain attributes satisfying a given signing policy. Due to this new feature, this primitive can find useful applications in online contract signing, electronic transactions and so on. We formalize this notion and present a construction which is secure in the random oracle model under the Strong Diffie-Hellman assumption and the eXternal Diffie-Hellman assumption.

Laszka, Aron, Abbas, Waseem, Vorobeychik, Yevgeniy, Koutsoukos, Xenofon.  2017.  Synergic Security for Smart Water Networks: Redundancy, Diversity, and Hardening. Proceedings of the 3rd International Workshop on Cyber-Physical Systems for Smart Water Networks. :21–24.

Smart water networks can provide great benefits to our society in terms of efficiency and sustainability. However, smart capabilities and connectivity also expose these systems to a wide range of cyber attacks, which enable cyber-terrorists and hostile nation states to mount cyber-physical attacks. Cyber-physical attacks against critical infrastructure, such as water treatment and distribution systems, pose a serious threat to public safety and health. Consequently, it is imperative that we improve the resilience of smart water networks. We consider three approaches for improving resilience: redundancy, diversity, and hardening. Even though each one of these "canonical" approaches has been thoroughly studied in prior work, a unified theory on how to combine them in the most efficient way has not yet been established. In this paper, we address this problem by studying the synergy of these approaches in the context of protecting smart water networks from cyber-physical contamination attacks.

Bailer, Werner.  2017.  Efficient Approximate Medoids of Temporal Sequences. Proceedings of the 15th International Workshop on Content-Based Multimedia Indexing. :3:1–3:6.
In order to compactly represent a set of data, its medoid (the element with minimum summed distance to all other elements) is a useful choice. This has applications in clustering, compression and visualisation of data. In multimedia data, the set of data is often sampled as a sequence in time or space, such as a video shot or views of a scene. The exact calculation of the medoid may be costly, especially if the distance function between elements is not trivial. While approximation methods for medoid selection exist, we show in this work that they do not perform well on sequences of images. We thus propose a novel algorithm for efficiently selecting an approximate medoid of a temporal sequence and assess its performance on two large-scale video data sets.
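For reference, the exact medoid this paper approximates is the arg-min of summed pairwise distances, which costs O(n^2) distance evaluations. A small numpy sketch of that baseline, with Euclidean distance standing in for a nontrivial frame distance:

```python
import numpy as np

def exact_medoid(X: np.ndarray) -> int:
    """Index of the element with minimum summed distance to all others.
    Costs O(n^2) distance evaluations, the baseline the paper approximates."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    return int(np.argmin(D.sum(axis=1)))

# Frames of a synthetic "video shot" drifting through feature space.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 64)).cumsum(axis=0)
print(exact_medoid(frames))        # the frame most central to the whole sequence
```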
Tian, Sen, Ye, Songtao, Iqbal, Muhammad Faisal Buland, Zhang, Jin.  2017.  A New Approach to the Block-based Compressive Sensing. Proceedings of the 2017 International Conference on Computer Graphics and Digital Image Processing. :21:1–21:5.
The traditional block-based compressive sensing (BCS) approach considers the image to be segmented. However, there is not much literature available on how many blocks or segments per image would be the best choice for the compression and recovery methods. In this article, we propose a BCS method to find the optimal way of image retrieval and the number of blocks into which an image should be divided. In the theoretical analysis, we analyze the effect of noise from a compression perspective and derive the range of the error probability. Experimental results show that the number of blocks of an image has a strong correlation with the image recovery process. As the sampling rate M/N increases, we can find the appropriate number of image blocks by comparing each line.
Yu, Chenhan D., Levitt, James, Reiz, Severin, Biros, George.  2017.  Geometry-oblivious FMM for Compressing Dense SPD Matrices. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. :53:1–53:14.
We present GOFMM (geometry-oblivious FMM), a novel method that creates a hierarchical low-rank approximation, or "compression," of an arbitrary dense symmetric positive definite (SPD) matrix. For many applications, GOFMM enables an approximate matrix-vector multiplication in N log N or even N time, where N is the matrix size. Compression requires N log N storage and work. In general, our scheme belongs to the family of hierarchical matrix approximation methods. In particular, it generalizes the fast multipole method (FMM) to a purely algebraic setting by only requiring the ability to sample matrix entries. Neither geometric information (i.e., point coordinates) nor knowledge of how the matrix entries have been generated is required, thus the term "geometry-oblivious." Also, we introduce a shared-memory parallel scheme for hierarchical matrix computations that reduces synchronization barriers. We present results on the Intel Knights Landing and Haswell architectures, and on the NVIDIA Pascal architecture for a variety of matrices.
Zheng, Yan, Phillips, Jeff M.  2017.  Coresets for Kernel Regression. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :645–654.
Kernel regression is an essential and ubiquitous tool for non-parametric data analysis, particularly popular among time series and spatial data. However, the central operation which is performed many times, evaluating a kernel on the data set, takes linear time. This is impractical for modern large data sets. In this paper we describe coresets for kernel regression: compressed data sets which can be used as proxy for the original data and have provably bounded worst case error. The size of the coresets is independent of the raw number of data points; rather it depends only on the error guarantee, and in some cases the size of the domain and the amount of smoothing. We evaluate our methods on very large time series and spatial data, and demonstrate that they incur negligible error, can be constructed extremely efficiently, and allow for great computational gains.
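The linear per-query cost the abstract refers to is visible in the plain Nadaraya-Watson estimator below (a hedged numpy sketch with a Gaussian kernel; a coreset would replace (X, y) by a much smaller weighted subset with provably bounded error):

```python
import numpy as np

def nadaraya_watson(xq, X, y, h=0.5):
    """Kernel regression at query points xq: every query touches every data
    point, the linear per-query cost that coresets are designed to avoid."""
    W = np.exp(-((xq[:, None] - X[None, :]) ** 2) / (2 * h * h))  # Gaussian kernel
    return (W @ y) / W.sum(axis=1)

rng = np.random.default_rng(1)
X = np.linspace(0, 10, 5000)
y = np.sin(X) + 0.1 * rng.normal(size=X.size)
print(nadaraya_watson(np.linspace(0, 10, 5), X, y))   # 5 x 5000 kernel evaluations
```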
Zhang, Kai, Liu, Chuanren, Zhang, Jie, Xiong, Hui, Xing, Eric, Ye, Jieping.  2017.  Randomization or Condensation?: Linear-Cost Matrix Sketching Via Cascaded Compression Sampling. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :615–623.
Matrix sketching is aimed at finding compact representations of a matrix while simultaneously preserving most of its properties, and is a fundamental building block in modern scientific computing. Randomized algorithms represent the state of the art and have attracted huge interest from the fields of machine learning, data mining, and theoretical computer science. However, they still require the use of the entire input matrix in producing desired factorizations, which can be a major computational and memory bottleneck in truly large problems. In this paper, we uncover an interesting theoretic connection between matrix low-rank decomposition and lossy signal compression, based on which a cascaded compression sampling framework is devised to approximate an m-by-n matrix in only O(m+n) time and space. Indeed, the proposed method accesses only a small number of matrix rows and columns, which significantly improves the memory footprint. Meanwhile, by sequentially teaming two rounds of approximation procedures and upgrading the sampling strategy from a uniform probability to more sophisticated, encoding-oriented sampling, significant algorithmic boosting is achieved to uncover more granular structures in the data. Empirical results on a wide spectrum of real-world, large-scale matrices show that by taking only linear time and space, the accuracy of our method rivals those of state-of-the-art randomized algorithms consuming a quadratic, O(mn), amount of resources.
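A rough flavour of row/column-sampling sketches that read only O(m+n) entries: the CUR-style sketch below uses uniform sampling, whereas the paper cascades two rounds and upgrades to encoding-oriented sampling. All names are illustrative, not the paper's method.

```python
import numpy as np

def sampled_low_rank(A, c=50, r=50, seed=0):
    """CUR-style sketch from uniformly sampled rows and columns: only those
    rows/columns of A are ever read, giving O(m + n) memory for the factors."""
    rng = np.random.default_rng(seed)
    cols = rng.choice(A.shape[1], size=c, replace=False)
    rows = rng.choice(A.shape[0], size=r, replace=False)
    C, R = A[:, cols], A[rows, :]                  # m x c and r x n panels
    U = np.linalg.pinv(A[np.ix_(rows, cols)])      # c x r core linking the panels
    return C, U, R                                 # A is approximated by C @ U @ R

rng = np.random.default_rng(1)
A = rng.normal(size=(2000, 40)) @ rng.normal(size=(40, 1500))   # rank-40 test matrix
C, U, R = sampled_low_rank(A)
print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))        # small relative error
```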
Birch, G. C., Woo, B. L., LaCasse, C. F., Stubbs, J. J., Dagel, A. L.  2017.  Computational optical physical unclonable functions. 2017 International Carnahan Conference on Security Technology (ICCST). :1–6.

Physical unclonable functions (PUFs) are devices which are easily probed but difficult to predict. Optical PUFs have been discussed within the literature, with traditional optical PUFs typically using spatial light modulators, coherent illumination, and scattering volumes; however, these systems can be large, expensive, and difficult to maintain alignment in practical conditions. We propose and demonstrate a new kind of optical PUF based on computational imaging and compressive sensing to address these challenges with traditional optical PUFs. This work describes the design, simulation, and prototyping of this computational optical PUF (COPUF) that utilizes incoherent polychromatic illumination passing through an additively manufactured refracting optical polymer element. We demonstrate the ability to pass information through a COPUF using a variety of sampling methods, including the use of compressive sensing. The sensitivity of the COPUF system is also explored. We explore non-traditional PUF configurations enabled by the COPUF architecture. The double COPUF system, which employs two serially connected COPUFs, is proposed and analyzed as a means to authenticate and communicate between two entities that have previously agreed to communicate. This configuration enables estimation of a message inversion key without the calculation of individual COPUF inversion keys at any point in the PUF life cycle. Our results show that it is possible to construct inexpensive optical PUFs using computational imaging. This could lead to new uses of PUFs in places where electrical PUFs cannot be utilized effectively, such as low-cost tags and seals, and potentially as authenticating and communicating devices.

Li, Q., Xu, B., Li, S., Liu, Y., Cui, D.  2017.  Reconstruction of measurements in state estimation strategy against cyber attacks for cyber physical systems. 2017 36th Chinese Control Conference (CCC). :7571–7576.

To improve the resilience of state estimation against cyber attacks, Compressive Sensing (CS) is applied to the reconstruction of incomplete measurements for cyber-physical systems. First, observability analysis is used to decide when to run the reconstruction and to assess the damage level from attacks. In particular, dictionary learning is proposed to form an over-complete dictionary by K-Singular Value Decomposition (K-SVD). Besides, due to the irregularity of incomplete measurements, a sampling matrix is designed as the measurement matrix. Finally, simulation experiments on a 6-bus power system illustrate that the proposed method reconstructs the incomplete measurements perfectly, outperforming the joint dictionary. When only 29% of the measurements remain available, the proposed method generalizes across four kinds of recovery algorithms.
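The recovery step can be pictured as sparse coding of the missing measurement vector over the learned dictionary. The sketch below is our own illustration under stated assumptions: the K-SVD training is omitted and a random D stands in for the learned over-complete dictionary, a selection matrix M keeps the 29% of measurements that survive, and Orthogonal Matching Pursuit (one of many possible recovery algorithms) recovers the sparse code. Whether recovery is exact depends on the usual CS conditions.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with y ~ Phi x."""
    residual, support = y.astype(float), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))     # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
n, atoms, kept = 100, 200, 29
D = rng.normal(size=(n, atoms)); D /= np.linalg.norm(D, axis=0)  # stand-in dictionary
x_true = np.zeros(atoms); x_true[rng.choice(atoms, 5, replace=False)] = 1.0
z = D @ x_true                                    # the full measurement vector
M = np.eye(n)[rng.choice(n, kept, replace=False)] # selects the surviving 29%
z_hat = D @ omp(M @ D, M @ z, k=5)                # reconstruct all 100 measurements
```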

Lagunas, E., Rugini, L.  2017.  Performance of compressive sensing based energy detection. 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC). :1–5.

This paper investigates closed-form expressions to evaluate the performance of the Compressive Sensing (CS) based Energy Detector (ED). The conventional way to approximate the probability density function of the ED test statistic invokes the central limit theorem and considers the decision variable as Gaussian. This approach, however, provides a good approximation only if the number of samples is large enough. This is not usually the case in the CS framework, where the goal is to keep the sample size low. Moreover, working with a reduced number of measurements is of practical interest for general spectrum sensing in cognitive radio applications, where the sensing time should be sufficiently short, since any time spent for sensing cannot be used for data transmission on the detected idle channels. In this paper, we make use of low-complexity approximations based on algebraic transformations of the one-dimensional Gaussian Q-function. More precisely, this paper provides new closed-form expressions for accurate evaluation of the CS-based ED performance as a function of the compressive ratio and the Signal-to-Noise Ratio (SNR). Simulation results demonstrate the increased accuracy of the proposed equations compared to existing works.
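For context, a textbook form of the CLT baseline the abstract criticizes (our sketch under standard assumptions of a Gaussian signal, not the paper's new expressions) treats the normalised energy statistic as Gaussian under both hypotheses, with M = κN compressive measurements (κ the compressive ratio) and SNR γ:

```latex
\[
  T = \frac{1}{\sigma^2}\sum_{i=1}^{M}\lvert y_i\rvert^2, \qquad
  P_{fa} \approx Q\!\left(\frac{\tau - M}{\sqrt{2M}}\right), \qquad
  P_{d} \approx Q\!\left(\frac{\tau - M(1+\gamma)}{\sqrt{2M}\,(1+\gamma)}\right),
\]
\[
  \text{with } Q(x) = \frac{1}{\sqrt{2\pi}}\int_{x}^{\infty} e^{-t^2/2}\,dt .
\]
```

The paper's contribution is replacing this small-M-inaccurate approximation with closed forms built from algebraic transformations of the Q-function; those expressions are not reproduced here.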

Xu, W., Yan, Z., Tian, Y., Cui, Y., Lin, J.  2017.  Detection with compressive measurements corrupted by sparse errors. 2017 9th International Conference on Wireless Communications and Signal Processing (WCSP). :1–5.

Compressed sensing can represent a sparse signal with a small number of measurements compared to Nyquist-rate samples. Considering the high complexity of reconstruction algorithms in CS, compressive detection has recently been proposed, which performs detection directly in the compressive domain without reconstruction. Different from existing work that generally considers measurements corrupted by dense noise, this paper studies the compressive detection problem when the measurements are corrupted by both dense noise and sparse errors. Sparse errors exist in many practical systems, such as those affected by impulse noise or narrowband interference. We derive the theoretical performance of compressive detection when the sparse error is either deterministic or random. The theoretical results are further verified by simulations.

Ming, X., Shu, T., Xianzhong, X.  2017.  An energy-efficient wireless image transmission method based on adaptive block compressive sensing and softcast. 2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC). :712–717.

With the rapid and radical evolution of information and communication technology, energy consumption for wireless communication is growing at a staggering rate, especially for wireless multimedia communication. Recently, reducing energy consumption in wireless multimedia communication has attracted increasing attention. In this paper, we propose an energy-efficient wireless image transmission scheme based on adaptive block compressive sensing (ABCS) and SoftCast, which is called ABCS-SoftCast. In ABCS-SoftCast, the compression distortion and transmission distortion are considered in a joint manner, and the energy-distortion model is formulated for each image block. Then, the sampling rate (SR) and power allocation factors of each image block are optimized simultaneously. Compared with the conventional SoftCast scheme, experimental results demonstrate that the energy consumption can be greatly reduced even when the received image qualities are approximately the same.

2018-07-18
Kreimel, Philipp, Eigner, Oliver, Tavolato, Paul.  2017.  Anomaly-Based Detection and Classification of Attacks in Cyber-Physical Systems. Proceedings of the 12th International Conference on Availability, Reliability and Security. :40:1–40:6.

Cyber-physical systems are found in industrial and production systems, as well as critical infrastructures. Due to the increasing integration of IP-based technology and standard computing devices, the threat of cyber-attacks on cyber-physical systems has vastly increased. Furthermore, traditional intrusion defense strategies for IT systems are often not applicable in operational environments. In this paper we present an anomaly-based approach for detection and classification of attacks in cyber-physical systems. To test our approach, we set up a test environment with sensors, actuators and controllers widely used in industry, thus providing system data as close as possible to reality. First, anomaly detection is used to define a model of normal system behavior by calculating outlier scores from normal system operations. This valid behavior model is then compared with new data in order to detect anomalies. Further, we trained an attack model, based on supervised attacks against the test setup, using the naive Bayes classifier. If an anomaly is detected, the classification process tries to classify the anomaly by applying the attack model and calculating prediction confidences for trained classes. To evaluate the statistical performance of our approach, we tested the model by applying an unlabeled dataset, which contains valid and anomalous data. The results show that this approach was able to detect and classify such attacks with satisfactory accuracy.
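The two-stage pipeline described here — an anomaly model learned from normal operation, followed by a naive Bayes attack classifier with prediction confidences — can be sketched as below. IsolationForest is our stand-in for the paper's outlier-score model, and all data and names are synthetic.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(3)
normal_ops = rng.normal(0, 1, size=(1000, 4))      # readings from normal operation
attack_x = rng.normal(4, 1, size=(60, 4))          # labelled attack traces
attack_y = rng.integers(0, 2, size=60)             # two known attack classes

detector = IsolationForest(random_state=0).fit(normal_ops)  # normal-behaviour model
attack_model = GaussianNB().fit(attack_x, attack_y)         # supervised attack model

new_batch = np.vstack([rng.normal(0, 1, size=(5, 4)), rng.normal(4, 1, size=(5, 4))])
for x in new_batch:
    x = x.reshape(1, -1)
    if detector.predict(x)[0] == -1:               # stage 1: flagged as anomalous
        cls = attack_model.predict(x)[0]           # stage 2: classify the anomaly
        conf = attack_model.predict_proba(x).max() # prediction confidence
        print(f"anomaly -> attack class {cls} (confidence {conf:.2f})")
```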

2018-07-13
Carmen Cheh, University of Illinois at Urbana-Champaign, Ken Keefe, University of Illinois at Urbana-Champaign, Brett Feddersen, University of Illinois at Urbana-Champaign, Binbin Chen, Advanced Digital Sciences Center, Singapore, William G. Temple, Advanced Digital Sciences Center, Singapore, William H. Sanders, University of Illinois at Urbana-Champaign.  2017.  Developing Models for Physical Attacks in Cyber-Physical Systems. ACM Workshop on Cyber-Physical Systems Security and Privacy.

In this paper, we analyze the security of cyber-physical systems using the ADversary VIew Security Evaluation (ADVISE) meta modeling approach, taking into consideration the effects of physical attacks. To build our model of the system, we construct an ontology that describes the system components and the relationships among them. The ontology also defines attack steps that represent cyber and physical actions that affect the system entities. We apply the ADVISE meta modeling approach, which admits as input our defined ontology, to a railway system use case to obtain insights regarding the system's security. The ADVISE Meta tool takes in a system model of a railway station and generates an attack execution graph that shows the actions that adversaries may take to reach their goal. We consider several adversary profiles, ranging from outsiders to insider staff members, and compare their attack paths in terms of targeted assets, time to achieve the goal, and probability of detection. The generated results show that even adversaries with access to noncritical assets can affect system service by intelligently crafting their attacks to trigger a physical sequence of effects. We also identify the physical devices and user actions that require more in-depth monitoring to reinforce the system's security.