Biblio

Found 12044 results

Filters: Keyword is Resiliency
2017-05-16
Calix, Ricardo A., Cabrera, Armando, Iqbal, Irshad.  2016.  Analysis of Parallel Architectures for Network Intrusion Detection. Proceedings of the 5th Annual Conference on Research in Information Technology. :7–12.

Intrusion detection systems need to be both accurate and fast. Speed is especially important when operating at the network level. Additionally, many intrusion detection systems rely on signature-based detection approaches. However, machine learning can also be helpful for intrusion detection. One key challenge when using machine learning, aside from detection accuracy, is finding machine learning algorithms that are fast. In this paper, several processing architectures are considered for use in machine-learning-based intrusion detection systems. These architectures include standard CPUs, GPUs, and cognitive processors. Their processing speeds are compared and discussed.

Laszka, Aron, Abbas, Waseem, Sastry, S. Shankar, Vorobeychik, Yevgeniy, Koutsoukos, Xenofon.  2016.  Optimal Thresholds for Intrusion Detection Systems. Proceedings of the Symposium and Bootcamp on the Science of Security. :72–81.

In recent years, we have seen a number of successful attacks against high-profile targets, some of which have even caused severe physical damage. These examples have shown us that resourceful and determined attackers can penetrate virtually any system, even those that are secured by an "air gap." Consequently, in order to minimize the impact of stealthy attacks, defenders have to focus not only on strengthening the first lines of defense but also on deploying effective intrusion detection systems. Intrusion detection systems can play a key role in protecting sensitive computer systems since they give defenders a chance to detect and mitigate attacks before they can cause substantial losses. However, an over-sensitive intrusion detection system, which produces a large number of false alarms, imposes prohibitively high operational costs on a defender since alarms need to be manually investigated. Thus, defenders have to strike the right balance between maximizing security and minimizing costs. Optimizing the sensitivity of intrusion detection systems is especially challenging when multiple interdependent computer systems have to be defended against a strategic attacker, who can target computer systems in order to maximize losses and minimize the probability of detection. We model this scenario as an attacker-defender security game and study the problem of finding optimal intrusion detection thresholds.
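
The cost trade-off at the heart of the threshold problem can be made concrete with a toy calculation. The sketch below is not the paper's attacker-defender game; it assumes a single system, Gaussian detector scores, and invented costs, and simply picks the threshold that minimizes the defender's expected cost.

```python
# Toy threshold optimization for a single IDS, assuming benign scores ~ N(0,1)
# and attack scores ~ N(3,1); costs and the attack prior are invented numbers.
import numpy as np
from scipy.stats import norm

C_FALSE_ALARM = 1.0    # cost of manually investigating one false alarm
C_UNDETECTED = 500.0   # loss if an attack goes undetected
ATTACK_PROB = 0.01     # prior probability that an attack is present

def expected_cost(tau):
    false_alarm_rate = 1.0 - norm.cdf(tau, loc=0.0, scale=1.0)
    miss_prob = norm.cdf(tau, loc=3.0, scale=1.0)
    return ((1 - ATTACK_PROB) * C_FALSE_ALARM * false_alarm_rate
            + ATTACK_PROB * C_UNDETECTED * miss_prob)

taus = np.linspace(-2, 6, 801)
best = taus[np.argmin([expected_cost(t) for t in taus])]
print(f"optimal threshold ~ {best:.2f}, expected cost {expected_cost(best):.4f}")
```

Raising either cost moves the optimal threshold in the expected direction; this single-system intuition is what the paper's game-theoretic, multi-system model generalizes.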

Yuan, Yali, Kaklamanos, Georgios, Hogrefe, Dieter.  2016.  A Novel Semi-Supervised Adaboost Technique for Network Anomaly Detection. Proceedings of the 19th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems. :111–114.

With the development of the Internet, network intrusions have become more and more common. Quickly identifying and preventing network attacks is increasingly important and difficult. Machine learning techniques have already proven to be robust methods for detecting malicious activities and network threats. Ensemble-based and semi-supervised learning methods are some of the areas that receive the most attention in machine learning today; however, relatively little attention has been given to combining them. To overcome this limitation, this paper proposes a novel network anomaly detection method that combines a tri-training approach with Adaboost algorithms. The bootstrap samples of tri-training are replaced by three different Adaboost algorithms to create diversity. We run 30 iterations for every simulation to obtain average results. Simulations indicate that our proposed semi-supervised Adaboost algorithm is reproducible and consistent over a different number of runs. It outperforms other state-of-the-art learning algorithms, even with only a small portion of labeled data in the training phase. Specifically, it has a very short execution time and a good balance between the detection rate and the false-alarm rate.
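
A rough sketch of the described combination, with scikit-learn's AdaBoostClassifier standing in for the three boosted learners: each classifier is grown by pseudo-labelling unlabelled samples on which the other two agree. The sampling and stopping rules of real tri-training are simplified here, and the dataset is synthetic.

```python
# Tri-training with three AdaBoost base learners on a synthetic dataset:
# 200 labelled points, 1800 unlabelled, a few rounds of pseudo-labelling.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled, unlabeled = np.arange(200), np.arange(200, 2000)

clfs = [AdaBoostClassifier(n_estimators=50, random_state=s) for s in range(3)]
train_idx = [labeled.copy() for _ in range(3)]
train_lab = [y[labeled].copy() for _ in range(3)]

for _ in range(5):                          # a few tri-training rounds
    for i in range(3):
        clfs[i].fit(X[train_idx[i]], train_lab[i])
    for i in range(3):
        j, k = (i + 1) % 3, (i + 2) % 3
        pj = clfs[j].predict(X[unlabeled])
        pk = clfs[k].predict(X[unlabeled])
        agree = pj == pk
        # add unanimously pseudo-labelled points to classifier i's pool
        train_idx[i] = np.concatenate([labeled, unlabeled[agree]])
        train_lab[i] = np.concatenate([y[labeled], pj[agree]])

votes = sum(clf.predict(X) for clf in clfs)
print("majority-vote accuracy:", ((votes >= 2) == y).mean())
```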

Kleinmann, Amit, Wool, Avishai.  2016.  Automatic Construction of Statechart-Based Anomaly Detection Models for Multi-Threaded SCADA via Spectral Analysis. Proceedings of the 2nd ACM Workshop on Cyber-Physical Systems Security and Privacy. :1–12.

Traffic of Industrial Control Systems (ICS) between the Human Machine Interface (HMI) and the Programmable Logic Controller (PLC) is highly periodic. However, it is sometimes multiplexed, due to multi-threaded scheduling. In previous work we introduced a Statechart model which includes multiple Deterministic Finite Automata (DFA), one per cyclic pattern. We demonstrated that Statechart-based anomaly detection is highly effective on multiplexed cyclic traffic when the individual cyclic patterns are known. The challenge is to construct the Statechart, by unsupervised learning, from a captured trace of the multiplexed traffic, especially when the same symbols (ICS messages) can appear in multiple cycles, or multiple times in a cycle. Previously we suggested a combinatorial approach for the Statechart construction, based on Euler cycles in the Discrete Time Markov Chain (DTMC) graph of the trace. This combinatorial approach worked well in simple scenarios, but produced an excessive false-alarm rate on more complex multiplexed traffic. In this paper we suggest a new Statechart construction method, based on spectral analysis. We use the Fourier transform to identify the dominant periods in the trace. Our algorithm then associates a set of symbols with each dominant period, identifies the order of the symbols within each period, and creates the cyclic DFAs and the Statechart. We evaluated our solution on long traces from two production ICS: one using the Siemens S7-0x72 protocol and the other using Modbus. We also stress-tested our algorithms on a collection of synthetically-generated traces that simulate multiplexed ICS traces with varying levels of symbol uniqueness and time overlap. The resulting Statecharts model the traces with an overall median false-alarm rate as low as 0.16% on the synthetic datasets, and with zero false alarms on production S7-0x72 traffic. Moreover, the spectral analysis Statecharts consistently outperformed the previous combinatorial Statecharts, exhibiting significantly lower false-alarm rates and more compact model sizes.
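
The spectral step can be illustrated in a few lines: Fourier-transform each symbol's 0/1 occurrence signal and read off its dominant period. The trace below is synthetic, and the subsequent DFA/Statechart construction is not shown.

```python
# Toy illustration of the spectral step: recover each symbol's cycle period
# from its 0/1 occurrence signal. The trace mixes two synthetic cycles.
import numpy as np

n = 4096
trace = np.array(["idle"] * n, dtype=object)
trace[::50] = "read_coils"        # cyclic pattern with period 50
trace[7::120] = "write_register"  # a second, slower cycle with period 120

freqs = np.fft.rfftfreq(n, d=1.0)
for symbol in ("read_coils", "write_register"):
    indicator = (trace == symbol).astype(float)
    spectrum = np.abs(np.fft.rfft(indicator - indicator.mean()))
    strong = spectrum[1:] > 0.5 * spectrum[1:].max()
    fundamental = freqs[1:][np.argmax(strong)]  # first strong peak, skip DC
    print(f"{symbol}: dominant period ~ {1.0 / fundamental:.1f} messages")
```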

AlEroud, Ahmed, Karabatis, George.  2016.  Beyond Data: Contextual Information Fusion for Cyber Security Analytics. Proceedings of the 31st Annual ACM Symposium on Applied Computing. :73–79.

A major challenge for existing attack detection approaches is identifying the information relevant to a particular situation and using that information to perform multi-evidence intrusion detection. Addressing this limitation requires integrating several aspects of context to better predict, avoid, and respond to impending attacks. The quality and adequacy of contextual information is important to decrease uncertainty and correctly identify potential cyber-attacks. In this paper, a systematic methodology has been used to identify contextual dimensions that improve the effectiveness of detecting cyber-attacks. This methodology combines graph, probability, and information theories to create several context-based attack prediction models that analyze data at both high and low levels. An extensive validation of our approach has been performed using a prototype system and several benchmark intrusion detection datasets, yielding very promising results.

Yin, Shang-Nan, Kang, Ho-Seok, Chen, Zhi-Guo, Kim, Sung-Ryul.  2016.  Intrusion Detection System Based on Complex Event Processing in RFID Middleware. Proceedings of the International Conference on Research in Adaptive and Convergent Systems. :125–129.

Radio Frequency Identification (RFID) technology has been applied in many fields, such as tracking products through supply chains, electronic passports (ePassports), proximity cards, etc. Most companies choose low-cost RFID tags. However, these tags have almost no security mechanisms, so criminals can easily clone them and obtain the user's permissions. In this paper, we aim at more efficient detection of cloned proximity cards and design a real-time intrusion detection system based on a Complex Event Processing tool (Esper) in the RFID middleware. We detect cloned tags by training our system on the user's habits. When anomalous behavior that may indicate cloned tags is detected, a notification is sent to the user. We discuss the reliability of this intrusion detection system and describe in detail how it works.
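
A hypothetical illustration (plain Python rather than Esper EPL) of the kind of rule such a CEP engine could evaluate: the same card appearing at two readers faster than a person could travel between them suggests a clone. The reader distances and speed bound are invented.

```python
# Flag a proximity card seen at two readers faster than plausible travel;
# distances and the speed bound are invented, and this is ordinary Python,
# not Esper EPL.
READER_DISTANCE_M = {("gate_A", "gate_B"): 500, ("gate_B", "gate_A"): 500}
MAX_SPEED_M_S = 10.0                 # generous bound on human movement

last_seen = {}                       # tag id -> (reader, timestamp)

def on_read(tag, reader, ts):
    if tag in last_seen:
        prev_reader, prev_ts = last_seen[tag]
        dist = READER_DISTANCE_M.get((prev_reader, reader))
        if dist and ts > prev_ts and dist / (ts - prev_ts) > MAX_SPEED_M_S:
            print(f"ALERT: tag {tag} may be cloned "
                  f"({prev_reader} -> {reader} in {ts - prev_ts:.0f} s)")
    last_seen[tag] = (reader, ts)

on_read("card42", "gate_A", 0.0)
on_read("card42", "gate_B", 20.0)    # 500 m in 20 s = 25 m/s: alert fires
```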

Ghaeini, Hamid Reza, Tippenhauer, Nils Ole.  2016.  HAMIDS: Hierarchical Monitoring Intrusion Detection System for Industrial Control Systems. Proceedings of the 2nd ACM Workshop on Cyber-Physical Systems Security and Privacy. :103–111.

In this paper, we propose a hierarchical monitoring intrusion detection system (HAMIDS) for industrial control systems (ICS). The HAMIDS framework detects anomalies in both level 0 and level 1 of an industrial control plant. In addition, the framework aggregates the cyber-physical process data at one point for further analysis as part of the intrusion detection process. The novelty of this framework is its ability to detect anomalies that have a distributed impact on the cyber-physical process. The performance of the proposed framework was evaluated as part of the SWaT security showdown (S3), in which six international teams were invited to test the framework in a real industrial control system. The proposed framework outperformed the other academic IDSs in terms of detecting ICS threats during the S3 event, which was held July 25-29, 2016 at the Singapore University of Technology and Design.

Pearson, Carl J., Welk, Allaire K., Boettcher, William A., Mayer, Roger C., Streck, Sean, Simons-Rudolph, Joseph M., Mayhorn, Christopher B.  2016.  Differences in Trust Between Human and Automated Decision Aids. Proceedings of the Symposium and Bootcamp on the Science of Security. :95–98.

Humans can easily find themselves in high-cost situations where they must choose between suggestions made by an automated decision aid and a conflicting human decision aid. Previous research indicates that humans often rely on automation or other humans, but not both simultaneously. Expanding on previous work conducted by Lyons and Stokes (2012), the current experiment measures how trust in automated or human decision aids differs along with perceived risk and workload. The simulated task required 126 participants to choose the safest route for a military convoy; they were presented with conflicting information from an automated tool and a human. Results demonstrated that as workload increased, trust in automation decreased. As perceived risk increased, trust in the human decision aid increased. Individual differences in dispositional trust correlated with increased trust in both decision aids. These findings can be used to inform training programs for operators who may receive information from human and automated sources. Examples of this context include air traffic control, aviation, and signals intelligence.

Chen, Di, Zhang, Qin.  2016.  Streaming Algorithms for Robust Distinct Elements. Proceedings of the 2016 International Conference on Management of Data. :1433–1447.

We study the problem of estimating distinct elements in the data stream model, which has a central role in traffic monitoring, query optimization, data mining, and data integration. Different from all previous work, we study the problem in the noisy data setting, where two different-looking items in the stream may reference the same entity (as determined by a distance function and a threshold value), and the goal is to estimate the number of distinct entities in the stream. In this paper, we formalize the problem of robust distinct elements, and develop space- and time-efficient streaming algorithms for datasets in the Euclidean space, using a novel technique we call bucket sampling. We also extend our algorithmic framework to other metric spaces by establishing a connection between bucket sampling and the theory of locality sensitive hashing. Moreover, we formally prove that our algorithms are still effective under small distinct-elements ambiguity. Our experiments demonstrate the practicality of our algorithms.
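
The LSH-bucket idea behind the approach can be sketched by snapping noisy points to a grid (a simple Euclidean LSH) and counting distinct cells. The paper's bucket-sampling estimator is considerably more careful; the data below are synthetic and the radius is an illustrative choice.

```python
# Snap noisy duplicates to grid cells of side 2r and count distinct cells as
# a proxy for distinct entities; points straddling a cell boundary cause a
# small overcount, which the paper's estimator is designed to control.
import numpy as np

rng = np.random.default_rng(1)
r = 0.05                                          # duplicate radius
entities = rng.uniform(0, 10, size=(100, 2))      # 100 "true" entities
stream = entities[rng.integers(0, 100, 5000)]     # stream of references
stream = stream + rng.normal(0, r / 4, stream.shape)  # noisy duplicates

cells = {tuple(c) for c in np.floor(stream / (2 * r)).astype(int)}
print("estimated distinct entities:", len(cells))  # close to 100
```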

Shin, Mincheol, Roh, Hongchan, Jung, Wonmook, Park, Sanghyun.  2016.  Optimizing Hash Partitioning for Solid State Drives. Proceedings of the 31st Annual ACM Symposium on Applied Computing. :1000–1007.

The use of flashSSDs has increased rapidly in a wide range of areas due to their superior energy efficiency, shorter access time, and higher bandwidth compared to HDDs. The internal parallelism created by the multiple flash memory packages embedded in a flashSSD is one of the unique features of flashSSDs. Many new DBMS technologies have been developed for flashSSDs, but query processing for flashSSDs has drawn less attention than other DBMS technologies. Hash partitioning is commonly used in query processing algorithms to materialize intermediate results efficiently. In this paper, we propose a novel hash partitioning algorithm that exploits the internal parallelism of flashSSDs. The devised hash partitioning method outperforms the traditional hash partitioning technique regardless of the amount of available main memory and independently of the buffer management strategy (blocked I/O vs. page-sized I/O). We implemented our method based on the source code of the PostgreSQL storage manager. PostgreSQL relation files created by the TPC-H workload were employed in the experiments. Our method was found to be up to 3.55 times faster than the traditional method with blocked I/O, and 2.36 times faster than the traditional method with page-sized I/O.
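
For readers unfamiliar with the baseline, the sketch below shows plain hash partitioning in miniature; the paper's contribution, keeping several partition flushes in flight to exploit the flashSSD's internal parallelism, is only hinted at in a comment. The file layout and buffer size are invented.

```python
# Plain hash partitioning: tuples are routed to partitions by a hash of the
# key and buffered, and each buffer is flushed to its partition file when
# full. This is the baseline the flashSSD-aware algorithm improves on.
import os, tempfile

NUM_PARTITIONS = 8
BUFFER_TUPLES = 1024

outdir = tempfile.mkdtemp()
buffers = [[] for _ in range(NUM_PARTITIONS)]

def flush(p):
    # The flashSSD-aware variant would keep several partition flushes in
    # flight at once (e.g. via async I/O) instead of this blocking write.
    with open(os.path.join(outdir, f"part{p}.csv"), "a") as f:
        f.writelines(buffers[p])
    buffers[p].clear()

def emit(key, value):
    p = hash(key) % NUM_PARTITIONS
    buffers[p].append(f"{key},{value}\n")
    if len(buffers[p]) >= BUFFER_TUPLES:
        flush(p)

for i in range(100_000):
    emit(i * 2654435761 % 2**32, i)   # scrambled keys as stand-in tuples
for p in range(NUM_PARTITIONS):
    flush(p)
print("partition files written to", outdir)
```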

Bandyopadhyay, Bortik, Fuhry, David, Chakrabarti, Aniket, Parthasarathy, Srinivasan.  2016.  Topological Graph Sketching for Incremental and Scalable Analytics. Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. :1231–1240.

We propose a novel, scalable, and principled graph sketching technique based on minwise hashing of local neighborhoods. For an n-node graph with e edges (e >> n), we incrementally maintain in real time a minwise neighbor-sampled subgraph using k hash functions in O(n x k) memory, with the limit user-configurable via the parameter k. Symmetrization and similarity-based techniques can recover from these data structures a significant portion of the original graph. We present a theoretical analysis of the minwise sampling strategy and also derive unbiased estimators for important graph properties such as triangle count and neighborhood overlap. We perform an extensive empirical evaluation of our graph sketch and its derivatives on a wide variety of real-world graph data sets drawn from different application domains, using important large network analysis algorithms: local and global clustering coefficient, PageRank, and local graph sparsification. With bounded memory, the quality of results using the sketch representation is competitive against baselines which use the full graph, and the computational performance is often better. Our framework is flexible and configurable to be leveraged by numerous other graph analytics algorithms, potentially reducing the information mining time on large streamed graphs for a variety of applications.
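
A minimal sketch of the minwise neighbor sampling primitive, assuming Python's built-in hash as the hash family: each node keeps, per hash function, its minimum-hash neighbor, and signature agreement between two nodes estimates the Jaccard overlap of their neighborhoods in O(n x k) memory.

```python
# Minwise signatures of node neighborhoods; agreement rate across k hash
# functions estimates neighborhood Jaccard similarity.
import random

K = 64  # number of hash functions (the user-configurable memory knob)
random.seed(0)
salts = [random.getrandbits(32) for _ in range(K)]

def minhash_sig(neighbors):
    return [min(hash((salt, v)) for v in neighbors) for salt in salts]

adj = {
    "a": {"b", "c", "d", "e"},
    "b": {"a", "c", "d", "f"},
}
sig = {u: minhash_sig(vs) for u, vs in adj.items()}
overlap = sum(x == y for x, y in zip(sig["a"], sig["b"])) / K
print(f"estimated neighborhood Jaccard(a, b) ~ {overlap:.2f}")  # true = 1/3
```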

Yan, Ting-Kun, Xu, Xin-Shun, Guo, Shanqing, Huang, Zi, Wang, Xiao-Lin.  2016.  Supervised Robust Discrete Multimodal Hashing for Cross-Media Retrieval. Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. :1271–1280.

Recently, multimodal hashing techniques have received considerable attention due to their low storage cost and fast query speed for multimodal data retrieval. Many methods have been proposed; however, there are still some problems that need to be further considered. For example, some of these methods just use a similarity matrix for learning hash functions, which discards some useful information contained in the original data; some of them relax the binary constraints or separate the process of learning hash functions and binary codes into two independent stages to bypass the obstacle of handling the discrete constraints on binary codes during optimization, which may generate large quantization error; and some of them are not robust to noise. All these problems may degrade the performance of a model. To address these problems, in this paper we propose a novel supervised hashing framework for cross-modal retrieval, i.e., Supervised Robust Discrete Multimodal Hashing (SRDMH). Specifically, SRDMH tries to make the final binary codes preserve the same label information as the original data so that it can leverage more label information to supervise the binary code learning. In addition, it learns hash functions and binary codes directly instead of relaxing the binary constraints, so as to avoid the large quantization error problem. Moreover, to make it robust and easy to solve, we further integrate a flexible l2,p loss with nonlinear kernel embedding and an intermediate representation of each instance. Finally, an alternating algorithm is proposed to solve the optimization problem in SRDMH. Extensive experiments are conducted on three benchmark data sets. The results demonstrate that the proposed method (SRDMH) outperforms or is comparable to several state-of-the-art methods for the cross-modal retrieval task.

Xu, Xing, Shen, Fumin, Yang, Yang, Shen, Heng Tao.  2016.  Discriminant Cross-modal Hashing. Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval. :305–308.

Hashing based methods have attracted considerable attention for efficient cross-modal retrieval on large-scale multimedia data. The core problem of cross-modal hashing is how to effectively integrate heterogeneous features from different modalities to learn hash functions using available supervising information, e.g., class labels. Existing hashing based methods generally project heterogeneous features into a common space for hash code generation, and the supervising information is incrementally used to improve performance. However, these methods may produce ineffective hash codes, due to the failure to explore the discriminative property of the supervising information and to effectively bridge the semantic gap between different modalities. To address these challenges, we propose a novel hashing based method in a linear classification framework, in which the proposed method learns modality-specific hash functions for generating unified binary codes, and these binary codes are viewed as representative features for discriminative classification with class labels. An effective optimization algorithm is developed for the proposed method to jointly learn the modality-specific hash functions, the unified binary codes, and a linear classifier. Extensive experiments on three benchmark datasets highlight the advantage of the proposed method and show that it achieves state-of-the-art performance.

Anh, Pham Nguyen Quang, Fan, Rui, Wen, Yonggang.  2016.  Balanced Hashing and Efficient GPU Sparse General Matrix-Matrix Multiplication. Proceedings of the 2016 International Conference on Supercomputing. :36:1–36:12.

General sparse matrix-matrix multiplication (SpGEMM) is a core component of many algorithms. A number of recent works have used high throughput graphics processing units (GPUs) to accelerate SpGEMM. However, exploiting the power of GPUs for SpGEMM requires addressing a number of challenges, including highly imbalanced workloads and large numbers of inefficient random global memory accesses. This paper presents a SpGEMM algorithm which uses several novel techniques to overcome these problems. We first propose two low cost methods to achieve perfect load balancing during the most expensive step in SpGEMM. Next, we show how to eliminate nearly all random global memory accesses using shared memory based hash tables. To optimize the performance of the hash tables, we propose a lightweight method to estimate the number of nonzeros in the output matrix. We compared our algorithm to the CUSP, CUSPARSE and the state-of-the-art BHSPARSE GPU SpGEMM algorithms, and show that it performs 5.6x, 2.4x and 1.5x better on average, and up to 11.8x, 9.5x and 2.5x better in the best case, respectively. Furthermore, we show that our algorithm performs especially well on highly imbalanced and unstructured matrices.
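
The shared-memory hash tables at the core of the algorithm have a simple CPU analogue: accumulate each output row of C = A @ B in a per-row hash table. The sketch below uses scipy CSR arrays and omits the paper's load balancing and nonzero estimation.

```python
# CPU sketch of hash-accumulator SpGEMM: each output row of C = A @ B is
# built in a Python dict, playing the role of the GPU's shared-memory hash
# tables. Inputs are random sparse matrices.
import scipy.sparse as sp

A = sp.random(500, 400, density=0.01, format="csr", random_state=0)
B = sp.random(400, 300, density=0.01, format="csr", random_state=1)

rows = []
for i in range(A.shape[0]):
    acc = {}  # hash table: column index -> accumulated value
    for a_ptr in range(A.indptr[i], A.indptr[i + 1]):
        k, a_val = A.indices[a_ptr], A.data[a_ptr]
        for b_ptr in range(B.indptr[k], B.indptr[k + 1]):
            j = B.indices[b_ptr]
            acc[j] = acc.get(j, 0.0) + a_val * B.data[b_ptr]
    rows.append(acc)

nnz = sum(len(r) for r in rows)
print("nnz(C) =", nnz, "matches scipy:", nnz == (A @ B).nnz)
```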

Shrivastava, Anshumali, Konig, Arnd Christian, Bilenko, Mikhail.  2016.  Time Adaptive Sketches (Ada-Sketches) for Summarizing Data Streams. Proceedings of the 2016 International Conference on Management of Data. :1417–1432.

Obtaining frequency information from data streams in limited space is a well-recognized problem in the literature. A number of recent practical applications (such as those in computational advertising) require temporally-aware solutions: obtaining historical count statistics for both time-points and time-ranges. In these scenarios, accuracy of estimates is typically more important for recent instances than for older ones; we call this desirable property Time Adaptiveness. With this observation, [20] introduced the Hokusai technique based on count-min sketches for estimating the frequency of any given item at any given time. That approach is problematic in practice, as its memory requirements grow linearly with time and it produces discontinuities in the estimation accuracy. In this work, we describe a new method, Time-adaptive Sketches (Ada-sketch), that overcomes these limitations while extending and providing a strict generalization of several popular sketching algorithms. The core idea of our method is inspired by the well-known digital Dolby noise reduction procedure that dates back to the 1960s. The theoretical analysis presented could be of independent interest in itself, as it provides clear results for the time-adaptive nature of the errors. An experimental evaluation on real streaming datasets demonstrates the superiority of the described method over Hokusai in estimating point and range queries over time. The method is simple to implement and offers a variety of design choices for future extensions. The simplicity of the procedure and the method's generalization of classic sketching techniques give hope for wide applicability of Ada-sketches in practice.
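
The pre-emphasis idea can be shown with a count-min sketch whose updates at time t are scaled by a growing weight f(t) and whose point queries divide it back out, so collision error is relatively smaller for recent times. The parameters and the choice of f below are illustrative, not the paper's.

```python
# Count-min sketch with time-adaptive pre-emphasis: an update at time t is
# scaled by f(t) before insertion, and a point query for (item, t) divides
# it back out (the Dolby analogy).
import numpy as np

D, W = 4, 2048                      # sketch depth and width (illustrative)
table = np.zeros((D, W))
seeds = [17, 31, 57, 91]
f = lambda t: 1.01 ** t             # monotone pre-emphasis weight (a choice)

def update(item, t, count=1):
    for d, s in enumerate(seeds):
        table[d, hash((s, item, t)) % W] += f(t) * count

def estimate(item, t):
    return min(table[d, hash((s, item, t)) % W]
               for d, s in enumerate(seeds)) / f(t)

for t in range(1000):
    update(f"item{t % 50}", t)      # 50 items, one occurrence per round each
print("count of item7 at t=7 ~", round(estimate("item7", 7)))
```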

Yang, Yang, Luo, Yadan, Chen, Weilun, Shen, Fumin, Shao, Jie, Shen, Heng Tao.  2016.  Zero-Shot Hashing via Transferring Supervised Knowledge. Proceedings of the 2016 ACM on Multimedia Conference. :1286–1295.

Hashing has shown its efficiency and effectiveness in facilitating large-scale multimedia applications. Supervised knowledge (e.g., semantic labels or pair-wise relationships) associated with data is capable of significantly improving the quality of hash codes and hash functions. However, confronted with the rapid growth of newly-emerging concepts and multimedia data on the Web, existing supervised hashing approaches may easily suffer from the scarcity and validity of supervised information due to the expensive cost of manual labelling. In this paper, we propose a novel hashing scheme, termed zero-shot hashing (ZSH), which compresses images of "unseen" categories to binary codes with hash functions learned from limited training data of "seen" categories. Specifically, we project independent data labels (i.e., 0/1-form label vectors) into a semantic embedding space, where semantic relationships among all the labels can be precisely characterized and thus seen supervised knowledge can be transferred to unseen classes. Moreover, in order to cope with the semantic shift problem, we rotate the embedded space to more suitably align the embedded semantics with the low-level visual feature space, thereby alleviating the influence of the semantic gap. In the meantime, to exert positive effects on learning high-quality hash functions, we further propose to preserve local structural properties and the discrete nature of binary codes. Besides, we develop an efficient alternating algorithm to solve the ZSH model. Extensive experiments conducted on various real-life datasets show the superior zero-shot image retrieval performance of ZSH as compared to several state-of-the-art hashing methods.

Guo, Huan, Li, Zhengmin, Liu, Qingyun, Li, Jia, Zhou, Zhou, Sun, Bo.  2016.  A High Performance IPv6 Flow Table Lookup Algorithm Based on Hash. Proceedings of the 2016 ACM International on Workshop on Traffic Measurements for Cybersecurity. :35–39.

With rapidly increasing IPv6 network traffic, some network processing systems like DPI and firewalls cannot meet the demand of high network bandwidth. The hash-based flow table is one of the bottlenecks. In this paper, we measure the characteristics of IPv6 addresses and propose an entropy-based revised hash algorithm, which produces a better distribution within acceptable time. Moreover, we use a hierarchical hash strategy to further reduce hash table lookups, even in extreme cases.
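
The entropy measurement is easy to illustrate: compute per-byte entropy over observed addresses and fold only the informative bytes into the flow-table hash. The traffic below is simulated (a fixed /64 prefix with random interface identifiers), and the paper's revision scheme and hierarchical tables are not reproduced.

```python
# Measure per-byte entropy of observed IPv6 addresses and hash only the
# high-entropy bytes; with a fixed /64 prefix, bytes 8..15 carry the entropy.
import ipaddress, math, random
from collections import Counter

random.seed(0)
prefix = ipaddress.IPv6Address("2001:db8::").packed[:8]
addrs = [prefix + random.getrandbits(64).to_bytes(8, "big")
         for _ in range(10_000)]

def byte_entropy(pos):
    counts = Counter(a[pos] for a in addrs)
    n = len(addrs)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

hot = [p for p in range(16) if byte_entropy(p) > 1.0]
print("high-entropy byte positions:", hot)  # expect 8..15

def flow_hash(addr_bytes, width=1 << 16):
    # fold only the informative bytes into the bucket index
    return hash(bytes(addr_bytes[p] for p in hot)) % width

print("bucket for first address:", flow_hash(addrs[0]))
```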

2017-04-24
Yagan, Osman, Makowski, Armand M.  2016.  Wireless Sensor Networks Under the Random Pairwise Key Predistribution Scheme: Can Resiliency Be Achieved With Small Key Rings? IEEE/ACM Trans. Netw. 24:3383–3396.

We investigate the resiliency of wireless sensor networks against sensor capture attacks when the network uses the random pairwise key predistribution scheme of Chan et al. We present conditions on the model parameters so that the network is: (1) unassailable and (2) unsplittable, both with high probability, as the number n of sensor nodes becomes large. Both notions are defined against an adversary who has unlimited computing resources and full knowledge of the network topology, but can only capture a negligible fraction o(n) of sensors. We also show that the number of cryptographic keys needed to ensure unassailability and unsplittability under the pairwise key predistribution scheme is an order of magnitude smaller than it is under the key predistribution scheme of Eschenauer and Gligor.
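
For context, the random pairwise scheme being analyzed can be sketched as follows, with an illustrative K: each sensor is matched with K randomly chosen partners before deployment, and each matched pair receives its own unique key, so capturing one node reveals nothing about keys shared between other pairs.

```python
# Before deployment, pair each of n sensors with K random partners and give
# every matched pair its own unique key; n and K are illustrative choices.
import os, random

n, K = 1000, 20
random.seed(0)
key_ring = {i: {} for i in range(n)}   # node id -> {peer id: shared key}

for i in range(n):
    for j in random.sample([x for x in range(n) if x != i], K):
        if j not in key_ring[i]:
            key = os.urandom(16)       # unique pairwise key
            key_ring[i][j] = key
            key_ring[j][i] = key

avg = sum(len(r) for r in key_ring.values()) / n
print(f"average key-ring size: {avg:.1f} (vs. {n - 1} for full pairwise)")
# capturing one node exposes only its ~2K keys, none used by other pairs
```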

Laguna, Ignacio, Schulz, Martin, Richards, David F., Calhoun, Jon, Olson, Luke.  2016.  IPAS: Intelligent Protection Against Silent Output Corruption in Scientific Applications. Proceedings of the 2016 International Symposium on Code Generation and Optimization. :227–238.

This paper presents IPAS, an instruction duplication technique that protects scientific applications from silent data corruption (SDC) in their output. The motivation for IPAS is that, due to natural error masking, only a subset of SDC errors actually affects the output of scientific codes—we call these errors silent output corruption (SOC) errors. Thus applications require duplication only on code that, when affected by a fault, yields SOC. We use machine learning to learn code instructions that must be protected to avoid SOC, and, using a compiler, we protect only those vulnerable instructions by duplication, thus significantly reducing the overhead that is introduced by instruction duplication. In our experiments with five workloads, IPAS reduces the percentage of SOC by up to 90% with a slowdown that ranges between 1.04x and 1.35x, which corresponds to as much as 47% less slowdown than state-of-the-art instruction duplication techniques.

Levy, Scott, Ferreira, Kurt B..  2016.  An Examination of the Impact of Failure Distribution on Coordinated Checkpoint/Restart. Proceedings of the ACM Workshop on Fault-Tolerance for HPC at Extreme Scale. :35–42.

Fault tolerance is a key challenge to building the first exascale system. To understand the potential impacts of failures on next-generation systems, significant effort has been devoted to collecting, characterizing and analyzing failures on current systems. These studies require large volumes of data and complex analysis. Because the occurrence of failures in large-scale systems is unpredictable, failures are commonly modeled as a stochastic process. Failure data from current systems is examined in an attempt to identify the underlying probability distribution and its statistical properties. In this paper, we use modeling to examine the impact of failure distributions on the time-to-solution and the optimal checkpoint interval of applications that use coordinated checkpoint/restart. Using this approach, we show that as failures become more frequent, the failure distribution has a larger influence on application performance. We also show that as failure times are less tightly grouped (i.e., as the standard deviation increases) the underlying probability distribution has a greater impact on application performance. Finally, we show that computing the checkpoint interval based on the assumption that failures are exponentially distributed has a modest impact on application performance even when failures are drawn from a different distribution. Our work provides critical analysis and guidance to the process of analyzing failure data in the context of coordinated checkpoint/restart. Specifically, the data presented in this paper helps to distinguish cases where the failure distribution has a strong influence on application performance from those cases when the failure distribution has relatively little impact.
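
A useful reference point for this analysis is Young's classical first-order approximation for the optimal checkpoint interval under exponentially distributed failures, sqrt(2 * delta * M) for checkpoint cost delta and MTBF M. The simplified simulation below (restart cost ignored) checks it numerically; the paper's question is precisely how such exponential-assumption results behave under other distributions.

```python
# Simulate coordinated checkpoint/restart under exponential failures and
# compare checkpoint intervals against Young's approximation
# t_opt = sqrt(2 * delta * MTBF). Restart cost is omitted for simplicity.
import math, random

def time_to_solution(work, interval, delta, mtbf, rng):
    done, clock = 0.0, 0.0
    next_fail = rng.expovariate(1.0 / mtbf)
    while done < work:
        seg_work = min(interval, work - done)
        if clock + seg_work + delta <= next_fail:
            clock += seg_work + delta
            done += seg_work               # checkpoint commits the progress
        else:
            clock = next_fail              # failure: lose the open segment
            next_fail = clock + rng.expovariate(1.0 / mtbf)
    return clock

rng = random.Random(0)
delta, mtbf = 60.0, 86400.0                # 1 min checkpoints, 1 day MTBF
work = 30 * 86400.0                        # 30 days of useful computation
t_opt = math.sqrt(2 * delta * mtbf)        # Young's interval, ~0.9 h here
for tau in (t_opt / 4, t_opt, 4 * t_opt):
    avg = sum(time_to_solution(work, tau, delta, mtbf, rng)
              for _ in range(20)) / 20
    print(f"interval {tau / 3600:5.2f} h -> mean time-to-solution "
          f"{avg / 86400:.2f} days")
```

Checkpointing too often wastes time on checkpoints, too rarely loses more work per failure; the minimum sits near t_opt, as the printed means show.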

Rauf, Usman, Gillani, Fida, Al-Shaer, Ehab, Halappanavar, Mahantesh, Chatterjee, Samrat, Oehmen, Christopher.  2016.  Formal Approach for Resilient Reachability Based on End-System Route Agility. Proceedings of the 2016 ACM Workshop on Moving Target Defense. :117–127.

The deterministic nature of existing routing protocols has resulted in an ossified Internet with static and predictable network routes. This gives persistent attackers (e.g., eavesdroppers and DDoS attackers) plenty of time to study the network and identify the vulnerable (critical) links to plan devastating and stealthy attacks. Recently, Moving Target Defense (MTD) based approaches have been proposed to defend against DoS attacks. However, MTD based approaches for route mutation are oriented towards re-configuring the parameters in Local Area Networks (LANs), and do not provide any protection against infrastructure-level attacks, which inherently limits their use for mission-critical services over the Internet infrastructure. To cope with these issues, we extend the current routing architecture to consider end-hosts as routing elements, and present a formal-method-based agile defense mechanism to embed resiliency in the existing cyber infrastructure. The major contributions of this paper include: (1) formalization of the efficient and resilient End to End (E2E) reachability problem as a constraint satisfaction problem, which identifies the potential end-hosts to reach a destination while satisfying resilience and QoS constraints; (2) design and implementation of a novel decentralized End Point Route Mutation (EPRM) protocol; and (3) design and implementation of a planning algorithm to minimize the overlap between multiple flows, for the sake of maximizing the agility in the system. Our PlanetLab-based implementation and evaluation validates the correctness, effectiveness and scalability of the proposed approach.
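
Contribution (1) can be framed in miniature as a constraint satisfaction search: choose relay end-hosts whose paths meet a latency (QoS) bound and share no underlying links (resilience). The topology, link sets, and numbers below are invented.

```python
# Toy constraint-satisfaction framing of resilient E2E reachability: pick
# NEEDED relay end-hosts that all meet the latency bound and whose overlay
# paths share no underlying links.
from itertools import combinations

# candidate relay -> (latency_ms to destination, set of underlying link ids)
relays = {
    "h1": (40, {"l1", "l2"}),
    "h2": (55, {"l2", "l3"}),
    "h3": (70, {"l4", "l5"}),
    "h4": (35, {"l1", "l5"}),
    "h5": (60, {"l6"}),
}
LATENCY_BOUND = 65          # QoS constraint
NEEDED = 2                  # number of simultaneously usable routes

def feasible(combo):
    latencies_ok = all(relays[r][0] <= LATENCY_BOUND for r in combo)
    links = [relays[r][1] for r in combo]
    disjoint = all(a.isdisjoint(b) for a, b in combinations(links, 2))
    return latencies_ok and disjoint

solutions = [c for c in combinations(relays, NEEDED) if feasible(c)]
print("feasible relay sets:", solutions)
# e.g. ('h2', 'h4'): link-disjoint and both meet the latency bound
```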

Neema, Himanshu, Volgyesi, Peter, Potteiger, Bradley, Emfinger, William, Koutsoukos, Xenofon, Karsai, Gabor, Vorobeychik, Yevgeniy, Sztipanovits, Janos.  2016.  SURE: An Experimentation and Evaluation Testbed for CPS Security and Resilience: Demo Abstract. Proceedings of the 7th International Conference on Cyber-Physical Systems. :27:1–27:1.

In-depth consideration and evaluation of security and resilience is necessary for developing the scientific foundations and technology of Cyber-Physical Systems (CPS). In this demonstration, we present SURE [1], a CPS experimentation and evaluation testbed for security and resilience focusing on transportation networks. The testbed includes (1) a heterogeneous modeling and simulation integration platform, (2) a Web-based tool for modeling CPS in adversarial environments, and (3) a framework for evaluating resilience using attacker-defender games. Users such as CPS designers and operators can interact with the testbed to evaluate monitoring and control schemes that include sensor placement and traffic signal configuration.

Delic, Kemal A.  2016.  On Resilience of IoT Systems: The Internet of Things (Ubiquity Symposium). Ubiquity. 2016:1:1–1:7.

At a very high level of abstraction, the Internet of Things (IoT) can be modeled as a hyper-scale, hyper-complex cyber-physical system. Studying the resilience of IoT systems is the first step towards engineering future IoT ecosystems. Exploring this domain is a highly promising avenue for many aspiring Ph.D. and M.Sc. students.

Cheng, Eric, Mirkhani, Shahrzad, Szafaryn, Lukasz G., Cher, Chen-Yong, Cho, Hyungmin, Skadron, Kevin, Stan, Mircea R., Lilja, Klas, Abraham, Jacob A., Bose, Pradip, et al.  2016.  CLEAR: Cross-Layer Exploration for Architecting Resilience - Combining Hardware and Software Techniques to Tolerate Soft Errors in Processor Cores. Proceedings of the 53rd Annual Design Automation Conference. :68:1–68:6.

We present a first of its kind framework which overcomes a major challenge in the design of digital systems that are resilient to reliability failures: achieve desired resilience targets at minimal costs (energy, power, execution time, area) by combining resilience techniques across various layers of the system stack (circuit, logic, architecture, software, algorithm). This is also referred to as cross-layer resilience. In this paper, we focus on radiation-induced soft errors in processor cores. We address both single-event upsets (SEUs) and single-event multiple upsets (SEMUs) in terrestrial environments. Our framework automatically and systematically explores the large space of comprehensive resilience techniques and their combinations across various layers of the system stack (798 cross-layer combinations in this paper), derives cost-effective solutions that achieve resilience targets at minimal costs, and provides guidelines for the design of new resilience techniques. We demonstrate the practicality and effectiveness of our framework using two diverse designs: a simple, in-order processor core and a complex, out-of-order processor core. Our results demonstrate that a carefully optimized combination of circuit-level hardening, logic-level parity checking, and micro-architectural recovery provides a highly cost-effective soft error resilience solution for general-purpose processor cores. For example, a 50× improvement in silent data corruption rate is achieved at only 2.1% energy cost for an out-of-order core (6.1% for an in-order core) with no speed impact. However, selective circuit-level hardening alone, guided by a thorough analysis of the effects of soft errors on application benchmarks, provides a cost-effective soft error resilience solution as well (with ~1% additional energy cost for a 50× improvement in silent data corruption rate).

Shu, Rui, Wang, Peipei, Gorski III, Sigmund A., Andow, Benjamin, Nadkarni, Adwait, Deshotels, Luke, Gionta, Jason, Enck, William, Gu, Xiaohui.  2016.  A Study of Security Isolation Techniques. ACM Comput. Surv. 49:50:1–50:37.

Security isolation is a foundation of computing systems that enables resilience to different forms of attacks. This article seeks to understand existing security isolation techniques by systematically classifying different approaches and analyzing their properties. We provide a hierarchical classification structure for grouping different security isolation techniques. At the top level, we consider two principal aspects: mechanism and policy. Each aspect is broken down into salient dimensions that describe key properties. We break the mechanism into two dimensions, enforcement location and isolation granularity, and break the policy aspect down into three dimensions: policy generation, policy configurability, and policy lifetime. We apply our classification to a set of representative articles that cover a breadth of security isolation techniques and discuss tradeoffs among different design choices and limitations of existing approaches.