Biblio

Filters: Keyword is Resilient Architectures
2021-09-16
Torkura, Kennedy A., Sukmana, Muhammad I. H., Cheng, Feng, Meinel, Christoph.  2020.  CloudStrike: Chaos Engineering for Security and Resiliency in Cloud Infrastructure. IEEE Access. 8:123044–123060.
Most cyber-attacks and data breaches in cloud infrastructure are due to human errors and misconfiguration vulnerabilities. Cloud customer-centric tools are imperative for mitigating these issues; however, existing cloud security models are largely unable to tackle these security challenges. Therefore, novel security mechanisms are imperative, and we propose Risk-driven Fault Injection (RDFI) techniques to address these challenges. RDFI applies the principles of chaos engineering to cloud security and leverages feedback loops to execute, monitor, analyze and plan security fault injection campaigns, based on a knowledge base. The knowledge base consists of fault models designed from secure baselines, cloud security best practices and observations derived during iterative fault injection campaigns. These observations are helpful for identifying vulnerabilities while verifying the correctness of security attributes (integrity, confidentiality and availability). Furthermore, RDFI proactively supports risk analysis and security hardening efforts by sharing security information with security mechanisms. We have designed and implemented the RDFI strategies, including various chaos engineering algorithms, as a software tool: CloudStrike. Several evaluations have been conducted with CloudStrike against infrastructure deployed on two major public cloud infrastructures: Amazon Web Services and Google Cloud Platform. Execution time increases linearly, proportional to increasing attack rates. Also, the analysis of vulnerabilities detected via security fault injection has been used to harden the security of cloud resources, demonstrating the effectiveness of the security information provided by CloudStrike. Therefore, we opine that our approaches are suitable for overcoming contemporary cloud security issues.
2020-01-29
Chuchu Fan, Sayan Mitra.  2019.  Data-Driven Safety Verification of Complex Cyber-Physical Systems. Design Automation of Cyber-Physical Systems. :107–142.

Data-driven verification methods utilize execution data together with models for establishing safety requirements. These are often the only tools available for analyzing complex, nonlinear cyber-physical systems, for which purely model-based analysis is currently infeasible. In this chapter, we outline the key concepts and algorithmic approaches for data-driven verification and discuss the guarantees they provide. We introduce some of the software tools that embody these ideas and present several practical case studies demonstrating their application in safety analysis of autonomous vehicles, advanced driver assist systems (ADAS), satellite control, and engine control systems.

2019-07-08
Ellin Zhao, Roykrong Sukkerd.  2019.  Interactive Explanation for Planning-Based Systems. ICCPS '19 Proceedings of the 10th ACM/IEEE International Conference on Cyber-Physical Systems. :322-323.

As Cyber-Physical Systems (CPSs) become more autonomous, it becomes harder for humans who interact with the CPSs to understand the behavior of the systems. Particularly for CPSs that must perform tasks while optimizing for multiple quality objectives and acting under uncertainty, it can be difficult for humans to understand the system behavior generated by an automated planner. This work-in-progress presents an approach to clarifying system behavior through interactive explanation by allowing end-users to ask Why and Why-Not questions about specific behaviors of the system, and providing answers in the form of contrastive explanation.

Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter.  2019.  A General Framework for Adversarial Examples with Objectives. ACM Transactions on Privacy and Security (TOPS). 22(3)

Images perturbed subtly to be misclassified by neural networks, called adversarial examples, have emerged as a technically deep challenge and an important concern for several application domains. Most research on adversarial examples takes as its only constraint that the perturbed images are similar to the originals. However, real-world application of these ideas often requires the examples to satisfy additional objectives, which are typically enforced through custom modifications of the perturbation process. In this article, we propose adversarial generative nets (AGNs), a general methodology to train a generator neural network to emit adversarial examples satisfying desired objectives. We demonstrate the ability of AGNs to accommodate a wide range of objectives, including imprecise ones difficult to model, in two application domains. In particular, we demonstrate physical adversarial examples—eyeglass frames designed to fool face recognition—with better robustness, inconspicuousness, and scalability than previous approaches, as well as a new attack to fool a handwritten-digit classifier.

2019-07-10
Olufogorehan Tunde-Onadele, Jingzhu He, Ting Dai, Xiaohui Gu.  2019.  A Study on Container Vulnerability Detection. IEEE International Conference on Cloud Engineering (IC2E).
2020-03-10
Cody Kinneer, Ryan Wagner, Fei Fang, Claire Le Goues, David Garlan.  2019.  Modeling Observability in Adaptive Systems to Defend Against Advanced Persistent Threats. 17th ACM-IEEE International Conference on Formal Methods and Models for System Design.

Advanced persistent threats (APTs) are a particularly troubling challenge for software systems. The adversarial nature of the security domain, and APTs in particular, poses unresolved challenges to the design of self-* systems, such as how to defend against multiple types of attackers with different goals and capabilities. In this interaction, the observability of each side is an important and under-investigated issue in the self-* domain. We propose a model of APT defense that elevates observability as a first-class concern. We evaluate this model by showing how an informed approach that uses observability improves the defender's utility compared to a uniform random strategy, can enable robust planning through sensitivity analysis, and can inform observability-related architectural design decisions.

2020-10-30
David Fridovich-Keil, Andrea Bajcsy, Jaime Fisac, Sylvia Herbert, Steven Wang, Anca Dragan, Claire J. Tomlin.  2019.  Confidence-aware motion prediction for real-time collision avoidance. The International Journal of Robotics Research. 39(2-3):250-265.

One of the most difficult challenges in robot motion planning is to account for the behavior of other moving agents, such as humans. Commonly, practitioners employ predictive models to reason about where other agents are going to move. Though there has been much recent work in building predictive models, no model is ever perfect: an agent can always move unexpectedly, in a way that is not predicted or not assigned sufficient probability. In such cases, the robot may plan trajectories that appear safe but, in fact, lead to collision. Rather than trust a model’s predictions blindly, we propose that the robot should use the model’s current predictive accuracy to inform the degree of confidence in its future predictions. This model confidence inference allows us to generate probabilistic motion predictions that exploit modeled structure when the structure successfully explains human motion, and degrade gracefully whenever the human moves unexpectedly. We accomplish this by maintaining a Bayesian belief over a single parameter that governs the variance of our human motion model. We couple this prediction algorithm with a recently proposed robust motion planner and controller to guide the construction of robot trajectories that are, to a good approximation, collision-free with a high, user-specified probability. We provide extensive analysis of the combined approach and its overall safety properties by establishing a connection to reachability analysis, and conclude with a hardware demonstration in which a small quadcopter operates safely in the same space as a human pedestrian.
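
The core mechanism described above, a Bayesian belief over a single parameter governing the variance of the human motion model, can be sketched in a few lines. This is an illustrative toy (the grid of candidate values, the Gaussian error likelihood, and all variable names are assumptions, not code from the cited paper):

    import numpy as np

    # Hypothetical sketch: discrete Bayesian belief over a variance parameter
    # sigma that governs how much observed human motion deviates from a
    # nominal predictive model. High posterior mass on small sigma means the
    # model currently explains the human well; mass shifting to large sigma
    # means predictions should be inflated (degrade gracefully).

    sigma_grid = np.array([0.05, 0.1, 0.2, 0.4, 0.8])        # candidate std-devs (m)
    belief = np.full(len(sigma_grid), 1.0 / len(sigma_grid))  # uniform prior

    def update_belief(belief, prediction_error):
        """One Bayes update from the latest scalar prediction error (meters)."""
        likelihood = (1.0 / (np.sqrt(2 * np.pi) * sigma_grid)) * \
            np.exp(-0.5 * (prediction_error / sigma_grid) ** 2)
        posterior = belief * likelihood
        return posterior / posterior.sum()

    # Simulated stream of errors: the human initially matches the model, then
    # moves unexpectedly, so the belief should drift toward larger sigma.
    for err in [0.03, 0.05, 0.04, 0.5, 0.6, 0.7]:
        belief = update_belief(belief, err)

    expected_sigma = float(belief @ sigma_grid)
    print("posterior over sigma:", np.round(belief, 3))
    print("variance used for prediction ellipses:", round(expected_sigma, 3))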

2018-07-09
Anirudh Narasimman, Qiaozhi Wang, Fengjun Li, Dongwon Lee, Bo Luo.  2019.  Arcana: Enabling Private Posts on Public Microblog Platforms. 34th International Information Security and Privacy Conference (IFIP SEC).

Many popular online social networks, such as Twitter, Tumblr, and Sina Weibo, adopt privacy models that are too simple to satisfy users' diverse needs for privacy protection. In platforms with no access control (i.e., completely open) or binary access control (i.e., "public" and "friends-only"), users cannot control the dissemination boundary of the content they share. For instance, on Twitter, tweets in "public" accounts are accessible to everyone, including search engines, while tweets in "protected" accounts are visible to all the followers. In this work, we present Arcana to enable fine-grained access control for social network content sharing. In particular, we target the Twitter platform and introduce the "private tweet" function, which allows users to disseminate particular tweets to designated group(s) of followers. Arcana employs Ciphertext-Policy Attribute-based Encryption (CP-ABE) to implement social circle detection and private tweet encryption, so that access-controlled tweets are only readable by designated recipients. To be stealthy, Arcana further embeds the protected content as digital watermarks in image tweets. We have implemented the Arcana prototype as a Chrome browser plug-in and demonstrated its flexibility and effectiveness. Different from existing approaches that require trusted third parties or an additional server/broker/mediator, Arcana is lightweight and completely transparent to Twitter: all the communications, including key distribution and private tweet dissemination, are exchanged as Twitter messages. Therefore, with small API modifications, Arcana could be easily ported to other online social networking platforms to support fine-grained access control.

2020-01-29
Hoang Hai Nguyen, Kartik Palani, David Nicol.  2019.  Extensions of Network Reliability Analysis. 49th IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2019). :88-99.

Network reliability studies properties of networks subjected to random failures of their components. It has been widely adopted for modeling and analyzing real-world problems across different domains, such as circuit design, genomics, databases, information propagation, network security, and many others. Two practical situations that usually arise from such problems are (i) the correlation between component failures and (ii) the uncertainty in failure probabilities. Previous work captured correlations by modeling component reliability using general Boolean expressions of Bernoulli random variables. This paper extends such a model to address the second problem, where we investigate the use of Beta distributions to capture the variance of uncertainty. We call this new formalism the Beta uncertain graph. We study the reliability polynomials of Beta uncertain graphs as multivariate polynomials of Beta random variables and demonstrate the use of the model on two realistic examples. We also observe that the reliability distribution of a monotone Beta uncertain graph can be approximated by a Beta distribution, usually with high accuracy. Numerical results from Monte Carlo simulation of an approximation scheme and from two case studies strongly support this observation.
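
As a rough illustration of the Beta uncertain graph idea (not the authors' approximation scheme), the following toy Monte Carlo estimates the distribution of two-terminal reliability when each edge's reliability is itself Beta-distributed; the graph, Beta parameters, and sample counts are made up for the example:

    import random
    from collections import deque

    # Hypothetical sketch of a "Beta uncertain graph": each edge's reliability is
    # not a fixed number but a Beta-distributed random variable. We estimate the
    # distribution of two-terminal (s-t) reliability by nested Monte Carlo:
    #   outer loop: draw one reliability value per edge from its Beta distribution;
    #   inner loop: draw edge up/down states and test s-t connectivity.

    # Tiny bridge network with s=0, t=3; each edge maps to the (alpha, beta) of its prior.
    edges = {
        (0, 1): (8, 2), (0, 2): (8, 2),
        (1, 3): (9, 1), (2, 3): (9, 1),
        (1, 2): (5, 5),
    }
    S, T = 0, 3

    def connected(up_edges, s, t):
        """BFS over the undirected edges that are currently 'up'."""
        adj = {}
        for u, v in up_edges:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
        seen, queue = {s}, deque([s])
        while queue:
            node = queue.popleft()
            if node == t:
                return True
            for nxt in adj.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    def reliability_sample(n_inner=2000):
        """One reliability value of the network, given one Beta draw per edge."""
        p = {e: random.betavariate(a, b) for e, (a, b) in edges.items()}
        hits = sum(
            connected([e for e in edges if random.random() < p[e]], S, T)
            for _ in range(n_inner)
        )
        return hits / n_inner

    samples = [reliability_sample() for _ in range(200)]
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    print(f"estimated s-t reliability: mean={mean:.3f}, variance={var:.5f}")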

2020-03-09
Michael Bechtel, Heechul Yun.  2019.  Denial-of-Service Attacks on Shared Cache in Multicore: Analysis and Prevention. Real-Time and Embedded Technology and Applications Symposium (RTAS). :357-367.

In this paper we investigate the feasibility of denial-of-service (DoS) attacks on shared caches in multicore platforms. With carefully engineered attacker tasks, we are able to cause more than 300X execution time increases on a victim task running on a dedicated core on a popular embedded multicore platform, regardless of whether we partition its shared cache or not. Based on careful experimentation on real and simulated multicore platforms, we identify an internal hardware structure of a non-blocking cache, namely the cache writeback buffer, as a potential target of shared cache DoS attacks. We propose an OS-level solution to prevent such DoS attacks by extending a state-of-the-art memory bandwidth regulation mechanism. We implement the proposed mechanism in Linux on a real multicore platform and show its effectiveness in protecting against cache DoS attacks.

Waqar Ali, Heechul Yun.  2019.  RT-Gang: Real-Time Gang Scheduling Framework for Safety-Critical Systems. Real-Time and Embedded Technology and Applications Symposium (RTAS). :143-155.

In this paper, we present RT-Gang: a novel real-time gang scheduling framework that enforces a one-gang-at-a-time policy. We find that, in a multicore platform, co-scheduling multiple parallel real-time tasks would require highly pessimistic worst-case execution time (WCET) and schedulability analysis - even when there are enough cores - due to contention in shared hardware resources such as cache and DRAM controller. In RT-Gang, all threads of a parallel real-time task form a real-time gang and the scheduler globally enforces the one-gang-at-a-time scheduling policy to guarantee tight and accurate task WCET. To minimize under-utilization, we integrate a state-of-the-art memory bandwidth throttling framework to allow safe execution of best-effort tasks. Specifically, any idle cores, if they exist, are used to schedule best-effort tasks, but their maximum memory bandwidth usage is strictly throttled to tightly bound interference to real-time gang tasks. We implement RT-Gang in the Linux kernel and evaluate it on two representative embedded multicore platforms using both synthetic and real-world DNN workloads. The results show that RT-Gang dramatically improves system predictability and the overhead is negligible.

Farzad Farshchi, Qijing Huang, Heechul Yun.  2019.  Integrating NVIDIA Deep Learning Accelerator (NVDLA) with RISC-V SoC on FireSim. Workshop on Energy Efficient Machine Learning and Cognitive Computing for Embedded Applications.

NVDLA is an open-source deep neural network (DNN) accelerator which has received a lot of attention from the community since its introduction by Nvidia. It is a full-featured hardware IP and can serve as a good reference for conducting research and development of SoCs with integrated accelerators. However, an expensive FPGA board is required to do experiments with this IP in a real SoC. Moreover, since NVDLA is clocked at a lower frequency on an FPGA, it would be hard to do accurate performance analysis with such a setup. To overcome these limitations, we integrate NVDLA into a real RISC-V SoC on the Amazon cloud FPGA using FireSim, a cycle-exact FPGA-accelerated simulator. We then evaluate the performance of NVDLA by running the YOLOv3 object-detection algorithm. Our results show that NVDLA can sustain 7.5 fps when running YOLOv3. We further analyze the performance by showing that sharing the last-level cache with NVDLA can result in up to 1.56x speedup. We then identify that sharing the memory system with the accelerator can result in unpredictable execution time for the real-time tasks running on this platform. We believe this is an important issue that must be addressed in order for on-chip DNN accelerators to be incorporated in real-time embedded systems.

Jacob Fustos, Farzad Farshchi, Heechul Yun.  2019.  SpectreGuard: An Efficient Data-Centric Defense Mechanism against Spectre Attacks. Proceedings of the 56th Annual Design Automation Conference 2019.

Speculative execution is an essential performance enhancing technique in modern processors, but it has been shown to be insecure. In this paper, we propose SpectreGuard, a novel defense mechanism against Spectre attacks. In our approach, sensitive memory blocks (e.g., secret keys) are marked using simple OS/library API, which are then selectively protected by hardware from Spectre attacks via low-cost micro-architecture extension. This technique allows microprocessors to maintain high performance, while restoring the control to software developers to make security and performance trade-offs.

2020-07-09
Dawei Chu, Jingqiang Lin, Fengjun Li, Xiaokun Zhang, Qiongxiao Wang, Guangqi Liu.  2019.  Ticket Transparency: Accountable Single Sign-On with Privacy-Preserving Public Logs. International Conference on Security and Privacy in Communication Systems (SecureComm).

Single sign-on (SSO) is becoming more and more popular on the Internet. An SSO ticket issued by the identity provider (IdP) allows an entity to sign onto a relying party (RP) on behalf of the account enclosed in the ticket. To ensure its authenticity, an SSO ticket is digitally signed by the IdP and verified by the RP. However, recent security incidents indicate that a signing system (e.g., certification authority) might be compromised to sign fraudulent messages, even when it is well protected in accredited commercial systems. Compared with certification authorities, the online signing components of IdPs are even more exposed to adversaries and thus more vulnerable to such threats in practice. This paper proposes ticket transparency to provide accountable SSO services with privacy-preserving public logs against potentially fraudulent tickets issued by a compromised IdP. With this scheme, an IdP-signed ticket is accepted by the RP only if it is recorded in the public logs. It enables a user to check all his tickets in the public logs and detect any fraudulent ticket issued without his participation or authorization. We integrate blind signatures, identity-based encryption and Bloom filters in the design, to balance transparency, privacy and efficiency in these security-enhanced SSO services. To the best of our knowledge, this is the first attempt to solve the security problems caused by potentially intruded or compromised IdPs in the SSO services.
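
One of the building blocks named above, the Bloom filter, is easy to sketch. The snippet below shows a hypothetical relying-party check that a ticket identifier appears in a Bloom-filter summary of the public log; it illustrates the data structure only, not the paper's actual log or ticket format:

    import hashlib

    # Hypothetical sketch (not the paper's construction): a relying party keeps a
    # Bloom filter summarizing ticket identifiers recorded in the public log and
    # rejects any IdP-signed ticket whose identifier is definitely absent.
    # A Bloom filter has no false negatives for inserted items, so a logged ticket
    # is never rejected; rare false positives would be caught by a full log audit.

    class BloomFilter:
        def __init__(self, num_bits=4096, num_hashes=5):
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(num_bits // 8)

        def _indexes(self, item: bytes):
            for i in range(self.num_hashes):
                digest = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
                yield int.from_bytes(digest[:8], "big") % self.num_bits

        def add(self, item: bytes):
            for idx in self._indexes(item):
                self.bits[idx // 8] |= 1 << (idx % 8)

        def might_contain(self, item: bytes) -> bool:
            return all(self.bits[idx // 8] & (1 << (idx % 8)) for idx in self._indexes(item))

    log_summary = BloomFilter()
    log_summary.add(b"ticket-id-001")                    # ticket recorded in the public log

    print(log_summary.might_contain(b"ticket-id-001"))   # True: possibly logged, accept then audit
    print(log_summary.might_contain(b"ticket-id-999"))   # False: definitely not logged, reject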

2020-07-27
Torkura, Kennedy A., Sukmana, Muhammad I.H., Cheng, Feng, Meinel, Christoph.  2019.  Security Chaos Engineering for Cloud Services: Work In Progress. 2019 IEEE 18th International Symposium on Network Computing and Applications (NCA). :1–3.
The majority of security breaches in cloud infrastructure in recent years are caused by human errors and misconfigured resources. Novel security models are imperative to overcome these issues. Such models must be customer-centric and continuous, must not focus solely on traditional security paradigms like intrusion detection, and must adopt proactive techniques. Thus, this paper proposes CloudStrike, a cloud security system that implements the principles of Chaos Engineering to enable the aforementioned properties. Chaos Engineering is an emerging discipline employed to prevent non-security failures in cloud infrastructure via Fault Injection Testing techniques. CloudStrike employs similar techniques with a focus on injecting failures that impact security, i.e., integrity, confidentiality and availability. Essentially, CloudStrike leverages the relationship between dependability and security models. Preliminary experiments provide insightful and prospective results.
2018-10-09
Amin Ghafouri, Yevgeniy Vorobeychik, Xenofon D. Koutsoukos.  2018.  Adversarial Regression for Detecting Attacks in Cyber-Physical Systems. CoRR. abs/1804.11022

Attacks in cyber-physical systems (CPS) which manipulate sensor readings can cause enormous physical damage if undetected. Detection of attacks on sensors is crucial to mitigate this issue. We study supervised regression as a means to detect anomalous sensor readings, where each sensor's measurement is predicted as a function of other sensors. We show that several common learning approaches in this context are still vulnerable to stealthy attacks, which carefully modify readings of compromised sensors to cause desired damage while remaining undetected. Next, we model the interaction between the CPS defender and attacker as a Stackelberg game in which the defender chooses detection thresholds, while the attacker deploys a stealthy attack in response. We present a heuristic algorithm for finding an approximately optimal threshold for the defender in this game, and show that it increases system resilience to attacks without significantly increasing the false alarm rate.
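
A minimal sketch of the detection setting studied above (assumed linear model, synthetic data, and an arbitrary threshold tau, none taken from the paper) shows how a residual-based detector works and why an attacker who stays just under the threshold remains undetected:

    import numpy as np

    # Hypothetical sketch of residual-based sensor anomaly detection: each sensor
    # is predicted from the others with a linear model, and a reading is flagged
    # if the residual exceeds a detection threshold tau. A "stealthy" attacker who
    # knows the model can pin the residual just below tau and stay undetected,
    # which is the vulnerability the cited paper studies.

    rng = np.random.default_rng(0)
    other_sensors = rng.normal(size=(500, 3))
    true_weights = np.array([1.0, -2.0, 0.5])
    target_sensor = other_sensors @ true_weights + rng.normal(scale=0.05, size=500)

    # Fit the regression-based predictor (the defender's model).
    weights, *_ = np.linalg.lstsq(other_sensors, target_sensor, rcond=None)

    def is_anomalous(x_others, y_reading, tau=0.2):
        residual = abs(y_reading - x_others @ weights)
        return residual > tau

    x = other_sensors[0]
    honest = target_sensor[0]
    prediction = x @ weights
    stealthy = prediction + 0.19      # attacker pins the residual just below tau
    blatant = prediction + 1.0

    print(is_anomalous(x, honest))    # typically False: honest noise is small
    print(is_anomalous(x, stealthy))  # False: damage done, yet undetected
    print(is_anomalous(x, blatant))   # True: a naive large manipulation is caught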

2018-07-16
Yang, Lei, Li, Fengjun.  2018.  Cloud-Assisted Privacy-Preserving Classification for IoT Applications. IEEE Conference on Communications and Network Security.

The explosive proliferation of Internet of Things (IoT) devices is generating an incomprehensible amount of data. Machine learning plays an imperative role in aggregating this data and extracting valuable information for improving operational and decision-making processes. In particular, emerging machine intelligence platforms that host pre-trained machine learning models are opening up new opportunities for IoT industries. While those platforms facilitate customers to analyze IoT data and deliver faster and more accurate insights, end users and machine learning service providers (MLSPs) have raised concerns regarding security and privacy of IoT data as well as the pre-trained machine learning models for certain applications such as healthcare, smart energy, etc. In this paper, we propose a cloud-assisted, privacy-preserving machine learning classification scheme over encrypted data for IoT devices. Our scheme is based on a three-party model coupled with a two-stage decryption Paillier-based cryptosystem, which allows a cloud server to interact with MLSPs on behalf of the resource-constrained IoT devices in a privacy-preserving manner, and shift the load of computation-intensive classification operations away from them. The detailed security analysis and the extensive simulations with different key lengths and numbers of features and classes demonstrate that our scheme can effectively reduce the overhead for IoT devices in machine learning classification applications.
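
The additive homomorphism that such schemes rely on can be shown with a toy (deliberately insecure, tiny-prime) Paillier implementation; the two-stage decryption and three-party protocol of the cited scheme are not reproduced here:

    from math import gcd
    import random

    # Toy, insecure Paillier sketch (tiny primes, illustration only): the additive
    # homomorphism Enc(m1) * Enc(m2) mod n^2 = Enc(m1 + m2) is what lets a cloud
    # server aggregate or score encrypted IoT features without decrypting them.

    p, q = 293, 433                     # demo primes; real keys use ~1024-bit primes
    n = p * q
    n2 = n * n
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)
    g = n + 1
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)    # inverse of L(g^lam mod n^2) mod n

    def encrypt(m):
        r = random.randrange(1, n)
        while gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return ((pow(c, lam, n2) - 1) // n * mu) % n

    c1, c2 = encrypt(20), encrypt(22)
    print(decrypt((c1 * c2) % n2))      # 42: the sum is computed under encryption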

2018-07-13
Uttam Thakore, University of Illinois at Urbana-Champaign, Ahmed Fawaz, University of Illinois at Urbana-Champaign, William H. Sanders, University of Illinois at Urbana-Champaign.  2018.  Detecting Monitor Compromise using Evidential Reasoning.

Stealthy attackers often disable or tamper with system monitors to hide their tracks and evade detection. In this poster, we present a data-driven technique to detect such monitor compromise using evidential reasoning. Leveraging the fact that hiding from multiple, redundant monitors is difficult for an attacker, we identify potential monitor compromise by combining alerts from different sets of monitors using Dempster-Shafer theory and comparing the results to find outliers. We describe our ongoing work in this area.
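
Dempster's rule of combination, the fusion step mentioned above, can be illustrated with a small self-contained example; the frame of discernment and the mass values are invented for the sketch and are not the poster's data:

    from itertools import product

    # Minimal sketch of Dempster's rule of combination (not the poster's full
    # pipeline): each monitor set contributes a basic belief assignment over
    # {"attack", "benign"} plus the full frame THETA (ignorance). Combining
    # assignments from redundant monitors and comparing the fused beliefs (or
    # the conflict mass K) helps flag a monitor whose reports are outliers,
    # i.e. a potentially compromised monitor.

    THETA = frozenset({"attack", "benign"})

    def combine(m1, m2):
        """Dempster's rule: normalized conjunctive combination of two BBAs."""
        combined, conflict = {}, 0.0
        for (a, w1), (b, w2) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
        return {s: w / (1.0 - conflict) for s, w in combined.items()}, conflict

    # Alerts fused from two monitor sets (masses in each BBA sum to 1).
    monitor_a = {frozenset({"attack"}): 0.7, THETA: 0.3}
    monitor_b = {frozenset({"attack"}): 0.6, frozenset({"benign"}): 0.1, THETA: 0.3}

    fused, conflict = combine(monitor_a, monitor_b)
    print({tuple(sorted(s)): round(w, 3) for s, w in fused.items()})
    print("conflict mass K:", round(conflict, 3))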

2018-07-09
Symons, John.  2018.  Metaphysical and scientific accounts of emergence: varieties of fundamentality and theoretical completeness. Emergent Behavior in Complex Systems Engineering. :2-20.

Fundamentality is the central conceptual component of discussions concerning emergence. Most obviously, contemporary uses of the term "emergence" vary according to their users' views of fundamentality. This chapter provides a general characterization of fundamentality, explaining the challenges faced by the anti-emergentist versions of fundamentalism. It discusses the limitations of one prominent account of ontological fundamentality, physicalism. Although physicalism does not present a viable alternative to emergentism, this does not mean that emergentists can declare victory. Completeness is essential to arguments against the possibility of strongly emergent properties. Three interlocking concepts, causation, completeness, and reality, are not straightforwardly scientific in nature but are instead metaphysical, or at least conceptual. Scientific models are intended to provide guidance with respect to explanations and predictions of emergent properties or to offer possible interventions that would allow control over those properties.

2018-07-03
Wagner, Ryan, Garlan, David, Fredrikson, Matthew.  2018.  Quantitative underpinnings of secure, graceful degradation (Poster). HoTSoS '18 Proceedings of the 5th Annual Symposium and Bootcamp on Hot Topics in the Science of Security.

System administrators are slowly coming to accept that nearly all systems are vulnerable and many should be assumed to be compromised. Rather than preventing all vulnerabilities in complex systems, the approach is changing to protecting systems under the assumption that they are already under attack.

Administrators do not know all the latent vulnerabilities in the systems they are charged with protecting. This work builds on prior approaches that assume more a priori knowledge [5]. Additionally, prior research does not necessarily guide administrators to gracefully degrade systems in response to threats [4]. Sophisticated attackers with high levels of resources, like advanced persistent threats (APTs), might use zero-day exploits against novel vulnerabilities or be slow and stealthy to evade initial lines of detection.

However, defenders often have some knowledge of where attackers are. Additionally, it is possible to reasonably bound attacker resourcing. Exploits have a cost to create [1], and even the most sophisticated attacks use a limited number of zero-day exploits [3].

Nevertheless, defenders need a way to reason about and react to the impact of an attacker with an existing presence in a system. It may not be possible to maintain one hundred percent of the system's original utility; instead, the defender might need to gracefully degrade the system, trading off some functional utility to keep an attacker away from the most critical functionality.

We propose a method to "think like an attacker" to evaluate architectures and alternatives in response to knowledge of attacker presence. For each considered alternative architecture, our approach determines the types of exploits an attacker would need to achieve particular attacks using the Datalog declarative logic programming language, in a fashion that adapts others' prior work [2][4]. With knowledge of how difficult particular exploits are to create, we can approximate the cost to an attacker of a particular attack trace. A bounded search of traces within a limited cost provides a set of hypothetical attacks for a given architecture. These attacks have varying impacts on the system's ability to achieve its functions. Using this knowledge, our approach outputs an architectural alternative that optimally balances keeping an attacker away from critical functionality while preserving that functionality. In the process, it provides evidence in the form of hypothetical attack traces that can be used to explain the reasoning.

This thinking enables a defender to reason about how potential defensive tactics could close off avenues of attack or perhaps enable an ongoing attack. By thinking at the level of architecture, we avoid assumptions of knowledge of specific vulnerabilities. This enables reasoning in a highly uncertain domain.

We applied this to several small systems at varying levels of abstraction. These systems were chosen as exemplars of various "best practices" to see if the approach could quantitatively validate the underpinnings of general rules of thumb like using perimeter security or trading off resilience for security. Ultimately, our approach successfully places architectural components in places that correspond with current best practices and would be reasonable to system architects. In the process of applying the approach at different levels of abstraction, we were able to fine-tune our understanding of attacker movement through systems in a way that provides security-appropriate architectures despite poor knowledge of latent vulnerabilities; the result of the fine-tuning is a more granular way to understand and evaluate attacker movement in systems.

Future work will explore ways to enhance the performance of this approach so it can provide real-time planning to gracefully degrade systems as attacker knowledge is discovered. Additionally, we plan to explore ways to enhance the expressiveness of the approach to address additional security-related concerns; these might include aspects like timing and further levels of uncertainty.
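
To make the bounded-cost attack-trace search described in this abstract concrete, here is a hypothetical sketch in Python rather than Datalog; the component graph, exploit names, costs, and budget are all invented for illustration:

    # Hypothetical sketch (not the authors' Datalog-based tool): enumerate attack
    # traces whose total exploit cost stays within an assumed attacker budget.
    # Nodes are components of a candidate architecture; each edge is an exploit
    # the attacker would need, annotated with an approximate cost to create it.
    # A defender can compare architectural alternatives by which hypothetical
    # traces reach critical functionality within the budget.

    EXPLOITS = {  # component -> [(next_component, exploit_name, cost)]
        "internet":   [("web_server", "web_rce", 3)],
        "web_server": [("app_server", "lateral_move", 2), ("database", "sqli", 4)],
        "app_server": [("database", "creds_reuse", 1)],
    }
    CRITICAL = {"database"}

    def bounded_traces(start, budget, path=None, spent=0):
        """DFS over exploit chains, pruning once the budget is exceeded."""
        path = path or []
        if start in CRITICAL:
            yield path, spent
            return
        for nxt, exploit, cost in EXPLOITS.get(start, []):
            if spent + cost <= budget:
                yield from bounded_traces(nxt, budget, path + [exploit], spent + cost)

    for trace, cost in bounded_traces("internet", budget=7):
        print(f"cost {cost}: {' -> '.join(trace)}")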

2018-10-09
Aron Laszka, Waseem Abbas, Yevgeniy Vorobeychik, Xenofon Koutsoukos.  2018.  Synergistic Security for the Industrial Internet of Things: Integrating Redundancy, Diversity, and Hardening.

As the Industrial Internet of Things (IIoT) becomes more prevalent in critical application domains, ensuring security and resilience in the face of cyber-attacks is becoming an issue of paramount importance. Cyber-attacks against critical infrastructures, for example, against smart water-distribution and transportation systems, pose serious threats to public health and safety. Owing to the severity of these threats, a variety of security techniques are available. However, no single technique can address the whole spectrum of cyber-attacks that may be launched by a determined and resourceful attacker. In light of this, we consider a multi-pronged approach for designing secure and resilient IIoT systems, which integrates redundancy, diversity, and hardening techniques. We introduce a framework for quantifying cyber-security risks and optimizing IIoT design by determining security investments in redundancy, diversity, and hardening. To demonstrate the applicability of our framework, we present two case studies in water-distribution and transportation systems. Our numerical evaluation shows that integrating redundancy, diversity, and hardening can lead to reduced security risk at the same cost.
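
As a toy illustration of choosing security investments across redundancy, diversity, and hardening (with an assumed multiplicative risk model that is not the paper's framework), a brute-force search over investment levels might look like this:

    from itertools import product

    # Hypothetical sketch (assumed toy risk model, not the cited framework):
    # brute-force search over security-investment allocations across redundancy,
    # diversity, and hardening, picking the cheapest allocation whose combined
    # risk falls below a target. Each technique is modeled as multiplicatively
    # reducing the probability that a determined attacker succeeds.

    BASE_RISK = 0.9                    # attack success probability with no investment
    UNIT_COST = {"redundancy": 3, "diversity": 2, "hardening": 1}
    REDUCTION = {"redundancy": 0.55, "diversity": 0.7, "hardening": 0.8}  # per unit

    def risk(levels):
        r = BASE_RISK
        for tech, lvl in levels.items():
            r *= REDUCTION[tech] ** lvl
        return r

    def cost(levels):
        return sum(UNIT_COST[t] * lvl for t, lvl in levels.items())

    best = None
    for red, div, hard in product(range(4), repeat=3):
        levels = {"redundancy": red, "diversity": div, "hardening": hard}
        if risk(levels) <= 0.05 and (best is None or cost(levels) < cost(best)):
            best = levels

    print("cheapest allocation meeting the risk target:", best, "cost:", cost(best))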

2018-07-03
Sukkerd, Roykrong, Simmons, Reid, Garlan, David.  2018.  Towards Explainable Multi-Objective Probabilistic Planning. 4th International Workshop on Software Engineering for Smart Cyber-Physical Systems (SEsCPS'18).

Use of multi-objective probabilistic planning to synthesize behavior of CPSs can play an important role in engineering systems that must self-optimize for multiple quality objectives and operate under uncertainty. However, the reasoning behind automated planning is opaque to end-users. They may not understand why a particular behavior is generated, and therefore may not be able to calibrate their confidence in the systems working properly. To address this problem, we propose a method to automatically generate a verbal explanation of multi-objective probabilistic planning that explains why a particular behavior is generated on the basis of the optimization objectives. Our explanation method involves describing the objective values of a generated behavior and explaining any tradeoff made to reconcile competing objectives. We contribute: (i) an explainable planning representation that facilitates explanation generation, and (ii) an algorithm for generating contrastive justification as explanation for why a generated behavior is best with respect to the planning objectives. We demonstrate our approach on a mobile robot case study.

Sharif, Mahmood, Bauer, Lujo, Reiter, Michael K..  2018.  On the Suitability of Lp-norms for Creating and Preventing Adversarial Examples. 2018 IEEE Conference.

Much research has been devoted to better understanding adversarial examples, which are specially crafted inputs to machine-learning models that are perceptually similar to benign inputs, but are classified differently (i.e., misclassified). Both algorithms that create adversarial examples and strategies for defending against adversarial examples typically use Lp-norms to measure the perceptual similarity between an adversarial input and its benign original. Prior work has already shown, however, that two images need not be close to each other as measured by an Lp-norm to be perceptually similar. In this work, we show that nearness according to an Lp-norm is not just unnecessary for perceptual similarity, but is also insufficient. Specifically, focusing on datasets (CIFAR10 and MNIST), Lp-norms, and thresholds used in prior work, we show through online user studies that "adversarial examples" that are closer to their benign counterparts than required by commonly used Lp-norm thresholds can nevertheless be perceptually distinct to humans from the corresponding benign examples. Namely, the perceptual distance between two images that are "near" each other according to an Lp-norm can be high enough that participants frequently classify the two images as representing different objects or digits. Combined with prior work, we thus demonstrate that nearness of inputs as measured by Lp-norms is neither necessary nor sufficient for perceptual similarity, which has implications for both creating and defending against adversarial examples. We propose and discuss alternative similarity metrics to stimulate future research in the area.
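
The Lp-norm similarity test that the paper critiques is straightforward to compute; the sketch below uses random stand-in images and an illustrative L-infinity threshold (8/255 is a common choice in prior work, used here only as an example):

    import numpy as np

    # Minimal sketch of the Lp-norm similarity test the paper argues is neither
    # necessary nor sufficient: compute the Lp distance between a benign image
    # and its perturbed version and compare it to a fixed threshold, as many
    # attack and defense papers do. The image values here are synthetic stand-ins.

    rng = np.random.default_rng(1)
    benign = rng.uniform(0.0, 1.0, size=(32, 32, 3))          # stand-in CIFAR10-sized image
    perturbation = rng.uniform(-1.0, 1.0, size=benign.shape)
    adversarial = np.clip(benign + 0.03 * perturbation, 0.0, 1.0)

    def lp_distance(x, y, p):
        diff = (x - y).ravel()
        if np.isinf(p):
            return float(np.max(np.abs(diff)))
        return float(np.sum(np.abs(diff) ** p) ** (1.0 / p))

    l2 = lp_distance(benign, adversarial, 2)
    linf = lp_distance(benign, adversarial, np.inf)
    print(f"L2 = {l2:.3f}, Linf = {linf:.3f}")

    # A typical "imperceptibility" check used in prior work: accept the example
    # if its Linf distance stays below a threshold such as 8/255.
    print("within common Linf threshold:", linf <= 8 / 255)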

2018-07-09
Farshchi, Farzad, Valsan, Prathap Kumar, Mancuso, Renato, Yun, Heechul.  2018.  Deterministic Memory Abstraction and Supporting Multicore System Architecture. Euromicro Conference on Real-Time Systems (ECRTS). :1:1-1:25.

Poor time predictability of multicore processors has been a long-standing challenge in the real-time systems community. In this paper, we make a case that a fundamental problem that prevents efficient and predictable real-time computing on multicore is the lack of a proper memory abstraction to express memory criticality, which cuts across various layers of the system: the application, OS, and hardware. We, therefore, propose a new holistic resource management approach driven by a new memory abstraction, which we call Deterministic Memory. The key characteristic of deterministic memory is that the platform (the OS and hardware) guarantees small and tightly bounded worst-case memory access timing. In contrast, we call the conventional memory abstraction best-effort memory, in which only highly pessimistic worst-case bounds can be achieved. We propose to utilize both abstractions to achieve high time predictability but without significantly sacrificing performance. We present deterministic memory-aware OS and architecture designs, including OS-level page allocator, hardware-level cache, and DRAM controller designs. We implement the proposed OS and architecture extensions on Linux and the gem5 simulator. Our evaluation results, using a set of synthetic and real-world benchmarks, demonstrate the feasibility and effectiveness of our approach.

2018-09-30
Neema, Himanshu, Bradley Potteiger, Xenofon D. Koutsoukos, CheeYee Tang, Keith Stouffer.  2018.  Metrics-Driven Evaluation of Cybersecurity for Critical Railway Infrastructure. IEEE Resilience Week.

In the past couple of years, railway infrastructure has been growing more connected, resembling more of a traditional Cyber-Physical System model. Due to the tightly coupled nature between the cyber and physical domains, new attack vectors are emerging that create an avenue for remote hijacking of system components not designed to withstand such attacks. As such, best practice cybersecurity techniques need to be put in place to ensure the safety and resiliency of future railway designs, as well as infrastructure already in the field. However, traditional large-scale experimental evaluation that involves evaluating a large set of variables by running a design of experiments (DOE) may not always be practical and might not provide conclusive results. In addition, to achieve scalable experimentation, the modeling abstractions, simulation configurations, and experiment scenarios must be designed according to the analysis goals of the evaluations. Thus, it is useful to target a set of key operational metrics for evaluation and configure and extend the traditional DOE methods using these metrics. In this work, we present a metrics-driven evaluation approach for evaluating the security and resilience of railway critical infrastructure using a distributed simulation framework. A case study with experiment results is provided that demonstrates the capabilities of our testbed.