Biblio

Filters: Keyword is Foundations
2014-09-17
Maass, Michael, Scherlis, William L., Aldrich, Jonathan.  2014.  In-nimbo Sandboxing. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :1:1–1:12.

Sandboxes impose a security policy, isolating applications and their components from the rest of a system. While many sandboxing techniques exist, state of the art sandboxes generally perform their functions within the system that is being defended. As a result, when the sandbox fails or is bypassed, the security of the surrounding system can no longer be assured. We experiment with the idea of in-nimbo sandboxing, encapsulating untrusted computations away from the system we are trying to protect. The idea is to delegate computations that may be vulnerable or malicious to virtual machine instances in a cloud computing environment. This may not reduce the possibility of an in-situ sandbox compromise, but it could significantly reduce the consequences should that possibility be realized. To achieve this advantage, there are additional requirements, including: (1) A regulated channel between the local and cloud environments that supports interaction with the encapsulated application, (2) Performance design that acceptably minimizes latencies in excess of the in-situ baseline. To test the feasibility of the idea, we built an in-nimbo sandbox for Adobe Reader, an application that historically has been subject to significant attacks. We undertook a prototype deployment with PDF users in a large aerospace firm. In addition to thwarting several examples of existing PDF-based malware, we found that the added increment of latency, perhaps surprisingly, does not overly impair the user experience with respect to performance or usability.
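
As an illustration of the regulated-channel requirement above, the sketch below (plain Python; the endpoint URL, message format, and function name are hypothetical, and the paper's Adobe Reader deployment is not described at this level of detail) forwards an untrusted PDF to a cloud instance and accepts back only raster image data, so a compromised remote renderer cannot hand executable content back to the protected system.

    # Minimal sketch of a "regulated channel" in the spirit of in-nimbo sandboxing.
    # The remote endpoint and message format are hypothetical.
    import base64
    import json
    import urllib.request

    RENDER_SERVICE = "https://sandbox-vm.example.com/render"  # hypothetical cloud VM endpoint

    def render_remotely(pdf_bytes: bytes, page: int) -> bytes:
        """Ship the untrusted PDF to the cloud instance; accept back only a raster image."""
        payload = json.dumps({
            "pdf": base64.b64encode(pdf_bytes).decode("ascii"),
            "page": page,
        }).encode("utf-8")
        req = urllib.request.Request(RENDER_SERVICE, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            reply = json.loads(resp.read())
        # The channel is "regulated": the only thing allowed back is pixel data,
        # so a compromise in the cloud cannot return executable content.
        return base64.b64decode(reply["png"])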

Davis, Agnes, Shashidharan, Ashwin, Liu, Qian, Enck, William, McLaughlin, Anne, Watson, Benjamin.  2014.  Insecure Behaviors on Mobile Devices Under Stress. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :31:1–31:2.

One of the biggest challenges in mobile security is human behavior. The most secure password may be useless if it is sent as a text or in an email. The most secure network is only as secure as its most careless user. Thus, in the current project we sought to discover the conditions under which users of mobile devices were most likely to make security errors. This scaffolds a larger project where we will develop automatic ways of detecting such environments and eventually supporting users during these times to encourage safe mobile behaviors.

Layman, Lucas, Zazworka, Nico.  2014.  InViz: Instant Visualization of Security Attacks. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :15:1–15:2.

The InViz tool is a functional prototype that provides graphical visualizations of log file events to support real-time attack investigation. Through visualization, both experts and novices in cybersecurity can analyze patterns of application behavior and investigate potential cybersecurity attacks. The goal of this research is to identify and evaluate the cybersecurity information whose visualization reduces the amount of time required to perform cyber forensics.

Rao, Ashwini, Hibshi, Hanan, Breaux, Travis, Lehker, Jean-Michel, Niu, Jianwei.  2014.  Less is More? Investigating the Role of Examples in Security Studies Using Analogical Transfer. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :7:1–7:12.

Information system developers and administrators often overlook critical security requirements and best practices. This may be due to lack of tools and techniques that allow practitioners to tailor security knowledge to their particular context. In order to explore the impact of new security methods, we must improve our ability to study the impact of security tools and methods on software and system development. In this paper, we present early findings of an experiment to assess the extent to which the number and type of examples used in security training stimuli can impact security problem solving. To motivate this research, we formulate hypotheses from analogical transfer theory in psychology. The independent variables include number of problem surfaces and schemas, and the dependent variable is the answer accuracy. Our study results do not show a statistically significant difference in performance when the number and types of examples are varied. We discuss the limitations, threats to validity and opportunities for future studies in this area.

Kästner, Christian, Pfeffer, Jürgen.  2014.  Limiting Recertification in Highly Configurable Systems: Analyzing Interactions and Isolation Among Configuration Options. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :23:1–23:2.

In highly configurable systems the configuration space is too big for (re-)certifying every configuration in isolation. In this project, we combine software analysis with network analysis to detect which configuration options interact and which have local effects. Instead of analyzing a system such as Linux and SELinux for every combination of configuration settings one by one (>10^2000 even considering compile-time configurations only), we analyze the effect of each configuration option once for the entire configuration space. The analysis will guide us to designs separating interacting configuration options in a core system and isolating orthogonal and less trusted configuration options from this core.
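
A toy sketch of what "interacting" configuration options means here: two options interact when the effect of enabling one depends on the state of the other. The analyze() function and option names below are hypothetical stand-ins for a real per-configuration analysis; the brute-force pairwise check is exactly what becomes infeasible at Linux scale, motivating the once-over-the-whole-configuration-space analysis described above.

    # Detect interacting options by comparing an option's effect alone vs. in context.
    from itertools import combinations

    OPTIONS = ["FEATURE_A", "FEATURE_B", "FEATURE_C"]

    def analyze(config: dict) -> int:
        # Hypothetical analysis result for one configuration (e.g., size of some attack surface).
        score = 0
        if config["FEATURE_A"]:
            score += 2
        if config["FEATURE_B"]:
            score += 3
        if config["FEATURE_A"] and config["FEATURE_C"]:
            score += 5  # A and C interact: their joint effect is not the sum of their parts
        return score

    def interacts(a: str, b: str) -> bool:
        base = {o: False for o in OPTIONS}
        def result(**on):
            return analyze(dict(base, **on))
        effect_of_a_alone = result(**{a: True}) - result()
        effect_of_a_with_b = result(**{a: True, b: True}) - result(**{b: True})
        return effect_of_a_alone != effect_of_a_with_b

    for a, b in combinations(OPTIONS, 2):
        print(a, b, "interact" if interacts(a, b) else "independent")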

King, Jason, Williams, Laurie.  2014.  Log Your CRUD: Design Principles for Software Logging Mechanisms. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :5:1–5:10.

According to a 2011 survey in healthcare, the most commonly reported breaches of protected health information involved employees snooping into medical records of friends and relatives. Logging mechanisms can provide a means for forensic analysis of user activity in software systems by proving that a user performed certain actions in the system. However, logging mechanisms often inconsistently capture user interactions with sensitive data, creating gaps in traces of user activity. Explicit design principles and systematic testing of logging mechanisms within the software development lifecycle may help strengthen the overall security of software. The objective of this research is to observe the current state of logging mechanisms by performing an exploratory case study in which we systematically evaluate logging mechanisms by supplementing the expected results of existing functional black-box test cases to include log output. We perform an exploratory case study of four open-source electronic health record (EHR) logging mechanisms: OpenEMR, OSCAR, Tolven eCHR, and WorldVistA. We supplement the expected results of 30 United States government-sanctioned test cases to include log output to track access of sensitive data. We then execute the test cases on each EHR system. Six of the 30 (20%) test cases failed on all four EHR systems because user interactions with sensitive data are not logged. We find that viewing protected data is often not logged by default, allowing unauthorized views of data to go undetected. Based on our results, we propose a set of principles that developers should consider when developing logging mechanisms to ensure the ability to capture adequate traces of user activity.
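
A minimal sketch of the kind of supplemented test case described above: the functional expectation is kept, and an expected log entry is added for a read of protected data. The EHR interface and log entry fields are hypothetical stand-ins, not the APIs of OpenEMR, OSCAR, Tolven eCHR, or WorldVistA.

    # Supplement a black-box test's expected results with expected log output.
    from dataclasses import dataclass

    @dataclass
    class LogEntry:
        user: str
        action: str
        patient_id: str

    class FakeEHR:
        def __init__(self):
            self.log = []
        def view_patient_record(self, user, patient_id):
            self.log.append(LogEntry(user, "VIEW", patient_id))  # a well-behaved system logs the read
            return {"patient_id": patient_id, "allergies": ["penicillin"]}

    def test_viewing_record_is_logged(ehr):
        record = ehr.view_patient_record(user="nurse01", patient_id="12345")
        assert record is not None                                   # original functional expectation
        # Supplemented expectation: viewing protected data must leave a trace.
        assert any(e.action == "VIEW" and e.user == "nurse01" and e.patient_id == "12345"
                   for e in ehr.log), "view of protected data was not logged"

    test_viewing_record_is_logged(FakeEHR())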

Liu, Qian, Bae, Juhee, Watson, Benjamin, McLaughlin, Anne, Enck, William.  2014.  Modeling and Sensing Risky User Behavior on Mobile Devices. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :33:1–33:2.

As mobile technology begins to dominate computing, understanding how its use impacts security becomes increasingly important. Fortunately, this challenge is also an opportunity: the rich set of sensors with which most mobile devices are equipped provides a rich contextual dataset, one that should enable mobile user behavior to be modeled well enough to predict when users are likely to act insecurely, and provide cognitively grounded explanations of those behaviors. We will evaluate this hypothesis with a series of experiments designed first to confirm that mobile sensor data can reliably predict user stress, and that users experiencing such stress are more likely to act insecurely.

2015-03-03
Smith, Andrew, Vorobeychik, Yevgeniy, Letchford, Joshua.  2014.  Multi-Defender Security Games on Networks. SIGMETRICS Perform. Eval. Rev. 41:4–7.

Stackelberg security game models and associated computational tools have seen deployment in a number of high-consequence security settings, such as LAX canine patrols and the Federal Air Marshal Service. This deployment across essentially independent agencies raises a natural question: what global impact does the resulting strategic interaction among the defenders, each using a similar model, have? We address this question in two ways. First, we demonstrate that the most common solution concept of Strong Stackelberg equilibrium (SSE) can result in significant under-investment in security entirely because SSE presupposes a single defender. Second, we propose a framework based on a different solution concept which incorporates a model of interdependencies among targets, and show that in this framework defenders tend to over-defend, even under significant positive externalities of increased defense.
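
For readers unfamiliar with the single-defender baseline the abstract refers to, the standard Strong Stackelberg equilibrium condition is sketched below in LaTeX; the paper's multi-defender solution concept is not reproduced here.

    % Standard single-defender Strong Stackelberg equilibrium (background only).
    % The defender commits to a mixed strategy x; the attacker observes x and best-responds.
    \begin{align*}
      x^{*} &\in \arg\max_{x \in X} \; U_d\bigl(x,\, b(x)\bigr), \\
      b(x)  &\in \arg\max_{a \in A} \; U_a(x, a),
      \quad\text{with ties broken in the defender's favor (the ``strong'' condition).}
    \end{align*}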

2014-09-17
Da, Gaofeng, Xu, Maochao, Xu, Shouhuai.  2014.  A New Approach to Modeling and Analyzing Security of Networked Systems. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :6:1–6:12.

Modeling and analyzing security of networked systems is an important problem in the emerging Science of Security and has been under active investigation. In this paper, we propose a new approach towards tackling the problem. Our approach is inspired by the shock model and random environment techniques in the Theory of Reliability, while accommodating security ingredients. To the best of our knowledge, our model is the first that can accommodate a certain degree of adaptiveness of attacks, which substantially weakens the often-made independence and exponential attack inter-arrival time assumptions. The approach leads to a stochastic process model with two security metrics, and we attain some analytic results in terms of the security metrics.

Feigenbaum, Joan, Jaggard, Aaron D., Wright, Rebecca N..  2014.  Open vs. Closed Systems for Accountability. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :4:1–4:11.

The relationship between accountability and identity in online life presents many interesting questions. Here, we first systematically survey the various (directed) relationships among principals, system identities (nyms) used by principals, and actions carried out by principals using those nyms. We also map these relationships to corresponding accountability-related properties from the literature. Because punishment is fundamental to accountability, we then focus on the relationship between punishment and the strength of the connection between principals and nyms. To study this particular relationship, we formulate a utility-theoretic framework that distinguishes between principals and the identities they may use to commit violations. In doing so, we argue that the analogue applicable to our setting of the well known concept of quasilinear utility is insufficiently rich to capture important properties such as reputation. We propose more general utilities with linear transfer that do seem suitable for this model. In our use of this framework, we define notions of "open" and "closed" systems. This distinction captures the degree to which system participants are required to be bound to their system identities as a condition of participating in the system. This allows us to study the relationship between the strength of identity binding and the accountability properties of a system.
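
One hedged reading of the utility-theoretic contrast drawn above, with the caveat that the paper's exact definitions are not given in the abstract: quasilinear utility adds transfers at face value, while a "utility with linear transfer" still lets the transfer enter linearly but through an agent-specific weight, leaving room to model effects such as reputation separately.

    % Quasilinear utility: outcome value plus transfer at face value.
    \[ u_i(o, t) \;=\; v_i(o) + t_i \]
    % One possible "utility with linear transfer" generalization (an assumption,
    % not the paper's stated definition): the transfer enters linearly through an
    % agent-specific weight, so non-monetary effects such as reputation can be
    % captured in v_i rather than forced into the transfer term.
    \[ u_i(o, t) \;=\; v_i(o) + \lambda_i\, t_i, \qquad \lambda_i > 0 \]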

Cao, Phuong, Li, Hongyang, Nahrstedt, Klara, Kalbarczyk, Zbigniew, Iyer, Ravishankar, Slagell, Adam J..  2014.  Personalized Password Guessing: A New Security Threat. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :22:1–22:2.

This paper presents a model for generating personalized passwords (i.e., passwords based on user and service profile). A user's password is generated from a list of personalized words, each drawn from a topic relating to the user and the service in use. The proposed model can be applied to: (i) assess the strength of a password (i.e., determine how many guesses are needed to crack the password), and (ii) generate secure (i.e., containing digits, special characters, or capitalized characters) yet easy-to-memorize passwords.
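
A toy sketch of the two uses listed above, assuming illustrative topic word lists and mangling rules rather than the paper's actual model: candidates are generated from personalized words, and the position of a target password in that candidate stream serves as a crude strength estimate.

    # Generate personalized candidates, then report how many guesses reach a target.
    from itertools import product

    topics = {
        "pets":    ["rex", "whiskers"],
        "service": ["mail", "bank"],
        "numbers": ["1", "123", "2014"],
    }

    def candidates():
        for pet, svc, num in product(topics["pets"], topics["service"], topics["numbers"]):
            for word in (pet + num, pet.capitalize() + num + "!",
                         svc + pet, svc.capitalize() + "_" + num):
                yield word

    def guesses_to_crack(password):
        for i, cand in enumerate(candidates(), start=1):
            if cand == password:
                return i
        return None  # not in the personalized search space

    print(guesses_to_crack("Rex2014!"))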

Tembe, Rucha, Zielinska, Olga, Liu, Yuqi, Hong, Kyung Wha, Murphy-Hill, Emerson, Mayhorn, Chris, Ge, Xi.  2014.  Phishing in International Waters: Exploring Cross-national Differences in Phishing Conceptualizations Between Chinese, Indian and American Samples. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :8:1–8:7.

One hundred sixty-four participants from the United States, India and China completed a survey designed to assess past phishing experiences and whether they engaged in certain online safety practices (e.g., reading a privacy policy). The study investigated participants' reported agreement regarding the characteristics of phishing attacks, types of media where phishing occurs and the consequences of phishing. A multivariate analysis of covariance indicated that there were significant differences in agreement regarding phishing characteristics, phishing consequences and types of media where phishing occurs for these three nationalities. Chronological age and education did not influence the agreement ratings; therefore, the samples were demographically equivalent with regard to these variables. A logistic regression analysis was conducted to analyze the categorical variables and nationality data. Results based on self-report data indicated that (1) Indians were more likely to be phished than Americans, (2) Americans took protective actions more frequently than Indians by destroying old documents, and (3) Americans were more likely to notice the "padlock" security icon than either Indian or Chinese respondents. The potential implications of these results are discussed in terms of designing culturally sensitive anti-phishing solutions.

Cao, Phuong, Chung, Key-whan, Kalbarczyk, Zbigniew, Iyer, Ravishankar, Slagell, Adam J..  2014.  Preemptive Intrusion Detection. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :21:1–21:2.

This paper presents a system named SPOT to achieve high accuracy and preemptive detection of attacks. We use security logs of real incidents that occurred over a six-year period at the National Center for Supercomputing Applications (NCSA) to evaluate SPOT. Our data consists of attacks that led directly to the target system being compromised, i.e., not detected in advance, either by the security analysts or by intrusion detection systems. Our approach can detect 75 percent of attacks as early as minutes to tens of hours before attack payloads are executed.
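
Purely as a generic illustration of "preemptive" detection (the abstract does not describe SPOT's actual model, so the stages and matching logic below are invented): an alert is raised once a user's logged events cover enough early attack stages, i.e., before the payload runs.

    # Alert on partial attack progressions in a log stream, before the final stage.
    STAGES = ["download_sensitive_uri", "new_ssh_key_added", "privilege_escalation", "payload_executed"]

    def alert_on_progress(events, threshold=2):
        """Yield one alert per user once their events cover `threshold` early stages."""
        progress, fired = {}, set()
        for user, action in events:
            if action in STAGES:
                progress.setdefault(user, set()).add(action)
            reached = [s for s in STAGES[:-1] if s in progress.get(user, set())]
            if len(reached) >= threshold and user not in fired:
                fired.add(user)
                yield user, reached

    events = [("alice", "login"), ("alice", "download_sensitive_uri"),
              ("alice", "new_ssh_key_added"), ("bob", "download_sensitive_uri")]
    print(list(alert_on_progress(events)))  # [('alice', ['download_sensitive_uri', 'new_ssh_key_added'])]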

Mitra, Sayan.  2014.  Proving Abstractions of Dynamical Systems Through Numerical Simulations. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :12:1–12:9.

A key question that arises in rigorous analysis of cyberphysical systems under attack involves establishing whether or not the attacked system deviates significantly from the ideal allowed behavior. This is the problem of deciding whether or not the ideal system is an abstraction of the attacked system. A quantitative variation of this question can capture how much the attacked system deviates from the ideal. Thus, algorithms for deciding abstraction relations can help measure the effect of attacks on cyberphysical systems and develop attack detection strategies. In this paper, we present a decision procedure for proving that one nonlinear dynamical system is a quantitative abstraction of another. Directly computing the reach sets of these nonlinear systems is undecidable in general, and reach set over-approximations do not give a direct way of proving abstraction. Our procedure uses (possibly inaccurate) numerical simulations and a model annotation to compute tight approximations of the observable behaviors of the system and then uses these approximations to decide on abstraction. We show that the procedure is sound and that it is guaranteed to terminate under reasonable robustness assumptions.
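
A crude numerical illustration of the question being decided, with toy dynamics standing in for a real cyberphysical model: do the attacked system's observable trajectories stay within a bound epsilon of the ideal system's? The paper's procedure additionally uses model annotations to make such simulation-based answers sound; that machinery is omitted here.

    # Compare simulated observable behavior of an "attacked" system against the ideal.
    import math

    def ideal(t):      # ideal system output at time t (toy dynamics)
        return math.exp(-t)

    def attacked(t):   # attacked system output at time t (toy perturbation)
        return math.exp(-t) + 0.05 * math.sin(3 * t)

    def within_abstraction(eps, horizon=10.0, step=0.01):
        t = 0.0
        while t <= horizon:
            if abs(attacked(t) - ideal(t)) > eps:
                return False
            t += step
        return True

    print(within_abstraction(eps=0.1))   # True: the deviation stays below 0.1
    print(within_abstraction(eps=0.01))  # False: the attack pushes beyond 0.01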

Escobar, Santiago, Meadows, Catherine, Meseguer, José, Santiago, Sonia.  2014.  A Rewriting-based Forwards Semantics for Maude-NPA. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :3:1–3:12.

The Maude-NRL Protocol Analyzer (Maude-NPA) is a tool for reasoning about the security of cryptographic protocols in which the cryptosystems satisfy different equational properties. It tries to find secrecy or authentication attacks by searching backwards from an insecure attack state pattern that may contain logical variables, in such a way that logical variables become properly instantiated in order to find an initial state. The execution mechanism for this logical reachability is narrowing modulo an equational theory. Although Maude-NPA also possesses a forwards semantics naturally derivable from the backwards semantics, it is not suitable for state space exploration or protocol simulation. In this paper we define an executable forwards semantics for Maude-NPA, instead of its usual backwards one, and restrict it to the case of concrete states, that is, to terms without logical variables. This case corresponds to standard rewriting modulo an equational theory. We prove soundness and completeness of the backwards narrowing-based semantics with respect to the rewriting-based forwards semantics. We show its effectiveness as an analysis method that complements the backwards analysis with new prototyping, simulation, and explicit-state model checking features by providing some experimental results.
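
To convey the flavor of the forwards, explicit-state execution style (this is not Maude syntax and not the Maude-NPA protocol model), the sketch below applies forward rewrite rules to concrete states with no logical variables and enumerates the reachable state space by breadth-first search.

    # Forward, explicit-state exploration: rules map a concrete state to successors.
    from collections import deque

    def rules(state):
        sent, received = state
        if not sent:
            yield (True, received)   # the initiator sends the message onto the network
        if sent and not received:
            yield (sent, True)       # the responder (or intruder) receives it

    def reachable(initial):
        seen, frontier = {initial}, deque([initial])
        while frontier:
            s = frontier.popleft()
            for nxt in rules(s):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen

    print(reachable((False, False)))  # {(False, False), (True, False), (True, True)}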

Durbeck, Lisa J. K., Athanas, Peter M., Macias, Nicholas J..  2014.  Secure-by-construction Composable Componentry for Network Processing. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :27:1–27:2.

Techniques commonly used for analyzing streaming video, audio, SIGINT, and network transmissions, at less-than-streaming rates, such as data decimation and ad-hoc sampling, can miss underlying structure, trends and specific events held in the data [3]. This work presents a secure-by-construction approach [7] for the upper-end data streams with rates from 10 to 100 Gigabits per second. The secure-by-construction approach strives to produce system security through the composition of individually secure hardware and software components. The proposed network processor can be used not only at data centers but also within networks and onboard embedded systems at the network periphery for a wide range of tasks, including preprocessing and data cleansing, signal encoding and compression, complex event processing, flow analysis, and other tasks related to collecting and analyzing streaming data. Our design employs a four-layer scalable hardware/software stack that can lead to inherently secure, easily constructed specialized high-speed stream processing. This work addresses the following contemporary problems: (1) There is a lack of hardware/software systems providing stream processing and data stream analysis operating at the target data rates; for high-rate streams the implementation options are limited: all-software solutions cannot attain the target rates [1]. GPUs and GPGPUs are also infeasible: they were not designed for I/O at 10-100 Gbps; they also have asymmetric resources for input and output and thus cannot be pipelined [4, 2], whereas custom chip-based solutions are costly and inflexible to changes, and FPGA-based solutions are historically hard to program [6]; (2) There is a distinct advantage to utilizing high-bandwidth or line-speed analytics to reduce time-to-discovery of information, particularly ones that can be pipelined together to conduct a series of processing tasks or data tests without impeding data rates; (3) There are potentially significant network infrastructure cost savings possible from compact and power-efficient analytic support deployed at the network periphery on the data source or one hop away; (4) There is a need for agile deployment in response to changing objectives; (5) There is an opportunity to constrain designs to use only secure components to achieve their specific objectives. We address these five problems in our stream processor design to provide secure, easily specified, low-latency, low-power 10-100 Gbps in-line processing on top of a commodity high-end FPGA-based hardware accelerator network processor. With a standard interface a user can snap together various filter blocks, like Legos™, to form a custom processing chain. The overall design is a four-layer solution in which the structurally lowest layer provides the vast computational power to process line-speed streaming packets, and the uppermost layer provides the agility to easily shape the system to the properties of a given application. Current work has focused on the design of the two lowest layers, highlighted in the design detail in Figure 1. The two layers shown in Figure 1 are the embeddable portion of the design; these layers, operating at up to 100 Gbps, capture both the low- and high-frequency components of a signal or stream, analyze them directly, and pass the lower-frequency components (residues) to the all-software upper layers, Layers 3 and 4; they also optionally supply the data-reduced output up to Layers 3 and 4 for additional processing.
Layer 1 is analogous to a systolic array of processors on which simple low-level functions or actions are chained in series [5]. Examples of tasks accomplished at the lowest layer are: (a) check to see if Field 3 of the packet is greater than 5, (b) count the number of X.75 packets, or (c) select individual fields from data packets. Layer 1 provides the lowest-latency, highest-throughput processing, analysis and data reduction, formulating raw facts from the stream; Layer 2, also accelerated in hardware and running at full network line rate, combines selected facts from Layer 1, forming a first level of information kernels. Layer 2 comprises a number of combiners intended to integrate facts extracted from Layer 1 for presentation to Layer 3. Still resident in FPGA hardware and hardware-accelerated, a Layer 2 combiner consists of state logic and soft-core microprocessors. Layer 3 runs in software on a host machine, and is essentially the bridge to the embeddable hardware; this layer exposes an API for the consumption of information kernels to create events and manage state. The generated events and state are also made available to an additional software Layer 4, supplying an interface to traditional software-based systems. As shown in the design detail, network data transitions systolically through Layer 1, through a series of lightweight processing filters that extract and/or modify packet contents. All filters have a similar interface: streams enter from the left, exit the right, and relevant facts are passed upward to Layer 2. The output at the end of the chain in Layer 1 shown in Figure 1 can be (a) left unconnected (for purely monitoring activities), (b) redirected into the network (for bent-pipe operations), or (c) passed to another identical processor, for extended processing on a given stream (scalability).
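
A behavioral sketch of the Layer 1 filter-chain interface just described: packets stream through a series of filters, each of which may pass the packet on and emit facts upward to a Layer 2 combiner. The real design runs in FPGA hardware at line rate; the class names and packet fields below are hypothetical.

    # Software sketch of a snap-together filter chain with facts passed "upward".
    class Filter:
        def process(self, packet):
            """Return (packet_or_None, list_of_facts)."""
            return packet, []

    class FieldThreshold(Filter):
        def __init__(self, field, limit):
            self.field, self.limit = field, limit
        def process(self, packet):
            over = packet.get(self.field, 0) > self.limit
            return packet, ([("over_limit", self.field, packet[self.field])] if over else [])

    class Counter(Filter):
        def __init__(self, proto):
            self.proto, self.count = proto, 0
        def process(self, packet):
            if packet.get("proto") == self.proto:
                self.count += 1
            return packet, [("count", self.proto, self.count)]

    def run_chain(chain, packets, combiner):
        for pkt in packets:
            for f in chain:
                pkt, facts = f.process(pkt)
                for fact in facts:
                    combiner(fact)   # facts go "upward" to Layer 2
                if pkt is None:      # a filter may drop the packet
                    break

    run_chain([FieldThreshold("field3", 5), Counter("X.75")],
              [{"field3": 7, "proto": "X.75"}, {"field3": 2, "proto": "IP"}],
              combiner=print)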

Yu, Xianqing, Ning, Peng, Vouk, Mladen A..  2014.  Securing Hadoop in Cloud. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :26:1–26:2.

Hadoop is a MapReduce implementation that rapidly processes data in parallel. The Cloud provides reliability, flexibility, scalability, elasticity and cost savings to customers. Moving Hadoop into the Cloud can be beneficial to Hadoop users. However, Hadoop has two vulnerabilities that can dramatically impact its security in a Cloud. The vulnerabilities are its overloaded authentication key, and the lack of fine-grained access control at the data access level. We propose and develop a security enhancement for Cloud-based Hadoop.

Khalaj, Ebrahim, Vanciu, Radu, Abi-Antoun, Marwan.  2014.  Is There Value in Reasoning About Security at the Architectural Level: A Comparative Evaluation. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :30:1–30:2.

We propose to build a benchmark with hand-selected test-cases from different equivalence classes, then to directly compare different approaches that make different tradeoffs to better understand which approaches find security vulnerabilities more effectively (better recall, better precision).

Ibrahim, Naseem.  2014.  Trustworthy Context-dependent Services. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :20:1–20:2.

With the wide popularity of Cloud Computing, Service-oriented Computing is becoming the de-facto approach for the development of distributed systems. This has introduced the issue of trustworthiness with respect to the services being provided. Service requesters are provided with a wide range of services that they can select from. Usually the service requester compares these services according to their cost and quality. One essential part of the quality of a service is the trustworthiness properties of such services. Traditional service models focus on service functionalities and cost when defining services. This paper introduces a new service model that extends traditional service models to support trustworthiness properties.
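
A minimal sketch of the idea of extending a service description beyond function and cost with explicit trustworthiness properties that a requester can compare during selection; the attribute names below are illustrative, not the paper's model.

    # A service descriptor carrying trustworthiness properties alongside cost.
    from dataclasses import dataclass, field

    @dataclass
    class ServiceDescription:
        name: str
        operations: list            # functional interface
        cost_per_call: float        # traditional selection criterion
        # Trustworthiness properties a requester can weigh alongside cost and quality:
        trust: dict = field(default_factory=lambda: {
            "availability": None,        # e.g. 0.999
            "data_integrity": None,      # e.g. "signed responses"
            "confidentiality": None,     # e.g. "TLS 1.3, data encrypted at rest"
            "provider_reputation": None,
        })

    svc = ServiceDescription("weather-forecast", ["getForecast"], cost_per_call=0.001)
    svc.trust["availability"] = 0.999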

Kurilova, Darya, Omar, Cyrus, Nistor, Ligia, Chung, Benjamin, Potanin, Alex, Aldrich, Jonathan.  2014.  Type-specific Languages to Fight Injection Attacks. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :18:1–18:2.

Injection vulnerabilities have topped rankings of the most critical web application vulnerabilities for several years [1, 2]. They can occur wherever user input may be erroneously executed as code. The injected input is typically aimed at gaining unauthorized access to the system or to private information within it, corrupting the system's data, or disturbing system availability. Injection vulnerabilities are tedious and difficult to prevent.
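
The vulnerability class in one small example: user input concatenated into a query string is executed as code, while a bound parameter stays data. The paper's own remedy is different in kind (type-specific languages that make the safe construction the natural one); the snippet below, using standard-library SQLite, only illustrates what is being prevented.

    # String concatenation executes attacker-controlled SQL; a bound parameter does not.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t'), ('bob', 'hunter2')")

    user_input = "alice' OR '1'='1"

    # Vulnerable: the input becomes part of the SQL program; the OR clause matches every row.
    leaky = conn.execute("SELECT secret FROM users WHERE name = '" + user_input + "'").fetchall()

    # Safe: the input is bound as a value, so it only ever matches a literal name.
    safe = conn.execute("SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()

    print(leaky)  # [('s3cr3t',), ('hunter2',)]
    print(safe)   # []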

2014-10-07
Maroti, Miklos, Kereskenyi, Robert, Kecskes, Tamas, Volgyesi, Peter, Ledeczi, Akos.  2014.  Online Collaborative Environment for Designing Complex Computational Systems. The International Conference on Computational Science (ICCS 2014).

Developers of information systems have always utilized various visual formalisms during the design process, albeit in an informal manner. Architecture diagrams, finite state machines, and signal flow graphs are just a few examples. Model Integrated Computing (MIC) is an approach that considers these design artifacts as first-class models and uses them to generate the system or subsystems automatically. Moreover, the same models can be used to analyze the system and generate test cases and documentation. MIC advocates the formal definition of these formalisms, called domain-specific modeling languages (DSML), via metamodeling and the automatic configuration of modeling tools from the metamodels. However, current MIC infrastructures are based on desktop applications that support a limited number of platforms, discourage concurrent design collaboration and are not scalable. This paper presents WebGME, a cloud- and web-based cyberinfrastructure to support the collaborative modeling, analysis, and synthesis of complex, large-scale scientific and engineering information systems. It facilitates interfacing with existing external tools, such as simulators and analysis tools, provides custom domain-specific visualization support, and enables the creation of automatic code generators.

Neema, Himanshu, Nine, Harmon, Hemingway, Graham, Sztipanovits, Janos, Karsai, Gabor.  2009.  Rapid Synthesis of Multi-Model Simulations for Computational Experiments in C2. Armed Forces Communications and Electronics Association - George Mason University Symposium.

Virtual evaluation of complex command and control concepts demands the use of heterogeneous simulation environments. Development challenges include how to integrate multiple simulation platforms with varying semantics and how to integrate simulation models and the complex interactions between them. While existing simulation frameworks may provide many of the required services needed to coordinate among multiple simulation platforms, they lack an overarching integration approach that connects and relates the semantics of heterogeneous domain models and their interactions. This paper outlines some of the challenges encountered in developing a command and control simulation environment and discusses our use of the GME meta-modeling tool-suite to create a model-based integration approach that allows for rapid synthesis of complex HLA-based simulation environments.

The research was conducted by the Institute for Software Integrated Systems at Vanderbilt University, in collaboration with George Mason University, the University of California at Berkeley, and the University of Arizona.