Biblio

2014-09-17
Feigenbaum, Joan, Jaggard, Aaron D., Wright, Rebecca N.  2014.  Open vs. Closed Systems for Accountability. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :4:1–4:11.

The relationship between accountability and identity in online life presents many interesting questions. Here, we first systematically survey the various (directed) relationships among principals, system identities (nyms) used by principals, and actions carried out by principals using those nyms. We also map these relationships to corresponding accountability-related properties from the literature. Because punishment is fundamental to accountability, we then focus on the relationship between punishment and the strength of the connection between principals and nyms. To study this particular relationship, we formulate a utility-theoretic framework that distinguishes between principals and the identities they may use to commit violations. In doing so, we argue that the analogue applicable to our setting of the well-known concept of quasilinear utility is insufficiently rich to capture important properties such as reputation. We propose more general utilities with linear transfer that do seem suitable for this model. In our use of this framework, we define notions of "open" and "closed" systems. This distinction captures the degree to which system participants are required to be bound to their system identities as a condition of participating in the system. This allows us to study the relationship between the strength of identity binding and the accountability properties of a system.
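
To make the framework's key distinction concrete, here is a schematic contrast (our notation, not the paper's exact definitions): under quasilinear utility a transfer t enters additively and independently of the outcome, while a linear-transfer utility lets the marginal value of the transfer depend on the outcome, which is one way effects such as reputation can enter.

    u_i(x, t) = v_i(x) + t                  % quasilinear: outcome-independent transfer
    u_i(x, t) = v_i(x) + \alpha_i(x)\, t    % linear transfer: outcome-dependent weight \alpha_i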

Cao, Phuong, Li, Hongyang, Nahrstedt, Klara, Kalbarczyk, Zbigniew, Iyer, Ravishankar, Slagell, Adam J.  2014.  Personalized Password Guessing: A New Security Threat. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :22:1–22:2.

This paper presents a model for generating personalized passwords (i.e., passwords based on a user's profile and the service in use). A user's password is generated from a list of personalized words, each drawn from a topic related to the user and the service in use. The proposed model can be applied to: (i) assess the strength of a password (i.e., determine how many guesses are needed to crack it), and (ii) generate secure (i.e., containing digits, special characters, or capitalized characters) yet easy-to-memorize passwords.
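
A minimal Python sketch of the guessing model's flavor (the topic lists, mangling rules, and function names are our illustrative assumptions, not the authors' implementation):

    import itertools

    # Hypothetical per-topic word lists mined from a user/service profile.
    TOPICS = {
        "pet":     ["rex", "whiskers"],
        "city":    ["chicago", "urbana"],
        "service": ["mail", "bank"],
    }

    def candidate_passwords(topics, max_words=2):
        """Yield guesses built from profile words plus simple mangling rules."""
        words = [w for ws in topics.values() for w in ws]
        for n in range(1, max_words + 1):
            for combo in itertools.permutations(words, n):
                base = "".join(combo)
                yield base
                yield base.capitalize() + "1"   # common digit suffix
                yield base + "!"                # common symbol suffix

    def guesses_until_cracked(password, topics, limit=10**6):
        """Crude strength estimate: the password's rank in the guess stream."""
        for i, guess in enumerate(candidate_passwords(topics), start=1):
            if guess == password:
                return i
            if i >= limit:
                return None     # not found within the guessing budget

    print(guesses_until_cracked("Rexchicago1", TOPICS))   # small rank => weak password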

Tembe, Rucha, Zielinska, Olga, Liu, Yuqi, Hong, Kyung Wha, Murphy-Hill, Emerson, Mayhorn, Chris, Ge, Xi.  2014.  Phishing in International Waters: Exploring Cross-national Differences in Phishing Conceptualizations Between Chinese, Indian and American Samples. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :8:1–8:7.

One hundred sixty-four participants from the United States, India and China completed a survey designed to assess past phishing experiences and whether they engaged in certain online safety practices (e.g., reading a privacy policy). The study investigated participants' reported agreement regarding the characteristics of phishing attacks, types of media where phishing occurs and the consequences of phishing. A multivariate analysis of covariance indicated that there were significant differences in agreement regarding phishing characteristics, phishing consequences and types of media where phishing occurs for these three nationalities. Chronological age and education did not influence the agreement ratings; therefore, the samples were demographically equivalent with regard to these variables. A logistic regression analysis was conducted to analyze the categorical variables and nationality data. Results based on self-report data indicated that (1) Indians were more likely to be phished than Americans, (2) Americans took protective actions more frequently than Indians by destroying old documents, and (3) Americans were more likely to notice the "padlock" security icon than either Indian or Chinese respondents. The potential implications of these results are discussed in terms of designing culturally sensitive anti-phishing solutions.
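
For readers unfamiliar with the analysis, a toy version of the logistic regression step might look like this in Python (synthetic stand-in data, not the study's dataset or exact model):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 164   # matches the study's sample size; the values below are synthetic
    df = pd.DataFrame({
        "phished":     rng.integers(0, 2, n),                 # 1 = reported being phished
        "nationality": rng.choice(["US", "India", "China"], n),
        "age":         rng.integers(18, 65, n),
    })

    # Logistic regression of phishing victimization on nationality,
    # controlling for age, analogous to the paper's categorical analysis.
    model = smf.logit("phished ~ C(nationality) + age", data=df).fit()
    print(model.summary())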

2015-01-11
Jain, S., Ta, T., Baras, J. S.  2014.  Physical Layer Methods for Privacy Provision in Distributed Control and Inference. Proceedings of the 53rd IEEE Conference on Decision and Control.
2014-09-17
Cao, Phuong, Chung, Key-whan, Kalbarczyk, Zbigniew, Iyer, Ravishankar, Slagell, Adam J.  2014.  Preemptive Intrusion Detection. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :21:1–21:2.

This paper presents a system named SPOT to achieve high accuracy and preemptive detection of attacks. We use security logs of real incidents that occurred over a six-year period at the National Center for Supercomputing Applications (NCSA) to evaluate SPOT. Our data consists of attacks that led directly to the target system being compromised, i.e., attacks that were not detected in advance either by security analysts or by intrusion detection systems. Our approach can detect 75 percent of attacks as early as minutes to tens of hours before attack payloads are executed.

Mitra, Sayan.  2014.  Proving Abstractions of Dynamical Systems Through Numerical Simulations. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :12:1–12:9.

A key question that arises in rigorous analysis of cyberphysical systems under attack involves establishing whether or not the attacked system deviates significantly from the ideal allowed behavior. This is the problem of deciding whether or not the ideal system is an abstraction of the attacked system. A quantitative variation of this question can capture how much the attacked system deviates from the ideal. Thus, algorithms for deciding abstraction relations can help measure the effect of attacks on cyberphysical systems and help develop attack detection strategies. In this paper, we present a decision procedure for proving that one nonlinear dynamical system is a quantitative abstraction of another. Directly computing the reach sets of these nonlinear systems is undecidable in general, and reach set over-approximations do not give a direct way of proving abstraction. Our procedure uses (possibly inaccurate) numerical simulations and a model annotation to compute tight approximations of the observable behaviors of the system and then uses these approximations to decide on abstraction. We show that the procedure is sound and that it is guaranteed to terminate under reasonable robustness assumptions.
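
Schematically (our notation; the paper's definitions are more refined), write Obs(.) for a system's set of observable behaviors and d for a metric on behaviors; then the two questions the procedure decides are:

    \mathrm{Obs}(B) \subseteq \mathrm{Obs}(A)
        \qquad \text{($A$ is an abstraction of $B$)}

    \forall \beta \in \mathrm{Obs}(B)\ \exists \alpha \in \mathrm{Obs}(A):\ d(\beta, \alpha) \le \varepsilon
        \qquad \text{($A$ is an $\varepsilon$-quantitative abstraction of $B$)}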

2017-02-10
Timothy Bretl, University of Illinois at Urbana-Champaign, Zoe McCarthy, University of Illinois at Urbana-Champaign.  2014.  Quasi-Static Manipulation of a Kirchhoff Elastic Rod Based on a Geometric Analysis of Equilibrium Configurations. International Journal of Robotics Research. 33(1)

Consider a thin, flexible wire of fixed length that is held at each end by a robotic gripper. Any curve traced by this wire when in static equilibrium is a local solution to a geometric optimal control problem, with boundary conditions that vary with the position and orientation of each gripper. We prove that the set of all local solutions to this problem over all possible boundary conditions is a smooth manifold of finite dimension that can be parameterized by a single chart. We show that this chart makes it easy to implement a sampling-based algorithm for quasi-static manipulation planning. We characterize the performance of such an algorithm with experiments in simulation.
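
To illustrate why a single global chart helps, here is a rough RRT-style planner operating directly in chart coordinates (Python; the equilibrium/collision test and coordinate bounds are hypothetical placeholders for the paper's geometric machinery):

    import math, random

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def step_toward(a, b, step):
        d = dist(a, b)
        if d < 1e-9:
            return tuple(a)
        return tuple(x + step * (y - x) / d for x, y in zip(a, b))

    def is_valid_equilibrium(q):
        # Placeholder for the paper's stability test plus a collision check;
        # here every sampled chart point is accepted.
        return True

    def path_to_root(tree, node):
        path = [node]
        while tree[path[-1]] is not None:
            path.append(tree[path[-1]])
        return list(reversed(path))

    def plan_quasi_static(start, goal, n_samples=1000, step=0.1):
        """RRT-style search directly in the chart's finite-dimensional coordinates."""
        tree = {tuple(start): None}           # node -> parent
        for _ in range(n_samples):
            target = tuple(random.uniform(-1, 1) for _ in start)
            nearest = min(tree, key=lambda q: dist(q, target))
            new = step_toward(nearest, target, step)
            if is_valid_equilibrium(new):
                tree[new] = nearest
                if dist(new, goal) < step:    # close enough to the goal coordinates
                    return path_to_root(tree, new)
        return None

    print(plan_quasi_static((0.0, 0.0), (0.5, 0.5)))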

2015-11-16
Cuong Pham, University of Illinois at Urbana-Champaign, Zachary J. Estrada, University of Illinois at Urbana-Champaign, Zbigniew Kalbarczyk, University of Illinois at Urbana-Champaign, Ravishankar K. Iyer, University of Illinois at Urbana-Champaign.  2014.  Reliability and Security Monitoring of Virtual Machines using Hardware Architectural Invariants. 44th International Conference on Dependable Systems and Networks.

This paper presents a solution that simultaneously addresses both reliability and security (RnS) in a monitoring framework. We identify the commonalities between reliability and security to guide the design of HyperTap, a hypervisor-level framework that efficiently supports both types of monitoring in virtualization environments. In HyperTap, the logging of system events and states is common across monitors and constitutes the core of the framework. The audit phase of each monitor is implemented and operated independently. In addition, HyperTap relies on hardware invariants to provide a strongly isolated root of trust. HyperTap uses active monitoring, which can be adapted to enforce a wide spectrum of RnS policies. We validate HyperTap by introducing three example monitors: Guest OS Hang Detection (GOSHD), Hidden RootKit Detection (HRKD), and Privilege Escalation Detection (PED). Our experiments with fault injection and real rootkits/exploits demonstrate that HyperTap provides robust monitoring with low performance overhead.

Winner of the William C. Carter Award for Best Paper based on PhD work and Best Paper Award voted by conference participants.

2014-10-24
Hibshi, Hanan, Slavin, Rocky, Niu, Jianwei, Breaux, Travis D.  2014.  Rethinking Security Requirements in RE Research.

As information security has become an increasing concern for software developers and users, requirements engineering (RE) researchers have brought new insight to security requirements. Security requirements aim to address security at the early stages of system design while accommodating the complex needs of different stakeholders. Meanwhile, other research communities, such as usable privacy and security, have also examined these requirements with the specialized goal of making security more usable for stakeholders, from product owners to system users and administrators. In this paper we report results from a literature survey comparing security requirements research from RE conferences with that from the Symposium on Usable Privacy and Security (SOUPS). We report similarities between the two research areas, such as common goals, technical definitions, research problems, and directions. Further, we clarify the differences between these two communities to understand how they can leverage each other's insights. From our analysis, we recommend new directions in security requirements research, mainly to expand the meaning of security requirements in RE to reflect the technological advancements that the broader field of security is experiencing. These recommendations to encourage cross-collaboration with other communities are not limited to the security requirements area; in fact, we believe they can be generalized to other areas of RE.

2014-09-17
Escobar, Santiago, Meadows, Catherine, Meseguer, José, Santiago, Sonia.  2014.  A Rewriting-based Forwards Semantics for Maude-NPA. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :3:1–3:12.

The Maude-NRL Protocol Analyzer (Maude-NPA) is a tool for reasoning about the security of cryptographic protocols in which the cryptosystems satisfy different equational properties. It tries to find secrecy or authentication attacks by searching backwards from an insecure attack state pattern that may contain logical variables, in such a way that logical variables become properly instantiated in order to find an initial state. The execution mechanism for this logical reachability is narrowing modulo an equational theory. Although Maude-NPA also possesses a forwards semantics naturally derivable from the backwards semantics, it is not suitable for state space exploration or protocol simulation. In this paper we define an executable forwards semantics for Maude-NPA, instead of its usual backwards one, and restrict it to the case of concrete states, that is, to terms without logical variables. This case corresponds to standard rewriting modulo an equational theory. We prove soundness and completeness of the backwards narrowing-based semantics with respect to the rewriting-based forwards semantics. We provide experimental results showing its effectiveness as an analysis method that complements the backwards analysis with new prototyping, simulation, and explicit-state model checking features.
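
The soundness and completeness claim can be pictured schematically (our notation, not the paper's): a concrete forwards execution by rewriting modulo the equational theory E reaches an instance of the attack pattern P exactly when backwards narrowing from P reaches an initial state.

    \exists\, \sigma:\; s_0 \;\to^{*}_{R/E}\; P\sigma
    \quad\Longleftrightarrow\quad
    P \;\rightsquigarrow^{*}_{R^{-1},E}\; s_0' \;\;\text{for some initial state } s_0'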

Durbeck, Lisa J. K., Athanas, Peter M., Macias, Nicholas J.  2014.  Secure-by-construction Composable Componentry for Network Processing. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :27:1–27:2.

Techniques commonly used for analyzing streaming video, audio, SIGINT, and network transmissions, at less-than-streaming rates, such as data decimation and ad-hoc sampling, can miss underlying structure, trends and specific events held in the data[3]. This work presents a secure-by-construction approach [7] for upper-end data streams with rates from 10 to 100 gigabits per second. The secure-by-construction approach strives to produce system security through the composition of individually secure hardware and software components. The proposed network processor can be used not only at data centers but also within networks and onboard embedded systems at the network periphery for a wide range of tasks, including preprocessing and data cleansing, signal encoding and compression, complex event processing, flow analysis, and other tasks related to collecting and analyzing streaming data. Our design employs a four-layer scalable hardware/software stack that can lead to inherently secure, easily constructed specialized high-speed stream processing. This work addresses the following contemporary problems: (1) There is a lack of hardware/software systems providing stream processing and data stream analysis operating at the target data rates; for high-rate streams the implementation options are limited: all-software solutions cannot attain the target rates[1]; GPUs and GPGPUs are also infeasible, since they were not designed for I/O at 10-100Gbps and have asymmetric resources for input and output and thus cannot be pipelined[4, 2]; custom chip-based solutions are costly and inflexible to changes; and FPGA-based solutions are historically hard to program[6]. (2) There is a distinct advantage to utilizing high-bandwidth or line-speed analytics to reduce time-to-discovery of information, particularly analytics that can be pipelined together to conduct a series of processing tasks or data tests without impeding data rates. (3) There are potentially significant network infrastructure cost savings possible from compact and power-efficient analytic support deployed at the network periphery, on the data source or one hop away. (4) There is a need for agile deployment in response to changing objectives. (5) There is an opportunity to constrain designs to use only secure components to achieve their specific objectives. We address these five problems in our stream processor design to provide secure, easily specified processing for low-latency, low-power 10-100Gbps in-line processing on top of a commodity high-end FPGA-based hardware accelerator network processor. With a standard interface a user can snap together various filter blocks, like Legos™, to form a custom processing chain. The overall design is a four-layer solution in which the structurally lowest layer provides the vast computational power to process line-speed streaming packets, and the uppermost layer provides the agility to easily shape the system to the properties of a given application. Current work has focused on the design of the two lowest layers, highlighted in the design detail in Figure 1. The two layers shown in Figure 1 are the embeddable portion of the design; these layers, operating at up to 100Gbps, capture both the low- and high-frequency components of a signal or stream, analyze them directly, and pass the lower-frequency components (residues) to the all-software upper layers, Layers 3 and 4; they also optionally supply the data-reduced output up to Layers 3 and 4 for additional processing.
Layer 1 is analogous to a systolic array of processors on which simple low-level functions or actions are chained in series[5]. Examples of tasks accomplished at the lowest layer are: (a) check whether Field 3 of the packet is greater than 5, (b) count the number of X.75 packets, or (c) select individual fields from data packets. Layer 1 provides the lowest-latency, highest-throughput processing, analysis and data reduction, formulating raw facts from the stream; Layer 2, also accelerated in hardware and running at full network line rate, combines selected facts from Layer 1, forming a first level of information kernels. Layer 2 comprises a number of combiners intended to integrate facts extracted from Layer 1 for presentation to Layer 3. Still resident in FPGA hardware and hardware-accelerated, a Layer 2 combiner consists of state logic and soft-core microprocessors. Layer 3 runs in software on a host machine and is essentially the bridge to the embeddable hardware; this layer exposes an API for the consumption of information kernels to create events and manage state. The generated events and state are also made available to an additional software Layer 4, supplying an interface to traditional software-based systems. As shown in the design detail, network data transitions systolically through Layer 1, through a series of lightweight processing filters that extract and/or modify packet contents. All filters have a similar interface: streams enter from the left, exit to the right, and relevant facts are passed upward to Layer 2. The output at the end of the Layer 1 chain shown in Figure 1 can be (a) left unconnected (for purely monitoring activities), (b) redirected into the network (for bent-pipe operations), or (c) passed to another identical processor for extended processing on a given stream (scalability).
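
In ordinary software terms, the snap-together filter interface might look like the following Python sketch (illustrative only; the real Layer 1 is FPGA hardware, and the field names are hypothetical):

    class Filter:
        """Common interface: packets stream through; facts are emitted upward."""
        def __init__(self):
            self.facts = []           # stands in for the upward path to Layer 2
        def process(self, packet):
            return packet             # default: pass the stream through unchanged

    class FieldThreshold(Filter):
        def __init__(self, field, threshold):
            super().__init__()
            self.field, self.threshold = field, threshold
        def process(self, packet):
            if packet.get(self.field, 0) > self.threshold:
                self.facts.append(("exceeds", self.field, packet[self.field]))
            return packet

    class ProtocolCounter(Filter):
        def __init__(self, protocol):
            super().__init__()
            self.protocol, self.count = protocol, 0
        def process(self, packet):
            if packet.get("protocol") == self.protocol:
                self.count += 1
            return packet

    def run_chain(filters, stream):
        for packet in stream:
            for f in filters:         # packets move left to right through the chain
                packet = f.process(packet)

    chain = [FieldThreshold("field3", 5), ProtocolCounter("X.75")]
    run_chain(chain, [{"field3": 7, "protocol": "X.75"}, {"field3": 2, "protocol": "IP"}])
    print(chain[0].facts, chain[1].count)   # [('exceeds', 'field3', 7)] 1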

Yu, Xianqing, Ning, Peng, Vouk, Mladen A.  2014.  Securing Hadoop in Cloud. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :26:1–26:2.

Hadoop is a map-reduce implementation that rapidly processes data in parallel. The Cloud provides reliability, flexibility, scalability, elasticity and cost savings to customers. Moving Hadoop into the Cloud can be beneficial to Hadoop users. However, Hadoop has two vulnerabilities that can dramatically impact its security in a Cloud: its overloaded authentication key and its lack of fine-grained access control at the data-access level. We propose and develop a security enhancement for Cloud-based Hadoop.

2015-01-11
Michael R. Clarkson, Bernd Finkbeiner, Masoud Koleini, Kristopher K. Micinski, Markus N. Rabe, César Sánchez.  2014.  Temporal Logics for Hyperproperties. Proc. Conference on Principles of Security and Trust. :265-284.
2014-09-17
Khalaj, Ebrahim, Vanciu, Radu, Abi-Antoun, Marwan.  2014.  Is There Value in Reasoning About Security at the Architectural Level: A Comparative Evaluation. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :30:1–30:2.

We propose to build a benchmark with hand-selected test cases from different equivalence classes, then to directly compare different approaches that make different tradeoffs, to better understand which approaches find security vulnerabilities more effectively (better recall, better precision).

2014-09-17
Ibrahim, Naseem.  2014.  Trustworthy Context-dependent Services. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :20:1–20:2.

With the wide popularity of Cloud Computing, Service-oriented Computing is becoming the de facto approach for the development of distributed systems. This has introduced the issue of trustworthiness with respect to the services being provided. Service requesters are presented with a wide range of services to select from, and usually compare these services according to their cost and quality. One essential part of a service's quality is its trustworthiness properties. Traditional service models focus on service functionalities and cost when defining services. This paper introduces a new service model that extends traditional service models to support trustworthiness properties.
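
One way to picture the proposed extension (our sketch, not the paper's formal model): a service description that carries trustworthiness properties alongside functionality and cost, so a requester can filter and rank on all three.

    from dataclasses import dataclass, field

    @dataclass
    class Service:
        name: str
        cost: float
        quality: float                              # e.g., aggregate QoS score in [0, 1]
        trust: dict = field(default_factory=dict)   # trustworthiness properties

    def select(services, min_trust):
        """Keep services meeting every required trust level, then rank by value."""
        ok = [s for s in services
              if all(s.trust.get(k, 0) >= v for k, v in min_trust.items())]
        return sorted(ok, key=lambda s: s.quality / s.cost, reverse=True)

    catalog = [
        Service("store-a", cost=5.0, quality=0.9, trust={"availability": 0.99, "audited": 1}),
        Service("store-b", cost=3.0, quality=0.8, trust={"availability": 0.90}),
    ]
    print(select(catalog, {"availability": 0.95, "audited": 1})[0].name)  # store-a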

Kurilova, Darya, Omar, Cyrus, Nistor, Ligia, Chung, Benjamin, Potanin, Alex, Aldrich, Jonathan.  2014.  Type-specific Languages to Fight Injection Attacks. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :18:1–18:2.

Injection vulnerabilities have topped rankings of the most critical web application vulnerabilities for several years [1, 2]. They can occur anywhere user input may be erroneously executed as code. The injected input is typically aimed at gaining unauthorized access to the system or to private information within it, corrupting the system's data, or disturbing system availability. Injection vulnerabilities are tedious and difficult to prevent.
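
The vulnerability class itself is easy to demonstrate; the Python sketch below shows a classic SQL injection and its parameterized fix (a generic illustration of the problem the paper targets, not the paper's type-specific language mechanism):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    user_input = "' OR '1'='1"   # attacker-controlled string

    # Vulnerable: user input is spliced into the query and executed as code.
    leaked = conn.execute(
        "SELECT secret FROM users WHERE name = '" + user_input + "'").fetchall()

    # Safe: a parameterized query keeps input as data, never as SQL syntax;
    # type-specific languages aim to make the safe form the natural one to write.
    safe = conn.execute(
        "SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()

    print(leaked)  # [('s3cret',)] -- the injection succeeded
    print(safe)    # [] -- the input was treated as a literal name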

2017-02-17
Biplab Deka, University of Illinois at Urbana-Champaign, Alex A. Birklykke, Aalborg University, Henry Duwe, University of Illinois at Urbana-Champaign, Vikash K. Mansinghka, Massachusetts Institute of Technology, Rakesh Kumar, University of Illinois at Urbana-Champaign.  2014.  Markov Chain Algorithms: A Template for Building Future Robust Low-power Systems. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences.

Although computational systems are looking towards post-CMOS devices in the pursuit of lower power, the expected inherent unreliability of such devices makes it difficult to design robust systems without additional power overheads for guaranteeing robustness. As such, algorithmic structures with an inherent ability to tolerate computational errors are of significant interest. We propose to cast applications as stochastic algorithms based on Markov chains (MCs), as such algorithms are both sufficiently general and tolerant to transition errors. We show with four example applications—Boolean satisfiability, sorting, low-density parity-check decoding and clustering—how applications can be cast as MC algorithms. Using algorithmic fault injection techniques, we demonstrate the robustness of these implementations to transition errors at high error rates. Based on these results, we make a case for using MCs as an algorithmic template for future robust low-power systems.
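
In the spirit of the paper's Boolean satisfiability example, here is a small Markov chain algorithm (Schöning-style random-walk SAT) with a knob that injects transition errors; the error model and parameters are our illustrative assumptions:

    import random

    def walksat(clauses, n_vars, max_flips=100000, error_rate=0.0):
        """Random-walk SAT: repeatedly flip a variable from an unsatisfied clause.

        error_rate models unreliable hardware: with that probability the flip
        lands on a random variable instead, yet the chain still tends to converge.
        """
        assign = [random.choice([True, False]) for _ in range(n_vars)]
        for _ in range(max_flips):
            unsat = [c for c in clauses
                     if not any(assign[abs(l) - 1] == (l > 0) for l in c)]
            if not unsat:
                return assign                    # satisfying assignment found
            var = abs(random.choice(random.choice(unsat))) - 1
            if random.random() < error_rate:     # injected transition error
                var = random.randrange(n_vars)
            assign[var] = not assign[var]
        return None

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3); literals as +/- indices.
    clauses = [(1, 2), (-1, 3), (-2, -3)]
    print(walksat(clauses, 3, error_rate=0.05))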

2014-10-07
Maroti, Miklos, Kereskenyi, Robert, Kecskes, Tamas, Volgyesi, Peter, Ledeczi, Akos.  2014.  Online Collaborative Environment for Designing Complex Computational Systems. The International Conference on Computational Science (ICCS 2014).

Developers of information systems have always utilized various visual formalisms during the design process, albeit in an informal manner. Architecture diagrams, finite state machines, and signal flow graphs are just a few examples. Model Integrated Computing (MIC) is an approach that considers these design artifacts as first class models and uses them to generate the system or subsystems automatically. Moreover, the same models can be used to analyze the system and generate test cases and documentation. MIC advocates the formal definition of these formalisms, called domain-specific modeling languages (DSML), via metamodeling and the automatic configuration of modeling tools from the metamodels. However, current MIC infrastructures are based on desktop applications that support a limited number of platforms, discourage concurrent design collaboration and are not scalable. This paper presents WebGME, a cloud- and web-based cyberinfrastructure to support the collaborative modeling, analysis, and synthesis of complex, large-scale scientific and engineering information systems. It facilitates interfacing with existing external tools, such as simulators and analysis tools; provides custom domain-specific visualization support; and enables the creation of automatic code generators.