Biblio

Found 4176 results

Filters: First Letter Of Last Name is M
2015-01-12
Maass, Michael, Scherlis, Bill, Aldrich, Jonathan.  2014.  In-Nimbo Sandboxing. Symposium and Bootcamp on the Science of Security (HotSOS), 2014.

Sandboxes impose a security policy, isolating applications and their components from the rest of a system. While many sandboxing techniques exist, state-of-the-art sandboxes generally perform their functions within the system that is being defended. As a result, when the sandbox fails or is bypassed, the security of the surrounding system can no longer be assured. We experiment with the idea of in-nimbo sandboxing, encapsulating untrusted computations away from the system we are trying to protect. The idea is to delegate computations that may be vulnerable or malicious to virtual machine instances in a cloud computing environment.

This may not reduce the possibility of an in-situ sandbox compromise, but it could significantly reduce the consequences should that possibility be realized. To achieve this advantage, there are additional requirements, including: (1) a regulated channel between the local and cloud environments that supports interaction with the encapsulated application, and (2) a performance design that acceptably minimizes latencies in excess of the in-situ baseline.

To test the feasibility of the idea, we built an in-nimbo sandbox for Adobe Reader, an application that historically has been subject to significant attacks. We undertook a prototype deployment with PDF users in a large aerospace firm. In addition to thwarting several examples of existing PDF-based malware, we found that the added increment of latency, perhaps surprisingly, does not overly impair the user experience.
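The regulated-channel requirement lends itself to a small illustration. The sketch below is a minimal, hypothetical client for such a channel: an untrusted PDF is shipped to a cloud sandbox VM and only a constrained rendering is allowed back. The endpoint URL, field names, and response format are assumptions, not the paper's deployed system.

```python
# A minimal sketch of the "regulated channel" idea: the untrusted document is
# shipped to a cloud sandbox VM and only a constrained result (here, a rendered
# page image) is allowed back, so the untrusted parsing happens off-host.
# The endpoint, field names, and result format are hypothetical.
import requests

SANDBOX_URL = "https://sandbox-vm.example.com/render"   # hypothetical in-nimbo service

def open_untrusted_pdf(path: str) -> bytes:
    """Delegate parsing/rendering to the remote sandbox; return only raster bytes."""
    with open(path, "rb") as f:
        resp = requests.post(SANDBOX_URL, files={"document": f}, timeout=30)
    resp.raise_for_status()
    if resp.headers.get("Content-Type") != "image/png":   # regulate what may come back
        raise ValueError("sandbox returned an unexpected payload type")
    return resp.content

# png_bytes = open_untrusted_pdf("untrusted.pdf")  # the local viewer then displays the image
```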

2014-09-17
Davis, Agnes, Shashidharan, Ashwin, Liu, Qian, Enck, William, McLaughlin, Anne, Watson, Benjamin.  2014.  Insecure Behaviors on Mobile Devices Under Stress. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :31:1–31:2.

One of the biggest challenges in mobile security is human behavior. The most secure password may be useless if it is sent as a text or in an email. The most secure network is only as secure as its most careless user. Thus, in the current project we sought to discover the conditions under which users of mobile devices were most likely to make security errors. This scaffolds a larger project where we will develop automatic ways of detecting such environments and eventually supporting users during these times to encourage safe mobile behaviors.

2015-11-17
Zhenqi Huang, University of Illinois at Urbana-Champaign, Chuchu Fan, University of Illinois at Urbana-Champaign, Alexandru Mereacre, University of Oxford, Sayan Mitra, University of Illinois at Urbana-Champaign, Marta Kwiatkowska, University of Oxford.  2014.  Invariant Verification of Nonlinear Hybrid Automata Networks of Cardiac Cells. 26th International Conference on Computer Aided Verification (CAV 2014).

Verification algorithms for networks of nonlinear hybrid automata (HA) can help us understand and control biological processes such as cardiac arrhythmia, formation of memory, and genetic regulation. We present an algorithm for over-approximating reach sets of networks of nonlinear HA which can be used for sound and relatively complete invariant checking. First, it uses automatically computed input-to-state discrepancy functions for the individual automata modules in the network A for constructing a low-dimensional model M. Simulations of both A and M are then used to compute the reach tubes for A. These techniques enable us to handle a challenging verification problem involving a network of cardiac cells, where each cell has four continuous variables and 29 locations. Our prototype tool can check bounded-time invariants for networks with 5 cells (20 continuous variables, 29^5 locations), typically in less than 15 minutes for reasonable time horizons. From the computed reach tubes we can infer biologically relevant properties of the network from a set of initial states.
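The simulation-plus-discrepancy idea in the abstract can be sketched in a few lines. The example below bloats a single numerical simulation by an assumed exponential discrepancy function to obtain a reach-tube over-approximation and then checks a bounded-time invariant on it; the dynamics, discrepancy rate, and thresholds are hypothetical stand-ins, not the cardiac-cell model or the authors' tool.

```python
import numpy as np

def f(x, u):
    """Hypothetical scalar nonlinear dynamics standing in for one cell module."""
    return -x**3 + u

def simulate(x0, u, dt, steps):
    """Single Euler simulation from one initial state (stand-in for a validated solver)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * f(xs[-1], u))
    return np.array(xs)

def reach_tube(x0_center, x0_radius, u, dt, steps, rate=-0.5):
    """Bloat one simulation into a reach-tube over-approximation using an assumed
    (contractive) exponential discrepancy function beta(t) = x0_radius * exp(rate * t)."""
    center = simulate(x0_center, u, dt, steps)
    t = np.arange(steps + 1) * dt
    radius = x0_radius * np.exp(rate * t)
    return center, radius

def bounded_time_invariant(center, radius, x_max=2.0):
    """Sound check of the invariant |x| <= x_max over the whole tube."""
    return bool(np.all(np.abs(center) + radius <= x_max))

center, radius = reach_tube(x0_center=0.5, x0_radius=0.05, u=0.0, dt=0.01, steps=200)
print("invariant holds on the computed tube:", bounded_time_invariant(center, radius))
```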

2014-10-24
Mezzour, Ghita, Carley, L. Richard, Carley, Kathleen M..  2014.  Longitudinal analysis of a large corpus of cyber threat descriptions. Journal of Computer Virology and Hacking Techniques. :1-12.

Online cyber threat descriptions are rich, but little research has attempted to systematically analyze these descriptions. In this paper, we process and analyze two of Symantec’s online threat description corpora. The Anti-Virus (AV) corpus contains descriptions of more than 12,400 threats detected by Symantec’s AV, and the Intrusion Prevention System (IPS) corpus contains descriptions of more than 2,700 attacks detected by Symantec’s IPS. In our analysis, we quantify the over-time evolution of threat severity and type in the corpora. We also assess the amount of time Symantec takes to release signatures for newly discovered threats. Our analysis indicates that a very small minority of threats in the AV corpus are high-severity, whereas the majority of attacks in the IPS corpus are high-severity. Moreover, we find that the prevalence of different threat types such as worms and viruses in the corpora varies considerably over time. Finally, we find that Symantec prioritizes releasing signatures for fast propagating threats.
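The kind of longitudinal quantification described here, severity and threat-type prevalence over time plus time-to-signature, can be sketched with a small pandas example. The column names and the four toy records below are assumptions; the Symantec corpora themselves are not reproduced.

```python
import pandas as pd

# Hypothetical corpus layout: one row per threat description, with the fields
# this kind of analysis would need; column names and values are assumptions.
records = pd.DataFrame({
    "discovered":  pd.to_datetime(["2006-03-01", "2007-07-15", "2008-02-10", "2008-11-03"]),
    "signature":   pd.to_datetime(["2006-03-04", "2007-07-15", "2008-02-20", "2008-11-05"]),
    "severity":    ["low", "high", "low", "high"],
    "threat_type": ["worm", "virus", "trojan", "worm"],
})

# Share of high-severity threats per year.
records["year"] = records["discovered"].dt.year
severity_by_year = (records["severity"] == "high").groupby(records["year"]).mean()

# Prevalence of each threat type over time.
type_by_year = records.groupby(["year", "threat_type"]).size().unstack(fill_value=0)

# Days between discovery and signature release.
records["days_to_signature"] = (records["signature"] - records["discovered"]).dt.days

print(severity_by_year)
print(type_by_year)
print(records["days_to_signature"].describe())
```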

2018-05-17
Gallagher, John C., Humphrey, Laura R., Matson, Eric.  2014.  Maintaining Model Consistency during In-Flight Adaptation in a Flapping-Wing Micro Air Vehicle. Robot Intelligence Technology and Applications 2: Results from the 2nd International Conference on Robot Intelligence Technology and Applications. :517–530.

Machine-learning and soft computation methods are often used to adapt and modify control systems for robotic, aerospace, and other electromechanical systems. Most often, those who use such methods of self-adaptation focus on issues related to efficacy of the solutions produced and efficiency of the computational methods harnessed to create them. Considered far less often are the effects of self-adaptation on Verification and Validation (V&V) of the systems in which they are used. Simply observing that a broken robotic or aerospace system seems to have been repaired is often not enough. Since self-adaptation can severely distort the relationships among system components, many V&V methods can quickly become useless. This paper will focus on a method by which one can interleave machine-learning and model consistency checks to not only improve system performance, but also to identify how those improvements modify the relationship between the system and its underlying model. Armed with such knowledge, it becomes possible to update the underlying model to maintain consistency between the real and modeled systems. We will focus on a specific application of this idea to maintaining model consistency for a simulated Flapping-Wing Micro Air Vehicle that uses machine learning to compensate for wing damage incurred while in flight. We will demonstrate that our method can detect the nature of the wing damage and update the underlying vehicle model to better reflect the operation of the system after learning. The paper will conclude with a discussion of potential future applications, including generalizing the technique to other vehicles and automating the generation of model consistency-testing hypotheses.
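The interleaving of learning and model-consistency checking can be illustrated with a toy loop: adapt a controller against a damaged plant, test whether the stale model still predicts the adapted system's behaviour, and update the model if it does not. The gains, noise levels, and the "wing damage as a gain drop" simplification below are assumptions, not the authors' flapping-wing controller or learning method.

```python
import numpy as np

rng = np.random.default_rng(0)

def plant_response(cmd, wing_gain):
    """Hypothetical damaged-vehicle response: a wing-gain drop stands in for wing damage."""
    return wing_gain * cmd + rng.normal(0.0, 0.02)

def model_response(cmd, model_gain):
    """Prediction of the (possibly stale) vehicle model."""
    return model_gain * cmd

def adapt_controller(target, wing_gain, steps=200, lr=0.1):
    """Toy learning loop: adjust the commanded amplitude until the plant hits the target."""
    cmd = 1.0
    for _ in range(steps):
        err = target - plant_response(cmd, wing_gain)
        cmd += lr * err
    return cmd

def model_consistent(cmd, wing_gain, model_gain, trials=50, tol=0.05):
    """Consistency check: does the model still predict the adapted system's behaviour?"""
    residuals = [plant_response(cmd, wing_gain) - model_response(cmd, model_gain)
                 for _ in range(trials)]
    return abs(np.mean(residuals)) < tol

model_gain, true_gain = 1.0, 0.7   # model assumes an undamaged wing; the real wing is damaged
cmd = adapt_controller(target=1.0, wing_gain=true_gain)

if not model_consistent(cmd, true_gain, model_gain):
    # Update the model so V&V arguments built on it remain meaningful.
    model_gain = np.mean([plant_response(cmd, true_gain) for _ in range(50)]) / cmd
print("updated model gain ≈", round(model_gain, 2))
```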

Goppert, James, Gallagher, John C., Hwang, Inseok, Matson, Eric.  2014.  Model Checking of a Flapping-Wing Micro-Air-Vehicle Trajectory Tracking Controller Subject to Disturbances. Robot Intelligence Technology and Applications 2: Results from the 2nd International Conference on Robot Intelligence Technology and Applications. :531–543.

This paper proposes a model checking method for a trajectory tracking controller for a flapping wing micro-air-vehicle (MAV) under disturbance. Due to the coupling of the continuous vehicle dynamics and the discrete guidance laws, the system is a hybrid system. Existing hybrid model checkers approximate the model by partitioning the continuous state space into invariant regions (flow pipes) through the use of reachable set computations. There are currently no efficient methods for accounting for unknown disturbances to the system. Neglecting disturbances for the trajectory tracking problem underestimates the reachable set and can fail to detect when the system would reach an unsafe condition. For linear systems, we propose the use of the H-infinity norm to augment the flow pipes and account for disturbances. We show that dynamic inversion can be coupled with our method to address the nonlinearities in the flapping-wing control system.
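A rough sketch of the disturbance-augmentation idea for the linear case: estimate the H-infinity norm of the (assumed) error dynamics by a frequency sweep, then inflate the disturbance-free flow-pipe radius by that gain times the disturbance bound. The matrices and bounds are made up, and the inflation rule is a simplification of the paper's construction.

```python
import numpy as np

# Hypothetical stable error dynamics  e_dot = A e + B w  for the tracking loop;
# the matrices below are made-up stand-ins, not the MAV model from the paper.
A = np.array([[0.0, 1.0],
              [-2.0, -1.5]])
B = np.array([[0.0],
              [1.0]])
C = np.eye(2)

def hinf_norm(A, B, C, w_grid=np.logspace(-2, 3, 2000)):
    """Approximate the H-infinity norm by sweeping the frequency response
    G(jw) = C (jwI - A)^-1 B and taking the largest singular value."""
    peak = 0.0
    for w in w_grid:
        G = C @ np.linalg.solve(1j * w * np.eye(A.shape[0]) - A, B)
        peak = max(peak, np.linalg.svd(G, compute_uv=False)[0])
    return peak

gamma = hinf_norm(A, B, C)
disturbance_bound = 0.1        # assumed bound on the unknown disturbance
nominal_radius = 0.2           # flow-pipe radius computed without disturbances

# Augment the flow pipe: inflate the disturbance-free tube by the worst-case
# amplification of the bounded disturbance (a simplification of the paper's idea).
augmented_radius = nominal_radius + gamma * disturbance_bound
print(f"H-inf norm ≈ {gamma:.3f}, augmented flow-pipe radius ≈ {augmented_radius:.3f}")
```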

2018-05-23
Pajic, M., Mangharam, R., Sokolsky, O., others.  2014.  Model-Driven Safety Analysis of Closed-Loop Medical Systems. IEEE Transactions on Industrial Informatics. 10:3–16.
2014-09-17
Liu, Qian, Bae, Juhee, Watson, Benjamin, McLaughhlin, Anne, Enck, William.  2014.  Modeling and Sensing Risky User Behavior on Mobile Devices. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :33:1–33:2.

As mobile technology begins to dominate computing, understanding how their use impacts security becomes increasingly important. Fortunately, this challenge is also an opportunity: the rich set of sensors with which most mobile devices are equipped provide a rich contextual dataset, one that should enable mobile user behavior to be modeled well enough to predict when users are likely to act insecurely, and provide cognitively grounded explanations of those behaviors. We will evaluate this hypothesis with a series of experiments designed first to confirm that mobile sensor data can reliably predict user stress, and that users experiencing such stress are more likely to act insecurely.

2015-05-06
El-Koujok, M., Benammar, M., Meskin, N., Al-Naemi, M., Langari, R..  2014.  Multiple Sensor Fault Diagnosis by Evolving Data-driven Approach. Inf. Sci. 259:346–358.

Sensors are indispensable components of modern plants and processes and their reliability is vital to ensure reliable and safe operation of complex systems. In this paper, the problem of design and development of a data-driven Multiple Sensor Fault Detection and Isolation (MSFDI) algorithm for nonlinear processes is investigated. The proposed scheme is based on an evolving multi-Takagi Sugeno framework in which each sensor output is estimated using a model derived from the available input/output measurement data. Our proposed MSFDI algorithm is applied to Continuous-Flow Stirred-Tank Reactor (CFSTR). Simulation results demonstrate and validate the performance capabilities of our proposed MSFDI algorithm.
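The residual-based detection and isolation logic underlying an MSFDI scheme can be sketched as follows; the per-sensor estimators here are fixed algebraic relations standing in for the paper's evolving multi-Takagi-Sugeno models, and the sensor relations, injected fault, and threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical plant with three redundant sensors; gains/offsets are assumptions.
t = np.linspace(0, 10, 500)
x = np.sin(t)                                  # underlying process variable
sensors = np.vstack([x,                        # sensor 0: unit gain
                     2.0 * x,                  # sensor 1: gain 2
                     x + 0.5])                 # sensor 2: offset 0.5
sensors += rng.normal(0.0, 0.01, sensors.shape)
sensors[1, 300:] += 1.0                        # inject a bias fault on sensor 1

# Map every sensor back to the process-variable scale (stand-in for the paper's
# data-driven per-sensor models), then estimate each sensor from the others.
mapped = np.vstack([sensors[0], sensors[1] / 2.0, sensors[2] - 0.5])

residuals = np.zeros_like(mapped)
for i in range(3):
    others = np.delete(mapped, i, axis=0)
    residuals[i] = np.abs(mapped[i] - others.mean(axis=0))

threshold = 0.35
for i in range(3):
    alarms = residuals[i] > threshold
    if alarms.any():
        print(f"sensor {i}: fault isolated starting at sample {np.argmax(alarms)}")
    else:
        print(f"sensor {i}: healthy")
```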

2015-05-05
Ma, J., Zhang, T., Dong, M..  2014.  A Novel ECG Data Compression Method Using Adaptive Fourier Decomposition with Security Guarantee in e-Health Applications. IEEE Journal of Biomedical and Health Informatics. PP:1-1.

This paper presents a novel electrocardiogram (ECG) compression method for e-health applications by adapting an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: first stage AFD executes efficient lossy compression with high fidelity; second stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from MIT-BIH arrhythmia benchmark database, the proposed method achieves averaged compression ratio (CR) of 17.6-44.5 and percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing forward the compression performance to an unexploited region. As such, this paper provides an attractive candidate of ECG compression method for pervasive e-health applications.
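The CR and PRD figures quoted above are straightforward to compute once an original and a reconstructed record are available. The sketch below uses FFT-coefficient thresholding as a crude stand-in for AFD (it is not the paper's AFD+SS method) purely to show how the two metrics are evaluated.

```python
import numpy as np

# Synthetic "ECG-like" test signal; the waveform and retained-coefficient count
# are assumptions for illustration only.
rng = np.random.default_rng(0)
t = np.linspace(0, 2, 720)
ecg = (np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 8 * t)
       + 0.02 * rng.normal(size=t.size))

coeffs = np.fft.rfft(ecg)
k = 24                                        # number of retained coefficients (assumed)
keep = np.argsort(np.abs(coeffs))[-k:]
compressed = np.zeros_like(coeffs)
compressed[keep] = coeffs[keep]
reconstructed = np.fft.irfft(compressed, n=ecg.size)

# CR: original sample count vs. retained (index, value) pairs.
cr = ecg.size / (2 * k)
# PRD: normalized root-mean-square reconstruction error, in percent.
prd = 100 * np.linalg.norm(ecg - reconstructed) / np.linalg.norm(ecg)
print(f"CR ≈ {cr:.1f}, PRD ≈ {prd:.2f}%")
```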

2018-05-27
Ziming Zhang, Yuting Chen, Venkatesh Saligrama.  2014.  A Novel Visual Word Co-occurrence Model for Person Re-identification. Computer Vision - ECCV 2014 Workshops - Zurich, Switzerland, September 6-7 and 12, 2014, Proceedings, Part III. 8927:122–133.
2014-09-17
Tembe, Rucha, Zielinska, Olga, Liu, Yuqi, Hong, Kyung Wha, Murphy-Hill, Emerson, Mayhorn, Chris, Ge, Xi.  2014.  Phishing in International Waters: Exploring Cross-national Differences in Phishing Conceptualizations Between Chinese, Indian and American Samples. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :8:1–8:7.

One hundred sixty-four participants from the United States, India and China completed a survey designed to assess past phishing experiences and whether they engaged in certain online safety practices (e.g., reading a privacy policy). The study investigated participants' reported agreement regarding the characteristics of phishing attacks, types of media where phishing occurs and the consequences of phishing. A multivariate analysis of covariance indicated that there were significant differences in agreement regarding phishing characteristics, phishing consequences and types of media where phishing occurs for these three nationalities. Chronological age and education did not influence the agreement ratings; therefore, the samples were demographically equivalent with regards to these variables. A logistic regression analysis was conducted to analyze the categorical variables and nationality data. Results based on self-report data indicated that (1) Indians were more likely to be phished than Americans, (2) Americans took protective actions more frequently than Indians by destroying old documents, and (3) Americans were more likely to notice the "padlock" security icon than either Indian or Chinese respondents. The potential implications of these results are discussed in terms of designing culturally sensitive anti-phishing solutions.

Mitra, Sayan.  2014.  Proving Abstractions of Dynamical Systems Through Numerical Simulations. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :12:1–12:9.

A key question that arises in rigorous analysis of cyberphysical systems under attack involves establishing whether or not the attacked system deviates significantly from the ideal allowed behavior. This is the problem of deciding whether or not the ideal system is an abstraction of the attacked system. A quantitative variation of this question can capture how much the attacked system deviates from the ideal. Thus, algorithms for deciding abstraction relations can help measure the effect of attacks on cyberphysical systems and to develop attack detection strategies. In this paper, we present a decision procedure for proving that one nonlinear dynamical system is a quantitative abstraction of another. Directly computing the reach sets of these nonlinear systems is undecidable in general, and reach set over-approximations do not give a direct way for proving abstraction. Our procedure uses (possibly inaccurate) numerical simulations and a model annotation to compute tight approximations of the observable behaviors of the system and then uses these approximations to decide on abstraction. We show that the procedure is sound and that it is guaranteed to terminate under reasonable robustness assumptions.
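The decision idea, bloat simulations of the candidate abstraction and check that the other system's observable behaviour stays inside the resulting tube up to a quantitative slack, can be sketched as below. The two scalar "systems", the bloat factor, and the sampling of initial states are all hypothetical.

```python
import numpy as np

def ideal(x0, t):
    """Ideal (abstract) system: observable output from initial state x0."""
    return np.exp(-t) * x0

def attacked(x0, t):
    """Attacked system: the same dynamics plus a bounded perturbation."""
    return np.exp(-t) * x0 + 0.05 * np.sin(5 * t)

def is_abstraction(eps, bloat=0.02, x0_samples=np.linspace(-1, 1, 21)):
    """Check, from sampled initial states and simulations, whether the attacked
    system's observable behaviour stays within the ideal system's bloated tube
    plus a quantitative slack eps. The bloat term stands in for the annotation-
    derived approximation error."""
    t = np.linspace(0, 5, 200)
    for x0 in x0_samples:
        deviation = np.abs(attacked(x0, t) - ideal(x0, t))
        if np.any(deviation > bloat + eps):
            return False
    return True

print("ideal abstracts attacked within eps=0.1: ", is_abstraction(eps=0.1))
print("ideal abstracts attacked within eps=0.01:", is_abstraction(eps=0.01))
```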

2018-05-23
Lian Duan, Sanjai Rayadurgam, Mats Per Erik Heimdahl, Anaheed Ayoub, Oleg Sokolsky, Insup Lee.  2014.  Reasoning About Confidence and Uncertainty in Assurance Cases: A Survey. Software Engineering in Health Care - 4th International Symposium, FHIES 2014, and 6th International Workshop, SEHC 2014, Washington, DC, USA, July 17-18, 2014, Revised Selected Papers. :64–80.
2019-05-30
Mark Yampolskiy, Yevgeniy Vorobeychik, Xenofon Koutsoukos, Peter Horvath, Heath LeBlanc, Janos Sztipanovits.  2014.  Resilient Distributed Consensus for Tree Topology. 3rd ACM International Conference on High Confidence Networked Systems (HiCoNS 2014).

Distributed consensus protocols are an important class of distributed algorithms. Recently, an Adversarial Resilient Consensus Protocol (ARC-P) has been proposed which is capable of achieving consensus despite false information provided by a limited number of malicious nodes. In order to withstand false information, this algorithm requires a mesh-like topology, so that multiple alternative information flow paths exist. However, these assumptions are not always valid. For instance, in Smart Grid, an emerging distributed CPS, the node connectivity is expected to resemble the scale-free network topology. Especially closer to the end customer, in home and building area networks, the connectivity graph resembles a tree structure.

In this paper, we propose a Range-based Adversary Resilient Consensus Protocol (R.ARC-P). Three aspects distinguish R.ARC-P from its predecessor: this protocol operates on the tree topology, it distinguishes between trustworthiness of nodes in the immediate neighborhood, and it uses a valid value range in order to reduce the number of nodes considered as outliers. R.ARC-P is capable of reaching global consensus among all genuine nodes in the tree if assumptions about the maximal number of malicious nodes in the neighborhood hold. In the case that this assumption is wrong, it is still possible to reach Strong Partial Consensus, i.e., consensus between leaves of at least two different parents.
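A much-simplified round of the two filtering ideas in the abstract, discard neighbour values outside a valid range and then trim the most extreme of the rest, is sketched below on a small tree. The tree, the range, and the local adversary bound F are assumptions; this is not the R.ARC-P protocol itself.

```python
# Simplified range-based, trimming consensus on a tree. Node 4 is malicious and
# keeps broadcasting an out-of-range value; genuine nodes filter and trim it out.
tree = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}   # adjacency (node -> neighbours)
values = {0: 10.0, 1: 11.0, 2: 9.5, 3: 10.5, 4: 100.0}     # initial states
VALID_RANGE = (0.0, 50.0)      # assumed valid value range
F = 1                          # assumed bound on malicious nodes per neighbourhood
MALICIOUS = {4}

def consensus_round(tree, values):
    new = {}
    for node, neighbours in tree.items():
        # Keep only neighbour values inside the valid range, plus the node's own value.
        candidates = [values[n] for n in neighbours
                      if VALID_RANGE[0] <= values[n] <= VALID_RANGE[1]]
        candidates.append(values[node])
        candidates.sort()
        # Trim the F smallest and F largest of the remaining values, if enough remain.
        trimmed = candidates[F:-F] if len(candidates) > 2 * F else candidates
        new[node] = sum(trimmed) / len(trimmed)
    return new

state = dict(values)
for _ in range(10):
    updated = consensus_round(tree, state)
    # Malicious nodes ignore the protocol and keep their own value.
    state = {n: (state[n] if n in MALICIOUS else updated[n]) for n in tree}

print({n: round(v, 2) for n, v in state.items()})   # genuine nodes converge near 10
```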

2014-09-17
Escobar, Santiago, Meadows, Catherine, Meseguer, José, Santiago, Sonia.  2014.  A Rewriting-based Forwards Semantics for Maude-NPA. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :3:1–3:12.

The Maude-NRL Protocol Analyzer (Maude-NPA) is a tool for reasoning about the security of cryptographic protocols in which the cryptosystems satisfy different equational properties. It tries to find secrecy or authentication attacks by searching backwards from an insecure attack state pattern that may contain logical variables, in such a way that logical variables become properly instantiated in order to find an initial state. The execution mechanism for this logical reachability is narrowing modulo an equational theory. Although Maude-NPA also possesses a forwards semantics naturally derivable from the backwards semantics, it is not suitable for state space exploration or protocol simulation. In this paper we define an executable forwards semantics for Maude-NPA, instead of its usual backwards one, and restrict it to the case of concrete states, that is, to terms without logical variables. This case corresponds to standard rewriting modulo an equational theory. We prove soundness and completeness of the backwards narrowing-based semantics with respect to the rewriting-based forwards semantics. We show its effectiveness as an analysis method that complements the backwards analysis with new prototyping, simulation, and explicit-state model checking features by providing some experimental results.

2018-05-23
M. Pajic, Z. Jiang, O. Sokolsky, I. Lee, R. Mangharam.  2014.  Safety-critical Medical Device Development using the UPP2SF Model Translation Tool. ACM Transactions on Embedded Computing Systems. 13(4s).
2014-09-17
Durbeck, Lisa J. K., Athanas, Peter M., Macias, Nicholas J..  2014.  Secure-by-construction Composable Componentry for Network Processing. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :27:1–27:2.

Techniques commonly used for analyzing streaming video, audio, SIGINT, and network transmissions, at less-than-streaming rates, such as data decimation and ad-hoc sampling, can miss underlying structure, trends and specific events held in the data[3]. This work presents a secure-by-construction approach [7] for the upper-end data streams with rates from 10- to 100 Gigabits per second. The secure-by-construction approach strives to produce system security through the composition of individually secure hardware and software components. The proposed network processor can be used not only at data centers but also within networks and onboard embedded systems at the network periphery for a wide range of tasks, including preprocessing and data cleansing, signal encoding and compression, complex event processing, flow analysis, and other tasks related to collecting and analyzing streaming data. Our design employs a four-layer scalable hardware/software stack that can lead to inherently secure, easily constructed specialized high-speed stream processing. This work addresses the following contemporary problems: (1) There is a lack of hardware/software systems providing stream processing and data stream analysis operating at the target data rates; for high-rate streams the implementation options are limited: all-software solutions can't attain the target rates[1]. GPUs and GPGPUs are also infeasible: they were not designed for I/O at 10-100Gbps; they also have asymmetric resources for input and output and thus cannot be pipelined[4, 2], whereas custom chip-based solutions are costly and inflexible to changes, and FPGA-based solutions are historically hard to program[6]; (2) There is a distinct advantage to utilizing high-bandwidth or line-speed analytics to reduce time-to-discovery of information, particularly ones that can be pipelined together to conduct a series of processing tasks or data tests without impeding data rates; (3) There is potentially significant network infrastructure cost savings possible from compact and power-efficient analytic support deployed at the network periphery on the data source or one hop away; (4) There is a need for agile deployment in response to changing objectives; (5) There is an opportunity to constrain designs to use only secure components to achieve their specific objectives. We address these five problems in our stream processor design to provide secure, easily specified processing for low-latency, low-power 10-100Gbps in-line processing on top of a commodity high-end FPGA-based hardware accelerator network processor. With a standard interface a user can snap together various filter blocks, like Legos™, to form a custom processing chain. The overall design is a four-layer solution in which the structurally lowest layer provides the vast computational power to process line-speed streaming packets, and the uppermost layer provides the agility to easily shape the system to the properties of a given application. Current work has focused on design of the two lowest layers, highlighted in the design detail in Figure 1. The two layers shown in Figure 1 are the embeddable portion of the design; these layers, operating at up to 100Gbps, capture both the low- and high frequency components of a signal or stream, analyze them directly, and pass the lower frequency components, residues to the all-software upper layers, Layers 3 and 4; they also optionally supply the data-reduced output up to Layers 3 and 4 for additional processing. 
Layer 1 is analogous to a systolic array of processors on which simple low-level functions or actions are chained in series[5]. Examples of tasks accomplished at the lowest layer are: (a) check to see if Field 3 of the packet is greater than 5, or (b) count the number of X.75 packets, or (c) select individual fields from data packets. Layer 1 provides the lowest latency, highest throughput processing, analysis and data reduction, formulating raw facts from the stream; Layer 2, also accelerated in hardware and running at full network line rate, combines selected facts from Layer 1, forming a first level of information kernels. Layer 2 is comprised of a number of combiners intended to integrate facts extracted from Layer 1 for presentation to Layer 3. Still resident in FPGA hardware and hardware-accelerated, a Layer 2 combiner is comprised of state logic and soft-core microprocessors. Layer 3 runs in software on a host machine, and is essentially the bridge to the embeddable hardware; this layer exposes an API for the consumption of information kernels to create events and manage state. The generated events and state are also made available to an additional software Layer 4, supplying an interface to traditional software-based systems. As shown in the design detail, network data transitions systolically through Layer 1, through a series of light-weight processing filters that extract and/or modify packet contents. All filters have a similar interface: streams enter from the left, exit the right, and relevant facts are passed upward to Layer 2. The output of the end of the chain in Layer 1 shown in the Figure 1 can be (a) left unconnected (for purely monitoring activities), (b) redirected into the network (for bent pipe operations), or (c) passed to another identical processor, for extended processing on a given stream (scalability).
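The "snap-together" filter interface of Layer 1 has a natural software analogue, sketched below: each block consumes a packet stream, passes it along unchanged, and reports extracted facts upward. The packet format, filter names, and Python realization are illustrative assumptions, not the FPGA design.

```python
# A small software analogue of the Layer-1 filter chain described above: each
# block takes a packet stream in, passes it on, and reports facts "upward".
from dataclasses import dataclass, field
from typing import Callable, Iterable, Optional

@dataclass
class Packet:
    protocol: str
    fields: dict

@dataclass
class Filter:
    name: str
    action: Callable[[Packet], Optional[dict]]   # returns a fact to pass upward, or None
    facts: list = field(default_factory=list)

    def process(self, stream: Iterable[Packet]):
        for pkt in stream:
            fact = self.action(pkt)
            if fact:
                self.facts.append(fact)          # "upward" to a Layer-2 combiner
            yield pkt                            # stream continues to the next block

# Example blocks analogous to the abstract's tasks (a) and (b).
field3_check = Filter(
    "field3>5",
    lambda p: {"field3_big": True} if p.fields.get("field3", 0) > 5 else None)
x75_counter = Filter("count_x75", lambda p: {"x75": 1} if p.protocol == "X.75" else None)

packets = [Packet("X.75", {"field3": 7}), Packet("IP", {"field3": 2}),
           Packet("X.75", {"field3": 1})]
for _ in x75_counter.process(field3_check.process(iter(packets))):
    pass
print(len(field3_check.facts), "packets with field3 > 5;",
      len(x75_counter.facts), "X.75 packets")
```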

2015-10-11
Xianqing Yu, Peng Ning, Mladen A. Vouk.  2014.  Securing Hadoop in cloud. HotSoS 2014 Symposium and Bootcamp on the Science of Security. :ArticleNo.26.

Hadoop is a map-reduce implementation that rapidly processes data in parallel. Cloud provides reliability, flexibility, scalability, elasticity and cost saving to customers. Moving Hadoop into Cloud can be beneficial to Hadoop users. However, Hadoop has two vulnerabilities that can dramatically impact its security in a Cloud. The vulnerabilities are its overloaded authentication key, and the lack of fine-grained access control at the data access level. We propose and develop a security enhancement for Cloud-based Hadoop.

2015-05-01