Biblio

2017-03-20
Johnston, Reece, Kim, Sun-il, Coe, David, Etzkorn, Letha, Kulick, Jeffrey, Milenkovic, Aleksandar.  2016.  Xen Network Flow Analysis for Intrusion Detection. Proceedings of the 11th Annual Cyber and Information Security Research Conference. :18:1–18:4.

Virtualization technology has become ubiquitous in the computing world. With it, a number of security concerns have been amplified as users run adjacently on a single host. In order to prevent attacks from both internal and external sources, the networking of such systems must be secured. Network intrusion detection systems (NIDSs) are an important tool for aiding this effort. These systems work by analyzing flow or packet information to determine malicious intent. However, implementing a NIDS on a virtualized system is difficult due to the system's complexity. This is especially true for the Xen hypervisor: Xen implementations are highly heterogeneous, making a generic solution difficult. In this paper, we analyze the network data flow of a typical Xen implementation and identify features common to any implementation. We then explore the benefits of placing security checks along the data flow and promote a solution within the hypervisor itself.

2017-10-18
Gris, Ivan, Rivera, Diego A., Rayon, Alex, Camacho, Adriana, Novick, David.  2016.  Young Merlin: An Embodied Conversational Agent in Virtual Reality. Proceedings of the 18th ACM International Conference on Multimodal Interaction. :425–426.

This paper describes a system for embodied conversational agents developed by Inmerssion and one of the applications, Young Merlin: Trial by Fire, built with this system. In the Merlin application, the ECA and a human interact through speech in virtual reality. The goal of this application is to provide engaging VR experiences that build rapport through storytelling and verbal interactions. The agent is fully automated, and his attitude towards the user changes over time depending on the interaction. The conversational system was built through a declarative approach that supports animations, markup language, and gesture recognition. Future versions of Merlin will implement multi-character dialogs, additional actions, and extended interaction time.

2017-05-16
Yang, Yang, Luo, Yadan, Chen, Weilun, Shen, Fumin, Shao, Jie, Shen, Heng Tao.  2016.  Zero-Shot Hashing via Transferring Supervised Knowledge. Proceedings of the 2016 ACM on Multimedia Conference. :1286–1295.

Hashing has shown its efficiency and effectiveness in facilitating large-scale multimedia applications. Supervised knowledge (e.g., semantic labels or pair-wise relationships) associated with data can significantly improve the quality of hash codes and hash functions. However, confronted with the rapid growth of newly-emerging concepts and multimedia data on the Web, existing supervised hashing approaches may easily suffer from the scarcity and validity of supervised information due to the expensive cost of manual labelling. In this paper, we propose a novel hashing scheme, termed zero-shot hashing (ZSH), which compresses images of "unseen" categories to binary codes with hash functions learned from limited training data of "seen" categories. Specifically, we project independent data labels (i.e., 0/1-form label vectors) into a semantic embedding space, where semantic relationships among all the labels can be precisely characterized and supervised knowledge from seen classes can thus be transferred to unseen classes. Moreover, to cope with the semantic shift problem, we rotate the embedded space to better align the embedded semantics with the low-level visual feature space, thereby alleviating the influence of the semantic gap. Meanwhile, to learn high-quality hash functions, we further propose to preserve local structural properties and the discrete nature of binary codes. We also develop an efficient alternating algorithm to solve the ZSH model. Extensive experiments on various real-life datasets show the superior zero-shot image retrieval performance of ZSH compared to several state-of-the-art hashing methods.
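As a rough illustration of the binary-code retrieval the abstract describes, the sketch below hashes image feature vectors with hand-picked hyperplanes and compares codes by Hamming distance; ZSH instead learns the projection from semantic embeddings of seen-class labels, so the planes, feature vectors, and categories here are purely hypothetical:

```python
# Hand-picked hyperplanes standing in for learned hash functions; ZSH learns
# this projection from semantic embeddings of seen-class labels instead.
PLANES = [
    [1, 0, -1, 0],
    [0, 1, 0, -1],
    [1, 1, -1, -1],
    [1, -1, 1, -1],
]

def hash_code(feature, planes=PLANES):
    """Binarize a real-valued image feature vector: one bit per hyperplane side."""
    return ''.join(
        '1' if sum(w * x for w, x in zip(plane, feature)) >= 0 else '0'
        for plane in planes)

def hamming(a, b):
    """Retrieval compares compact codes by Hamming distance."""
    return sum(bit_a != bit_b for bit_a, bit_b in zip(a, b))

cat = hash_code([0.9, 0.8, 0.1, 0.0])
cat_like = hash_code([0.85, 0.82, 0.12, 0.05])  # visually similar image
dog = hash_code([0.0, 0.1, 0.9, 0.8])           # dissimilar image
```

Similar features land on the same side of most hyperplanes, so their codes stay close in Hamming distance, which is what makes large-scale retrieval cheap.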

2016-11-15
Keywhan Chung, University of Illinois at Urbana-Champaign, Charles A. Kamhoua, Air Force Research Laboratory, Kevin A. Kwiat, Air Force Research Laboratory, Zbigniew Kalbarczyk, University of Illinois at Urbana-Champaign, Ravishankar K. Iyer, University of Illinois at Urbana-Champaign.  2016.  Game Theory with Learning for Cyber Security Monitoring. IEEE High Assurance Systems Engineering Symposium (HASE 2016).

Recent attacks show that threats to cyber infrastructure are not only increasing in volume, but are getting more sophisticated. The attacks may comprise multiple actions that are hard to differentiate from benign activity, and therefore common detection techniques have to deal with high false positive rates. Because of the imperfect performance of automated detection techniques, responses to such attacks are highly dependent on human-driven decision-making processes. While game theory has been applied to many problems that require rational decision-making, we find limitations in applying such methods to security games. In this work, we propose Q-Learning to react automatically to the adversarial behavior of a suspicious user and secure the system. This work compares variations of Q-Learning with a traditional stochastic game. Simulation results show the feasibility of naive Q-Learning despite restricted information about opponents.
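The tabular Q-Learning at the core of the proposed approach can be sketched as follows; the states, actions, rewards, and transition structure below are invented for illustration and are not the paper's security game:

```python
import random

def q_learning(transitions, states, actions, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-Learning for a defender choosing monitoring actions.

    transitions: dict mapping (state, action) -> (next_state, reward).
    """
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        state = states[0]
        for _ in range(20):                      # bounded episode length
            if rng.random() < epsilon:           # epsilon-greedy exploration
                action = rng.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            nxt, reward = transitions[(state, action)]
            # Move the estimate toward reward + discounted best next value.
            best_next = max(q[(nxt, a)] for a in actions)
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = nxt
    return q

# Toy security game: the defender either monitors (small cost) or ignores
# (costly once the user has turned suspicious).
states = ['benign', 'suspicious']
actions = ['ignore', 'monitor']
transitions = {
    ('benign', 'ignore'): ('suspicious', 0.0),
    ('benign', 'monitor'): ('benign', -0.1),
    ('suspicious', 'ignore'): ('suspicious', -1.0),
    ('suspicious', 'monitor'): ('benign', 0.5),
}
q = q_learning(transitions, states, actions)
```

After training, monitoring should dominate ignoring in the suspicious state, i.e., the learned policy reacts to adversarial behavior without a human in the loop.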

2016-04-11
Carver, J., Burcham, M., Kocak, S., Bener, A., Felderer, M., Gander, M., King, J., Markkula, J., Oivo, M., Sauerwein, C., et al.  2016.  Establishing a Baseline for Measuring Advancement in the Science of Security - an Analysis of the 2015 IEEE Security & Privacy Proceedings. 2016 Symposium and Bootcamp on the Science of Security (HotSoS).

To help establish a more scientific basis for security science, which will enable the development of fundamental theories and move the field from being primarily reactive to primarily proactive, it is important for research results to be reported in a scientifically rigorous manner. Such reporting will allow for the standard pillars of science, namely replication, meta-analysis, and theory building. In this paper we aim to establish a baseline of the state of scientific work in security through the analysis of indicators of scientific research as reported in the papers from the 2015 IEEE Symposium on Security and Privacy. To conduct this analysis, we developed a series of rubrics to determine the completeness of the papers relative to the type of evaluation used (e.g. case study, experiment, proof). Our findings showed that while papers are generally easy to read, they often do not explicitly document some key information like the research objectives, the process for choosing the cases to include in the studies, and the threats to validity. We hope that this initial analysis will serve as a baseline against which we can measure the advancement of the science of security.

2016-12-08
Flavio Medeiros, Christian Kästner, Marcio Ribeiro, Rohit Gheyi, Sven Apel.  2016.  A comparison of 10 sampling algorithms for configurable systems. ICSE '16 Proceedings of the 38th International Conference on Software Engineering. :643-654.

Almost every software system provides configuration options to tailor the system to the target platform and application scenario. Often, this configurability renders the analysis of every individual system configuration infeasible. To address this problem, researchers have proposed a diverse set of sampling algorithms. We present a comparative study of 10 state-of-the-art sampling algorithms regarding their fault-detection capability and size of sample sets. The former is important to improve software quality and the latter to reduce the time of analysis. In a nutshell, we found that sampling algorithms with larger sample sets are able to detect higher numbers of faults, but simple algorithms with small sample sets, such as most-enabled-disabled, are the most efficient in most contexts. Furthermore, we observed that the limiting assumptions made in previous work influence the number of detected faults, the size of sample sets, and the ranking of algorithms. Finally, we have identified a number of technical challenges when trying to avoid the limiting assumptions, which questions the practicality of certain sampling algorithms.
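The most-enabled-disabled heuristic mentioned above can be sketched as follows (a simplified version that ignores feature-model constraints; the option names and fault triggers are hypothetical):

```python
def most_enabled_disabled(options):
    """The two-sample set of the most-enabled-disabled heuristic: one
    configuration with every option on, one with every option off."""
    return [{opt: True for opt in options},
            {opt: False for opt in options}]

def detected_faults(samples, faults):
    """A fault is modeled as a partial option assignment that triggers it;
    a sampled configuration detects the fault if it agrees on every option."""
    found = set()
    for name, trigger in faults.items():
        if any(all(config[opt] == val for opt, val in trigger.items())
               for config in samples):
            found.add(name)
    return found

options = ['DEBUG', 'SSL', 'CACHE']
samples = most_enabled_disabled(options)
faults = {
    'null-deref': {'DEBUG': False, 'SSL': False},      # caught by the all-off config
    'interaction-bug': {'SSL': True, 'CACHE': False},  # needs a mixed config: missed
}
found = detected_faults(samples, faults)
```

The missed interaction bug illustrates the trade-off the study reports: a two-configuration sample is cheap to analyze but cannot cover faults that require mixed option settings.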

2016-07-13
Christopher Hannon, Illinois Institute of Technology, Jiaqi Yan, Illinois Institute of Technology, Dong Jin, Illinois Institute of Technology.  2016.  DSSnet: A Smart Grid Modeling Platform Combining Electrical Power Distribution System Simulation and Software Defined Networking Emulation. ACM SIGSIM Conference on Principles of Advanced Discrete Simulation.

The successful operation of modern power grids is highly dependent on a reliable and efficient underlying communication network. Researchers and utilities have started to explore the opportunities and challenges of applying the emerging software-defined networking (SDN) technology to enhance efficiency and resilience of the Smart Grid. This trend calls for a simulation-based platform that provides sufficient flexibility and controllability for evaluating network application designs, and facilitating the transitions from in-house research ideas to real productions. In this paper, we present DSSnet, a hybrid testing platform that combines a power distribution system simulator with an SDN emulator to support high-fidelity analysis of communication network applications and their impacts on the power systems. Our contributions lie in the design of a virtual time system with tight controllability on the execution of the emulation system, i.e., pausing and resuming any specified container processes in the perception of their own virtual clocks, with little overhead, scaling to 500 emulated hosts with an average of 70 ms overhead; and in the efficient synchronization of the two sub-systems based on the virtual time. We evaluate the system performance of DSSnet, and also demonstrate its usability through a case study by evaluating a load shifting algorithm.
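The pause-and-resume virtual-time coordination described above might be sketched as a lockstep loop; this is an illustrative toy, not DSSnet's implementation:

```python
class EmulatedHost:
    """Stand-in for a container whose processes can be paused and resumed;
    it only advances its virtual clock while running."""
    def __init__(self, name):
        self.name = name
        self.virtual_clock = 0.0

    def run_until(self, t):
        # In the real platform this would resume the container's processes,
        # let them execute, and pause them once their virtual clock hits t.
        self.virtual_clock = t

def lockstep(hosts, sim_step, end_time):
    """Advance the power simulation and the network emulation in alternating
    epochs so both sub-systems share a single virtual timeline."""
    sim_clock = 0.0
    while sim_clock < end_time:
        sim_clock += sim_step      # power-flow simulator advances one step
        for host in hosts:         # emulated hosts catch up, then pause
            host.run_until(sim_clock)
    return sim_clock

hosts = [EmulatedHost(f'h{i}') for i in range(3)]
final = lockstep(hosts, sim_step=0.5, end_time=2.0)
```

Because every host is paused once its virtual clock reaches the simulator's clock, no component ever observes events from the other sub-system's future.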

2016-12-06
Waqar Ahmad, Christian Kästner, Joshua Sunshine, Jonathan Aldrich.  2016.  Inter-app Communication in Android: Developer Challenges. 2016 IEEE/ACM 13th Working Conference on Mining Software Repositories. :177-188.

The Android platform is designed to support mutually untrusted third-party apps, which run as isolated processes but may interact via platform-controlled mechanisms, called Intents. Interactions among third-party apps are intended and can contribute to a rich user experience, for example, the ability to share pictures from one app with another. The Android platform presents an interesting point in a design space of module systems that is biased toward isolation, extensibility, and untrusted contributions. The Intent mechanism essentially provides message channels among modules, in which the set of message types is extensible. However, the module system has design limitations including the lack of consistent mechanisms to document message types, very limited checking that a message conforms to its specifications, the inability to explicitly declare dependencies on other modules, and the lack of checks for backward compatibility as message types evolve over time. In order to understand the degree to which these design limitations result in real issues, we studied a broad corpus of apps and cross-validated our results against app documentation and Android support forums. Our findings suggest that design limitations do indeed cause development problems. Based on our results, we outline further research questions and propose possible mitigation strategies.

2016-12-07
Sarah Pearman, Nicholas Munson, Leeyat Slyper, Lujo Bauer, Serge Egelman, Arnab Kumar, Charu Sharma, Jeremy Thomas, Nicolas Christin.  2016.  Risk Compensation in Home-User Computer Security Behavior: A Mixed-Methods Exploratory Study. SOUPS 2016: 12th Symposium on Usable Privacy and Security.

Risk homeostasis theory claims that individuals adjust their behaviors in response to changing variables to keep what they perceive as a constant accepted level of risk [8]. Risk homeostasis theory is used to explain why drivers may drive faster when wearing seatbelts. Here we explore whether risk homeostasis theory applies to end-user security behaviors. We use observed data from over 200 participants in a longitudinal in-situ study as well as survey data from 249 users to attempt to determine how user security behaviors and attitudes are affected by the presence or absence of antivirus software. If risk compensation is occurring, users might be expected to behave more dangerously in some ways when antivirus is present. Some of our preliminary data suggests that risk compensation may be occurring, but additional work with larger samples is needed. 

2018-05-15
Jeremy Daily, Rose Gamble, Stephen Moffitt, Connor Raines, Paul Harris, Jannah Miran, Indrakshi Ray, Subhojeet Mukherjee, Hossein Shirazi, James Johnson.  2016.  Towards a Cyber Assurance Testbed for Heavy Vehicle Electronic Controls. SAE Int. J. Commer. Veh.. 9:339-349.

Cyber assurance of heavy trucks is a major concern with new designs as well as with supporting legacy systems. Many cyber security experts and analysts are used to working with traditional information technology (IT) networks and are familiar with a set of technologies that may not be directly useful in the commercial vehicle sector. To help connect security researchers to heavy trucks, a remotely accessible testbed has been prototyped for experimentation with security methodologies and techniques to evaluate and improve on existing technologies, as well as developing domain-specific technologies. The testbed relies on embedded Linux-based node controllers that can simulate the sensor inputs to various heavy vehicle electronic control units (ECUs). The node controller also monitors and affects the flow of network information between the ECUs and the vehicle communications backbone. For example, a node controller acts as a clone that generates analog wheel speed sensor data while at the same time monitors or controls the network traffic on the J1939 and J1708 networks. The architecture and functions of the node controllers are detailed. Sample interaction with the testbed is illustrated, along with a discussion of the challenges of running remote experiments. Incorporating high fidelity hardware in the testbed enables security researchers to advance the state of the art in hardening heavy vehicle ECUs against cyber-attacks. How the testbed can be used for security research is presented along with an example of its use in evaluating seed/key exchange strength and in intrusion detection systems (IDSs).

2016-12-08
Gabriel Ferreira, Momin Malik, Christian Kästner, Jurgen Pfeffer, Sven Apel.  2016.  Do #ifdefs influence the occurrence of vulnerabilities? An empirical study of the Linux kernel. SPLC '16 Proceedings of the 20th International Systems and Software Product Line Conference. :65-73.

Preprocessors support the diversification of software products with #ifdefs, but also require additional effort from developers to maintain and understand variable code. We conjecture that #ifdefs cause developers to produce more vulnerable code because they are required to reason about multiple features simultaneously and maintain complex mental models of dependencies of configurable code.

We extracted a variational call graph across all configurations of the Linux kernel, and used configuration complexity metrics to compare vulnerable and non-vulnerable functions considering their vulnerability history. Our goal was to learn about whether we can observe a measurable influence of configuration complexity on the occurrence of vulnerabilities.

Our results suggest, among other findings, that vulnerable functions have higher variability than non-vulnerable ones and are also constrained by fewer configuration options. This suggests that developers are more inclined to notice functions that appear in frequently-compiled product variants. We aim to raise developers' awareness to address variability more systematically, since configuration complexity is an important, but often ignored, aspect of software product lines.

Jens Meinicke, Chu-Pan Wong, Christian Kästner, Thomas Thum, Gunter Saake.  2016.  On essential configuration complexity: measuring interactions in highly-configurable systems. ASE 2016 Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering. :483-494.

Quality assurance for highly-configurable systems is challenging due to the exponentially growing configuration space. Interactions among multiple options can lead to surprising behaviors, bugs, and security vulnerabilities. Analyzing all configurations systematically might be possible though if most options do not interact or interactions follow specific patterns that can be exploited by analysis tools. To better understand interactions in practice, we analyze program traces to characterize and identify where interactions occur on control flow and data. To this end, we developed a dynamic analysis for Java based on variability-aware execution and monitor executions of multiple small to medium-sized programs. We find that the essential configuration complexity of these programs is indeed much lower than the combinatorial explosion of the configuration space indicates. However, we also discover that the interaction characteristics that allow scalable and complete analyses are more nuanced than what is exploited by existing state-of-the-art quality assurance strategies.

2019-09-26
Carolyn Crandall.  2016.  The ins and outs of deception for cyber security. Network World.

New deception technologies bring a heightened level of aggressiveness in addressing cyberattacks.  Dynamic deception steps in, when prevention systems fail, and provides organizations with an efficient way to continuously detect intrusions with high interaction traps, engagement servers, and luring techniques to engage attackers. It does this without requiring additional IT staff to manage the solution.

2016-12-07
Cyrus Omar, Jonathan Aldrich.  2016.  Programmable semantic fragments: the design and implementation of typy. GPCE 2016 Proceedings of the 2016 ACM SIGPLAN International Conference on Generative Programming: Concepts and Experiences.

This paper introduces typy, a statically typed programming language embedded by reflection into Python. typy features a fragmentary semantics, i.e. it delegates semantic control over each term, drawn from Python's fixed concrete and abstract syntax, to some contextually relevant user-defined semantic fragment. The delegated fragment programmatically 1) typechecks the term (following a bidirectional protocol); and 2) assigns dynamic meaning to the term by computing a translation to Python.

We argue that this design is expressive with examples of fragments that express the static and dynamic semantics of 1) functional records; 2) labeled sums (with nested pattern matching a la ML); 3) a variation on JavaScript's prototypal object system; and 4) typed foreign interfaces to Python and OpenCL. These semantic structures are, or would need to be, defined primitively in conventionally structured languages.

We further argue that this design is compositionally well-behaved. It avoids the expression problem and the problems of grammar composition because the syntax is fixed. Moreover, programs are semantically stable under fragment composition (i.e. defining a new fragment will not change the meaning of existing program components.)
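The bidirectional protocol the abstract mentions (checking a term against an expected type vs. synthesizing a type from the term) can be sketched in a few lines; this is a generic bidirectional typechecker sketch, not typy's actual fragment API, and the term encoding is invented:

```python
# Terms are nested tuples; base types are strings, function types are
# (argument_type, result_type) pairs.
def synth(env, term):
    """Synthesis mode: compute a type from the term itself."""
    kind = term[0]
    if kind == 'lit':                      # ('lit', 3): an int literal
        return 'int'
    if kind == 'var':                      # ('var', 'x')
        return env[term[1]]
    if kind == 'ann':                      # ('ann', term, type): annotation
        check(env, term[1], term[2])
        return term[2]
    if kind == 'app':                      # ('app', fn, arg)
        arg_type, result_type = synth(env, term[1])
        check(env, term[2], arg_type)      # argument is checked, not synthesized
        return result_type
    raise TypeError(f'cannot synthesize a type for {kind!r}')

def check(env, term, expected):
    """Checking mode: verify the term against an expected type."""
    if term[0] == 'lam':                   # ('lam', name, body): needs a goal type
        arg_type, result_type = expected
        check({**env, term[1]: arg_type}, term[2], result_type)
    elif synth(env, term) != expected:     # fall back to synthesis + comparison
        raise TypeError(f'expected {expected!r}')

# (lambda x. x : int -> int) applied to the literal 3
identity = ('ann', ('lam', 'x', ('var', 'x')), ('int', 'int'))
program = ('app', identity, ('lit', 3))
```

The split matters for typy's design: a fragment only needs to handle the mode-appropriate cases for its terms, and annotations are the switch points between the two modes.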

2017-01-09
Jafar Al-Kofahi, Tien Nguyen, Christian Kästner.  2016.  Escaping AutoHell: a vision for automated analysis and migration of autotools build systems. RELENG 2016 Proceedings of the 4th International Workshop on Release Engineering.

GNU Autotools is a widely used build tool in the open source community. As open source projects grow more complex, maintaining their build systems becomes more challenging, due to the lack of tool support. In this paper, we propose a platform to build support tools for GNU Autotools build systems. The platform provides an abstraction of the build system to be used in different analysis techniques.

2016-12-08
Christopher Bogart, Christian Kästner, James Herbsleb, Ferdian Thung.  2016.  How to break an API: cost negotiation and community values in three software ecosystems. FSE 2016 Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering.

Change introduces conflict into software ecosystems: breaking changes may ripple through the ecosystem and trigger rework for users of a package, but often developers can invest additional effort or accept opportunity costs to alleviate or delay downstream costs. We performed a multiple case study of three software ecosystems with different tooling and philosophies toward change, Eclipse, R/CRAN, and Node.js/npm, to understand how developers make decisions about change and change-related costs and what practices, tooling, and policies are used. We found that all three ecosystems differ substantially in their practices and expectations toward change and that those differences can be explained largely by different community values in each ecosystem. Our results illustrate that there is a large design space in how to build an ecosystem, its policies and its supporting infrastructure; and there is value in making community values and accepted tradeoffs explicit and transparent in order to resolve conflicts and negotiate change-related costs.

2017-01-20
Xin Liu, Illinois Institute of Technology, Dong Jin, Illinois Institute of Technology, Cheol Won Lee, National Research Institute, South Korea, Jong Cheol Moon, National Research Institute, South Korea.  2016.  ConVenus: Congestion Verification of Network Updates in Software-defined Networks. Winter Simulation Conference (WSC).

We present ConVenus, a system that performs rapid congestion verification of network updates in software-defined networks. ConVenus is a lightweight middleware between the SDN controller and network devices, and is capable of intercepting flow updates from the controller and verifying whether the amount of traffic on any link or switch exceeds the desired capacity. To enable online verification, ConVenus dynamically identifies the minimum set of flows and switches that are affected by each flow update, and creates a compact network model. ConVenus uses a four-phase simulation algorithm to quickly compute the throughput of every flow in the network model and report network congestion. The experimental results demonstrate that ConVenus manages to verify 90% of the updates in a network consisting of over 500 hosts and 80 switches within 5 milliseconds.
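The capacity check at the heart of congestion verification might look like the following sketch; this is only a static per-link load check (ConVenus's four-phase throughput simulation is more involved), and the topology and rates here are made up:

```python
def congested_links(flows, capacity):
    """Aggregate flow rates per link and report links whose load exceeds capacity.

    flows: list of (rate_mbps, path) pairs, where path is a list of link names.
    capacity: dict mapping link name -> capacity in Mbps.
    """
    load = {link: 0.0 for link in capacity}
    for rate, path in flows:
        for link in path:
            load[link] += rate
    return {link for link, used in load.items() if used > capacity[link]}

# Two links in a line topology; a flow update adds the second flow.
capacity = {'s1-s2': 100.0, 's2-s3': 100.0}
flows = [
    (60.0, ['s1-s2', 's2-s3']),   # flow traversing both links
    (50.0, ['s1-s2']),            # pushes s1-s2 to 110 Mbps, over capacity
]
overloaded = congested_links(flows, capacity)
```

Restricting this check to the flows and switches actually touched by an update, as ConVenus does, is what keeps verification fast enough to run online.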

2016-12-08
Hanan Hibshi, Travis Breaux, Christian Wagner.  2016.  Improving Security Requirements Adequacy: An Interval Type-2 Fuzzy Logic Security Assessment System. 2016 IEEE Symposium Series on Computational Intelligence.

Organizations rely on security experts to improve the security of their systems. These professionals use background knowledge and experience to align known threats and vulnerabilities before selecting mitigation options. The substantial depth of expertise in any one area (e.g., databases, networks, operating systems) precludes the possibility that an expert would have complete knowledge about all threats and vulnerabilities. To begin addressing this problem of distributed knowledge, we investigate the challenge of developing a security requirements rule base that mimics human expert reasoning to enable new decision-support systems. In this paper, we show how to collect relevant information from cyber security experts to enable the generation of: (1) interval type-2 fuzzy sets that capture intra- and inter-expert uncertainty around vulnerability levels; and (2) fuzzy logic rules underpinning the decision-making process within the requirements analysis. The proposed method relies on comparative ratings of security requirements in the context of concrete vignettes, providing a novel, interdisciplinary approach to knowledge generation for fuzzy logic systems. The proposed approach is tested by evaluating 52 scenarios with 13 experts to compare their assessments to those of the fuzzy logic decision support system. The initial results show that the system provides reliable assessments to the security analysts, in particular, generating more conservative assessments in 19% of the test scenarios compared to the experts’ ratings. 
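An interval type-2 membership function of the kind described above can be sketched with two nested triangular memberships; the "high vulnerability" scale and triangle parameters below are hypothetical:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2_membership(x, lower_mf, upper_mf):
    """Interval type-2 membership: instead of a single grade, return a
    [lower, upper] band that captures disagreement among experts."""
    grades = (tri(x, *lower_mf), tri(x, *upper_mf))
    return (min(grades), max(grades))

# Hypothetical 'high vulnerability' set on a 0-10 rating scale; the gap
# between the two triangles is the footprint of uncertainty across experts.
lower_mf = (5.0, 8.0, 10.0)   # the most conservative expert's triangle
upper_mf = (3.0, 8.0, 10.0)   # the most permissive expert's triangle
band = it2_membership(6.0, lower_mf, upper_mf)
```

A rating of 6 is "high" to degree 0.33 at minimum and 0.6 at maximum, so the interval itself encodes the intra- and inter-expert uncertainty that the paper's elicitation method is designed to capture.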

2016-04-25
Momin Malik, Jurgen Pfeffer, Gabriel Ferreira, Christian Kästner.  2016.  Visualizing the variational callgraph of the Linux Kernel: An approach for reasoning about dependencies. HotSos '16 Proceedings of the Symposium and Bootcamp on the Science of Security.

Software developers use #ifdef statements to support code configurability, allowing software product diversification. But because functions can lie on many execution paths that depend on complex combinations of configuration options, the introduction of an #ifdef for a given purpose (such as adding a new feature to a program) can enable unintended function calls, which can be a source of vulnerabilities. Part of the difficulty lies in maintaining mental models of all dependencies. We propose analytic visualizations of the variational callgraph to capture dependencies across configurations, and create visualizations to demonstrate how this would help developers visually reason through the implications of diversification, for example by performing change impact analysis.

2018-05-16
C. Guo, S. Ren, Y. Jiang, P. L. Wu, L. Sha, R. B. Berlin.  2016.  Transforming Medical Best Practice Guidelines to Executable and Verifiable Statechart Models. 2016 ACM/IEEE 7th International Conference on Cyber-Physical Systems (ICCPS). :1-10.
2017-11-13
Shepherd, C., Arfaoui, G., Gurulian, I., Lee, R. P., Markantonakis, K., Akram, R. N., Sauveron, D., Conchon, E..  2016.  Secure and Trusted Execution: Past, Present, and Future - A Critical Review in the Context of the Internet of Things and Cyber-Physical Systems. 2016 IEEE Trustcom/BigDataSE/ISPA. :168–177.

Notions like security, trust, and privacy are crucial in the digital environment, and in the future, with the advent of technologies like the Internet of Things (IoT) and Cyber-Physical Systems (CPS), their importance is only going to increase. Trust has different definitions: some situations rely on real-world relationships between entities, while others depend on robust technologies to gain trust after deployment. In this paper we focus on these robust technologies, their evolution in past decades, and their scope in the near future. The evolution of robust trust technologies has involved diverse approaches; as a consequence, trust is defined, understood, and ascertained differently across heterogeneous domains and technologies. In this paper we look at digital trust technologies from the point of view of security and examine how they are making secure computing an attainable reality. The paper also revisits and analyses the Trusted Platform Module (TPM), Secure Elements (SE), hypervisors and virtualisation, Intel TXT, Trusted Execution Environments (TEE) like GlobalPlatform TEE and Intel SGX, along with Host Card Emulation and Encrypted Execution Environments (E3). In our analysis we focus on these technologies and their application to the emerging domains of the IoT and CPS.

2018-05-17
Coogan, S., Arcak, M..  2016.  Symmetric monotone embedding of traffic flow networks with first-in-first-out dynamics. Proceedings of the 10th IFAC Symposium on Nonlinear Control Systems. :640-645.