Biblio

Filters: Keyword is Instruments
2018-02-02
Kochte, M. A., Baranowski, R., Wunderlich, H. J..  2017.  Trustworthy reconfigurable access to on-chip infrastructure. 2017 International Test Conference in Asia (ITC-Asia). :119–124.

The accessibility of on-chip embedded infrastructure for test, reconfiguration, or debug poses a serious security problem. Access mechanisms based on IEEE Std 1149.1 (JTAG), and especially reconfigurable scan networks (RSNs), as allowed by IEEE Std 1500, IEEE Std 1149.1-2013, and IEEE Std 1687 (IJTAG), require special care in design and development. This work studies the threats to trustworthy data transmission in RSNs posed by untrusted components within the RSN and by external interfaces. We propose a novel scan pattern generation method that finds trustworthy access sequences to prevent sniffing and spoofing of data transmitted in the RSN. For insecure RSNs, in which no such accesses exist, we present an automated transformation that improves security and trustworthiness while preserving accessibility to the attached instruments. The area overhead is reduced based on results from trustworthy access pattern generation. As a result, sensitive data is not exposed to untrusted components in the RSN, and compromised data cannot be injected during trustworthy accesses.
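
As a rough illustration of the access problem the abstract describes, the sketch below models an RSN as a graph of scan segments, each marked trusted or untrusted, and searches for an access path that avoids untrusted segments. The segment names, topology, and API are invented for illustration; the paper's pattern generation works on real RSN models and addresses sniffing and spoofing in far more detail.

```python
from collections import deque

# Toy model of a reconfigurable scan network (RSN): configuration bits
# (modeled here as successor choices) select which segments a scan
# access passes through. All names and structure are hypothetical.

class Segment:
    def __init__(self, name, trusted, successors):
        self.name = name
        self.trusted = trusted
        self.successors = successors  # segments reachable via mux settings

def trustworthy_path(segments, start, target):
    """BFS for a scan path from start to target that touches only
    trusted segments, mimicking the goal of keeping sensitive data
    away from untrusted components."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for nxt in segments[node].successors:
            if nxt not in seen and segments[nxt].trusted:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no trustworthy access exists; a transformation is needed

segments = {
    "in":    Segment("in",    True,  ["s1", "s2"]),
    "s1":    Segment("s1",    False, ["instr"]),   # untrusted segment to avoid
    "s2":    Segment("s2",    True,  ["instr"]),
    "instr": Segment("instr", True,  []),
}

print(trustworthy_path(segments, "in", "instr"))  # ['in', 's2', 'instr']
```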

Moyer, T., Chadha, K., Cunningham, R., Schear, N., Smith, W., Bates, A., Butler, K., Capobianco, F., Jaeger, T., Cable, P..  2016.  Leveraging Data Provenance to Enhance Cyber Resilience. 2016 IEEE Cybersecurity Development (SecDev). :107–114.

Building secure systems used to mean ensuring a secure perimeter, but that is no longer the case. Today's systems are ill-equipped to deal with attackers who are able to pierce perimeter defenses. Data provenance is a critical technology for building resilient systems that can recover from attackers who manage to overcome "hard-shell" defenses. In this paper, we provide background on data provenance, along with details of provenance collection, analysis, and storage techniques and their challenges. Data provenance is well positioned to address the challenging problem of allowing a system to "fight through" an attack, and we help identify the work necessary to ensure that future systems are resilient.

2018-01-16
Miramirkhani, N., Appini, M. P., Nikiforakis, N., Polychronakis, M..  2017.  Spotless Sandboxes: Evading Malware Analysis Systems Using Wear-and-Tear Artifacts. 2017 IEEE Symposium on Security and Privacy (SP). :1009–1024.

Malware sandboxes, widely used by antivirus companies, mobile application marketplaces, threat detection appliances, and security researchers, face the challenge of environment-aware malware that alters its behavior once it detects that it is being executed in an analysis environment. Recent efforts attempt to deal with this problem mostly by ensuring that well-known properties of analysis environments are replaced with realistic values, and that any instrumentation artifacts remain hidden. For sandboxes implemented using virtual machines, this can be achieved by scrubbing vendor-specific drivers, processes, BIOS versions, and other VM-revealing indicators, while more sophisticated sandboxes move away from emulation-based and virtualization-based systems towards bare-metal hosts. We observe that as the fidelity and transparency of dynamic malware analysis systems improve, malware authors can resort to other system characteristics that are indicative of artificial environments. We present a novel class of sandbox evasion techniques that exploit the "wear and tear" that inevitably occurs on real systems as a result of normal use. By moving beyond how realistic a system looks to how realistic its past use looks, malware can effectively evade even sandboxes that do not expose any instrumentation indicators, including bare-metal systems. We investigate the feasibility of this evasion strategy by conducting a large-scale study of wear-and-tear artifacts collected from real user devices and publicly available malware analysis services. The results of our evaluation are alarming: using simple decision trees derived from the analyzed data, malware can determine that a system is an artificial environment and not a real user device with an accuracy of 92.86%. As a step towards defending against wear-and-tear malware evasion, we develop statistical models that capture a system's age and degree of use, which can be used to aid sandbox operators in creating system images that exhibit a realistic wear-and-tear state.
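
To make the evasion idea concrete, here is a hypothetical sketch of the kind of wear-and-tear check the abstract alludes to: a small hand-written decision tree over artifacts of past system use. All feature names and thresholds are invented; the paper derives its decision trees from measurements of real user devices and analysis services.

```python
# Hypothetical wear-and-tear decision tree. A fresh sandbox image shows
# almost no history of use; a real user's machine accumulates it.

def looks_like_real_device(features):
    """Return True if the environment shows plausible signs of wear."""
    if features["browser_history_entries"] < 50:
        return False          # fresh browser profile: typical of sandboxes
    if features["dns_cache_entries"] < 10:
        return False          # almost no network history
    if features["recent_documents"] == 0 and features["installed_apps"] < 30:
        return False          # no user activity on a near-default install
    return True

sandbox = {"browser_history_entries": 3, "dns_cache_entries": 1,
           "recent_documents": 0, "installed_apps": 12}
real    = {"browser_history_entries": 4200, "dns_cache_entries": 85,
           "recent_documents": 17, "installed_apps": 96}

print(looks_like_real_device(sandbox))  # False -> malware stays dormant
print(looks_like_real_device(real))     # True  -> payload would run
```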

2017-12-12
Dai, D., Chen, Y., Carns, P., Jenkins, J., Ross, R..  2017.  Lightweight Provenance Service for High-Performance Computing. 2017 26th International Conference on Parallel Architectures and Compilation Techniques (PACT). :117–129.

Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrumentation mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.
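
The following sketch illustrates, under invented event and API names, the kind of provenance graph such a service might build: each record ties a process to the files it reads and writes, so any output can be traced back to its full lineage. It is not LPS's actual implementation, which hooks the kernel for transparency.

```python
from collections import defaultdict

# Minimal provenance-graph sketch: fold (process, inputs, outputs)
# events into an ancestor graph, then walk it to answer lineage
# queries. Event format and names are hypothetical.

class ProvenanceGraph:
    def __init__(self):
        self.edges = defaultdict(set)  # artifact -> set of direct ancestors

    def record(self, process, inputs, outputs):
        for out in outputs:
            self.edges[out].update(inputs)
            self.edges[out].add(process)

    def lineage(self, artifact):
        """All processes and files contributing to an artifact."""
        seen, stack = set(), [artifact]
        while stack:
            node = stack.pop()
            for anc in self.edges.get(node, ()):
                if anc not in seen:
                    seen.add(anc)
                    stack.append(anc)
        return seen

g = ProvenanceGraph()
g.record("job:preprocess", ["raw.dat"], ["clean.dat"])
g.record("job:simulate", ["clean.dat", "params.cfg"], ["result.h5"])
print(g.lineage("result.h5"))
# {'job:simulate', 'clean.dat', 'params.cfg', 'job:preprocess', 'raw.dat'}
```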

Sun, F., Zhang, P., White, J., Schmidt, D., Staples, J., Krause, L..  2017.  A Feasibility Study of Autonomically Detecting In-Process Cyber-Attacks. 2017 3rd IEEE International Conference on Cybernetics (CYBCONF). :1–8.

A cyber-attack detection system issues alerts when an attacker attempts to coerce a trusted software application to perform unsafe actions on the attacker's behalf. One way of issuing such alerts is to create an application-agnostic cyber-attack detection system that responds to prevalent software vulnerabilities. The creation of such an autonomic alert system, however, is impeded by the disparity among implementation languages, functions, quality-of-service (QoS) requirements, and architectural patterns present in applications, all of which contribute to the rapidly changing threat landscape presented by modern heterogeneous software systems. This paper evaluates the feasibility of creating an autonomic cyber-attack detection system and applying it to several exemplar web-based applications using program transformation and machine learning techniques. Specifically, we examine whether it is possible to detect cyber-attacks (1) online, i.e., as they occur, using lightweight structures derived from a call graph and (2) offline, i.e., using machine learning techniques trained with features extracted from a trace of application execution. In both cases, we first characterize normal application behavior using supervised training with the test suites created for an application as part of the software development process. We then intentionally perturb our test applications so they are vulnerable to common attack vectors and evaluate the effectiveness of various feature extraction and learning strategies on the perturbed applications. Our results show that both lightweight online models based on the control flow of the execution path and application-specific offline models can successfully and efficiently detect in-process cyber-attacks against web applications.
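
As a hedged illustration of the online approach, the sketch below treats normal behavior as the set of call-graph edges observed in test-suite traces and flags any transition never seen in training. Trace contents and function names are invented; the actual system derives richer structures via program transformation.

```python
# Learn allowed caller->callee edges from traces of normal runs, then
# flag any runtime transition outside that set. Purely illustrative.

def train(traces):
    allowed = set()
    for trace in traces:
        allowed.update(zip(trace, trace[1:]))  # observed call-graph edges
    return allowed

def is_anomalous(trace, allowed):
    return any(edge not in allowed for edge in zip(trace, trace[1:]))

normal_traces = [
    ["recv_request", "parse", "validate", "query_db", "render"],
    ["recv_request", "parse", "validate", "render"],
]
model = train(normal_traces)

attack = ["recv_request", "parse", "query_db", "render"]  # skips validate
print(is_anomalous(attack, model))  # True: parse->query_db never trained
```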

2017-03-08
Allen, J. H., Curtis, P. D., Mehravari, N., Crabb, G..  2015.  A proven method for identifying security gaps in international postal and transportation critical infrastructure. 2015 IEEE International Symposium on Technologies for Homeland Security (HST). :1–5.

The safety, security, and resilience of international postal, shipping, and transportation critical infrastructure are vital to the global supply chain that enables worldwide commerce and communications. But security on an international scale continues to fail in the face of new threats, such as the discovery by Panamanian authorities of suspected components of a surface-to-air missile system aboard a North Korean-flagged ship in July 2013 [1]. This reality calls for new and innovative approaches to critical infrastructure security. Owners and operators of critical postal, shipping, and transportation operations need new methods to identify, assess, and mitigate security risks and gaps in the most effective manner possible.

2015-05-05
Everspaugh, A., Zhai, Y., Jellinek, R., Ristenpart, T., Swift, M..  2014.  Not-So-Random Numbers in Virtualized Linux and the Whirlwind RNG. 2014 IEEE Symposium on Security and Privacy (SP). :559–574.

Virtualized environments are widely thought to cause problems for software-based random number generators (RNGs), due to use of virtual machine (VM) snapshots as well as fewer and believed-to-be lower quality entropy sources. Despite this, we are unaware of any published analysis of the security of critical RNGs when running in VMs. We fill this gap, using measurements of Linux's RNG systems (without the aid of hardware RNGs, the most common use case today) on Xen, VMware, and Amazon EC2. Despite CPU cycle counters providing a significant source of entropy, various deficiencies in the design of the Linux RNG make its first output vulnerable during VM boots and, more critically, make it suffer from catastrophic reset vulnerabilities. We show cases in which the RNG will output the exact same sequence of bits each time it is resumed from the same snapshot. This can compromise, for example, cryptographic secrets generated after resumption. We explore legacy-compatible countermeasures, as well as a clean-slate solution. The latter is a new RNG called Whirlwind that provides a simpler, more secure solution for providing system randomness.
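
A toy demonstration of the reset vulnerability (not Linux's actual RNG) is sketched below: restoring a snapshotted RNG state replays identical output, while mixing in fresh resume-time entropy, in the spirit of Whirlwind, breaks the repetition. The ToyRNG class and the use of a timestamp as stand-in entropy are illustrative assumptions.

```python
import hashlib
import time

# Toy hash-chain RNG: if a VM snapshot captures its state, every resume
# from that snapshot replays the same "random" bytes unless fresh
# entropy is mixed in. Concept only; not Linux's /dev/(u)random design.

class ToyRNG:
    def __init__(self, seed):
        self.state = seed

    def random_bytes(self):
        self.state = hashlib.sha256(self.state).digest()
        return self.state

rng = ToyRNG(b"boot-entropy")
snapshot = rng.state                  # VM snapshot captures RNG state

resume1 = ToyRNG(snapshot).random_bytes()
resume2 = ToyRNG(snapshot).random_bytes()
print(resume1 == resume2)             # True: identical secrets after resume

# Whirlwind-style mitigation: hash fresh resume-time entropy (here a
# nanosecond timestamp standing in for cycle counters) into the state.
fixed1 = ToyRNG(snapshot + str(time.perf_counter_ns()).encode()).random_bytes()
fixed2 = ToyRNG(snapshot + str(time.perf_counter_ns()).encode()).random_bytes()
print(fixed1 == fixed2)               # False (with overwhelming probability)
```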

2015-04-30
Yang, Y., Falcao, H., Delicado, N., Ortony, A..  2014.  Reducing Mistrust in Agent-Human Negotiations. IEEE Intelligent Systems. 29:36–43.

Face-to-face negotiations always benefit if the interacting individuals trust each other. But trust is also important in online interactions, even for humans interacting with a computational agent. In this article, the authors describe a behavioral experiment to determine whether, by volunteering information that it need not disclose, a software agent in a multi-issue negotiation can alleviate mistrust in human counterparts who differ in their propensities to mistrust others. Results indicated that when cynical, mistrusting humans negotiated with an agent that proactively communicated its issue priority and invited reciprocation, there were significantly more agreements and better utilities than when the agent didn't volunteer such information. Furthermore, when the agent volunteered its issue priority, the outcomes for mistrusting individuals were as good as those for trusting individuals, for whom the volunteering of issue priority conferred no advantage. These findings provide insights for designing more effective, socially intelligent agents in online negotiation settings.