Biblio
Modern personal computers have embraced increasingly powerful Graphics Processing Units (GPUs). Recently, GPU-based graphics acceleration in web apps (i.e., applications running inside a web browser) has become popular. WebGL is the main effort to provide OpenGL-like graphics for web apps and it is currently used in 53% of the top-100 websites. Unfortunately, WebGL has posed serious security concerns as several attack vectors have been demonstrated through WebGL. Web browsers' solutions to these attacks have been reactive: discovered vulnerabilities have been patched and new runtime security checks have been added. Unfortunately, this approach leaves the system vulnerable to zero-day vulnerability exploits, especially given the large size of the Trusted Computing Base of the graphics plane. We present Sugar, a novel operating system solution that enhances the security of GPU acceleration for web apps by design. The key idea behind Sugar is using a dedicated virtual graphics plane for a web app by leveraging modern GPU virtualization solutions. A virtual graphics plane consists of a dedicated virtual GPU (or vGPU) as well as the entire software graphics stack (including the device driver). Sugar enhances the system security since a virtual graphics plane is fully isolated from the rest of the system. Despite GPU virtualization overhead, we show that Sugar achieves high performance. Moreover, unlike current systems, Sugar is able to use two underlying physical GPUs, when available, to co-render the User Interface (UI): one GPU is used to provide virtual graphics planes for web apps and the other to provide the primary graphics plane for the rest of the system. Such a design not only provides strong security guarantees but also enhances performance isolation.
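To make the per-app isolation idea concrete, here is a minimal, hypothetical sketch of an allocator that gives each web app its own virtual graphics plane on a dedicated physical GPU, keeping the primary plane separate. All names (VirtualGraphicsPlane, GraphicsPlaneAllocator, the GPU labels) are illustrative assumptions, not Sugar's actual API.

```python
# Hypothetical sketch of Sugar's per-app virtual graphics plane idea:
# each web app gets its own vGPU (and driver stack), fully isolated
# from the primary plane used by the rest of the system.

from dataclasses import dataclass, field

@dataclass
class VirtualGraphicsPlane:
    vgpu_id: int
    physical_gpu: str  # which physical GPU backs this plane

@dataclass
class GraphicsPlaneAllocator:
    web_app_gpu: str = "gpu1"   # dedicated to web-app planes (assumed label)
    primary_gpu: str = "gpu0"   # primary plane for the rest of the system
    _next_id: int = 0
    _planes: dict = field(default_factory=dict)

    def plane_for(self, app_origin: str) -> VirtualGraphicsPlane:
        # One isolated plane per web app: a compromise of one plane's
        # driver stack cannot reach another app or the primary plane.
        if app_origin not in self._planes:
            self._planes[app_origin] = VirtualGraphicsPlane(
                vgpu_id=self._next_id, physical_gpu=self.web_app_gpu)
            self._next_id += 1
        return self._planes[app_origin]

alloc = GraphicsPlaneAllocator()
print(alloc.plane_for("https://example.com"))
print(alloc.plane_for("https://maps.example.org"))
```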
Cybersecurity assurance plays an important role in managing trust in smart grid communication systems. In this paper, cybersecurity assurance controls for smart grid communication networks and devices are delineated from the more technical functional controls to provide insights on recent innovative risk-based approaches to cybersecurity assurance in smart grid systems. The cybersecurity assurance control baselining presented in this paper is based on the requirements and guidelines of the new family of IEC 62443 standards on network and systems security of industrial automation and control systems. The paper illustrates how key cybersecurity control baselining and tailoring concepts of the U.S. NIST SP 800-53 can be adopted in smart grid security architecture. The paper outlines the application of IEC 62443 standards-based security zoning and the assignment of security levels to the zones in smart grid system architectures. To manage trust in the smart grid system architecture, cybersecurity assurance baselining concepts are applied per security impact level. The selection and justification of the security assurance controls presented in the paper follow the approach common in the Security Technical Implementation Guides (STIGs) of the U.S. Defense Information Systems Agency. As shown in the paper, enhanced granularity for managing trust at both the overall system and subsystem levels of smart grid systems can be achieved by implementing the instructions of CNSSI 1253 of the U.S. Committee on National Security Systems on security categorization and control selection for national security systems.
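As a toy illustration of the zoning-and-baselining pattern (our simplification, not the paper's actual baselines), the sketch below assigns an IEC 62443-style target security level (SL-T, 1 through 4) to each zone and selects a tailored control baseline per level. The zones and control names are invented for illustration.

```python
# Toy IEC 62443-style zoning: assign a target security level (SL-T)
# to each zone and derive a tailored control baseline per level.
# Zones, levels, and control names are illustrative assumptions.

ZONE_SL_T = {              # hypothetical zone -> target security level
    "field_devices": 2,
    "substation_lan": 3,
    "control_center": 4,
}

BASELINE = {               # hypothetical cumulative control baselines
    1: {"password_policy"},
    2: {"password_policy", "network_segmentation"},
    3: {"password_policy", "network_segmentation", "ids_monitoring"},
    4: {"password_policy", "network_segmentation", "ids_monitoring",
        "hardware_root_of_trust"},
}

def tailored_controls(zone: str) -> set:
    """Select the control baseline matching the zone's SL-T."""
    return BASELINE[ZONE_SL_T[zone]]

for zone in ZONE_SL_T:
    print(zone, "->", sorted(tailored_controls(zone)))
```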
The Internet of Things (IoT) is an emerging technology, an extension of the traditional Internet in which everything is connected to everything else through technologies such as Radio Frequency Identification (RFID), sensors, GPS, and Machine-to-Machine communication. The security issues surrounding the IoT have had a detrimental impact on its development and have consequently attracted research interest. However, there are very few approaches that assess the security of the IoT from the perspective of an attacker. Penetration testing is widely used to evaluate the security of traditional Internet systems, but it normally incurs considerable cost and time. In this paper, we analyze the security problems of the IoT and propose a penetration testing approach, and its automation, based on the belief-desire-intention (BDI) model to evaluate the security of the IoT.
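For readers unfamiliar with the BDI pattern, here is a minimal, hypothetical sketch of a BDI-style penetration-testing loop: the agent updates beliefs from observations and commits to plans whose preconditions hold. This is an illustration only, not the paper's actual agent or plan library; every name and value is assumed.

```python
# Minimal BDI-style pen-testing loop (illustrative assumptions only).
# Beliefs are updated by sensing; plans fire when preconditions hold.

beliefs = {"open_ports": set(), "vulns": set()}

def scan(target):
    """Observation: update beliefs about the target's open ports."""
    beliefs["open_ports"].add(23)  # pretend a telnet port was found

def probe_default_creds(target):
    """Plan: applicable only if its precondition (telnet open) holds."""
    if 23 in beliefs["open_ports"]:
        beliefs["vulns"].add("default_telnet_credentials")

def bdi_cycle(target, plans):
    # Each cycle: sense, then execute applicable plans (intentions)
    # in service of the standing desire to compromise the target.
    for plan in plans:
        plan(target)

bdi_cycle("192.0.2.10", [scan, probe_default_creds])
print(beliefs)  # beliefs now reflect a discovered weakness
```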
Today's computing devices keep considerable amounts of sensitive data unencrypted in RAM. When a device is stolen, lost, or simply left unattended, attackers can access the data in RAM with ease. Valuable and possibly classified data falling into the wrong hands can lead to severe consequences, for instance when it is disclosed or reused to log in to accounts or to make transactions. We present a lightweight and hardware-independent mechanism to protect confidential data on suspended Linux devices against physical attackers. Our mechanism rapidly encrypts the contents of RAM during suspension and thereby prevents attackers from retrieving confidential data from the device. Existing systems can easily be extended with our mechanism while fully preserving usability for end users.
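The encrypt-on-suspend idea can be sketched at user level as follows: encrypt a sensitive buffer before suspend and decrypt it after resume. This is a minimal illustration using Python's cryptography package, not the paper's in-kernel mechanism, which encrypts RAM itself; the key would in practice be derived from user re-authentication rather than generated in place.

```python
# Sketch of encrypt-on-suspend: seal a sensitive in-memory buffer
# before the device suspends, unseal after resume.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # would come from a passphrase
aead = AESGCM(key)

secret = b"session tokens, keys, documents..."

# On suspend: encrypt, then drop the plaintext copy.
nonce = os.urandom(12)
ciphertext = aead.encrypt(nonce, secret, None)
secret = None

# On resume: decrypt after the user re-authenticates.
recovered = aead.decrypt(nonce, ciphertext, None)
print(recovered)
```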
The computer security community has long advocated defense in depth: building multiple layers of defense to protect a system. Realizing this vision is not yet practical, as software often ships with inadequate defenses, typically developed in an ad hoc fashion. Currently, programmers reason about security manually and lack tools to validate that security controls provide satisfactory defenses. In this keynote talk, I will discuss how achieving defense in depth has a significant component in configuration. In particular, we advocate configuring security requirements for various layers of software defenses (e.g., privilege separation, authorization, and auditing) and generating software and systems defenses that implement such configurations (mostly) automatically. I will focus mainly on the challenge of automatically retrofitting software with authorization code to demonstrate the configuration problems faced by the community, and discuss how we may leverage these lessons to configure software and systems for defense in depth.
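As a hedged sketch of what retrofitted authorization code looks like, the decorator below wraps security-sensitive operations with a policy query, roughly the kind of mediation an automated retrofitting tool would insert. The policy table, users, and operation names are illustrative assumptions, not the talk's actual tooling.

```python
# Sketch of retrofitted authorization: wrap sensitive operations
# with a policy check before they execute.

from functools import wraps

POLICY = {("alice", "read_report"), ("bob", "read_report"),
          ("alice", "delete_report")}

def authorize(operation):
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            # Mediate: deny unless (user, operation) is permitted.
            if (user, operation) not in POLICY:
                raise PermissionError(f"{user} may not {operation}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@authorize("delete_report")
def delete_report(user, report_id):
    return f"{user} deleted report {report_id}"

print(delete_report("alice", 7))   # permitted
# delete_report("bob", 7)          # would raise PermissionError
```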
The longstanding debate on a fundamental science of security has led to advances in systems, software, and network security. However, existing efforts have done little to inform how an environment should react to emerging and ongoing threats and compromises. The authors explore the goals and structures of a new science of cyber-decision-making in the Cyber-Security Collaborative Research Alliance, which seeks to develop a fundamental theory for reasoning, under uncertainty, about the best possible action in a given cyber environment. They also explore the needs and limitations of detection mechanisms; agile systems; and the users, adversaries, and defenders that use and exploit them; and conclude by considering how environmental security can be cast as a continuous optimization problem.
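A toy illustration of casting defense as optimization under uncertainty (our simplification, not the Alliance's theory): given a belief over threat states and a loss for each action in each state, choose the action minimizing expected loss. All states, actions, and numbers below are assumed.

```python
# Toy decision-under-uncertainty: pick the defense action that
# minimizes expected loss given a belief over threat states.

THREATS = ["none", "scanning", "active_breach"]
BELIEF = [0.7, 0.2, 0.1]          # assumed belief over threat states

LOSS = {                          # assumed loss[action][threat state]
    "do_nothing":    [0.0, 2.0, 10.0],
    "patch_and_log": [0.5, 1.0, 6.0],
    "isolate_host":  [3.0, 2.5, 1.0],
}

def expected_loss(action):
    return sum(p * l for p, l in zip(BELIEF, LOSS[action]))

best = min(LOSS, key=expected_loss)
print(best, round(expected_loss(best), 2))
```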
To keep malware out of mobile application markets, existing techniques analyze the security aspects of application behaviors and summarize patterns of these security aspects to determine what applications do. However, user expectations (reflected via user perception in combination with user judgment) are often not incorporated into such analysis to determine whether application behaviors are within user expectations. This poster presents our recent work on bridging the semantic gap between user perceptions of application behaviors and the actual application behaviors.
Hadoop is a MapReduce implementation that rapidly processes data in parallel. The cloud provides reliability, flexibility, scalability, elasticity, and cost savings to customers, so moving Hadoop into the cloud can be beneficial to Hadoop users. However, Hadoop has two vulnerabilities that can dramatically impact its security in a cloud: its overloaded authentication key, and the lack of fine-grained access control at the data access level. We propose and develop a security enhancement for cloud-based Hadoop.
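To illustrate what the missing fine-grained check amounts to, the sketch below authorizes each data-block access against a per-block ACL instead of trusting one coarse authentication token. The block IDs, job names, and ACL shape are illustrative assumptions, not the authors' actual enhancement.

```python
# Sketch of fine-grained, per-block access control for a
# Hadoop-like store (names and ACLs are assumed).

BLOCK_ACL = {
    "block_0001": {"analytics_job"},
    "block_0002": {"analytics_job", "billing_job"},
}

def read_block(job_id: str, block_id: str) -> bytes:
    # Check the requesting job against this specific block's ACL,
    # rather than relying on a single overloaded credential.
    if job_id not in BLOCK_ACL.get(block_id, set()):
        raise PermissionError(f"{job_id} denied access to {block_id}")
    return b"<block contents>"

print(read_block("billing_job", "block_0002"))
# read_block("billing_job", "block_0001")  # would raise PermissionError
```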
It is widely accepted that wireless channels decorrelate quickly over space, and half a wavelength is the key distance metric used in link signatures (LS) for security assurance. However, we believe that this channel correlation model is questionable and will lead to a false sense of security. In this project, we focus on establishing a correct model of channel correlation so as to facilitate proper guard-zone designs for LS security in various wireless environments of interest.
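For context on the half-wavelength rule being questioned: under the classical Jakes/Clarke scattering model, the spatial correlation of the channel is rho(d) = J0(2*pi*d/lambda), which is already small (but not zero) at d = lambda/2. The sketch below just evaluates that classical curve; it is background illustration, not the project's corrected model.

```python
# Evaluate the classical Jakes/Clarke spatial correlation
# rho(d) = J0(2*pi*d/lambda) at fractions of a wavelength.

import numpy as np
from scipy.special import j0

wavelength = 0.125               # ~2.4 GHz carrier, in meters
d = np.linspace(0, wavelength, 9)
rho = j0(2 * np.pi * d / wavelength)

for di, ri in zip(d, rho):
    print(f"d = {di / wavelength:.3f} lambda   rho = {ri:+.3f}")
# At d = 0.5 lambda, rho is about -0.30: small, but not zero,
# which is one reason the half-wavelength rule is only a heuristic.
```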
In this study, we present a control-theoretic technique to model routing in wireless multihop networks. We model ad hoc wireless networks as stochastic dynamical systems where, as a base case, a centralized controller pre-computes optimal paths to the destination. The usefulness of this approach lies in the fact that it can help obtain bounds on the reliability of end-to-end packet transmissions. We compare this approach with the reliability achieved by some of the widely used routing techniques in multihop networks.
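A worked example of the kind of reliability bound involved (our illustration, not the paper's model): if links fail independently, the end-to-end delivery probability of a path is the product of its per-link success probabilities, a simple yardstick for comparing routes.

```python
# End-to-end delivery probability of a path with independent links
# is the product of per-link success probabilities (assumed values).

from math import prod

paths = {
    "short_but_lossy": [0.80, 0.75],          # per-hop success probs
    "long_but_clean":  [0.97, 0.96, 0.95],
}

for name, links in paths.items():
    print(name, "->", round(prod(links), 3))
# The 3-hop clean path (0.885) beats the 2-hop lossy path (0.600),
# so hop count alone is a poor proxy for reliability.
```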
Trust is a necessary component in cybersecurity. It is a common task for a system to decide whether or not to trust the credential of an entity from another domain, issued by a third party. Generally, in cyberspace, connected and interacting systems largely rely on each other with respect to security, privacy, and performance. In their interactions, one entity or system needs to trust others, and this "trust" frequently becomes a vulnerability of that system. Aiming to mitigate this vulnerability, we are developing a computational theory of trust as part of our efforts towards a Science of Security. Previously, we developed a formal-semantics-based calculus of trust [3, 2], in which trust can be calculated based on a trustor's direct observation of the performance of the trustee, or based on a trust network. In this paper, we construct a framework for reasoning about trust based on observed evidence. We take privacy in cloud computing as a driving application case [5].
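As a stand-in for evidence-based trust computation (a standard beta-reputation style estimate, shown only to illustrate the idea; it is not the authors' formal-semantics-based calculus): from s positive and f negative observations of a trustee, estimate trust as (s + 1) / (s + f + 2).

```python
# Evidence-based trust estimate: expected trustworthiness under a
# Beta(s + 1, f + 1) posterior after s successes and f failures.

def trust_from_evidence(successes: int, failures: int) -> float:
    return (successes + 1) / (successes + failures + 2)

print(trust_from_evidence(0, 0))   # 0.5   -- no evidence, neutral
print(trust_from_evidence(9, 1))   # ~0.83 -- mostly good behavior
print(trust_from_evidence(1, 9))   # ~0.17 -- mostly bad behavior
```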
Moving Target Defense (MTD) can enhance the resilience of cyber systems against attacks. Although many MTD techniques have been proposed, there is no systematic understanding and quantitative characterization of the power of MTD. In this paper, we propose to use a cyber epidemic dynamics approach to characterize the power of MTD. We define and investigate two complementary measures that are applicable when the defender aims to deploy MTD to achieve a certain security goal. One measure emphasizes the maximum portion of time during which the system can afford to stay in an undesired configuration (or posture), without considering the cost of deploying MTD. The other measure emphasizes the minimum cost of deploying MTD, while accommodating that the system has to stay in an undesired configuration (or posture) for a given portion of time. Our analytic studies lead to algorithms for optimally deploying MTD.
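A toy version of the first measure (our simplification, not the paper's epidemic dynamics): if the infection grows at log-rate g_u > 0 while the system is in the undesired configuration and decays at rate g_d < 0 in the desired one, extinction on average requires pi_u * g_u + (1 - pi_u) * g_d < 0, giving a maximum undesired-time fraction pi_u < -g_d / (g_u - g_d). The rates below are assumed.

```python
# Max fraction of time the system may stay in an undesired
# configuration while the epidemic still dies out on average.

g_u = 0.5    # assumed growth rate while undesired
g_d = -0.3   # assumed decay rate while desired

pi_max = -g_d / (g_u - g_d)
print(f"max tolerable undesired-time fraction: {pi_max:.3f}")  # 0.375
```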
Sandboxes impose a security policy, isolating applications and their components from the rest of a system. While many sandboxing techniques exist, state-of-the-art sandboxes generally perform their functions within the system that is being defended. As a result, when the sandbox fails or is bypassed, the security of the surrounding system can no longer be assured. We experiment with the idea of in-nimbo sandboxing, encapsulating untrusted computations away from the system we are trying to protect. The idea is to delegate computations that may be vulnerable or malicious to virtual machine instances in a cloud computing environment. This may not reduce the possibility of an in-situ sandbox compromise, but it could significantly reduce the consequences should that possibility be realized. To achieve this advantage, there are additional requirements, including: (1) a regulated channel between the local and cloud environments that supports interaction with the encapsulated application, and (2) a performance design that acceptably minimizes latencies in excess of the in-situ baseline. To test the feasibility of the idea, we built an in-nimbo sandbox for Adobe Reader, an application that historically has been subject to significant attacks. We undertook a prototype deployment with PDF users in a large aerospace firm. In addition to thwarting several examples of existing PDF-based malware, we found that the added increment of latency, perhaps surprisingly, does not overly impair the user experience with respect to performance or usability.
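A hedged sketch of requirement (1), the regulated channel: only a small whitelist of message types, each with a fixed schema, is allowed to cross the boundary between the local host and the in-nimbo sandbox. The message names and fields are illustrative assumptions, not the prototype's actual protocol.

```python
# Regulated-channel validator: only whitelisted message types with
# well-typed fields may be forwarded to the cloud sandbox.

ALLOWED = {
    "open_document": {"doc_id": str},
    "render_page":   {"doc_id": str, "page": int},
    "close":         {"doc_id": str},
}

def validate(msg: dict) -> dict:
    kind = msg.get("type")
    schema = ALLOWED.get(kind)
    if schema is None:
        raise ValueError(f"message type {kind!r} not permitted")
    for field, ftype in schema.items():
        if not isinstance(msg.get(field), ftype):
            raise ValueError(f"bad or missing field {field!r}")
    return msg  # safe to forward across the channel

validate({"type": "render_page", "doc_id": "q3.pdf", "page": 2})
# validate({"type": "exec", "cmd": "rm -rf /"})  # would be rejected
```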