Biblio
Security isolation is a foundation of computing systems that enables resilience to different forms of attacks. This article seeks to understand existing security isolation techniques by systematically classifying different approaches and analyzing their properties. We provide a hierarchical classification structure for grouping different security isolation techniques. At the top level, we consider two principal aspects: mechanism and policy. Each aspect is broken down into salient dimensions that describe key properties. We break the mechanism aspect into two dimensions, enforcement location and isolation granularity, and the policy aspect into three dimensions: policy generation, policy configurability, and policy lifetime. We apply our classification to a set of representative papers that cover a breadth of security isolation techniques, and we discuss trade-offs among different design choices and limitations of existing approaches.
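To make the two-aspect structure concrete, the sketch below encodes the five dimensions as a small Python data model; the example values for each dimension are illustrative assumptions, not the paper's actual taxonomy.

```python
from dataclasses import dataclass

# Hypothetical encoding of the two-aspect classification described above.
# Dimension values are illustrative; the paper defines the actual taxonomy.

@dataclass
class Mechanism:
    enforcement_location: str   # e.g., "hardware", "OS kernel", "runtime"
    isolation_granularity: str  # e.g., "process", "container", "VM"

@dataclass
class Policy:
    generation: str        # e.g., "manual", "automatic"
    configurability: str   # e.g., "fixed", "user-configurable"
    lifetime: str          # e.g., "static", "dynamic"

@dataclass
class IsolationTechnique:
    name: str
    mechanism: Mechanism
    policy: Policy

# Classifying a hypothetical technique:
sandbox = IsolationTechnique(
    name="example-sandbox",
    mechanism=Mechanism("OS kernel", "process"),
    policy=Policy("manual", "user-configurable", "static"),
)
```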
Humans can easily find themselves in high-cost situations where they must choose between suggestions made by an automated decision aid and a conflicting human decision aid. Previous research indicates that humans often rely on automation or other humans, but not both simultaneously. Expanding on previous work by Lyons and Stokes (2012), the current experiment measures how trust in automated and human decision aids varies with perceived risk and workload. The simulated task required 126 participants to choose the safest route for a military convoy; they were presented with conflicting information from an automated tool and a human. Results demonstrated that as workload increased, trust in automation decreased, and as perceived risk increased, trust in the human decision aid increased. Individual differences in dispositional trust correlated with increased trust in both decision aids. These findings can be used to inform training programs for operators who may receive information from human and automated sources in contexts such as air traffic control, aviation, and signals intelligence.
Phishing is a social engineering tactic used to trick people into revealing personal information [Zielinska, Tembe, Hong, Ge, Murphy-Hill, & Mayhorn 2014]. As phishing emails continue to infiltrate users' mailboxes, what social engineering techniques are the phishers using to successfully persuade victims into releasing sensitive information?
Cialdini's [2007] six principles of persuasion (authority, social proof, liking/similarity, commitment/consistency, scarcity, and reciprocation) have been linked to elements of phishing emails [Akbar 2014; Ferreira & Lenzini 2015]; however, the findings have been conflicting. Authority and scarcity were found to be the most common persuasion principles in 207 emails obtained from a Netherlands database [Akbar 2014], while liking/similarity was the most common principle in 52 personal emails available in Luxembourg and England [Ferreira et al. 2015]. The purpose of this study was to examine the persuasion principles present in emails available in the United States over a period of five years.
Two reviewers assessed 887 phishing emails from Arizona State University, Brown University, and Cornell University for Cialdini's six principles of persuasion. Each email was evaluated using a questionnaire adapted from the Ferreira et al. [2015] study. The two raters reached an average agreement of 87% per item.
Spearman's rho correlations were used to compare email characteristics over time. During the five-year period under consideration (2010--2015), the persuasion principles of commitment/consistency and scarcity increased over time, while the principles of reciprocation and social proof decreased. Authority and liking/similarity showed mixed results, with certain characteristics increasing and others decreasing.
The commitment/consistency principle could be seen in the increase of emails referring to elements outside the email, such as Google Docs or Adobe Reader, to appear more reliable (rs(850) = .12, p = .001), while the scarcity principle could be seen in urgent elements that encourage users to act quickly and may have had success in eliciting a response (rs(850) = .09, p = .01). Reciprocation elements, such as a requested reply, decreased over time (rs(850) = -.12, p = .001). Additionally, the social proof principle, present in emails that refer to actions performed by other users, also decreased (rs(850) = -.10, p = .01).
Two persuasion principles exhibited both an increase and a decrease in their presence in emails over time: authority and liking/similarity. These principles could increase phishing success rates if used appropriately, but could also raise users' suspicions and decrease compliance if used incorrectly. Specifically, the source of the email, which corresponds to the authority principle, showed an increase over time for educational institutions (rs(850) = .21, p < .001) but a decrease for financial institutions (rs(850) = -.18, p < .001). Similarly, the liking/similarity principle showed an increase over time in logos present in emails (rs(850) = .18, p < .001) and a decrease in service details, such as payment information (rs(850) = -.16, p < .001).
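As an aside on the statistics reported above: a trend such as rs(850) = .12 is a Spearman rank correlation between each email's year and a coded characteristic, with 850 degrees of freedom (N - 2). A minimal sketch of how such a value could be computed, using toy data rather than the study's coded emails:

```python
# Illustrative sketch (not the study's actual analysis code): correlate each
# email's year with a binary indicator of whether a persuasion element appears.
from scipy.stats import spearmanr

years   = [2010, 2010, 2011, 2012, 2013, 2014, 2015, 2015]  # toy data
present = [0,    0,    0,    1,    0,    1,    1,    1]     # element present?

rho, p = spearmanr(years, present)
print(f"rs({len(years) - 2}) = {rho:.2f}, p = {p:.3f}")
```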
The results from this study offer a different perspective on phishing. Previous research has focused on the user; few studies have examined the phisher's perspective and the social psychological techniques phishers employ, and prior work has yet to examine the success of these techniques. Results from this study can be used to help predict future trends and to inform training programs, as well as machine learning systems used to identify phishing messages.
This paper presents an approach for securing software application chains in cloud environments. We use the concept of workflow management systems to explain the model; our prototype is based on the Kepler scientific workflow system enhanced with a security analytics package, and the model can be applied to other cloud-based systems. Depending on the information being received from the cloud, the approach can also offer information about internal states of the resources in the cloud. The approach hinges on (1) the ability to limit attacks to Input, Remote, and Output channels (or flows), and (2) the ability to validate the flows using operational profile (OP) or certification-based signals. OP-based validation is a statistical approach and may miss some attacks; however, where enumeration is possible (e.g., static web sites), the approach can offer high assurance of the validity of the flows. It is also assumed that workflow components are sound so long as the input flows stay within the operational profile. Other acceptance-testing approaches could be used to validate the flows. Work in progress has two thrusts: (1) using cloud-based Kepler workflows to probe and assess security states and operation of cloud resources (specifically VMs) under different workloads, leveraging DACSA sensors; and (2) analyzing the effectiveness of the proposed approach in securing workflows.
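A minimal sketch of OP-based flow validation as described above, assuming a simple frequency model over flow signatures (the signature format and threshold are illustrative, not the paper's design):

```python
# A minimal sketch, not the paper's implementation: operational-profile (OP)
# validation as a frequency model over observed flows. Flows whose signatures
# were never (or too rarely) seen during profiling are rejected.
from collections import Counter

class OperationalProfile:
    def __init__(self, min_frequency=0.01):
        self.counts = Counter()
        self.total = 0
        self.min_frequency = min_frequency

    def train(self, flow_signature):
        """Record a flow signature, e.g. (channel, message), seen in normal operation."""
        self.counts[flow_signature] += 1
        self.total += 1

    def is_valid(self, flow_signature):
        """Accept a flow only if it occurred often enough during profiling."""
        if self.total == 0:
            return False
        return self.counts[flow_signature] / self.total >= self.min_frequency

profile = OperationalProfile()
for sig in [("input", "GET /status"), ("output", "POST /result")] * 50:
    profile.train(sig)

print(profile.is_valid(("input", "GET /status")))    # True: within the profile
print(profile.is_valid(("remote", "EXEC /bin/sh")))  # False: off-profile flow
```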
Smartphone users often use private and enterprise data with untrusted third party applications. The fundamental lack of secrecy guarantees in smartphone OSes, such as Android, exposes this data to the risk of unauthorized exfiltration. A natural solution is the integration of secrecy guarantees into the OS. In this paper, we describe the challenges for decentralized information flow control (DIFC) enforcement on Android. We propose context-sensitive DIFC enforcement via lazy polyinstantiation and practical and secure network export through domain declassification. Our DIFC system, Weir, is backwards compatible by design, and incurs less than 4 ms overhead for component startup. With Weir, we demonstrate practical and secure DIFC enforcement on Android.
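As a toy illustration of the flow-control idea (not Weir's actual API or label model), network export can be gated on whether every secrecy tag carried by a component is declassified for the destination domain:

```python
# A toy sketch of DIFC-style secrecy checking: a component may export data
# over the network only if each secrecy tag on its label lists the destination
# domain as an authorized declassifier, mirroring the idea of domain
# declassification described above. Names here are hypothetical.
def can_export(component_label: set, destination_domain: str,
               declassifiers: dict) -> bool:
    """Allow export only if every tag is declassified for the destination."""
    return all(destination_domain in declassifiers.get(tag, set())
               for tag in component_label)

declassifiers = {"enterprise_data": {"corp.example.com"}}
print(can_export({"enterprise_data"}, "corp.example.com", declassifiers))  # True
print(can_export({"enterprise_data"}, "evil.example.net", declassifiers))  # False
```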
In a multiagent system, a (social) norm describes what the agents may expect from each other. Norms promote autonomy (an agent need not comply with a norm) and heterogeneity (a norm describes interactions at a high level independent of implementation details). Researchers have studied norm emergence through social learning where the agents interact repeatedly in a graph structure.
In contrast, we consider norm emergence in an open system, where membership can change and where no predetermined graph structure exists. We propose Silk, a mechanism wherein a generator monitors interactions among member agents and recommends norms to help resolve conflicts. Each member decides whether to accept or reject a recommended norm. Upon exiting the system, a member passes its experience along to incoming members of the same type. Thus, members develop norms in a hybrid manner to resolve conflicts.
We evaluate Silk via simulation in the traffic domain. Our results show that social norms promoting conflict resolution emerge in both moderate and selfish societies via our hybrid mechanism.
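A schematic sketch of the hybrid loop, with hypothetical names and a simplified accept/reject rule (Silk's actual decision and experience-transfer mechanisms are defined in the paper):

```python
# Illustrative only: each member keeps per-norm payoff estimates, accepts a
# recommended norm when experience suggests it pays off, and hands its
# experience to an incoming member of the same type on exit.
class Member:
    def __init__(self, mtype, experience=None):
        self.mtype = mtype
        self.experience = experience or {}  # norm -> estimated payoff

    def decide(self, norm):
        """Accept if past payoff is nonnegative; unknown norms are accepted optimistically."""
        return self.experience.get(norm, 0.0) >= 0.0

    def record(self, norm, payoff):
        prev = self.experience.get(norm, 0.0)
        self.experience[norm] = 0.5 * prev + 0.5 * payoff  # simple averaging

def on_exit(departing, newcomer):
    """Pass experience along to an incoming member of the same type."""
    if departing.mtype == newcomer.mtype:
        newcomer.experience.update(departing.experience)
```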
To interact effectively, agents must enter into commitments. What should an agent do when these commitments conflict? We describe Coco, an approach for reasoning about which specific commitments apply to specific parties in light of general types of commitments, specific circumstances, and dominance relations among specific commitments. Coco adapts answer-set programming to identify a maximal set of nondominated commitments. It provides a modeling language and tool geared to support practical applications.
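The core filtering step can be illustrated in a few lines: given candidate commitments and a dominance relation, keep only the nondominated ones. Coco encodes this in answer-set programming; the Python sketch below, with a toy dominance relation, just conveys the idea:

```python
# Illustrative sketch: keep every commitment not dominated by any other.
def nondominated(commitments, dominates):
    """Return the maximal set of commitments not dominated by any other."""
    return {c for c in commitments
            if not any(dominates(other, c) for other in commitments if other != c)}

# Toy dominance relation: a more specific commitment dominates a general one.
specificity = {"deliver-by-date": 2, "deliver": 1}
dominates = lambda a, b: specificity[a] > specificity[b]
print(nondominated({"deliver-by-date", "deliver"}, dominates))  # {'deliver-by-date'}
```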
Modern Internet applications are being disaggregated into a microservice-based architecture, with services being updated and deployed hundreds of times a day. The accelerated software life cycle and the heterogeneity of language runtimes in a single application necessitate a new approach for testing the resiliency of these applications in production infrastructures. We present Gremlin, a framework for systematically testing the failure-handling capabilities of microservices. Gremlin is based on the observation that microservices are loosely coupled and thus rely on standard message-exchange patterns over the network. Gremlin allows the operator to easily design tests and executes them by manipulating inter-service messages at the network layer. We show how to use Gremlin to express common failure scenarios and how developers of an enterprise application were able to discover previously unknown bugs in their failure-handling code without modifying the application.
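A conceptual sketch of network-layer failure injection in the spirit of Gremlin (the rule format and proxy interface here are illustrative assumptions, not Gremlin's API):

```python
# Illustrative only: failure-injection rules applied at the message-exchange
# layer between services. Rules can abort a request with an error status or
# delay it, independent of application code.
import random, time

class FaultInjectingProxy:
    def __init__(self, send):
        self.send = send   # underlying function: (service, request) -> response
        self.rules = []    # list of (target_service, fault, probability)

    def add_rule(self, service, fault, probability=1.0):
        self.rules.append((service, fault, probability))

    def __call__(self, service, request):
        for target, fault, prob in self.rules:
            if target == service and random.random() < prob:
                if fault == "abort":
                    return {"status": 503, "body": "injected failure"}
                if fault == "delay":
                    time.sleep(2.0)  # injected latency before forwarding
        return self.send(service, request)

# Usage: test whether a caller handles a failing 'reviews' service gracefully.
proxy = FaultInjectingProxy(lambda svc, req: {"status": 200, "body": "ok"})
proxy.add_rule("reviews", "abort")
print(proxy("reviews", {"path": "/v1/reviews"}))  # {'status': 503, ...}
```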
While software systems are being developed and released to consumers more rapidly than ever, security remains an important issue for developers. Shorter development cycles mean less time for critical security testing and review efforts. The attack surface of a system is the sum of all paths for untrusted data into and out of the system; code that lies on the attack surface is therefore the code that may contain exploitable vulnerabilities. However, identifying the code on the attack surface requires the same contested security resources as the secure testing efforts themselves. My research proposes an automated technique for approximating attack surfaces through the analysis of stack traces. We hypothesize that stack traces from user crashes represent activity that puts the system under stress and is therefore indicative of potential security vulnerabilities. The goal of this research is to aid software engineers in prioritizing security efforts by approximating the attack surface of a system via stack trace analysis. In a trial on Mozilla Firefox, the attack surface approximation selected 8.4% of files and contained 72.1% of known vulnerabilities. A similar trial was performed on the Windows 8 product.
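A simplified sketch of the approximation: harvest the set of source files that appear in crash stack traces and treat that set as the candidate attack surface. The frame format below is hypothetical; real traces would need a parser for the crash reporter's actual format.

```python
# Illustrative sketch: collect files seen in crash stack traces.
import re

FRAME = re.compile(r"at [\w.$]+\(([\w/.]+\.(?:c|cpp|java|py)):\d+\)")

def files_on_attack_surface(stack_traces):
    surface = set()
    for trace in stack_traces:
        surface.update(FRAME.findall(trace))
    return surface

traces = [
    "at parser.parse(netwerk/http/HttpChannel.cpp:214)",
    "at js.eval(js/src/Interpreter.cpp:990)",
]
print(files_on_attack_surface(traces))

# Prioritization: the fraction of known vulnerabilities falling in this set
# (72.1% within 8.4% of files in the Firefox trial) measures its usefulness.
```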
Objective: The overarching goal is to convey the concept of science of security and the contributions that a scientifically based, human factors approach can make to this interdisciplinary field. Background: Rather than a piecemeal approach to solving cybersecurity problems as they arise, the U.S. government is mounting a systematic effort to develop an approach grounded in science. Because humans play a central role in security measures, research on security-related decisions and actions grounded in principles of human information-processing and decision-making is crucial to this interdisciplinary effort. Method: We describe the science of security and the role that human factors can play in it, and use two examples of research in cybersecurity (detection of phishing attacks and selection of mobile applications) to illustrate the contribution of a scientific, human factors approach. Results: In these research areas, we show that systematic information-processing analyses of the decisions that users make and the actions they take provide a basis for integrating the human component of security science. Conclusion: Human factors specialists should utilize their foundation in the science of applied information processing and decision making to contribute to the science of cybersecurity.
Firewall policies are notorious for misconfiguration errors that can defeat their intended purpose of protecting hosts in the network from malicious users. We believe this is because today's firewall policies are mostly monolithic. Inspired by ideas from modular programming and code refactoring, in this work we introduce three kinds of modules: primary, auxiliary, and template, which facilitate refactoring a firewall policy into smaller, reusable, comprehensible, and more manageable components. We present algorithms for generating each of the three modules for a given legacy firewall policy. We also develop ModFP, an automated tool for converting legacy firewall policies represented as access control lists to their modularized format. With the help of ModFP, when examining several real-world policies with sizes ranging from dozens to hundreds of rules, we were able to identify subtle errors.
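As a rough illustration of the refactoring idea (ModFP's actual module-generation algorithms are more involved), a flat ACL can be grouped into candidate modules keyed on the protected asset:

```python
# Illustrative only: group a monolithic rule list into per-asset modules so
# each module collects the rules governing one host or subnet.
from collections import defaultdict

def modularize(acl_rules):
    """acl_rules: list of dicts like
    {'action': 'allow', 'src': '10.0.0.0/8', 'dst': 'db-server', 'port': 5432}"""
    modules = defaultdict(list)
    for rule in acl_rules:
        modules[rule["dst"]].append(rule)  # one candidate module per destination
    return dict(modules)

acl = [
    {"action": "allow", "src": "10.0.0.0/8", "dst": "db-server",  "port": 5432},
    {"action": "deny",  "src": "0.0.0.0/0",  "dst": "db-server",  "port": 5432},
    {"action": "allow", "src": "0.0.0.0/0",  "dst": "web-server", "port": 443},
]
for target, rules in modularize(acl).items():
    print(target, rules)
```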
Cyber-attacks and breaches are often detected too late to avoid damage. While “classical” reactive cyber defenses usually work only if we have some prior knowledge about the attack methods and “allowable” patterns, properly constructed redundancy-based anomaly detectors can be more robust and are often able to detect even zero-day attacks. They are a step toward an oracle that uses the knowable behavior of a healthy system to identify abnormalities. In the world of the Internet of Things (IoT), securing sensors and other IoT components and detecting their anomalous behavior will be orders of magnitude more difficult unless we make those elements security-aware from the start. In this article we examine the ability of redundancy-based anomaly detectors to recognize some high-risk and difficult-to-detect attacks on web servers, a likely management interface for many stand-alone IoT elements. In real life, identifying some of these vulnerabilities and the related attacks has taken a long time, in some cases a number of years. We discuss the practical relevance of the approach in the context of providing high-assurance Web services that may belong to autonomous IoT applications and devices.
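A minimal sketch of the redundancy idea, under the assumption that diverse replicas agree on healthy behavior: send each request to all replicas and flag any response that deviates from the majority.

```python
# Illustrative sketch of redundancy-based anomaly detection via majority voting
# over diverse replicas; response normalization is assumed to happen upstream.
from collections import Counter

def detect_anomaly(replica_responses):
    """replica_responses: list of (replica_id, normalized_response)."""
    votes = Counter(resp for _, resp in replica_responses)
    majority, _ = votes.most_common(1)[0]
    return [rid for rid, resp in replica_responses if resp != majority]

responses = [("apache", "200:index-v1"), ("nginx", "200:index-v1"),
             ("lighttpd", "500:stack-dump")]
print(detect_anomaly(responses))  # ['lighttpd'] flagged for divergence
```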
Intrusion detection systems (IDSs) have assumed increasing importance in recent decades as information systems have become ubiquitous. Despite the abundance of intrusion detection algorithms developed so far, no single detection algorithm or procedure can catch all possible intrusions, and simultaneously running all of these algorithms may not be feasible for practical IDSs due to resource limitations. For these reasons, effective IDS configuration becomes crucial for real-time intrusion detection. However, uncertainty about the intruder’s type and the (often unknown) dynamics of the target system pose challenges to IDS configuration. Considering these challenges, this work formulates the IDS configuration problem as an incomplete-information stochastic game and proposes a new algorithm, Bayesian Nash-Q learning, that combines conventional reinforcement learning with a Bayesian type-identification procedure. Numerical results show that the proposed algorithm can identify the intruder’s type with high fidelity and provide effective configurations.
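Schematically, the combination pairs a Bayes-rule update over intruder types with a standard Q-learning step; the sketch below uses placeholder likelihoods and rewards rather than the paper's game formulation.

```python
# Illustrative only: Bayesian type identification plus a Q-learning update.
def bayes_update(belief, observed_action, likelihood):
    """belief: {type: prob}; likelihood: {(type, action): P(action | type)}."""
    posterior = {t: p * likelihood[(t, observed_action)] for t, p in belief.items()}
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

def q_update(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """Standard Q-learning step: Q(s,a) += alpha * (r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# Example: a noisy scan sharpens the belief toward a low-skill intruder type.
belief = {"script_kiddie": 0.5, "apt": 0.5}
likelihood = {("script_kiddie", "noisy_scan"): 0.8, ("apt", "noisy_scan"): 0.1}
print(bayes_update(belief, "noisy_scan", likelihood))

Q = {}
q_update(Q, state="probe_seen", action="deep_inspection", reward=1.0,
         next_state="attack_blocked", actions=["deep_inspection", "light_scan"])
print(Q)
```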
We conducted an MTurk online user study to assess whether directing users to attend to the address bar and the use of domain highlighting lead to better performance at detecting fraudulent webpages. 320 participants were recruited to evaluate the trustworthiness of webpages (half authentic and half fraudulent) in two blocks. In the first block, participants were instructed to judge each webpage’s legitimacy by any information on the page. In the second block, they were directed specifically to look at the address bar. Webpages with domain highlighting were presented in the first block for half of the participants and in the second block for the remaining participants. Results showed that participants could differentiate legitimate from fraudulent webpages to a significant extent. When participants were directed to look at the address bar, correct decisions increased for fraudulent webpages (“unsafe”) but did not change for authentic webpages (“safe”). The percentage of correct judgments showed no influence of domain highlighting, regardless of whether decisions were based on any information on the webpage or participants were directed to the address bar. The results suggest that directing users’ attention to the address bar is slightly beneficial in helping users detect phishing webpages, whereas domain highlighting provides almost no additional protection.
Platform as a Service (PaaS) provides middleware resources to cloud customers. As demand for PaaS services increases, so do concerns about the security of PaaS. This paper discusses principal PaaS security and integrity requirements, vulnerabilities, and the corresponding countermeasures. We consider three core cloud elements (multi-tenancy, isolation, and virtualization) and how they relate to PaaS services and to security trends and concerns such as user and resource isolation, side-channel vulnerabilities in multi-tenant environments, and the protection of sensitive data.
We have begun to investigate the effectiveness of a phishing-warning Chrome extension in a field setting of everyday computer use. A preliminary experiment has been conducted in which participants installed and used the extension. They were required to fill out an online browsing-behavior questionnaire by clicking on a survey link sent in a weekly email from us. Two phishing attacks were simulated during the study by directing participants to "fake" (phishing) survey sites we created. Almost all participants who saw the warnings on our fake sites input incorrect passwords, but follow-up interviews revealed that only one participant did so intentionally. The interviews also revealed that the warning failures were mainly due to the survey task being mandatory. Another finding of interest from the interviews was that about 50% of the participants had never heard of phishing or did not understand its meaning.