Biblio

Filters: Keyword is Oct'16
2017-01-05
Aiping Xiong, Robert W. Proctor, Weining Yang, Ninghui Li.  2017.  Is Domain Highlighting Actually Helpful in Identifying Phishing Webpages? Human Factors: The Journal of the Human Factors and Ergonomics Society.

Objective: To evaluate the effectiveness of domain highlighting in helping users identify whether webpages are legitimate or spurious.

Background: As a component of the URL, a domain name can be overlooked. Consequently, browsers highlight the domain name to help users identify which website they are visiting. Nevertheless, few studies have assessed the effectiveness of domain highlighting, and the only formal study confounded highlighting with instructions to look at the address bar. 

Method: We conducted two phishing detection experiments. Experiment 1 was run online: Participants judged the legitimacy of webpages in two phases. In phase one, participants were to judge legitimacy based on any information on the webpage, whereas in phase two they were to focus on the address bar. Whether the domain was highlighted was also varied. Experiment 2 was conducted similarly but with participants in a laboratory setting, which allowed tracking of eye fixations.

Results: Participants differentiated the legitimate and fraudulent webpages better than chance. There was some benefit of attending to the address bar, but domain highlighting did not provide effective protection against phishing attacks. Analysis of eye-gaze fixation measures was in agreement with the task performance, but heat-map results revealed that participants’ visual attention was attracted by the highlighted domains.

Conclusion: Failure to detect many fraudulent webpages even when the domain was highlighted implies that users lacked knowledge of webpage security cues or how to use those cues.

2017-01-09
Ricard López Fogués, Pradeep K. Murukannaiah, Jose M. Such, Munindar P. Singh.  2017.  Understanding Sharing Policies in Multiparty Scenarios: Incorporating Context, Preferences, and Arguments into Decision Making. ACM Transactions on Computer-Human Interaction.

Social network services enable users to conveniently share personal information.  Often, the information shared concerns other people, especially other members of the social network service.  In such situations, two or more people can have conflicting privacy preferences; thus, an appropriate sharing policy may not be apparent. We identify such situations as multiuser privacy scenarios. Current approaches propose finding a sharing policy through preference aggregation.  However, studies suggest that users feel more confident in their decisions regarding sharing when they know the reasons behind each other's preferences.  The goals of this paper are (1) understanding how people decide the appropriate sharing policy in multiuser scenarios where arguments are employed, and (2) developing a computational model to predict an appropriate sharing policy for a given scenario. We report on a study that involved a survey of 988 Amazon MTurk users about a variety of multiuser scenarios and the optimal sharing policy for each scenario.  Our evaluation of the participants' responses reveals that contextual factors, user preferences, and arguments influence the optimal sharing policy in a multiuser scenario.  We develop and evaluate an inference model that predicts the optimal sharing policy given the three types of features.  We analyze the predictions of our inference model to uncover potential scenario types that lead to incorrect predictions, and to enhance our understanding of when multiuser scenarios are more or less prone to dispute.

To appear.

2017-01-05
Jing Chen, Robert W. Proctor, Ninghui Li.  2016.  Human Trust in Automation in a Phishing Context. 46th Annual Meeting of the Society for Computers in Psychology.

Many previous studies have shown that trust in automation mediates the effectiveness of automation in maintaining performance, and one critical factor that affects trust is the reliability of the automated system. In the cyber domain, automated systems are pervasive, yet the role of human trust has not been studied as extensively as in other domains such as transportation.

In the current study, we used a phishing email identification task (with an automated phishing-detection assistant) as a testbed to study human trust in automation in the cyber domain. More specifically, we systematically investigated the influence of “description” (i.e., whether the user was informed about the actual reliability of the automated system) and “experience” (i.e., whether the user was provided feedback on their choices), in addition to the reliability level of the automated phishing detection system. These factors were varied across conditions of response bias (false alarms vs. misses) and task difficulty (easy vs. difficult), which a pilot study indicated may be critical. Measures of user performance and trust were compared across conditions. The measures of interest were human trust in the warning (a subjective rating of how trustworthy the warning system is), human reliance on the automated system (an objective measure of whether participants complied with the system’s warnings), and performance (the overall quality of the decisions made).

Aiping Xiong, Robert W. Proctor, Ninghui Li, Weining Yang.  2016.  Use of Warnings for Instructing Users How to Detect Phishing Webpages. 46th Annual Meeting of the Society for Computers in Psychology.

The ineffectiveness of phishing warnings has been attributed to users' poor comprehension of the warning. However, the effectiveness of a phishing warning is typically evaluated at the time when users interact with a suspected phishing webpage, which we call the effect with the phishing warning. Nevertheless, users' improved phishing detection when the warning is absent, or the effect of the warning, is the ultimate goal for preventing users from falling for phishing scams. We conducted an online study to evaluate the effect with and of several phishing warning variations, varying the point at which the warning was presented and whether procedural knowledge instruction was included in the warning interface. The current Chrome phishing warning was also included as a control. 360 Amazon Mechanical Turk workers made legitimacy decisions about 10 login webpages (8 authentic, 2 fraudulent) with the aid of the warning (phase one). After a short distractor task, the workers made the same decisions about 10 different login webpages (8 authentic, 2 fraudulent) without the warning. In phase one, the compliance rates with the two proposed warning interfaces (98% and 94%) were similar to that of the Chrome warning (98%), regardless of when the warning was presented. In phase two (without warning), performance was best in the condition in which the warning with procedural knowledge instruction had been presented before the phishing webpage in phase one, suggesting a better effect of the warning than for the other conditions. With procedural knowledge of how to determine a webpage’s legitimacy, users identified phishing webpages more accurately even without the warning being presented.

2016-12-06
Amit Vasudevan, Sagar Chaki, Limin Jia, Anupam Datta.  2016.  überSpark: Enforcing Verifiable Object Abstractions for Automated Compositional Security Analysis of a Hypervisor. 25th USENIX Security Symposium (USENIX Security 16).

We present überSpark (üSpark), an innovative architecture for compositional verification of security properties of extensible hypervisors written in C and Assembly. üSpark comprises two key ideas: (i) endowing low-level system software with abstractions found in higher-level languages (e.g., objects, interfaces, function-call semantics for implementations of interfaces, access control on interfaces, concurrency and serialization), enforced using a combination of commodity hardware mechanisms and lightweight static analysis; and (ii) interfacing with platform hardware by programming in Assembly using an idiomatic style (called CASM) that is verifiable via tools aimed at C, while retaining its performance and low-level access to hardware. After verification, the C code is compiled using a certified compiler while the CASM code is translated into its corresponding Assembly instructions. Collectively, these innovations enable compositional verification of security invariants without sacrificing performance. We validate üSpark by building and verifying security invariants of an existing open-source commodity x86 micro-hypervisor and several of its extensions, and demonstrating only minor performance overhead with low verification costs.

2016-12-07
Filipe Militão, Jonathan Aldrich, Luís Caires.  2016.  Composing Interfering Abstract Protocols. European Conference on Object-Oriented Programming (ECOOP).

The undisciplined use of shared mutable state can be a source of program errors when aliases unsafely interfere with each other. While protocol-based techniques to reason about interference abound, they do not address two practical concerns: the decidability of protocol composition and its integration with protocol abstraction. We address both concerns: we show that our composition procedure is decidable and that it ensures safe interference even when composing abstract protocols. To evaluate the expressiveness of our protocol framework for ensuring safe shared-memory interference, we show how the same framework can be used to model safe, typeful message-passing concurrency idioms.

2016-06-23
Adwait Nadkarni, Benjamin Andow, William Enck, Somesh Jha.  2016.  Practical DIFC Enforcement on Android. USENIX Security Symposium.

Smartphone users often use private and enterprise data with untrusted third party applications.  The fundamental lack of secrecy guarantees in smartphone OSes, such as Android, exposes this data to the risk of unauthorized exfiltration.  A natural solution is the integration of secrecy guarantees into the OS.  In this paper, we describe the challenges for decentralized information flow control (DIFC) enforcement on Android.  We propose context-sensitive DIFC enforcement via lazy polyinstantiation and practical and secure network export through domain declassification.  Our DIFC system, Weir, is backwards compatible by design, and incurs less than 4 ms overhead for component startup.  With Weir,  we demonstrate practical and secure DIFC enforcement on Android.
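
For readers unfamiliar with the label model that DIFC systems enforce, the following is a minimal Java sketch of tag-set secrecy labels, the label join, and the can-flow-to check. The class, method names, and scenario are hypothetical and deliberately generic; this is not Weir's actual API or its Android integration.

    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical sketch of a DIFC secrecy label: a set of opaque tags.
    // Data may flow from a source to a sink only if every secrecy tag on
    // the source is also present on the sink (no loss of secrecy).
    public class SecrecyLabel {
        private final Set<String> tags = new HashSet<>();

        public SecrecyLabel(String... initialTags) {
            for (String t : initialTags) tags.add(t);
        }

        // Least upper bound: combining two inputs taints the result with both labels.
        public SecrecyLabel join(SecrecyLabel other) {
            SecrecyLabel result = new SecrecyLabel();
            result.tags.addAll(this.tags);
            result.tags.addAll(other.tags);
            return result;
        }

        // Flow is permitted only if the sink's label dominates this label.
        public boolean canFlowTo(SecrecyLabel sink) {
            return sink.tags.containsAll(this.tags);
        }

        public static void main(String[] args) {
            SecrecyLabel enterpriseDoc = new SecrecyLabel("enterprise");
            SecrecyLabel untrustedApp  = new SecrecyLabel();             // no tags: public sink
            SecrecyLabel trustedSync   = new SecrecyLabel("enterprise");

            System.out.println(enterpriseDoc.canFlowTo(untrustedApp));   // false: would exfiltrate
            System.out.println(enterpriseDoc.canFlowTo(trustedSync));    // true: label dominated

            // Joining in personal data further restricts where the result may go.
            SecrecyLabel mixed = enterpriseDoc.join(new SecrecyLabel("personal"));
            System.out.println(mixed.canFlowTo(trustedSync));            // false: "personal" tag missing
        }
    }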

2016-12-07
Jaspreet Bhatia, Morgan Evans, Sudarshan Wadkar, Travis Breaux.  2016.  Automated Extraction of Regulated Information Types using Hyponymy Relations. 2016 RE: Requirements Engineering Conference.

Requirements analysts can model regulated data practices to identify and reason about risks of noncompliance. If terminology is inconsistent or ambiguous, however, these models and their conclusions will be unreliable. To study this problem, we investigated an approach to automatically construct an information type ontology by identifying information type hyponymy in privacy policies using Tregex patterns. Tregex is a utility to match regular expressions against constituency parse trees, which are hierarchical expressions of natural language clauses, including noun and verb phrases. We discovered the Tregex patterns by applying content analysis to 15 privacy policies from three domains (shopping, telecommunication and social networks) to identify all instances of information type hyponymy. From this dataset, three semantic and four syntactic categories of hyponymy emerged based on category completeness and word-order. Among these, we identified and empirically evaluated 26 Tregex patterns to automate the extraction of hyponyms from privacy policies. The patterns identify information type hypernym-hyponym pairs with an average precision of 0.83 and recall of 0.52 across our dataset of 15 policies. 
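
To make the extraction mechanics concrete, here is a small Java sketch using Stanford CoreNLP's Tregex API (the stanford-corenlp jar must be on the classpath). The "such as" pattern below is an illustrative lexico-syntactic cue, not one of the paper's 26 evaluated patterns, and the parse tree is hand-built rather than produced by a parser.

    import edu.stanford.nlp.trees.Tree;
    import edu.stanford.nlp.trees.tregex.TregexMatcher;
    import edu.stanford.nlp.trees.tregex.TregexPattern;

    public class HyponymyExample {
        public static void main(String[] args) {
            // Hand-built constituency parse of:
            // "information such as your name and email address"
            Tree tree = Tree.valueOf(
                "(NP (NP (NN information))" +
                " (PP (JJ such) (IN as)" +
                "  (NP (NP (PRP$ your) (NN name)) (CC and) (NP (NN email) (NN address)))))");

            // Hypernym NP whose right sister is a "such as" PP containing the hyponym NP.
            TregexPattern pattern = TregexPattern.compile(
                "NP=hypernym $+ (PP < (JJ < such) < (IN < as) < NP=hyponym)");

            TregexMatcher matcher = pattern.matcher(tree);
            while (matcher.find()) {
                System.out.println("hypernym: " + matcher.getNode("hypernym").yieldWords());
                System.out.println("hyponym:  " + matcher.getNode("hyponym").yieldWords());
            }
        }
    }

Here the bound hyponym node is the whole coordinated NP ("your name and email address"); a fuller pipeline would split it into the individual hyponyms.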

2016-12-08
Gabriel Ferreira, Momin Malik, Christian Kästner, Jürgen Pfeffer, Sven Apel.  2016.  Do #ifdefs influence the occurrence of vulnerabilities? An empirical study of the Linux kernel. SPLC '16: Proceedings of the 20th International Systems and Software Product Line Conference. :65-73.

Preprocessors support the diversification of software products with #ifdefs, but also require additional effort from developers to maintain and understand variable code. We conjecture that #ifdefs cause developers to produce more vulnerable code because they are required to reason about multiple features simultaneously and maintain complex mental models of dependencies of configurable code.

We extracted a variational call graph across all configurations of the Linux kernel, and used configuration complexity metrics to compare vulnerable and non-vulnerable functions considering their vulnerability history. Our goal was to learn about whether we can observe a measurable influence of configuration complexity on the occurrence of vulnerabilities.

Our results suggest, among other findings, that vulnerable functions have higher variability than non-vulnerable ones and are also constrained by fewer configuration options. This suggests that developers are inclined to notice functions that appear in frequently-compiled product variants. We aim to raise developers' awareness of the need to address variability more systematically, since configuration complexity is an important, but often ignored, aspect of software product lines. A toy sketch of the kind of comparison involved appears below.
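
As a toy illustration only (the function names, option sets, and grouping below are made up, and the metric is far simpler than the paper's variational call-graph analysis), one can compare the mean number of configuration options guarding vulnerable versus non-vulnerable functions:

    import java.util.Map;
    import java.util.Set;

    // Toy sketch (all names hypothetical): compare a simple configuration-
    // complexity metric, the number of distinct #ifdef options guarding a
    // function, between vulnerable and non-vulnerable functions.
    public class IfdefComplexity {
        static double meanOptionCount(Map<String, Set<String>> optionsPerFunction,
                                      Set<String> functions) {
            return functions.stream()
                    .mapToInt(f -> optionsPerFunction.getOrDefault(f, Set.of()).size())
                    .average()
                    .orElse(0.0);
        }

        public static void main(String[] args) {
            // function -> configuration options guarding its inclusion (invented data)
            Map<String, Set<String>> guards = Map.of(
                "vfs_read",  Set.of(),
                "tcp_input", Set.of("CONFIG_NET", "CONFIG_INET"),
                "kvm_run",   Set.of("CONFIG_KVM", "CONFIG_VIRT", "CONFIG_X86"));

            Set<String> vulnerable    = Set.of("vfs_read");
            Set<String> nonVulnerable = Set.of("tcp_input", "kvm_run");

            System.out.printf("vulnerable mean options:     %.2f%n",
                meanOptionCount(guards, vulnerable));
            System.out.printf("non-vulnerable mean options: %.2f%n",
                meanOptionCount(guards, nonVulnerable));
        }
    }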

Jens Meinicke, Chu-Pan Wong, Christian Kästner, Thomas Thüm, Gunter Saake.  2016.  On essential configuration complexity: measuring interactions in highly-configurable systems. ASE 2016: Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering. :483-494.

Quality assurance for highly-configurable systems is challenging due to the exponentially growing configuration space. Interactions among multiple options can lead to surprising behaviors, bugs, and security vulnerabilities. Analyzing all configurations systematically might be possible though if most options do not interact or interactions follow specific patterns that can be exploited by analysis tools. To better understand interactions in practice, we analyze program traces to characterize and identify where interactions occur on control flow and data. To this end, we developed a dynamic analysis for Java based on variability-aware execution and monitor executions of multiple small to medium-sized programs. We find that the essential configuration complexity of these programs is indeed much lower than the combinatorial explosion of the configuration space indicates. However, we also discover that the interaction characteristics that allow scalable and complete analyses are more nuanced than what is exploited by existing state-of-the-art quality assurance strategies.
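
As a minimal illustration of what an option interaction looks like (a hypothetical Java example, much simpler than the paper's variability-aware execution of real programs), consider two boolean options whose combination produces behavior that neither option produces alone:

    // Minimal sketch of a two-option interaction (hypothetical example).
    // Exhaustively executing all 2^2 configurations shows that the options
    // interact: the encrypted-block path is taken only when COMPRESS and
    // ENCRYPT are enabled together. Variability-aware execution detects such
    // cases by tracking which option combinations influence control flow and data.
    public class InteractionExample {
        static String process(String data, boolean compress, boolean encrypt) {
            if (compress) data = "zip(" + data + ")";
            if (encrypt && compress) {
                // Interaction: this path exists only under COMPRESS && ENCRYPT.
                data = "enc-block(" + data + ")";
            } else if (encrypt) {
                data = "enc-stream(" + data + ")";
            }
            return data;
        }

        public static void main(String[] args) {
            boolean[] flags = {false, true};
            for (boolean c : flags)
                for (boolean e : flags)
                    System.out.printf("COMPRESS=%b ENCRYPT=%b -> %s%n",
                        c, e, process("payload", c, e));
        }
    }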

2017-01-05
Jaspreet Bhatia, Travis Breaux, Joel Reidenberg, Thomas Norton.  2016.  A Theory of Vagueness and Privacy Risk Perception. 2016 IEEE 24th International Requirements Engineering Conference (RE).

Ambiguity arises in requirements when a statement is unintentionally or otherwise incomplete, missing information, or when a word or phrase has more than one possible meaning. For web-based and mobile information systems, ambiguity, and vagueness in particular, undermines the ability of organizations to align their privacy policies with their data practices, which can confuse or mislead users, thus leading to an increase in privacy risk. In this paper, we introduce a theory of vagueness for privacy policy statements based on a taxonomy of vague terms derived from an empirical content analysis of 15 privacy policies. The taxonomy was evaluated in a paired comparison experiment and results were analyzed using the Bradley-Terry model to yield a rank order of vague terms in both isolation and composition. The theory predicts how vague modifiers to information actions and information types can be composed to increase or decrease overall vagueness. We further provide empirical evidence based on factorial vignette surveys to show how increases in vagueness will decrease users' acceptance of privacy risk and thus decrease users' willingness to share personal information.
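
For readers unfamiliar with the Bradley-Terry model: it assigns each item a latent strength p_i such that the probability that term i is judged vaguer than term j in a paired comparison is p_i / (p_i + p_j). Below is a minimal Java sketch of the standard minorization-maximization fit (Hunter, 2004); the vague terms and win counts are invented for illustration and are not the paper's data or implementation.

    // Minimal Bradley-Terry fit via the standard MM updates.
    // wins[i][j] = number of times term i was judged vaguer than term j.
    public class BradleyTerry {
        public static void main(String[] args) {
            String[] terms = {"generally", "as needed", "may"};
            int[][] wins = {
                {0, 7, 9},   // "generally" beat "as needed" 7 times, "may" 9 times
                {3, 0, 6},
                {1, 4, 0}
            };
            int n = terms.length;
            double[] p = new double[n];
            java.util.Arrays.fill(p, 1.0);

            for (int iter = 0; iter < 100; iter++) {
                double[] next = new double[n];
                for (int i = 0; i < n; i++) {
                    double totalWins = 0, denom = 0;
                    for (int j = 0; j < n; j++) {
                        if (i == j) continue;
                        totalWins += wins[i][j];
                        double pairs = wins[i][j] + wins[j][i];
                        denom += pairs / (p[i] + p[j]);
                    }
                    next[i] = totalWins / denom;
                }
                // Normalize so the strengths sum to 1 (the scale is arbitrary).
                double sum = 0;
                for (double v : next) sum += v;
                for (int i = 0; i < n; i++) p[i] = next[i] / sum;
            }
            for (int i = 0; i < n; i++)
                System.out.printf("%-10s strength %.3f%n", terms[i], p[i]);
        }
    }

The fitted strengths induce the rank order of vagueness over the compared terms.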

2016-12-06
Hanan Hibshi, Travis Breaux, Maria Riaz, Laurie Williams.  2016.  A grounded analysis of experts’ decision-making during security assessments. Journal of Cybersecurity, Advance Access.

Security analysis requires specialized knowledge to align threats and vulnerabilities in information technology. To identify mitigations, analysts need to understand how threats, vulnerabilities, and mitigations are composed together to yield security requirements. Despite abundant guidance in the form of checklists and controls about how to secure systems, evidence suggests that security experts do not apply these checklists. Instead, they rely on their prior knowledge and experience to identify security vulnerabilities. To better understand the different effects of checklists, design analysis, and expertise, we conducted a series of interviews to capture and encode the decision-making process of security experts and novices during three security analysis exercises. Participants were asked to analyze three kinds of artifacts: source code, data flow diagrams, and network diagrams, for vulnerabilities, and then to apply a requirements checklist to demonstrate their ability to mitigate vulnerabilities. We framed our study using Situation Awareness, which is a theory about human perception that was used to elicit interviewee responses. The responses were then analyzed using coding theory and grounded analysis. Our results include decision-making patterns that characterize how analysts perceive, comprehend, and project future threats against a system, and how these patterns relate to selecting security mitigations. Based on this analysis, we discovered a new theory to measure how security experts and novices apply attack models and how structured and unstructured analysis enables increasing security requirements coverage. We highlight the role of expertise level and requirements composition in affecting security decision-making, and we discuss how our method produced new hypotheses about security analysis and decision-making.

Hamid Bagheri, Sam Malek.  2016.  Titanium: Efficient Analysis of Evolving Alloy Specifications. FSE 2016: ACM SIGSOFT International Symposium on the Foundations of Software Engineering.

The Alloy specification language, and the corresponding Alloy Analyzer, have received much attention in the last two decades with applications in many areas of software engineering. Increasingly, formal analyses enabled by Alloy are desired for use in an on-line mode, where the specifications are automatically kept in sync with the running, possibly changing, software system. However, given Alloy Analyzer’s reliance on computationally expensive SAT solvers, an important challenge is the time it takes for such analyses to execute at runtime. The fact that in an on-line mode, the analyses are often repeated on slightly revised versions of a given specification, presents us with an opportunity to tackle this challenge. We present Titanium, an extension of Alloy for formal analysis of evolving specifications. By leveraging the results from previous analyses, Titanium narrows the state space of the revised specification, thereby greatly reducing the required computational effort. We describe the semantic basis of Titanium in terms of models specified in relational logic. We show how the approach can be realized atop an existing relational logic model finder. Our experimental results show Titanium achieves a significant speed-up over Alloy Analyzer when applied to the analysis of evolving specifications.