Biblio
Phishing refers to attacks over the Internet that often proceed in the following manner. An unsolicited email is sent by the deceiver posing as a legitimate party, with the intent of getting the user to click on a link that leads to a fraudulent webpage. This webpage mimics the authentic one of a reputable organization and requests personal information such as passwords and credit card numbers from the user. If the phishing attack is successful, that personal information can then be used for various illegal activities by the perpetrator. The most reliable sign of a phishing website may be that its domain name is incorrect in the address bar. In recognition of this, all major web browsers now use domain highlighting, that is, the domain name is shown in bold font. Domain highlighting is based on the assumption that users will attend to the address bar and that they will be able to distinguish legitimate from illegitimate domain names. We previously found little evidence for the effectiveness of domain highlighting, even when participants were directed to look at the address bar, in a study with many participants conducted online through Mechanical Turk. The present study was conducted in a laboratory setting that allowed us to have better control over the viewing conditions and measure the parts of the display at which the users looked. We conducted a laboratory experiment to assess whether directing users to attend to the address bar and the use of domain highlighting assist them at detecting fraudulent webpages. An Eyelink 1000plus eye tracker was used to monitor participants’ gaze patterns throughout the experiment. 48 participants were recruited from an undergraduate subject pool; half had been phished previously and half had not. They were required to evaluate the trustworthiness of webpages (half authentic and half fraudulent) in two trial blocks. In the first block, participants were instructed to judge the webpage’s legitimacy by any information on the page. 
In the second block, they were directed specifically to look at the address bar. Whether or not the domain name was highlighted in the address bar was manipulated between subjects. Results confirmed that the participants could differentiate the legitimate and fraudulent webpages to a significant extent. Participants rarely looked at the address bar during the trial block in which they were not directed to the address bar. The percentage of time spent looking at the address bar increased significantly when the participants were directed to look at it. The number of fixations on the address bar also increased, with both measures indicating that more attention was allocated to the address bar when it was emphasized. When participants were directed to look at the address bar, correct decisions were improved slightly for fraudulent webpages (“unsafe”) but not for the authentic ones (“safe”). Domain highlighting had little influence even when participants were directed to look at the address bar, suggesting that participants do not rely on the domain name for their decisions about webpage legitimacy. Without general knowledge of how domain names are structured and specific knowledge of particular domain names, users cannot benefit from domain highlighting.
The ineffectiveness of phishing warnings has been attributed to users' poor comprehension of the warning. However, the effectiveness of a phishing warning is typically evaluated at the time when users interact with a suspected phishing webpage, which we call the effect with the phishing warning. Nevertheless, users' improved phishing detection when the warning is absent—or the effect of the warning—is the ultimate goal of preventing users from falling for phishing scams. We conducted an online study to evaluate the effect with and of several phishing warning variations, varying the point at which the warning was presented and whether procedural knowledge instruction was included in the warning interface. The current Chrome phishing warning was also included as a control. 360 Amazon Mechanical Turk workers made decisions about 10 login webpages (8 authentic, 2 fraudulent) with the aid of a warning (first phase). After a short distracting task, the workers made the same decisions about 10 different login webpages (8 authentic, 2 fraudulent) without warning. In phase one, the compliance rates with the two proposed warning interfaces (98% and 94%) were similar to that of the Chrome warning (98%), regardless of when the warning was presented. In phase two (without warning), performance was better for the condition in which the warning with procedural knowledge instruction was presented before the phishing webpage in phase one, suggesting a better effect of the warning than for the other conditions. With procedural knowledge of how to determine a webpage’s legitimacy, users identified phishing webpages more accurately even without the warning being presented.
Modern shared memory multiprocessors permit reordering of memory operations for performance reasons. These reorderings are often a source of subtle bugs in programs written for such architectures. Traditional approaches to verify weak memory programs often rely on interleaving semantics, which is prone to state space explosion, and thus severely limits the scalability of the analysis. In recent times, there has been a renewed interest in modelling dynamic executions of weak memory programs using partial orders. However, such an approach typically requires ad-hoc mechanisms to correctly capture the data and control-flow choices/conflicts present in real-world programs. In this work, we propose a novel, conflict-aware, composable, truly concurrent semantics for programs written using C/C++ for modern weak memory architectures. We exploit our symbolic semantics based on general event structures to build an efficient decision procedure that detects assertion violations in bounded multi-threaded programs. Using a large, representative set of benchmarks, we show that our conflict-aware semantics outperforms the state-of-the-art partial-order based approaches.
Over recent decades, with the fast growth of the World Wide Web, HTTP automated software applications (auto-ware) have bloomed for multiple purposes. Unfortunately, besides normal applications such as virus definition or operating system updates, auto-ware can also act as abnormal processes such as botnets, worms, viruses, spyware, and advertising software (adware). Auto-ware therefore consumes network bandwidth and can become an internal security threat, and the domains/servers that auto-ware accesses might themselves be malicious. Understanding the behaviour of HTTP auto-ware is beneficial for anomaly/malware detection, network management, traffic engineering, and security. In this paper, HTTP auto-ware communication behaviour is analysed and modelled, and a method for filtering out auto-ware domains/servers is proposed. The filtered results can serve as a good resource for other security purposes such as malicious domain/URL detection and filtering or investigation of HTTP malware arising from internal threats.
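One hallmark of automated HTTP clients is the regularity of their request timing. The sketch below is a minimal illustration of that idea, not the paper's actual model: it flags a request stream as likely auto-ware when the coefficient of variation of its inter-request intervals falls below a threshold (the function name and threshold are invented for illustration).

```python
from statistics import mean, pstdev

def is_likely_autoware(timestamps, cv_threshold=0.1):
    """Flag a request stream as likely automated when its inter-request
    intervals are highly regular (low coefficient of variation).
    `timestamps` is a sorted list of request times in seconds; the
    threshold here is an illustrative choice, not a value from the paper."""
    if len(timestamps) < 3:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(gaps)
    if m == 0:
        return True  # bursts at identical timestamps: clearly automated
    return pstdev(gaps) / m < cv_threshold

# A bot polling every 60 s versus a human browsing irregularly.
bot = [0, 60, 120, 180, 240]
human = [0, 5, 47, 160, 300]
```

Real detectors would combine such timing features with headers, payload sizes, and destination reputation, but the regularity signal alone already separates these two toy streams.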
When filling out privacy-related forms in public places such as hospitals or clinics, people usually are not aware that the sound of their handwriting leaks personal information. In this paper, we explore the possibility of eavesdropping on handwriting via nearby mobile devices based on audio signal processing and machine learning. By presenting a proof-of-concept system, WritingHacker, we show the usage of mobile devices to collect the sound of victims' handwriting and to extract handwriting-specific features for machine-learning-based analysis. WritingHacker focuses on the situation where the victim's handwriting follows a certain print style. An attacker can keep a mobile device, such as a common smartphone, touching the desk used by the victim to record the audio signals of handwriting. The system can then provide a word-level estimate of the content of the handwriting. To reduce the impact of varied writing habits and writing locations, the system utilizes letter clustering and dictionary filtering. Our prototype system's experimental results show that the accuracy of word recognition reaches around 50%-60% under certain conditions, which reveals the danger of privacy leakage through the sound of handwriting.
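The dictionary-filtering step can be pictured with a toy sketch: once letters have been grouped into acoustic clusters, only dictionary words whose letter-to-cluster sequence matches the observed sequence remain candidates. Everything below (the cluster assignments, the word list) is invented for illustration and is not taken from the paper.

```python
def filter_dictionary(cluster_seq, dictionary, letter_cluster):
    """Keep dictionary words whose letters map onto the observed
    acoustic-cluster sequence. `letter_cluster` maps each letter to a
    cluster id; both mappings here are hypothetical."""
    return [w for w in dictionary
            if len(w) == len(cluster_seq)
            and all(letter_cluster[c] == k for c, k in zip(w, cluster_seq))]

# Hypothetical clusters: suppose 'a' and 'o' sound alike when written.
letter_cluster = {"c": 1, "a": 0, "o": 0, "t": 3, "d": 4, "g": 5}
observed = [1, 0, 3]          # cluster sequence recovered from the audio
words = ["cat", "cot", "dog"]
```

Here "cat" and "cot" survive the filter while "dog" is eliminated, showing why clustering alone yields word-level (not letter-level) estimates.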
This paper describes a system for embodied conversational agents developed by Inmerssion and one of the applications—Young Merlin: Trial by Fire —built with this system. In the Merlin application, the ECA and a human interact with speech in virtual reality. The goal of this application is to provide engaging VR experiences that build rapport through storytelling and verbal interactions. The agent is fully automated, and his attitude towards the user changes over time depending on the interaction. The conversational system was built through a declarative approach that supports animations, markup language, and gesture recognition. Future versions of Merlin will implement multi-character dialogs, additional actions, and extended interaction time.
A self-managing software system should be able to monitor and analyze its runtime behavior and make adaptation decisions accordingly to meet certain desirable objectives. Traditional software adaptation techniques and recent “models@runtime” approaches usually require an a priori model for a system’s dynamic behavior. Oftentimes the model is difficult to define and labor-intensive to maintain, and tends to get out of date due to adaptation and architecture decay. We propose an alternative approach that does not require defining the system’s behavior model beforehand, but instead involves mining software component interactions from system execution traces to build a probabilistic usage model, which is in turn used to analyze, plan, and execute adaptations. In this article, we demonstrate how such an approach can be realized and effectively used to address a variety of adaptation concerns. In particular, we describe the details of one application of this approach for safely applying dynamic changes to a running software system without creating inconsistencies. We also provide an overview of two other applications of the approach, identifying potentially malicious (abnormal) behavior for self-protection, and improving deployment of software components in a distributed setting for performance self-optimization. Finally, we report on our experiments with engineering self-management features in an emergency deployment system using the proposed mining approach.
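The core of the mining step can be sketched as a first-order Markov model: count component-to-component handoffs in execution traces and normalize them into transition probabilities. This is a minimal sketch of the idea only; the article's probabilistic usage model is richer, and the component names below are invented.

```python
from collections import Counter, defaultdict

def mine_usage_model(traces):
    """Build a first-order Markov model of component interactions from
    execution traces (each trace is a sequence of component names)."""
    counts = defaultdict(Counter)
    for trace in traces:
        for src, dst in zip(trace, trace[1:]):
            counts[src][dst] += 1
    return {src: {dst: n / sum(c.values()) for dst, n in c.items()}
            for src, c in counts.items()}

# Three hypothetical traces through UI, Auth, DB, and Cache components.
traces = [["UI", "Auth", "DB"],
          ["UI", "Auth", "Cache"],
          ["UI", "Auth", "DB"]]
model = mine_usage_model(traces)
# Auth hands off to DB in two of three observed interactions.
```

An adaptation planner could consult such a model to, say, avoid swapping out a component mid-way through its most probable interaction sequence.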
Secure collaboration requires the collaborating parties to apply the right policies for their interaction. We adopt a notion of conditional, directed norms as a way to capture the standards of correctness for a collaboration. How can we handle conflicting norms? We describe an approach based on knowledge of what norm dominates what norm in what situation. Our approach adapts answer-set programming to compute stable sets of norms with respect to their computed conflicts and dominance. It assesses agent compliance with respect to those stable sets. We demonstrate our approach on a healthcare scenario.
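The dominance idea can be illustrated with a simplified procedural sketch: given pairwise conflicts and a dominance relation, drop every norm that is dominated by a norm it conflicts with. This is not the paper's answer-set-programming encoding, just a toy approximation; the norm names are invented.

```python
def stable_norms(norms, conflicts, dominates):
    """Retain norms not dominated by any conflicting norm.
    `conflicts` is a list of conflicting pairs; `dominates` is a set of
    (winner, loser) pairs. A simplified, non-ASP approximation."""
    dominated = set()
    for a, b in conflicts:
        if (a, b) in dominates:
            dominated.add(b)
        if (b, a) in dominates:
            dominated.add(a)
    return [n for n in norms if n not in dominated]

# Hypothetical healthcare scenario: in an emergency, the norm to share
# a patient record dominates the conflicting norm to withhold it.
norms = ["share_record", "withhold_record", "log_access"]
conflicts = [("share_record", "withhold_record")]
dominates = {("share_record", "withhold_record")}
```

The ASP formulation additionally handles conditional norms and cyclic conflicts, which this flat filter cannot express.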
In recent years, online programming and software engineering education via information technology has gained a lot of popularity. Typically, popular courses often have hundreds or thousands of students but only a few course staff members. Tool automation is needed to maintain the quality of education. In this paper, we envision that the capability of quantifying behavioral similarity between programs is helpful for teaching and learning programming and software engineering, and propose three metrics that approximate the computation of behavioral similarity. Specifically, we leverage random testing and dynamic symbolic execution (DSE) to generate test inputs, and run programs on these test inputs to compute metric values of the behavioral similarity. We evaluate our metrics on three real-world data sets from the Pex4Fun platform (which so far has accumulated more than 1.7 million game-play interactions). The results show that our metrics provide highly accurate approximation to the behavioral similarity. We also demonstrate a number of practical applications of our metrics including hint generation, progress indication, and automatic grading.
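The underlying idea can be sketched very simply: run two programs on a shared pool of test inputs and take the fraction of inputs on which their outputs agree. This is a crude stand-in for the paper's three metrics, with hand-picked inputs instead of random testing or DSE; the function names are invented.

```python
def behavioral_similarity(f, g, test_inputs):
    """Fraction of shared test inputs on which two programs agree."""
    agree = sum(1 for x in test_inputs if f(x) == g(x))
    return agree / len(test_inputs)

def reference(n):   # instructor's solution: absolute value
    return abs(n)

def attempt(n):     # student attempt that mishandles negative inputs
    return n
```

A score well below 1.0 could trigger hint generation, while the score itself serves as a progress indicator or a component of an automatic grade.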
The rising popularity of Android and the GUI-driven nature of its apps have motivated the need for applicable automated GUI testing techniques. Although exhaustive testing of all possible combinations is the ideal upper bound in combinatorial testing, it is often infeasible, due to the combinatorial explosion of test cases. This paper presents TrimDroid, a framework for GUI testing of Android apps that uses a novel strategy to generate tests in a combinatorial, yet scalable, fashion. It is backed with automated program analysis and formally rigorous test generation engines. TrimDroid relies on program analysis to extract formal specifications. These specifications express the app’s behavior (i.e., control flow between the various app screens) as well as the GUI elements and their dependencies. The dependencies among the GUI elements comprising the app are used to reduce the number of combinations with the help of a solver. Our experiments have corroborated TrimDroid’s ability to achieve coverage comparable to that possible under exhaustive GUI testing using significantly fewer test cases.
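The reduction can be pictured with a toy sketch: instead of taking the Cartesian product over all GUI elements, generate combinations only within groups of elements that the dependency analysis says interact. The grouping below is hypothetical and hand-supplied; TrimDroid derives it from program analysis and a solver.

```python
from itertools import product

def reduced_tests(domains, groups):
    """Generate value combinations only within groups of interdependent
    GUI elements, a simplified version of dependency-based reduction.
    `domains` maps each element to its values; `groups` partitions the
    elements into independent groups."""
    tests = []
    for group in groups:
        for combo in product(*(domains[w] for w in group)):
            tests.append(dict(zip(group, combo)))
    return tests

domains = {"wifi": [0, 1], "sort": ["asc", "desc"], "filter": ["all", "new"]}
# Hypothetical dependency analysis: only sort and filter interact.
tests = reduced_tests(domains, [["wifi"], ["sort", "filter"]])
# 2 + 4 = 6 tests instead of the exhaustive 2 * 2 * 2 = 8.
```

The savings grow multiplicatively with the number of independent groups, which is what makes the approach scale where exhaustive combination does not.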
Computer security problems often occur when there are disconnects between users’ understanding of their role in computer security and what is expected of them. To help users make good security decisions more easily, we need insights into the challenges they face in their daily computer usage. We built and deployed the Security Behavior Observatory (SBO) to collect data on user behavior and machine configurations from participants’ home computers. Combining SBO data with user interviews, this paper presents a qualitative study comparing users’ attitudes, behaviors, and understanding of computer security to the actual states of their computers. Qualitative inductive thematic analysis of the interviews produced “engagement” as the overarching theme, whereby participants with greater engagement in computer security and maintenance did not necessarily have more secure computer states. Thus, user engagement alone may not be predictive of computer security. We identify several other themes that inform future directions for better design and research into security interventions. Our findings emphasize the need for better understanding of how users’ computers get infected, so that we can more effectively design user-centered mitigations.
Risk homeostasis theory claims that individuals adjust their behaviors in response to changing variables to keep what they perceive as a constant accepted level of risk [8]. Risk homeostasis theory is used to explain why drivers may drive faster when wearing seatbelts. Here we explore whether risk homeostasis theory applies to end-user security behaviors. We use observed data from over 200 participants in a longitudinal in-situ study as well as survey data from 249 users to attempt to determine how user security behaviors and attitudes are affected by the presence or absence of antivirus software. If risk compensation is occurring, users might be expected to behave more dangerously in some ways when antivirus is present. Some of our preliminary data suggests that risk compensation may be occurring, but additional work with larger samples is needed.
Firewall policies are notorious for having misconfiguration errors which can defeat their intended purpose of protecting hosts in the network from malicious users. We believe this is because today's firewall policies are mostly monolithic. Inspired by ideas from modular programming and code refactoring, in this work we introduce three kinds of modules: primary, auxiliary, and template, which facilitate the refactoring of a firewall policy into smaller, reusable, comprehensible, and more manageable components. We present algorithms for generating each of the three modules for a given legacy firewall policy. We also develop ModFP, an automated tool for converting legacy firewall policies represented as access control lists to their modularized format. With the help of ModFP, when examining several real-world policies with sizes ranging from dozens to hundreds of rules, we were able to identify subtle errors.
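The modularization idea can be illustrated with a toy sketch that splits a flat ACL into per-destination modules. This is a drastic simplification invented here for illustration; ModFP's primary, auxiliary, and template modules are computed by the paper's algorithms, not by simple grouping.

```python
from collections import defaultdict

def modularize(rules):
    """Group a flat ACL into per-destination rule modules, a simplified
    stand-in for refactoring a monolithic policy into components."""
    modules = defaultdict(list)
    for rule in rules:
        modules[rule["dst"]].append(rule)
    return dict(modules)

# Hypothetical flat ACL for a small network.
acl = [
    {"dst": "web", "port": 80, "action": "allow"},
    {"dst": "db", "port": 3306, "action": "allow"},
    {"dst": "web", "port": 22, "action": "deny"},
]
```

Even this naive grouping makes it easier to spot, for example, that all rules guarding the database host fit on one screen, which is the comprehensibility benefit the paper pursues with far more principled module definitions.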
To interact effectively, agents must enter into commitments. What should an agent do when these commitments conflict? We describe Coco, an approach for reasoning about which specific commitments apply to specific parties in light of general types of commitments, specific circumstances, and dominance relations among specific commitments. Coco adapts answer-set programming to identify a maximal set of nondominated commitments. It provides a modeling language and tool geared to support practical applications.
Many aspects of our daily lives now rely on computers, including communications, transportation, government, finance, medicine, and education. However, with increased dependence comes increased vulnerability. Therefore recognizing attacks quickly is critical. In this paper, we introduce a new anomaly detection algorithm based on persistent homology, a tool which computes summary statistics of a manifold. The idea is to represent a cyber network with a dynamic point cloud and compare the statistics over time. The robustness of persistent homology makes for a very strong comparison invariant.
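The 0-dimensional part of the persistent-homology summary mentioned above has a concrete reading: every point starts as its own connected component, and components die at the distance scales where they merge (the edge weights of the Euclidean minimum spanning tree). The sketch below computes exactly these death times for a small point cloud; it is a minimal illustration of the summary statistic, not the paper's full pipeline, and covers only dimension 0.

```python
from itertools import combinations
import math

def zero_dim_deaths(points):
    """Death times of 0-dimensional persistence classes of a point
    cloud: the distances at which connected components merge, computed
    via Kruskal-style union-find over sorted pairwise distances."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted((math.dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(len(points)), 2))
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)
    return deaths

# Two tight clusters far apart: two small deaths inside the clusters,
# then one large death when the clusters finally merge.
cloud = [(0, 0), (0, 1), (10, 0), (10, 1)]
```

Tracking how this death profile shifts between snapshots of a network's point cloud is one way to realize the "compare the statistics over time" step.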
The Internet routing ecosystem is facing substantial scalability challenges on the data plane. Various “clean slate” architectures for representing forwarding tables (FIBs), such as IPv6, introduce additional constraints on efficient implementations from both lookup time and memory footprint perspectives due to significant classification width. In this work, we propose an abstraction layer able to represent IPv6 FIBs on existing IP and even MPLS infrastructure. Feasibility of the proposed representations is confirmed by an extensive simulation study on real IPv6 forwarding tables, including low-level experimental performance evaluation.
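For context on why classification width matters, the classic FIB lookup structure is a binary trie queried by longest-prefix match: its depth, and hence lookup time and memory, grows with the address width. The sketch below is a toy bit-string trie invented for illustration (the prefixes and next-hop names are made up), not the paper's proposed representation.

```python
class FibTrie:
    """Binary trie for longest-prefix-match forwarding lookups over
    bit-string prefixes; a textbook FIB baseline, not the paper's
    abstraction layer."""

    def __init__(self):
        self.root = {}

    def insert(self, prefix_bits, next_hop):
        node = self.root
        for b in prefix_bits:
            node = node.setdefault(b, {})
        node["nh"] = next_hop

    def lookup(self, addr_bits):
        node, best = self.root, None
        for b in addr_bits:
            if "nh" in node:
                best = node["nh"]      # remember longest match so far
            if b not in node:
                break
            node = node[b]
        else:
            if "nh" in node:
                best = node["nh"]
        return best

fib = FibTrie()
fib.insert("0", "gw-default")      # short catch-all prefix (illustrative)
fib.insert("0101", "gw-specific")  # a more specific prefix
```

With 128-bit IPv6 addresses such a trie can be four times deeper than for IPv4, which is the width problem motivating representations that reuse existing IP and MPLS lookup machinery.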