Biblio

Filters: Keyword is Warning of Phishing Attacks, Supporting Human Information Processing, Identifying Phishing Deception Indicators, and Reducing Vulnerability  [Clear All Filters]
2017-01-05
Aiping Xiong, Robert W. Proctor, Ninghui Li, Weining Yang.  2016.  Use of Warnings for Instructing Users How to Detect Phishing Webpages. 46th Annual Meeting of the Society for Computers in Psychology.

The ineffectiveness of phishing warnings has been attributed to users' poor comprehension of the warning. However, the effectiveness of a phishing warning is typically evaluated at the time when users interact with a suspected phishing webpage, which we call the effect with the phishing warning. Nevertheless, users' improved phishing detection when the warning is absent—or the effect of the warning—is the ultimate goal to prevent users from falling for phishing scams. We conducted an online study to evaluate the effect with and of several phishing warning variations, varying the point at which the warning was presented and whether procedural knowledge instruction was included in the warning interface. The current Chrome phishing warning was also included as a control. 360 Amazon Mechanical Turk workers made decisions about 10 login webpages (8 authentic, 2 fraudulent) with the aid of a warning (first phase). After a short distracting task, the workers made the same decisions about 10 different login webpages (8 authentic, 2 fraudulent) without a warning. In phase one, the compliance rates with two proposed warning interfaces (98% and 94%) were similar to that of the Chrome warning (98%), regardless of when the warning was presented. In phase two (without warning), performance was better for the condition in which the warning with procedural knowledge instruction was presented before the phishing webpage in phase one, suggesting a stronger effect of the warning than for the other conditions. With procedural knowledge of how to determine a webpage's legitimacy, users identified phishing webpages more accurately even without the warning being presented.

2016-10-07
Pearson, C. J., Welk, A. K., Mayhorn, C. B..  2016.  In Automation We Trust? Identifying Varying Levels of Trust in Human and Automated Information Sources. Human Factors and Ergonomics Society. :201-205.

Humans can easily find themselves in high-cost situations where they must choose between suggestions made by an automated decision aid and a conflicting human decision aid. Previous research indicates that trust is an antecedent to reliance, and often influences how individuals prioritize and integrate information presented from a human and/or automated information source. Expanding on previous work conducted by Lyons and Stokes (2012), the current experiment measured how trust in automated or human decision aids differs along with perceived risk and workload. The simulated task required 126 participants to choose the safest route for a military convoy; they were presented with conflicting information regarding which route was safest from an automated tool and a human. Results demonstrated that as workload increased, trust in automation decreased. As the perceived risk increased, trust in the human decision aid increased. Individual differences in dispositional trust correlated with an increased trust in both decision aids. These findings can be used to inform training programs and systems for operators who may receive information from human and automated sources. Examples of this context include air traffic control, aviation, and signals intelligence.


2016-07-01
Pearson, Carl J., Welk, Allaire K., Boettcher, William A., Mayer, Roger C., Streck, Sean, Simons-Rudolph, Joseph M., Mayhorn, Christopher B..  2016.  Differences in Trust Between Human and Automated Decision Aids. Proceedings of the Symposium and Bootcamp on the Science of Security. :95–98.

Humans can easily find themselves in high-cost situations where they must choose between suggestions made by an automated decision aid and a conflicting human decision aid. Previous research indicates that humans often rely on automation or other humans, but not both simultaneously. Expanding on previous work conducted by Lyons and Stokes (2012), the current experiment measures how trust in automated or human decision aids differs along with perceived risk and workload. The simulated task required 126 participants to choose the safest route for a military convoy; they were presented with conflicting information from an automated tool and a human. Results demonstrated that as workload increased, trust in automation decreased. As the perceived risk increased, trust in the human decision aid increased. Individual differences in dispositional trust correlated with an increased trust in both decision aids. These findings can be used to inform training programs for operators who may receive information from human and automated sources. Examples of this context include air traffic control, aviation, and signals intelligence.

Zielinska, Olga, Welk, Allaire, Mayhorn, Christopher B., Murphy-Hill, Emerson.  2016.  The Persuasive Phish: Examining the Social Psychological Principles Hidden in Phishing Emails. Proceedings of the Symposium and Bootcamp on the Science of Security. :126–126.

Phishing is a social engineering tactic used to trick people into revealing personal information [Zielinska, Tembe, Hong, Ge, Murphy-Hill, & Mayhorn 2014]. As phishing emails continue to infiltrate users' mailboxes, what social engineering techniques are the phishers using to successfully persuade victims into releasing sensitive information?

Cialdini's [2007] six principles of persuasion (authority, social proof, liking/similarity, commitment/consistency, scarcity, and reciprocation) have been linked to elements of phishing emails [Akbar 2014; Ferreira & Lenzini 2015]; however, the findings have been conflicting. Authority and scarcity were found to be the most common persuasion principles in 207 emails obtained from a Netherlands database [Akbar 2014], while liking/similarity was the most common principle in 52 personal emails available in Luxembourg and England [Ferreira et al. 2015]. The purpose of this study was to examine the persuasion principles present in emails available in the United States over a period of five years.

Two reviewers assessed eight hundred eighty-seven phishing emails from Arizona State University, Brown University, and Cornell University for Cialdini's six principles of persuasion. Each email was evaluated using a questionnaire adapted from the Ferreira et al. [2015] study. There was an average agreement of 87% per item between the two raters.
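The per-item agreement figure above is a simple proportion of matching codes between the two raters. As a minimal sketch, assuming binary present/absent codes (the studies' actual coded dataset is not public, so the ratings below are invented for illustration):

```python
# Sketch of per-item percent agreement between two raters, as in the
# 87% average reported above. The ratings are invented toy data.

def percent_agreement(rater_a, rater_b):
    """Share of emails on which both raters gave the same code for one item."""
    same = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return same / len(rater_a)

# One hypothetical questionnaire item (1 = principle coded as present)
# applied to ten emails by each rater:
rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(percent_agreement(rater_a, rater_b))  # 0.8 for this toy item
```

In the study this proportion would be computed per questionnaire item and then averaged across items to yield the reported 87%.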

Spearman's rho correlations were used to compare email characteristics over time. During the five-year period under consideration (2010–2015), the persuasion principles of commitment/consistency and scarcity increased over time, while the principles of reciprocation and social proof decreased over time. Authority and liking/similarity revealed mixed results, with certain characteristics increasing and others decreasing.

The commitment/consistency principle could be seen in the increase of emails referring to elements outside the email to look more reliable, such as Google Docs or Adobe Reader (rs(850) = .12, p =.001), while the scarcity principle could be seen in urgent elements that could encourage users to act quickly and may have had success in eliciting a response from users (rs(850) = .09, p =.01). Reciprocation elements, such as a requested reply, decreased over time (rs(850) = -.12, p =.001). Additionally, the social proof principle present in emails by referring to actions performed by other users also decreased (rs(850) = -.10, p =.01).

Two persuasion principles exhibited both an increase and decrease in their presence in emails over time: authority and liking/similarity. These principles could increase phishing rate success if used appropriately, but could also raise suspicions in users and decrease compliance if used incorrectly. Specifically, the source of the email, which corresponds to the authority principle, displayed an increase over time in educational institutes (rs(850) = .21, p <.001), but a decrease in financial institutions (rs(850) = -.18, p <.001). Similarly, the liking/similarity principle revealed an increase over time of logos present in emails (rs(850) = .18, p <.001) and decrease in service details, such as payment information (rs(850) = -.16, p <.001).
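The rho values reported above are rank correlations between year and a coded email characteristic. A dependency-free sketch of the statistic, using invented data (the coded email set itself is not public): rank both variables, then take the Pearson correlation of the ranks.

```python
# Sketch of the temporal analysis: Spearman's rho between email year and
# a binary "principle present" code. All data below are invented.

def rank(values):
    """Average ranks (1-based); tied values share the mean of their positions."""
    indexed = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(indexed):
        j = i
        while j + 1 < len(indexed) and values[indexed[j + 1]] == values[indexed[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of tied positions, 1-based
        for k in range(i, j + 1):
            ranks[indexed[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Toy coding: year of each email vs. whether a scarcity cue was present.
years   = [2010, 2010, 2011, 2012, 2013, 2013, 2014, 2015, 2015, 2015]
present = [0,    0,    0,    1,    0,    1,    1,    1,    1,    1]
print(round(spearman_rho(years, present), 2))  # prints 0.76
```

A positive rho, as here, corresponds to the "increased over time" trends reported above; the study additionally reports significance tests, which this sketch omits.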

The results from this study offer a different perspective regarding phishing. Previous research has focused on the user aspect; few studies have examined the phisher's perspective and the social psychological techniques phishers are implementing, and none have yet examined the success of those techniques. Results from this study can be used to help predict future trends and inform training programs, as well as machine learning programs used to identify phishing messages.

2016-04-10
Olga A. Zielinska, Allaire K. Welk, Emerson Murphy-Hill, Christopher B. Mayhorn.  2016.  A temporal analysis of persuasion principles in phishing emails. Human Factors and Ergonomics Society 60th Annual Meeting.

Eight hundred eighty-seven phishing emails from Arizona State University, Brown University, and Cornell University were assessed by two reviewers for Cialdini’s six principles of persuasion: authority, social proof, liking/similarity, commitment/consistency, scarcity, and reciprocation. A correlational analysis of email characteristics by year revealed that the persuasion principles of commitment/consistency and scarcity have increased over time, while the principles of reciprocation and social proof have decreased over time. Authority and liking/similarity revealed mixed results with certain characteristics increasing and others decreasing. Results from this study can inform user training of phishing emails and help cybersecurity software to become more effective. 

2015-04-02
Allaire K. Welk, Christopher B. Mayhorn.  2015.  All Signals Go: Investigating How Individual Differences Affect Performance on a Medical Diagnosis Task Designed to Parallel a Signal Intelligence Analyst Task. Symposium and Bootcamp on the Science of Security (HotSoS).

Signals intelligence analysts play a critical role in the United States government by providing information regarding potential national security threats to government leaders. Analysts perform complex decision-making tasks that involve gathering, sorting, and analyzing information. The current study evaluated how individual differences and training influence performance on an Internet search-based medical diagnosis task designed to simulate a signals analyst task. The implemented training emphasized the extraction and organization of relevant information and deductive reasoning. The individual differences of interest included working memory capacity and previous experience with elements of the task, specifically health literacy, prior experience using the Internet, and prior experience conducting Internet searches. Preliminary results indicated that the implemented training did not significantly affect performance; however, working memory significantly predicted performance on the implemented task. These results support previous research and provide additional evidence that working memory capacity influences performance on cognitively complex decision-making tasks, whereas experience with elements of the task may not. These findings suggest that working memory capacity should be considered when screening individuals for signals intelligence positions. Future research should aim to generalize these findings within a broader sample, and ideally utilize a task that directly replicates those performed by signals analysts.

Olga Zielinska, Allaire Welk, Christopher B. Mayhorn, Emerson Murphy-Hill.  2015.  Exploring expert and novice mental models of phishing. HotSoS: Symposium and Bootcamp on the Science of Security.

Experience influences actions people take in protecting themselves against phishing. One way to measure experience is through mental models. Mental models are internal representations of a concept or system that develop with experience. By rating pairs of concepts on the strength of their relationship, networks can be created through Pathfinder, showing an in-depth analysis of how information is organized. Researchers had novice and expert computer users rate three sets of terms related to phishing. The terms were divided into three categories: prevention of phishing, trends and characteristics of phishing attacks, and the consequences of phishing. Results indicated that expert mental models were more complex with more links between concepts. Specifically, experts had sixteen, thirteen, and fifteen links in the networks describing the prevention, trends, and consequences of phishing, respectively; however, novices only had eleven, nine, and nine links in the networks describing prevention, trends, and consequences of phishing, respectively. These preliminary results provide quantifiable network displays of mental models of novices and experts that cannot be seen through interviews. This information could provide a basis for future research on how mental models could be used to determine phishing vulnerability and the effectiveness of phishing training.
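The link counts reported above come from Pathfinder's pruning of a fully rated concept network. As a minimal sketch of the idea for the common parameter choice r = infinity, q = n − 1 (the dissimilarity ratings below are invented; the studies' rating data are not public): a direct link between two concepts survives only if no indirect path connects them with a smaller maximum edge weight.

```python
# Sketch of Pathfinder network pruning, PFNET(r=inf, q=n-1): compute
# minimax path costs with a Floyd-Warshall variant (a path's cost is its
# largest edge weight), then keep only edges that are themselves minimax
# paths between their endpoints. Toy dissimilarity ratings are invented.

INF = float("inf")

def pfnet(weights):
    """Return the set of surviving links (i, j), i < j.

    weights: symmetric n x n matrix of dissimilarities; INF = no rating.
    """
    n = len(weights)
    d = [row[:] for row in weights]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], max(d[i][k], d[k][j]))
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if weights[i][j] < INF and weights[i][j] <= d[i][j]}

# Toy 4-concept network: the direct 0-2 link (weight 5) is pruned because
# the indirect path 0-1-2 never exceeds weight 2.
w = [[INF, 1,   5,   INF],
     [1,   INF, 2,   4],
     [5,   2,   INF, INF],
     [INF, 4,   INF, INF]]
print(sorted(pfnet(w)))  # [(0, 1), (1, 2), (1, 3)]
```

Counting the surviving links per network, as in this sketch, yields the expert-versus-novice comparisons (e.g. sixteen versus eleven links for the prevention networks) reported above.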