A Human Information-Processing Analysis of Online Deception Detection - October 2016
Public Audience
Purpose: To highlight project progress. Information is presented at a high level accessible to the interested public. All information contained in the report (regions 1-3) is a Government Deliverable/CDRL.
PI(s): Robert W. Proctor, Ninghui Li
Researchers: Jing Chen; Weining Yang; Aiping Xiong; Wanling Zou
HARD PROBLEM(S) ADDRESSED
- Human Behavior - Predicting individual users’ judgments and decisions regarding possible online deception. Our research addresses this problem within the context of examining user decisions about phishing attacks. This work is grounded in the scientific literature on human decision-making processes.
PUBLICATIONS
- Xiong, A., Yang, W., Li, N., & Proctor, R. W. (2016). Ineffectiveness of domain highlighting as a tool to help users identify phishing webpages. Talk presented at the 60th Annual Meeting of the Human Factors and Ergonomics Society, Washington, DC, September. (Nominated for the HFES 2016 Marc Resnick Best Paper Competition)
- Chen, J., Xiong, A., Li, N., & Proctor, R. W. (2016). The description-experience gap in the effect of warning reliability on user trust, reliance, and performance in a phishing context. Talk presented at the 7th International Conference on Applied Human Factors and Ergonomics (AHFE), Orlando, FL, July.
- Xiong, A., Li, N., Zou, W., & Proctor, R. W. (2016). Tracking users’ fixations when evaluating the validity of a web site. Talk presented at the 7th International Conference on Applied Human Factors and Ergonomics (AHFE), Orlando, FL, July.
ACCOMPLISHMENT HIGHLIGHTS
- We completed an online study that evaluated a method in which training to identify phishing webpages is embedded within a phishing warning. In the study, participants first made decisions about authentic and fraudulent webpages with the aid of a warning. They made similar decisions without the warning aid after a short distracting task and again a week later. Although participants’ performance was similar for all interfaces in the first phase, the training-embedded designs provided better protection than the current Chrome phishing warning on both subsequent tests. Our findings suggest that embedded training is a complementary strategy that compensates for the lack of long-term benefit of the current phishing warning.
- In a second experiment, a phishing email identification task (with an automated phishing-detection assistant) was used as a testbed to study human trust in automation in the cyber domain. Factors investigated included “description” (i.e., whether the user was informed of the actual reliability of the automated system) and “experience” (i.e., whether the user received feedback on their choices), in addition to the reliability level of the automated phishing detection system. Higher automation reliability increased both the overall quality of users’ decisions and their self-reported trust. However, participants underestimated the system’s reliability even when it was described to them. Description affected self-reported trust, whereas feedback affected perceived automation reliability.