Biblio

Filters: Author is Aiping Xiong
2017-04-08
Aiping Xiong, Robert W. Proctor, Ninghui Li, Weining Yang.  2017.  Is domain highlighting actually helpful in identifying phishing webpages? Human Factors: The Journal of the Human Factors and Ergonomics Society.

Objective: To evaluate the effectiveness of domain highlighting in helping users identify whether webpages are legitimate or spurious.

Background: As a component of the URL, a domain name can be overlooked. Consequently, browsers highlight the domain name to help users identify which website they are visiting. Nevertheless, few studies have assessed the effectiveness of domain highlighting, and the only formal study confounded highlighting with instructions to look at the address bar. 

Method: Two phishing detection experiments were conducted. Experiment 1 was run online: Participants judged the legitimacy of webpages in two phases. In phase one, participants were to judge the legitimacy based on any information on the webpage, whereas in phase two they were to focus on the address bar. Whether the domain was highlighted was also varied. Experiment 2 was conducted similarly but with participants in a laboratory setting, which allowed tracking of fixations.

Results: Participants differentiated the legitimate and fraudulent webpages better than chance. There was some benefit of attending to the address bar, but domain highlighting did not provide effective protection against phishing attacks. Analysis of eye-gaze fixation measures was in agreement with the task performance, but heat-map results revealed that participants’ visual attention was attracted by the highlighted domains.

Conclusion: Failure to detect many fraudulent webpages even when the domain was highlighted implies that users lacked knowledge of webpage security cues or how to use those cues.

Application: Potential applications include development of phishing-prevention training incorporating domain highlighting with other methods to help users identify phishing webpages. 

2017-04-01
Weining Yang, Aiping Xiong, Jing Chen, Robert W. Proctor, Ninghui Li.  2017.  Use of Phishing Training to Improve Security Warning Compliance: Evidence from a Field Experiment.

The current approach to protecting users from phishing attacks is to display a warning when a webpage is considered suspicious. We hypothesize that users are capable of making correct informed decisions when the warning also conveys the reasons why it is displayed. We chose to use traffic rankings of domains, which can be easily described to users, as a warning trigger, and we evaluated the effects of the phishing warning message and phishing training in a field experiment. We found that knowledge gained from the training enhances the effectiveness of phishing warnings, reducing the number of participants who were phished. However, the knowledge by itself was not sufficient to provide phishing protection. We suggest that integrating training into the warning interface, involving traffic ranking in phishing detection, and explaining why warnings are generated will improve current phishing defenses.
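A traffic-rank warning trigger of the kind described above can be sketched roughly as follows. The ranking table, cutoff value, and `should_warn` helper are illustrative assumptions for exposition, not the authors' actual implementation:

```python
# Hypothetical sketch: warn when a page's domain is unranked or falls
# below a popularity cutoff in a traffic-ranking table. All names and
# values here are illustrative assumptions.
from urllib.parse import urlparse

# Toy stand-in for a traffic-ranking service (e.g. a top-sites list).
TRAFFIC_RANK = {"google.com": 1, "paypal.com": 120, "example.com": 50_000}
RANK_CUTOFF = 100_000  # domains ranked worse than this trigger a warning

def should_warn(url: str) -> bool:
    """Return True if the page's domain is unranked or poorly ranked."""
    host = urlparse(url).netloc.lower().split(":")[0]  # drop any port
    # Reduce to the registrable domain (naive two-label heuristic).
    domain = ".".join(host.split(".")[-2:])
    rank = TRAFFIC_RANK.get(domain)
    return rank is None or rank > RANK_CUTOFF
```

The appeal of this trigger, as the abstract notes, is that it is easy to explain to users: the warning can say the site is rarely visited, which is a reason a user can evaluate.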

2017-01-05
Aiping Xiong, Robert W. Proctor, Ninghui Li, Weining Yang.  2016.  Use of Warnings for Instructing Users How to Detect Phishing Webpages. 46th Annual Meeting of the Society for Computers in Psychology.

The ineffectiveness of phishing warnings has been attributed to users' poor comprehension of the warning. However, the effectiveness of a phishing warning is typically evaluated at the time when users interact with a suspected phishing webpage, which we call the effect with the phishing warning. Nevertheless, users' improved phishing detection when the warning is absent, or the effect of the warning, is the ultimate goal of preventing users from falling for phishing scams. We conducted an online study to evaluate the effect with and of several phishing warning variations, varying the point at which the warning was presented and whether procedural knowledge instruction was included in the warning interface. The current Chrome phishing warning was also included as a control. 360 Amazon Mechanical Turk workers made decisions about 10 login webpages (8 authentic, 2 fraudulent) with the aid of a warning (first phase). After a short distracting task, the workers made the same decisions about 10 different login webpages (8 authentic, 2 fraudulent) without a warning. In phase one, the compliance rates with the two proposed warning interfaces (98% and 94%) were similar to that of the Chrome warning (98%), regardless of when the warning was presented. In phase two (without warning), performance was better for the condition in which the warning with procedural knowledge instruction was presented before the phishing webpage in phase one, suggesting a better effect of the warning than for the other conditions. With procedural knowledge of how to determine a webpage’s legitimacy, users identified phishing webpages more accurately even without the warning being presented.

2016-10-06
Aiping Xiong, Robert Proctor, Wanling Zou, Ninghui Li.  2016.  Tracking users’ fixations when evaluating the validity of a web site.

Phishing refers to attacks over the Internet that often proceed in the following manner. An unsolicited email is sent by a deceiver posing as a legitimate party, with the intent of getting the user to click on a link that leads to a fraudulent webpage. This webpage mimics the authentic one of a reputable organization and requests personal information such as passwords and credit card numbers from the user. If the phishing attack is successful, that personal information can then be used for various illegal activities by the perpetrator. The most reliable sign of a phishing website may be that its domain name is incorrect in the address bar. In recognition of this, all major web browsers now use domain highlighting; that is, the domain name is shown in bold font. Domain highlighting is based on the assumption that users will attend to the address bar and that they will be able to distinguish legitimate from illegitimate domain names. In a previous study with many participants, conducted online through Mechanical Turk, we found little evidence for the effectiveness of domain highlighting, even when participants were directed to look at the address bar. The present study was conducted in a laboratory setting, which allowed us better control over the viewing conditions and let us measure the parts of the display at which users looked, to assess whether directing users to attend to the address bar and the use of domain highlighting assist them in detecting fraudulent webpages. An EyeLink 1000 Plus eye tracker was used to monitor participants’ gaze patterns throughout the experiment. Forty-eight participants were recruited from an undergraduate subject pool; half had been phished previously and half had not. They were required to evaluate the trustworthiness of webpages (half authentic and half fraudulent) in two trial blocks. In the first block, participants were instructed to judge the webpage’s legitimacy by any information on the page.
In the second block, they were directed specifically to look at the address bar. Whether or not the domain name was highlighted in the address bar was manipulated between subjects. Results confirmed that the participants could differentiate the legitimate and fraudulent webpages to a significant extent. Participants rarely looked at the address bar during the trial block in which they were not directed to the address bar. The percentage of time spent looking at the address bar increased significantly when the participants were directed to look at it. The number of fixations on the address bar also increased, with both measures indicating that more attention was allocated to the address bar when it was emphasized. When participants were directed to look at the address bar, correct decisions were improved slightly for fraudulent webpages (“unsafe”) but not for the authentic ones (“safe”). Domain highlighting had little influence even when participants were directed to look at the address bar, suggesting that participants do not rely on the domain name for their decisions about webpage legitimacy. Without the general knowledge of domain names and specific knowledge about particular domain names, domain highlighting will not be effective.
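Mechanically, domain highlighting is simple, which makes its limited benefit all the more notable: the browser isolates the registrable domain of the URL and renders it distinctly. A minimal sketch of the idea, under the simplifying assumption of a naive two-label domain heuristic (real browsers consult the Public Suffix List), using `**…**` to stand in for bold rendering:

```python
# Illustrative sketch of domain highlighting: mark the registrable
# domain of a URL distinctly. The two-label heuristic is a simplifying
# assumption; browsers use the Public Suffix List instead.
from urllib.parse import urlparse

def highlight_domain(url: str) -> str:
    host = urlparse(url).netloc
    labels = host.split(".")
    domain = ".".join(labels[-2:])            # naive registrable domain
    prefix = host[: len(host) - len(domain)]  # subdomain labels, if any
    return url.replace(host, f"{prefix}**{domain}**", 1)
```

Applied to a deceptive URL such as `http://accounts.google.com.evil.example/login`, the sketch emphasizes `evil.example`, not `google.com`, which is exactly the cue the highlighting is meant to surface, and exactly the cue the study finds users fail to exploit.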

Jing Chen, Aiping Xiong, Ninghui Li, Robert Proctor.  2016.  The description-experience gap in the effect of warning reliability on user trust, reliance, and performance in a phishing context.

Automation reliability is an important factor that may affect human trust in automation, which has been shown to strongly influence how the human operator interacts with the automated system. If the trust level is too low, the human operator may not utilize the automated system as expected; if the trust level is too high, over-trust may lead to automation biases. In either case, the overall system performance will be undermined; after all, the ultimate goal of human-automation collaboration is to improve performance beyond what would be achieved with either alone. Most past research has manipulated automation reliability through “experience”: participants perform a task with an automated system that has a certain level of reliability (e.g., an automated warning system providing valid warnings 75% of the time). During or after the task, participants’ trust and reliance on the automated system are measured, as well as their performance. However, research has shown that participants’ perceived reliability usually differs from the actual reliability. In a real-world situation, it is very likely that the exact reliability can be described to the human operator (i.e., through “description”). A description-experience gap has been found robustly in human decision-making studies, according to which there are systematic differences between decisions made from description and decisions made from experience. The current study examines a possible description-experience gap in the effect of automation reliability on human trust, reliance, and performance in the context of phishing. Specifically, the research investigates how the reliability of phishing warnings influences people's decisions about whether to proceed upon receiving a warning. The effect of the reliability of an automated phishing warning system is manipulated through experience with the system or through description of it.
These two types of manipulations are directly compared, and the measures of interest are human trust in the warning (a subjective rating of how trustworthy the warning system is), human reliance on the automated system (an objective measure of whether participants comply with the system’s warnings), and performance (the overall quality of the decisions made).

Aiping Xiong, Weining Yang, Ninghui Li, Robert Proctor.  2016.  Ineffectiveness of domain highlighting as a tool to help users identify phishing webpages. 60th Annual Meeting of Human Factors and Ergonomics Society.

Domain highlighting has been implemented by popular browsers with the aim of helping users identify which sites they are visiting. But its effectiveness in helping users identify fraudulent webpages has not been stringently tested. Thus, we conducted an online study to test the effectiveness of domain highlighting. 320 participants were recruited to evaluate the legitimacy of six webpages (half authentic and half fraudulent) in two study phases. In the first phase, participants were instructed to determine the legitimacy based on any information on the webpage, whereas in the second phase they were instructed to focus specifically on the address bar. Webpages with domain highlighting were presented in the first phase for half of the participants and in the second phase for the remaining participants. Results showed that the participants could differentiate the legitimate and fraudulent webpages to a significant extent. When participants were directed to focus on the address bar, correct decisions increased for fraudulent webpages (unsafe) but did not change significantly for the authentic webpages (safe). The percentage of correct judgments for fraudulent webpages showed no significant difference between the domain-highlighting and non-highlighting conditions, even when participants were directed to the address bar. Although the results showed some benefit of directing the user's attention to the address bar for detecting fraudulent webpages, domain highlighting itself did not provide effective protection against phishing attacks, suggesting that other measures need to be taken for successful detection of deception.

2015-12-16
Aiping Xiong, Weining Yang, Ninghui Li, Robert W. Proctor.  2015.  Directing Users’ Attention to Combat Phishing Attacks. Society for Computers in Psychology.

We conducted an MTurk online user study to assess whether directing users to attend to the address bar and the use of domain highlighting lead to better performance at detecting fraudulent webpages. 320 participants were recruited to evaluate the trustworthiness of webpages (half authentic and half fraudulent) in two blocks. In the first block, participants were instructed to judge the webpage’s legitimacy by any information on the page. In the second block, they were directed specifically to look at the address bar. Webpages with domain highlighting were presented in the first block for half of the participants and in the second block for the remaining participants. Results showed that the participants could differentiate the legitimate and fraudulent webpages to a significant extent. When participants were directed to look at the address bar, correct decisions increased for fraudulent webpages (“unsafe”) but did not change for authentic webpages (“safe”). The percentage of correct judgments showed no influence of domain highlighting, regardless of whether decisions were based on any information on the webpage or participants were directed to the address bar. The results suggest that directing users’ attention to the address bar slightly helps users detect phishing webpages, whereas domain highlighting provides almost no additional protection.