Bibliography
The current approach to protecting users from phishing attacks is to display a warning when a webpage is considered suspicious. We hypothesize that users are capable of making correct, informed decisions when the warning also conveys the reasons why it is displayed. We chose traffic rankings of domains, which can be easily described to users, as a warning trigger, and we evaluated the effects of the phishing warning message and of phishing training. The evaluation was conducted in a field experiment. We found that knowledge gained from the training enhances the effectiveness of phishing warnings, reducing the number of participants who were phished. However, this knowledge alone was not sufficient to protect against phishing. We suggest that integrating training into the warning interface, incorporating traffic rankings into phishing detection, and explaining why warnings are generated will improve current phishing defenses.
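As a concrete illustration of a ranking-based trigger of the kind described (not the authors' implementation), the following Python sketch flags domains that fall outside a popularity threshold and explains why. The domain list, the threshold, and the message wording are all assumptions for illustration.

```python
# Minimal sketch of a traffic-ranking-based warning trigger.
# TOP_RANKED_DOMAINS and RANK_THRESHOLD are hypothetical; a real system
# would load a full public traffic-ranking list.

TOP_RANKED_DOMAINS = {
    "google.com": 1,
    "youtube.com": 2,
    "paypal.com": 58,
    # ... remainder of the ranking list would go here
}

RANK_THRESHOLD = 100_000  # domains ranked below this are treated as suspicious

def warning_for(domain: str) -> str | None:
    """Return an explanatory warning message, or None if no warning fires."""
    rank = TOP_RANKED_DOMAINS.get(domain)
    if rank is not None and rank <= RANK_THRESHOLD:
        return None  # well-known, highly trafficked domain: no warning
    return (
        f"Warning: '{domain}' is not among the {RANK_THRESHOLD:,} most visited "
        "websites. Little-known domains are more often used for phishing. "
        "Proceed only if you are sure this is the site you intended to visit."
    )

print(warning_for("paypal.com"))        # None: highly ranked
print(warning_for("paypa1-login.com"))  # explanatory warning text
```

Because the trigger is a single, describable rule (site popularity), the warning can state its reason in plain language, which is the property the study exploits.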
Many previous studies have shown that trust in automation mediates the effectiveness of automation in maintaining performance, and one critical factor that affects trust is the reliability of the automated system. In the cyber domain, automated systems are pervasive, yet the role of human trust has not been studied as extensively as in other domains such as transportation.
In the current study, we used a phishing email identification task (with an automated phishing detection assistant) as a testbed to study human trust in automation in the cyber domain. More specifically, we systematically investigated the influence of “description” (i.e., whether the user was informed of the actual reliability of the automated system) and “experience” (i.e., whether the user was given feedback on their choices), in addition to the reliability level of the automated phishing detection system. These factors were varied across conditions of response bias (false alarms vs. misses) and task difficulty (easy vs. difficult), which a pilot study suggested may be critical. Measures of user performance and trust were compared across conditions. The measures of interest were human trust in the warning (a subjective rating of how trustworthy the warning system is), human reliance on the automated system (an objective measure of whether participants comply with the system’s warnings), and performance (the overall quality of the decisions made).
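The sketch below shows how the two objective measures, reliance and performance, might be computed from per-trial logs, and how the automation's errors split into false alarms and misses; this is an assumed log format for illustration, not the study's analysis code. Trust, being a subjective rating, would come from a questionnaire rather than from the logs.

```python
# Illustrative computation of reliance and performance from trial logs.
# The Trial fields are assumptions about what such a study would record.

from dataclasses import dataclass

@dataclass
class Trial:
    is_phishing: bool     # ground truth for the email
    warned: bool          # did the automated system flag it?
    user_flagged: bool    # did the participant classify it as phishing?

def reliance(trials: list[Trial]) -> float:
    """Compliance rate: fraction of warned trials where the user followed the warning."""
    warned = [t for t in trials if t.warned]
    return sum(t.user_flagged for t in warned) / len(warned)

def performance(trials: list[Trial]) -> float:
    """Overall decision quality: fraction of correct classifications."""
    return sum(t.user_flagged == t.is_phishing for t in trials) / len(trials)

def system_error_profile(trials: list[Trial]) -> dict[str, int]:
    """Split the automation's errors into false alarms and misses."""
    false_alarms = sum(t.warned and not t.is_phishing for t in trials)
    misses = sum(t.is_phishing and not t.warned for t in trials)
    return {"false_alarms": false_alarms, "misses": misses}

trials = [
    Trial(is_phishing=True,  warned=True,  user_flagged=True),   # hit, complied
    Trial(is_phishing=False, warned=True,  user_flagged=False),  # false alarm, ignored
    Trial(is_phishing=True,  warned=False, user_flagged=False),  # miss, fooled
    Trial(is_phishing=False, warned=False, user_flagged=False),  # correct rejection
]
print(reliance(trials))             # 0.5
print(performance(trials))          # 0.75
print(system_error_profile(trials)) # {'false_alarms': 1, 'misses': 1}
```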
Automation reliability is an important factor that may affect human trust in automation, which in turn has been shown to strongly influence the way the human operator interacts with the automated system. If the trust level is too low, the human operator may not utilize the automated system as intended; if it is too high, over-trust may lead to automation bias. In either case, overall system performance suffers; after all, the ultimate goal of human-automation collaboration is to achieve performance beyond what either could achieve alone. Most past research has manipulated automation reliability through “experience”: participants perform a task with an automated system of a given reliability (e.g., an automated warning system providing valid warnings 75% of the time), and during or after the task, participants’ trust in and reliance on the automated system are measured, along with their performance. However, research has shown that participants’ perceived reliability usually differs from the actual reliability. In real-world settings, the exact reliability can often be communicated to the human operator directly (i.e., through “description”). A description-experience gap has been found robustly in human decision-making studies: there are systematic differences between decisions made from description and decisions made from experience. The current study examines a possible description-experience gap in the effect of automation reliability on human trust, reliance, and performance in the context of phishing. Specifically, the research investigates how the reliability of phishing warnings influences people's decisions about whether to proceed upon receiving a warning. The reliability of an automated phishing warning system is manipulated through experience with the system or through description of it. These two types of manipulation are directly compared, and the measures of interest are human trust in the warning (a subjective rating of how trustworthy the warning system is), human reliance on the automated system (an objective measure of whether participants comply with the system’s warnings), and performance (the overall quality of the decisions made).
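A small simulation helps show why description and experience of the same system can diverge: over a short session, the validity rate a participant actually samples from a 75%-reliable warning system varies from session to session, so experienced reliability need not match the described figure. The trial count and seed below are arbitrary; this is an illustrative sketch, not a model from the study.

```python
# Illustrative simulation: experienced vs. described reliability.
# DESCRIBED_RELIABILITY matches the 75% example in the text; the session
# length of 40 trials is an arbitrary assumption.

import random

DESCRIBED_RELIABILITY = 0.75
TRIALS_PER_SESSION = 40

def experienced_reliability(rng: random.Random) -> float:
    """Fraction of valid warnings a participant happens to observe in one session."""
    valid = sum(rng.random() < DESCRIBED_RELIABILITY for _ in range(TRIALS_PER_SESSION))
    return valid / TRIALS_PER_SESSION

rng = random.Random(0)
samples = [experienced_reliability(rng) for _ in range(1000)]
print(f"described reliability: {DESCRIBED_RELIABILITY:.2f}")
print(f"experienced reliability across sessions: {min(samples):.2f} to {max(samples):.2f}")
```

Sampling variability of this kind is only one candidate mechanism; the decision-making literature also attributes the gap to how rare events are weighted under the two presentation formats.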