Biblio

Filters: Author is Jensen, Theodore
2021-05-13
Peck, Sarah Marie, Khan, Mohammad Maifi Hasan, Fahim, Md Abdullah Al, Coman, Emil N, Jensen, Theodore, Albayram, Yusuf.  2020.  Who Would Bob Blame? Factors in Blame Attribution in Cyberattacks Among the Non-Adopting Population in the Context of 2FA. 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC). :778–789.
This study identifies factors contributing to a sense of personal responsibility, with the goal of better understanding insecure cybersecurity behavior and guiding research toward more effective messaging for non-adopting populations. Toward that end, we ran a 2 (account type) × 2 (usage scenario) × 2 (message type) between-group study with 237 United States adult participants on Amazon MTurk, and investigated how the non-adopting population allocates blame, and under what circumstances it blames the end user, among the parties who hold responsibility: the software companies holding data, the attackers exposing data, and others. We find that users primarily hold service providers accountable for breaches, yet feel these same companies should not enforce stronger security policies on users. Results indicate that people do hold end users accountable for their behavior in the event of a breach, especially when that behavior affects others. Implications of our findings for risk communication are discussed in the paper.
2019-02-08
Jensen, Theodore, Albayram, Yusuf, Khan, Mohammad Maifi Hasan, Buck, Ross, Coman, Emil, Fahim, Md Abdullah Al.  2018.  Initial Trustworthiness Perceptions of a Drone System Based on Performance and Process Information. Proceedings of the 6th International Conference on Human-Agent Interaction. :229–237.
Prior work notes dispositional, learned, and situational aspects of trust in automation. However, no work has investigated the relative roles of these factors in initial trust of an automated system. Moreover, researchers often treat trust in automation as unidimensional, whereas ability, integrity, and benevolence perceptions (i.e., trusting beliefs) may provide a more thorough understanding of trust dynamics. To investigate this, we recruited 163 participants on Amazon's Mechanical Turk (MTurk) and randomly assigned each to one of four videos describing a hypothetical drone system: a control video, or one adding system performance information, process information, or both. Participants reported on their trusting beliefs in the system, propensity to trust other people, risk-taking tendencies, and trust in the government law enforcement agency behind the system. We found that financial risk-taking tendencies influenced trusting beliefs. Also, those who received process information were likely to have higher integrity and ability beliefs than those who did not, while those who received performance information were likely to have higher ability beliefs. Lastly, perceptions of structural assurance positively influenced all three trusting beliefs. Our findings suggest that a) users' risk-taking tendencies influence trustworthiness perceptions of systems, b) different types of information about a system have varied effects on the trustworthiness dimensions, and c) institutions play an important role in users' calibration of trust. Insights gained from this study can help design training materials and interfaces that improve user trust calibration in automated systems.