Biblio

Filters: Keyword is human performance
2020-10-12
Ferguson-Walter, Kimberly, Major, Maxine, Van Bruggen, Dirk, Fugate, Sunny, Gutzwiller, Robert.  2019.  The World (of CTF) is Not Enough Data: Lessons Learned from a Cyber Deception Experiment. 2019 IEEE 5th International Conference on Collaboration and Internet Computing (CIC). :346–353.
The human side of cyber is fundamentally important to understanding and improving cyber operations. With the exception of Capture the Flag (CTF) exercises, cyber testing and experimentation tends to ignore the human attacker. While traditional CTF events include a deeply rooted human component, they rarely aim to measure human performance, cognition, or psychology. We argue that CTF is not sufficient for measuring these aspects of the human; instead, we examine the value of performing red team behavioral and cognitive testing in a large-scale, controlled human-subject experiment. In this paper we describe the pros and cons of performing this type of experimentation and provide a detailed exposition of the data collection and experimental controls used during a recent cyber deception experiment, the Tularosa Study. Finally, we discuss lessons learned and how our experiences can inform best practices in future cyber operations studies of human behavior and cognition.
2020-02-18
Han, Chihye, Yoon, Wonjun, Kwon, Gihyun, Kim, Daeshik, Nam, Seungkyu.  2019.  Representation of White- and Black-Box Adversarial Examples in Deep Neural Networks and Humans: A Functional Magnetic Resonance Imaging Study. 2019 International Joint Conference on Neural Networks (IJCNN). :1–8.
The recent success of brain-inspired deep neural networks (DNNs) in solving complex, high-level visual tasks has led to rising expectations for their potential to match the human visual system. However, DNNs exhibit idiosyncrasies that suggest their visual representation and processing might be substantially different from human vision. One limitation of DNNs is that they are vulnerable to adversarial examples: input images to which subtle, carefully designed noise has been added to fool a machine classifier. The robustness of the human visual system against adversarial examples is potentially of great importance, as it could uncover a key mechanistic feature that machine vision has yet to incorporate. In this study, we compare the visual representations of white- and black-box adversarial examples in DNNs and humans by leveraging functional magnetic resonance imaging (fMRI). We find a small but significant difference in representation patterns between the two (i.e., white- versus black-box) types of adversarial examples for both humans and DNNs. However, unlike that of DNNs, human performance on categorical judgment is not degraded by the noise, regardless of its type. These results suggest that adversarial examples may be differentially represented in the human visual system, yet are unable to affect the perceptual experience.
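For context on the abstract's terminology: a "white-box" adversarial example is crafted with full access to the target model's gradients, while a "black-box" example is crafted without such access, typically against a substitute model and then transferred. A minimal sketch of the standard white-box technique, the Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015), is shown below in PyTorch. This is an illustration of the general idea under assumed names and parameters (fgsm_attack, epsilon = 0.03), not the procedure used in the study.

    import torch
    import torch.nn as nn

    def fgsm_attack(model: nn.Module, image: torch.Tensor,
                    label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
        # Illustrative white-box FGSM sketch, not the study's procedure:
        # take one signed-gradient step in the direction that increases
        # the classification loss, keeping the perturbation subtle.
        image = image.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(image), label)
        loss.backward()
        adversarial = image + epsilon * image.grad.sign()
        # Clamp back to the valid pixel range so the result is still an image.
        return adversarial.clamp(0.0, 1.0).detach()

The epsilon parameter controls how visually subtle the perturbation is; the study's finding is that such perturbations shift fMRI representation patterns without degrading human categorical judgment.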