Biblio

Filters: First Letter Of Last Name is J
2016-12-09
Jim Blythe, University of Southern California, Sean Smith, Dartmouth College.  2015.  Understanding and Accounting for Human Behavior.

Since computers are machines, it's tempting to think of computer security as purely a technical problem. However, computing systems are created, used, and maintained by humans, and exist to serve the goals of human and institutional stakeholders. Consequently, effectively addressing the security problem requires understanding this human dimension.


In this tutorial, we discuss this challenge and survey principal research approaches to it.

Invited Tutorial, Symposium and Bootcamp on the Science of Security (HotSoS 2015), April 2015, Urbana, IL.

2016-12-01
Pierre McCauley, University of Illinois at Urbana-Champaign, Brandon Nsiah-Ababio, University of Illinois at Urbana-Champaign, Joshua Reed, University of Illinois at Urbana-Champaign, Faramola Isiaka, University of Illinois at Urbana-Champaign, Tao Xie, University of Illinois at Urbana-Champaign.  2016.  Preliminary Analysis of Code Hunt Data Set from a Contest. 2nd International Code Hunt Workshop on Educational Software Engineering (CHESE 2016).

Code Hunt (https://www.codehunt.com/) from Microsoft Research is a web-based serious gaming platform widely used for programming contests. In this paper, we present a preliminary statistical analysis of a Code Hunt data set containing the programs written by students (only) worldwide during a 48-hour contest. The data set covers 259 users, 24 puzzles (organized into 4 sectors), and about 13,000 submitted programs. Our analysis results can help improve the creation of puzzles for future contests.
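For illustration, the kind of descriptive statistics such a preliminary analysis reports can be computed in a few lines. The sketch below is not the authors' code; the CSV file name and the column names (user_id, puzzle_id, sector) are hypothetical stand-ins for the actual Code Hunt data set schema.

```python
# Minimal sketch (assuming pandas) of summary statistics over a hypothetical
# per-submission CSV; column and file names are invented, not the real schema.
import pandas as pd

df = pd.read_csv("codehunt_submissions.csv")  # hypothetical file name

print(df["user_id"].nunique(), "users")        # ~259 in the contest
print(df["puzzle_id"].nunique(), "puzzles")    # 24, organized into 4 sectors
print(len(df), "submitted programs")           # ~13,000

# Submissions per puzzle, a simple proxy for puzzle difficulty.
per_puzzle = df.groupby(["sector", "puzzle_id"]).size().sort_values(ascending=False)
print(per_puzzle.head(10))
```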

2016-11-11
Brighten Godfrey, University of Illinois at Urbana-Champaign, Anduo Wang, Temple University, Dong Jin, Illinois Institute of Technology, Jason Croft, University of Illinois at Urbana-Champaign, Matthew Caesar, University of Illinois at Urbana-Champaign.  2015.  A Hypothesis Testing Framework for Network Security.

We rely on network infrastructure to deliver critical services and ensure security. Yet networks today have reached a level of complexity that is far beyond our ability to have confidence in their correct behavior – resulting in significant time investment and security vulnerabilities that can cost millions of dollars, or worse. Motivated by this need for rigorous understanding of complex networks, I will give an overview of our Science of Security Lablet project, A Hypothesis Testing Framework for Network Security.

First, I will discuss the emerging field of network verification, which transforms network security by rigorously checking that intended behavior is correctly realized across the live running network. Our research developed a technique called data plane verification, which has discovered problems in operational environments and can verify hypotheses and security policies with millisecond-level latency in dynamic networks. In just a few years, data plane verification has moved from early research prototypes to production deployment. We have built on this technique to reason about hypotheses even under the temporal uncertainty inherent in a large distributed network. Second, I will discuss a new approach to reasoning about networks as databases that we can query to determine answers to behavioral questions and to actively control the network. This talk will span work by a large group of folks, including Anduo Wang, Wenxuan Zhou, Dong Jin, Jason Croft, Matthew Caesar, Ahmed Khurshid, and Xuan Zou.

Presented at the Illinois ITI Joint Trust and Security/Science of Security Seminar, September 15, 2015.
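As a concrete illustration of the data plane verification idea described in the abstract above, the toy sketch below statically checks a reachability hypothesis over a snapshot of forwarding rules. It is not the project's actual system: device names and the rule format are invented, and longest-prefix matching is simplified to prefix containment.

```python
# Toy sketch of data plane verification: statically checking a reachability
# hypothesis over a forwarding-rule snapshot. Devices and rules are invented.
import ipaddress
from collections import deque

# Forwarding snapshot: device -> list of (destination prefix, next hop).
fib = {
    "edge1": [("10.0.0.0/8", "core")],
    "core":  [("10.1.0.0/16", "edge2")],
    "edge2": [("10.1.2.0/24", "host")],
}

def matches(prefix: str, addr: str) -> bool:
    """True if addr falls inside prefix (real devices use longest-prefix match)."""
    return ipaddress.ip_address(addr) in ipaddress.ip_network(prefix)

def reachable(src: str, dst_addr: str, sink: str) -> bool:
    """BFS over the data plane: can traffic to dst_addr injected at src reach sink?"""
    queue, seen = deque([src]), {src}
    while queue:
        node = queue.popleft()
        if node == sink:
            return True
        for prefix, nxt in fib.get(node, []):
            if matches(prefix, dst_addr) and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Example hypothesis: "edge1 can deliver traffic for 10.1.2.3 to host".
print(reachable("edge1", "10.1.2.3", "host"))  # True
```

Because the check runs over a static snapshot rather than live traffic, hypotheses like this can be re-verified on every rule update, which is the property that makes millisecond-level verification in dynamic networks plausible.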

2016-10-24
Ross Koppel, University of Pennsylvania, Jim Blythe, University of Southern California, Vijay Kothari, Dartmouth College, Sean W. Smith, Dartmouth College.  2016.  Beliefs about Cybersecurity Rules and Passwords: A Comparison of Two Survey Samples of Cybersecurity Professionals Versus Regular Users. 12th Symposium On Usable Privacy and Security.

In this paper we explore the differential perceptions of cybersecurity professionals and general users regarding access rules and passwords. We conducted a preliminary survey involving 28 participants: 15 cybersecurity professionals and 13 general users. We present our preliminary findings and explain how such survey data might be used to improve security in practice. We focus on user fatigue with access rules and passwords.

2016-10-07
Pradeep Murukannaiah, Jessica Staddon, Heather Lipford, Bart Knijnenburg.  2016.  PrIncipedia: A Privacy Incidents Encyclopedia. Privacy Law Scholars Conference.

A thorough understanding of society’s privacy incidents is of paramount importance for technical solutions, training/education, social research, and legal scholarship in privacy. The goal of the PrIncipedia project is to provide this understanding by developing the first comprehensive database of privacy incidents, enabling the exploration of a variety of privacy-related research questions. We provide a working definition of “privacy incident” and evidence that it meets end-user perceptions of privacy. We also provide semi-automated support for building the database through a learned classifier that detects news articles about privacy incidents.
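As a rough illustration of the kind of learned classifier mentioned above, the sketch below trains a simple text classifier to flag privacy-incident news articles. It assumes scikit-learn, and the training articles and labels are invented placeholders, not PrIncipedia data or the authors' actual model.

```python
# Minimal sketch (assuming scikit-learn) of a binary classifier for detecting
# privacy-incident news articles; the examples below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

articles = [
    "Retailer discloses breach exposing customer payment records",
    "Hospital employee fired for snooping on patient files",
    "City council approves new bike lanes downtown",
    "Local team wins regional championship",
]
labels = [1, 1, 0, 0]  # 1 = privacy incident, 0 = not

# TF-IDF features over unigrams and bigrams, fed to logistic regression.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(articles, labels)

print(clf.predict(["Leaked database reveals users' browsing histories"]))  # likely [1]
```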

Ignacio X. Dominguez, Jayant Dhawan, Robert St. Amant, David L. Roberts.  2016.  Exploring the effects of different text stimuli on typing behavior. Proceedings of the International Conference on Cognitive Modeling (ICCM). :175–181.
Ignacio X. Dominguez, Jayant Dhawan, Robert St. Amant, David L. Roberts.  2016.  JIVUI: JavaScript Interface for Visualization of User Interaction. Proceedings of the International Conference on Cognitive Modeling (ICCM). :125–130.

In this paper we describe the JavaScript Interface for Visualization of User Interaction (JIVUI): a modular, Web-based, and customizable visualization tool that shows an animation of the trace of a user interaction with a graphical interface, or of predictions made by cognitive models of user interaction. Any combination of gaze, mouse, and keyboard data can be reproduced within a user-provided interface. Although customizable, the tool includes a series of plug-ins to support common visualization tasks, including a timeline of input device events and perceptual and cognitive operators based on the Model Human Processor and TYPIST. We talk about our use of this tool to support hypothesis generation, assumption validation, and to guide our modeling efforts.
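As an illustration of the kind of data such a tool consumes, the sketch below (in Python rather than JavaScript, for consistency with the other examples on this page) merges hypothetical gaze, mouse, and keyboard event streams into one chronological trace of the sort JIVUI animates. All field names and payloads are invented, not JIVUI's actual input format.

```python
# Sketch of a replayable interaction trace: per-device event streams merged
# by timestamp. Field names and payload strings are invented for illustration.
from dataclasses import dataclass
import heapq

@dataclass(order=True)
class InputEvent:
    t_ms: int     # timestamp in milliseconds
    device: str   # "gaze", "mouse", or "keyboard"
    payload: str  # e.g. "fixation(120,340)", "click(200,85)", "key(a)"

gaze  = [InputEvent(0, "gaze", "fixation(120,340)"), InputEvent(450, "gaze", "fixation(200,90)")]
mouse = [InputEvent(300, "mouse", "move(180,100)"), InputEvent(480, "mouse", "click(200,85)")]
keys  = [InputEvent(900, "keyboard", "key(a)")]

# Merge the already-sorted per-device streams into one timeline for replay.
for ev in heapq.merge(gaze, mouse, keys):
    print(f"{ev.t_ms:4d} ms  {ev.device:8s} {ev.payload}")
```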

2016-10-06
Jing Chen, Aiping Xiong, Ninghui Li, Robert Proctor.  2016.  The description-experience gap in the effect of warning reliability on user trust, reliance, and performance in a phishing context.

Automation reliability is an important factor that may affect human trust in automation, which has been shown to strongly influence the way the human operator interacts with the automated system. If the trust level is too low, the human operator may not utilize the automated system as expected; if the trust level is too high, over-trust may lead to automation biases. In either case, the overall system performance will be undermined. After all, the ultimate goal of human-automation collaboration is to improve performance beyond what would be achieved with either alone. Most past research has manipulated automation reliability through “experience”. That is, participants perform a certain task with an automated system that has a certain level of reliability (e.g., an automated warning system providing valid warnings 75% of the time). During or after the task, participants’ trust in and reliance on the automated system are measured, as well as their performance. However, research has shown that participants’ perceived reliability usually differs from the actual reliability. In a real-world situation, it is very likely that the exact reliability can be described to the human operator (i.e., through “description”). A description-experience gap has been found robustly in human decision-making studies, according to which there are systematic differences between decisions made from description and decisions made from experience. The current study examines the possible description-experience gap in the effect of automation reliability on human trust, reliance, and performance in the context of phishing. Specifically, the research investigates how the reliability of phishing warnings influences people's decisions about whether to proceed upon receiving a warning. The effect of the reliability of an automated phishing warning system is manipulated through experience with the system or through description of it. These two types of manipulations are directly compared, and the measures of interest are human trust in the warning (a subjective rating of how trustworthy the warning system is), human reliance on the automated system (an objective measure of whether participants comply with the system’s warnings), and performance (the overall quality of the decisions made).
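To make the reliance and performance measures concrete, the toy simulation below models a user who heeds warnings with a fixed probability (standing in for trust) from a warning system of a given reliability, and reports the fraction of correct outcomes. The compliance model is invented purely for illustration and is not the authors' experimental design.

```python
# Toy simulation of reliance on a warning system of given reliability.
# The compliance model is invented; it is not the study's actual design.
import random

random.seed(1)

def simulate(reliability: float, trust: float, trials: int = 10_000) -> float:
    """Fraction of correct outcomes when the user heeds warnings with prob. = trust."""
    correct = 0
    for _ in range(trials):
        warning_valid = random.random() < reliability  # e.g. 0.75, as in the abstract
        complies = random.random() < trust             # the reliance decision
        # Correct outcome: comply with a valid warning, or ignore an invalid one.
        correct += (complies == warning_valid)
    return correct / trials

for trust in (0.25, 0.5, 0.75, 1.0):
    print(f"trust={trust:.2f}  performance={simulate(0.75, trust):.3f}")
```

Under this toy model, when reliability exceeds 0.5, performance rises with trust, which is one way to see why mis-calibrated (too-low) trust in a fairly reliable warning system degrades overall outcomes.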
