Biblio

Found 2071 results

Filters: First Letter Of Last Name is R
2017-01-09
Ricard López Fogués, Pradeep K. Murukannaiah, Jose M. Such, Munindar P. Singh.  2017.  Understanding Sharing Policies in Multiparty Scenarios: Incorporating Context, Preferences, and Arguments into Decision Making. ACM Transactions on Computer-Human Interaction.

Social network services enable users to conveniently share personal information.  Often, the information shared concerns other people, especially other members of the social network service.  In such situations, two or more people can have conflicting privacy preferences; thus, an appropriate sharing policy may not be apparent. We identify such situations as multiuser privacy scenarios. Current approaches propose finding a sharing policy through preference aggregation.  However, studies suggest that users feel more confident in their decisions regarding sharing when they know the reasons behind each other's preferences.  The goals of this paper are (1) understanding how people decide the appropriate sharing policy in multiuser scenarios where arguments are employed, and (2) developing a computational model to predict an appropriate sharing policy for a given scenario. We report on a study that involved a survey of 988 Amazon MTurk users about a variety of multiuser scenarios and the optimal sharing policy for each scenario.  Our evaluation of the participants' responses reveals that contextual factors, user preferences, and arguments influence the optimal sharing policy in a multiuser scenario.  We develop and evaluate an inference model that predicts the optimal sharing policy given the three types of features.  We analyze the predictions of our inference model to uncover potential scenario types that lead to incorrect predictions, and to enhance our understanding of when multiuser scenarios are more or less prone to dispute.


To appear
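As a rough illustration of the kind of inference model the abstract describes, the sketch below trains a generic classifier to map contextual factors, user preferences, and argument strength to a predicted sharing policy. All feature names, training data, and labels are invented for illustration; the paper's actual features and model may differ.

```python
# Hypothetical sketch of a sharing-policy inference model.
# Features and training data are invented; the paper's actual
# feature set and model are not specified in this abstract.
from sklearn.ensemble import RandomForestClassifier

# Each row: [scenario_sensitivity, relationship_closeness,
#            uploader_prefers_share, subject_prefers_share,
#            strength_of_argument_for_sharing]
X = [
    [0.1, 0.9, 1, 1, 0.8],   # innocuous photo, close friends, both agree
    [0.9, 0.4, 1, 0, 0.2],   # sensitive photo, subject objects
    [0.7, 0.6, 0, 0, 0.1],
    [0.2, 0.8, 1, 1, 0.9],
]
y = ["share", "restrict", "restrict", "share"]  # optimal policy per scenario

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Predict the policy for a new multiuser scenario (hypothetical features).
print(model.predict([[0.3, 0.8, 1, 0, 0.7]]))
```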

2017-01-05
Aiping Xiong, Robert W. Proctor, Ninghui Li, Weining Yang.  2016.  Use of Warnings for Instructing Users How to Detect Phishing Webpages. 46th Annual Meeting of the Society for Computers in Psychology.

The ineffectiveness of phishing warnings has been attributed to users' poor comprehension of the warning. However, the effectiveness of a phishing warning is typically evaluated at the time when users interact with a suspected phishing webpage, which we call the effect with the phishing warning. Nevertheless, users' improved phishing detection when the warning is absent—or the effect of the warning—is the ultimate goal of preventing users from falling for phishing scams. We conducted an online study to evaluate both the effect with and the effect of several phishing warning variations, varying the point at which the warning was presented and whether procedural knowledge instruction was included in the warning interface. The current Chrome phishing warning was also included as a control. 360 Amazon Mechanical Turk workers made decisions about 10 login webpages (8 authentic, 2 fraudulent) with the aid of a warning (first phase). After a short distracting task, the workers made the same decisions about 10 different login webpages (8 authentic, 2 fraudulent) without a warning. In phase one, the compliance rates with the two proposed warning interfaces (98% and 94%) were similar to that of the Chrome warning (98%), regardless of when the warning was presented. In phase two (without warning), performance was better for the condition in which the warning with procedural knowledge instruction was presented before the phishing webpage in phase one, suggesting a better effect of the warning than for the other conditions. With procedural knowledge of how to determine a webpage's legitimacy, users identified phishing webpages more accurately even without the warning being presented.

Jing Chen, Robert W. Proctor, Ninghui Li.  2016.  Human Trust in Automation in a Phishing Context. 46th Annual Meeting of the Society for Computers in Psychology.

Many previous studies have shown that trust in automation mediates the effectiveness of automation in maintaining performance, and one critical factor that affects trust is the reliability of the automated system. In the cyber domain, automated systems are pervasive, yet human trust in them has not been studied as extensively as in other domains, such as transportation.

In the current study, we used a phishing email identification task (with an automated phishing-detection assistant system) as a testbed to study human trust in automation in the cyber domain. More specifically, we systematically investigated the influence of "description" (i.e., whether the user was informed about the actual reliability of the automated system) and "experience" (i.e., whether the user was provided feedback on their choices), in addition to the reliability level of the automated phishing detection system. These factors were varied across conditions of response bias (false alarms vs. misses) and task difficulty (easy vs. difficult), which a pilot study indicated may be critical. Measures of user performance and trust were compared across conditions. The measures of interest were human trust in the warning (a subjective rating of how trustworthy the warning system is), human reliance on the automated system (an objective measure of whether participants complied with the system's warnings), and performance (the overall quality of the decisions made).

Aiping Xiong, Robert W. Proctor, Weining Yang, Ninghui Li.  2017.  Is Domain Highlighting Actually Helpful in Identifying Phishing Webpages? Human Factors: The Journal of the Human Factors and Ergonomics Society.

Objective: To evaluate the effectiveness of domain highlighting in helping users identify whether webpages are legitimate or spurious.

Background: As a component of the URL, a domain name can be overlooked. Consequently, browsers highlight the domain name to help users identify which website they are visiting. Nevertheless, few studies have assessed the effectiveness of domain highlighting, and the only formal study confounded highlighting with instructions to look at the address bar. 

Method: We conducted two phishing detection experiments. Experiment 1 was run online: participants judged the legitimacy of webpages in two phases. In phase one, participants judged legitimacy based on any information on the webpage, whereas in phase two they were directed to focus on the address bar. Whether the domain was highlighted was also varied. Experiment 2 was conducted similarly but with participants in a laboratory setting, which allowed tracking of eye fixations.

Results: Participants differentiated the legitimate and fraudulent webpages better than chance. There was some benefit of attending to the address bar, but domain highlighting did not provide effective protection against phishing attacks. Analysis of eye-gaze fixation measures was in agreement with the task performance, but heat-map results revealed that participants’ visual attention was attracted by the highlighted domains.

Conclusion: Failure to detect many fraudulent webpages even when the domain was highlighted implies that users lacked knowledge of webpage security cues or how to use those cues.

2016-12-16
Jim Blythe, University of Southern California, Ross Koppel, University of Pennsylvania, Sean Smith, Dartmouth College.  2013.  Circumvention of Security: Good Users Do Bad Things.

Conventional wisdom holds that the textbook view describes reality and that only bad people (not good people trying to get their jobs done) break the rules. Yet the textbook view does not describe reality: good people circumvent security, too.

Published in IEEE Security & Privacy, volume 11, issue 5, September - October 2013.

2016-12-01
Harold Thimbleby, Swansea University, Ross Koppel, University of Pennsylvania.  2015.  The Healthtech Declaration. IEEE Security and Privacy. 13(6):82-84.

Healthcare technology—sometimes called "healthtech" or "healthsec"—is enmeshed with security and privacy via usability, performance, and cost-effectiveness issues. It is multidisciplinary, distributed, and complex—and it also involves many competing stakeholders and interests. To address the problems that arise in such a multifaceted field—composed of physicians, IT professionals, management information specialists, computer scientists, medical informaticists, and epidemiologists, to name a few—the Healthtech Declaration was initiated at the most recent USENIX Summit on Information Technologies for Health (Healthtech 2015), held in Washington, DC. This Healthtech Declaration includes an easy-to-use—and easy-to-cite—checklist of key issues that anyone proposing a solution must consider (see "The Healthtech Declaration Checklist" sidebar). In this article, we provide the context and motivation for the declaration.

2016-11-17
Phuong Cao, University of Illinois at Urbana-Champaign, Ravishankar Iyer, University of Illinois at Urbana-Champaign, Zbigniew Kalbarczyk, University of Illinois at Urbana-Champaign, Eric Badger, University of Illinois at Urbana-Champaign, Surya Bakshi, University of Illinois at Urbana-Champaign, Simon Kim, University of Illinois at Urbana-Champaign, Adam Slagell, University of Illinois at Urbana-Champaign, Alex Withers, University of Illinois at Urbana-Champaign.  2016.  Preemptive Intrusion Detection – Practical Experience and Detection Framework.

Using stolen or weak credentials to bypass authentication is one of the top 10 network threats, as shown in recent studies. Disguising themselves as legitimate users, attackers use stealthy techniques such as rootkits and covert channels to gain persistent access to a target system. However, such attacks are often detected only after the system misuse stage, i.e., after the attackers have already executed attack payloads such as i) stealing secrets, ii) tampering with system services, and iii) disrupting the availability of production services.

In this talk, we analyze a real-world credential-stealing attack observed at the National Center for Supercomputing Applications. We show the disadvantages of traditional detection techniques, such as signature-based and anomaly-based detection, for such attacks. Our approach complements existing detection techniques. We investigate the use of probabilistic graphical models, specifically factor graphs, to integrate security logs from multiple sources for more accurate detection. Finally, we propose a security testbed architecture to: i) simulate variants of known attacks that may happen in the future, ii) replay such attack variants in an isolated environment, and iii) collect and share security logs of such replays with the security research community.

Presented at the Illinois Information Trust Institute Joint Trust and Security/Science of Security Seminar, May 3, 2016.
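The talk's factor-graph formulation is not detailed here, but the underlying idea (fusing evidence from multiple security logs to score a hidden attack state) can be sketched with a hand-rolled factor product. The event names, weights, and prior below are invented for illustration.

```python
# Minimal hand-rolled sketch of factor-graph-style evidence fusion.
# Event names, factor weights, and the prior are illustrative only;
# they are not the talk's actual model or log sources.

# Observed events from different logs (hypothetical event names).
events = {"ssh_login_unusual_hour": True,
          "new_rootkit_signature": False,
          "outbound_covert_channel": True}

# Factors: how strongly each observed event supports each hidden state.
factors = {
    "ssh_login_unusual_hour":  {"benign": 0.4, "compromised": 0.6},
    "new_rootkit_signature":   {"benign": 0.1, "compromised": 0.9},
    "outbound_covert_channel": {"benign": 0.2, "compromised": 0.8},
}

prior = {"benign": 0.95, "compromised": 0.05}

# Multiply the prior by the factor of every event that fired, then normalize.
score = dict(prior)
for event, fired in events.items():
    if fired:
        for state in score:
            score[state] *= factors[event][state]

total = sum(score.values())
posterior = {state: value / total for state, value in score.items()}
print(posterior)  # e.g. {'benign': ..., 'compromised': ...}
```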

Eric Badger, University of Illinois at Urbana-Champaign, Phuong Cao, University of Illinois at Urbana-Champaign, Alex Withers, University of Illinois at Urbana-Champaign, Adam Slagell, University of Illinois at Urbana-Champaign, Zbigniew Kalbarczyk, University of Illinois at Urbana-Champaign, Ravishankar Iyer, University of Illinois at Urbana-Champaign.  2015.  Scalable Data Analytics Pipeline for Real-Time Attack Detection; Design, Validation, and Deployment in a Honey Pot Environment.

This talk will explore a scalable data analytics pipeline for real-time attack detection through the use of customized honeypots at the National Center for Supercomputing Applications (NCSA). Attack detection tools are common and are constantly improving, but validating these tools is challenging. You must: (i) identify data (e.g., system-level events) that is essential for detecting attacks, (ii) extract this data from multiple data logs collected by runtime monitors, and (iii) present the data to the attack detection tools. On top of this, such an approach must scale with an ever-increasing amount of data, while allowing integration of new monitors and attack detection tools. All of these require an infrastructure to host and validate the developed tools before deployment into a production environment.

We will present a generalized architecture that aims for a real-time, scalable, and extensible pipeline that can be deployed in diverse infrastructures to validate arbitrary attack detection tools. To motivate our approach, we will show an example deployment of our pipeline based on open-sourced tools. The example deployment uses as its data sources: (i) a customized honeypot environment at NCSA and (ii) a container-based testbed infrastructure for interactive attack replay. Each of these data sources is equipped with network and host-based monitoring tools such as Bro (a network-based intrusion detection system) and OSSEC (a host-based intrusion detection system) to allow for the runtime collection of data on system/user behavior. Finally, we will present an attack detection tool that we developed and that we look to validate through our pipeline. In conclusion, the talk will discuss the challenges of transitioning attack detection from theory to practice and how the proposed data analytics pipeline can help that transition.

Presented at the Illinois Information Trust Institute Joint Trust and Security/Science of Security Seminar, October 6, 2016.

Presented at the NSA SoS Quarterly Lablet Meeting, October 2015.
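As a minimal sketch of the pipeline shape described in this talk (normalize events from multiple monitors, then fan them out to detection tools), the code below uses hypothetical monitor names, event fields, and a stand-in detector rule; it is not the NCSA deployment.

```python
# Illustrative sketch of a log-analytics pipeline: normalize events
# from multiple monitors into a common schema, then fan them out to
# pluggable detection tools. All names and fields are hypothetical.
import json

def normalize(raw_line: str) -> dict:
    """Parse one monitor log line (assumed JSON) into a common schema."""
    event = json.loads(raw_line)
    return {"source": event.get("monitor", "unknown"),
            "host": event.get("host"),
            "action": event.get("action")}

def signature_detector(event: dict) -> bool:
    # Stand-in rule; a real detector would match known indicators.
    return event["action"] == "rootkit_install"

DETECTORS = [signature_detector]  # new tools plug in here

raw_stream = [
    '{"monitor": "host_ids", "host": "node1", "action": "login"}',
    '{"monitor": "host_ids", "host": "node1", "action": "rootkit_install"}',
]

for line in raw_stream:
    event = normalize(line)
    for detect in DETECTORS:
        if detect(event):
            print("ALERT:", event)
```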

2016-11-15
Hui Lin, University of Illinois at Urbana-Champaign, Homa Alemzadeh, IBM TJ Watson, Daniel Chen, University of Illinois at Urbana-Champagin, Zbigniew Kalbarczyk, University of Illinois at Urbana-Champaign, Ravishankar K. Iyer, University of Illinois at Urbana-Champaign.  2016.  Safety-critical Cyber-physical Attacks: Analysis, Detection, and Mitigation. Symposium and Bootcamp for the Science of Security (HotSoS 2016).

Today's cyber-physical systems (CPSs) can have very different characteristics in terms of control algorithms, configurations, underlying infrastructure, communication protocols, and real-time requirements. Despite these variations, they all face the threat of malicious attacks that exploit vulnerabilities in the cyber domain as footholds to introduce safety violations in the physical processes. In this paper, we focus on a class of attacks that impact the physical processes without introducing anomalies in the cyber domain. We present the common challenges in detecting this type of attack in the contexts of two very different CPSs (i.e., power grids and surgical robots). In addition, we present a general principle for detecting such cyber-physical attacks, which combines knowledge of both the cyber and physical domains to estimate the adverse consequences of malicious activities in a timely manner.
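A minimal sketch of the stated detection principle follows: estimate the physical consequence of a control command before it takes effect, and flag commands whose predicted outcome violates a safety envelope. The toy frequency model and limits below are invented stand-ins, not the paper's estimator.

```python
# Illustrative sketch: combine cyber-domain input (a control command)
# with a physical-domain model to estimate consequences before execution.
# The linear model and safety limits are invented for illustration.

SAFE_FREQ_RANGE = (59.5, 60.5)  # hypothetical grid frequency limits, Hz

def estimate_frequency_after(load_change_mw: float, current_freq: float) -> float:
    """Toy linear model of how a commanded load change shifts grid frequency."""
    return current_freq - 0.01 * load_change_mw

def is_unsafe(load_change_mw: float, current_freq: float) -> bool:
    predicted = estimate_frequency_after(load_change_mw, current_freq)
    return not (SAFE_FREQ_RANGE[0] <= predicted <= SAFE_FREQ_RANGE[1])

print(is_unsafe(10, 60.0))    # small change stays in the envelope: False
print(is_unsafe(200, 60.0))   # large change drives frequency unsafe: True
```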

Phuong Cao, University of Illinois at Urbana-Champaign, Eric Badger, University of Illinois at Urbana-Champaign, Zbigniew Kalbarczyk, University of Illinois at Urbana-Champaign, Ravishankar Iyer, University of Illinois at Urbana-Champaign.  2016.  A Framework for Generation, Replay and Analysis of Real-World Attack Variants. Symposium and Bootcamp for the Science of Security (HotSoS 2016).

This paper presents a framework for (1) generating variants of known attacks, (2) replaying attack variants in an isolated environment, and (3) validating the detection capabilities of attack detection techniques against the variants. Our framework facilitates reproducible security experiments. We generated 648 variants of three real-world attacks (observed at the National Center for Supercomputing Applications at the University of Illinois). Our experiment showed the value of generating attack variants by quantifying the detection capabilities of three detection methods: a signature-based detection technique, an anomaly-based detection technique, and a probabilistic graphical model-based technique.
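As a sketch of how attack variants might be enumerated (the paper's actual generation method is not detailed in this abstract), the code below substitutes alternative techniques at each stage of a known attack; the stage names and alternatives are invented.

```python
# Hypothetical sketch of attack-variant generation: enumerate every
# combination of alternative techniques for each stage of a known attack.
# Stage names and alternatives are invented for illustration.
from itertools import product

known_attack = ["steal_credential", "escalate_privilege", "install_backdoor"]

alternatives = {
    "steal_credential":   ["phish_password", "keylog_password", "replay_token"],
    "escalate_privilege": ["kernel_exploit", "sudo_misconfig"],
    "install_backdoor":   ["cron_job", "ssh_key", "kernel_module"],
}

variants = [list(combo) for combo in
            product(*(alternatives[step] for step in known_attack))]

print(len(variants))   # 3 * 2 * 3 = 18 variants of this one attack
print(variants[0])     # one concrete variant, ready to replay in isolation
```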

Keywhan Chung, University of Illinois at Urbana-Champaign, Charles A. Kamhoua, Air Force Research Laboratory, Kevin A. Kwiat, Air Force Research Laboratory, Zbigniew Kalbarczyk, University of Illinois at Urbana-Champaign, Ravishankar K. Iyer, University of Illinois at Urbana-Champaign.  2016.  Game Theory with Learning for Cyber Security Monitoring. IEEE High Assurance Systems Engineering Symposium (HASE 2016).

Recent attacks show that threats to cyber infrastructure are not only increasing in volume but are also becoming more sophisticated. The attacks may comprise multiple actions that are hard to differentiate from benign activity, so common detection techniques have to deal with high false-positive rates. Because of the imperfect performance of automated detection techniques, responses to such attacks are highly dependent on human-driven decision-making processes. While game theory has been applied to many problems that require rational decision-making, we find limitations in applying such methods to security games. In this work, we propose Q-learning to react automatically to the adversarial behavior of a suspicious user and secure the system. This work compares variations of Q-learning with a traditional stochastic game. Simulation results show the feasibility of naive Q-learning despite restricted information about opponents.
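A minimal Q-learning sketch for a defender reacting to a possibly adversarial user follows; the states, actions, rewards, and transition model are invented for illustration and do not reproduce the paper's game formulation.

```python
# Minimal Q-learning sketch for a defender choosing a monitoring action.
# States, actions, payoffs, and transitions are invented stand-ins.
import random

states = ["normal", "suspicious"]
actions = ["trust", "monitor", "block"]
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def reward(state: str, action: str) -> float:
    # Made-up payoffs: blocking a suspicious user is good, blocking a
    # normal user is costly, monitoring is cheap insurance.
    table = {("normal", "trust"): 1.0, ("normal", "monitor"): 0.5,
             ("normal", "block"): -1.0, ("suspicious", "trust"): -2.0,
             ("suspicious", "monitor"): 0.5, ("suspicious", "block"): 2.0}
    return table[(state, action)]

for _ in range(5000):
    s = random.choice(states)
    # Epsilon-greedy action selection.
    if random.random() < epsilon:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda act: Q[(s, act)])
    r = reward(s, a)
    s_next = random.choice(states)  # simplistic opponent/state transition
    best_next = max(Q[(s_next, act)] for act in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# The learned policy: the highest-value action in each state.
for s in states:
    print(s, "->", max(actions, key=lambda act: Q[(s, act)]))
```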