Biblio
We proposed a multi-granularity approach to presenting risk information about mobile apps to end users. Within this approach, the highest level is a summary risk index, which allows quick and easy comparison among multiple apps that provide similar functionality. We have developed several types of risk index, such as text saying “High Risk” or a number of filled circles (Gates, Chen, Li, & Proctor, 2014). Through both online and in-lab studies, we found that when presented with an interface featuring the summary risk index, participants made more secure app-selection decisions. Subsequent research showed that the framing of the summary risk information affects users’ app-selection decisions, and that positive framing in terms of safety has an advantage over negative framing in terms of risk (Chen, Gates, Li, & Proctor, 2014).
In addition to the summary risk index, some users may also want more detailed risk information about apps. We have been developing an intermediate-level risk display that presents only the major risk categories. As a first step, we conducted user studies in which expert users identified the major risk categories (personal privacy, monetary loss, and device stability), and we validated these categories with typical users (Jorgensen, Chen, Gates, Li, Proctor, & Yu, 2015). In a subsequent study, we are developing a graphical display that incorporates these risk categories into the current app interface and testing its effectiveness.
This multi-granularity approach can be applied to risk communication in other contexts. For example, in the context of communicating the potential risk associated with phishing attacks, an effective warning should include both higher-level and lower-level risk information: a higher-level index indicating how likely it is that an email message or website is a phishing attempt should be presented to users, informing them of the potential risk in an easy-to-comprehend manner; a more detailed explanation should also be available for users who want to know more about the warning and the index. We have completed a pilot study in this area and are initiating a full study to investigate the effectiveness of such an interface in preventing users from being successfully phished.
Norms are a promising basis for governance in secure, collaborative environments---systems in which multiple principals interact. Yet, many aspects of norm governance remain poorly understood, inhibiting adoption in real-life collaborative systems. This work focuses on the combined effects of sanction and the observability of the sanctioner in a secure, collaborative environment. We introduce ENGMAS (Exploratory Norm-Governed MultiAgent Simulation), a multiagent simulation of students performing research within a university lab setting. ENGMAS enables us to explore the combined effects of sanction (group or individual) and the sanctioner's variable observability on system resilience and liveness. The simulation consists of agents maintaining "compliance" with enforced security norms while also remaining "motivated" as researchers. The results show that, with lower observability, agents tend not to comply with security policies and eventually have to leave the organization. Group sanction gives the agents more motive to comply with security policies and is cost-effective compared to individual sanction in terms of sanction costs.
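The sanction dynamics described above can be sketched in a toy model. Everything below is an illustrative assumption, not the actual ENGMAS implementation: agents occasionally violate norms, sanctions erode motivation, and a group sanction spreads a smaller per-agent cost while nudging every agent's compliance upward.

```python
import random

# Toy sketch of sanction dynamics (illustrative parameter values only,
# not the actual ENGMAS model).

class Agent:
    def __init__(self, rng):
        self.compliance = rng.uniform(0.4, 0.9)  # chance of following norms
        self.motivation = 1.0                    # drops when sanctioned

def run(group_sanction, n=10, steps=100, seed=1):
    rng = random.Random(seed)
    agents = [Agent(rng) for _ in range(n)]
    for _ in range(steps):
        for a in agents:
            if rng.random() > a.compliance:      # a norm violation occurs
                targets = agents if group_sanction else [a]
                for t in targets:
                    # group sanction: smaller cost, but applied to everyone
                    t.motivation -= 0.01 if group_sanction else 0.05
                    t.compliance = min(1.0, t.compliance + 0.02)
    return sum(a.compliance for a in agents) / n  # mean final compliance
```

In this toy setup, group sanction drives mean compliance up faster than individual sanction, echoing the abstract's finding that group sanction is the more cost-effective lever.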
An enormous number of images are currently shared through social networking services such as Facebook. These images usually contain the appearance of people and may violate those people's privacy if they are published without permission from each person shown. To remedy this privacy concern, visual privacy protection, such as blurring, is applied to the facial regions of people who have not given permission. However, in addition to degrading image quality, this may spoil the context of the image: if some people are filtered while others are not, the missing facial expressions make the image difficult to comprehend. This paper proposes an image-melding-based method that modifies facial regions in a visually unintrusive way while preserving facial expression. Our experimental results demonstrate that the proposed method can retain facial expression while protecting privacy.
The sophistication and flexibility of software development make it easy to leave security vulnerabilities in software applications for attackers to exploit. It is critical to educate and train software engineers to avoid introducing vulnerabilities in software applications in the first place, such as by adopting secure coding mechanisms and conducting security testing. A number of websites provide training grounds to train people's hacking skills, which are highly related to security-testing skills, and to train people's secure coding skills. However, there exists no interactive gaming platform for instilling gaming aspects into the education and training of secure coding. To address this issue, we propose to construct secure coding duels in Code Hunt, a high-impact serious gaming platform released by Microsoft Research. In Code Hunt, a coding duel consists of two code segments: a secret code segment and a player-visible code segment. To solve a coding duel, a player iteratively modifies the player-visible code segment to match the functional behaviors of the secret code segment. During the duel-solving process, the player is given clues in the form of a set of automatically generated test cases that characterize sample functional behaviors of the secret code segment. The game aspect of Code Hunt is to recognize a pattern from the test cases and to re-engineer the player-visible code segment to exhibit the expected behaviors. The secure coding duels proposed in this work are coding duels carefully designed to train players' secure coding skills, such as sufficient input validation and access control.
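A coding duel of this kind can be sketched as follows (the code and names are hypothetical illustrations, not taken from Code Hunt): the player never sees the secret segment, only the generated test cases on which their visible segment diverges from it. Here the duel targets insufficient input validation.

```python
def secret_segment(s):
    # Secure reference behavior: reject oversized or non-alphanumeric input.
    if len(s) > 8 or not s.isalnum():
        return "rejected"
    return "accepted"

def player_segment(s):
    # Player's current attempt: missing the length check, a typical
    # insufficient-input-validation bug the duel is designed to surface.
    if not s.isalnum():
        return "rejected"
    return "accepted"

def generate_clues(inputs):
    """Return the test cases where the player's code diverges from the secret."""
    return [(s, secret_segment(s))
            for s in inputs if player_segment(s) != secret_segment(s)]

# The single divergent case points the player at the missing length check.
clues = generate_clues(["abc", "toolonginput", "bad input!", "x1y2"])
```

Iterating on `player_segment` until `generate_clues` returns no divergent cases mirrors the duel-solving loop the abstract describes.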
Today, with the widespread use of information technology (IT), e-commerce security and its related legislation are critical issues in both information technology and court law. There is a consensus that security matters are a significant foundation of e-commerce, electronic consumers, and firms' privacy. While e-commerce networks need a policy for security and privacy, they should also be built on a simple, consumer-friendly infrastructure. Hence, it is necessary to review the theoretical models for revision. In this theory review, we examine a number of earlier articles covering e-commerce security and the ambit of legislation at the individual level by assessing five criteria: whether the articles provide an effective strategy for the security-protection challenges facing e-commerce and e-consumers, and whether existing provisions clearly remedy precedents or still need to develop. This paper focuses on analyzing the prior discussion of e-commerce security and existing legislation against cyber-crime activity in e-commerce. The article also offers recommendations for subsequent research, indicating that attention to the secure factors of e-commerce may fill the vacuum in its legislation.
Best Poster Award, ACM SIGCOMM Conference on Principles of Advanced Discrete Simulation, London, UK, June 10-12, 2015.
Techno-stress has become a problem in recent years with the development of information technology. Various studies have reported a relationship between key typing and psychosomatic state. Keystroke dynamics refers to the dynamics of the key-typing motion. The objective of this paper is to clarify the mechanism linking keystroke dynamics and physiological responses. Inter-stroke time (IST), the interval between successive keystrokes, was measured as keystroke dynamics. The physiological responses were heart rate variability (HRV) and respiration (Resp). Multidimensional directed coherence was applied to the system consisting of IST, HRV, and Resp in order to reveal causal correlations. As a result, the strength of entrainment of the fluctuating physiological responses to IST was observed to differ under noise and under a cognitive load. Specifically, the entrainment became weaker as the cognitive resources devoted to IST relatively increased and the keystroke motion acquired a robust rhythm. Conversely, the entrainment became stronger as the cognitive resources devoted to IST relatively decreased, because resources were also devoted to the noise or the cognitive load.
With recent developments in visual sensor technology, multiple imaging sensors are used in several applications, such as surveillance, medical imaging, and machine vision, in order to improve their capabilities. The goal of any efficient image-fusion algorithm is to combine the visual information obtained from a number of disparate imaging sensors into a single fused image without introducing distortion or loss of information. Existing fusion algorithms employ either the mean or the choose-max fusion rule for selecting the best features for fusion. The choose-max rule distorts constant background information, whereas the mean rule blurs the edges. In this paper, two Non-Subsampled Contourlet Transform (NSCT)-based feature-level fusion schemes are proposed and compared. In the first method, fuzzy logic is applied to determine the weights to be assigned to each segmented region using the computed salient-region feature values. The second method employs the Golden Section Algorithm (GSA) to obtain the optimal fusion weights for each region based on its Petrovic metric. The regions are merged adaptively using the determined weights. Experiments show that the proposed feature-level fusion methods provide better visual quality, with clearer edge information, and better objective quality metrics than individual multi-resolution-based methods such as the Dual Tree Complex Wavelet Transform and NSCT.
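The region-wise weighted fusion idea can be illustrated with a deliberately simplified sketch: a plain weighted average driven by per-sample salience scores. This stands in for the paper's fuzzy-logic and GSA weight computations, which operate on NSCT coefficients rather than raw samples.

```python
# Simplified region-wise fusion: each co-located sample from the two
# source images is blended with a weight derived from its salience scores.
# (Assumed data layout: flat lists of samples with matching salience lists.)

def fuse_regions(img_a, img_b, salience_a, salience_b):
    fused = []
    for pa, pb, sa, sb in zip(img_a, img_b, salience_a, salience_b):
        w = sa / (sa + sb)                  # normalized weight for image A
        fused.append(w * pa + (1 - w) * pb)
    return fused

# Equal salience averages the sources; skewed salience favors the more
# salient source at that location.
fused = fuse_regions([10, 0], [0, 10], [1, 1], [1, 3])
```

The adaptive merging in the paper follows the same principle, with the weights supplied per region by fuzzy inference or GSA optimization instead of fixed salience ratios.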
Phishing is an online security attack in which the hacker aims to harvest sensitive information, such as passwords and credit card information, from users by making them believe that what they see is what it is. This threat has existed for a decade, and there have been continuous developments in countering it. However, statistical study reveals that phishing is still a big threat in today's world as the online era booms. In this paper, we look into the art of phishing and present a practical analysis of how state-of-the-art anti-phishing systems fail to prevent phishing. With the loopholes identified in the state-of-the-art systems, we move ahead, paving the roadmap for the kind of system that will counter this online security threat more effectively.
Norms are a promising basis for governance in secure, collaborative environments---systems in which multiple principals interact. Yet, many aspects of norm governance remain poorly understood, inhibiting adoption in real-life collaborative systems. This work focuses on the combined effects of sanction and the observability of the sanctioner in a secure, collaborative environment. We present CARLOS, a multiagent simulation of graduate students performing research within a university lab setting, to explore these phenomena. The simulation consists of agents maintaining "compliance" with enforced security norms while remaining "motivated" as researchers. We hypothesize that (1) delayed observability of the environment leads to greater motivation of agents to complete research tasks than immediate observability, and (2) sanctioning a group for a violation leads to greater compliance with security norms than sanctioning an individual. We find that only the latter hypothesis is supported. Group sanction is an interesting topic for future research as a means of norm governance that yields significant compliance with enforced security policy at lower cost. Our ultimate contribution is to apply social simulation as a way to explore environmental properties and policies and to evaluate key transitions in outcome, as a basis for guiding further and more demanding empirical research.
Applications in mobile marketplaces may leak private user information without notification. Existing mobile platforms provide little information on how applications use private user data, making it difficult for experts to validate applications and for users to grant applications access to their private data. We propose a user-aware-privacy-control approach, which reveals how private information is used inside applications. We compute static information flows and classify them as safe/unsafe based on a tamper analysis that tracks whether private data is obscured before escaping through output channels. This flow information enables platforms to provide default settings that expose private data for only safe flows, thereby preserving privacy and minimizing the decisions required from users. We build our approach into TouchDevelop, an application-creation environment that allows users to write scripts on mobile devices and install scripts published by other users. We evaluate our approach by studying 546 scripts published by 194 users, and the results show that our approach effectively reduces the need to make access-granting choices to only 10.1% (54) of all scripts. We also conduct a user survey involving 50 TouchDevelop users to assess the effectiveness and usability of our approach. The results show that 90% of the users consider our approach useful in protecting their privacy, and 54% prefer our approach over other privacy-control approaches.
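The safe/unsafe classification can be illustrated with a toy flow model. The representation and operation names below are assumptions for illustration, not TouchDevelop's actual static analysis: a flow is safe if the private data is obscured before it reaches an output channel, or if it never escapes at all.

```python
# Toy safe/unsafe flow classification: a flow is an ordered sequence of
# operations from a private source toward possible sinks. Operation names
# are hypothetical, not real TouchDevelop APIs.

OBSCURING_OPS = {"hash", "round_location", "anonymize"}
OUTPUT_CHANNELS = {"send_to_network", "post_to_web"}

def classify_flow(flow):
    for op in flow:
        if op in OBSCURING_OPS:
            return "safe"        # data is tampered/obscured before escaping
        if op in OUTPUT_CHANNELS:
            return "unsafe"      # raw private data escapes the device
    return "safe"                # private data never reaches an output channel
```

A platform could then auto-grant access for scripts whose every flow classifies as safe, which is how the approach shrinks the set of scripts requiring a user decision.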
Starting with the seminal work by Kempe et al., a broad variety of problems, such as targeted marketing and the spread of viruses and malware, have been modeled as selecting a subset of nodes to maximize diffusion through a network. In cyber-security applications, however, a key consideration largely ignored in this literature is stealth. In particular, an attacker often has a specific target in mind, but succeeds only if the target is reached (e.g., by malware) before the malicious payload is detected and corresponding countermeasures deployed. The dual side of this problem is the deployment of a limited number of monitoring units, such as cyber-forensics specialists, so as to limit the likelihood of such targeted and stealthy diffusion processes reaching their intended targets. We investigate the problem of optimal monitoring of targeted stealthy diffusion processes and show that a number of natural variants of this problem are NP-hard to approximate. On the positive side, we show that if stealthy diffusion starts from randomly selected nodes, the defender's objective is submodular, and a fast greedy algorithm has provable approximation guarantees. In addition, we present approximation algorithms for the setting in which an attacker optimally responds to the placement of monitoring nodes by adaptively selecting the starting nodes for the diffusion process. Our experimental results show that the proposed algorithms are highly effective and scalable.
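For the random-start setting, where the objective is submodular, the greedy placement can be sketched as follows. The traces, nodes, and budget are illustrative; the paper's actual objective is detection probability over the diffusion process, for which this simple trace-coverage proxy stands in.

```python
# Greedy monitor placement: repeatedly add the node with the largest
# marginal gain in sampled diffusion traces intercepted. For coverage-style
# submodular objectives, greedy carries the classic (1 - 1/e) guarantee.

def greedy_monitors(nodes, traces, budget):
    chosen, covered = set(), set()
    for _ in range(budget):
        best, best_gain = None, -1
        for v in sorted(nodes - chosen):          # sorted for determinism
            gain = sum(1 for i, t in enumerate(traces)
                       if i not in covered and v in t)
            if gain > best_gain:
                best, best_gain = v, gain
        chosen.add(best)
        covered |= {i for i, t in enumerate(traces) if best in t}
    return chosen

# Node 2 intercepts three of the four sampled traces; node 5 then covers
# the remaining one.
monitors = greedy_monitors({1, 2, 3, 4, 5, 6},
                           [{1, 2, 3}, {2, 4}, {5, 6}, {2, 5}], 2)
```

The adaptive-attacker setting in the abstract requires a different (game-theoretic) treatment, since the attacker responds to the placement rather than starting at random.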
Fat-tree topologies have been widely adopted as the communication network in data centers in the past decade. Nowadays, high-performance computing (HPC) system designers are considering using fat-tree as the interconnection network for the next generation of supercomputers. For extreme-scale computing systems like data centers and supercomputers, performance is highly dependent on the interconnection network. In this paper, we present FatTreeSim, a PDES-based toolkit consisting of a highly scalable fat-tree network model, with the goal of better understanding the design constraints of fat-tree networking architectures in data centers and HPC systems, as well as evaluating the applications running on top of the network. FatTreeSim is designed to model and simulate large-scale fat-tree networks of up to millions of nodes with protocol-level fidelity. We have conducted extensive experiments to validate and demonstrate the accuracy, scalability, and usability of FatTreeSim. On Argonne Leadership Computing Facility's Blue Gene/Q system, Mira, FatTreeSim is capable of achieving a peak event rate of 305 M/s for a 524,288-node fat-tree model with a total of 567 billion committed events. The strong-scaling experiments use up to 32,768 cores and show near-linear scalability. Compared with a small-scale physical system in Emulab, FatTreeSim can accurately model the latency of the same fat-tree network, with an error rate below 10% in most cases. Finally, we demonstrate FatTreeSim's usability through a case study in which FatTreeSim serves as the network module of the YARNsim system, with error rates below 13.7% for all test cases.
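The scale figures above follow from the standard k-ary fat-tree construction, which a short sizing helper makes concrete. These are the textbook counts for a three-tier fat-tree, not formulas taken from FatTreeSim itself.

```python
# Size of a three-tier fat-tree built from k-port switches (k even):
# k pods, each with k/2 edge and k/2 aggregation switches and (k/2)^2
# hosts; a core layer of (k/2)^2 switches connects the pods.

def fat_tree_size(k):
    assert k % 2 == 0, "port count must be even"
    hosts = k ** 3 // 4                   # k pods * (k/2)^2 hosts each
    edge = aggregation = k * (k // 2)     # per tier, across all pods
    core = (k // 2) ** 2
    return hosts, edge + aggregation + core

# A 64-port fat-tree already reaches 65,536 hosts, so simulated models of
# hundreds of thousands of nodes sit within a realistic design space.
```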
Best Paper Award
We conducted an MTurk online user study to assess whether directing users to attend to the address bar and the use of domain highlighting lead to better performance at detecting fraudulent webpages. 320 participants were recruited to evaluate the trustworthiness of webpages (half authentic and half fraudulent) in two blocks. In the first block, participants were instructed to judge each webpage's legitimacy by any information on the page. In the second block, they were directed specifically to look at the address bar. Webpages with domain highlighting were presented in the first block for half of the participants and in the second block for the remaining participants. Results showed that the participants could differentiate the legitimate and fraudulent webpages to a significant extent. When participants were directed to look at the address bar, correct decisions increased for fraudulent webpages (“unsafe”) but did not change for authentic webpages (“safe”). The percentage of correct judgments showed no influence of domain highlighting, regardless of whether the decisions were based on any information on the webpage or participants were directed to the address bar. The results suggest that directing users’ attention to the address bar is slightly beneficial in helping users detect phishing webpages, whereas domain highlighting provides almost no additional protection.
The pervasiveness of smartphones and the vast number of corresponding apps have underlined the need for applicable automated software-testing techniques. A wealth of research has focused on either unit or GUI testing of smartphone apps, but little on automated support for end-to-end system testing. This paper presents SIG-Droid, a framework for system testing of Android apps, backed with automated program analysis to extract app models and symbolic execution of source code guided by those models to obtain test inputs that ensure covering each reachable branch in the program. SIG-Droid leverages two automatically extracted models: an Interface Model and a Behavior Model. The Interface Model is used to find the values an app can receive through its interfaces. Those values are then replaced with symbolic values to deal with constraints, with the help of a symbolic execution engine. The Behavior Model is used to drive the app during symbolic execution and generate sequences of events. We provide an efficient implementation of SIG-Droid based in part on Symbolic PathFinder, extended in this work to support automatic testing of Android apps. Our experiments show that SIG-Droid is able to achieve significantly higher code coverage than existing automated testing tools targeting Android.
Federal agencies are concerned about the risks associated with information and communications technology (ICT) products and services that may contain potentially malicious functionality, are counterfeit, or are vulnerable due to poor manufacturing and development practices within the ICT supply chain. These risks are associated with the federal agencies’ decreased visibility into, understanding of, and control over how the technology that they acquire is developed, integrated and deployed, as well as the processes, procedures, and practices used to assure the integrity, security, resilience, and quality of the products and services. This publication provides guidance to federal agencies on identifying, assessing, and mitigating ICT supply chain risks at all levels of their organizations. The publication integrates ICT supply chain risk management (SCRM) into federal agency risk management activities by applying a multitiered, SCRM- specific approach, including guidance on assessing supply chain risk and applying mitigation activities.
Attributing the culprit of a cyber-attack is widely considered one of the major technical and policy challenges of cyber-security. Previous studies have been limited by the lack of ground truth about the individual responsible for a given attack. Here, we overcome this limitation by leveraging DEFCON capture-the-flag (CTF) exercise data, where the actual ground truth is known. In this work, we use various classification techniques to identify the culprit in a cyber-attack and find that deceptive activities account for the majority of misclassified samples. We also explore several heuristics to alleviate some of the misclassification caused by deception.
In this paper, a joint algorithm is designed to detect a variety of unauthorized-access risks in a multilevel hybrid cloud. First, the access history among different virtual machines in the multilevel hybrid cloud is recorded using a global flow diagram. Then, the global flow graph is taken as an auxiliary decision-making basis to design a legitimacy-detection algorithm based on data access, expressed in a formal representation. Finally, the implementation process is specified. The algorithm can effectively detect operations against regulations, such as simple unauthorized level-crossing, indirect unauthorized access, and other irregularities.
The cyber security exposure of resilient systems is frequently described as an attack surface. A larger surface area indicates increased exposure to threats and a higher risk of compromise. Ad-hoc addition of dynamic proactive defenses to distributed systems may inadvertently increase the attack surface. This can lead to cyber friendly fire, a condition in which adding superfluous or incorrectly configured cyber defenses unintentionally reduces security and harms mission effectiveness. Examples of cyber friendly fire include defenses which themselves expose vulnerabilities (e.g., through an unsecured admin tool), unknown interaction effects between existing and new defenses causing brittleness or unavailability, and new defenses which may provide security benefits, but cause a significant performance impact leading to mission failure through timeliness violations. This paper describes a prototype service capability for creating semantic models of attack surfaces and using those models to (1) automatically quantify and compare cost and security metrics across multiple surfaces, covering both system and defense aspects, and (2) automatically identify opportunities for minimizing attack surfaces, e.g., by removing interactions that are not required for successful mission execution.
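The friendly-fire condition can be made concrete with a minimal scoring sketch. The data model, names, and weights below are assumptions for illustration, not the paper's semantic attack-surface models.

```python
# An attack surface modeled as a set of exposed interaction points, each
# with an exposure weight; a lower total score means less exposure.

def surface_score(surface):
    return sum(surface.values())

system = {"ssh": 3, "web_api": 5, "db_port": 4}

# A proactive defense closes the database port but ships its own unsecured
# admin tool, so net exposure goes up: cyber friendly fire.
defended = dict(system)
defended.pop("db_port")
defended["defense_admin_ui"] = 6
```

Comparing such scores across candidate configurations is the kind of automated cost/security comparison the described service performs over its semantic models.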
Cloud computing is a technological breakthrough in computing. It has affected every part of information technology, from infrastructure to software deployment, from programming to application maintenance. The cloud offers a wide array of solutions for current computing needs, aided by benefits such as elasticity, affordability, and scalability. At the same time, the incidence of malicious cyber activity is increasing at an unprecedented rate, posing critical threats to both government and enterprise IT infrastructure. Account or service hijacking is a kind of identity theft and has evolved into one of the most rapidly increasing types of cyber-attack aimed at deceiving end users. This paper presents an in-depth analysis of a cloud security incident involving account hijacking that affected The New York Times online. Further, we present incident-prevention methods and a detailed incident-prevention plan to stop future occurrences of such incidents.